Phew! It took me around an hour to figure this mess out, but I did, and I am so glad I did. I just hope my post is helpful for anybody facing similar issues when proxying requests through Burp and running into the dreaded “Received fatal alert: handshake_failure” error every time.
I tried searching everywhere but none of the forums were helpful. So, I had to combine tidbits from different forums with a little bit of my own thinking to get this sorted out.
The Burp forum thread here – http://forum.portswigger.net/thread/717/burp-ssh-tunnelling – along with the error messages in the Alerts tab in Burp were helpful. Especially the comment from a Burp developer in that thread. But they don’t mention any details and leave it up to the users to figure it out. So, hopefully this post will help those who are still stuck with this error.
So, assuming you are trying to proxy requests to a website, and end up getting the “Received fatal alert: handshake_failure” error message, pay close attention to the error logs under the Alerts tab in Burp. You will notice a message saying “You have limited key lengths available. To use stronger keys, please download and install the JCE unlimited strength jurisdiction policy files, from Oracle.”
If you ignore that, you are going nowhere. So, let’s get the stronger keys as mentioned in the above error message. But before you do that, you need to first figure out the JRE version that is installed on your machine. I have a MacBook, and the following command helped me determine the JRE version that was being used. This command can be found here – http://docs.oracle.com/javase/7/docs/webnotes/install/mac/mac-jre.html under the “Determining the Installed Version of the JRE” section. The command is:
/Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin/java -version
I was running java version 1.7.0_60, which corresponds to JRE 7. The next step is to get the JCE Unlimited Strength Jurisdiction Policy Files corresponding to that JRE version. So, I searched for “Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 7 Download”. Notice the 7, because I was running JRE 7. Depending on the JRE version you are running, you will have to search for the appropriate JCE policy files. Download the zip file and unzip it. You will see a folder with a bunch of files. The two files we need are “US_export_policy.jar” and “local_policy.jar”.
Once we have those files, navigate to /Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/security. You will notice that these two files are already present in that directory. These are the old ones that we need to replace with the new ones we just downloaded. So, to be safe, create a backup of the old “US_export_policy.jar” and “local_policy.jar” files, and then replace them with the new ones.
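If you want, the backup-and-replace step can be scripted. Here is a minimal Python sketch; the directory paths are the Mac JRE 7 ones mentioned above, so adjust for your own install:

```python
import shutil
from pathlib import Path

def install_jce_policy(security_dir, downloaded_dir):
    """Back up the existing JCE policy jars in security_dir, then copy
    in the unlimited-strength ones unzipped from the Oracle download."""
    security_dir = Path(security_dir)
    downloaded_dir = Path(downloaded_dir)
    for name in ("US_export_policy.jar", "local_policy.jar"):
        old = security_dir / name
        if old.exists():
            # keep a backup of the original restricted-strength jar
            shutil.copy2(old, security_dir / (name + ".bak"))
        shutil.copy2(downloaded_dir / name, old)

# Example (Mac JRE 7 path from the post):
# install_jce_policy(
#     "/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/security",
#     "/path/to/unzipped/UnlimitedJCEPolicy")
```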
Voila! You should be done at this point. Fire up Burp again and navigate to the website that was causing problems. You should be able to access it without any issues. You just replaced the older jar files with newer ones that allow much stronger keys during the SSL negotiation.
PS – Finding the right folder for the jar files was the hardest part. There were tons of folders, at least in my case, where these jar files were located, but replacing them didn’t help. I had to find the right path, and eventually the docs.oracle.com link pasted above came to the rescue. There were a lot of threads about changing Java versions, running different Burp versions, etc., but none of them were helpful.
The Squareup website has a feature where users can add mobile staff on their team to collect payments on their behalf, issue refunds, etc. Let’s refer to this user as an admin and his mobile staff members as staff.
The way this works is that an admin sends an invitation to the people he wishes to add as mobile staff as shown in the above pic. It does not matter if the invited staff already has an existing account on Squareup. As soon as the invitation is sent, the invited staff receives a link in an email. When that link is clicked, the staff is asked to create a new account to accept payments as shown in the pic below:
If that staff already has an account on Squareup, unfortunately he would have to use a different email address, as he will get an error message “Email has already been taken” if he tries to use his existing email. It sounds weird but this is how it has been designed to work. A user cannot use the same email address to be a legitimate Squareup user as well as a mobile staff on somebody else’s team simultaneously.
So, the staff now uses a different email address to create an account so that he can accept payments on behalf of the admin. After the account is created, the staff sees the screen below:
Notice how all the different tabs (Home, Sales, Items, Orders, etc.) in the top row don’t appear anymore. So, basically the staff is only supposed to accept payments and do nothing else.
Well guess what – that’s not true. With forceful browsing, this staff can easily navigate to any page in the dashboard he wants to. There is no check being performed at all. So, the staff has as much access to his “own” dashboard as any other Square user might have access to theirs. This means that one of the many actions that a staff can do is send invoices to the admin’s customers, which he is not technically supposed to do. There is a blessing in disguise here though. When the staff sends this invoice, the invoice appears to be sent from the staff and not the admin. So, the staff cannot impersonate the admin and send the invoice on the admin’s behalf.
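The missing server-side control here is essentially a role gate on every dashboard route. A hedged sketch of what that check could look like; all of these names are hypothetical, not Square’s actual code:

```python
# Hypothetical role gate that should run on every dashboard request.
DASHBOARD_PAGES = {"home", "sales", "items", "orders", "invoices"}
STAFF_ALLOWED_PAGES = {"payments"}  # mobile staff may only accept payments

def can_access(user_role, page):
    """Return True if this role may view the requested dashboard page.
    Forceful browsing works precisely when a check like this is absent."""
    if user_role == "admin":
        return True
    if user_role == "staff":
        return page in STAFF_ALLOWED_PAGES
    return False
```

With a gate like this enforced server-side, a staff member typing the invoices URL directly would get a 403 instead of the page.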
But if you think about it in a real world scenario, if I as an admin have assigned a person A to be my staff member, I would certainly inform all my customers of this new addition to my team and my customers would trust A henceforth. I would not go beyond that and inform them that “Do not pay invoices that are not sent by me”. This means if my customers see an invoice coming from my staff member, they would not hesitate in paying that. I feel this is a very viable situation where a rogue staff with a little bit of social engineering can literally go berserk without the admin even knowing about it.
Square qualified this as a “UI aesthetic bug” and not really a security issue. They went ahead and fixed it too, as it is not reproducible anymore. They said this has no security implications whatsoever. If you ask me, I’d say forceful browsing is a known security issue regardless of the impact. By design, if a staff member is not supposed to see his dashboard yet he can, I’d call that a security issue. Whatever happened to least-privilege access? Not to mention, the invoice-sending feature discussed above is just one of the many things a staff member could do as a result of this. I don’t think any of those actions are intended to be performed by a mobile staff member.
Moving further, if the admin removes this staff from his team, the staff member can now see his entire dashboard, which is fine. But, this behavior would just confirm my suspicion that it was not intended for the staff to access anything in his dashboard in the first place as long as he was a mobile staff member. Why would you ask existing users to create a new account otherwise? Why not just allow existing users to have full access to their dashboard as well as act as mobile staff for other teams?
Sending Invoice Bypass
As soon as a user creates an account on the Square website, he is taken to the page https://squareup.com/begin#step/0/us_business_information which looks like below:
This is the first step for getting a new user’s account activated. But, this step along with the next few steps can be ignored/bypassed by directly navigating to the dashboard at the URL – https://squareup.com/dashboard that looks like below:
If you then navigate to Sales -> Invoices -> Create Invoice, you see the following screen:
Now, it is quite obvious from the UI that Square wants you to activate your account first before doing operations like sending invoices, creating your business public profile, etc. Isn’t it? At least from the client perspective it is.
Using a proxy tool such as Burp, send the request that looks like below. Replace the redacted values accordingly – use the X-CSRF-Token and _sessionv2 values of the newly created user, any invoice number that looks sequential like 000006, and the payer_email as the email address of the person you wish to send the invoice to:
You will get a successful response.
So, what happened? This newly created user just sent an invoice to a person for an item that is not even owned/created by this user. The above item is actually created by a totally different user. The email received looks like below:
So, essentially, there are no server-side authorization checks on:
- Whether a user is actually allowed to send invoices at all without activating his account, adding his bank details, etc.
- Whether the items being invoiced are actually owned by the user.
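Both of these checks belong on the server, before the invoice is ever created. A hedged sketch of what was missing; the data model and names are hypothetical, not Square’s:

```python
def authorize_invoice(user, item):
    """Server-side checks that were missing: account activation and
    item ownership (hypothetical data model, for illustration only)."""
    if not user.get("activated"):
        raise PermissionError("account not activated")
    if item.get("owner_id") != user.get("id"):
        raise PermissionError("item not owned by this user")
    return True
```

Either check alone would have stopped the request above: the new user was not activated, and the item belonged to a different user.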
Not to mention, if this issue is coupled with the first issue of Forceful Browsing, it is quite evident how one issue can lead to a chaining attack and make other attacks feasible.
I responsibly disclosed this to Square and they fixed it, so the bug cannot be reproduced anymore.
Password Reset Token
In the Square Cash application (https://cash.square.com/cash/login), on requesting multiple password-reset links, the old links remain valid. Now, I believe it is acceptable to keep old links valid as long as the latest link has not been used, because it is confusing and a hassle for end users to figure out which one is the latest if they have requested multiple links but haven’t reset the password yet.
But one would imagine that once a new link has been used, at the very least, all the old links get invalidated, correct? That’s not the case here. The old links continue to be valid, and if an attacker gains access to them, he would be able to reset the victim’s password. Not to mention, these tokens are all present in a GET request (https://square.com/cash/login/set-password/<token>), which means they often get logged in browser history, proxies, server logs, etc.
The only positive thing about this flow is that the tokens get invalidated after 12 hours. But, if I were a determined attacker, 12 hours is a lot of time I’d say. I’d also like to mention here that the same behavior is not observed on their flagship Squareup website – https://squareup.com/password/reset/<token>. The password reset functionality on that website is working properly. Old tokens get invalidated once new tokens are generated. So, if they have it there, why not here? I don’t know.
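The expected behavior, which squareup.com apparently already has, can be sketched as: issuing a new reset token invalidates the older ones, and redeeming a token invalidates it so it cannot be replayed. A minimal sketch, not any real implementation:

```python
import secrets

class PasswordResetTokens:
    """Sketch of reset-token bookkeeping: one live token per account."""
    def __init__(self):
        self._live = {}  # email -> currently valid token

    def issue(self, email):
        # issuing a new token implicitly invalidates the previous one
        token = secrets.token_urlsafe(32)
        self._live[email] = token
        return token

    def redeem(self, email, token):
        # only the most recently issued token is valid, and redeeming
        # it invalidates it so the link cannot be replayed later
        if self._live.get(email) == token:
            del self._live[email]
            return True
        return False
```

A 12-hour expiry on top of this is fine as a backstop, but it should not be the only control.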
PS – I reported one more issue that was pretty interesting. I am waiting for Square to fix it before I blog about it.
I believe this issue affects a lot of applications that have frictionless signup flows, i.e., creating accounts without first confirming them via email. It can affect applications in different ways depending upon the functionality that exists in each application.
For instance, this can lead to a Logical Denial of Service (which is what I will be discussing in this blog), spamming legitimate victim users of activities that they are not aware of, etc.
For the Logical DoS, the attack is actually pretty simple. The only assumption I am going to make here is that username enumeration is possible, which is the case for a lot of websites, where it is considered by-design behavior that aids users:
- An attacker enumerates a legitimate user account email in the vulnerable application. Let’s say this is victim@gmail.com. This of course means that the application uses email addresses as usernames.
- The attacker then locks this user’s account by providing the right email and wrong password multiple times.
- The legitimate user tries to log in with his password but cannot, because his account is locked.
- Meanwhile, the attacker creates multiple dummy accounts with email addresses of the form victim+<id>@gmail.com. Remember, no email confirmation is required for the attacker to create such accounts.
- Now, the victim user obviously has to request password reset since he is locked out. So, he goes ahead and requests one.
- At the same time, the attacker also requests password resets for the multiple accounts that he created, i.e. victim+<id>@gmail.com. There is nothing stopping the attacker from doing this. Rate limiting on password resets is a moot point here, because the attacker is not requesting a reset for the same account multiple times. Instead, he is requesting resets for multiple different accounts that all deliver to the same inbox. See the risk of frictionless signup yet?
- All these password reset emails go to the victim, i.e. victim@gmail.com, since victim+<id>@gmail.com is the same as victim@gmail.com from an email-delivery perspective. An important point to note here is that common email providers like Gmail group these emails into one single thread, making it even more confusing to determine which email corresponds to which address. In Gmail particularly, unless one clicks on a small arrow to see the email address, all the emails just say “To me”:
Thus, if there is no clear distinction in the body of these emails as to what account the password-reset email corresponds to, the victim will have a difficult time finding the right password reset link for his original account and will continue to be locked out. This results in a Logical DoS.
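One mitigation is to canonicalize addresses at signup, so that all the victim+<id>@gmail.com variants collide with the existing account and the dummy registrations are rejected. A sketch for Gmail-style subaddressing; other providers have their own aliasing rules, so this is illustrative, not exhaustive:

```python
def canonicalize_gmail(address):
    """Collapse Gmail subaddresses and dots so that victim+1@gmail.com,
    victim+2@gmail.com and v.i.c.t.i.m@gmail.com all map to the same
    canonical account key."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
    return local + "@" + domain
```

If signup stores and uniqueness-checks the canonical form, the attacker’s step of creating victim+<id>@gmail.com accounts fails outright.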
With the inception of bug bounty programs, anybody with some knowledge of application security, i.e. a basic understanding of common web vulnerabilities like Cross-Site Scripting, Cross-Site Request Forgery, etc., could pretty much find bugs in any website out there and, while doing so, make some $$$. I mean, if I tell my mom to enter <script>alert(1)</script> in every input box out there and see if she gets a pop-up, even she can do it without understanding what she is doing. That’s how bad security was (it still is, but it is definitely improving) maybe a couple of years ago. I actually started participating in some of these programs quite late (sometime in 2013), so my experience with bug bounty programs has been slightly different. By that time, people were already listed in multiple Halls of Fame.
Needless to say, it is not the same anymore. I have realized that it can actually be quite time consuming. And if you already have a full-time job like me, then bug bounty hunting is like a whole new job altogether. Having said that, there has suddenly been an influx of bug hunters in the InfoSec industry these days. People have been calling themselves “security researchers” aka “hackers” on their blogs, Twitter, LinkedIn. They have Hall of Fame acknowledgements listed as Honors & Rewards in their LinkedIn profiles. I think it is getting a bit too much now. I can understand having the HoF listed on Bugcrowd’s profile page, or tweeting about a bug discovered and the bounty received for it, but listing 20 HoF acknowledgements on LinkedIn? Are you kidding me? Oh, and by the way, do you really think you are a “security researcher” just because you have discovered CSRFs in 20 different websites? I just don’t get it. Anyways, this is a rant I had been wanting to write for a long time, and I will stop here.
Getting back to the topic of this blog, bug bounty programs obviously come with their fair share of duplicates and rejected bugs. I will try to cover some of the rejected bugs I have had recently.
Session Not Invalidated On Logout
This was a shocker to me when Coindrawer rejected it. I was actually surprised it even existed in the first place; I’d have thought somebody must have already reported it, considering that they have been live for quite some time now. I am not going to elaborate on what the vulnerability is and how it can be exploited, but it is a pretty well-known vulnerability. Every time I have reported it elsewhere, it has been accepted and fixed, because it is indeed a security vulnerability and presents considerable risk. This is what they had to say about it:
We currently don’t consider this to be a threat.
Thank you for your submission. We are constantly making improvements
to our site and invite you to continue to test its security.
I don’t know what to say to this. If they are really making improvements to their site, they would fix this damn thing and not just send me an email template.
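For reference, this is all a proper logout has to do: destroy the session record on the server, not just clear the cookie in the browser. A minimal sketch:

```python
class SessionStore:
    """Sketch of server-side sessions: logout must delete the record,
    otherwise a captured cookie value keeps working after 'logout'."""
    def __init__(self):
        self._sessions = {}  # session_id -> user

    def login(self, session_id, user):
        self._sessions[session_id] = user

    def logout(self, session_id):
        # invalidate server-side; the old cookie value is now useless
        self._sessions.pop(session_id, None)

    def user_for(self, session_id):
        return self._sessions.get(session_id)
```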
In the ManageWP website, when you change your password, an AJAX GET request is sent to the server with the new password value as a query parameter. Now, unlike a normal HTTP GET request, since this is an AJAX request, I do understand the argument that it is not going to be stored in browser’s history or log files. However, I have not seen this before. And, I strongly believe that sensitive information like passwords should not be sent as query parameters in a GET request. Again, no comments on this one.
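Even for AJAX, the conventional way is to carry the secret in the body of a POST, never in the query string. A sketch using only the standard library; the endpoint path is hypothetical, not ManageWP’s actual API:

```python
import json
import urllib.request

def build_password_request(base_url, session_cookie, new_password):
    """Build a POST whose body (not the URL) carries the new password,
    so it never appears in query strings that proxies or logs may keep."""
    return urllib.request.Request(
        base_url + "/account/password",   # hypothetical endpoint
        data=json.dumps({"password": new_password}).encode(),
        headers={"Content-Type": "application/json",
                 "Cookie": session_cookie},
        method="POST",
    )
```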
I wanted to mention a couple of more bugs but I will save it for the next post!
SSL Not Enforced for web service calls made from the Prezi Desktop application
SSL is not enforced on some web service calls made between the Prezi Desktop application and the server.
- The authentication from the Prezi desktop application happens over HTTPS as expected.
- As soon as the user is authenticated, the server assigns a session cookie ‘prezi-auth’ and sets it in the response header. An important point to note here is that there is no ‘Secure’ flag on this cookie. The Prezi folks said the site is not fully functional over HTTPS as of now, hence the missing Secure flag. That makes sense as far as it goes. However, it introduces further risks, as I will discuss below.
- Once the authentication is done, the Desktop application makes some web service calls over HTTPS. This can be verified by setting up a proxy and having the traffic from Internet Explorer go through the proxy. Now, since SSL is not being enforced here, simply replacing HTTPS with HTTP yielded valid responses. And since the session cookie ‘prezi-auth’ did not have a Secure flag, it was visible over HTTP.
You may ask: so what? Isn’t that how the system is designed? Aren’t these just web service calls retrieving information that might not be sensitive enough? What can an attacker possibly do by intercepting the HTTP traffic? All of these are valid questions. And yes, up to this point, it is not that high a risk.
But, taking it one step further, if an attacker can intercept this traffic and capture the ‘prezi-auth’ session cookie, he could then use this value to impersonate the prezi user on the web application too. And, by doing that, the attacker has complete access to the prezi user’s account where he could update profile information, create prezi’s and do everything that a normal prezi user can.
There is an option where you can implement HTTPS throughout the web application and once you do that, all the requests over HTTP are redirected to HTTPS. So, essentially, SSL is enforced on the web application but not on the Desktop application. And, because of this, by using the session cookie from the Desktop application, an attacker can potentially gain complete control of the prezi user’s web application account.
Some web services where SSL was not being enforced:
1. A prezi user opens the Desktop application and logs in. He is assigned a “prezi-auth” session cookie by the server. This is over HTTPS which is fine. Also assume that an attacker is eavesdropping on this network and can see any requests/responses being transmitted over HTTP.
2. The attacker sends an email to the Prezi user with a link which auto-submits a POST request on the user’s behalf to the URL http://prezi.com/api/token/objectlibraryservice/list/. Note that this is meant to be over HTTPS, but the attacker is making the user go to the HTTP link. Since SSL is not enforced, this request will be successfully processed by the server.
NOTE: You can consider this like a CSRF attack, i.e. tricking the user into clicking a link and sending a request to the server over HTTP. I am sure there are other avenues where a user can be tricked into doing this; it doesn’t have to be a CSRF attack vector. I am considering CSRF here just for the sake of a PoC. Also notice that there are no CSRF tokens present either in the header or in the POST body of this request, which makes it even easier.
3. The attacker eavesdropping on the network sees this request over HTTP and captures the “prezi-auth” session cookie. This cookie should really have the Secure flag to prevent this; since it does not, its value is transmitted over HTTP whenever the logged-in user makes an HTTP request to the site.
4. The attacker can then use the captured prezi-auth session cookie and impersonate the user in the web application.
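The single change that breaks this chain is marking the cookie Secure, so the browser never attaches it to plain-HTTP requests. A sketch of building such a Set-Cookie header with the standard library (the cookie name ‘prezi-auth’ is just the one from this post):

```python
from http import cookies

def build_auth_cookie(token):
    """Build a Set-Cookie header for the session cookie with the
    Secure and HttpOnly flags set."""
    c = cookies.SimpleCookie()
    c["prezi-auth"] = token
    c["prezi-auth"]["secure"] = True    # never sent over plain HTTP
    c["prezi-auth"]["httponly"] = True  # not readable from JavaScript
    return c.output(header="Set-Cookie:")
```

Of course, as Prezi noted, Secure only works once the site is fully functional over HTTPS; the flag and full-site HTTPS have to land together.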
A valid response is received when a request is sent over HTTP. Notice that I had the SSL option enabled in the website.
Sending a request to the web application at the URL /settings/ with just the captured ‘prezi-auth’ session cookie yielded the profile page of the prezi user account
They said it was a nice finding, but since the site is not fully over HTTPS, they cannot give me a bounty. They will, however, send me some swag. I agree with them, and it is up to their discretion to judge this. I am happy with a token of appreciation in the form of a t-shirt, sunglasses, etc. Something is better than nothing.
This post does not contain any new information. I have just read Buffer’s blog and the blog at  and tried to put my thoughts down, or rather present them in a way that makes sense to me. If you are looking for the original post, please refer to .
MongoHQ is breached -> Unencrypted OAuth Tokens stored in Buffer’s DB hosted on MongoHQ is stolen -> Spam Posts on Buffer users’ Twitter/Facebook account
The attackers were also able to steal Buffer’s source code from GitHub. This had unencrypted secret tokens hardcoded which made the Twitter spam posts possible (Details below).
Yes, they had their source on GitHub -> FAIL. Buffer suspects that the attackers were able to get access to their GitHub account by using a password leaked in Adobe’s breach that belonged to one of their employees -> Epic FAIL?
On October 26, 2013, Buffer was hacked.
Attackers stole OAuth tokens stored in Buffer’s database and posted spam posts on Buffer users’ Facebook and Twitter accounts.
This database was hosted on MongoHQ which made it all possible since MongoHQ was breached earlier.
Both Facebook (v 2.0) and Twitter (v1.0a) use OAuth to authorize 3rd party websites to post on user’s behalf once the user has granted access to the 3rd party website.
Buffer made some serious security mistakes that made this possible:
- Stored OAuth tokens unencrypted.
- Did not use the optional app_secret feature when implementing the Facebook integration. This app_secret works like an authentication token between Facebook and the developer (Buffer). With it enabled, an attacker would need both the OAuth tokens and the app_secret in order to post something. Since Buffer’s developers did not utilize this setting, only the OAuth tokens were required to post.
- Twitter by default makes it mandatory to use the above mentioned app_secret. So, the attackers had to use it in order to tweet on user’s behalf. So, how did the attackers gain access to it you ask? Buffer stored this unencrypted in their source code hosted on GitHub.
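The app_secret defense amounts to requiring a second, server-held secret to accompany every API call, so a stolen OAuth token alone is useless. A sketch of the idea using stdlib HMAC; this mirrors the shape of Facebook’s appsecret_proof, but the function names here are my own illustration:

```python
import hashlib
import hmac

def appsecret_proof(oauth_token, app_secret):
    """HMAC the OAuth token with the app secret; the API can then reject
    calls that present a token without proof of knowing the secret."""
    return hmac.new(app_secret.encode(), oauth_token.encode(),
                    hashlib.sha256).hexdigest()

def verify_call(oauth_token, proof, app_secret):
    """Server-side check: a stolen token without the secret fails here."""
    expected = appsecret_proof(oauth_token, app_secret)
    return hmac.compare_digest(expected, proof)
```

An attacker who steals only the tokens from the database cannot compute a valid proof without also stealing the app_secret, which is why hardcoding that secret in a GitHub repo defeated the whole scheme.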
What Buffer basically said can be found here – http://open.bufferapp.com/buffer-has-been-hacked-here-is-whats-going-on/
- They revoked all pre-existing Twitter tokens.
- OAuth tokens are now encrypted.
They never mentioned the following:
- Is the encryption HSM-based?
- Are the infrastructure layers partitioned? Are the encryption-related components completely isolated from Buffer’s MongoHQ instance?
To this, NopSec CTO Michelangelo Sidagni said – “A one-time control fix is not usually enough to cover an entire security program. Instead of detailing just that aspect, [Buffer] should have talked about their renewed and revamped approach on their overall security program. Making a statement that this is going to fix all their problems without covering their entire security posture is just an invitation for the attacker to strike again….just for the sake of it.” -> RIGHT!!
Recommended Way of implementing OAuth tokens:
- Using Hardware Security Module (HSM). Developers are not going to do this. Too much work??
- If the servers are hosted on a third-party cloud, you can’t add an HSM module anywhere (it’s all in the cloud somewhere, DUH). So this introduces a Catch-22. To address this, Amazon launched AWS CloudHSM. But from what I have read, this is not feasible as it’s too expensive.
- One approach to mitigate but not completely eradicate would be to store keys in files that are not accessible through the Web.
- Encrypt at rest on Buffer’s servers, before it gets sent to MongoHQ.
- Buffer has now enabled two-factor authentication for their team, which is good and adds an additional layer. But why wasn’t this done earlier??
- Be cautious when authorizing 3rd party websites/apps to log us in by using our Facebook/Twitter credentials. We don’t know how they have implemented their OAuth.
- Cloud Computing risks as mentioned earlier. You can never trust your data with a third party cloud. You need to do your due diligence.
- OAuth faulty implementation. There seems to be a lot of different OAuth implementations out there. Some secure, some not. So, can we trust them?
- API Security issues.
- It is incredible to see how one breach in one company can cause multiple breaches.
I wrote a post about my experience with the Shopify Bug Bounty program yesterday.
Soon after that, folks from Shopify commented on that post saying that it is still reproducible and that they did not change anything in the code and that it is still not considered a valid finding.
After further investigation, I tried to reproduce it again and was able to do so successfully. I realized what I was doing wrong and how I reproduced it in the first place. In the process of doing so, I realized a different attack vector which I hadn’t thought of earlier. I still think this is an issue but the folks at Shopify don’t seem to agree with me.
So, this is what happens during the checkout process:
1. When a customer (C) proceeds to checkout, he is asked to authenticate to the website. This is because the shop admin had enabled the setting where only registered customers can check out. The URL looks like this – http://<myshop>.myshopify.com/account/login/xxxxxxxxxxxx?checkout_url=https%3A%2F%2Fcheckout.shopify.com%2Fcarts%2F<shop_id>%2F<cart_token>
2. After authenticating, the cart_token or the cart_id gets associated with that customer (C) and the shop. The customer C is then redirected to the URL https://checkout.shopify.com/carts/<shop_id>/<cart_token>. This is the key step. The customer has to authenticate to the shop at least once in order for this attack to work. As we will see from the steps below, this is trivial. The attacker doesn’t have to trick a user to authenticate.
3. After authenticating, let’s assume that the customer decides he does not want to shop anymore and simply closes the browser. No matter how tech-savvy this customer is, he is left with no choice but to close the browser, because there is no logout option on that page.
Attack Vector 1 – If the customer is shopping on a shared workstation, an attacker comes to that workstation, reopens the browser, looks through the browser history and navigates to the above URL. Boom, all the information is right there.
Attack Vector 2 – Since there is no session associated with this request, there are no session cookies either. All the attacker needs is the URL. And, he can navigate to it from a different computer all together and access the information.
Once the attacker navigates to the above URL, he is taken directly to the checkout page of customer C, where he can see the customer’s email address (masked below) and billing address. The attacker simply enters his own shipping address and continues to the next step.
4. On the next step, the attacker chooses one of the many payment options. Since he would not want to use his own credit card, he chooses Bank Deposit and completes the purchase.
5. And, it’s done.
So, the attacker was able to successfully place an order to be shipped to his address without entering any credit card details.
The folks at Shopify replied back with the following:
“First, this is no different from someone forgetting to log out of any other site, there’s not much we can do here (it’s the user’s responsibility to protect their account, just like any other site). Secondly, keeping the person logged-in is not a bug, it’s the expected behaviour. The purpose of logging in before placing an order is not to store payment information, which greatly reduce the risk of forgetting to log out. An attacker who “find” an active session from another user would still have to pay for that order with a valid credit card which is what we really want to protect here (credit card information).”
First, this is no different from someone forgetting to log out of any other site, there’s not much we can do here (it’s the user’s responsibility to protect their account, just like any other site)
I am not sure how other e-commerce sites (eg Amazon) implement shopping carts but this one is definitely weird.
Why isn’t there a logout option on the checkout page once the customer is authenticated? As I mentioned earlier, even if the customer wants to log out, there is NO LOGOUT option on the checkout page. If there were such an option and the customer still decided to close the browser, I could see their point. But providing a logout option on the checkout page would definitely help: on logging out, the cart_token could be disassociated so that it cannot be used by an attacker in the future.
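The fix is straightforward to sketch: a logout on the checkout page disassociates the cart token, so replaying the bare URL from browser history resolves to nothing. All names here are a hypothetical model, not Shopify’s actual code:

```python
class CheckoutCarts:
    """Sketch: a cart token resolves to a customer only while the
    checkout session is live (hypothetical model, for illustration)."""
    def __init__(self):
        self._carts = {}  # cart_token -> customer record

    def associate(self, cart_token, customer):
        self._carts[cart_token] = customer

    def logout(self, cart_token):
        # disassociate, so replaying the URL from history shows nothing
        self._carts.pop(cart_token, None)

    def resolve(self, cart_token):
        return self._carts.get(cart_token)
```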
Secondly, keeping the person logged-in is not a bug, it’s the expected behaviour.
I understand. But does this mean that disclosing information like the email address and billing address of customers is also expected? I don’t think this is the case with other e-commerce websites. Aren’t we supposed to protect customers’ privacy as much as we can?
The purpose of logging in before placing an order is not to store payment information, which greatly reduce the risk of forgetting to log out. An attacker who “find” an active session from another user would still have to pay for that order with a valid credit card which is what we really want to protect here (credit card information).”
I just showed how it’s possible to place an order without entering any credit card information. The money order option can be chosen as well, with the same result. Now, I don’t know how the bank deposit and money order options are supposed to be set up in the admin console, but the attacker definitely does not require any of that information to place an order. It is good in a way that the customer’s credit card details are not pre-populated like his email and billing address, but that doesn’t mean the risk is reduced.
I would like to know what you guys think about this. I have spent a lot of time thinking this through. I haven’t seen a lot of e-commerce websites out there so I am not sure if this is something that is acceptable or not.