With the inception of bug bounty programs, anybody with some knowledge of application security, i.e. a basic understanding of common web vulnerabilities like Cross-Site Scripting and Cross-Site Request Forgery, could pretty much find bugs in any website out there and, while doing so, make some $$$. I mean, if I tell my mom to enter <script>alert(1)</script> in every input box out there and see if she gets a pop-up or not, even she can do it without understanding what she is doing. That’s how bad security was (it still is, but it is definitely improving) maybe a couple of years ago. I actually started participating in some of these programs quite late (sometime in 2013), so my experience with bug bounty programs has been slightly different. By that time, people were already listed in multiple Halls of Fame.
Needless to say, it is not the same anymore. I have realized that it can actually be quite time consuming. And, if you already have a full-time job like me, then bug bounty hunting is like a new job altogether. Having said that, there has suddenly been an influx of bug hunters in the InfoSec industry these days. People have been claiming to be “security researchers” aka “hackers” on their blogs, Twitter, and LinkedIn. They have Hall of Fame acknowledgements listed under Honors & Awards in their LinkedIn profiles. I think it is getting a bit too much now. I can understand having the HoF listed on Bugcrowd’s profile page, or tweeting about a bug discovered and the bounty received for it, but listing 20 HoF acknowledgements on LinkedIn? Are you kidding me? Oh, and by the way, do you really think you are a “security researcher” just because you have discovered CSRFs in 20 different websites? I just don’t get it. Anyway, this is a rant I had been wanting to write for a long time, and I will stop here.
Getting back to the topic of this blog: these bug bounty programs obviously come with their fair share of duplicates and rejected bugs. I will try to cover some of the rejected bugs I have had recently.
Session Not Invalidated On Logout
This was a shocker to me when Coindrawer rejected it. I was actually surprised it even existed in the first place; I’d have thought somebody must already have reported it, considering they have been live for quite some time now. I am not going to elaborate on what the vulnerability is and how it can be exploited, but it is a pretty well-known one. Every time I have reported it elsewhere, it has been accepted and fixed, because it is indeed a security vulnerability and presents considerable risk. This is what they had to say about it:
We currently don’t consider this to be a threat.
Thank you for your submission. We are constantly making improvements
to our site and invite you to continue to test its security.
I don’t know what to say to this. If they were really making improvements to their site, they would fix this damn thing and not just send me an email template.
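To illustrate what the fix looks like, here is a minimal sketch (Python, with a hypothetical in-memory session store) of proper server-side invalidation: once the user logs out, the old token must stop working, even if the client still holds the cookie.

```python
import secrets

# Hypothetical in-memory session store, for illustration only.
SESSIONS = {}

def login(username):
    # Issue a fresh, unguessable session token.
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = username
    return token

def logout(token):
    # Invalidate server-side by deleting the session record itself.
    # Merely clearing the cookie on the client is not enough.
    SESSIONS.pop(token, None)

def authenticated_user(token):
    # Returns None for unknown or invalidated tokens.
    return SESSIONS.get(token)
```

Replaying the old cookie after logout should now fail, which is exactly what was missing here.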
Password Sent As A Query Parameter In A GET Request

On the ManageWP website, when you change your password, an AJAX GET request is sent to the server with the new password value as a query parameter. Now, unlike a normal HTTP GET request, since this is an AJAX request, I do understand the argument that it is not going to be stored in the browser’s history. However, I have not seen this before, and I strongly believe that sensitive information like passwords should not be sent as query parameters in a GET request; the full URL can still end up in server and proxy logs. Again, no comments on this one.
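To show why this matters, here is a quick Python sketch (the endpoint and parameter names are made up): a password sent as a query parameter becomes part of the URL itself, which is what leaks into logs, while a POST body keeps it out of the URL entirely.

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Hypothetical endpoint and parameter names, for illustration only.
BASE = "https://example.com/account/change-password"

# Anti-pattern: the secret rides along in the query string, so it
# is part of the URL that servers and proxies routinely log.
bad_url = BASE + "?" + urlencode({"new_password": "hunter2"})

# Preferred: send the secret in the body of a POST request, which
# is not part of the URL at all.
post_body = urlencode({"new_password": "hunter2"}).encode()

def secrets_in_url(url):
    # Quick check: does the query string carry a password field?
    return "new_password" in parse_qs(urlsplit(url).query)
```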
I wanted to mention a couple of more bugs but I will save it for the next post!
SSL Not Enforced for web service calls made from the Prezi Desktop application
SSL is not enforced on some web service calls made between the Prezi Desktop application and the server.
- The authentication from the Prezi desktop application happens over HTTPS as expected.
- As soon as the user is authenticated, the server assigns a session cookie ‘prezi-auth’ and sets it in the response header. An important point to note here is that there is no ‘Secure’ flag on this cookie. The Prezi folks said the site is not fully functional over HTTPS as of now, which is why the flag is absent. That reasoning makes sense, but it introduces further risks, as I will discuss below.
- Once the authentication is done, the Desktop application makes some web service calls over HTTPS. This can be verified by setting up a proxy and having the traffic from Internet Explorer go through the proxy. Now, since SSL is not being enforced here, simply replacing HTTPS with HTTP yielded valid responses. And, since the session cookie ‘prezi-auth’ did not have the Secure flag, it was visible over HTTP.
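The missing Secure flag can be spotted straight from the login response. A quick Python check (the header values below are made up for illustration; the real ones would come from the response observed in the proxy):

```python
from http.cookies import SimpleCookie

def has_secure_flag(set_cookie_header, name):
    # Parse a Set-Cookie header and report whether the named
    # cookie carries the Secure attribute.
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    morsel = cookie.get(name)
    return bool(morsel and morsel["secure"])

# Hypothetical header values for illustration.
weak   = "prezi-auth=abc123; Path=/; HttpOnly"
strong = "prezi-auth=abc123; Path=/; Secure; HttpOnly"
```

With the Secure attribute set, a compliant browser refuses to attach the cookie to plain-HTTP requests, which is what would have blocked the attack below.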
You may ask: so what? Isn’t that how the system is designed? Aren’t these just web service calls that retrieve information (that might not be sensitive enough)? What can an attacker possibly do by intercepting the HTTP traffic? All of these are valid questions. And yes, up to this point, the risk is not that high.
But, taking it one step further, if an attacker can intercept this traffic and capture the ‘prezi-auth’ session cookie, he can then use this value to impersonate the Prezi user on the web application too. And, by doing that, the attacker has complete access to the Prezi user’s account, where he can update profile information, create prezis, and do everything a normal Prezi user can.
There is an option to enable HTTPS throughout the web application, and once you do that, all HTTP requests are redirected to HTTPS. So, essentially, SSL is enforced on the web application but not on the Desktop application. And, because of this, by using the session cookie captured from the Desktop application’s traffic, an attacker can potentially gain complete control of the Prezi user’s web application account.
Some web service calls did not enforce SSL. Here is an example attack scenario:
1. A Prezi user opens the Desktop application and logs in. He is assigned a “prezi-auth” session cookie by the server. This happens over HTTPS, which is fine. Also assume that an attacker is eavesdropping on this network and can see any requests/responses transmitted over HTTP.
2. The attacker sends an email to the Prezi user with a link that auto-submits a POST request on the user’s behalf to the URL http://prezi.com/api/token/objectlibraryservice/list/. Note that this is meant to be over HTTPS, but the attacker is making the user go to the HTTP link. Since SSL is not enforced, this request will be successfully processed by the server.
NOTE: You can consider this like a CSRF attack: tricking the user into clicking a link and sending a request to the server over HTTP. I am sure there are other avenues by which a user can be tricked into doing this; it doesn’t have to be a CSRF attack vector. I am using CSRF here just as a PoC. Also notice that there are no CSRF tokens present either in the header or in the POST body of this request, which makes it even easier.
3. Now, the eavesdropping attacker sees this request over HTTP and can capture the “prezi-auth” session cookie. This cookie should really have the Secure flag to prevent this; since it does not, its value is transmitted over HTTP while the user is logged in to his Prezi account.
4. The attacker can then use the captured prezi-auth session cookie and impersonate the user in the web application.
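The eavesdropper’s side of steps 3 and 4 can be simulated offline. Here is a Python sketch of extracting the cookie from a captured plaintext request (the request and token value below are made up):

```python
import re

# A plaintext HTTP request as an eavesdropper on the network would
# see it. The token value is invented for illustration.
captured = (
    "POST /api/token/objectlibraryservice/list/ HTTP/1.1\r\n"
    "Host: prezi.com\r\n"
    "Cookie: prezi-auth=deadbeefcafe; other=1\r\n"
    "\r\n"
)

def sniff_cookie(raw_request, name):
    # Pull a named cookie value out of a captured request. This is
    # trivial once the traffic is on the wire unencrypted.
    match = re.search(r"\b%s=([^;\r\n]+)" % re.escape(name), raw_request)
    return match.group(1) if match else None
```

The captured value can then be replayed as-is against the HTTPS web application, since it is the same session cookie.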
A valid response is received when a request is sent over HTTP. Notice that I had the SSL option enabled in the website.
Sending a request to the web application at the URL /settings/ with just the captured ‘prezi-auth’ session cookie yielded the profile page of the prezi user account
They said it was a nice finding, but since the site is not fully over HTTPS, they cannot give me a bounty; they will, however, send me some swag. I agree with them, and it is at their discretion to judge this. I am happy with a token of appreciation in the form of a t-shirt, sunglasses, etc. Something is better than nothing.
This post does not contain any new information. I have just read Buffer’s blog and the blog at  and tried to put my thoughts down, or rather tried to present them in a way that would make sense to me. If you are looking for the original post, please refer to .
MongoHQ is breached -> Unencrypted OAuth Tokens stored in Buffer’s DB hosted on MongoHQ is stolen -> Spam Posts on Buffer users’ Twitter/Facebook account
The attackers were also able to steal Buffer’s source code from GitHub. It had unencrypted secret tokens hardcoded, which made the Twitter spam posts possible (details below).
Yes, they had their source on GitHub -> FAIL. Buffer suspects that the attackers were able to get access to their GitHub account by using the passwords leaked from Adobe’s breach for one of their employees -> Epic FAIL?
On October 26, 2013, Buffer was hacked.
Attackers stole OAuth tokens stored in Buffer’s database and posted spam posts on Buffer users’ Facebook and Twitter accounts.
This database was hosted on MongoHQ which made it all possible since MongoHQ was breached earlier.
Both Facebook (OAuth 2.0) and Twitter (OAuth 1.0a) use OAuth to authorize 3rd-party websites to post on the user’s behalf once the user has granted them access.
Buffer made some serious security mistakes that made this possible:
- Stored OAuth tokens unencrypted.
- Did not use the optional feature of requiring the app_secret in their Facebook integration. The app_secret is a token that works like an authentication secret between Facebook and the developer (Buffer). With it enabled, an attacker would need both the OAuth tokens and the app_secret in order to post. Buffer’s developers did not utilize this setting, so only the OAuth tokens were required to post.
- Twitter, by default, makes it mandatory to use the above-mentioned app_secret, so the attackers had to use it in order to tweet on users’ behalf. So, how did the attackers gain access to it, you ask? Buffer stored it unencrypted in their source code hosted on GitHub.
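Hunting for hardcoded secrets in a repository is easy to automate. Here is a simple (and deliberately naive) Python sketch of the kind of grep that would have flagged a hardcoded app_secret; the keyword list and the sample line are illustrative, not exhaustive:

```python
import re

# Keyword patterns commonly flagged when reviewing source for
# hardcoded credentials.
SECRET_PATTERN = re.compile(
    r'(password|passwd|secret|api[_-]?key|token)\s*[=:]\s*["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def find_hardcoded_secrets(source):
    # Return (keyword, value) pairs that look like hardcoded secrets.
    return [(m.group(1), m.group(2)) for m in SECRET_PATTERN.finditer(source)]

# Made-up sample line of source code.
sample = 'app_secret = "0123456789abcdef"\nretries = 3\n'
```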
What Buffer basically said can be found here – http://open.bufferapp.com/buffer-has-been-hacked-here-is-whats-going-on/
- They revoked all pre-existing Twitter tokens.
- OAuth tokens are now encrypted.
They never mentioned the following:
- Is it HSM based?
- Are the layers of infrastructure partitioned, i.e., is everything encryption-related completely isolated from Buffer’s database on MongoHQ?
To this, NopSec CTO Michelangelo Sidagni said – “A one-time control fix is not usually enough to cover an entire security program. Instead of detailing just that aspect, [Buffer] should have talked about their renewed and revamped approach on their overall security program. Making a statement that this is going to fix all their problems without covering their entire security posture is just an invitation for the attacker to strike again….just for the sake of it.” -> RIGHT!!
Recommended Way of implementing OAuth tokens:
- Using Hardware Security Module (HSM). Developers are not going to do this. Too much work??
- If the servers are hosted on a third-party cloud, they can’t attach an HSM anywhere (it’s all in the cloud somewhere, DUH). So, this introduces a Catch-22 situation. To address this, Amazon launched AWS CloudHSM. But, from what I have read, this is not feasible as it’s too expensive.
- One approach to mitigate but not completely eradicate would be to store keys in files that are not accessible through the Web.
- Encrypt at rest on Buffer’s servers, before it gets sent to MongoHQ.
- Buffer has enabled 2-factor authN for their team now, which is good and adds an additional layer. But, why wasn’t this done earlier??
- Be cautious when authorizing 3rd party websites/apps to log us in by using our Facebook/Twitter credentials. We don’t know how they have implemented their OAuth.
- Cloud Computing risks as mentioned earlier. You can never trust your data with a third party cloud. You need to do your due diligence.
- Faulty OAuth implementations. There seem to be a lot of different OAuth implementations out there, some secure, some not. So, can we trust them?
- API Security issues.
- It is incredible to see how one breach in one company can cause breaches in multiple others.
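The “encrypt at rest before it gets sent to MongoHQ” recommendation above boils down to a simple pattern: the key stays on your own servers (ideally in a file outside the web root, or an HSM), and the hosting provider only ever sees ciphertext. A Python sketch of the shape of it; note the XOR stand-in is NOT real encryption and is used here only so the example is self-contained; in practice, substitute an authenticated cipher such as AES-GCM or Fernet:

```python
import base64
import secrets

# In practice the key would be loaded from protected storage,
# never hardcoded and never stored next to the data.
KEY = secrets.token_bytes(32)

def _xor(data, key):
    # Placeholder cipher for illustration only. NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_token(oauth_token):
    # Encrypt locally, then store only the ciphertext with the
    # hosting provider. A provider breach yields useless blobs.
    return base64.b64encode(_xor(oauth_token.encode(), KEY)).decode()

def decrypt_token(stored):
    return _xor(base64.b64decode(stored), KEY).decode()
```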
I wrote a post about my experience with the Shopify Bug Bounty program yesterday.
Soon after that, folks from Shopify commented on that post saying that it was still reproducible, that they had not changed anything in the code, and that it was still not considered a valid finding.
After further investigation, I tried to reproduce it again and was able to do so successfully. I realized what I was doing wrong and how I reproduced it in the first place. In the process of doing so, I realized a different attack vector which I hadn’t thought of earlier. I still think this is an issue but the folks at Shopify don’t seem to agree with me.
So, this is what happens during the checkout process:
1. When a customer (C) proceeds to checkout, he is asked to authenticate to the website. This is because the shop admin had the setting where only registered customers are able to check out. The URL looks like this – http://<myshop>.myshopify.com/account/login/xxxxxxxxxxxx?checkout_url=https%3A%2F%2Fcheckout.shopify.com%2Fcarts%2F<shop_id>%2F<cart_token>
2. After authenticating, the cart_token or the cart_id gets associated with that customer (C) and the shop. The customer C is then redirected to the URL https://checkout.shopify.com/carts/<shop_id>/<cart_token>. This is the key step. The customer has to authenticate to the shop at least once in order for this attack to work. As we will see from the steps below, this is trivial. The attacker doesn’t have to trick a user to authenticate.
3. After authenticating, let’s assume that the customer decides he does not want to shop anymore and simply closes the browser. No matter how tech-savvy this customer is, he is left with no choice but to close the browser, because there is no logout option on that page.
Attack Vector 1 – If the customer is shopping on a shared workstation, an attacker comes to that workstation, reopens the browser, looks through the browser history and navigates to the above URL. Boom, all the information is right there.
Attack Vector 2 – Since there is no session associated with this request, there are no session cookies either. All the attacker needs is the URL. And, he can navigate to it from a different computer all together and access the information.
Once the attacker navigates to the above URL, he is taken directly to the checkout page of customer C, where he can see the customer’s email address (masked below) and his billing address. The attacker simply enters his own shipping address and continues to the next step.
4. In the next step, the attacker chooses one of the many payment options. Since he would not want to use his own credit card, he chooses Bank Deposit and completes the purchase.
5. And, it’s done.
So, the attacker was able to successfully place an order to be shipped to his address without entering any credit card details.
The folks at Shopify replied back with the following:
“First, this is no different from someone forgetting to log out of any other site, there’s not much we can do here (it’s the user’s responsibility to protect their account, just like any other site). Secondly, keeping the person logged-in is not a bug, it’s the expected behaviour. The purpose of logging in before placing an order is not to store payment information, which greatly reduce the risk of forgetting to log out. An attacker who “find” an active session from another user would still have to pay for that order with a valid credit card which is what we really want to protect here (credit card information).”
First, this is no different from someone forgetting to log out of any other site, there’s not much we can do here (it’s the user’s responsibility to protect their account, just like any other site)
I am not sure how other e-commerce sites (eg Amazon) implement shopping carts but this one is definitely weird.
Why isn’t there a logout option on the checkout page once the customer is authenticated? As I mentioned earlier, even if the customer wants to log out, there is NO LOGOUT option on the checkout page. If that option existed and the customer still decided to close the browser, I could see their point. But providing a logout option on the checkout page would definitely help: on logging out, the cart_token could be disassociated so that it cannot be used by an attacker in the future.
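The logout behaviour I am suggesting is straightforward to sketch. In this hypothetical Python model of the cart store, logging out disassociates the cart_token, so replaying the checkout URL yields a login redirect instead of the customer’s details:

```python
# Hypothetical in-memory cart store, for illustration only.
CARTS = {}  # cart_token -> customer record

def associate_cart(cart_token, customer):
    # Step 2 of the flow: after authentication, the cart_token is
    # tied to the customer and the shop.
    CARTS[cart_token] = customer

def logout(cart_token):
    # The suggested fix: disassociate the token, so the old
    # checkout URL becomes useless to anyone who finds it.
    CARTS.pop(cart_token, None)

def checkout_page(cart_token):
    customer = CARTS.get(cart_token)
    if customer is None:
        # Token unknown or logged out: ask for authentication
        # rather than exposing the customer's details.
        return "redirect: /account/login"
    return "checkout for %s" % customer
```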
Secondly, keeping the person logged-in is not a bug, it’s the expected behaviour.
I understand. But does this mean that disclosing information like customers’ email and billing addresses is also expected? I don’t think this is the case with other e-commerce websites. Aren’t we supposed to protect customers’ privacy as much as we can?
The purpose of logging in before placing an order is not to store payment information, which greatly reduce the risk of forgetting to log out. An attacker who “find” an active session from another user would still have to pay for that order with a valid credit card which is what we really want to protect here (credit card information).”
I just showed how it’s possible to place an order without entering any credit card information. The Money Order option can be chosen as well, with the same result. Now, I don’t know how the Bank Deposit and Money Order options are supposed to be set up in the admin console, but the attacker definitely does not require any of that information to place an order. It is good that the customer’s credit card details are not pre-populated like his email and billing address, but that doesn’t mean the risk is reduced.
I would like to know what you guys think about this. I have spent a lot of time thinking this through. I haven’t seen a lot of e-commerce websites out there so I am not sure if this is something that is acceptable or not.
UPDATE: I have written a second blog post now following this.
Shopify recently announced their Bug Bounty program. And, I jumped onto the hunt as soon as it was launched. As I was informed by them, I was the second person to register for it.
This blog post is about what I reported, why they did not consider it a valid finding then, and how it now looks like it has been fixed.
The bug I reported was on the page - https://<myshop>.myshopify.com/admin/settings/payments
Basically, on this page, an admin can change/edit/add settings related to the admin’s shop. There is one particular setting called “Customer Accounts” which looks like this:
Now, if I were a shop admin, this is pretty self explanatory. If I choose “Accounts are required”, customers can ONLY check out if they have an account created for them by me. If they don’t have an account, they should not be able to check out. Right? We will come back to this later.
Let’s proceed to the finding:
1. Assuming the shop in question is <myshop>.myshopify.com, navigate to this site as an attacker.
2. View a product and add it to the cart. Click on the cart.
3. Proceed to checkout.
4. Notice that as per our setting above, the attacker is asked to authenticate before being allowed to checkout.
5. Observe that the URL looks something like this -
6. Now, without authentication, directly navigate to the URL
7. Notice that the attacker can now see the name and email address of a registered customer of the shop (the masked email address belongs to the test1 user that was registered as a customer of this shop by the admin), along with a form to enter a billing address or choose from the customer’s existing billing addresses. The screenshot is attached below:
So, this is what I reported. I feel that disclosing the name, email address and billing address of a shop’s customers to an unauthenticated attacker is just unnecessary and exposes unwanted risk.
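For what it’s worth, here is a Python sketch of the access check I would expect (names and structure are hypothetical): knowing the checkout URL alone, i.e. the shop_id and cart_token, should not be enough; the request should also carry a valid session for the cart’s owner.

```python
import secrets

# Hypothetical stores, for illustration only.
SESSIONS = {}      # session_id -> customer
CART_OWNERS = {}   # cart_token -> customer

def start_session(customer):
    # Issue an unguessable session identifier at login.
    sid = secrets.token_urlsafe(16)
    SESSIONS[sid] = customer
    return sid

def checkout(cart_token, session_id=None):
    owner = CART_OWNERS.get(cart_token)
    viewer = SESSIONS.get(session_id)
    if owner is None or viewer != owner:
        # The URL alone is not enough: redirect to login instead
        # of leaking the owner's name, email and billing address.
        return "redirect: /account/login"
    return "checkout page for %s" % owner
```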
Now, I checked it today again and this is what I observed:
1. The URL in step 5 above now has an additional parameter called “sid”. I don’t know what it is being used for, but it was not present when I tested originally.
2. When I try to navigate directly to the URL mentioned in step 6 above, I am not redirected to the checkout page anymore. This was certainly not the case when I tested it.
This clearly shows that there were some changes made in the code base.
Below is what I received regarding my submission:
“the customer account required is not intended to prevent the actions you outlined, just to make sure that an order is placed from an account.“
I don’t understand how the above justification makes sense. If that functionality is just used to ensure that the order is placed from an account, why would you disclose account details like name, email and billing address to someone who is not even a customer of the shop?
Secondly, if that functionality is not intended to prevent the actions I outlined, why does the setting say that it is? Am I just misreading what it is supposed to do? Or is it just there to make admins feel safe about their customers? I am confused here.
Needless to say, I did not get any credit for reporting this. And now it looks like they have fixed it/made changes so that it is not reproducible anymore. I am disappointed.
Auditing Shell Scripts

- Check if the scripts are using absolute or relative paths. Using absolute paths wherever possible is recommended as a good practice.
- Check all input/output. This is probably the most common thing to look for, even in shell scripts. It is recommended to quote variable expansions, e.g. if [ "$FOO" = "foo" ].
- Check to make sure passwords, keys and other secrets are not present as environment variables or hardcoded in scripts. Search/grep for common keywords such as password, passwd, secret, key and token.
- Check to see if any secret data is being passed in an external command’s arguments. It is recommended to pass such data via a pipe or redirection instead.
- Check to make sure $PATH is always set carefully. A PATH inherited from the caller should not be trusted, especially if the script is running as root. In fact, whenever an environment variable inherited from the caller is used, think about what could happen if the caller put something misleading in the variable, e.g., if the caller set $HOME to /etc.
- If one of the major use cases is “user logs in via SSH and executes a script via sudo”, think about using SSH command restriction instead. That way users don’t get a shell on the target system at all; they can just execute that one command remotely.
- Check for ownership, execution and permission issues. Does the script contain info that others shouldn’t be able to view? If so, make sure it’s only readable by the owner. Check if the scripts can be executed by anybody or just by the owner of the script.
- chmod 0700 is a good option.
- chmod u+x scriptname – the executable bit is set only for the user who owns the script.
- Be wary of symbolic links. These might need to be explored more.
- Temporary File Attack – look for secret data or anything sensitive being written to a fixed temp file. Writing anything sensitive to temporary files is never a good idea, especially when these files are generated with a fixed name, e.g. test.tmp.
- Input File Attack – see if the contents of an input file (at a fixed path) are being sent to a different machine using “nc” or something like it. The input file can be manipulated, and sensitive data can end up being sent to the attacker instead.
- Authentication Attacks – see if they are using standard variables like UID, USER and HOME to authenticate. These values can often be set by the caller and are not the right way to retrieve user identity.
These are some of the tools/techniques that helped me pentest a Silverlight application recently:
- .NET Reflector / Silverlight Spy = These are great for decompiling .xap files stored on the client side. One can look for hardcoded credentials or any piece of code which might help in better understanding of the functionality of the Silverlight application.
- SoapUI = This is again a great tool to play with web services. If it is a Silverlight application, there is a great chance it is making some interesting web service calls to the server. In my case, this wasn’t really of much help because the application under test was using Microsoft binary encoding (content-type = application/soap+msbin1) for SOAP messages, and SoapUI does not presently support this.
- clientaccesspolicy.xml = If you are pentesting a Silverlight application, you have to look at clientaccesspolicy.xml for cross-domain access.
- Isolated Storage = Check for isolated storage when decompiling the .xap file. You might find something interesting there.
- Burp/Fiddler plugins = For the above Microsoft binary encoding format, Burp and Fiddler have some interesting plugins. With the Fiddler plugin, although I wasn’t able to fiddle with the SOAP requests/responses, I was able to see what was going on with the web service calls in a very user-friendly format. With Burp, you have to chain two Burp instances to be able to intercept the requests and responses.
- WSDL = Observe the web services being called by using any proxy tool (Burp, Fiddler, etc). Look for their WSDL files and discover some interesting stuff.
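As a quick example of what to look for in clientaccesspolicy.xml, here is a Python sketch that flags a wildcard cross-domain policy (the sample policy document below is made up for illustration):

```python
import xml.etree.ElementTree as ET

# Made-up example of an overly permissive Silverlight
# cross-domain policy: it grants every domain access.
SAMPLE = """<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>"""

def allows_any_domain(policy_xml):
    # Flag wildcard <domain uri="*"> entries.
    root = ET.fromstring(policy_xml)
    return any(d.get("uri") == "*" for d in root.iter("domain"))
```

A wildcard here means any site on the internet can make cross-domain requests to the application from a Silverlight client, which is usually worth reporting.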