
Implementing App Level Encryption in Google PubSub using Google Cloud KMS

If you need application level encryption while using Google PubSub as a distributed messaging queue, it is not obvious how to implement it: I could not find any documentation from Google around this exact use case. There was some code, but I had to make sense of it and combine a few other things to finally get it working.

I was able to implement it using Google Cloud KMS, and it turned out to be pretty easy. So, here goes..

You would need a service account in order to get this working. Next, copy the below two files locally:

  • –
  • –

Make sure you have all the required libraries installed, like gcloud, to run these Python files. Also, replace the variables marked with "<>" as per your GCloud environment. You would need a project_id, location, keyring, cryptokey, testtopic and testsub, where testsub is the subscription for testtopic.

You would also need to grant the "Cloud KMS CryptoKey Encrypter/Decrypter" role to your service account by navigating to IAM & Admin -> Key Management in the Console.

Now, when you run the first file with Python, it encrypts a JSON – {"test":"test"} – and drops it in the testtopic.

Next, when you run the second file with Python, it subscribes to testsub, the subscription for testtopic. It pulls the encrypted data and then uses the KMS API with the service account credentials to decrypt the message.

Thus, anybody who manages to grab messages from the PubSub topic cannot actually decrypt them, because they wouldn't be authorized to use the key! Simple and easy..
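The original files aren't linked above, so here is a minimal sketch of what the two scripts do. This is my reconstruction, not the original code: it assumes the google-cloud-kms and google-cloud-pubsub client libraries, and the function names and the base64 wrapping of the ciphertext are my own choices.

```python
import base64
import json


def make_payload(message: dict) -> bytes:
    """Serialize a dict to the JSON bytes we will encrypt."""
    return json.dumps(message).encode("utf-8")


def encrypt_and_publish(project_id, location, keyring, cryptokey, topic, message):
    """Encrypt `message` with Cloud KMS, then publish the ciphertext to Pub/Sub."""
    # Imported lazily so the pure helpers stay usable without the GCP libraries.
    from google.cloud import kms, pubsub_v1

    kms_client = kms.KeyManagementServiceClient()
    key_name = kms_client.crypto_key_path(project_id, location, keyring, cryptokey)
    ciphertext = kms_client.encrypt(
        request={"name": key_name, "plaintext": make_payload(message)}
    ).ciphertext

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic)
    # Base64-encode so the ciphertext travels as a clean message body.
    publisher.publish(topic_path, base64.b64encode(ciphertext)).result()


def pull_and_decrypt(project_id, location, keyring, cryptokey, subscription):
    """Pull one message from the subscription and decrypt it with KMS."""
    from google.cloud import kms, pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path(project_id, subscription)
    response = subscriber.pull(request={"subscription": sub_path, "max_messages": 1})

    kms_client = kms.KeyManagementServiceClient()
    key_name = kms_client.crypto_key_path(project_id, location, keyring, cryptokey)
    for msg in response.received_messages:
        plaintext = kms_client.decrypt(
            request={"name": key_name, "ciphertext": base64.b64decode(msg.message.data)}
        ).plaintext
        subscriber.acknowledge(request={"subscription": sub_path, "ack_ids": [msg.ack_id]})
        return json.loads(plaintext)
```

The service account running the subscriber needs the Encrypter/Decrypter role from above; anyone else pulling from the topic only ever sees ciphertext.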


Brutesubs – An automation framework for running multiple subdomain bruteforcing tools in parallel via Docker

EDIT: I made some changes in the code (added ALTDNS into the mixture as well) and the blogpost (removed the end part where I described the approach and moved that to the github repo) after the first release. The overall idea of why this framework exists is still here and applicable. Everything else around the working of the framework has been migrated to the github page now – Moving on, this blog won't be updated with the new features in the framework. I will push all the updates directly to the Github repo.

Read more…

Dockerizing Lair Framework

If you are not familiar with the Lair framework, I highly recommend you check it out. It is a nice GUI to visualize scans and triage them without having to maintain all the information / database elsewhere.

And, with Docker, the best part is that you don’t have to worry about starting the different pieces of the LAIR framework individually like the documentation asks us to here. Just follow me along!

The first thing is to clone the repo from my github –

Once you have the repo locally, change the <IP> value to the external IP of the machine where your docker daemon is listening. The places where this needs to change are:

  • .env file
  • proxy/Dockerfile
  • rs.initiate command (towards the end of this guide)


Also, make sure you have docker and docker-compose installed on your local system.

Next, run “docker-compose build” from the docker-lair directory that has the docker-compose.yml file.

Go grab a beer!! No, seriously do it. The build process will take some time.

Once the docker build process finishes, run “docker images” and you should see the following images:

Screen Shot 2016-06-09 at 8.09.09 PM

Then, as root, check whether the "db" and "db/_data" folders exist in the "/var/lib/docker/volumes/" folder. If they don't, you need to create them:

  • mkdir /var/lib/docker/volumes/db

  • mkdir /var/lib/docker/volumes/db/_data

Moving along, we are now going to start a container from the "dockerlair_mongo" image. We have to do this because we need to initiate a replication set in the mongo database before we can bring the entire environment up. It is something that we have to do to get LAIR up and running. Don't ask me why! So, the way we do that is:

docker run -d -p 27017:27017 -v db:/data/db dockerlair_mongo /bin/bash -c '/usr/bin/mongod --quiet --nounixsocket --replSet rs0'

If you're familiar with the docker run command, you will notice that we are running the container as a daemon (-d flag), exposing port 27017 (-p flag), associating a data volume with the container (-v flag), and then, once the container has started, running the command "/usr/bin/mongod --quiet --nounixsocket --replSet rs0". This starts the mongo daemon on the server with the replication set configured, but it does not initiate the replication set; we will do that after this step. Notice that we are creating a persistent data volume for our database, so if we ever need to migrate to a different environment, copying the "/var/lib/docker/volumes/db" directory into the new environment will get our data back. Also, grab the name of the container that gets started after running the above command.

Next, we need to run

docker exec -it <container_name> /bin/bash

We are now entering a bash shell inside that mongo db container because, like I said above, we need to initiate a replication set. So, once you are inside the bash shell, run

mongo

and you should see the mongodb shell. Type the following commands one after the other in that shell:

use admin

rs.initiate({_id:"rs0", members: [{_id: 1, host: "<IP>:27017"}]})

rs.status()


The output should look like:

Screen Shot 2016-06-09 at 9.31.52 PM

In this step, we switched to the db admin and then initiated the replication set. We finally checked the status to make sure everything looks good. You can now quit from the mongodb shell and exit from the root prompt of that container:

exit

exit
Once you are done with this step, stop and remove the docker container that you started above (docker stop and docker rm with the container name). Our work is mostly done by now.

We had to do the steps above because we wanted the status of the replication set to propagate in our persistent data volume in the folder /var/lib/docker/volumes/db. If there is an easy way to bootstrap all of this with Docker, please let me know. I am more than happy to avoid these extra steps above.

One good thing about doing the extra steps above is that you only have to do it once when you first start the entire environment of the LAIR framework. You won’t have to do it again unless you move your containers to a new environment and the IP changes. Now, that is a whole different issue that we can dive into later. It is a painful process.
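In that spirit, the one-time bootstrap above can at least be scripted. Here is a rough Python sketch that just shells out to docker; the helper names are mine, and it assumes docker is on your PATH and the dockerlair_mongo image has already been built:

```python
import subprocess


def mongo_run_cmd():
    """The docker run command that starts the dockerlair_mongo container
    with the replica-set flag, exactly as in the manual step above."""
    return [
        "docker", "run", "-d",
        "-p", "27017:27017",
        "-v", "db:/data/db",
        "dockerlair_mongo",
        "/bin/bash", "-c",
        "/usr/bin/mongod --quiet --nounixsocket --replSet rs0",
    ]


def initiate_js(ip):
    """The mongo shell snippet that initiates the replication set for the given IP."""
    return 'rs.initiate({_id:"rs0", members: [{_id: 1, host: "%s:27017"}]})' % ip


def bootstrap(ip):
    """Start the container, then initiate the replication set inside it."""
    # docker run -d prints the new container ID on stdout.
    container_id = subprocess.check_output(mongo_run_cmd()).decode().strip()
    # Run rs.initiate against the admin db without an interactive shell.
    subprocess.check_call(
        ["docker", "exec", container_id, "mongo", "admin", "--eval", initiate_js(ip)]
    )
    return container_id
```

Using `mongo admin --eval` avoids the interactive `docker exec -it` / `use admin` dance; you would still stop and remove the container afterwards as described above.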


Our final step is to bring up the entire LAIR environment by typing the below command from the docker-lair directory, because our docker-compose.yml file is there, remember?

docker-compose up -d

And, you can browse to your LAIR API at https://<IP>:11013

It will look like:

Screen Shot 2016-06-09 at 9.59.15 PM

So, that’s it folks!!

The section below will have some steps that need to be followed ONLY if you want to migrate your lair database to a new environment.

In order to do this, the replication set we initiated above in our old environment needs to be updated: the old IP has to be changed to the new IP. Unfortunately, this is not straightforward either.

You would begin first by copying the entire /var/lib/docker/volumes/db directory into the new environment to make sure you get your data back.

Then, you would need to obviously change the <IP> in all the files accordingly.

After that, you would start the mongo db container again like above and get into the mongo shell. If you do an rs.status(), you will see that the replication set has already been initiated with the old IP. This is because of the database files that we just copied over from the old environment.

So, in order to change this, you have to run the following commands from the mongo shell:

> use local

> cfg = db.system.replset.findOne({_id:"rs0"})

> cfg.members[0].host = "<newIP>:27017"

> db.system.replset.update({_id:"rs0"}, cfg)

> use admin

> db.shutdownServer()

We just replaced the old IP with the new one and shut the server down. Whenever we start the mongodb container and the mongod daemon again, this change will be reflected and you will be up and running in the new environment!
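If you migrate often, the same re-pointing steps can be generated as a single mongo shell script. This is a hypothetical helper of my own; it uses db.getSiblingDB() instead of `use` so the whole thing can be fed to `mongo --eval` non-interactively:

```python
def repoint_replset_js(new_ip):
    """Build a mongo shell script that re-points the rs0 replica set member
    at a new IP and shuts the server down, mirroring the manual steps above."""
    return "\n".join([
        # `use local` / `use admin` don't work under --eval, so switch
        # databases with getSiblingDB instead.
        'cfg = db.getSiblingDB("local").system.replset.findOne({_id:"rs0"})',
        'cfg.members[0].host = "%s:27017"' % new_ip,
        'db.getSiblingDB("local").system.replset.update({_id:"rs0"}, cfg)',
        'db.getSiblingDB("admin").shutdownServer()',
    ])
```

You would pass the output to the mongo client inside the container, e.g. `docker exec <container_name> mongo --eval "$(python -c ...)"`, with the container name being whatever docker assigned.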


Running the Dradis Pro Appliance in an OpenStack environment

We use OpenStack as our lab IaaS and we use Dradis Pro for report generation and general note keeping. Prior to moving our entire lab into the OpenStack environment, we had deployed our Dradis Pro appliance in a VMware environment, which is what it supports, so that was no problem. But now, since we were moving everything into OpenStack, we had to migrate the Dradis appliance as well.

But apparently nobody had tried installing the Dradis Pro appliance in their OpenStack environment, so I couldn't really find anything online or in their forums/guides about it, and I decided to give it a shot. Surprisingly, it was really easy.

  • The first step is to download the .ova file from your Dradis pro account at
  • You then have to extract the .ova file (an .ova is just a tar archive)

    tar xvf dradis-professional-x86_64-20151122.ova

  • This generates a .vmdk and .ovf file. Ignore the .ovf file and convert the .vmdk file into a qcow2 formatted .img file. Install the qemu-img utility if you don’t have it already.

    qemu-img convert -O qcow2 dradis-professional-x86_64-20140331-disk1.vmdk dradis-appliance.img

  • At this point, you are pretty much done creating the image. You just have to upload this .img file to your OpenStack environment using the Glance API

    glance image-create --name='dradis-appliance' --container-format=bare --disk-format=qcow2 < dradis-appliance.img

    PS – The above step is not as trivial as I made it seem. You would need your OpenStack environment variables set to the values required to connect to your OpenStack environment, and you would need the glance client installed. But I am assuming you have already done all that leg work, since this blog is not about connecting to your OpenStack environment.

  • The above glance image-create command might take some time depending upon how big the .img file is and how fast/slow your connection is. But, once it’s done, you should see the details of the image uploaded to your OpenStack environment.
  • So, now you have a valid image in your OpenStack lab and are ready to launch an instance from that image. I chose an m1.medium flavor (2 VCPUs, 80 GB root disk, 8192 MB RAM). Choose your access security groups and key pairs accordingly, and launch the instance.
  • Wait for a while. You should then see the console of this instance with “Enter the passphrase: “. Enter the default passphrase which you can find in your account at
  • Log in as root with the default credentials.
  • At this point, if you try navigating to https://<dradis-IP>/setup/upgrade, you might not see what you expect. In order to troubleshoot, please follow this guide. I had to do this as well: I had to restart the god service after logging into the console before I could get to the dradis UI in the browser.

    sudo /etc/init.d/god restart

  • At this point, you should be all set to complete the remaining steps from the guide here.

PS – Don’t forget to change the default passphrase, login credentials, etc.
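The image-preparation steps above can be strung together in a small script. A rough Python sketch (the helper names are mine; it assumes tar, qemu-img and the glance client are installed and your OpenStack environment variables are already set):

```python
import subprocess


def glance_cmd(image_name):
    """The glance image-create arguments used for the upload step."""
    return [
        "glance", "image-create",
        "--name=" + image_name,
        "--container-format=bare",
        "--disk-format=qcow2",
    ]


def build_and_upload(ova_path, vmdk_name, image_name="dradis-appliance"):
    """Extract the OVA, convert the VMDK to qcow2, and upload it via glance."""
    img = image_name + ".img"
    subprocess.check_call(["tar", "xvf", ova_path])  # unpacks the .vmdk and .ovf
    subprocess.check_call(["qemu-img", "convert", "-O", "qcow2", vmdk_name, img])
    with open(img, "rb") as f:  # glance reads the image body from stdin here
        subprocess.check_call(glance_cmd(image_name), stdin=f)
```

The .vmdk filename inside the OVA varies by release, which is why it is a parameter rather than hardcoded.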



How I ran my first half marathon!


A few months ago, the thought of running even 3 miles seemed very daunting. I have always hated running. There have been times when I literally forced myself to run just because I wanted to run, but I have never gotten myself to enjoy it like many runners do. They call it the runner's high, I think? I still don't know what that is, by the way. I would rather play a sport or do some HIIT type of workouts. I love those and would opt for them any day over running, given the option. I am a big fan of CrossFit.

Anyways, from what I remember, the longest I could run non-stop before I started training for the SD Half was maybe 5 minutes tops, at a speed of 5 on a treadmill. That's like 0.4 miles or something. After that, I would drag myself to the 3 mile mark by running and walking in tandem. That was my relationship with running until I decided I was going to run a half marathon NO MATTER WHAT. Half marathons, or any marathon, or even a 5K for that matter, have always been a fleeting idea to me. I have always been so intrigued by the people who do them. I always thought that maybe some day I would actually have the courage to do it myself, but I never took it seriously. It was one of those things that I really wanted to accomplish but wasn't really driven towards. It was, at most, a dream!

So, after I moved to San Diego, some of my close friends were going to run the SD Half Marathon and they asked me if I was interested as well. I thought this was probably the best opportunity I would have to train together, run together (not literally, but just be there for each other as support) and actually be focused and dedicated towards something I had only thought about accomplishing all these years. And that was it. I didn't want to let it go.

So, on January 25, I signed up for it and there was no looking back after that. I had roughly 7 weeks to train. I found a training schedule online and tried sticking to it as much as I could. It started really slow the first 2 weeks, with 2-3 mile runs during the week (walking+running) and long 5-6 mile runs during the weekends. Not to mention, those long runs were brutal. I hated them to the core. I wanted to give up so bad, and it took me forever to get them done. Even after forcing myself to run during the training weeks, I never really enjoyed it. I just pretended to be nonchalant about it and went about it. It was a goal that I wanted to complete, so I just sucked it up and ran without thinking.

Weeks passed by. I could definitely see a lot of improvement in my stamina; I could run non-stop for much longer distances than when I first started. One day, I surprised myself by running 1 mile non-stop and still having some energy in the tank to run some more. That was truly groundbreaking lol. Seeing the improvement, I kept going. The shorter runs during the week increased to 3-4 miles and the longer runs kept getting more brutal at 7-8 miles. My time per mile started to improve as well. I was really happy, and I already felt like I had won a huge battle against my mental block of not being able to run long distances. The support from friends and the curiosity to know how everyone was doing in their training (their running times, their weekend long runs, etc.) was a huge advantage as well in keeping myself motivated throughout the 7 weeks.

Fast forward to the D-day: by then, the maximum distance I had run in one go was 9 miles (some walking, but mostly running). The longest I had run non-stop without walking was about 3.4 miles in 35 mins, which was also my fastest, at an average pace of 10'19" min/mile. So I could technically run a 5K with ease. The total number of miles I had run in training since Jan 25 was a staggering 97 miles. That's almost 14 miles/week, or 2 miles/day, for 7 weeks. Not bad, eh?!

But not everything goes right all the time now, does it? On the D-day, I started fine. I was clocking around 12:00 min/mile for the first 7 miles. I even ran the first 5 miles without stopping, which I had not done before, even in training. I was all set to get the 13.1 miles done between 2:35 and 2:45, which was my goal. But I started cramping really bad around the 8th mile. My right thigh and my left foot arch started hurting every time I landed my foot on the ground. On top of that, the 10th mile was the most difficult part of the course: it was all uphill. Miles 11-13.1 were probably the easiest, but by that time I was hurting so bad I literally couldn't run, or even walk for that matter, without limping. So my pace and timing were all gone by then. I had race guards come up to me twice to make sure I was okay because they saw me struggle towards the final miles. But I survived somehow. It sucked to the core. The pain in my foot and thigh was excruciating, but I managed to finish in 3:10. That was almost 25 mins more than what I originally aimed for.

Needless to say, I am disappointed with the timing. But I guess I made a few mistakes (these are not excuses; I didn't completely achieve what I set out for, and I am just noting them for posterity), and I hope not to repeat them in case I decide to run a half again:

  • I did not stretch all my muscles properly before the race.
  • I stopped running outside after the first few weeks of training so the transition from running on a treadmill to outside suddenly was a little off and unexpected.
  • I didn't do my last long run of 10 miles the weekend before the D-day because I didn't feel like it. My last long run before that had been the 9 miler two weekends earlier. So it had basically been 2 weeks since I had done any long runs, and maybe my body wasn't completely prepared to take on the 13.1 miles on the D-day.

So, what's next?! I honestly don't know. I am going to take some well-earned rest and concentrate more on HIIT type workouts. I am still not a big fan of long distance running, and if I ever run again, it will only be because I want to challenge myself.



Ability to send payment requests in spite of being blocked by the recipient

TL;DR – I, as an attacker could send payment requests to anyone on Facebook even if:

  • I am not a friend of the victim recipient
  • The victim recipient has explicitly blocked me from sending any messages in Facebook Messenger

And, if you are interested in the details, here goes..

Payment requests are normally sent as messages from Messenger (and can only be sent to a friend), but if somebody has blocked you from sending messages (whether they are a friend or not), you can't technically send payment requests, or any messages for that matter, from the Facebook Messenger UI.

I observed that this wasn't completely true. If you captured a request that sends a payment request (to, let's say, a legit friend who hasn't blocked you), it was possible to replay that same request using a proxy tool such as Burp, changing the recipient ID to the victim's ID (or, for that matter, anyone on Facebook), and it would be sent successfully. Another problem was that the victim would receive an email saying "Attacker has sent you a payment request". So this could also abuse the Facebook platform to spam anyone on Facebook and/or carry out a spear phishing campaign.

The request looked like below:

POST /p2p/payment_requests/_create/ HTTP/1.1
Cookie: c_user=<redacted>; xs=<redacted>;
Connection: close


Facebook rewarded $1500 for this bug.


Running ZAP against an application with proxied test cases inside Docker containers + Reporting in JIRA + Notification in Slack

Building off of my last blog post, I could start two containers (ZAP and the application), run ZAP against the application, and generate a report. However, this was not getting me good results: most of the issues reported were missing-header issues, which weren't that useful to begin with.

So, I had to figure out a way to make this more meaningful and effective if we were to actively deploy such an automated scanning process in our DevOps pipeline.

The way I figured this would work was to get some legitimate traffic into ZAP before beginning the scan, and the way to get any traffic inside ZAP is to use it as a proxy before the actual scanning. Now, this would normally mean browsing the application manually in order to generate the data, but we were trying to automate this entire process, so any form of manual intervention was undesirable, at least up to the point where the reports need to be triaged.

Luckily, we had some custom test cases written in Python and Lua that would generate and send API requests to the application. So I simply had to run those first (sending the test cases to the application, proxied via ZAP) before beginning my ZAP scan. The results were slightly better. It also took care of the authentication part, because the requests sent by the test cases already contained the headers used to authenticate to the application, so I didn't have to change anything within the ZAP API code.
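In case it helps, here is roughly what proxying test traffic through ZAP looks like in Python. This is a sketch, not our actual test cases: the endpoints and headers are hypothetical, and the ZAP proxy address is whatever your container exposes (commonly port 8080).

```python
import requests

# ZAP's local proxy; adjust host/port to wherever your ZAP container listens.
ZAP_PROXY = "http://127.0.0.1:8080"


def zap_session(proxy=ZAP_PROXY):
    """A requests session whose traffic is all routed through ZAP."""
    session = requests.Session()
    session.proxies = {"http": proxy, "https": proxy}
    # ZAP re-signs HTTPS traffic with its own CA, so skip certificate checks.
    session.verify = False
    return session


def run_test_cases(session, base_url, auth_headers):
    """Replay a couple of hypothetical API test cases via the proxy so ZAP
    records authenticated traffic before the active scan starts."""
    session.get(base_url + "/api/health", headers=auth_headers)
    session.post(base_url + "/api/items", json={"name": "test"}, headers=auth_headers)
```

If your test harness already drives the API, the only change needed is pointing its HTTP client at the ZAP proxy like this; ZAP then has real, authenticated requests to work from when the scan begins.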

We got a few more issues than just the header stuff so I was happy the approach worked. There is still a lot of work to be done but since the application I am dealing with is not a traditional web application, I am looking at a slightly different route now and maybe do something more than just running the ZAP scan. More on that later as and when I have something to blog about.

A few more things I added to this process: the reports are now automatically sent to our JIRA instance, and a ticket gets created with the reports as attachments. We also have a Slack webhook built in, so whenever this ticket is created, we get a nice notification in our Slack channel telling everyone that the scan was run and the report was attached to the JIRA ticket for auditing purposes. I also added some exception handling and cleaned up the code a little.
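For reference, the JIRA issue payload and the Slack webhook call can be sketched like this with just the standard library. The function names are mine, and the field values mirror what goes into data.json below:

```python
import json
import urllib.request


def jira_issue_payload(project_id, summary, description, issuetype_id, labels):
    """Build the JSON body for JIRA's REST create-issue endpoint
    (the same fields that live in data.json)."""
    return {
        "fields": {
            "project": {"id": project_id},
            "summary": summary,
            "description": description,
            "issuetype": {"id": issuetype_id},
            "labels": labels,
        }
    }


def notify_slack(webhook_url, text):
    """POST a simple notification to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

The payload goes to JIRA's create-issue REST endpoint with basic auth from credentials.json; once the ticket exists, the report files are attached and notify_slack() announces the ticket in the channel.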

Overall, the complete automated process looks like:

Screen Shot 2015-08-24 at 8.42.38 PM

The scripts can be found here. For the JIRA reporting script, you will also need to create a folder called "data" in $pwd and then add 2 files in that directory: credentials.json and data.json. credentials.json will have your username and password to authenticate to JIRA. It will look something like this:


{
  "username": "<username>",
  "password": "<password>"
}


Data.json will have the ID of your JIRA project, summary, description, issue type and label for the issue that will get created. This information can easily be obtained from your JIRA installation using the REST API browser plugin in JIRA. It will look something like this:


{
  "fields": {
    "project": {
      "id": "<id>"
    },
    "summary": "ZAP Scan Result",
    "description": "This issue contains the scan results when ZAP is run against the app",
    "issuetype": {
      "id": "<>"
    },
    "labels": [
      "<label>"
    ]
  }
}

And, that's it! A fully automated process of running OWASP ZAP in your DevOps build pipeline, with test cases proxied via ZAP inside Docker containers, reporting in JIRA, and notifications sent to Slack.

Feel free to reach out to me if you have any questions or just to share your experiences if you have been trying to do something similar.