
A little note about Slack’s Bug Bounty program

I reported a bug to Slack via HackerOne on December 13, 2014. Slack closed it as N/A. Since it was N/A, I went ahead and blogged about it here on December 18, 2014. I also gave them a heads-up on the HackerOne submission that I would be disclosing it before I actually did. They kept radio silence, so I assumed they didn’t have any issues with it. They never asked me not to disclose, which makes sense: the report was marked N/A, meaning they were not interested in the bug in the first place.

A day earlier, on December 12, 2014, I had reported another bug to Slack via HackerOne, which they closed as Duplicate. The entire submission along with the conversations can be found here. In a nutshell, they wanted to do a coordinated disclosure once the issue was fixed, and I was perfectly fine with that. I completely understand and respect the ethics of a bug bounty program, so I agreed. But after that, there was complete radio silence. I tried following up multiple times, but nobody cared to respond or update me regarding the fix, as is evident from the document. I also left a comment (1 month and then 4 days before the disclosure) saying that if I didn’t hear back with any update, I would go ahead and disclose it 90 days after the initial submission. That seems to be the industry trend these days, so I chose to stick with it. I finally disclosed it here on this blog.

On March 12, 2015, I reported yet another bug to Slack, again via the HackerOne platform. This bug was closed today, March 16, 2015, as N/A without any explanation or reasoning. The entire conversation along with the bug submission can be found here. Consider this document the public disclosure for that bug, since it is marked and closed as N/A and they don’t seem to be interested in it anyway.

As is evident from the latest bug submission document, I have been told that I have “gone against the spirit of a bug bounty program by disclosing things without consent”. They feel that for the second bug described above, “the disclosure is owned by the original reporter” and that “By disclosing this without coordinating” I have stolen “the original reporter’s opportunity to disclose a finding.” They apparently spoke to HackerOne last week and asked to have me removed from their bug bounty program. I was supposedly going to receive some communication regarding this (which, by the way, I never did).

Final Thoughts:

I am honestly very disappointed with how things have been handled. I personally don’t think I did anything against the spirit of a bug bounty program. I am all for coordinated disclosure, but if the program owners fail to coordinate or communicate in a timely manner, there is no such thing as coordinated disclosure. Combined with their responses on all my bug submissions and their decision to ban me from their bug bounty program, this is probably the worst experience I have had so far, and I feel it is a perfect example of how not to operate a bug bounty program.

I would love to get some feedback and thoughts on this. I am open to criticism and improving anything that I could have done better from my side to make this less painful.

Static Token used for authentication in the Slack iOS application

When I register for a Slack team from the Safari browser on my iPhone, the final request for registering a team looks like:

[Screenshot 1: the team registration request]

The response to this request is a redirect to the URL https://slack.com/checkcookie?redir=https%3A%2F%2Fn00bgiri.slack.com%2F%3Ffresh which is then redirected to https://n00bgiri.slack.com/?fresh which is then redirected to https://n00bgiri.slack.com/app. The series of these requests/responses can be seen below:

[Screenshot 2: the chain of redirect requests/responses]

The final response for the request https://n00bgiri.slack.com/app looks like:

[Screenshot 3: the final response for https://n00bgiri.slack.com/app]

This screenshot is taken from the Safari browser on the iPhone.

An important thing to notice here is the option “Open Slack”. That is actually a hyperlink that looks like: <a href="slack://login/<redacted>/xoxo-<redacted>-<redacted>" class="btn btn-primary btn-large">Open Slack</a>

The value xoxo-<redacted>-<redacted> in the above URL is the key to the kingdom. It can essentially be considered a replacement for the username/password combination. It is a static value that does not change or get invalidated even if the account is logged out. This brings us to the first issue: if an attacker gets hold of a victim’s value (via various attack vectors that are out of scope for this discussion), he can gain complete access to the victim’s account perpetually. It does not matter whether the victim is logged in, since the token is static and does not get invalidated on logout. Please note that this value should not be confused with another, similar-looking token of the form xoxs-<redacted>-<redacted>-<redacted>-<redacted>; I will describe what that other value is in a moment. It is also important to know that the xoxo value is only created and sent in the response once, when the team is first registered.

Now, the normal authentication flow in the Slack iOS app is something like below:

  • The first authentication request is sent to the URL https://slack.com/api/auth.signin with the POST parameters email, password and team.
  • In response to the above request, the server assigns and sends a token (xoxs-<redacted>-<redacted>-<redacted>-<redacted>) in the JSON response.
  • Then, a request is sent to the URL https://slack.com/api/users.login with the POST parameters agent, set_active and token. The token sent here is the xoxs token received above. This completes the authentication flow.
  • The xoxs token is then used in all subsequent requests.
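The flow above can be sketched in Python. This is a sketch only: the endpoints and parameter names come from the captured traffic described in this post, while the agent value and all token/credential values are placeholders, and the requests are constructed but never actually sent.

```python
# Sketch of the Slack iOS auth flow described above. Endpoints and parameter
# names are from the observed traffic; "ios" and all token values are
# placeholders. Requests are only built here, not sent.

def build_signin_request(email, password, team):
    # Step 1: POST to /api/auth.signin with email, password and team.
    # Step 2 (server side): the JSON response carries an xoxs-... token.
    return {
        "url": "https://slack.com/api/auth.signin",
        "data": {"email": email, "password": password, "team": team},
    }

def build_login_request(xoxs_token):
    # Step 3: POST to /api/users.login with agent, set_active and the
    # xoxs token from step 2; this token is reused on later requests.
    return {
        "url": "https://slack.com/api/users.login",
        "data": {"agent": "ios", "set_active": "1", "token": xoxs_token},
    }

signin = build_signin_request("user@example.com", "hunter2", "n00bgiri")
login = build_login_request("xoxs-<redacted>")
print(signin["url"])
print(login["url"])
```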

Now, if you log out of the iOS application, this xoxs token gets invalidated (as it should be) but the static xoxo token discussed earlier does not. And that’s the problem.

This brings me to the second attack aka Login CSRF:

Normally, in a Login CSRF attack, an attacker tricks a victim into submitting an authentication request with the attacker’s username and password as parameters. If there is no CSRF token in this request, it becomes possible to trick victims into authenticating to an attacker-controlled account.

So, we now know that the xoxo token is a static token and can be treated as username/password. Therefore, the authentication request would look something like this:

[Screenshot: the slack://login authentication request]

Notice there is nothing that can be considered as a CSRF token in the above request.

I have created a video PoC for this attack as well.

Exploiting the Login CSRF is extremely easy in this case.
What I essentially did was, as an attacker, note down the hyperlink that the server sent when I first registered my team: slack://login/<redacted>/xoxo-<redacted>-<redacted>

I then sent the victim an email with the above link as a hyperlink. When the victim clicks on it, the Slack iOS app opens up and sends the authentication request automatically. I didn’t even have to craft an HTML page that sends a POST request to the /api/users.login endpoint. It was as simple as tricking the victim into clicking a GET URL. The Slack app does all the legwork for the attacker.
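To show how little work the attacker has to do, here is a small Python sketch that builds the lure email body. The team ID and token are placeholders carried over from the redacted link above, and the link text is made up for illustration.

```python
# Sketch of the login-CSRF lure. Because slack://login/... behaves like a
# plain GET URL, a single hyperlink in an email is enough; no POST form or
# JavaScript is needed. Values below are placeholders.
TEAM_ID = "<redacted>"
STATIC_TOKEN = "xoxo-<redacted>-<redacted>"

def build_lure(team_id, token):
    url = "slack://login/{}/{}".format(team_id, token)
    # Clicking this in a mail client opens the Slack iOS app, which then
    # fires the authentication request on its own.
    return '<a href="{}">Click here</a>'.format(url)

html = build_lure(TEAM_ID, STATIC_TOKEN)
print(html)
```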

So, that’s it, folks. To summarize, I described 2 issues above:

  1. Static tokens that don’t get invalidated
  2. Login CSRF

 

Remediation:

I am not an expert in iOS pentesting, but I googled the correct way to handle iOS URL schemes and found a few resources worth looking into. The premise is essentially that you should ask the user for confirmation before automatically opening the slack:// URL in the Slack application; that would mitigate the Login CSRF issue.

For the static token issue, I think it’s a bad idea to associate static tokens with user accounts altogether, so that practice should be revisited and, ideally, eliminated. If not, I don’t see any reason to send that value in cleartext in the response after registering.

 

Slack’s Response:

Thanks for your extensive report. Both of these issues are already known and being fixed.

1) The static tokens are something we are moving away from for all apps, including iOS. We hope to have this completed soon.
2) There is not much security implication of logging a user in this way. Because Slack groups are closed to the public, it would be difficult to convince the user they are in the correct group if you manage to log them in. We have an open bug to add CSRF to the login page, but this is low priority.

 

Cheers!


A bug in Facebook that violated my privacy

The bug that I am going to describe here was actually discovered accidentally while I was checking my privacy settings on Facebook. It is so simple that one doesn’t need to be technical at all to find it; it could literally have been discovered by anybody. I guess I just got lucky, and the fact that I have been a Facebook user since 2007 aided the discovery as well. The bottom line is that you just need to be looking at the right place at the right time to earn bounties from the various bug bounty programs out there.

Anyway, let’s get to the bug now.

Privacy Violation Bug#1

This bug allowed the “parents” information of some Facebook users to be disclosed to the public in spite of privacy settings explicitly set to not allow that information to be viewed by the public or friends. I believe this affected certain Facebook users, not all: specifically, those who have been Facebook users since around 2007.

I’ve had my Facebook account since 2007 and I believe Mark Zuckerberg did too :)

Both Zuck and I were affected by this. I am sure there were others affected as well.

I’ll let you watch this video http://youtu.be/UFd68EG3E98 to show this in action.

It was as simple as clicking the hyperlink for the “BORN” highlight on your timeline. That would take you to a page that looks something like https://www.facebook.com/<user-id>/posts/<post-id>/, where you would see yourself tagged with your parents.

This bug was worth $5000, which I think is a pretty generous amount. I am sure they rewarded it considering how easily this information could be leaked and the privacy violation it meant for a lot of Facebook users.

Cheers!

Hidden Feature in Slack leads to Unauthorized Information Leakage of Files

Before I get started, following is a legend:

  • Victim – V
  • Attacker – A
  • public URL – PU
  • Shared URL – SU

Now, let’s get to the issue.

There is a hidden feature in Slack that is not directly accessible from the UI. It is not documented either. But, it is a pretty simple call to an API endpoint that can be made via a proxy tool such as Burp. This API call is basically used to “unshare a file shared with a Slack user”. An important point to note here is that this vulnerability is regarding sharing a file with a different user and NOT within a channel. The sharing-unsharing aspect of files within a channel is a legitimate feature in the UI. It is also mentioned in the tweet from Slack here. But, this vulnerability is not about that. It is about sharing-unsharing files with *users* directly and not within channels.

So, due to this hidden feature, it is possible to share a file from V to A and then unshare it again (assuming V changes their mind and no longer wants to share the file with A), rendering the file inaccessible to A via the SU.

It was observed that it is possible to get past this control by accessing the now unshared file via a different URL – PU. Please see the video PoC or the Reproduction steps on how A can find PU and store it before V decides to unshare the file with A.

So, now, after the file is unshared with A, A accesses the PU (stored earlier), and the file becomes public to everyone in the team without V’s knowledge. You can think of this as an Insecure Direct Object Reference vulnerability. This is the first problem.

Then, assuming V happens to navigate to that file again, V suddenly notices that the unshared file has been made public via the PU without V’s knowledge or consent. But V does not freak out, because V can still revoke the PU so that it won’t be accessible to A or anybody else anymore. This revoking feature is provided in the UI as well, and it works: the PU indeed gets revoked and becomes inaccessible, and it appears that the file can no longer be accessed or viewed by A or any other team member by any other means.

But the problem does not end just yet. On A’s Slack homepage, in the right-hand pane, A notices that this file is still visible. A clicks on the file, refreshes the UI, and can still view its contents, including whatever changes V has made or makes in the future. This is the second problem.

So, this is clearly a security vulnerability where an attacker can view a file despite it being unshared repeatedly.

I also sent them a video PoC demonstrating this in action. If you are interested, you can view it here. The video is a bit long (~9 mins) and the volume is a little bit low so you would need some kind of headphones to listen to my irritating voice :-)

The report along with the comments on HackerOne is available here.

 

Conclusion

I am disappointed with how Slack dismissed my original report without bothering to read it properly, make sense of it, or ask me questions if they didn’t understand something. I totally understand and respect their decision that this falls outside the scope of their Bug Bounty program, but I wasn’t asking to be rewarded in the first place; I was simply reporting a security vulnerability. The scope, and whether to reward a certain bug, is completely up to them, and I understand that as a researcher I need to respect that. Incidentally, they have not mentioned anything about “Undocumented APIs” in their scope, so how would a researcher know what is in scope and what is out? All I can see in their guidelines is “Our security team will assess each bug to determine if it qualifies.” But they failed to assess the bug properly in the first place.

Anyways, some takeaways for both programs and researchers from this are:

  • Read the bug report once. If it’s confusing or doesn’t make sense, read it again. Ask the researcher if it’s still not clear. Make an effort to watch/read the PoC provided. Don’t just assume things.
  • Document features/functions/API calls if you allow them. Not documenting something yet silently allowing it can be an issue, as is evident from this case. They are relying on the fact that this feature is not being used by Slack users. This is naive IMHO.
  • Revise your scope to make it fine grained and much clearer. Scoping is a constant learning/revision process.
  • Don’t ignore the underlying problem which, in this case, I *believe* is the fact that the “permalink_public” URL is generated without any need for it. For instance, why would they want to generate this URL before it is ever needed? And even if they do generate it early, why send it to the client? It is like opening a can of worms. I don’t think it’s necessary, but they failed to even acknowledge that or explain why they are doing it.
  • Researchers need to submit quality reports and should not be discouraged by dismissive responses. We need to change the general assumption most Bug Bounty Programs make these days: that all researchers want is a bounty for a crappy report.

That’s it folks.

 

Cheers!

Analysis of the BrowserStack breach – A classic example of “Pivoting in the Clouds”

BrowserStack was recently breached and it was all over the news as is the case with almost all breaches these days.

In this blog post, I will briefly describe what happened to make everybody aware that things can go really wrong in the Cloud if proper measures are not taken.

 

The TL;DR version:

BrowserStack’s infrastructure is hosted on the Amazon Web Services (AWS).

They had one particular machine (virtual instance in this case) on the AWS that was not patched against the ShellShock vulnerability.

The attacker leveraged that to pivot through the various moving parts within their AWS setup and steal some information from their production database.

The attacker then used the stolen data and the credentials of their AWS SES service (see below) to send emails to some BrowserStack users stating that BrowserStack was shutting down. Ouch!!

 

The longer version:

Attackers took advantage of the unpatched instance -> logged into that instance -> created an IAM user (see below) and generated a key pair using the secret keys stored on that instance -> spawned a new instance using the newly created credentials -> mounted one of the production backup disks to this instance -> retrieved a config file with the database password from this backup -> partially copied database tables and stole some data. While the database tables were being copied, an alert was triggered and the BrowserStack folks acted immediately, blocking the IP.

 

But by this time, the attacker had already stolen some data and the SES credentials (see below), which helped them send a fake email to some BrowserStack users.

 

IAM

This is the AWS Identity and Access Management solution, where you can create multiple users in an organization and assign them appropriate access rights following the least-privilege model. In other words, give an individual only the amount of access that the individual’s role demands. Nothing more.
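As a purely hypothetical illustration of least privilege (none of these actions or resource names are from the BrowserStack incident), a policy for a user who only needs to read one S3 bucket might look like this, built here as a Python dict and serialized to the JSON form AWS expects:

```python
import json

# Hypothetical least-privilege IAM policy: this user can list and read one
# example S3 bucket and nothing else. No IAM, EC2 or SES actions are granted,
# so a compromise of these credentials could not be used to pivot.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",
                "arn:aws:s3:::example-reports/*",
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```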

 

SES

This is the AWS Simple Email Service which is a service for sending out emails.

 

Below is purely my analysis on this incident. Please feel free to comment/ask questions/criticize:

 

Some of the poor practices done by BrowserStack on the AWS Cloud:

1. AWS secret keys were stored on the unpatched instance. Secret keys should be stored securely following AWS best practices.

2. I don’t think they had an inventory of all their running AWS instances. Maybe they did, since it could be obtained from their AWS Console, but I cannot be sure. Assuming they knew about this running instance, they should have patched it against ShellShock. This was the root cause, and patching could have prevented the breach altogether even if no other protections existed.

3. They did have some alerts, but they should have built many more, for events such as creating a new IAM user, generating key pairs, etc.
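As a sketch of what such alerting could look like: the event names below are standard CloudTrail eventName values for the pivoting primitives used in this breach, but the record format and data are simplified and hypothetical.

```python
# Sketch: scan CloudTrail-style event records and flag the primitives an
# attacker needs to pivot (IAM user creation, access-key/key-pair creation,
# instance launches, volume attachment). Records here are simplified.
SUSPICIOUS = {
    "CreateUser",       # new IAM user
    "CreateAccessKey",  # new API credentials
    "CreateKeyPair",    # new SSH key pair
    "RunInstances",     # new EC2 instance spawned
    "AttachVolume",     # backup disk mounted elsewhere
}

def flag_events(events):
    return [e for e in events if e.get("eventName") in SUSPICIOUS]

trail = [
    {"eventName": "DescribeInstances", "user": "ops"},
    {"eventName": "CreateUser", "user": "compromised-host"},
    {"eventName": "RunInstances", "user": "new-iam-user"},
]
alerts = flag_events(trail)
print([e["eventName"] for e in alerts])
```

In this incident, alerts on CreateUser or RunInstances would have fired well before the database-copy alert that actually caught the attacker.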

4. Allowing IAM users to be created with elevated privileges. This is an educated guess: if the newly created IAM user was allowed to start a new instance, mount a backup to it, etc., then this IAM user had elevated privileges. Was that really necessary?

5. 2-factor authentication. AWS provides the capability to implement 2-factor authentication, which I don’t think was being leveraged here.

6. Storage of sensitive information. The database password was stored in a readable config file. This could have been locked down better. Was the backup disk the only place where the database password was stored?

7. There is no mention of how the attacker obtained the SES credentials. I am guessing those were stored on the backup disk as well.

 

Having talked about the poor practices, there were some good things that BrowserStack did as well:

1. Passwords hashed using bcrypt. This is a biggie!! Never store passwords in cleartext.
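To illustrate the idea: bcrypt itself requires a third-party library, so this sketch substitutes PBKDF2 from the Python standard library. The principle is the same, namely a random per-user salt plus a deliberately slow hash, so a leaked database cannot be reversed cheaply.

```python
import hashlib
import hmac
import os

# Sketch of salted slow-hash password storage. BrowserStack used bcrypt;
# PBKDF2 stands in here because it ships with the Python stdlib. The round
# count is an illustrative tuning knob, not a recommendation.
def hash_password(password, salt=None, rounds=100_000):
    salt = salt or os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify(password, salt, digest, rounds=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("wrong guess", salt, digest))                   # False
```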

2. Alerts triggered at some point. Thanks to the alert that fired while the database tables were being copied, the magnitude of the impact was reduced drastically, so that’s good.

3. They mention auditing via AWS CloudTrail, which helped them track the attacker’s movements.

4. Credit card data was processed through a 3rd party, so credit card details were not stored on their instances. Again, this is a biggie when it comes to dealing with credit card data in the cloud; leveraging a 3rd party often helps, as is evident from this case.

5. Locking the database while it is being copied. A good failover mechanism, which helped them to some extent in this case.

6. Other instances were patched against ShellShock so the attack surface was reduced.

7. Instances protected by OS firewalls in addition to network firewalls. Defense in depth.

8. They mention implementing “security groups”, which is an AWS best practice. This helps segregate and isolate different moving parts.

9. Most importantly, they were pretty quick in responding to this breach. So, that was a big plus!

10. They made some more improvements, as mentioned in the link below: encrypting backups, auditing logs more, revoking all existing AWS keys, improving monitoring and engaging a 3rd party to conduct a security audit. All these things will definitely improve their security posture.

 

Reference URL:

http://www.nextbigwhat.com/browserstack-hack-attack-explanation-297/

Sending “promoted” tweets as a notification to followers without paying anything

Edit: After I wrote this post, I found out this link – http://www.cnet.com/news/twitter-bug-makes-users-fear-invasion-of-push-notification-ads/

In that link, you will notice that Dick Costolo (CEO of Twitter) claims that “We don’t send ads via push notification. Will look into it.” This is dated way back in Sep’13.

So, after a year, they are still doing that, aren’t they?

I also wanted to clarify that even though this is only aimed at followers (I haven’t tested against people who are not followers), it is still an ad/promotion being actively pushed out as a notification. That doesn’t happen for normal tweets: followers don’t get actively notified when somebody tweets; it just shows up on the follower’s timeline. The same is true for regular promoted tweets. But that is not the case here: the promoted tweet I mention below doesn’t even appear as a regular tweet on the timeline.

—————

I recently discovered an interesting quirk on Twitter. Sadly, it is a Won’t Fix. I have requested public disclosure, so it will probably go live soon. The HackerOne report number is #31073. Below is what was reported in the meantime:

It was observed that I could promote ads on Twitter without paying anything for them.

Steps to Reproduce:

  • Sign up for a twitter account and enable Ads & Analytics on your profile. For the sake of PoC, this is abtest66.
  • Create a campaign. The one that I did was “Website clicks or conversions” “Targeting interests and users”. I chose all locations and for targeting, I chose the following:
    • Added two of my own accounts (abtest67, anshuman_bh)
    • Targeted all my followers
    • Targeted users like my followers
  • Don’t select any promoted tweets as of now. Go ahead and launch the campaign. You will be taken to the payments page. Ignore that and navigate to the Campaign Dashboard. Notice that the Campaign shows as running.
  • Now, edit this campaign and under the Creative section, add a few promoted tweets. I added 6. Notice that in spite of not having any payment method set up, the user is allowed to add promoted tweets. I think this is the main problem here.

The result was that in my account anshuman_bh (one of my targets of the above campaign), I got a notification of this promoted tweet. See Screenshots 1 (notification of the promoted tweet) and 2 (the actual promoted tweet when clicked on the notification).

1

Screenshot 1

2

Screenshot 2

Also, under abtest66’s Analytics Dashboard -> Promoted section, I did see some data. See Screenshots 3 and 4. I believe this shouldn’t have happened either.

3

Screenshot 3

4

Screenshot 4

Hope this helps!

Twitter folks were not able to reproduce following the steps above so I had to send a better Steps to Reproduce along with a video so here it goes:

I have tried reproducing it again and it works. Here are the steps.

  • Create a test account – @A1
  • Follow @A1 from another account @A2
  • Now, enable Ads and Analytics for @A1
  • For @A1, create a new campaign -> Promoted Tweets
    The URL will look like https://ads.twitter.com/accounts/<redacted>/campaigns/new_promoted_tweets?source=objective_picker
  • Enter the Campaign Name, choose Start immediately, target interests and followers.
  • Add @A2 as a target. Also check the box “Also target your followers”.
  • Choose Show ads in all available locations
  • Add a promoted tweet.
  • Set daily max 4.00 and max bid per engagement as 2.00
  • Click on save campaign -> Launch Campaign
  • Notice that @A1 is redirected to a payments page. Ignore the payments page
  • Navigate to https://ads.twitter.com/accounts/. Notice the campaign shows as running but technically it’s not.
  • Now, go back to @A1‘s twitter homepage and tweet something.
  • Notice @A2 gets a notification (on his mobile phone, for example) saying “@A1 just tweeted for the first time. Welcome @A1 to Twitter!”
    When clicked, the notification takes @A2 to the first tweet from @A1. This is as expected. This tweet is also visible on @A1’s timeline since it is an actual tweet.
  • Now, go back to the Campaign created by @A1 and click Edit.
  • Under tweets, add one more promoted tweet lets say test1
  • Notice @A2 gets the same notification again saying “@A1 just tweeted for the first time. Welcome @A1 to Twitter!” When clicked, the notification now takes @A2 to the promoted tweet test1 from @A1. This is not as expected. This tweet does not appear on @A1’s timeline either; it is a promoted tweet that shouldn’t have been promoted. Basically, @A1 just promoted a tweet to one of his followers, @A2, without running a campaign or paying anything.

    Btw, this activity is captured in the Dashboard so you get all those numbers as well.

Video link – https://www.dropbox.com/s/ftcle365fx6cbs8/Video%20Oct%2013%2C%209%2004%2003%20PM.mov?dl=0

 

This was finally triaged and I got an initial reply stating:

“Thank you for your report. We believe it may be a valid security issue and will investigate it further. It could take some time to find and update the root cause for an issue, so we thank you for your patience.

Thank you for helping keep Twitter secure!”

 

But, a few days later, they replied back saying:

“Hello again. After consulting with the security team and the relevant engineering team, we decided since it only affects notifications of first tweet, the impact is so low that we aren’t going to fix it. Thanks again for looking at Twitter security.”

 

And, then another clarification saying:

“Hi, please let me clarify. I should say that it only happens when it shows up via a notification (such as first tweet notification). You should only be able to get notifications sent to people who follow you. So in this case you’re “promoting” tweets to people who follow you, in which case you could just have tweeted. Anyway, please let me know if I’m missing something.”

 

To this, my final replies were:

“The first tweet notification should technically be sent to my followers only when I tweet for the first time. It does not get sent anytime after that. If I can leverage this behavior to send promoted tweets to all my followers as and when I wish, then I’d say I am abusing the platform and doing something that I am not technically supposed to do.

Not to mention, I get all the numbers in the Analytics Dashboard as well under Promoted tweets like who clicked, who retweeted, etc. I am getting all the impressions without paying anything. Isn’t this foiling the whole purpose for promoted tweets?

Yes, you could have just tweeted but when you tweet, your followers are not actively notified. It just appears in their timeline. In this case, the followers are being actively notified about it in the form of a notification. It is more like a promotion than just regular tweeting.”

 

“In the end, I’d say this really boils down to the business decision and risk acceptance. If you guys are okay with this behavior, I don’t have any problems. In that case, do you mind changing the status to “Won’t Fix”? Thanks!”

 

Cheers!

Anshuman

How to fix “Received fatal alert: handshake_failure” for Burp

Phew! It took me around an hour to figure this mess out, but I did and I am so glad. I hope this post is helpful for anybody facing similar issues when trying to proxy requests via Burp who ends up getting the dreaded “Received fatal alert: handshake_failure” error every time.

I tried searching everywhere, but none of the forums were helpful. So, I had to combine tidbits from different forums with a little bit of my own thinking to get this sorted out.

The Burp forum here – http://forum.portswigger.net/thread/717/burp-ssh-tunnelling – along with the error messages in the Alerts tab in Burp was helpful, especially the comment from a Burp developer in that thread. But they don’t mention any details and leave it up to the users to figure it out. So, I will hopefully help those who are still stuck with this error.

So, assuming you are trying to proxy requests to a website, and end up getting the “Received fatal alert: handshake_failure” error message, pay close attention to the error logs under the Alerts tab in Burp. You will notice a message saying “You have limited key lengths available. To use stronger keys, please download and install the JCE unlimited strength jurisdiction policy files, from Oracle.”

If you ignore that, you are going nowhere. So, let’s get the stronger keys mentioned in the error message. But before you do, you need to figure out which JRE version is installed on your machine. I have a MacBook, and the following command helped me determine the JRE version in use. It can be found here – http://docs.oracle.com/javase/7/docs/webnotes/install/mac/mac-jre.html – under the “Determining the Installed Version of the JRE” section. The command is:

/Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin/java -version

I was running Java version 1.7.0_60, which corresponds to JRE 7. The next step is to get the JCE unlimited strength jurisdiction policy files corresponding to that JRE version, so I searched for “Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 7 Download”. Notice the 7, because I was running JRE 7; depending on the JRE version you are running, you will have to search for the appropriate JCE policy files. Download the zip file and unzip it. You will see a folder with a bunch of files; the 2 files we need are “US_export_policy.jar” and “local_policy.jar”.

Once we have those files, navigate to /Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/security. You will notice that these 2 files are already present in that directory, but they are the old ones that we need to replace with the ones we just downloaded. So, to be safe, create backups of the old “US_export_policy.jar” and “local_policy.jar” files, then replace them with the new ones.
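The backup-and-replace step can be sketched as follows. This is a sketch only: the security directory matches the OS X JRE 7 path above, the download location is an assumption about where you unzipped the JCE files, and the final call is left commented out since it touches system files.

```python
import os
import shutil

# Sketch of the backup-and-replace step. SECURITY_DIR matches the OS X JRE 7
# layout described above; DOWNLOAD_DIR is an assumption about where the JCE
# policy zip was unpacked.
SECURITY_DIR = ("/Library/Internet Plug-Ins/JavaAppletPlugin.plugin"
                "/Contents/Home/lib/security")
DOWNLOAD_DIR = os.path.expanduser("~/Downloads/UnlimitedJCEPolicy")  # assumption

def replace_policy_jars(security_dir, download_dir):
    for name in ("US_export_policy.jar", "local_policy.jar"):
        old = os.path.join(security_dir, name)
        if os.path.exists(old):
            shutil.copy2(old, old + ".bak")  # keep a backup of the old jar
        shutil.copy2(os.path.join(download_dir, name), old)

# replace_policy_jars(SECURITY_DIR, DOWNLOAD_DIR)  # uncomment to run (may need sudo)
```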

Voila! You should be done at this point. Fire up Burp again and navigate to the website that was causing problems; you should be able to access it without any issues. You just replaced the older jar files with newer ones containing much stronger keys, which lets the SSL negotiation succeed.

PS – Finding the right folder for the jar files was the hardest part. There were tons of folders, at least in my case, where these jar files were located, but replacing those didn’t help. I had to find the right path, and eventually the docs.oracle.com link pasted above came to the rescue. There were a lot of threads about changing Java versions, running different Burp versions, etc., but none of them were helpful.