Rapid7 Blog

Jon Hart  



Cisco Smart Install Exposure


Cisco Smart Install (SMI) provides configuration and image management capabilities for Cisco switches. Cisco’s SMI documentation goes into more detail than we’ll touch on in this post, but the short version is that SMI leverages a combination of DHCP, TFTP, and a proprietary TCP protocol to allow organizations to deploy and manage Cisco switches. Using SMI yields a number of benefits, chief among them that you can place an unconfigured Cisco switch into an SMI-enabled (and previously configured) network and it will get the correct image and configuration without needing much more than wiring up the device and turning it on. Simple “plug and play” for adding new Cisco switches.

But with great power and heightened privileges comes great responsibility, and that remains true with SMI. Since its debut in 2010, SMI has had a handful of vulnerabilities published, including one that led to remote code execution (CVE-2011-3271) and several denial of service issues (CVE-2012-0385, CVE-2013-1146, CVE-2016-1349, CVE-2016-6385). Things got more interesting for SMI within the last year when Tenable Network Security, Daniel Turner of Trustwave SpiderLabs, and Alexander Evstigneev and Dmitry Kuznetsov of Digital Security disclosed a number of security issues in SMI during their presentation at the 2016 Zeronights security conference. Five issues were reported, the most severe of which easily rated as CVSS 10.0, if risk scoring is your thing. Put more bluntly: if you leave SMI exposed and unpatched and have not followed Cisco’s recommendations for securing SMI, effectively everything about that switch is at risk of compromise.

Things get gnarly quickly when you consider what a successful attack against a Cisco switch exposing SMI would get an attacker. Even an otherwise well-protected network could be compromised if a malicious actor could arbitrarily reroute a target’s traffic at will.
In direct response to last year’s research, Cisco issued a security response hoping to put the issue of SMI security to bed once and for all. They effectively claim that these issues are not vulnerabilities but rather “misuse of the protocol,” even while encouraging customers to disable SMI if it is not in use. True, this largely boils down to a lack of authentication both in some of the underlying protocols (DHCP and TFTP) and in SMI itself, which is a key part of achieving the aforementioned installation/deployment simplicity. However, every SMI-related security advisory published by Cisco has included recommendations to disable SMI unless needed. Most recently, they’ve provided various coverage for SMI abuse across their product lines, updated the relevant documentation that details how to properly secure SMI, and released a scanning tool for customers to use to determine if their networks are affected by these SMI issues. To further help Cisco customers secure their switching infrastructure, they’ve also made available several SMI-related hardening fixes:

- CSCvd36820: automatically disables SMI if not used during bootup
- CSCvd36799: if SMI is enabled, it must show in the running config
- CSCvd36810: periodically alerts to the console if SMI is enabled

Ultimately, whether we call this protocol a vulnerability or a feature, exposed SMI endpoints present a very ripe target to attackers. And this is where things get even more interesting. Given that until recently there was no Cisco-provided documentation on how to secure SMI and no known tools for auditing SMI, it was entirely possible that scores of Cisco switches were exposing SMI in networks where they shouldn’t, without the knowledge of the network administrators tasked with managing them. Sure enough, a preliminary scan of the public IPv4 Internet by these same original SMI researchers showed 251,801 systems exposing SMI and seemingly vulnerable to some or all of the issues they disclosed.
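In practice, Cisco’s guidance amounts to two configuration changes. The IOS sketch below illustrates both; the ACL name and addresses are placeholders, and the exact syntax should be confirmed against Cisco’s SMI hardening documentation for your platform:

```
! If Smart Install is not in use, disable it outright
no vstack

! If SMI is needed, permit only the director (10.10.10.1) to reach
! the client (10.10.10.200) on 4786/TCP, and block everyone else
ip access-list extended SMI_HARDENING_LIST
 permit tcp host 10.10.10.1 host 10.10.10.200 eq 4786
 deny tcp any any eq 4786
 permit ip any any
```

The ACL approach is exactly the ease-of-use trade-off discussed below: every director/client pair must be enumerated by hand.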
The Smart Install Exploit Tool (SIET) was released to help identify and interact with exposed SMI endpoints, and it includes exploit code for several of the issues the researchers disclosed. As part of Cisco’s response to this research, they indicated that the SIET tool was suspected in active attacks against organizations’ networks. A quick look through Rapid7 Labs’ Project Heisenberg for 2017 shows only minimal background network noise on the SMI port and no obvious large-scale scanning efforts, though this does not rule out the possibility of targeted attacks.

As with many situations like this in security, it’s a case of “if you build it, they will come” gone wrong; almost “if you design it, they’ll misuse it.” There are any number of ways a human or a machine could mistakenly deploy or forget to secure SMI. Until recently, SMI had little mention in documentation, and, as evidenced by the three SMI-related hardening fixes, it was difficult for customers to identify that they were even using SMI in the first place. Even in the presence of a timely patching program, any organization exposing SMI to hostile networks while failing to do its security due diligence is an easy target for deep compromise. To top it all off, following the recommended means of securing SMI when it is actually being used for deployment requires adding specific ACLs to control who can speak to SMI-enabled devices, severely crippling the ease of use that SMI was supposed to provide in the first place.

With all of this in mind, we decided to reassess the public Internet for exposure of SMI with several questions in mind:

- Have things changed since the publication of the SMI research in 2016 and the resulting official vendor response in 2017?
- Are there additional clues that could explain why SMI is being exposed insecurely?
- Can Rapid7 assist?
The methodology we used to assess the public Internet for SMI exposure is almost identical to what the Zeronights researchers used in 2016, except that after the first-pass zmap scan to locate supposedly open SMI 4786/TCP endpoints, we utilized the logic demonstrated in Cisco’s smi_check to determine whether each endpoint actually spoke the SMI protocol. Our study found ~3.2 million open 4786/TCP endpoints, the vast majority of which are oddly deployed SSH, HTTP and other services, as well as security devices. It is worth noting that while the testing methodologies between these two scans appear nearly identical, both rely on possibly limited public knowledge about the proprietary SMI protocol; future research may yield additional methods of identifying SMI.

Using the same logic as smi_check, we identified 219,123 endpoints speaking SMI in July and 215,317 endpoints in a subsequent scan in August 2017. Answering our first question: there was a ~13% drop in the number of exposed SMI endpoints between the Zeronights researchers’ original study and Rapid7 Labs’ Sonar scan in July of 2017, but it is hard to say what caused it. For one, the composition of the Internet has changed in the last year. Two, Sonar maintains a blacklist of addresses whose owners have requested that we cease scanning their IPs, and it is unknown if the Zeronights researchers had a similar list, which means there were likely a few fundamental differences in the target space between these two disparate scans. So, despite a history of security problems and Cisco advising administrators to properly secure SMI, a year later things haven’t really changed. Why?
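The two-pass approach (a zmap SYN scan, then an smi_check-style protocol probe) can be sketched as follows. The probe and expected-response bytes are placeholders: the real values come from Cisco’s smi_check tool and are not reproduced here.

```python
import socket

# Placeholder bytes; substitute the actual SMI probe and expected
# response prefix from Cisco's smi_check tool.
SMI_PROBE = b"\x00\x00\x00\x00"
EXPECTED_PREFIX = b"\x00\x00\x00\x04"

def looks_like_smi(response: bytes) -> bool:
    """Classify a response as SMI if it begins with the expected
    (placeholder) prefix, rather than an arbitrary banner."""
    return response.startswith(EXPECTED_PREFIX)

def check_endpoint(host: str, port: int = 4786, timeout: float = 5.0) -> bool:
    """Second-pass check for an endpoint the SYN scan flagged as open:
    connect, send the SMI probe, and classify whatever comes back.
    Any connection error or timeout means 'not SMI'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            s.sendall(SMI_PROBE)
            return looks_like_smi(s.recv(1024))
    except OSError:
        return False
```

This is the filtering step that separates genuine SMI speakers from the SSH, HTTP, and security-device noise mentioned above.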
Examining SMI-exposing IPs by country, it is no surprise that countries with a large number of IPv4 addresses and significant network infrastructure are at the top of those exposed. Repeating this visualization but examining the organizations that own the IPs exposing SMI, a possible pattern appears: ISPs.

Unfortunately, this is where things get a little complicated. The data above seems to imply that the bulk of the organizations exposing SMI are ISPs or similar, but that is also an artifact of how the attribution process happens here. In cases where a given organization is its own ASN and it exposes SMI, our reporting attributes any of the IPs that ASN is responsible for to the organization in question. However, in cases where a given organization is not its own ASN, or where it uses IP space it doesn’t control (for example, it just gets a cable/DSL/etc. router from its ISP), the name of that organization will not be reflected in our data.

Proprietary protocols are interesting from a security perspective, and SMI is no exception. Being specific to Cisco switches, you’ll only find it in Cisco shops that didn’t properly secure the switch prior to deployment, so there are limited opportunities for researchers or attackers to explore it. While proprietary protocols are not necessarily closed, SMI does appear that way, in that there is almost no public documentation on the protocol particulars beyond what the Zeronights researchers published in 2016. Despite the lack of documentation at the time, these researchers employed a simple method for understanding how the protocol works, and it turned out to be highly effective: exercising SMI functionality while observing the live protocol communication with a network sniffer.
There are several areas for future research which may provide value:

- When properly secured, including applying all the relevant IOS updates and following Cisco’s recommendations with regard to securing SMI, what risks remain in networks that utilize SMI, if any?
- When improperly secured, are there additional risks to be explored? For example, what about the related SMI director service that exposes 4787/TCP?
- Are there safer or quieter ways to carry out the attacks described by the Zeronights researchers, such that more accurate vulnerability coverage could exist?
- Is there a need for vulnerability coverage similar to SIET’s in Metasploit?
- What is the current state of the art with regard to post-compromise behavior on network switches like this? What methods do (or could) attackers employ to establish advantageous footholds in the networks and devices serviced by a compromised switch?

To ensure that Rapid7 customers are able to identify assets exposing SMI in their environments, we have added SMI fingerprinting and vulnerability coverage to both Metasploit (as of September 1, in auxiliary/scanner/misc/cisco_smart_install) and InsightVM (as of the August 17, 2017 release, via cisco-sr-20170214-smi). Interested in this research? Looking to collaborate? Questions? Comments? Leave them below or reach out via research@rapid7.com!

Measuring SharknAT&To Exposures


On August 31, 2017, NoMotion’s “SharknAT&To” research started making the rounds on Twitter. After reading the findings, and noting that some of the characteristics seemed similar to trends we’ve seen in the past, we were eager to gauge the exposure of these vulnerabilities on the public internet. Vulnerabilities such as default passwords or command injection, which are usually trivial to exploit, combined with a sizable target pool of well-connected, generally unmonitored internet-connected devices such as DSL/cable routers, can have a significant impact on the general health of the internet, particularly in the age of DDoS and malware for hire. For example, starting around this time last year and continuing until today, the internet has been dealing with the Mirai malware, which exploits default passwords as part of its effort to replicate itself. The SharknAT&To vulnerabilities seemed so similar, we had to get a better idea of what we might be facing.

What we found surprised us: the issues are actually not as universal as initially surmised. Indeed, we found that clusters of each of the vulnerabilities sit almost entirely in their own distinct regional pockets (namely, Texas, California, and Connecticut). We also observed that these issues may not be limited to just one ISP deploying a particular model of Internet router, but perhaps a variety of different devices, complicated by a history of companies, products, and services being bought, sold, OEM’d and customized. For more information about these SharknAT&To vulnerabilities and Rapid7’s efforts to understand the exposure they represent, please read on.

Five Vulnerabilities Disclosed

NoMotion identified five vulnerabilities that, at the time, seemed limited to Arris modems being deployed as part of AT&T U-Verse installations:

1. SSH exposed to the Internet; superuser account with hardcoded username/password (22/TCP)
2. Default credentials for “caserver” in the https server on the NVG599 (49955/TCP)
3. Command injection in the “caserver” https server on the NVG599 (49955/TCP)
4. Information disclosure/hardcoded credentials (61001/TCP)
5. Firewall bypass with no authentication (49152/TCP)

Successful exploitation of even just one of these vulnerabilities would result in a near complete compromise of the device in question and would pose a grave risk to the computers, mobile devices, and IoT gadgets on the other side. If exploited in combination, the victim’s device would be practically doomed to persistent, near-undetectable compromise.

Scanning to Gauge Risk

NoMotion did an excellent job of using existing Censys and Shodan sources to gauge exposure; however, they also pointed out that some of the at-risk services on these devices are not regularly audited by scanning projects like these. In an effort to assist, Rapid7 Labs fired off several Sonar studies shortly after learning of the findings in order to get current information for all affected services, within reason. As such, we queued fresh analysis of:

- SSH on port 22/TCP, covering vulnerability 1
- HTTPS on 49955/TCP and 61001/TCP, covering vulnerabilities 2-4
- A custom protocol study on port 49152/TCP for vulnerability 5

Findings

Vulnerability 1: SSH Exposure

Not having a known vulnerable Arris device at our disposal, we had to take a bit of an educated guess as to how to identify affected devices. In NoMotion’s blog post, they cite Censys as showing 14,894 vulnerable endpoints. A search through Sonar’s SSH data from early August showed just over 7,000 hosts exposing SSH on 22/TCP with “ARRIS” in the SSH banner, suggesting that these may be made by Arris, one of the vendors involved in this issue.
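The banner-matching step behind this estimate can be sketched as below. The record layout and example banners are illustrative, as is the indicator list; in practice the needles would be “ARRIS” or, per the later update, a specific dropbear version string.

```python
def filter_banners(records, needles=("ARRIS",)):
    """Given (ip, ssh_banner) pairs, keep those whose banner contains
    any of the indicator strings. Illustrative stand-in for the
    Sonar banner search described in the text."""
    return [(ip, banner) for ip, banner in records
            if any(needle in banner for needle in needles)]

# Hypothetical sample records (TEST-NET addresses, made-up banners)
records = [
    ("198.51.100.7", "SSH-2.0-dropbear_2014.63 ARRIS"),
    ("203.0.113.9", "SSH-2.0-OpenSSH_7.4"),
]
matches = filter_banners(records)
```

Banner matching like this is inherently an underestimate: nothing obliges an affected device to name its vendor in the SSH banner, a caveat discussed below.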
There are several caveats that could explain the difference in numbers, including the fact that Arris makes several other devices unaffected by these issues, and that there is no guarantee that affected and/or vulnerable devices will necessarily mention Arris in their SSH protocol. A follow-up study today showed similar results, with just over 8,000. We assume the difference between Rapid7’s numbers and NoMotion’s is caused by the fact that Sonar maintains a blacklist of IP addresses that we’ve been asked not to study, as well as by normal changes to the landscape of the public Internet.

A preliminary check of our Project Heisenberg honeypots showed no noticeable change in the patterns we observe related to the volume and variety of SSH brute force and default account attempts prior to this research. However, the day after NoMotion’s research was published, our honeypots started to see exploitation attempts using the default credentials published by NoMotion.

September 13, 2017 UPDATE on SSH exposure findings

The researchers from NoMotion reached out to Rapid7 Labs after the initial publication of this blog and shared how they estimated the number of exposed, vulnerable SSH endpoints: they searched for SSH endpoints owned by AT&T U-Verse that were running a particular version of dropbear. Repeating some of our original research with this new information, we found nearly 22,000 seemingly vulnerable endpoints in that same study from early August, concentrated in Texas and California. Armed with this new knowledge, we re-analyzed SSH studies from late August and early September and discovered that seemingly none of the endpoints that appeared vulnerable in early August were still advertising the vulnerable banner, indicating that something changed with regard to SSH on AT&T U-Verse modems that caused this version to disappear entirely.
Sure enough, a higher-level search for just AT&T U-Verse endpoints shows that there were nearly 40,000 AT&T U-Verse SSH endpoints in early August and just over 10,000 in late August and early September, with the previously seen California and Texas concentrations drying up. What changed here is unknown.

Vulnerabilities 2 and 3: Port 49955/TCP Service Exposure

US law understandably prohibits research that performs any exploitation or intrusive activity, which rules out specifically testing the validity of the default credentials or attempting to exploit the command injection vulnerability. Combined with no affected hardware being readily available to us at the time of this writing, we had to get creative to estimate the number of exposed and potentially affected Arris devices. As mentioned in NoMotion’s blog, they observed several situations in which the HTTP(S) server listening on 49955/TCP would return various messages implying a lack of authorization, depending on how the request was made. Our first slice through the Sonar data from August 31, 2017 showed ~3.4 million open 49955/TCP endpoints, though only approximately 284,000 of those appear to be HTTPS. Further summarization showed that more than 99% of these responses were identical HTTP 403 Forbidden messages, giving us high confidence that these were all the same types of devices and all likely affected. In some HTTP research situations we are able to examine the HTTP headers in the response for clues that might indicate a particular vendor, product or version to help narrow our search; however, the HTTP server in question here returns no headers at all.
Furthermore, by examining the organization and locality information associated with the IPs in question, we start to see a pattern: this is isolated almost entirely to AT&T-related infrastructure in the Southern United States, with Texas cities dominating the top results. The ~53k likely affected devices for which we failed to identify a city and state all report the same latitude and longitude, smack in the middle of Cheney Reservoir in Kansas. This is an anomaly introduced by MaxMind, our source of Geo-IP information, and is the default location used when an IP cannot be located any more precisely than being in the United States. As further proof that we were on the right track, NoMotion has two locations, both in Texas. It’s likely that these Arris devices were first encountered in day-to-day work and life by NoMotion employees, and not scrounged off of eBay for research purposes. We’ve certainly happened upon interesting security research this way at Rapid7; it’s our nature as security researchers to poke at the devices around us.

Because this HTTP service is wrapped in SSL, Sonar also records information about the SSL session. A quick look at the same devices identified above shows another clear pattern: most have the same default, self-signed SSL certificate. This presents another vulnerability. Because the vast majority of these devices have the same certificate, they will also have the same private key. This means that anyone with access to the private key from one of these vulnerable devices could decrypt or manipulate traffic for other affected devices, should a sufficiently skilled attacker position themselves in an advantageous manner, network-wise. Because some of the SharknAT&To vulnerabilities disclosed by NoMotion allow filesystem access, we assume that obtaining the private key, even if password protected, is fairly simple.
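Spotting a shared default certificate across a scan like this is a matter of grouping endpoints by certificate fingerprint and looking for a dominant entry. A minimal sketch (the input layout is an assumption; real Sonar data would supply the DER-encoded certificates):

```python
import hashlib
from collections import Counter

def fingerprint(der_cert: bytes) -> str:
    """SHA-1 fingerprint of a DER-encoded certificate, a common key
    for deduplicating certs across hosts."""
    return hashlib.sha1(der_cert).hexdigest()

def top_certs(certs_by_ip, n=5):
    """certs_by_ip: iterable of (ip, der_cert) pairs. Returns the n most
    common fingerprints with counts; one fingerprint dwarfing the rest
    suggests a vendor-default certificate (and thus a shared private key)."""
    return Counter(fingerprint(der) for _ip, der in certs_by_ip).most_common(n)
```

The same technique surfaces the secondary pattern described below, since certificates sharing a subject scheme still differ by fingerprint.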
To add insult to injury, because these same vulnerable services are the very services an ISP would use to manage, update, or patch affected systems against vulnerabilities like these, should an attacker compromise them in advance, all bets are off for patching these devices short of physical replacement. It is also very curious that outside of the top SSL certificate subject and fingerprint, there is still a clear pattern in the certificates: there is a common name with a long integer after it, which looks plausibly like a serial number. Perhaps at some point in their history, these devices used a different scheme for SSL certificate generation and inadvertently included the serial number. Some simple testing with a supposedly unaffected device showed that this number didn’t necessarily match the serial number. Examining Project Heisenberg’s logs for any traffic appearing on 49955/TCP shows only a minimal amount of background noise, and no obvious widespread exploitation yet in 2017.

Vulnerability 4: Port 61001/TCP Exposure

Much like with vulnerabilities 2 and 3 on port 49955/TCP, Sonar is a bit hamstrung in its ability to test for the presence of this vulnerability on the public internet. Following the same steps as we did with 49955/TCP, we observed ~5.8 million IPs on the public IPv4 Internet with port 61001/TCP open. A second pass of filtering showed that nearly half of these were HTTPS. Using the same payload analysis technique as before didn’t pay dividends this time: while the responses are all very similar (large clusters of HTTP 404, 403, and other default-looking HTTP responses), there is no clear outlier. The top response, from ~874,000 endpoints, looks similar to what we observed on 49955/TCP: lots of Texas, with some California slipping in. The vast majority of the remainder appear to be 2Wire DSL routers that are also used by AT&T U-Verse. The twist here is that Arris acquired 2Wire several years ago.
Whether or not these 2Wire devices are affected by any of these issues is currently unknown. As shown above, there is still a significant presence in the Southern United States, but there is also a sizeable Western presence now, which really highlights the supply chain problem that NoMotion mentioned in their research. While the 49955/TCP vulnerability appears to be isolated to just one region of the United States, the 61001/TCP issue has a broader reach, further implying that this extends beyond just the Arris models named by NoMotion, but not necessarily beyond AT&T. Repeating the same investigation into the SSL certificates on port 61001/TCP shows that there are likely some patterns here, including the exact same Arris certificate showing up again, this time on over 45,000 endpoints, and Motorola making an appearance with three quarters of a million. Examining Project Heisenberg’s logs for any traffic appearing on 61001/TCP shows only a minimal amount of background noise and no obvious widespread exploitation yet in 2017.

Vulnerability 5: Port 49152/TCP Exposure

The service listening on 49152/TCP appears to be used as a kind of source-routing, application-layer-to-MAC-layer TCP proxy. By specifying a magic string, the correct opcode, a valid MAC, and a port, the “wproxy” service will forward any remaining data received during a connection to port 49152/TCP from (generally) the WAN to a host on the LAN with the specified MAC, on the specified port. Why exactly this needs to be exposed to the outside world with no restrictions whatsoever is unknown, but perhaps the organizations in question deploy this for debugging and maintenance purposes and failed to properly secure it. In order to gauge exposure of this issue, we developed a Sonar study that sends the wproxy service a syntactically valid payload that elicits an error response.
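A probe along those lines might be assembled as below. The exact wproxy field widths, ordering, and magic value are not publicly documented, so every constant here is an assumption standing in for the real values:

```python
import struct

def build_wproxy_probe(magic: bytes = b"MAGIC!",    # placeholder magic string
                       opcode: int = 0xFF,          # deliberately invalid opcode
                       mac: bytes = b"\x00" * 6,    # deliberately invalid MAC
                       port: int = 0) -> bytes:     # deliberately invalid port
    """Build an illustrative wproxy probe: a valid magic string followed
    by an opcode byte, a 6-byte MAC, and a 2-byte port. Because only the
    magic is valid, a real wproxy endpoint should answer with an error
    we can fingerprint, without forwarding anything anywhere."""
    return magic + struct.pack("!B6sH", opcode, mac, port)
```

The point of the invalid fields is purely identification: an error reply confirms the service without ever proxying traffic into someone’s LAN.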
More specifically, the study sends a request with a valid magic string, an invalid opcode, an invalid MAC, and an invalid port, which in turn generally causes the remote endpoint to return an error that allows us to positively identify the wproxy service. Because this vulnerability is inherent in the service itself, due to a lack of any authentication or authorization, any endpoint exposing this service is at risk.

As with the other at-risk services described so far, our first step was to determine how many public IPv4 endpoints seemed to have the affected port, 49152/TCP, open. A quick zmap scan showed nearly 8 million hosts with this port open. With our limited knowledge of the service’s protocol, we looked for any wproxy-like responses, which quickly whittled the list down to approximately 42,000 IPv4 hosts exposing the wproxy service. We had hoped that a quick application of geo-IP would finish the job, but it wasn’t quite that simple. Using the same techniques as with the other services, we grouped by common locations until something caught our eye, and immediately we knew something was up. Up until this point, everything had landed squarely in AT&T land, clustering around Texas and California, but several different lenses into the 49152/TCP data pointed to one region: Connecticut. Sure, there are a few AT&T mentions, and even 5 oddly belonging to Arris in Georgia, but otherwise this particular service seemed off. Why all Texas/California AT&T previously, but now Frontier in Connecticut? Guesses of bad geo-IP data wouldn’t be too far off, but in reality, Frontier acquired all of AT&T’s broadband business in Connecticut 3 years ago.
This means that AT&T broadband customers who were at risk of having their internal networks swiss-cheesed by determined attackers with a penchant for packets for at least 3 years are now actually Frontier customers using AT&T hardware, almost certainly further complicating the supply chain problem and definitely putting customers at risk because of a service that should never have seen the public internet in the first place. Examining Project Heisenberg’s logs for traffic on 49152/TCP shows largely just suspected background noise in 2017, albeit a little higher than on ports 49955/TCP and 61001/TCP. There are a few slight spikes back in February 2017, perhaps indicating some early scouting, but they are just as likely to have been background noise or probes for entirely different issues. Some high-level investigation shows a deluge of blindly lobbed HTTP exploits at this port.

Conclusions

The issues disclosed by NoMotion are certainly attention-grabbing, since the initial analysis implies that AT&T U-Verse, a national internet service provider with millions of customers, is powered by dangerously vulnerable home routers. However, our analysis of what actually matches the described SharknAT&To indicators points to a more localized phenomenon: Texas and other Southern areas are primarily indicated, with flare-ups in California, Chicago, and Connecticut, and significantly lower populations in other regions of the U.S. These results seem to imply which vendor is in the best position to fix these bugs, but the supply chain problems detailed above add a level of complication that will inevitably leave some customers at risk unnecessarily. Armed with these Sonar results, we can say with confidence that these vulnerabilities are almost wholly contained in the AT&T U-Verse and associated networks, and not part of the wider Arris ecosystem of hardware.
This, in turn, implies that the software was produced or implemented by the ISP, and not natively shipped by the hardware manufacturer. This knowledge will hopefully speed up remediation. Interested in further collaboration on this? Have additional information? Questions? Comments? Leave them here or reach out to research@rapid7.com!

Remote Desktop Protocol (RDP) Exposure


The Remote Desktop Protocol, commonly referred to as RDP, is a proprietary protocol developed by Microsoft that provides a graphical means of connecting to a network-connected computer. RDP client and server support has been present in varying capacities in nearly every Windows version since NT. Outside of Microsoft's offerings, there are RDP clients available for most other operating systems. If the nitty gritty of protocols is your thing, Wikipedia's Remote Desktop Protocol article is a good start on your way to a trove of TechNet articles. RDP is essentially a protocol for dangling your keyboard, mouse and a display for others to use. As you might expect, a juicy protocol like this has a variety of knobs for controlling its security capabilities, including user authentication, what encryption is used, and more.

The default RDP configuration on older versions of Windows left it vulnerable to several attacks when enabled; however, newer versions have upped the game considerably by requiring Network Level Authentication (NLA) by default. If you are interested in reading more about securing RDP, UC Berkeley has put together a helpful guide, and Tom Sellers, prior to joining Rapid7, wrote about specific risks related to RDP and how to address them. RDP's history from a security perspective is varied.
Dating back to at least 1999, there have been 20 Microsoft security updates specifically related to RDP and at least 24 separate CVEs:

- MS99-028: Terminal Server Connection Request Flooding Vulnerability
- MS00-087: Terminal Server Login Buffer Overflow Vulnerability
- MS01-052: Invalid RDP Data Can Cause Terminal Service Failure
- MS02-051: Cryptographic Flaw in RDP Protocol can Lead to Information Disclosure
- MS05-041: Vulnerability in Remote Desktop Protocol Could Allow Denial of Service
- MS09-044: Vulnerabilities in Remote Desktop Connection Could Allow Remote Code Execution
- MS11-017: Vulnerability in Remote Desktop Client Could Allow Remote Code Execution
- MS11-061: Vulnerability in Remote Desktop Web Access Could Allow Elevation of Privilege
- MS11-065: Vulnerability in Remote Desktop Protocol Could Allow Denial of Service
- MS12-020: Vulnerabilities in Remote Desktop Could Allow Remote Code Execution
- MS12-036: Vulnerability in Remote Desktop Could Allow Remote Code Execution
- MS12-053: Vulnerability in Remote Desktop Could Allow Remote Code Execution
- MS13-029: Vulnerability in Remote Desktop Client Could Allow Remote Code Execution
- MS14-030: Vulnerability in Remote Desktop Could Allow Tampering
- MS14-074: Vulnerability in Remote Desktop Protocol Could Allow Security Feature Bypass
- MS15-030: Vulnerability in Remote Desktop Protocol Could Allow Denial of Service
- MS15-067: Vulnerability in RDP Could Allow Remote Code Execution
- MS15-082: Vulnerabilities in RDP Could Allow Remote Code Execution
- MS16-017: Security Update for Remote Desktop Display Driver to Address Elevation of Privilege
- MS16-067: Security Update for Volume Manager Driver

In more recent times, the Esteemaudit exploit was found as part of the ShadowBrokers leak, targeting RDP on Windows 2003 and XP systems, and it was perhaps the reason for the most recent RDP vulnerability addressed in CVE-2017-0176.
RDP is disabled by default for all versions of Windows but is very commonly exposed in internal networks for ease of use in a variety of duties like administration and support. I can't think of a place where I've worked where it wasn't used in some capacity. There is no denying the convenience it provides.

RDP also finds itself exposed on the public internet more often than you might think. Depending on how RDP is configured, exposing it on the public internet ranges from suicidal on the weak end to not-too-unreasonable on the other. It is easy to simply suggest that proper firewall rules or ACLs restricting RDP access to all but trusted IPs are sufficient protection, but all that extra security only gets in the way when Bob-from-Accounting's IP address changes weekly. Sure, a VPN might be something that RDP could hide behind and be considerably more secure, but you could also argue that a highly secured RDP endpoint on the public internet is comparable, security-wise, to a VPN. And when your security-unsavvy family members or friends need help from afar, enabling RDP is definitely an option that is frequently chosen. There have also been reports that scammers have been using RDP as part of their attacks, often convincing unwary users to enable RDP so that “remote support” can be provided. As you can see and imagine, there are all manner of ways that RDP could end up exposed on the public internet, deliberately or otherwise.

It should come as no surprise, then, to learn that we've been doing some poking at the global exposure of RDP on the public IPv4 internet as part of Rapid7 Labs' Project Sonar. Labs first looked at the abuse of RDP from a honeypot's perspective as part of the Attacker's Dictionary research published last year. Around the same time, in early 2016, Sonar observed 10.8 million supposedly open RDP endpoints.
As part of the research for Rapid7's 2016 National Exposure Index, we observed 9 million and 9.4 million supposedly open RDP endpoints in our two measurements in the second quarter of 2016. More recently, as part of the 2017 National Exposure Index, in the first quarter of 2017, Sonar observed 7.2 million supposedly open RDP endpoints. Exposing an endpoint is one thing, but actually exposing the protocol in question is where the bulk of the risk comes from. As part of running Sonar, we frequently see a variety of honeypots, tarpits, IPS devices, or other security middleware that will make it appear as if an endpoint is open when it really isn't—or when it really isn't speaking the protocol you are expecting. As such, I'm always skeptical of these initial numbers. Surely there aren't really 7-10 million systems exposing RDP on the public internet. Right? Recently, we launched a Sonar study to shed more light on the number of systems actually exposing RDP on the public internet. We built on the previous RDP studies, which were simple zmap SYN scans, by following up with a full connection to each IP that responded positively and attempting the first in a series of protocol exchanges that occur when an RDP client first contacts an RDP server. This simple, preliminary protocol negotiation mimics what modern RDP clients perform and is similar to what Nmap uses to identify RDP. This 19-byte RDP negotiation request should elicit a response from almost every valid RDP configuration it encounters, from the default (less secure) settings of older RDP versions to the NLA and SSL/TLS requirements of newer defaults. We analyzed the responses, tallying any that appeared to be from RDP-speaking endpoints, counting both error messages indicating possible client- or server-side configuration issues as well as success messages. Of the roughly 11 million open 3389/TCP endpoints, 4.1 million responded in such a way that they were speaking RDP of some manner or another.
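As a rough illustration of what that 19-byte probe looks like on the wire, here is a sketch in Python based on the TPKT/X.224/RDP_NEG_REQ layering documented in Microsoft's MS-RDPBCGR specification. The exact protocol flags Sonar proposes are an assumption on my part (here, both TLS and CredSSP), and this is not Sonar's actual probe code:

```python
import struct

# requestedProtocols flags from MS-RDPBCGR (RDP_NEG_REQ)
PROTOCOL_SSL = 0x00000001     # TLS security
PROTOCOL_HYBRID = 0x00000002  # CredSSP (NLA)

def build_rdp_negotiation_request(protocols=PROTOCOL_SSL | PROTOCOL_HYBRID):
    """Build the 19-byte client probe: TPKT header + X.224 CR TPDU + RDP_NEG_REQ."""
    # RDP_NEG_REQ: type 0x01, flags 0x00, length 8, requestedProtocols (little-endian)
    neg_req = struct.pack('<BBHI', 0x01, 0x00, 0x0008, protocols)
    # X.224 Connection Request: length indicator 14, CR code 0xE0, dst/src refs, class 0
    x224 = struct.pack('>BBHHB', 14, 0xE0, 0, 0, 0) + neg_req
    # TPKT header: version 3, reserved 0, total length (big-endian)
    return struct.pack('>BBH', 3, 0, 4 + len(x224)) + x224

probe = build_rdp_negotiation_request()
assert len(probe) == 19
```

Sending this over a TCP connection to 3389/TCP is enough to elicit either a negotiation response or a negotiation failure from most RDP servers.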
This number is shockingly high when you remember that this protocol is effectively a way to expose a keyboard, mouse, and ultimately a Windows desktop over the network. Furthermore, any RDP-speaking endpoints discovered by this Sonar study are not protected by basic firewall rules or ACLs, which brings into question whether any of the other basic security practices have been applied to these endpoints. Given the myriad ways that RDP could end up exposed on the public Internet, as observed in this recent Sonar study, it is hard to say at first glance why any one country would have more exposed RDP than another, but clearly the United States and China have something different going on than everyone else. Looked at from a different angle, by examining the organizations that own the IPs with exposed RDP endpoints, things become much clearer: the vast majority of these providers are known to be cloud, virtual, or physical hosting providers where remote access to a Windows machine is a frequent necessity; it's no surprise, therefore, that they dominate exposure. We can draw further conclusions by examining the RDP responses we received. Amazingly, over 83% of the RDP endpoints we identified indicated that they were willing to proceed with CredSSP as the security protocol, implying that the endpoint is willing to use one of the more secure protocols to authenticate and protect the RDP session. A small handful, in the few-thousand range, selected SSL/TLS. Just over 15% indicated that they didn't support SSL/TLS (despite our also proposing CredSSP…) or that they only supported the legacy “Standard RDP Security”, which is susceptible to man-in-the-middle attacks. Over 80% of exposed endpoints supporting common means for securing RDP sessions is rather impressive. Is this a glimmer of hope for the arguably high number of exposed RDP endpoints?
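For anyone wanting to reproduce this kind of tally, the server's choice is carried in the RDP Negotiation Response defined in MS-RDPBCGR. The following hypothetical parser (my own sketch, not Sonar's analysis code) shows how a response can be classified by selected security protocol:

```python
import struct

# RDP negotiation structure types and protocol values from MS-RDPBCGR
TYPE_RDP_NEG_RSP = 0x02
TYPE_RDP_NEG_FAILURE = 0x03
PROTOCOL_NAMES = {0: 'Standard RDP Security', 1: 'SSL/TLS', 2: 'CredSSP'}

def classify_rdp_response(data):
    """Classify a server response: skip TPKT (4 bytes) and X.224 CC (7 bytes),
    then decode the 8-byte RDP_NEG_RSP / RDP_NEG_FAILURE structure."""
    if len(data) < 19:
        return 'no RDP negotiation payload'
    rtype, _flags, _length, value = struct.unpack('<BBHI', data[11:19])
    if rtype == TYPE_RDP_NEG_RSP:
        return PROTOCOL_NAMES.get(value, 'unknown protocol %#x' % value)
    if rtype == TYPE_RDP_NEG_FAILURE:
        return 'negotiation failure (code %d)' % value
    return 'not an RDP negotiation response'

# Synthetic response for illustration: server selects CredSSP (protocol value 2)
resp = (struct.pack('>BBH', 3, 0, 19)                # TPKT
        + struct.pack('>BBHHB', 14, 0xD0, 0, 0, 0)   # X.224 Connection Confirm
        + struct.pack('<BBHI', 2, 0, 8, 2))          # RDP_NEG_RSP, CredSSP
assert classify_rdp_response(resp) == 'CredSSP'
```

Both the failure codes (e.g. a server requiring NLA) and the success values feed directly into the kind of percentages reported above.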
Areas for potential future research could include:

- Security protocols and supported encryption levels. Nmap has an NSE script that will enumerate the security protocols and encryption levels available for RDP. While 83% of the RDP-speaking endpoints support CredSSP, this does not mean that they don't also support less secure options; it just means that if a client is willing, they can take the more secure route.
- When TLS/SSL or CredSSP are involved, are organizations following best practices with regard to certificates, including self-signed certificates (perhaps leading to MITM?), expiration, and weak algorithms?
- Exploring the functionality of RDP in non-Microsoft client and server implementations.

Rapid7's InsightVM and Metasploit have fingerprinting coverage to identify RDP, and InsightVM has vulnerability coverage for all of the above-mentioned RDP vulnerabilities. Interested in this RDP research? Have ideas for more? Want to collaborate? We'd love to hear from you, either in the comments below or at research@rapid7.com.

Project Sonar - Mo' Data, Mo' Research

Since its inception, Rapid7's Project Sonar has aimed to share the data and knowledge we've gained from our Internet scanning and collection activities with the larger information security community.  Over the years this has resulted in vulnerability disclosures, research papers, conference presentations, community collaboration…

Since its inception, Rapid7's Project Sonar has aimed to share the data and knowledge we've gained from our Internet scanning and collection activities with the larger information security community. Over the years this has resulted in vulnerability disclosures, research papers, conference presentations, community collaboration and data. Lots and lots of data. Thanks to our friends at scans.io, Censys, and the University of Michigan, we've been able to provide the general public free access to much of our data, including:

- 4 years of bi-weekly HTTP GET / studies. Over 6T of data. https://scans.io/study/sonar.http
- ~1 year of bi-weekly HTTPS GET / studies. Over 2T of data. https://scans.io/study/sonar.https
- 3 years and nearly 1000 ~monthly studies of common UDP services. Over 100G of juicy data. https://scans.io/study/sonar.udp
- 4 years, nearly 300G, and 1500 bi-weekly studies of the SSL certificates obtained by examining commonly exposed SSL services. https://scans.io/study/sonar.ssl and https://scans.io/study/sonar.moressl
- 3 years and hundreds of ~weekly forward DNS (FDNS) and reverse DNS (RDNS) studies. Nearly 2T of data. https://scans.io/study/sonar.fdns, https://scans.io/study/sonar.fdns_v2, https://scans.io/study/sonar.rdns, and https://scans.io/study/sonar.rdns_v2
- New! zmap SYN scan results for any Sonar TCP study. A little data now, but a lot over time. https://scans.io/study/sonar.tcp

As Project Sonar continues, we will continue to publish our data through the outlets listed above, perhaps in addition to others. Are you interested in Project Sonar? Are you using this data? If so, how? Interested in seeing additional studies performed? Have questions about the existing studies or how to use or interpret the data? We love hearing from the community! Post a comment below or reach out to us at research [at] rapid7 [dot] com.

Signal to Noise in Internet Scanning Research

We live in an interesting time for research related to Internet scanning. There is a wealth of data and services to aid in research. Scanning related initiatives like Rapid7's Project Sonar, Censys, Shodan, Shadowserver or any number of other public/semi-public projects have been around…

We live in an interesting time for research related to Internet scanning. There is a wealth of data and services to aid in research. Scanning-related initiatives like Rapid7's Project Sonar, Censys, Shodan, Shadowserver, or any number of other public/semi-public projects have been around for years, collecting massive troves of data. The data, and the services built around it, have been used for all manner of research. In cases where existing scanning services and data cannot answer burning security research questions, it is not unreasonable for one to slap together some minimal infrastructure to perform Internet-wide scans. Mix the appropriate amounts of zmap or masscan with some carefully selected hosting/cloud providers, a dash of automation, and a crash-course in the legal complexities related to "scanning", and the questions you ponder over morning coffee can have answers by day's end. So, from one perspective, there is an abundance of signal. Data is readily available. There is, unfortunately, a significant amount of noise that must be dealt with. Dig even slightly below the surface of almost any data produced by these scanning initiatives and you'll have a variety of problems to contend with that can waylay researchers. For example, there are a variety of snags related to the collection of the scan data that could influence the results of research:

- Natural fluctuation of IPs and endpoint reachability due to things like DHCP, mobile devices, or misconfiguration.
- When blacklists or opt-out lists are utilized to allow IP "owners" to opt out of a given project's scanning efforts, how big is this blacklist? What IPs are in it? How has it changed since the last scan?
- Are there design issues/bugs in the system used to collect the scan data in the first place that influenced the scan results?
- During a given study, were there routing or other connectivity issues that affected the reachability of targets?
- Has this data already been collected?
If so, can that data be used instead of performing an additional scan? Worse, even in the absence of any problems related to the collection of the scan data, the data itself is often problematic:

- Size. Scans of even just a single port and protocol can result in a massive amount of data to be dealt with. For example, a simple HTTP GET request to every 80/TCP IPv4 endpoint currently results in a compressed archive of over 75G. Perform deeper HTTP 1.1 vhost scans and you'll quickly have to contend with a terabyte or more. Data of this size requires special considerations when it comes to storage, transfer, and processing.
- Variety. Everything from Internet-connected bovine milking machines to toasters, *** toys, appliances, and an increasingly large number of "smart" or "intelligent" devices is being connected to the Internet, exposing services in places you might not expect them. For example, pick any TCP port and you can guarantee that some non-trivial number of the responses will be from HTTP services of one type or another. These potentially unexpected responses may need to be carefully handled during data analysis.
- Oddities. There is not a single TCP or UDP port that wouldn't yield a few thousand responses, regardless of how seemingly random the port may be. 12345/TCP? 1337/UDP? 65535/TCP? Sure. You can believe that there will be something out there responding on that port in some way. Oftentimes these responses are the result of some security device between the scan source and destination. For example, there is a large ISP that responds to any probe on any UDP port with an HTTP 404 response over UDP. There is a vendor with products and services used to combat DDoS that does something similar, responding to any inbound TCP connection with HTTP responses.

Lastly, there is the issue of focus. It is very easy for research that is based on Internet scanning data to quickly venture off course and become distracted.
There is seemingly no end to the amount of strange things that will be connected in strange ways to the public IP space that will tempt the typically curious researcher. Be careful out there!

The Internet of Gas Station Tank Gauges -- Final Take?

In early 2015, HD Moore published some of the first publicly accessible research related to Internet-connected gas station tank gauges, The Internet of Gas Station Tank Gauges. Later that same year, I did a follow-up study that probed a little deeper in The Internet of…

In early 2015, HD Moore published some of the first publicly accessible research related to Internet-connected gas station tank gauges, The Internet of Gas Station Tank Gauges. Later that same year, I did a follow-up study that probed a little deeper in The Internet of Gas Station Tank Gauges — Take #2. As part of that study, we were attempting to see if the exposure of these devices had changed in the ~10 months since our initial study, as well as probe a little bit deeper to see if there were affected devices that we missed in the initial study due to the study's primitive inspection capabilities at the time. Somewhat unsurprisingly, the answer was no: things hadn't really changed, and even with the additional inspection capabilities we didn't see a wild swing that would be any cause for alarm. Recently, we decided to blow the dust off this study and re-run it for old time's sake, in the event that things had taken a wild swing in either direction or other interesting patterns could be derived. Again, we found very little changed.

Not-ATGs and the Signal to Noise Ratio

What is often overlooked in studies like this is the signal to noise ratio seen in the results, the "signal" being protocol responses you expect to see and the "noise" being responses that are a bit unexpected. For example, finding SSH servers running on HTTP ports, typically TCP-only services being crudely crammed over UDP, and gobs of unknown, intriguing responses that will keep researchers busy chasing down explanations for years. These ATG studies were no exception. In the most recent zmap TCP SYN scan done against port 10001 on November 3, 2016, we found nearly 3.4 million endpoints responding as open. Of those, we had difficulty sending our ATG-specific probes to over 2.8 million endpoints — some encountered socket-level errors, others simply received no responses. It is likely that a large portion of these responses, or lack thereof, are due to devices such as tar-pits, IDS/IPS, etc.
The majority of the remaining endpoints appear to be a smattering of HTTP, FTP, SSH and other common services run on odd ports for one reason or another. And last but not least are a measly couple of thousand ATGs. I hope to explore the signal- and noise-related problems of Internet scanning in a future post.

Future ATG Research and Scan Data

We believe that it is important to be transparent with as much of our research as possible. Even if a particular research path we take ends up at a dead, boring end, publishing our process and what we did (or didn't) find might help a future wayward researcher who ends up in this particular neck of the woods navigate accordingly. With that said, this is likely to be our last post related to ATGs unless new research/ideas arise or interesting swings in results occur in future studies. Is there more to be done here? Absolutely! Possible areas for future research include:

- Are there additional commands that exposed ATGs might support that provide data that is worth researching, for security or otherwise?
- Are there other services exposed on these ATGs? What are the security implications?
- Are there advancements to be made on the offensive or defensive side relating to ATGs and related technologies?

We have published the raw zmap TCP scan results for all of the ATG studies we've done to date here. We have also started conducting these studies on a monthly basis, and these runs will automatically upload to scans.io when complete. As usual, thanks for reading, and we welcome your feedback, comments, criticisms, ideas or collaboration requests here as a comment or by reaching out to us at research@rapid7.com. Enjoy!

NCSAM: Understanding UDP Amplification Vulnerabilities Through Rapid7 Research

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial…

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face. When we began brainstorming ideas for NCSAM, I suggested something related to distributed denial of service (DDoS) attacks, specifically with a focus on the UDP amplification vulnerabilities that are typically abused as part of these attacks.  Rarely a day goes by lately in the infosec world where you don't hear about DDoS attacks crushing the Internet presence of various companies for a few hours, days, weeks, or more.  Even as I wrote this, DDoS attacks are on the front page of many major news outlets. A variety of services that I needed to write this very blog post were down because of DDoS, and I even heard DDoS discussed on the only radio station I am able to get where I live. Timely. What follows is a brief primer on and a look at what resources Rapid7 provides for further understanding UDP amplification attacks. Background A denial of service (DoS) vulnerability is about as simple as it sounds -- this vulnerability exists when it is possible to deny, prevent or in some way hinder access to a particular type of service.  Abusing a DoS vulnerability usually involves an attack that consumes precious compute or network resources such as CPU, memory, disk, and network bandwidth. A DDoS attack is just a DoS attack on a larger scale, often using the resources of compromised devices on the Internet or other unwitting systems to participate in the attack. 
A distributed, reflected denial of service (DRDoS) attack is a specialized variant of the DDoS attack that typically exploits UDP amplification vulnerabilities. These are often referred to as volumetric DDoS attacks, a more generic type of DDoS attack that specifically attempts to consume precious network resources. A UDP amplification vulnerability occurs when a UDP service responds with more data or packets than the initial request that elicited the response(s). Combined with IP packet spoofing/forgery, attackers send a large number of spoofed UDP datagrams to UDP endpoints known to be susceptible to UDP amplification, using a source address corresponding to the IP address of the ultimate target of the DoS. In this sense, the forged packets cause the UDP service to "reflect" back at the DoS target. The exact impact of the attack is a function of how many systems participate in the attack and their available network resources, the network resources available to the target, as well as the bandwidth and packet amplification factors of the UDP service in question. A UDP service that returns 100 bytes of UDP payload in response to a 1-byte request is said to have a 100x bandwidth amplification factor. A UDP service that returns 5 UDP packets in response to 1 request is said to have a 5x packet amplification factor. Oftentimes a ratio is used in place of a factor. For example, a 5x amplification factor can also be said to have a 1:5 amplification ratio. For more information, consider the following resources:

- US-CERT's alert on UDP-Based Amplification Attacks
- The Spoofer project from the Center for Applied Internet Data Analysis (CAIDA)

Sonar UDP Studies

Rapid7's Project Sonar has been performing a variety of UDP scans on a monthly basis and uploading the results to scans.io for consumption by the larger infosec/network community for nearly three years.
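The bandwidth and packet amplification factors defined above are simple ratios, which can be sketched as follows (the helper name is mine, not from any Rapid7 tool):

```python
def amplification(request_len, response_lens):
    """Compute (bandwidth_factor, packet_factor) for one request and its
    responses, where response_lens is a list of per-datagram payload sizes."""
    return sum(response_lens) / request_len, len(response_lens)

# Examples from the text: 100 bytes back for a 1-byte request is a 100x
# bandwidth factor; 5 packets back for 1 request is a 5x packet factor.
assert amplification(1, [100]) == (100.0, 1)
assert amplification(1, [64, 64, 64, 64, 64]) == (320.0, 5)
```

The same arithmetic, applied per probe/response pair, is what turns raw scan payloads into the amplification figures discussed throughout this post.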
Infosec practitioners can use this raw scan data to research a variety of aspects related to UDP amplification vulnerabilities, including geographic/sector-specific patterns, amplification factors, etc. There are, however, some caveats:

- Although we do not currently have studies for all UDP services with amplification vulnerabilities, we have a fair number and are in the process of adding more.
- Not all of these studies specifically cover the UDP amplification vulnerabilities for the given service. Some happen to use other probe types more likely to elicit a response. In these cases, the existence of a response for a given IP simply means that it responded to our probe for that service and is likely running the service in question, but doesn't necessarily imply that the IP in question is harboring a UDP amplification vulnerability.
- Our current usage of zmap is such that we will only record the first UDP packet seen in response to our probe. So, if a UDP service happens to suffer from a packet-based UDP amplification vulnerability, the Sonar data alone may not show the true extent.

Currently, Rapid7's Project Sonar has coverage for a variety of UDP services that happen to be susceptible to UDP amplification attacks. Dennis Rand, a security researcher from Denmark, recently reached out asking us to provide regular studies of the qotd (17/UDP), chargen (19/UDP) and RIPv1 (520/UDP) services. When discussing his use cases for these and the other existing Sonar studies, Dennis had the following to add: "I've been using the dataset from Rapid7 UDP Sonar for various research projects as a baseline and core part of the dataset in my research has been amazing. This data could be used by any ISPs out there to detect if they are potentially being part of the problem.
A simple task could be to setup a script that would pull the lists every month and the compare it against previous months, if at some point the numbers go way up, this could be an indication that you might have opened up for something you should not, or at least be aware of this fact in you internal risk assessment. Also it is awesome to work with people who are first of doing this for free, at least seen from my perspective, but still being so open to helping out in the research, like adding new service to the dataset help me to span even wider in my research projects."

For each of the studies described below, the data provided on scans.io is gzip-compressed CSV with a header indicating the respective field values, which are, for every host that responded: the timestamp in seconds since the UNIX epoch, the source address and port of the response, the destination address and port (where Sonar does its scanning from), the IP ID, the TTL, and the hex-encoded UDP response payload, if any. Precisely how to decode the data for each of the studies listed below is an exercise currently left for the reader that may be addressed in future documentation, but for now the descriptions below in combination with Rapid7's dap should be sufficient.

DNS (53/UDP)

This study sends a DNS query to 53/UDP for the VERSION.BIND text record in the CHAOS class. In some situations this will return the version of ISC BIND that the DNS server is running, and in others it will just return an error. Data can be found here in files with the -dns-53.csv.gz suffix. In the most recent run of this study on 10/03/2016, there were 7,963,280 endpoints that responded.

NTP (123/UDP)

There are two variants of this study. The first sends an NTP version 2 MON_GETLIST_1 request, which will return a list of all recently connected NTP peers, generally up to 6 per packet with additional peers sent in subsequent UDP responses.
Responses for this study can be found here in files with the ntpmonlist-123.csv.gz suffix. The probe used in this study is the same as one frequently used in UDP amplification attacks against NTP. In the most recent run of this study on 10/03/2016, 1,194,146 endpoints responded. The second variant of this study sends an NTP version 2 READVAR request and will return all of the internal variables known to the NTP process, which typically includes things like software version, information about the underlying OS or hardware, and data specific to NTP's time keeping. The responses can be found here in files with the ntpreadvar-123.csv.gz suffix. In the most recent run of this study on 10/03/2016, 2,245,681 endpoints responded. Other UDP amplification attacks in NTP that continue to enable DDoS attacks are described in R7-2014-12.

NBNS (137/UDP)

This study has been described in greater detail here, but the summary is that this study sends an NBNS name request. Most endpoints speaking NBNS will return a wealth of metadata about the node/service in question, including system and group/domain names and MAC addresses. This is the same probe that is frequently used in UDP amplification attacks against NBNS. The responses can be found here in files with the -netbios-137.csv.gz suffix. In the most recent run of this study on 10/03/2016, 1,768,634 endpoints responded.

SSDP/UPnP (1900/UDP)

This study sends an SSDP request that will discover the rootdevice service of most UPnP/SSDP-enabled endpoints. The responses can be found here in files with the -upnp-1900.csv.gz suffix. UDP amplification attacks against SSDP/UPnP often involve a similar request but for all services, often resulting in a 10x packet amplification and a 40x bandwidth amplification. In the most recent run of this study on 10/03/2016, 4,842,639 endpoints responded.

Portmap (111/UDP)

This study sends an RPC DUMP request to version 2 of the portmap service.
Most endpoints exposing 111/UDP that are the portmap RPC service will return a list of all of the RPC services available on that node. The responses can be found here in files with the -portmap-111.csv.gz suffix. There are numerous ways to exploit UDP amplification vulnerabilities in portmap, including the same one used in the Sonar study, a portmap version 3 variant that is often more voluminous, and a portmap version 4 GETSTAT request. In the most recent run of this study on 10/03/2016, 2,836,710 endpoints responded.

Quote of the Day (17/UDP)

The qotd service is essentially the UNIX fortune command bound to a UDP socket, returning quotes/adages in response to any incoming 17/UDP datagram. Sonar's version of this study sends an empty UDP datagram to the port and records any responses, which is believed to be similar to the variant used in UDP amplification attacks. The responses can be found here in files with the -qotd-17.csv.gz suffix. In the most recent run of this newly added study on 10/21/2016, a mere 2,949 endpoints responded.

Character Generator (19/UDP)

The chargen service is a service from a time when the Internet was a wholly different place. The UDP variant of chargen will send a random number of bytes in response to any datagram arriving on 19/UDP. While most implementations stick to purely ASCII strings of random lengths between 0 and 512, some are much chattier, spewing MTU-filling gibberish, packet after packet. The responses can be found here in files with the -chargen-19.csv.gz suffix. In the most recent run of this newly added study on 10/21/2016, only 3,791 endpoints responded.
RIPv1 (520/UDP)

UDP amplification attacks against the Routing Information Protocol version 1 (RIPv1) involve sending a specially crafted request that will result in RIP responding with 20 bytes of data for every route it knows about, with up to 25 routes per response for a maximum response size of 504 bytes; RIP instances with more than 25 routes will split responses over multiple packets, adding packet amplification pains to the mix. The responses can be found here in files with the -ripv1-520.csv.gz suffix. In the most recent run of this newly added study on 10/21/2016, 17,428 endpoints responded.

Metasploit Modules

Rapid7's Metasploit has coverage for a variety of these UDP amplification vulnerabilities built into "scanner" modules available to both the base metasploit-framework as well as the Metasploit Community and Professional editions via:

- auxiliary/scanner/chargen/chargen_probe: this module probes endpoints for the chargen service, which suffers from a UDP amplification vulnerability inherent in its design.
- auxiliary/scanner/dns/dns_amp: in its default mode, this module will send an ANY query for isc.org to the target endpoints, which is similar to the query used while abusing DNS as part of DRDoS attacks.
- auxiliary/scanner/ntp/ntp_monlist: this module sends the NTP MON_GETLIST request, which will return all configured/connected NTP peers from the NTP endpoint in question. This behavior can be abused as part of UDP amplification attacks and is described in more detail in US-CERT TA14-013A and CVE-2013-5211.
- auxiliary/scanner/ntp/ntp_readvar: this module sends the NTP READVAR request, the response to which can be used as part of UDP amplification attacks in certain situations.
- auxiliary/scanner/ntp/ntp_peer_list_dos: utilizes the NTP PEER_LIST request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/ntp/ntp_peer_list_sum_dos: utilizes the NTP PEER_LIST_SUM request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/ntp/ntp_req_nonce_dos: utilizes the NTP REQ_NONCE request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/ntp/ntp_reslist_dos: utilizes the NTP GET_RESTRICT request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/ntp/ntp_unsettrap_dos: utilizes the NTP UNSETTRAP request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/portmap/portmap_amp: this module has the ability to send three different portmap requests similar to those described previously, each of which has the potential to be abused for UDP amplification purposes.
- auxiliary/scanner/upnp/ssdp_amp: this module has the ability to send two different M-SEARCH requests that demonstrate UDP amplification vulnerabilities inherent in SSDP.

Each of these modules uses the Msf::Auxiliary::UDPScanner mixin to support scanning multiple hosts at the same time. Most send probes and analyze the responses with the Msf::Auxiliary::DRDoS mixin to automatically calculate any amplification based on a high-level analysis of the request/response datagram(s).
Here is an example run of auxiliary/scanner/ntp/ntp_monlist:

msf auxiliary(ntp_monlist) > run
[+] NTP monlist request permitted (5 entries)
[+] NTP monlist request permitted (4 entries)
[+] - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): No packet amplification and a 37x, 288-byte bandwidth amplification
[+] - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): No packet amplification and a 46x, 360-byte bandwidth amplification
[+] NTP monlist request permitted (31 entries)
[+] NTP monlist request permitted (23 entries)
[+] - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 6x packet amplification and a 285x, 2272-byte bandwidth amplification
[+] - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 4x packet amplification and a 211x, 1680-byte bandwidth amplification
[+] - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 2x packet amplification and a 2x, 8-byte bandwidth amplification
[*] Scanned 256 of 512 hosts (50% complete)
[+] NTP monlist request permitted (10 entries)
[+] - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 2x packet amplification and a 92x, 728-byte bandwidth amplification
[+] - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 2x packet amplification and a 2x, 8-byte bandwidth amplification
[*] Scanned 512 of 512 hosts (100% complete)
[*] Auxiliary module execution completed
msf auxiliary(ntp_monlist) >

There is also the auxiliary/scanner/udp/udp_amplification module, recently added as part of metasploit-framework PR 7489, that is designed to help explore UDP amplification vulnerabilities and audit for the presence of existing ones.
Nexpose Coverage

Rapid7's Nexpose product has coverage for all of the NTP vulnerabilities described above and in R7-2014-12, along with:

- netbios-nbstat-amplification
- dns-amplification
- chargen-amplification
- qotd-amplification
- quake-amplification
- steam-amplification
- upnp-ssdp-amplification
- snmp-amplification

Additional information about Nexpose's capabilities with regard to UDP amplification vulnerabilities can be found here.

Future Research

UDP amplification vulnerabilities have been lingering since the publication of RFC 768 in 1980, but only in the last couple of years have they really become a problem. Whether current and historical efforts to mitigate the impact of attacks involving UDP amplification have been effective is certainly debatable. Recent events have shown that only very well-fortified assets can survive DDoS attacks, and UDP amplification still plays a significant role. It is our hope that the open dissemination of active scan data through projects like Sonar and the availability of tools for detecting the presence of this class of vulnerability will serve as valuable tools in the fight against DDoS. If you are interested in collaborating on Metasploit modules for detecting other UDP amplification vulnerabilities, submit a Metasploit PR. If you are interested in having Sonar perform additional relevant studies, have interests in related research, or have questions, we welcome your comments here as well as by reaching out to us at research@rapid7.com. Enjoy!

Sonar NetBIOS Name Service Study

For the past several years, Rapid7's Project Sonar has been performing studies that explore the exposure of the NetBIOS name service on the public IPv4 Internet.  This post serves to describe the particulars behind the study and provide tools and data for future research in…

For the past several years, Rapid7's Project Sonar has been performing studies that explore the exposure of the NetBIOS name service on the public IPv4 Internet.  This post serves to describe the particulars behind the study and provide tools and data for future research in this area.

Protocol Overview

Originally conceived in the early 1980s, NetBIOS is a collection of services that allows applications running on different nodes to communicate over a network.  Over time, NetBIOS was adapted to operate on various network types including IBM's PC Network, token ring, Microsoft's MS-Net, Novell NetWare IPX/SPX, and ultimately TCP/IP. For purposes of this document, we will be discussing NetBIOS over TCP/IP (NBT), documented in RFC 1001 and RFC 1002. NBT comprises three services:

- A name service for name resolution and registration (137/UDP and 137/TCP)
- A datagram service for connectionless communication (138/UDP)
- A session service for session-oriented communication (139/TCP)

The UDP variant of the NetBIOS over TCP/IP name service on 137/UDP, NBNS, sometimes referred to as WINS (Windows Internet Name Service), is the focus of this study.  NBNS provides services related to NetBIOS names for NetBIOS-enabled nodes and applications.  The core functionality of NBNS includes name querying and registration capabilities, and it is similar in functionality and on-the-wire format to DNS, but with several NetBIOS/NBNS-specific details.

Although NetBIOS (and, in turn, NBNS) is predominantly spoken by Microsoft Windows systems, it is also very common to find this service active on OS X systems (netbiosd and/or Samba), Linux/UNIX systems (Samba), and all manner of printers, scanners, multi-function devices, storage devices, etc.  Fire up Wireshark or tcpdump on nearly any network that contains or regularly services Windows systems and you will almost certainly see NBNS traffic everywhere. The history of security issues with NBNS reads much like that of DNS.
Unauthenticated and communicated over a connectionless medium, attacks against NBNS include information disclosure of generally internal/private names and addresses, as well as name spoofing, interception, and cache poisoning. While not exhaustive, some notable security issues relating to NBNS include:

- Abusing NBNS to attack the Web Proxy Auto-Discovery (WPAD) feature of Microsoft Windows to perform man-in-the-middle attacks, resulting in [MS09-008](https://technet.microsoft.com/library/security/ms09-008)/CVE-2009-0094.
- Hot Potato, which leveraged WPAD abuse via NBNS in combination with other techniques to achieve privilege escalation on Windows 7 and above.
- BadTunnel, which utilized NetBIOS/NBNS in new ways to perform man-in-the-middle attacks against target Windows systems, ultimately resulting in Microsoft issuing MS16-077.
- Abusing NBNS to perform amplification attacks as seen during DDoS attacks, as warned by US-CERT's TA14-017a.

Study Overview

Project Sonar's study of NBNS on 137/UDP has been running for a little over two years as of the publication of this document.  For the first year the study ran once per week, but shortly thereafter it was changed to run once per month along with the other UDP studies in an effort to reduce noise.

The study uses a single, static, 50-byte NetBIOS "node status request" (NBSTAT) probe with a wildcard (*) scope that will return all configured names for the target NetBIOS-enabled node.  A name in this case refers to a particular capability of a NetBIOS-enabled node -- for example, this could (and often does) include the configured host name of the system, the workgroup/domain that it belongs to, and more.  In some cases, the presence of a particular type of name can be an indicator of the types of services a node provides.  For a more complete list of the types of names that can be seen in NBNS, see Microsoft's documentation on NetBIOS suffixes.
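The 50-byte wildcard node status probe is simple to reconstruct by hand; a sketch, assuming an arbitrary transaction ID, using RFC 1001's "first-level encoding" (each nibble of the name becomes a letter from 'A' to 'P'):

```python
import struct

def nbstat_probe(txid=0x1337):
    """Build the NetBIOS node status (NBSTAT) request for the wildcard name."""
    # DNS-style header: transaction ID, flags=0, QDCOUNT=1, AN/NS/AR counts=0
    header = struct.pack(">HHHHHH", txid, 0x0000, 1, 0, 0, 0)
    # The wildcard name is '*' padded with NULs to 16 bytes; first-level
    # encoding turns each nibble n of each name byte into the byte 'A' + n.
    name = b"*" + b"\x00" * 15
    encoded = bytes(0x41 + ((b >> s) & 0x0F) for b in name for s in (4, 0))
    # Question: length-prefixed encoded name, zero terminator, type NBSTAT, class IN
    question = bytes([len(encoded)]) + encoded + b"\x00"
    question += struct.pack(">HH", 0x0021, 0x0001)
    return header + question

probe = nbstat_probe()
assert len(probe) == 50   # matches the study's 50-byte probe
```

Sending this datagram to 137/UDP of a NetBIOS-enabled node should elicit a node status response listing its configured names.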
The probe used by Sonar is identical to the probe used by zmap and the probe used by Nmap.  A Wireshark-decoded sample of this probe can be seen below. This probe is sent to all public IPv4 addresses, excluding any networks that have requested removal from Sonar's studies, leaving ~3.6 billion possible target addresses for Sonar.  All responses, NBNS or otherwise, are saved.  Responses that appear to be legitimate NBNS responses are decoded for further analysis.  An example response from a Windows 2012 system is shown below. As a bonus for reconnaissance efforts, RFC 1002 also describes a statistics field at the end of the node status response, and one field within it, the "Unit ID", frequently contains the Ethernet or other MAC address of the responding node.

NetBIOS, and in particular NBNS, falls into the same bucket that many other services fall into -- there is no sufficiently valid business reason for it to be exposed on the public Internet.  Lacking authentication and riding on top of a connectionless protocol, NBNS has a history of vulnerabilities and effective attacks that can put systems and networks exposing or using this service at risk.  Depending on your level of paranoia, the information disclosed by a listening NBNS endpoint may also constitute a risk. These reasons, combined with a simple, non-intrusive way of identifying NBNS endpoints on the public IPv4 Internet, are why Rapid7's Project Sonar decided to undertake this study.

Data, Tools and Future Research

As part of Rapid7's Project Sonar, all data collected by this NBNS study is shared with the larger security community thanks to scans.io.  The past two years' worth of the NBNS study's data can be found here with the -netbios-137.csv.gz suffix.
The data is stored as GZIP-compressed CSV, each row of the CSV containing the metadata for the response elicited by the NBNS probe -- timestamp, source and destination IPv4 address, port, IP ID, TTL and, most importantly, the NBNS response (hex encoded). There are numerous ways one could start analyzing this data, but internally we do much of our first-pass analysis using GNU parallel and Rapid7's dap.  Below is an example command you could run to start your own analysis of this data.  It utilizes dap to parse the CSV, decode the NBNS response and return the data in a more friendly JSON format:

```
pigz -dc ~/Downloads/20160801-netbios-137.csv.gz | parallel --gnu --pipe "dap csv + select 2 8 + rename 2=ip 8=data + transform data=hexdecode + decode_netbios_status_reply data + remove data + json"
```

As an example of some of the output you might get from this, anonymized for now:

```
{"ip":"","data.netbios_names":"MYSTORAGE:00:U WORKGROUP:00:G MYSTORAGE:20:U WORKGROUP:1d:U ","data.netbios_mac":"e5:d8:00:21:10:20","data.netbios_hname":"MYSTORAGE","data.netbios_mac_company":"UNKNOWN","data.netbios_mac_company_name":"UNKNOWN"}
{"ip":"","data.netbios_names":"OFFICE-PC:00:U OFFICE-PC:20:U WORKGROUP:00:G WORKGROUP:1e:G WORKGROUP:1d:U \u0001\u0002__MSBROWSE__\u0002:01:G ","data.netbios_mac":"00:1e:10:1f:8f:ab","data.netbios_hname":"OFFICE-PC","data.netbios_mac_company":"Shenzhen","data.netbios_mac_company_name":"ShenZhen Huawei Communication Technologies Co.,Ltd."}
{"ip":"","data.netbios_names":"DSL_ROUTE:00:U DSL_ROUTE:03:U DSL_ROUTE:20:U \u0001\u0002__MSBROWSE__\u0002:01:G WORKGROUP:1d:U WORKGROUP:1e:G WORKGROUP:00:G ","data.netbios_mac":"00:00:00:00:00:00","data.netbios_hname":"DSL_ROUTE"}
```

There are also several Metasploit modules for exploring/exploiting NBNS in various ways:

- auxiliary/scanner/netbios/nbname: performs the same initial probe as the Sonar study against one or more targets but uses the NetBIOS name of the target to perform a follow-up query that will disclose the IPv4
address(es) of the target.  Useful in situations where the target is behind NAT, multi-homed, etc., and this information can potentially be used in future attacks or reconnaissance.
- auxiliary/admin/netbios/netbios_spoof: attempts to spoof a given NetBIOS name (such as WPAD) against a specific system
- auxiliary/spoof/nbns/nbns_response: similar to netbios_spoof, but listens for all NBNS requests broadcast on the local network and will attempt to spoof all names (or just a subset by way of regular expressions)
- auxiliary/server/netbios_spoof_nat: used to exploit BadTunnel

For a protocol that has been around for over 30 years and has had its fair share of research done against it, one might think that there is nothing more to be discovered, but the discovery of two high-profile vulnerabilities in NBNS this year (Hot Potato and BadTunnel) shows that there is absolutely more to be had. If you are curious about NBNS and interested in exploring more, use the data, tools and knowledge provided above.  We'd love to hear your ideas or discoveries either here in the comments or by emailing research@rapid7.com. Enjoy!

ScanNow DLL Search Order Hijacking Vulnerability and Deprecation

Overview On November 27, 2015, Stefan Kanthak contacted Rapid7 to report a vulnerability in Rapid7's ScanNow tool.  Rapid7 takes security issues seriously and this was no exception.  In combination with a preexisting compromise or other vulnerabilities, and in the absence of sufficient mitigating measures, a…

Overview

On November 27, 2015, Stefan Kanthak contacted Rapid7 to report a vulnerability in Rapid7's ScanNow tool.  Rapid7 takes security issues seriously and this was no exception.  In combination with a preexisting compromise or other vulnerabilities, and in the absence of sufficient mitigating measures, a system with ScanNow can allow a malicious party to execute code of their choosing, leading to varying levels of additional compromise.  In order to protect the small community of users who may still be using ScanNow, Rapid7 has made the decision to remove ScanNow and advises any affected users to remove ScanNow from any system that still has it.

Vulnerability Details

Rapid7's ScanNow is a collection of several separate, standalone executables built over the past several years, designed to give users and the community the ability to quickly audit themselves for some higher-profile vulnerabilities that have made varying levels of headlines during this time:

- Security Flaws in Universal Plug and Play: Unplug, Don't Play
- Heartbleed
- CVE-2012-2122: A Tragically Comedic Security Flaw in MySQL

The vulnerability disclosed to Rapid7 by Stefan is a generic vulnerability whose most official name is DLL Search Order Hijacking, but it is also referred to as DLL side loading, DLL pre-loading, binary planting, binary carpet bombing, or similar names. DLL search order hijacking went more mainstream in 2010 when ACROS Security published extensive information about it here, and it has affected hundreds of products over the years and continues to do so. This class of vulnerability occurs when a Windows application attempts to load a DLL or other library and does so with an unqualified search path. When the search path for the DLL is not sufficiently qualified, Windows has a predefined list of places that it will check for the library in question.
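That predefined search order can be sketched roughly as an ordered list of candidate paths. This is a simplification assuming SafeDllSearchMode is enabled; the 16-bit system directory, KnownDLLs, and `SetDllDirectory` are ignored, and the function is illustrative rather than any real API:

```python
def dll_search_candidates(app_dir, dll_name, cwd, path_dirs,
                          system_dir=r"C:\Windows\System32",
                          windows_dir=r"C:\Windows"):
    """Approximate the classic DLL search order (SafeDllSearchMode on):
    application directory first, then the system directories, then the
    current working directory, then each directory on PATH."""
    dirs = [app_dir, system_dir, windows_dir, cwd] + list(path_dirs)
    return [d + "\\" + dll_name for d in dirs]

# The application's own directory is consulted first -- which is why a
# malicious UXTheme.dll dropped next to a vulnerable executable wins:
cands = dll_search_candidates(r"C:\Users\me\Downloads", "UXTheme.dll",
                              r"C:\Users\me", [r"C:\tools"])
print(cands[0])  # → C:\Users\me\Downloads\UXTheme.dll
```

With SafeDllSearchMode disabled, the current working directory moves up ahead of the system directories, which is what makes "run from an untrusted directory" scenarios even more dangerous.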
Included in that list are locations that could be untrusted or insecure, and if the library in question is found in one of those locations, it is possible that the loaded library contains malicious code, which could lead to arbitrary code execution when the target executable is launched.

Upon investigating this vulnerability report, Rapid7 discovered that none of the code written by Rapid7 for ScanNow utilizes any of the potentially vulnerable Windows API calls for loading libraries or executing processes, let alone in such a way that allows the vulnerability to be exploitable in any scenario.  Instead, the vulnerability is present in ScanNow because of the way ScanNow is packaged and distributed, part of which involves 7-zip.  7-zip can be used for several things.  In the case of ScanNow, it is used to create a standalone self-extracting archive executable (SFX), which is basically just an executable that, when run, unpacks the actual ScanNow executable along with any resources it needs, and then runs ScanNow itself.  The root cause appears to be the same thing that affected the Mozilla Foundation in 2012 after a posting to full-disclosure.  It appears that this vulnerability remains unfixed, and another advisory posted by Stefan shortly after he contacted us indicates that in addition to 7-zip itself being vulnerable to DLL search order hijacking, all self-extracting archives created with 7-zip are also vulnerable.

At the current time, Rapid7 has assigned a CVSS vector of (AV:N/AC:H/Au:N/C:C/I:C/A:C) with a corresponding score of 7.6 to this vulnerability, but we also realize the particulars around the real risk posed by this vulnerability are complicated and not easily reflected in a CVSS vector.

Exploitation Scenarios

When DLL search order hijacking vulnerabilities are discussed, the topic of whether or not they are remotely exploitable is an almost inevitable point that is raised, and it quickly detracts from the issue at hand due to the reaction it typically invokes.
While we are of the opinion that calling this a remotely exploitable vulnerability is a bit inaccurate, it is important to understand that there are "remote-like" characteristics, as shown below. Exploiting a DLL search order hijacking vulnerability on Windows is not all that much different from exploiting LD_PRELOAD and similar-style vulnerabilities in the UNIX world: to exploit it, a malicious library must somehow be placed in a location that will be searched before the correct, expected library is found and loaded by an affected application.  There are numerous ways this can happen, including:

- A malicious local user or process that can write arbitrary content to a file located in a directory that will be searched when trying to locate a DLL for loading by the target application.  This could happen as the result of another previously exploited and unrelated vulnerability.
- Malicious DLLs that have been somehow delivered to inadequately protected directories using something like a drive-by download vulnerability.
- Social engineering.

Additionally, the target system and application must be free of hardening measures designed to mitigate or prevent exploitation of this style of vulnerability.

As a trivial example of how this could be exploited, I used the following code, which simply launches the Windows calculator, calc.exe, whenever this DLL is loaded:

```c
#include <windows.h>

int hijack() {
  WinExec("calc", 1);
  return 0;
}

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) {
  if (fdwReason == DLL_PROCESS_ATTACH) {
    hijack();
  }
  return TRUE;
}
```

Then, link and compile as usual.
In this case, I used mingw:

```
$ i686-w64-mingw32-gcc -c -o dll-hijack.o dll-hijack.c
$ i686-w64-mingw32-gcc -shared -o UXTheme.dll dll-hijack.o -lcomctl32 -Wl,--subsystem,windows
```

Then, I copied the malicious DLL (there can be several) to the directory where ScanNow lives or to the directory from which the call to ScanNow is made.  In this particular case, I simply deposited the malicious example DLL in my Downloads folder.  Then, when ScanNow is executed, either with an unqualified path while in Downloads or with a fully-qualified path, you can see that calc was executed, proving the existence of this vulnerability.  Obviously, the final part of exploiting this is getting the target system to somehow run ScanNow in a vulnerable environment, which could happen via social engineering, exploiting another, perhaps unrelated, vulnerability, or just getting lucky.

Remediation and Mitigation

The nature of how ScanNow is built, distributed, and used, in combination with the particulars surrounding the vulnerability, its exploitation, and the ways to mitigate it, complicated Rapid7's response to this vulnerability. It should be noted that there are various hardening techniques provided by Microsoft and others to help mitigate or prevent this class of vulnerabilities; however, they are far from fool-proof, and Rapid7 has experienced limited success when utilizing them to mitigate this particular vulnerability.  Some of these were:

Microsoft Security Advisory 2269637 is not especially relevant in this particular case.  This is essentially Microsoft's response to when this class of vulnerability first spread like wildfire back in 2010, and in it they list all vulnerabilities to date in Microsoft products of the same basic class (DLL Search Order Hijacking), of which there have been 28 MS advisories and a handful of generic MS KnowledgeBase or TechNet items since its initial writing in 2010.  That is quite a few.
However, only the solutions in two of them are even potentially relevant when discussing this vulnerability, and, as described below, most of these were also found to be problematic.

Dynamic-Link Library Security (Windows) is not really relevant in our case because we don't control the code that is making the insecure calls.  Otherwise this would be a fine option.

Dynamic-Link Library Search Order (Windows) sounds like it should work.  It basically helps when an application doesn't follow the previous recommendation, and the lock-down needs to happen on the client running the application rather than when it is built.  We found it unable to prevent exploitation of this vulnerability (in fact, the screenshots above were taken on a system configured as suggested in this link).  Why is currently unknown and may be an area for future research.

CAPEC-471: DLL Search Order Hijacking (Version 2.8) does an OK job of explaining what a DLL search order hijacking vulnerability is and how it is exploited, and suggests CAPEC-159: Redirect Access to Libraries (Version 2.8) as a solution.  That solution actually gives three options.  The first two are essentially identical to the previous two bullet points (source code modifications, which are, again, not applicable in our case, and client-side hardening, which we found to be ineffective in this instance).  The third and final suggestion is to sign system DLLs, the responsibility for which is Microsoft's in this case, and it would only be applicable if the affected application were only insecurely loading system DLLs (as opposed to non-system DLLs).  In the case of ScanNow, both Stefan's POC and ours utilize UXTheme.dll, which normally exists as uxtheme.dll in the usual locations under C:\Windows on most Windows operating systems since XP.
Had a signing solution been available, it is assumed that the vulnerability would be mitigated to some extent, but signing DLLs is a relatively easy process and there are ways to attempt to bypass signing checks like this.  In other words, signing helps, but perhaps not when facing a determined adversary.  Furthermore, Stefan's POC, called sentinel, which he has been using for several years in the security community, is signed, and it was reported that there were other avenues to exploit this class of vulnerability in ScanNow beyond UXTheme.dll, which may or may not be signed by default.

CAPEC-159: Redirect Access to Libraries (Version 2.8) covers a similar but slightly different vulnerability, and unfortunately its solutions mimic what others have suggested above, which were shown to be not relevant or ineffective for various reasons.

Both CWE-426: Untrusted Search Path (2.9) and CWE-427: Uncontrolled Search Path Element (2.9) cover vulnerabilities and solutions that are practically identical to everything above, with the aforementioned flaws.

Oftentimes, suggestions are made to advise against or prevent the execution of executables that live in unprotected locations, and to avoid running executables when the current working directory is untrusted.  Download and other temporary directories are great targets for this vulnerability, and while following this advice would help, it also makes the system a bit of a pain for the end user.  The result is basically finding a compromise between security and usability, and usability may win in many environments in this regard.

UAC Group Policy Settings and Registry Key Settings shows that there are settings on UAC-enabled systems to control automatic elevation to administrator by installers; when disabled, they require that the user authenticate successfully as an administrator before allowing the installer to utilize administrative privileges.
When this setting is disabled, all it does is prevent the immediate and easy jump from exploiting a DLL search order hijacking vulnerability in an installer to obtaining administrative privileges.  In other words, the vulnerability is still exploited, but the damage is stopped before further elevation of privilege.  While this setting is disabled on Windows enterprise operating systems, Windows home users' systems have it enabled by default and are subsequently at risk of this turning into a quicker elevation of privilege.

In short, there appear to be very few workable options for addressing this vulnerability in ScanNow, and it seems like this is a predicament many will have to contend with when faced with a DLL search order hijacking vulnerability.  Both of these areas may be ripe for future research.

Anyone who has downloaded ScanNow is advised to locate and remove the affected executables.  For any users who still have a need for ScanNow, both Metasploit and Nexpose have better coverage for the vulnerabilities that ScanNow previously covered.  For anyone who registered when downloading ScanNow over the years, Rapid7 will also be attempting to reach these users to advise them of the situation.

Rapid7 would like to again thank Stefan Kanthak for responsibly disclosing this vulnerability.

The Internet of Gas Station Tank Gauges -- Take #2

In January 2015, Rapid7 worked with Jack Chadowitz and published research related to Automated Tank Gauges (ATGs) and their exposure on the public Internet.  This past September, Jack reached out to us again, this time with a slightly different request.  The goal was to reassess…

In January 2015, Rapid7 worked with Jack Chadowitz and published research related to Automated Tank Gauges (ATGs) and their exposure on the public Internet.  This past September, Jack reached out to us again, this time with a slightly different request.  The goal was to reassess the exposure of these devices and see if the exposure had changed, and if so, how and why, but also to see if there were other ways of identifying potentially exposed devices that may have skewed our original results.

Scan Details

As you may recall, in the original study, we sent a TLS-350 Get In-Tank Inventory Report request (I20100) to all hosts on the public IPv4 Internet with 10001/TCP open.  A device speaking TLS-350 and supporting this function will respond with something similar to:

```
OCT 1, 2015 6:07 PM

<station number><station name>
<street address>
<city/state/zip/etc>
12345

IN-TANK INVENTORY

TANK PRODUCT  VOLUME  TC VOLUME  ULLAGE  HEIGHT  WATER   TEMP
1    REGULAR    4812       4771    4708   44.45    0.00  71.95
2    PLUS       3546       3507    5974   35.83    0.00  75.15
3    PREMIUM    3377       3344    6143   35.31    0.00  73.92
```

In this most recent study, completed on October 1, 2015, we repeated this request, but made the following additional requests:

TLS-350 System Revision Level request (I90200).  A device speaking TLS-350 and supporting this function will respond with something similar to:

```
OCT 1, 2015 6:18 PM

SOFTWARE REVISION LEVEL
VERSION 131.02
SOFTWARE# 346330-100-B
CREATED - S-MODULE# 330140-145-a
SYSTEM FEATURES:
  PERIODIC IN-TANK TESTS
  ANNUAL IN-TANK TESTS
  CSLD
  BIR
  FUEL MANAGER
  PLLD 0.10 AUTO 0.20 REPETITIV
  WPLLD 0.10 AUTO 0.20 REPETITIV
```

TLS-250 Inventory Report on all Tanks request (200).  A device speaking TLS-250 and supporting this function will respond with something similar to the TLS-350 Get In-Tank Inventory Report response.

TLS-250 Revision Level request (980).
A device speaking TLS-250 and supporting this function will respond with something similar to the TLS-350 System Revision Level response. While there are literally hundreds of other TLS-250, TLS-350, or other ATG TLS protocol variant commands we could have sent, the goal of these studies was to identify ATGs with unprotected dangerous functionality or sensitive information.  Attempting more of these commands likely would have identified more ATGs; however, these protocols are not well documented and, like so many other IoT things, we aren't so sure how resilient these devices are to repeated poking, so it is best to play nice.  Plus, these ATGs are connected to tanks full of untold gallons of flammable liquid, so excess caution is not totally unwarranted.

Observations

When analyzing the data from January and October's ATG studies, each response to a particular request was categorized as follows:

- Good: response appears to be a valid, non-error response for the protocol in question
- Error: response appears to be a valid error response for the protocol in question
- Unknown: unknown data was received after connecting and attempting the request
- Empty: no data was received; either the connection failed or there was no response after connecting and attempting the request

With this knowledge, we observed the following:

| Data Point | January (Initial TLS-350) | October (Enhanced TLS-250/TLS-350) | Notes |
|---|---|---|---|
| Default ATG TLS-250/TLS-350 port (10001/TCP) open | 1712285 | 1070728 | Our blacklist also increased by ~35m between these scans |
| Empty for all TLS-250 requests, Good for the TLS-350 Get In-Tank Inventory Report request, but Error for the TLS-350 System Revision Level request | n/a | 4 | This response profile aligns with how GasPot behaves |
| Responded Error for at least one TLS-250 or TLS-350 request | n/a | 804 | These devices are all likely ATGs speaking TLS-250 or TLS-350, but our request was rejected for an unknown reason |
| Responded Good for at least one TLS-250 or TLS-350 request | n/a | 6483 | A rough representation of the number of ATGs exposing sensitive data/functionality over TLS-250 or TLS-350 in some way |
| Responded Good for the TLS-350 Get In-Tank Inventory Report request | 5893 | 5214 | This shows the change between January and now most accurately, though it is caused partially by blacklist increases and predominantly by natural fluctuation |
| Responded Good or Error for at least one TLS-250 or TLS-350 request | n/a | 6502 | A rough representation of the number of ATGs that are likely directly exposed but not necessarily exposing sensitive data/functionality over TLS-250 or TLS-350 |
| Responded Unknown for all requests | n/a | 5260 | These devices are all likely not ATGs |
| Responded Unknown/Empty for all requests | n/a | 1064225 | 99.4% of the devices with 10001/TCP open are not ATGs |

As you can see from the table above, things haven't changed all that much.  The one data point that can be compared dropped by ~13%, but given that the number is so small to begin with (~5-6k), that our blacklist grew by ~5% (~35m), and that what is exposed where changes all the time on the public Internet, I view this as an insignificant change.

Conclusion

While the drop in exposed, vulnerable ATGs in the last 10 months is insignificant, one fact becomes readily apparent and should be alarming -- there are over 5000 improperly protected, IoT-style devices connected to tanks storing millions of gallons of flammable, valuable liquids all over the world.  Jack Chadowitz, the community member we've been working with on this, recently presented his findings from this work at the 2015 ICS Cyber Security Conference in Georgia. As mentioned in the original publication we did back in January, there are a variety of solutions to protect these exposed ATGs, including using VPN or firewall-based solutions or simply configuring a secure password.
We cannot draw any conclusions about the number of ATGs protected by VPN or firewall-based solutions, because it is assumed that those solutions would prevent 10001/TCP from being found open in the first place.  Regarding the use of passwords as a solution, we can make a stretch observation.  We assume that any device with 10001/TCP open that either did not respond (Empty) or returned an Unknown response to all of our TLS-250 and TLS-350 requests is either not an ATG or is an ATG that is secured through other means; from that, we propose that any device that responded Good or Error is an ATG of some sort.  We know that there were 804 devices with 10001/TCP open that responded Error, and there are several reasons why we'd get Error, including us sending an ATG request that the device just happened to not support or considered invalid, or authentication being configured.  In other words, some of those 804 devices may be using authentication, but how many is unknown and likely few.

To aid in identifying potentially vulnerable devices on your networks, we've added a simplistic Metasploit module for detecting and interacting with ATGs.  This was written and tested against GasPot, a honeypot simulating some ATG functionality; however, it is likely to work on real ATGs as well -- still, use at your own risk. We welcome your feedback!
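The probe-and-categorize approach above can be sketched as follows. The SOH (`\x01`) framing of the function code matches what GasPot and the Metasploit ATG module use; the `9999FF1B` error marker and the categorization rules here are my own simplified assumptions, not the study's exact logic:

```python
SOH = b"\x01"

def tls350_cmd(code):
    """Frame a TLS-350 function code, e.g. 'I20100' (Get In-Tank Inventory)."""
    return SOH + code.encode("ascii") + b"\n"

def categorize(response, code):
    """Bucket a response the way the study did: Good/Error/Unknown/Empty."""
    if not response:
        return "Empty"
    if b"9999FF1B" in response:   # assumed 'function not supported' error marker
        return "Error"
    if response.startswith(SOH) and code.encode("ascii") in response:
        return "Good"             # valid response echoing our function code
    return "Unknown"

assert tls350_cmd("I20100") == b"\x01I20100\n"
assert categorize(b"", "I20100") == "Empty"
```

In the real study, these Good/Error/Unknown/Empty buckets were then tallied per request type to produce the observations above.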

The Pudding is in the Proof: The Importance of Proofs in Vulnerability Management

In vulnerability management and practices like it, including simple vulnerability assessment, down and dirty penetration testing, and compliance driven auditing, when a target is tested for the presence of a particular vulnerability, in addition to the binary answer for "Is it vulnerable or not?…

In vulnerability management and practices like it, including simple vulnerability assessment, down-and-dirty penetration testing, and compliance-driven auditing, when a target is tested for the presence of a particular vulnerability, in addition to the binary answer to "Is it vulnerable or not?", oftentimes additional data will be provided that adds some confidence to that conclusion by explaining how it was reached. At Rapid7, for Nexpose, we typically refer to this data as the proof for a particular vulnerability. The purpose of this post is to talk briefly about proofs, what they are, and why they are important.

What is a Proof?

A proof in Nexpose serves to avoid reliving situations like this from math classes in school, courtesy of the Frazz comic:

At the core of a VM solution is the fundamental ability to assess a target for the presence of a vulnerability.  This ability is not magic, and the method used to obtain the answer is often nearly as important as the answer itself.  In other words, while it is great for a VM solution to be capable of giving 100% accurate results, without proof data for each finding, that VM solution may get a less than stellar report card.

A vulnerability proof in Nexpose is composed of the results of executing a variety of tests used to ascertain the existence of a given vulnerability.  Take a relatively simple vulnerability, like CVE-2007-6203, a reflected XSS vulnerability in Apache HTTPD.  Broken down into a series of smaller, simpler steps, Nexpose:

- Tests that an HTTP/HTTPS service exists
- Tests that this service looks to be Apache HTTPD
- Tests that a request with a malicious HTTP request method has this method reflected back in the error response unfiltered, allowing XSS, when the specified Content-Length header is too large

Each of these steps records its own proof result, and they are all combined to form the final proof for the vulnerability, which is in turn stored and made available in a variety of places in Nexpose.
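As a rough illustration of the third step, here is a sketch of the kind of request and reflection test involved; the marker string, header values, and function names are hypothetical stand-ins, not Nexpose's actual check:

```python
MARKER = "<script>xss_proof_1234</script>"  # hypothetical marker string

def build_probe(marker=MARKER):
    """Craft an invalid HTTP method plus an oversized Content-Length; a
    CVE-2007-6203-vulnerable Apache reflects the method back unescaped
    in its error page."""
    return ("{} / HTTP/1.1\r\n"
            "Host: target\r\n"
            "Content-Length: 99999999999999999999\r\n"
            "\r\n").format(marker).encode()

def reflected(error_page, marker=MARKER):
    """Did the error response echo our bogus method unfiltered?"""
    return marker.encode() in error_page

assert build_probe().startswith(b"<script>xss_proof_1234</script> /")
```

A real check would also record each intermediate result (service found, server is Apache HTTPD, marker reflected) so they can be assembled into the combined proof described above.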
In a more complicated scenario, for example if there are multiple ways of checking for a vulnerability, such as the existence of a patch, the version of a library, or a registry key, Nexpose will record all of the results from each method until it is confident that the system is definitively vulnerable or invulnerable.

Why Proofs?

As to why these proofs are important, consider some of the people who might benefit from knowing the how or why and not just the what:

- Security teams: while a vulnerability with a CVSS score of 10 may warrant an immediate response, security teams may be considerably more confident in their response if Nexpose can provide some level of proof that this vulnerability really does exist
- Operations teams: simply saying that a target is vulnerable may not be enough to justify risking applying a patch or taking down a critical system
- Rapid7 support and development: false positives and false negatives are, arguably, an unavoidable fact of life in the VM space, so when a team at Rapid7 is involved in such a case, it can be invaluable to see this proof data, which allows us to retrace Nexpose's steps

Where is my Pudding?  Where are the Proofs?

While proofs are a critical part of Nexpose, they are only displayed in situations where the proof data is useful. In general, you'll find no hint of any proof-like data until you start drilling down into the finer vulnerability details at the individual asset level.  For example, you can find it in the UI when viewing the vulnerability results for a single asset. Proof data can also be found in appropriately detailed reports, for example the Nexpose Audit Report or the everything-including-the-kitchen-sink report, the XML export.

As previously mentioned, the core of a VM solution is its vulnerability assessment capabilities, but a frequently ignored topic is that of the invulnerable results.
Proof data for invulnerable results can be used to validate a VM solution's negative findings, but because Nexpose and tools like it are essentially vulnerability scanners and not invulnerability scanners, the proofs for invulnerable results are not currently readily available except through an XML export report that explicitly enables them:

Be warned, though, that any report containing invulnerable results will likely be significantly larger than its vulnerable-results-only counterpart, for the simple fact that most targets will be invulnerable to vastly more vulnerabilities than not.

Finally, we'd love to hear your feedback on vulnerability proofs, so please leave a comment here, drop a message in our Nexpose community forum, or use the appropriate support mechanism. We'd love to hear how you are using vulnerability proofs or how we could improve them to assist in your VM practice. Enjoy!

12 Days of HaXmas: Exploiting CVE-2014-9390 in Git and Mercurial

This post is the eighth in a series, 12 Days of HaXmas, where we take a look at some of the more notable advancements and events in the Metasploit Framework over the course of 2014. A week or two back, Mercurial inventor Matt Mackall found what…

This post is the eighth in a series, 12 Days of HaXmas, where we take a look at some of the more notable advancements and events in the Metasploit Framework over the course of 2014. A week or two back, Mercurial inventor Matt Mackall found what ended up being filed as CVE-2014-9390. While the folks behind CVE are still publishing the final details, Git clients (before versions 1.9.5, 2.0.5, 2.1.4 and 2.2.1) and Mercurial clients (before version 3.2.3) contained three vulnerabilities that allowed malicious Git or Mercurial repositories to execute arbitrary code on vulnerable clients under certain circumstances.

To understand these vulnerabilities and their impact, you must first understand a few basic things about Git and Mercurial clients. Under the hood, a Git or Mercurial repository on disk is really just a directory. In this directory is another specially named directory (.git for Git, .hg for Mercurial) that contains all of the configuration files and metadata that make up the repository. Everything else outside of this special directory is just a pile of files and directories, often called the working directory, written to disk based on the previously mentioned metadata. So, in a way, if you had a Git repository called Test, Test/.git is the repository and everything else under the Test directory is simply a working copy of the files contained in the repository at a particular point in time. A nearly identical concept also exists in Mercurial.

Here is a quick example of a simple Git repository that has no files committed to it.
As you can see, even this empty repository has a fair amount of metadata and a number of configuration files:

$ git init foo
$ tree -a foo
foo
└── .git
    ├── branches
    ├── config
    ├── description
    ├── HEAD
    ├── hooks
    │   ├── applypatch-msg.sample
    │   ├── commit-msg.sample
    │   ├── post-update.sample
    │   ├── pre-applypatch.sample
    │   ├── pre-commit.sample
    │   ├── prepare-commit-msg.sample
    │   ├── pre-rebase.sample
    │   └── update.sample
    ├── info
    │   └── exclude
    ├── objects
    │   ├── info
    │   └── pack
    └── refs
        ├── heads
        └── tags

If you then add a single file to it called test.txt, you can see how the directory starts to change as the raw objects are added to the .git/objects directory:

$ cd foo
$ date > test.txt && git add test.txt && git commit -m "Add test.txt" -a
[master (root-commit) fb19d8e] Add test.txt
 1 file changed, 1 insertion(+)
 create mode 100644 test.txt
$ git log
commit fb19d8e1e5db83b4b11bbd7ed91e1120980a38e0
Author: Jon Hart
Date: Wed Dec 31 09:08:41 2014 -0800

    Add test.txt

$ tree -a .
.
├── .git
│   ├── branches
│   ├── COMMIT_EDITMSG
│   ├── config
│   ├── description
│   ├── HEAD
│   ├── hooks
│   │   ├── applypatch-msg.sample
│   │   ├── commit-msg.sample
│   │   ├── post-update.sample
│   │   ├── pre-applypatch.sample
│   │   ├── pre-commit.sample
│   │   ├── prepare-commit-msg.sample
│   │   ├── pre-rebase.sample
│   │   └── update.sample
│   ├── index
│   ├── info
│   │   └── exclude
│   ├── logs
│   │   ├── HEAD
│   │   └── refs
│   │       └── heads
│   │           └── master
│   ├── objects
│   │   ├── 1c
│   │   │   └── 8fe13acf2178ea5130480625eef83a59497cb0
│   │   ├── 4b
│   │   │   └── 825dc642cb6eb9a060e54bf8d69288fbee4904
│   │   ├── e5
│   │   │   └── 58a44cf7fca31e7ae5f15e370e9a35bd1620f7
│   │   ├── fb
│   │   │   └── 19d8e1e5db83b4b11bbd7ed91e1120980a38e0
│   │   ├── info
│   │   └── pack
│   └── refs
│       ├── heads
│       │   └── master
│       └── tags
└── test.txt

Similarly, for Mercurial:

$ hg init blah
$ tree -a blah
blah
└── .hg
    ├── 00changelog.i
    ├── requires
    └── store

2 directories, 2 files
$ cd blah
$ date > test.txt && hg add test.txt && hg commit -m "Add test.txt"
$ hg log
changeset: 0:ea7dac4a11f0
tag: tip
user: Jon Hart
date: Wed Dec 31 09:25:07 2014 -0800
summary: Add test.txt

$ tree -a .
.
├── .hg
│   ├── 00changelog.i
│   ├── cache
│   │   └── branch2-served
│   ├── dirstate
│   ├── last-message.txt
│   ├── requires
│   ├── store
│   │   ├── 00changelog.i
│   │   ├── 00manifest.i
│   │   ├── data
│   │   │   └── test.txt.i
│   │   ├── fncache
│   │   ├── phaseroots
│   │   ├── undo
│   │   └── undo.phaseroots
│   ├── undo.bookmarks
│   ├── undo.branch
│   ├── undo.desc
│   └── undo.dirstate
└── test.txt

These directories (.git, .hg) are created by a client when the repository is initially created or cloned. The contents of these directories can be modified by users to, for example, configure repository options (.git/config for Git, .hg/hgrc for Mercurial), and are routinely modified by Git and Mercurial clients as part of normal operations on the repository. Simplified, the .hg and .git directories contain everything necessary for the repository to operate, and everything outside of these directories is considered part of the working directory, namely the contents of the repository itself (test.txt in my simplified examples). Want to learn more? Git Basics and Understanding Mercurial are great resources.

During routine repository operations such as cloning, updating, committing, etc., the repository working directory is updated to reflect the current state of the repository. Using the examples from above, upon cloning either of these repositories, the local clone of the repository would be updated to reflect the current state of test.txt. This is where the trouble begins. Both Git and Mercurial clients have long had code that ensures no commits are made to anything in the .git or .hg directories.
Because these directories control client-side behavior of a Git or Mercurial repository, if they were not protected, a Git or Mercurial server could potentially manipulate the contents of certain sensitive files in the repository, causing unexpected behavior when a client performs certain operations on the repository. Unfortunately, these sensitive directories were not properly protected in all cases. Specifically:

- On operating systems with case-insensitive file systems, like Windows and OS X, Git clients (before versions 1.9.5, 2.0.5, 2.1.4 and 2.2.1) can be convinced to retrieve and overwrite sensitive configuration files in the .git directory, which can allow arbitrary code execution if a vulnerable client can be convinced to perform certain actions (for example, a checkout) against a malicious Git repository. While a commit to a file under .git (all lower case) would be blocked, a commit to .giT (partially lower case) would not be blocked and would result in .git being modified, because .git is equivalent to .giT on a case-insensitive file system.
- These same Git clients, as well as Mercurial versions before 3.2.3, have a nearly identical vulnerability that affects HFS file systems (OS X and Windows) where certain Unicode codepoints are ignored in file names.
- Mercurial before 3.2.3 has a nearly identical vulnerability on Windows only, where MS-DOS "short names" in 8.3 format are possible.

Basic exploitation of the first vulnerability is fairly simple to do with basic Git commands, as I described in #4435, and the commits that fix the second and third vulnerabilities show simple examples of how to exploit them. But basic exploitation is boring, so in #4440 I've spiced things up a bit. As currently written, this module exploits the first of these three vulnerabilities by launching an HTTP server designed to simulate a Git repository accessed over HTTP, which is one of the most common ways to interact with Git.
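The heart of the first issue can be boiled down to a string comparison. The sketch below is an illustration, not the actual Git source: a naive, case-sensitive blacklist misses path variants that a case-insensitive filesystem (Windows, OS X) treats as the same directory.

```python
# Illustrative sketch of the case-sensitivity flaw in CVE-2014-9390.
# A byte-for-byte comparison against ".git" misses ".giT", which a
# case-insensitive filesystem maps onto the same directory. The real
# fixes are more involved (they also handle ignorable Unicode
# codepoints and 8.3 short names); this only shows the core idea.

def naive_is_protected(path: str) -> bool:
    # Case-sensitive check, roughly what vulnerable clients effectively did
    return path.split("/")[0] == ".git"

def patched_is_protected(path: str) -> bool:
    # Case-insensitive comparison catches .giT, .GIT, etc.
    return path.split("/")[0].lower() == ".git"

# A malicious repository commits this path; on a case-insensitive
# filesystem the checkout lands inside .git/hooks/.
malicious = ".giT/hooks/post-checkout"
```

A server only needs to get one such path past the client's check, because anything written under .git/hooks/ is executable code the client will later run.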
Upon cloning this repository, vulnerable clients will be convinced to overwrite Git hooks, which are shell scripts that get executed when certain operations happen (committing, updating, checkout, etc.). By default, this module overwrites the .git/hooks/post-checkout script, which is executed upon completion of a checkout. That conveniently happens at clone time, so the simple act of cloning a repository can allow arbitrary code execution on the Git client. The module goes a little bit further and provides some simplistic HTML in the hopes of luring in potentially vulnerable clients. And, if you clone it, it only looks mildly suspicious:

$ git clone
Cloning into 'ldf'...
$ cd ldf
$ git log
commit 858597e39d8a5d8e3511d404bcb210948dc835ae
Author: Deborah Phillips
Date: Thu Apr 29 17:44:02 2004 -0500

    Initial commit to open git repository for nf.tygzxwf.xnk0lycynl.org!

The module has the beginnings of support for the second and third vulnerabilities, so this particular #haxmas gift may need some work by you, the Metasploit community. Enjoy!

Amp Up and Defy Amplification Attacks -- Detecting Traffic Amplification Vulnerabilities with Nexpose

Approximately a year ago, the Internet saw the beginnings of what would become the largest distributed denial of service (DDoS) attacks ever seen.  Peaking at nearly 400Gbps in early 2014, these attacks started when a previously undisclosed vulnerability that would ultimately become CVE-2013-5211 was…

Approximately a year ago, the Internet saw the beginnings of what would become the largest distributed denial of service (DDoS) attacks ever seen. Peaking at nearly 400Gbps in early 2014, these attacks started when a previously undisclosed vulnerability, which would ultimately become CVE-2013-5211, was discovered. While these attacks were devastating and received plenty of press, the style of attack was not new. In fact, it had been occurring routinely for years prior and continues to this day.

Denial of service (DoS) attacks are quite general -- anything that allows an attacker to deny access to a particular resource from its intended users is considered a denial of service attack. A distributed denial of service (DDoS) attack is a variant whereby the attack is made possible or more practical by utilizing more than one attacking system. One of the most common types of DDoS attack simply overwhelms the target with network traffic/requests. One particularly nasty variant is the distributed reflected/reflective denial of service (DRDoS) attack, where one or more attacking systems send UDP traffic to systems vulnerable to traffic amplification vulnerabilities but use forged source IP addresses belonging to the intended targets of the DRDoS attack. These traffic amplification vulnerabilities are almost exclusively related to UDP protocols where the size/number of responses is larger than the UDP request that caused them. The DRDoS attack takes advantage of the fact that there is an amplification factor -- since the number/size of the responses generated by a single request is larger, an attacker can use this to direct more traffic at a target than the attacker's resources would otherwise allow.
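To make the amplification factor concrete, here is a quick back-of-the-envelope calculation using the bandwidth amplification factors listed in US-CERT's TA14-017A advisory. The attacker uplink figure is a made-up example value.

```python
# Bandwidth amplification factors (BAF) for a few UDP protocols, as
# listed in US-CERT advisory TA14-017A. BAF = amplifier response bytes
# per attacker request byte.
baf = {
    "DNS": 28.7,
    "NTP": 556.9,
    "SNMPv2": 6.3,
    "CHARGEN": 358.8,
    "SSDP": 30.8,
}

# Hypothetical attacker with a 100 Mbps uplink reflecting everything
# through vulnerable NTP servers (monlist):
attacker_uplink_mbps = 100
victim_mbps = attacker_uplink_mbps * baf["NTP"]  # roughly 55 Gbps at the victim
```

This is why a modest botnet plus a pool of open amplifiers can produce the hundreds of gigabits per second seen in early 2014.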
Done at sufficient scale with a high enough amplification factor, DRDoS attacks of this nature can be crippling to even the most well-equipped and well-funded organizations.

There are numerous problems one will encounter when trying to weather or deflect a DDoS attack, but a DRDoS attack in particular adds an interesting twist -- if you are suffering a DRDoS attack, the attack is only possible because other systems, presumably not under your control, are vulnerable to traffic amplification vulnerabilities. In other words, you become affected because of another organization's vulnerabilities.

In an effort to help address this problem, the 12/10/2014 release of Nexpose adds coverage for the UDP protocols commonly used to conduct DRDoS attacks, as outlined in US-CERT's TA14-017A advisory from earlier this year. To take full advantage of this coverage, you should ensure that you obtain both the content and product updates from 12/10/2014 and scan with the Full Audit or similar template. If you like, you can add just these checks to your own custom scan template by simply searching for any vulnerability referencing TA14-017A.

By scanning your assets for these traffic amplification vulnerabilities and appropriately addressing any affected endpoints, you can help ensure that your assets are not used to perform DRDoS attacks against others. Enjoy!

R7-2014-17: NAT-PMP Implementation and Configuration Vulnerabilities

Overview In the summer of 2014, Rapid7 Labs started scanning the public Internet for NAT-PMP as part of Project Sonar.  NAT-PMP is a protocol implemented by many SOHO-class routers and networking devices that allows firewall and routing rules to be manipulated to enable internal, assumed…

Overview

In the summer of 2014, Rapid7 Labs started scanning the public Internet for NAT-PMP as part of Project Sonar. NAT-PMP is a protocol implemented by many SOHO-class routers and networking devices that allows firewall and routing rules to be manipulated, enabling internal, assumed-trusted users behind a NAT device to allow external users to access internal TCP and UDP services for things like Apple's Back to My Mac and file/media sharing services. NAT-PMP is a simplistic but useful protocol; however, the majority of its security mechanisms rely on proper implementation of the protocol and a keen eye on the configuration of the service(s) implementing NAT-PMP. Unfortunately, after performing these harmless scans across UDP port 5351 and testing theories in a controlled lab environment, it was discovered that 1.2 million devices on the public Internet are potentially vulnerable to flaws that allow interception of sensitive, private traffic on the internal and external interfaces of a NAT device, as well as various other flaws. No CVEs have been assigned for any of these; however, CERT/CC has been involved and allocated VU#184540.

Vulnerability Summary

During our research, we identified approximately 1.2 million devices on the public Internet that responded to our external NAT-PMP probes. Their responses represent two types of vulnerabilities: malicious port mapping manipulation and information disclosure about the NAT-PMP device.
These can be broken down into five specific issues, outlined below:

- Interception of internal NAT traffic: ~30,000 (2.5% of responding devices)
- Interception of external traffic: ~1.03m (86% of responding devices)
- Access to internal NAT client services: ~1.06m (88% of responding devices)
- DoS against host services: ~1.06m (88% of responding devices)
- Information disclosure about the NAT-PMP device: ~1.2m (100% of responding devices)

RFC 6886 states:

The NAT gateway MUST NOT accept mapping requests destined to the NAT gateway's external IP address or received on its external network interface. Only packets received on the internal interface(s) with a destination address matching the internal address(es) of the NAT gateway should be allowed.

This is a critical requirement, given that the source IP address of incoming NAT-PMP messages is what is used to create the mappings, as also stated in RFC 6886:

The source address of the packet MUST be used for the internal address in the mapping.

The root cause of these issues is that some vendors are violating these portions of the RFC.

What is NAT-PMP?

NAT-PMP is a simple UDP protocol used for managing the port-forwarding behavior of NAT devices. While it has been around since approximately 2005, it was formally defined in RFC 6886, which was pushed largely by Apple in 2013 as a better replacement for the Internet Gateway Device (IGD) Standard Device Control Protocol, a part of UPnP. NAT-PMP clients and libraries are available for nearly all operating systems and platforms. NAT-PMP servers are similarly available. An unknown but likely large portion of servers, and perhaps fewer clients, are based on miniupnp. To understand what NAT-PMP does and why the vulnerabilities we will describe are both real and significant, you must first understand how NAT-PMP works.
NAT-PMP is designed to be deployed on NAT devices assumed to have at least two network interfaces: one listening on a private, often RFC1918, network and another on a public, generally Internet-facing network. Clients behind the NAT device may wish to expose local TCP or UDP services to hosts on the public network (again, usually the Internet) and can use NAT-PMP to achieve this. Assume the following setup:

While it is absolutely possible to use different setups, for example with all private addresses or all public addresses, NAT-PMP was designed to be deployed similarly to what is shown above. Even in these nonstandard deployments, however, many of the same security vulnerabilities may still exist. NAT-PMP really offers only two types of operations:

- Obtain the external address of the NAT-PMP device
- Establish a mapping of a TCP or UDP port such that a service on the private network can be reached from the public Internet, and subsequently destroy this mapping

To achieve this, a NAT-PMP implementation needs to know:

- Its external IP address, which is ultimately where hosts on the public Internet will try to connect to mapped TCP and UDP ports
- Where to listen for NAT-PMP messages

Using the setup described earlier, if a private client wants to allow hosts on the public Internet to connect to a game server it is hosting, the following (simplified) messages are exchanged:

- Client: "Please map UDP port 1234 from the outside to my UDP port 1234"
- NAT device: either "OK, port 1234/UDP has been forwarded to your port 1234/UDP", if the exact mapping was possible, or "OK, port 5678/UDP has been forwarded to your port 1234/UDP", if the exact match wasn't possible but another was; in this case NAT-PMP has chosen port 5678 instead
- Client: "What is your public IP address?"
- NAT device: "My public IP address is a.b.c.d"

Now the client can advertise a.b.c.d:1234/UDP as a way for hosts from the public Internet to connect to the server it is running.
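On the wire, the two request types above are tiny UDP payloads laid out as described in RFC 6886. The sketch below only builds the raw bytes; actually sending them to a gateway on 5351/udp is omitted.

```python
import struct

# Sketch of the two NAT-PMP request payloads described above, laid out
# per RFC 6886. Building bytes only; no network I/O.

def external_address_request() -> bytes:
    # Version 0, opcode 0: "What is your public IP address?"
    return struct.pack("!BB", 0, 0)

def map_request(udp: bool, internal_port: int,
                external_port: int, lifetime: int = 7200) -> bytes:
    # Version 0, opcode 1 (map UDP) or 2 (map TCP), 2 reserved bytes,
    # internal port, suggested external port, lifetime in seconds.
    opcode = 1 if udp else 2
    return struct.pack("!BBHHHI", 0, opcode, 0,
                       internal_port, external_port, lifetime)

# "Please map UDP port 1234 from the outside to my UDP port 1234"
payload = map_request(True, 1234, 1234)
```

Note that nothing in either payload names the client to forward to: per the RFC, the gateway takes the internal address from the packet's source IP, which is exactly the property the spoofing attacks described later abuse.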
Exactly how this is achieved under the hood of a NAT-PMP device depends on the implementation, but generally it interacts directly with the firewall, routing, and/or NAT capabilities of the NAT-PMP device and inserts new rules to allow the traffic to flow. For example, Linux implementations of NAT-PMP often use iptables or ipchains, ipfw is used on OS X and derivatives, etc. The resulting traffic flow would look like:

external client -> a.b.c.d:1234/UDP ->

The destination address of the resulting traffic flow corresponds to the IP address of the client that requested the mapping. It is important to note that the traffic flows created here are meant to control how traffic to the external address is handled. Furthermore, it is important to understand that the external address is not necessarily the public/external address of the NAT-PMP device, but rather what the NAT-PMP device thinks it is. In a properly deployed setup, however, the external address as reported by NAT-PMP will be identical to the public/external IP address of the device.

Security Features

NAT-PMP is designed to be simple, lightweight, and used only on networks where the clients are reasonably trusted, and as such there are no security capabilities currently built into the protocol itself. In fact, the RFC goes as far as to say that if you care, use IPsec.
Many NAT-PMP implementations, however, offer capabilities that may include:

- Restricting what networks/interfaces it will listen and respond on for NAT-PMP messages
- Restricting what clients can forward to/from using IP address, port, and protocol restrictions

Vulnerability Details

Malicious NAT-PMP Port Mapping Manipulation

When improperly configured to listen for and respond to NAT-PMP messages on an untrusted interface, in the absence of ACLs controlling what clients can forward, attackers can create malicious NAT-PMP port mappings that allow:

- Interception of TCP and UDP traffic from internal, private NAT clients destined to the internal, private address of the NAT-PMP device itself. This can allow for interception of sensitive internal services such as DNS and HTTP/HTTPS administration.
- Interception of TCP and UDP traffic from external hosts to the external address of the NAT-PMP device or the private NAT clients.
- Access to services provided by clients behind the NAT device by spoofing NAT-PMP port mapping requests.
- DoS against the NAT-PMP device itself by requesting an external port mapping for a UDP or TCP service already listening on that port, including the NAT-PMP service itself.

We will now explore each of these in a little more depth.

Interception of Internal Traffic

If a NAT-PMP device is incorrectly configured, sets its NAT-PMP external interface to be the internal interface that the NAT clients use as their gateway, and listens for NAT-PMP messages on both the internal and external interfaces, it is possible for remote attackers to intercept arbitrary TCP and UDP traffic destined to (not through) the NAT-PMP device's internal interface from internal NAT clients.
In this attack, traffic destined to the NAT-PMP device's internal interface is forwarded out of the NAT network to an external attacker, and would likely target a service that could be leveraged for further exploitation, such as DNS or HTTP/HTTPS administrative services on the NAT-PMP device itself.

To demonstrate this, a NAT-PMP implementation running on a Linux system has been installed with iptables and two interfaces -- one external, listening on the public Internet with an address from test space (thanks, RFC 5737!), and another internal, listening on RFC1918 space. This device is purposely misconfigured: it listens on all addresses for NAT-PMP and incorrectly sets miniupnpd's external interface to be the internal interface. This system also hosts a simple HTTP administration page for internal use only, as seen when browsing to it.

Then, utilizing Metasploit's natpmp_map module, we attack the external, public address and establish a mapping to intercept the HTTP administration requests, forwarding 80/TCP on the NAT-PMP device to our IP address. Inspecting the firewall rules on the target, we can see that rules were established that should forward any HTTP traffic destined to the NAT-PMP device's internal address to our IP. Finally, we confirm this by browsing to the administration page again. Notice that the login is now different ("HOME NAT Interception").

To take this even further, oftentimes NAT clients are configured to utilize their NAT device as their DNS server. By establishing a mapping for DNS that redirects to a malicious host, we can control DNS responses and use this to launch further attacks against the NAT clients, including client-side attacks.
First, we establish the mapping for DNS on 53/UDP. Then, utilizing Metasploit's fakedns module, we reply with an address we control for any requests for example.com's A record. Finally, we show that example.com now resolves to the address we specified. By intercepting and controlling DNS requests for these private NAT clients, we can redirect arbitrary HTTP/HTTPS requests to arbitrary, presumably malicious hosts on the public Internet, which allows all manner of further attack vectors.

Interception of External Traffic

If a NAT-PMP device is incorrectly configured, sets its NAT-PMP external interface to be the external interface that faces the public Internet, and listens for NAT-PMP messages on both the internal and external interfaces, it is possible for remote attackers to intercept arbitrary TCP and UDP traffic destined from external hosts to, and perhaps through, the NAT-PMP device's external interface. In this attack, traffic destined to the NAT-PMP device's external interface is forwarded back out, in effect bouncing requests from the NAT-PMP device's external interface to an attacker. Devices vulnerable to this will report the external address to be something external, often the public IPv4 address on the Internet.

The attack scenarios are similar to the previous internal-targeted attack; however, this time they would typically target services legitimately exposed externally. Target services could again include HTTP/HTTPS and other administrative services, except in this case the potential victim of the traffic interception would be anyone trying to administer the device remotely, which could include ISPs who expose HTTP/HTTPS services to the world but protect them with a password. This attack can also be used to cause the NAT-PMP device to respond to and forward traffic for services it isn't even listening on.
For example, if the NAT-PMP device does not have a listening HTTP service on the external interface, this same flaw could be used to redirect inbound HTTP requests to another external host, making it appear that HTTP content hosted on the external host is hosted by the NAT-PMP device.

Access to Internal NAT Client Services

Because NAT-PMP utilizes the source IP address as the target to which traffic will be forwarded, as the previous two attack scenarios demonstrate, it is critical to control where NAT-PMP listens for messages as well as what IP addresses are allowed to create mappings. If, however, ACLs exist that restrict which clients can create mappings but NAT-PMP is incorrectly configured to listen for NAT-PMP messages on an untrusted interface, such as a WAN interface connected to the Internet, it is still possible to create mappings by spoofing NAT-PMP mapping requests, using a source address that matches a valid, internal network range served by the NAT-PMP device. Practically speaking, this attack is considerably more difficult thanks to measures like BCP38 that are used to thwart attacks relying on spoofed source IP addresses.

DoS Against Host Services

Taking the attack scenarios described in the first vulnerability, it is possible to turn them into DoS conditions against the NAT-PMP device itself by creating mappings for other services already listening on the NAT-PMP device, presumably internally. For example, using the same setup as in vulnerabilities 1-2, if we request a mapping for the NAT-PMP service itself, we can redirect any NAT-PMP messages to a host of our choosing, in turn preventing any further mappings from being created.
First, we act like a legitimate client on the private NAT network and request a forwarding for 1234/UDP, which works. Then, we attack from the outside and establish a bogus mapping for the NAT-PMP service itself. Finally, we again act as a legitimate client on the private NAT network and again request a forwarding for 1234/UDP, which now fails.

NAT-PMP Information Disclosure

If improperly deployed, NAT-PMP can disclose:

- The "external address", if listening on the public Internet. This can often include an internal, RFC1918 address if the external interface is incorrectly set to the internal, private interface. Metasploit's natpmp_external_address module can be used to demonstrate this.
- The ports the NAT-PMP device is listening on, without having to scan those ports directly. By requesting a mapping with an external port that corresponds to a port the NAT-PMP device is already listening on, for example another common NAT service such as DNS, HTTP/HTTPS, SSH, or Telnet, some NAT-PMP implementations, most notably OS X, will respond in a positive manner but indicate that a different external port was used. Metasploit's natpmp_portscan module can be used to demonstrate this.
- Similarly, ports already forwarded or in use by another NAT client.

These information disclosure vulnerabilities present relatively little risk; however, in the spirit of disclosing even research that doesn't result in huge security implications, I figured it was worth discussing them here. If nothing else, they are a fun exercise and may open future areas of research. Now for some miscellany.
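The port-scanning trick above keys off fields in the NAT-PMP mapping response. A response can be unpacked per the RFC 6886 layout like so (the sample bytes are fabricated for illustration):

```python
import struct

# Sketch of parsing a NAT-PMP mapping response per RFC 6886: version,
# opcode (request opcode + 128), 16-bit result code, 32-bit seconds
# since epoch, internal port, mapped external port, 32-bit lifetime.

def parse_map_response(data: bytes) -> dict:
    version, opcode, result, epoch, internal, external, lifetime = \
        struct.unpack("!BBHIHHI", data)
    return {"version": version, "opcode": opcode, "result": result,
            "epoch": epoch, "internal_port": internal,
            "external_port": external, "lifetime": lifetime}

# Fabricated response: success (result 0), but the mapped external port
# is 5678 rather than the requested 1234 -- on some implementations a
# hint that the device itself is already using port 1234.
sample = struct.pack("!BBHIHHI", 0, 129, 0, 1000, 1234, 5678, 7200)
parsed = parse_map_response(sample)
```

Comparing the requested external port against the one the device actually granted is the entire side channel: no packet ever needs to touch the port being probed.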
External Address and Response Code Analysis

Using the first information disclosure and applying it to our Sonar data set, we discovered the following breakdown in the types of external addresses returned:

- 1,032,492 devices responded with an external IP address identical to the IP address Sonar contacted them on
- 104,183 devices responded with status code 3, indicating that the NAT device itself hasn't received an IP address from DHCP, presumably on the truly internal network
- 34,187 devices responded with an all-zeros address, usually indicating that the other interface hasn't received an IP address yet
- 24,927 devices responded with an RFC1918 address (the three RFC1918 spaces had 17810, 4139 and 3608 devices, respectively)
- 7,400 devices' responses were from a single ISP in Israel that responds to unwarranted UDP requests of any sort with HTTP responses from nginx. Yes, HTTP over UDP.
- 2,974 devices responded with an external address that wasn't identical to the IP address Sonar contacted them on, wasn't loopback or RFC1918, and wasn't in an obviously similar network
- 1,037 devices responded with an external address allocated for Carrier-Grade NAT (CGN) address space, a private space set aside for large-scale NAT deployments in RFC6598. Yes, there are 1,037 "carrier grade" devices out there with an almost trivially exploitable gaping vulnerability allowing traffic interception. CGN is Inception-style NAT: NAT within NAT, because even ISPs have more customers than they do usable IPv4 space. Because each address in CGN in theory fronts hundreds, thousands, or perhaps even more customers, each with their own RFC1918 networks with untold numbers of devices, the potential for impact here could be tremendous.
- 845 devices responded with an external IP address different from the IP address Sonar contacted them on, but in the same /16
- 240 devices responded with a loopback address. Does this imply that we could intercept loopback traffic? Oh, the horrors.
- 128 devices responded with an external IP address different from the IP address Sonar contacted them on, but in the same /24

Side-channel Port Scanning

Using the second information disclosure issue above, you can gain a deeper understanding of these systems. For example, on an Apple Airport, using the second technique, we can discover all sorts of fun ports that are apparently used internally on the Airport but don't appear to be open from the outside. Some of these have obvious explanations, but others don't. What are these services? Why would the Airport not allow me to map some of these ports when the ports show as closed in a port scan? Is the Airport actively using them? If so, who can connect to them, if it isn't me, the owner of the device?

Are They Backwards?

If you think about how most SOHO-style routers/firewalls are built, they are generally some sort of embedded-friendly operating system, perhaps even a stripped-down Linux install, running a DHCP client on a WAN interface and a DHCP server on the internal interface(s). What if some number of the devices responding to NAT-PMP on the public Internet were simply cabled backwards, literally with the WAN connection plugged into the LAN port and vice versa? Could it really be that simple? Theoretically, if the device's firewalling/routing capabilities can handle any arbitrary WAN or LAN port asking for or serving DHCP leases, for example, but the NAT-PMP implementation can't, then in effect every address on the public Internet is technically behind these NAT devices and may have all of the NAT-PMP capabilities that legitimate clients in a proper NAT-PMP deployment would have, including creating firewall and routing rules.

Deep in the Bowels of Carrier Networks, Lurking RFC1918 Hipsters, or Patterns of Problems

How often have you, for example, tracerouted through or to a particular network, only to see RFC1918 addresses show up in the responses?
It definitely happens, and ISPs and other carriers have been known to use RFC1918 address space internally for any number of legitimate reasons. So, does this mean that these NAT-PMP responses with RFC1918 external addresses are coming from ISP and carrier equipment? Or are there really 568 people out there deciding, "You know, using the first available address in this RFC1918 address space is too popular; I'm going to pick 10.11.14.2 for the address of my device"?

Or, are the hosts returning RFC1918 addresses in the NAT-PMP external address probes displaying patterns in the addresses themselves? For each of the three RFC1918 address spaces, we observed the following breakdown in terms of the top 5 addresses returned from each space:

NAT-PMP External Address    Count
192.168.0.50                  784
192.168.1.64                  334
192.168.1.2                   152
192.168.100.2                 118
192.168.1.100                  40

NAT-PMP External Address    Count
10.11.14.2                    565
10.10.10.10                    27
10.0.0.3                       17
10.0.0.4                       11
10.10.14.2                     10

NAT-PMP External Address    Count
172.17.0.10                    11
172.20.20.2                    10
172.16.225.114                  3
172.16.18.131                   3
172.16.1.11                     3

Using these popular addresses and a search engine, you'll very quickly find that there may be particular devices that typically use these addresses -- for example, devices that ship with a hard-coded RFC1918 address that must be changed.

Country Analysis

Rather than looking at the external address reported by NAT-PMP, if you instead take the public IPv4 address that responded and look at the country and organization of origin, you start to see some interesting results. A significant portion of the responses come from ISPs and telecom companies in large countries where the number of public IPv4 addresses allocated to that country is small relative to the number of people and devices looking for Internet access. Furthermore, a number of them are primarily mobile phone/data providers operating in areas where the easiest option for providing Internet access to customers is to do it wirelessly in some way.
Country               Responding IPs
Argentina                    145,866
Russian Federation           133,126
China                        119,043
Brazil                       110,007
India                         99,168
Malaysia                      89,934
United States                 64,182
Mexico                        50,662
Singapore                     49,713
Portugal                      18,863

Affected Vendors

Incorrect miniupnp configurations are likely to blame for most occurrences of these issues, as miniupnp is available for almost every platform that would ever connect to the Internet in some way. There are upwards of 1.2 million devices on the public Internet that exhibit signs of being vulnerable to one or more of the previously described vulnerabilities. To be clear, though, even though there are almost certainly miniupnp NAT-PMP instances out there that are vulnerable to the issues we are disclosing, these likely stem from misconfigurations of miniupnp rather than a flaw in miniupnp itself. Furthermore, it is likely that many of the vulnerable NAT-PMP instances are not based on miniupnp at all and may have more or different vulnerabilities.

During the initial discovery of this vulnerability and as part of the disclosure process, Rapid7 Labs attempted to identify which specific products supporting NAT-PMP were vulnerable, but that effort did not yield especially useful results. To attempt to identify these products, we used the results of Sonar's probes to identify misconfigured or misimplemented hosts and then correlated that with other data that Sonar collects. While things did seem promising at first, when our correlation started hinting that the problem was widespread across dozens of products from a variety of vendors, upon acquiring a representative subset of these products and testing them in our lab we were unable to identify situations in which these products could be vulnerable to the NAT-PMP vulnerabilities discussed in this advisory.
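Testing a candidate device for these disclosures by hand is straightforward, since NAT-PMP requests are tiny, fixed-format UDP payloads. Below is a minimal sketch using only the Python standard library; the function names are my own, and the wire formats follow RFC 6886 (the 2-byte external-address request and the 12-byte mapping request) rather than any Rapid7 tooling:

```python
import socket
import struct

NATPMP_PORT = 5351

def build_external_addr_request():
    """Opcode 0: ask the gateway for its external IPv4 address."""
    return struct.pack("!BB", 0, 0)          # version 0, opcode 0

def build_map_request(port, opcode=2, lifetime=60):
    """Opcode 1 (UDP) or 2 (TCP): request a port mapping. Comparing the
    requested external port with the one actually granted is the basis
    of the side-channel port scan described above."""
    # version, opcode, reserved, internal port, suggested ext. port, lifetime
    return struct.pack("!BBHHHI", 0, opcode, 0, port, port, lifetime)

def parse_external_addr_response(data):
    """Return (result_code, dotted-quad external address)."""
    _ver, _op, result, _epoch, addr = struct.unpack("!BBHI4s", data[:12])
    return result, socket.inet_ntoa(addr)

def probe(host, payload, timeout=2.0):
    """Fire one NAT-PMP request at a host and return the raw reply, if any."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(payload, (host, NATPMP_PORT))
        return sock.recvfrom(512)[0]
    except (socket.timeout, OSError):
        return None
    finally:
        sock.close()
```

A device that answers `probe(target, build_external_addr_request())` on its public-facing address with result code 0 is exhibiting the first disclosure; a granted external port that differs from the one requested via `build_map_request()` hints that the requested port is already in use on the device.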
Furthermore, because of the technical and legal complexities involved in uncovering the true identity of devices on the public Internet, it is entirely possible, perhaps even likely, that these vulnerabilities are present in popular products in default or supported configurations. Because of the potential impact of these vulnerabilities and the difficulty in identifying which products are vulnerable, Rapid7 Labs opted to engage CERT/CC to handle the initial outreach to potentially affected vendors and organizations.

Conclusions

The vulnerabilities disclosed in this advisory are not theoretical; however, how many devices on the public Internet are actually vulnerable to the more severe traffic interception issues is unknown. Vendors producing products with NAT-PMP capabilities should take care to ensure that flaws like the ones disclosed in this document are not possible in normal, and perhaps even abnormal, configurations. ISPs and entities that act like ISPs should take care to ensure that the access devices provided to customers are similarly free from these flaws. Lastly, consumers with NAT-PMP capable devices on their networks should ensure that all NAT-PMP traffic is prohibited on untrusted network interfaces.

Thanks to Rapid7 for supporting this research, CERT/CC for assisting with coordinated, responsible disclosure, and Austin Hackers Anonymous (AHA!), where I initially presented some of the beginnings of this research back in 2011.

Updates

10/22/2014: CERT/CC informed us that the miniupnp project has taken measures to prevent most of the issues described here.
In particular, NAT-PMP messages arriving on a WAN interface are now logged and discarded (commit 16389fda3c5313bffc83fb6594f5bb5872e37e5e), the default configuration file must now be explicitly configured by anyone running miniupnp, and that file now has more commentary on the importance of selecting the correct external interface and ACLs for mappings (commit 82604ec5d0a12e87cb5326ac2a34acda9f83e837).
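For those running miniupnpd themselves, the crux of the fix is correct interface selection plus restrictive permission rules. A purely illustrative configuration fragment follows; the interface names and LAN subnet are assumptions for the sake of example, not recommendations for any particular device:

```conf
# miniupnpd.conf (illustrative fragment)
ext_ifname=eth0        # must be the true WAN interface; getting this
                       # backwards is the misconfiguration described above
listening_ip=br0       # answer NAT-PMP/UPnP on the LAN side only
enable_natpmp=yes
# Permission rules are evaluated in order: allow mappings only for
# LAN clients, then deny everything else.
allow 1024-65535 192.168.1.0/24 1024-65535
deny 0-65535 0.0.0.0/0 0-65535
```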

Adventures in Empty UDP Scanning

One of the interesting things about security research, and I guess research in general, is that all too often the only research that gets publicized is research that proves something or shows something especially amazing. Research that is incomplete, where the original hypothesis or idea ends up being incorrect, or that arrives at non-spectacular conclusions, rarely gets published. I feel that this trend does a disservice to the research community, because the paths that the authors of this unpublished research took remain unknown to like-minded individuals, perhaps resulting in duplicated efforts in the future. Furthermore, much as with life, it's not the destination that matters but the journey itself -- the methodologies used in this unpublished research may help unlock other areas of interest. With this thinking in mind, I am publishing this little bit of research I've done on and off over the past month or so.

As part of Rapid7's Project Sonar, we routinely scan the entire public IPv4 Internet (minus any hosts that we've been asked to exclude -- we are nice like that) looking for data that indicates something interesting from a security perspective. As anyone who has done large-scale scanning like this will likely attest, you see the strangest of things. It all started when we were analyzing what seemed to be empty UDP messages (a UDP datagram with no data) in response to some of our UDP probes. As it turns out, it was just another instance of the "etherleak" bug I reported back in 2002 that ultimately became CVE-2003-0001. But my curiosity was piqued, and I wondered if there were any circumstances in which a UDP service would reply to a UDP message that contained no data.

What follows is my research into using empty UDP messages as probes to elicit responses from UDP services.

Background: UDP Scanning

Any sort of UDP scanning, even when not done at an Internet-wide scale, is tricky.
When looking for open UDP ports and trying to identify the protocol spoken on an open port, reactions to your probes will fall into one of three categories: no response at all; no response from the service but a response from the IP stack of the target indicating that your connection failed for one reason or another; or, if you are lucky, some sort of response directly from the service.

If you get no response, this could mean one of:

- The port is not open
- The service was too busy to respond
- Your probe was incorrect for whatever reason (for example, you sent a non-NTP message to an NTP service)

If you get a response from the IP stack of the target in the form of an ICMP message, this could mean one of:

- The port is not open and the stack is configured to send ICMP port unreachable messages
- The port may be open, but your connection was denied by ACLs or similar

To further complicate the latter scenario, many systems are configured to rate limit the number of ICMP responses sent, in an effort to prevent scanning abuse as well as to mitigate DDoS attacks and their stealthier, harder-hitting cousins, DRDoS attacks. So, while ICMP messages are useful in identifying closed ports, because they are often rate limited a scanner must properly rate limit its own UDP probes so as not to trip these limits and miss the ICMP messages indicating closed ports.

If you do get a response from the service, you can be certain that the port is open, but you cannot yet be sure what protocol the service listening on that port speaks. To determine that, you must analyze the response with knowledge of the probe you sent.

Empty UDP Scanning

As previously alluded to, and likely obvious by now, empty UDP scanning is simply sending a UDP datagram with no data to a particular UDP port. More specifically, the length field at layer 4 is 8 and there is no data after the 8 bytes of the UDP header.
Assuming the port in question really is open, you will get a response for one of several reasons:

- The protocol implemented by the service is truly verbose or promiscuous, meaning the data of the request is irrelevant and ignored by the service, and a reply is sent regardless
- The protocol implementation is designed to respond with some sort of message specified by the protocol, for example an error message indicating that the format of the request was wrong

Both of these conditions are a little interesting from a security perspective. Port scanners and tools used to identify services/protocols can send empty UDP datagrams as a fall-back way to both discover open UDP ports and identify the protocol implemented by the service. Nmap, among others, has been using this technique for years.

Interesting Findings

On September 19th, 2014, I released a very simple Metasploit auxiliary module, empty_udp, that I wrote to help better understand how systems would respond. What follows are the results of my findings from scanning our 1000-odd system lab, which is filled with all manner of interesting targets. While our lab is absolutely not representative of what we'd encounter on the wild Internet, what I found was sufficient to satisfy me for now. I had been and still am considering running a much more focused version of this as part of Sonar, so if you are reading this and can think of reasons (and ports) to do an Internet-wide audit, let me know in the comments. My biggest hesitation is that empty UDP packets have been the cause of DoS conditions in several protocols/services over the years, and I am trying to play nice.

Microsoft Windows RPC

When IANA assigns TCP and UDP ports to a particular service, more often than not the service in question is allocated both the TCP and UDP ports, regardless of whether or not it currently uses both.
For example, DNS, which typically uses port 53, legitimately uses both TCP and UDP, but SSH, which typically uses port 22, is assigned both TCP and UDP ports yet never uses UDP to the best of my knowledge.

Port 135 is assigned to the Microsoft Windows DCE-RPC endpoint mapper, and clients wishing to communicate with DCE-RPC services on a Windows host must ask the endpoint mapper which other, generally high, ephemeral port the desired RPC service is listening on. In all my years, I have never seen a legitimate Windows system listening on and responding to UDP port 135, hence my surprise when I saw several Windows systems in our lab responding to empty UDP probes on UDP port 135.

While I can't reproduce this on a regular basis, what I saw was that Windows systems still vulnerable to MS03-026 were occasionally responding to the empty UDP packets with endpoint mapper responses seemingly constructed entirely from uninitialized memory, leading to a minor and nearly un-exploitable information disclosure. This makes me wonder if, as part of the patch for MS03-026, Microsoft changed the endpoint mapper to not listen on and/or respond to UDP port 135. This idea is backed up by MS03-039, which Microsoft claims supersedes MS03-026, and whose suggested workarounds include blocking a pile of TCP and UDP ports specific to various Windows services, including TCP 135 -- but not UDP port 135. This has interesting implications: were there other vulnerabilities or methods of exploitation lurking in the UDP endpoint mapper that nobody discovered or disclosed?
Could there have been a lethal one-shot UDP exploit that could have used spoofed source IPs and devastated every host un-patched against MS03-026?

Old UNIX/BSD Services

I cut my security teeth, so to speak, on all manner of UNIX/BSD systems, but I still jump a bit when I encounter some of the largely useless legacy services like quote of the day (qotd, port 17), character generator (chargen, port 19), good 'ole time of day (port 13), time (port 37) and the thing that came before DNS, nameserver (port 42). It is equivalent to me, a Bostonian living in Los Angeles, visiting the Midwest or quaint small towns and finding doors left unlocked, complete strangers waving at you from across the street, and people greeting you warmly wherever you go -- you know places like that exist, but you are still a bit startled when it happens to you.

So, while scanning, don't forget about these services. Hit both the TCP and UDP ports and try the empty UDP trick -- you might discover things you wouldn't otherwise find.

ISAKMP

In certain configurations, ISAKMP implementations will respond to empty UDP messages with an ISAKMP notification message indicating that the message length in the ISAKMP header does not match the sum of the actual payload lengths. This UNEQUAL-PAYLOAD-LENGTHS message is optional, so it may be possible to fingerprint different ISAKMP implementations using this knowledge.

NAT-PMP

I like to pick on NAT-PMP. My Apple Airport Express runs it, so it gets my attention from time to time. In the case of using empty UDP messages as a probe for NAT-PMP, the situation is interesting. The smallest valid request that a NAT-PMP implementation should respond to is the 2-byte external address request, which must consist of 2 null bytes (version 0, opcode 0). It shouldn't come as too much of a surprise that at least some implementations will respond to even shorter requests that include only the first byte, the version.
In this case, if the version does not match (i.e., it isn't version 0), the service may send back a response with a result code of 1, indicating that the version is unsupported. But in at least the case of Apple's Airport Express, an empty UDP message also results in a result code of 1, which makes you wonder what version the service thought the message was speaking when there was no data to interpret.

TFTP

If you've ever had to scan for and/or exploit TFTP vulnerabilities or misconfigurations, you'll know the pain I am about to describe. TFTP, which generally runs on UDP port 69, is the worst-case scenario. In addition to being UDP and therefore problematic to scan for (as mentioned earlier), port 69 is typically used only as the command port, and any data is sent from a different port. It gets worse -- let's say you want to retrieve a file from a TFTP server. You'd send the request from some high, ephemeral port (say, 12345) to the TFTP server's UDP port 69, and, assuming all was well, it would respond with the contents of the file from a different port entirely, sending it to the source port that initiated the original request, in this case 12345. This means you have to be careful while scanning for TFTP -- you can't just assume that any and all responses will come from port 69. Interestingly, some implementations will reply from UDP 69 with messages like "Packet only has 0 bytes" or "Access violation", in particular the TFTP server used by ManageEngine products. Others, like VMware Studio and some unknown TFTP implementations available for Linux, reply from a high, ephemeral port saying "Missing mode".

Conclusions

Beyond the findings for the Windows DCE-RPC endpoint mapper and MS03-026, all of this has only very, very minor security implications, but it was a fun exercise; I (re)learned some things, saw some unexpected results, and perhaps this knowledge will come in handy for someone in the future.
If you know of other useful information related to empty UDP scanning or anything posted here, let me know in the comments. Enjoy!
