Rapid7 Blog


Building a Backpack Hypervisor

Researcher, engineer, and Metasploit contributor Brendan Watters shares his experience building a backpack-size hypervisor.…


Multiple vulnerabilities in Wink and Insteon smart home systems


Today we are announcing four issues affecting two popular home automation solutions: Wink's Hub 2 and Insteon's Hub. Neither vendor stored sensitive credentials securely on their associated Android apps. In addition, the Wink cloud-based management API does not properly expire and revoke authentication tokens, and the Insteon Hub uses an unencrypted radio transmission protocol for potentially sensitive security controls such as garage door locks. As most of these issues have not yet been addressed by the vendors, users should ensure mobile devices enable full disk encryption if possible, and avoid the use of these products for sensitive applications until the vulnerabilities are patched.

While the potential impact is high, these vulnerabilities are not exploitable over the internet: access to the user's phone, or close proximity to connected devices in the replay case, is required for exploitation. This post provides additional details on the vulnerabilities and steps users can take to protect themselves.

Rapid7 ID    | CVE           | Product          | Vulnerability                             | Status
-------------|---------------|------------------|-------------------------------------------|----------------------
R7-2017-19.1 | CVE-2017-5249 | Wink Android App | Insecure Storage of Sensitive Information | Patched in v6.3.0.28
R7-2017-19.2 | n/a           | Wink API         | Insufficient Session Expiration           | Unpatched; fix planned
R7-2017-20.1 | CVE-2017-5250 | Insteon Hub      | Insecure Storage of Sensitive Information | Unpatched
R7-2017-20.2 | CVE-2017-5251 | Insteon Hub      | Authentication Bypass by Capture-replay   | Unpatched

Wink: Android app authentication token storage and API authentication token revocation vulnerabilities

Two issues related to authentication were discovered in the Wink Android mobile application and related API:

- CVE-2017-5249, R7-2017-19.1, CWE 922 (Insecure Storage of Sensitive Information): the OAuth token used by the Wink Android application to authorize user access was not stored in an encrypted and secure way.
- R7-2017-19.2, CWE 613 (Insufficient Session Expiration): when users log out of the Wink Android application, the authentication token used for that session is not revoked, nor does the generation of new tokens revoke older ones. There is currently no way for users to see all valid authentication tokens connected to their account, but there is a method available to revoke all authentications aside from the current one (detailed below). No CVE was assigned to this issue, as remediation would likely be done on Wink's servers.

Product Description

Wink Hub 2 is a smart home system designed to help consumers manage multiple home IoT products and protocols from various vendors. It is composed of a hub device as well as a mobile application for user interaction. More information can be found on the vendor's product page.

Credit

These issues were discovered by Deral Heiland, Research Lead at Rapid7, Inc. This advisory was prepared in accordance with Rapid7's disclosure policy.

R7-2017-19.1 Details and Exploitation

During analysis of the Android mobile application for Wink (Wink application version 6.1.0.19 on Android 5.1 running kernel 3.10.72), it was discovered that the OAuth token is stored unencrypted within the following file:

/data/data/com.quirky.android.wink.wink/shared_prefs/user.xml

To determine the longevity of this preference file, the Android device was rebooted, after which the unencrypted OAuth token shown in Figure 1 below was still present. The token remained until the user logged off manually. Given the nature of IoT technology, however, users typically stay logged in, so the authentication tokens are likely to stay valid indefinitely unless the user does not interact with the application for more than 30 days.

Remediation

A vendor-supplied patch for the mobile app should be provided to prevent storing potentially sensitive information, such as authentication tokens, in cleartext.
While local storage is likely necessary for normal functionality, sensitive information should be stored in an encrypted format that requires authentication to decrypt. In the case of Android, there is a built-in secure storage method that should be used.

[Updated Sept 22, 2017] The latest version of the Wink Android application, v6.3.0.28, fixed the authentication storage issue. This version was released on Sep 13, 2017; users should update to it immediately.

Users should also consider enabling full-disk encryption (FDE), and be sure to log out of the Wink application when not in use. Enabling FDE will protect the authentication tokens on the device from being accessed. iPhones enable FDE by default, and most modern Android devices are capable of enabling it. FDE also adds an additional layer of protection by adding a boot password. If FDE is not possible for a particular device, the user should be especially careful not to lose the device, as sensitive data may be recoverable.

R7-2017-19.2 Details and Exploitation

Further examination of the Wink app's use of OAuth revealed that the normal user logout process does not include OAuth token revocation. When the user logs out from the mobile application, it only sends a delete request specifically for the mobile device tracker (as shown in Figure 2 below). This tracker plays no direct role in the authentication process. When the user logs back in, they are assigned a new OAuth token; however, testing showed that the previous OAuth token remained valid. To validate this further, the user's password was changed, at which point revocation of that user's tokens was expected. The system rebooted, but the original OAuth token still remained valid, as shown in Figure 3.

This means that if a user's mobile device is lost or stolen, a malicious actor could extract the unencrypted OAuth token from the user.xml file, giving the malicious actor full remote access to the Wink Hub 2.
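To illustrate why cleartext storage matters: an attacker with the user.xml file in hand needs nothing more than an XML parser to recover the token. A minimal sketch (the shared_prefs layout, key name, and token value below are illustrative assumptions, not Wink's actual schema):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an Android shared_prefs file; the real
# user.xml layout and key names may differ.
SAMPLE_PREFS = """<map>
    <string name="access_token">sample-oauth-token-value</string>
    <string name="user_email">user@example.com</string>
</map>"""

def extract_token(prefs_xml, key="access_token"):
    """Pull a named <string> value out of a shared_prefs-style XML blob."""
    root = ET.fromstring(prefs_xml)
    for elem in root.findall("string"):
        if elem.get("name") == key:
            return elem.text
    return None

print(extract_token(SAMPLE_PREFS))  # the token, in the clear
```

With the token recovered, the attacker can authenticate to the cloud API as the victim for as long as the token remains valid, which is exactly why the revocation issue below compounds this one.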
Remediation

A vendor-supplied patch should be provided to revoke the user's OAuth token after logout from the mobile application. In addition, a mechanism should be added to allow for the revocation of all tokens across all mobile devices with access to the user's Wink Hub 2. This would help prevent unauthorized access to the device and services even if a device is lost or compromised.

[Updated Sept 22, 2017] Wink reports that they plan to address this issue in the near future through a server-side change.

[Updated Sept 26, 2017] Users are currently able to force a logout of other user sessions. This can only be done by changing your password and then selecting the "Log Out Others" option shown in Figure 4. This step invalidates all associated OAuth tokens except for the one currently in use. It greatly lowers exposure, as an attacker is much less likely to hold the current token than an older one. We recommend users change their password and log out other sessions associated with their account to avoid exposure. If users become able to view the sessions associated with their account in the future, they should review them regularly.

R7-2017-19 Disclosure Timeline

Wed, Jul 19, 2017: Initial contact with Wink
Wed, Jul 20, 2017: Wink acknowledged receipt
Fri, Jul 28, 2017: Details disclosed to Wink
Mon, Jul 31, 2017: Wink acknowledged Android token issue, plans to fix
Mon, Aug 14, 2017: Rapid7 reserved CVE-2017-5249 for R7-2017-19.1
Mon, Aug 14, 2017: Disclosed to CERT/CC
Fri, Sep 22, 2017: Public blog disclosure; presented at DerbyCon 7.0
Fri, Sep 22, 2017: Wink confirmed that R7-2017-19.1 was fixed on Sep 13, 2017, and that they intend to fix R7-2017-19.2 in the near future.
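The "Log Out Others" remediation described above amounts to invalidating every session token for an account except the one in use. A minimal server-side sketch of that logic (an illustration of the general technique, not Wink's implementation):

```python
import secrets

class TokenStore:
    """Toy server-side session tracking with a 'log out others' operation."""

    def __init__(self):
        self._tokens = {}  # token -> user_id

    def issue(self, user_id):
        token = secrets.token_hex(16)
        self._tokens[token] = user_id
        return token

    def is_valid(self, token):
        return token in self._tokens

    def revoke_all_except(self, user_id, current_token):
        # Drop every token for this user except the one currently in use.
        self._tokens = {t: u for t, u in self._tokens.items()
                        if u != user_id or t == current_token}

store = TokenStore()
old = store.issue("alice")   # token from a lost phone
new = store.issue("alice")   # token from the current session
store.revoke_all_except("alice", new)
print(store.is_valid(old), store.is_valid(new))  # False True
```

A complete fix would also revoke tokens on logout and on password change, which is precisely what the testing above showed was missing.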
Insteon Hub: Unencrypted credential storage and radio replay vulnerabilities

Two issues related to authentication and radio transmission security were discovered in the Insteon Hub:

- CVE-2017-5250, R7-2017-20.1, CWE 922 (Insecure Storage of Sensitive Information): the OAuth token used by the Insteon Android application to authorize user access is not stored in an encrypted and secure way.
- CVE-2017-5251, R7-2017-20.2, CWE 294 (Authentication Bypass by Capture-replay): the radio transmissions used for communication between the Hub and connected devices are not encrypted, and do not provide sufficient protections to guard against capture-replay attacks.

These issues can be used to compromise and control an Insteon Hub environment.

Product Description

The Insteon Hub is a smart home system designed to help consumers connect various home IoT products and manage home automation. It is composed of a hub device as well as a mobile application for user interaction. More information can be found on the vendor's product page.

Credit

These issues were discovered by Deral Heiland, Research Lead at Rapid7, Inc. This advisory was prepared in accordance with Rapid7's disclosure policy.

R7-2017-20.1 Details and Exploitation

Analysis of the Android mobile application for Insteon Hub (Insteon application version 1.9.7) revealed that the account and password for both Insteon services and the Hub hardware were stored unencrypted within the following file:

/data/data/com.insteon.insteon3/shared_prefs/com.insteon.insteon3_preferences.xml

To determine the longevity of this file, the Android device was rebooted, after which the plaintext username and password shown in Figures 1 and 2 below were still present. The account credentials remained in the file until the user logged off manually; given the nature of IoT technology, users typically stay logged in.
Remediation

A vendor-supplied patch should be made to the mobile app to prevent storing sensitive information, such as user credentials, unencrypted. While local storage is likely necessary for normal functionality, sensitive information should be stored in an encrypted format that requires authentication to decrypt. In the case of Android, there is a built-in secure storage method that should be used.

Absent a vendor-supplied patch, users should consider full-disk encryption (FDE), and be sure to log out of the Insteon Hub application when not in use. Enabling FDE will protect the stored credentials on the device from being accessed. iPhones enable FDE by default, and most modern Android devices are capable of enabling it. FDE also adds an additional layer of protection by adding a boot password. If FDE is not possible for a particular device, the user should be especially careful not to lose the device, as sensitive data may be recoverable.

R7-2017-20.2 Details and Exploitation

The Insteon Hub uses radio signals to communicate with connected devices, specifically a 915MHz Frequency Shift Keying (FSK) communication protocol. Analysis of this protocol revealed that the transmissions do not appear to be encrypted, nor to contain any security mechanisms to prevent replay attacks. A malicious actor can easily capture and replay the radio signals at any time to manipulate any device being managed via this communication protocol.

To test the Insteon Hub's security against replay attacks, an Insteon Garage Door Control Kit (Device 43) was configured via an Insteon Hub (2245-222). Using a software-defined radio (SDR), the 915MHz radio signal used to open and close the door via the Garage Door Control device was captured. Once captured, the signal was filtered to remove background noise and then replayed to successfully actuate the Insteon Garage Door Control device.
This confirmed that the Insteon RF protocol is vulnerable to replay attacks, as shown in Figure 3 below.

Remediation

A vendor-supplied patch should be provided to encrypt the data communicated over the 915MHz signal, or to apply a rotating certificate to prevent replay of captured RF signals. Absent a vendor-supplied patch, users concerned about potential radio eavesdroppers should avoid using Insteon's radio-control features with security-related and access control devices.

R7-2017-20 Disclosure Timeline

Wed, Jul 19, 2017: Initial contact with Insteon
Wed, Jul 20, 2017: Insteon acknowledged receipt
Fri, Jul 28, 2017: Details disclosed to Insteon
Mon, Jul 31, 2017: Insteon acknowledged receipt, intent to review
Mon, Aug 14, 2017: Rapid7 reserved CVE-2017-5250 for R7-2017-20.1 and CVE-2017-5251 for R7-2017-20.2
Mon, Aug 14, 2017: Disclosed to CERT/CC
Fri, Sep 22, 2017: Public blog disclosure; presented at DerbyCon 7.0
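One common way to implement the kind of anti-replay protection the remediation calls for is a monotonically increasing counter authenticated with a shared key, so that a captured frame fails verification when replayed. A simplified sketch (the framing, key provisioning, and truncated HMAC here are assumptions for illustration; Insteon's actual RF protocol differs):

```python
import hmac
import hashlib
import struct

SHARED_KEY = b"example-pairing-key"  # assumed provisioned at pairing time

def make_frame(counter, command):
    """Transmitter: command plus an increasing counter, authenticated with a MAC."""
    payload = struct.pack(">I", counter) + command
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
    return payload + tag

class Receiver:
    def __init__(self):
        self.last_counter = -1

    def accept(self, frame):
        payload, tag = frame[:-8], frame[-8:]
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, expected):
            return False  # forged or corrupted frame
        counter = struct.unpack(">I", payload[:4])[0]
        if counter <= self.last_counter:
            return False  # stale counter: a replayed capture is rejected
        self.last_counter = counter
        return True

rx = Receiver()
frame = make_frame(1, b"OPEN")
print(rx.accept(frame))  # True: fresh frame actuates the device
print(rx.accept(frame))  # False: the identical replayed frame is rejected
```

With a scheme like this, the SDR capture-and-replay attack described above would fail, because the replayed frame carries a counter the receiver has already seen.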

Cisco Smart Install Exposure


Cisco Smart Install (SMI) provides configuration and image management capabilities for Cisco switches. Cisco's SMI documentation goes into more detail than we'll be touching on in this post, but the short version is that SMI leverages a combination of DHCP, TFTP, and a proprietary TCP protocol to allow organizations to deploy and manage Cisco switches. Using SMI yields a number of benefits, chief among them that you can place an unconfigured Cisco switch into an SMI-enabled (and previously configured) network and it will get the correct image and configuration without much more than wiring up the device and turning it on. Simple "plug and play" for adding new Cisco switches.

But with great power and heightened privileges comes great responsibility, and that remains true with SMI. Since its debut in 2010, SMI has had a handful of vulnerabilities published, including one that led to remote code execution (CVE-2011-3271) and several denial of service issues (CVE-2012-0385, CVE-2013-1146, CVE-2016-1349, CVE-2016-6385). Things got more interesting for SMI within the last year when Tenable Network Security, Daniel Turner of Trustwave SpiderLabs, and Alexander Evstigneev and Dmitry Kuznetsov of Digital Security disclosed a number of security issues in SMI during their presentation at the 2016 Zeronights security conference. Five issues were reported, the most severe of which easily rated a CVSS 10.0, if risk scoring is your thing. Put more bluntly: if you leave SMI exposed and unpatched and have not followed Cisco's recommendations for securing SMI, effectively everything about that switch is at risk of compromise. Things get gnarlier still when you consider what a successful attack against a Cisco switch exposing SMI would get an attacker: even an otherwise well-protected network could be compromised if a malicious actor could arbitrarily reroute a target's traffic at will.
In direct response to last year's research, Cisco issued a security response hoping to put the issue of SMI security to bed once and for all. They effectively claim that these issues are not vulnerabilities but rather "misuse of the protocol," even while encouraging customers to disable SMI if it is not in use. True, this largely boils down to a lack of authentication both in some of the underlying protocols (DHCP and TFTP) and in SMI itself, which is a key part of achieving the aforementioned installation and deployment simplicity. However, every SMI-related security advisory published by Cisco has included recommendations to disable SMI unless needed. Most recently, they've provided various coverage for SMI abuse across their product lines, updated the relevant documentation that details how to properly secure SMI, and released a scanning tool for customers to use to determine if their networks are affected by these SMI issues. To further help Cisco customers secure their switching infrastructure, they've also made available several SMI-related hardening fixes:

- CSCvd36820: automatically disables SMI if not used during bootup
- CSCvd36799: if SMI is enabled, it must show in the running config
- CSCvd36810: periodically alerts to the console if SMI is enabled

Ultimately, whether we call this protocol a vulnerability or a feature, exposed SMI endpoints present a very ripe target to attackers. And this is where things get even more interesting. Given that until recently there was no Cisco-provided documentation on how to secure SMI and no known tools for auditing SMI, it was entirely possible that scores of Cisco switches were exposing SMI on networks they shouldn't be, without the knowledge of the network administrators tasked with managing them. Sure enough, a preliminary scan of the public IPv4 internet by the original SMI researchers showed 251,801 systems exposing SMI and seemingly vulnerable to some or all of the issues they disclosed.
The Smart Install Exploit Tool (SIET) was released to help identify and interact with exposed SMI endpoints, and it includes exploit code for a variety of the issues the researchers disclosed. As part of Cisco's response to this research, they indicated that the SIET tool was suspected in active attacks against organizations' networks. A quick look through Rapid7 Labs' Project Heisenberg for 2017 shows only minimal background network noise on the SMI port and no obvious large-scale scanning efforts, though this does not rule out the possibility of targeted attacks.

As with many situations like this in security, it's a case of "if you build it, they will come" gone wrong; almost "if you design it, they'll misuse it." There are any number of ways a human or a machine could mistakenly deploy or forget to secure SMI. Until recently, SMI had little mention in documentation, and, as evidenced by the three SMI-related hardening fixes, it was difficult for customers to identify that they were even using SMI in the first place. Moreover, even with a timely patching program, any organization exposing SMI to hostile networks while failing to do its security due diligence is an easy target for deep compromise. To top it all off, following the recommended means of securing SMI when it is actually being used for deployment requires adding specific ACLs to control who can speak to SMI-enabled devices, severely crippling the ease of use that SMI was supposed to provide in the first place.

With all of this in mind, we decided to reassess the public internet for exposure of SMI with several questions in mind:

- Have things changed since the publication of the SMI research in 2016 and the resulting official vendor response in 2017?
- Are there additional clues that could explain why SMI is being exposed insecurely?
- Can Rapid7 assist?
The methodology we used to assess the public internet for SMI exposure is almost identical to what the Zeronights researchers used in 2016, except that after the first-pass zmap scan to locate supposedly open SMI 4786/TCP endpoints, we utilized the logic demonstrated in Cisco's smi_check to determine whether each endpoint actually spoke the SMI protocol. Our study found ~3.2 million open 4786/TCP endpoints, the vast majority of which are oddly deployed SSH, HTTP, and other services, as well as security devices. It is worth noting that while the testing methodologies between these two scans appear nearly identical, both rely on possibly limited public knowledge about the proprietary SMI protocol; future research may yield additional methods of identifying SMI.

Using the same logic as smi_check, we identified 219,123 endpoints speaking SMI in July and 215,317 endpoints in a subsequent scan in August 2017. Answering our first question: there was a ~13% drop in the number of exposed SMI endpoints between the Zeronights researchers' original study and Rapid7 Labs' Sonar scan in July 2017, but it is hard to say what caused it. For one, the composition of the internet has changed in the last year. For another, Sonar maintains a blacklist of addresses whose owners have requested that we cease scanning their IPs, and it is unknown whether the Zeronights researchers had a similar list, so there were likely a few fundamental differences in the target space between these two disparate scans. So, despite a history of security problems and Cisco advising administrators to properly secure SMI, a year later things haven't really changed. Why?
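The ~13% figure is straightforward to verify from the two counts cited, the Zeronights researchers' 2016 total and our July 2017 Sonar result:

```python
# Back-of-the-envelope check of the exposure drop discussed above.
zeronights_2016 = 251_801  # endpoints reported by the Zeronights researchers
sonar_jul_2017 = 219_123   # SMI-speaking endpoints in Rapid7's July 2017 scan

drop = (zeronights_2016 - sonar_jul_2017) / zeronights_2016
print(f"{drop:.1%}")  # 13.0%
```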
Examining SMI-exposing IPs by country, it is no surprise that countries with a large number of IPv4 addresses and significant network infrastructure are at the top of those exposed. Repeating this visualization but examining the organizations that own the IPs exposing SMI, a possible pattern appears: ISPs.

Unfortunately, this is where things get a little complicated. The data above seems to imply that the bulk of the organizations exposing SMI are ISPs or similar; however, that is also an artifact of how the attribution process happens here. In cases where a given organization is its own ASN and it exposes SMI, our reporting attributes any of the IPs that ASN is responsible for to the organization in question. However, in cases where a given organization is not its own ASN, or where it uses IP space it doesn't control (for example, it just gets a cable/DSL/etc. router from its ISP), the name of that organization will not be reflected in our data.

Proprietary protocols are interesting from a security perspective, and SMI is no exception. Being specific to Cisco switches, you'll only find it in Cisco shops that didn't properly secure the switch prior to deployment, so there are limited opportunities for researchers or attackers to explore it. While proprietary protocols are not necessarily closed, SMI does appear that way, in that there is almost no public documentation on the protocol particulars beyond what the Zeronights researchers published in 2016. Despite the lack of documentation at the time, those researchers employed a simple method for understanding how the protocol works, and it turned out to be highly effective: exercising SMI functionality while observing the live protocol communication with a network sniffer.
There are several areas for future research which may provide value:

- When properly secured, including applying all the relevant IOS updates and following Cisco's recommendations for securing SMI, what risks remain in networks that utilize SMI, if any?
- When improperly secured, are there additional risks to be explored? For example, what about the related SMI director service that exposes 4787/TCP?
- Are there safer or quieter ways to carry out the attacks described by the Zeronights researchers, such that more accurate vulnerability coverage could exist?
- Is there a need for vulnerability coverage similar to SIET's in Metasploit?
- What is the current state of the art with regard to post-compromise behavior on network switches like this? What methods do (or could) attackers employ to establish advantageous footholds in the networks and devices serviced by a compromised switch?

To ensure that Rapid7 customers are able to identify assets exposing SMI in their environments, we have added SMI fingerprinting and vulnerability coverage to both Metasploit (as of September 1, in auxiliary/scanner/misc/cisco_smart_install) and InsightVM (as of the August 17, 2017 release, via cisco-sr-20170214-smi).

Interested in this research? Looking to collaborate? Questions? Comments? Leave them below or reach out via research@rapid7.com!

Data Mining the Undiscovered Country


Using Internet-scale Research Data to Quantify and Reduce Exposure

It's been a busy 2017 at Rapid7 Labs. Internet calamity struck swift and often, keeping us all on our toes and giving us a chance to fully test out the capabilities of our internet-scale research platform. Let's take a look at how two key components of Rapid7 Labs' research platform, Project Sonar and Heisenberg Cloud, came together to enumerate and reduce exposure over the past two quarters. (If reading isn't your thing, we'll cover this in person at today's UNITED talk.)

Project Sonar Refresher

Back in "the day" the internet really didn't need an internet telemetry tool like Rapid7's Project Sonar. This: was the extent of what would eventually become the internet, and it literally had a printed directory that held all the info about all the hosts and users: Fast-forward to Q1 2017, where Project Sonar helped identify a few hundred million hosts exposing one or more of 30 common TCP and UDP ports.

Project Sonar is an internet reconnaissance platform. We scan the entire public IPv4 address range (except for addresses in our opt-out list) looking for targets, then do protocol-level decomposition scans to try to get an overall idea of the "exposure" of many different protocols. In 2016, we began a re-evaluation and re-engineering of Project Sonar that greatly increased the speed and capabilities of our core research gathering engine. In fact, we now perform nearly 200 "studies" per month collecting detailed information about the current state of IPv4 hosts on the internet. (Our efforts are not random, and there's more to a study than a quick port hit; there's often quite a bit of post-processing engineering for new scans, so we don't just call them "scans.") Sonar has been featured in over 20 academic papers (see for yourself!) and is a core part of the foundation for many popular talks at security conferences (including 3 at BH/DC in 2017).
We share all our scan data through a research partnership with the University of Michigan: https://scans.io. Keep reading to see how you can use this data on your own to help improve the security posture of your organization.

Cloudy With A Chance Of Honeypots

Project Sonar enables us to actively probe the internet for data, but this provides only half the data needed to understand what's going on. Heisenberg Cloud is a sensor network of honeypots developed by Rapid7 that is hosted in every region of every major cloud provider (the following figure is an example of Heisenberg global coverage from three of the providers). Heisenberg agents can run multiple types and flavors of honeypots, from simple tripwires that enable us to enumerate activity to stealthier ones designed to blend in by mimicking real protocols and servers. All of these honeypot agents are managed through traditional, open source cloud management tools. We collect all agent-level log data using Rapid7's InsightOps tool and collect all honeypot data, including raw PCAPs, centrally on Amazon S3. We have Heisenberg nodes appearing to be everything from internet cameras to MongoDB servers and everything in between.

But we're not just looking for malicious activity. Heisenberg also enables us to see cloud and internet service "misconfigurations", i.e. legitimate, benign traffic being sent to a node that is no longer under the control of the sending organization but likely was at some point. We see database queries, API calls, authenticated sessions, and more, and this provides insight into how well organizations are (or aren't) configuring and maintaining their internet presence.

Putting It All Together

We convert all our data into a column-storage format called "parquet" that enables us to use a wide array of large-scale data analysis platforms to mine the traffic.
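Conceptually, much of this cross-referencing boils down to joining address sets from different collection sources. A toy sketch with made-up documentation-range addresses (the real pipeline runs over parquet files with large-scale analysis tooling, not in-memory sets):

```python
# IPs a hypothetical Sonar study found exposing SMB...
sonar_smb_exposed = {"192.0.2.10", "192.0.2.11", "198.51.100.7"}

# ...intersected with source IPs seen probing Heisenberg honeypots.
heisenberg_scanners = {"198.51.100.7", "203.0.113.99"}

# Hosts that are both exposed and actively scanning others are
# prime candidates for already-compromised machines.
both = sonar_smb_exposed & heisenberg_scanners
print(sorted(both))
```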
With it, we can cross-reference Sonar and Heisenberg data, along with data from feeds of malicious activity or even, say, current lists of digital coin mining bots, to get a pretty decent picture of what's going on. This past year (to date), we've publicly used our platform to do everything from monitoring Mirai (et al.) botnet activity to identifying and quantifying (many) vulnerable services to tracking general protocol activity and exposure before and after the Shadow Brokers releases. Privately, we've used the platform to develop custom feeds for our Insight platform that help users identify, quantify, and reduce exposure. Let's look into a few especially fun and helpful cases we've studied.

Sending Out An S.O.S.

Long-time readers of the Rapid7 blog may remember a post we did on protestors hijacking internet-enabled devices that broadcasters use to get signals to radio towers. We found quite a few open and unprotected devices. What we didn't tell you is that Rapid7's Rebekah Brown worked with the National Association of Broadcasters to get the word out to vulnerable stations. Within 24 hours the scope of the issue was reduced by 50%, and now only a handful (~15%) remain open and unprotected. This is an incredible "win" for the internet, as exposure reduction like this is rarely seen.

We used our Sonar HTTP study to look for candidate systems and then performed a targeted scan to see if each device was in fact vulnerable. Thanks to the aforementioned re-engineering efforts, these subsequent scans take between 30 minutes and three hours (depending on the number of targets and the complexity of the protocol decomposition). That means that when we are made aware of a potential internet-wide issue, we can get active, current telemetry to help quantify the exposure and begin working with CERTs and other organizations to help reduce risk.

Internet of Exposure

It'd be too easy to talk about the Mirai botnet or stunt-hacking images from open cameras.
Let's revisit the exposure of a core component of our nation's commercial backbone: petroleum. Specifically, the gas we all use to get around. We've talked about it before, and it's hard to believe (or perhaps not, in this day and age) that such a clunky device can be so exposed. We've shown you we can count these IoThings, but we've taken the ATG monitoring a step further to show how careless configurations could lead to exposure of important commercial information. Want to know the median number of gas tanks at any given petrol station? We've got an app for that: most stations have 3-4 tanks, but some have many more. This can be sliced and diced by street, town, county, and even country, since the vast majority of devices provide this information along with the tank counts. How about how much inventory currently exists across the stations? We won't go into the economic or malicious uses of this particular data, but you can likely ponder that on your own.

Despite previous attempts by researchers to identify this exposure, with the hopeful intent of raising enough awareness to get it resolved, we continue to poke at this and engage where we can to help reduce this type of exposure. Think back on this whenever your organization decides to deploy an IoT sensor network without properly risk-assessing the exposure, given the deployment model and what information is being presented through the interface.

But these aren't the only exposed things. We did an analysis of our port 80 HTTP GET scans to try to identify IoT-ish devices sitting on that port, and it's a mess. You can explore all the items we found here, but one worth calling out: these are 251 buildings (yes, buildings) with their entire building management interface directly exposed to the internet, many without authentication and not even trying to be "sneaky" by using a different port than port 80.
It's vital that you scan your own perimeter for this type of exposure (not just building management systems, of course), since it's far easier for something to slip onto the internet than one would expect.

Wiping Away The Tears

Rapid7 was quick to bring hype-free information and help during the WannaCry "digital hurricane" this past year. We've since migrated our WannaCry efforts over to focused reconnaissance of related internet activity post-Shadow Brokers releases. Since WannaCry, we've seen a major uptick in researchers and malicious users looking for SMB hosts (we've seen more than that, but you can read our 2017 Q2 Threat Report for details). As we work to understand what attackers are doing, we are developing different types of honeypots to enable us to analyze, and perhaps even predict, their intentions. We've done even more than this, but hopefully you get an idea of the depth and breadth of analyses that our research platform enables.

Take Our Data...Please!

We provide some great views of our data via our blog and in many reports. But YOU can make use of our data to help your organization today. Sure, Sonar data is available via Metasploit (Pro) via the Sonar C, but you can do something as simple as:

$ curl -o smb.csv.gz \
    https://scans.io/data/rapid7/sonar.tcp/2017-08-16-1502859601-tcp_smb_445.csv.gz
$ gzcat smb.csv.gz | cut -d, -f4 | grep MY_COMPANY_IP_ADDRESSES

to see if you're in one of the study results. Some studies you really don't want to show up in include SMB, RDP, Docker, MySQL, MS SQL, and MongoDB. If you're there, it's time to triage your perimeter and work on improving deployment practices. You can also use other Rapid7 open source tools (like dap) and tools we contribute to (such as the ZMap ecosystem) to enrich the data and get a better picture of exposure, focusing specifically on your organization and threats to you.
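If grep one-liners get unwieldy (matching whole netblocks rather than literal strings), the same check can be scripted with Python's stdlib ipaddress module. A sketch with made-up rows; the column positions are simplified assumptions, so verify the field layout against the actual study CSV before relying on it:

```python
import csv
import io
import ipaddress

# The netblocks you own (illustrative documentation ranges here).
MY_NETS = [ipaddress.ip_network("198.51.100.0/24")]

# Stand-in for a decompressed Sonar study CSV: timestamp, host IP, port.
sample_csv = io.StringIO(
    "1502859601,203.0.113.5,445\n"
    "1502859601,198.51.100.23,445\n"
)

hits = []
for row in csv.reader(sample_csv):
    ip = ipaddress.ip_address(row[1])
    if any(ip in net for net in MY_NETS):
        hits.append(str(ip))

print(hits)  # any of your IPs that appeared in the study
```

Any hit means one of your hosts was answering on that port when the study ran, which is a good trigger for the perimeter triage discussed above.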
Fin

We’ve got more in store for the rest of the year, so keep an eye (or RSS feed slurper) on the Rapid7 blog as we provide more information on exposure. You can get more information on our studies and suggest new ones via research@rapid7.com.

Measuring SharknAT&To Exposures


On August 31, 2017, NoMotion’s “SharknAT&To” research started making the rounds on Twitter. After reading the findings, and noting that some of the characteristics seemed similar to trends we’ve seen in the past, we were eager to gauge the exposure of these vulnerabilities on the public internet. Vulnerabilities such as default passwords or command injection, which are usually trivial to exploit, in combination with a sizable target pool of well-connected, generally unmonitored internet-connected devices such as DSL/cable routers, can have a significant impact on the general health of the internet, particularly in the age of DDoS and malware for hire. For example, starting around this time last year and continuing until today, the internet has been dealing with the Mirai malware, which exploits default passwords as part of its effort to replicate itself. The SharknAT&To vulnerabilities seemed so similar, we had to get a better idea of what we might be facing.

What we found surprised us: the issues are actually not as universal as initially surmised. Indeed, we found that clusters of each of the vulnerabilities are found almost entirely in their own distinct regional pockets (namely, Texas, California, and Connecticut). We also observed that these issues may not be limited to just one ISP deploying a particular model of internet router, but perhaps a variety of different devices, complicated by a history of companies, products, and services being bought, sold, OEM’d, and customized. For more information about these SharknAT&To vulnerabilities and Rapid7’s efforts to understand the exposure these vulnerabilities represent, please read on.

Five Vulnerabilities Disclosed

NoMotion identified five vulnerabilities that, at the time, seemed limited to Arris modems deployed as part of AT&T U-Verse installations:

1. SSH exposed to the internet; superuser account with hardcoded username/password (22/TCP)
2. Default credentials for “caserver” in the https server on the NVG599 (49955/TCP)
3. Command injection in “caserver” in the https server on the NVG599 (49955/TCP)
4. Information disclosure/hardcoded credentials (61001/TCP)
5. Firewall bypass, no authentication (49152/TCP)

Successful exploitation of even just one of these vulnerabilities would result in a near-complete compromise of the device in question and would pose a grave risk to the computers, mobile devices, and IoT gadgets on the other side. If exploited in combination, the victim’s device would be practically doomed to persistent, near-undetectable compromise.

Scanning to Gauge Risk

NoMotion did an excellent job of using existing Censys and Shodan sources to gauge exposure; however, they also pointed out that some of the at-risk services on these devices are not regularly audited by scanning projects like these. In an effort to assist, Rapid7 Labs fired off several Sonar studies shortly after learning of the findings in order to get current information for all affected services, within reason. As such, we queued fresh analysis of:

- SSH on port 22/TCP to cover vulnerability 1
- HTTPS on 49955/TCP and 61001/TCP, covering vulnerabilities 2-4
- A custom protocol study on port 49152/TCP for vulnerability 5

Findings

Vulnerability 1: SSH Exposure

Not having a known vulnerable Arris device at our disposal, we had to take a bit of an educated guess as to how to identify affected devices. In NoMotion’s blog post, they cite Censys as showing 14,894 vulnerable endpoints. A search through Sonar’s SSH data from early August showed just over 7,000 hosts exposing SSH on 22/TCP with “ARRIS” in the SSH banner, suggesting that these may be made by Arris, one of the vendors involved in this issue.
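A banner search of this kind boils down to filtering a CSV of scan results on a substring. Here is a minimal sketch; the sample data and column names are illustrative stand-ins, not Sonar's actual schema or tooling:

```python
import csv
import io

# Toy stand-in for a Sonar-style SSH banner dataset; the real
# dataset's schema and field order may differ.
SAMPLE = """ip,port,banner
198.51.100.10,22,SSH-2.0-ARRIS_0.50
198.51.100.11,22,SSH-2.0-OpenSSH_7.4
198.51.100.12,22,SSH-2.0-dropbear_2012.55
"""

def count_matching_banners(csv_text, needle):
    """Count rows whose SSH banner contains the given substring."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(1 for row in reader if needle in row["banner"])

print(count_matching_banners(SAMPLE, "ARRIS"))  # 1
```

The same one-liner pattern works for any vendor string or version substring you want to hunt for in banner data.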
There are several caveats that could explain the difference in numbers, including the fact that Arris makes several other devices which are unaffected by these issues, and that there is no guarantee that affected and/or vulnerable devices will necessarily mention Arris in their SSH protocol. A follow-up study today showed similar results, with just over 8,000. It is assumed that the difference between Rapid7’s numbers and NoMotion’s is caused by the fact that Sonar maintains a blacklist of IP addresses that we’ve been asked not to study, as well as normal changes to the landscape of the public internet. A preliminary check of our Project Heisenberg honeypots showed no noticeable change in the patterns we observe related to the volume and variety of SSH brute force and default account attempts prior to this research. However, the day after NoMotion's research was published, our honeypots started to see exploitation attempts using the default credentials published by NoMotion.

September 13, 2017 UPDATE on SSH exposure findings

The researchers from NoMotion reached out to Rapid7 Labs after the initial publication of this blog and shared how they estimated the number of exposed, vulnerable SSH endpoints. They did so by searching for SSH endpoints owned by AT&T U-Verse that were running a particular version of dropbear. Repeating some of our original research with this new information, we found nearly 22,000 seemingly vulnerable endpoints in that same study from early August, concentrated in Texas and California. Armed with this new knowledge, we re-analyzed SSH studies from late August and early September and discovered that seemingly none of the endpoints that appeared vulnerable in early August were still advertising the vulnerable banner, indicating that something changed with regard to SSH on AT&T U-Verse modems that caused this version to disappear entirely.
Sure enough, a higher-level search for just AT&T U-Verse endpoints shows that there were nearly 40,000 AT&T U-Verse SSH endpoints in early August and just over 10,000 in late August and early September, with the previously seen California and Texas concentrations drying up. What changed here is unknown.

Vulnerabilities 2 and 3: Port 49955/TCP Service Exposure

US law understandably prohibits research that performs any exploitation or intrusive activity, which rules out specifically testing the validity of the default credentials or attempting to exploit the command injection vulnerability. Combined with no affected hardware being readily available to us at the time of this writing, we had to get creative to estimate the number of exposed and potentially affected Arris devices. As mentioned in NoMotion’s blog, they observed several situations in which the HTTP(S) server listening on 49955/TCP would return various messages implying a lack of authorization, depending on how the request was made. Our first slice through the Sonar data from August 31, 2017 showed ~3.4 million open 49955/TCP endpoints, though only approximately 284,000 of those appear to be HTTPS. Further summarization showed that better than 99% of these responses were identical HTTP 403 Forbidden messages, giving us high confidence that these were all the same types of devices and were all likely affected. In some HTTP research situations we are able to examine the HTTP headers in the response for clues that might indicate a particular vendor, product, or version that would help narrow our search; however, the HTTP server in question here returns no headers at all.
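The summarization step above amounts to tallying byte-identical response payloads and checking whether one cluster dominates. A minimal sketch, with toy data standing in for real scan captures:

```python
from collections import Counter

# Toy stand-ins for raw HTTP response payloads collected by a scan;
# real captures are full responses, not these snippets.
responses = [
    "HTTP/1.1 403 Forbidden\r\n\r\n",
    "HTTP/1.1 403 Forbidden\r\n\r\n",
    "HTTP/1.1 403 Forbidden\r\n\r\n",
    "HTTP/1.1 200 OK\r\n\r\nhello",
]

# Tally byte-identical payloads; one dominant cluster suggests a
# single device family behind most of the endpoints.
tally = Counter(responses)
top_payload, top_count = tally.most_common(1)[0]
print(f"{top_count / len(responses):.0%} of responses are identical")  # 75% ...
```

With real data the same grouping can be done on a hash of each payload to keep memory use down across hundreds of thousands of endpoints.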
Furthermore, by examining the organization and locality information associated with the IPs in question, we start to see a pattern that this is isolated almost entirely to AT&T-related infrastructure in the Southern United States, with Texas cities dominating the top results: The ~53k likely affected devices that we failed to identify a city and state for all report the same latitude and longitude, smack in the middle of Cheney Reservoir in Kansas. This is an anomaly introduced by MaxMind, our source of Geo-IP information, and is the default location used when an IP cannot be located any more precisely than being in the United States. As further proof that we were on the right track, NoMotion has two locations, both in Texas. It’s likely that these Arris devices were first encountered in day-to-day work and life by NoMotion employees, and not scrounged off of eBay for research purposes. We’ve certainly happened upon interesting security research this way at Rapid7—it’s our nature as security researchers to poke at the devices around us. Because this HTTP service is wrapped in SSL, Sonar also records information about the SSL session. A quick look at the same devices identified above shows another clear pattern -- that most have the same, default, self-signed SSL certificate: This presents another vulnerability. Because the vast majority of these devices have the same certificate, they will also have the same private key. This means that anyone with access to the private key from one of these vulnerable devices is poised to be able to decrypt or manipulate traffic for other affected devices, should a sufficiently-skilled attacker position themselves in an advantageous manner, network-wise. Because some of the SharknAT&To vulnerabilities disclosed by NoMotion allow filesystem access, it is assumed that access to the private key, even if password protected, is fairly simple. 
To add insult to injury, because these same vulnerable services are the very services an ISP would use to manage, update, or patch affected systems against vulnerabilities like these, should an attacker compromise them in advance, all bets are off for patching these devices by any means short of physical replacement. It is also very curious that, outside of the top SSL certificate subject and fingerprint, there is still a clear pattern in the certificates: there is a common name with a long integer after it, which looks plausibly like a serial number. Perhaps at some point in their history, these devices used a different scheme for SSL certificate generation and inadvertently included the serial number. Some simple testing with a supposedly unaffected device showed that this number didn’t necessarily match the serial number. Examining Project Heisenberg’s logs for any traffic appearing on 49955/TCP shows only a minimal amount of background noise, and no obvious widespread exploitation yet in 2017.

Vulnerability 4: Port 61001/TCP Exposure

Much like with vulnerabilities 2 and 3 on port 49955/TCP, Sonar is a bit hamstrung when it comes to its ability to test for the presence of this vulnerability on the public internet. Following the same steps as we did with 49955/TCP, we observed ~5.8 million IPs on the public IPv4 internet with port 61001/TCP open. A second pass of filtering showed that nearly half of these were HTTPS. Using the same payload analysis technique as before didn’t pay dividends this time because, while the responses are all very similar (large clusters of HTTP 404, 403, and other default-looking HTTP responses), there is no clear outlier. The top response, from ~874,000 endpoints, looks similar to what we observed on 49955/TCP: lots of Texas, with some California slipping in. The vast majority of the remainder appear to be 2Wire DSL routers that are also used by AT&T U-Verse. The twist here is that Arris acquired 2Wire several years ago.
Whether or not these 2Wire devices are affected by any of these issues is currently unknown. As shown above, there is still a significant presence in the Southern United States, but there is also a sizeable Western presence now, which really highlights the supply chain problem that NoMotion mentioned in their research. While the 49955/TCP vulnerability appears to be isolated to just one region of the United States, the 61001/TCP issue has a broader reach, further implying that this extends beyond just the Arris models named by NoMotion, but not necessarily beyond AT&T. Repeating the same investigation into the SSL certificates on port 61001/TCP shows that there are likely some patterns here, including the exact same Arris certificate showing up again, this time with over 45,000 endpoints, and Motorola making an appearance with three-quarters of a million. Examining Project Heisenberg’s logs for any traffic appearing on 61001/TCP shows only a minimal amount of background noise and no obvious widespread exploitation yet in 2017.

Vulnerability 5: Port 49152/TCP Exposure

The service listening on 49152/TCP appears to be used as a kind of source-routing, application-layer-to-MAC-layer TCP proxy. By specifying a magic string, the correct opcode, a valid MAC, and a port, the “wproxy” service will forward any remaining data received during a connection to port 49152/TCP from (generally) the WAN to a host on the LAN with the specified MAC, on the specified port. Why exactly this needs to be exposed to the outside world with no restrictions whatsoever is unknown, but perhaps the organizations in question deployed this for debugging and maintenance purposes and failed to properly secure it. In order to gauge exposure of this issue, we developed a Sonar study that sends the wproxy service a syntactically valid payload that elicits an error response.
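A probe in that spirit might be packed along these lines. Note that the actual wproxy wire format (magic value, opcode width, field ordering) is not published here, so every constant below is a hypothetical placeholder, not the bytes Sonar sent:

```python
import struct

# Hypothetical sketch of a deliberately invalid wproxy-style probe;
# the real magic string, opcode width, and field order are NOT
# documented in this post, so all constants are placeholders.
MAGIC = b"wproxy\x00"   # placeholder magic string
BAD_OPCODE = 0xFF       # deliberately invalid opcode
BAD_MAC = bytes(6)      # 00:00:00:00:00:00 -- invalid on any LAN
BAD_PORT = 0            # invalid destination port

def build_probe(magic, opcode, mac, port):
    """Pack magic + opcode + MAC + port into a single probe payload."""
    return magic + struct.pack("!B6sH", opcode, mac, port)

probe = build_probe(MAGIC, BAD_OPCODE, BAD_MAC, BAD_PORT)
print(len(probe))  # 16
```

The idea is that a syntactically valid header with semantically invalid fields should trigger an error reply that fingerprints the service without ever proxying any traffic.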
More specifically, the study sends a request with a valid magic string, an invalid opcode, an invalid MAC, and an invalid port, which in turn generally causes the remote endpoint to return an error that allows us to positively identify the wproxy service. Because this vulnerability is inherent in the service itself due to a lack of any authentication or authorization, any endpoint exposing this service is at risk. As with the other at-risk services described so far, our first step was to determine how many public IPv4 endpoints seemed to have the affected port, 49152/TCP, open. A quick zmap scan showed nearly 8 million hosts with this port open. With our limited knowledge of the protocol, we looked for any wproxy-like responses, which quickly whittled down the list to approximately 42,000 IPv4 hosts exposing the wproxy service. We had hoped that a quick application of geo-IP would finish the job, but it wasn’t quite that simple. Using the same techniques as with other services, we grouped by common locations until something caught our eye, and immediately we knew something was up. Up until this point, all of this had landed squarely in AT&T land, clustering around Texas and California, but several different lenses into the 49152/TCP data pointed to one region: Connecticut. Sure, there are a few AT&T mentions, and even five oddly belonging to Arris in Georgia, but otherwise this particular service seemed off. Why all Texas/California AT&T previously, but now Frontier in Connecticut? Guesses of bad geo-IP data wouldn’t be too far off, but in reality, Frontier acquired all of AT&T’s broadband business in Connecticut three years ago.
This means that AT&T broadband customers who were at risk of having their internal networks swiss-cheesed by determined attackers with a penchant for packets have, for at least three years, actually been Frontier customers using AT&T hardware, almost certainly further complicating the supply chain problem and definitely putting customers at risk because of a service that should never have seen the public internet in the first place. Examining Project Heisenberg’s logs for any traffic appearing on 49152/TCP shows largely just suspected background noise in 2017, albeit a little higher than on ports 49955/TCP and 61001/TCP. There are a few slight spikes back in February 2017, perhaps indicating some early scouting, but it is just as likely to have been background noise or probes for entirely different issues. Some high-level investigation shows a deluge of blindly lobbed HTTP exploits at this port.

Conclusions

The issues disclosed by NoMotion are certainly attention-grabbing, since the initial analysis implies that AT&T U-Verse, a national internet service provider with millions of customers, is powered by dangerously vulnerable home routers. However, our analysis of what actually matches the described SharknAT&To indicators seems to point to a more localized phenomenon; Texas and other Southern areas are primarily indicated, with flare-ups in California, Chicago, and Connecticut, and significantly lower populations in other regions of the U.S. These results seem to imply which vendor is in the best position to fix these bugs, but the supply chain problems detailed above add a level of complication that will inevitably leave some customers at risk unnecessarily. Armed with these Sonar results, we can say with confidence that these vulnerabilities are almost wholly contained in the AT&T U-Verse and associated networks, and not part of the wider Arris ecosystem of hardware.
This, in turn, implies that the software was produced or implemented by the ISP, and not natively shipped by the hardware manufacturer. This knowledge will hopefully speed up remediation. Interested in further collaboration on this? Have additional information? Questions? Comments? Leave them here or reach out to research@rapid7.com!

R7-2017-07: Multiple Fuze TPN Handset Portal vulnerabilities (FIXED)


This post describes three security vulnerabilities related to access controls and authentication in the TPN Handset Portal, part of the Fuze platform. Fuze fixed all three issues by May 6, 2017, and user action is not required to remediate. Rapid7 thanks Fuze for their quick and thoughtful response to these vulnerabilities:

R7-2017-07.1, CWE-284 (Improper Access Control): An unauthenticated remote attacker can enumerate the MAC addresses associated with registered handsets of Fuze users. This allows them to craft a URL that reveals details about the user, including their Fuze phone number, email address, parent account name/location, and a link to an administration interface. This information is returned over HTTP and does not require authentication.

R7-2017-07.2, CWE-319 (Cleartext Transmission of Sensitive Information): The administration interface URL revealed by the URLs enumerated in R7-2017-07.1 will prompt for a password over an unencrypted HTTP connection. An attacker with a privileged position on the network can capture this traffic.

R7-2017-07.3, CWE-307 (Improper Restriction of Excessive Authentication Attempts): Authentication requests to the administration portal do not appear to be rate-limited, thus allowing attackers to potentially find successful credentials through brute-force attempts.

Product Description

Fuze is an enterprise, multi-platform voice, messaging, and collaboration service created by Fuze, Inc. It is described fully at the vendor's website. While much of the Fuze suite of applications is delivered as web-based SaaS components, there are endpoint client applications for a variety of desktop and mobile platforms.

Credit

These issues were discovered by a Rapid7 user, and they are being disclosed in accordance with Rapid7's vulnerability disclosure policy.
Exploitation

R7-2017-07.1

Any unauthenticated user can browse to http://mb.thinkingphones.com/tpn-portlet/mb/MACADDRESS and, if a valid MAC address is provided in place of MACADDRESS, receive a response that includes the following data about a Fuze handset user:

- Owner email address
- Account (including location information)
- Primary phone number
- Administration portal link

Here is a (redacted) example of retrieving the above information using Fuze's TPN Portlet: While the total possible MAC address space is large (48 bits), the practical space in this case is significantly smaller. An attacker would only need to enumerate options starting with the relevant published OUIs to target the subset of MAC addresses for Polycom and Yealink phones, which are the officially supported phone brands that Fuze offers, as outlined here. For example, Polycom's OUIs are 00:04:F2 and 64:16:7F. An attacker can use this information to enumerate all Fuze customers/users with hard phones, collect their email addresses and phone numbers, and also access the Fuze device admin login page (shown below) and potentially make configuration changes. While it is common for handsets to request configuration from a remote server during boot, and indeed for those requests to not be authenticated, the fact that the configuration server is located in the cloud versus on-prem, and the fact that the specific URLs are crafted using a known pattern of MAC addresses, adds an unexpected surface for undesired information disclosure.

R7-2017-07.2

Network traffic between a handset and the TPN Portal (http://mb.thinkingphones.com/tpn-portlet/mb/MACADDRESS/admin.jsp) is sent over HTTP.
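Stepping back to the enumeration math in R7-2017-07.1: with a vendor OUI (the first three bytes) fixed, only the 24-bit device suffix varies, so each OUI covers about 16.8 million candidates rather than 2**48 for the full MAC space. A back-of-the-envelope sketch (the generator shown is illustrative, not a real enumeration tool):

```python
# With the first three bytes (the OUI) fixed, only the 24-bit device
# suffix must be enumerated -- roughly 16.8 million values per OUI
# instead of 2**48 for the whole MAC address space.
POLYCOM_OUIS = ["00:04:F2", "64:16:7F"]  # the OUIs cited above

def candidate_macs(oui, limit):
    """Yield the first `limit` candidate MAC strings under a given OUI."""
    for suffix in range(limit):
        b = suffix.to_bytes(3, "big")
        yield f"{oui}:{b[0]:02X}:{b[1]:02X}:{b[2]:02X}"

per_oui = 2 ** 24
print(per_oui)                              # 16777216
print(next(candidate_macs("00:04:F2", 1)))  # 00:04:F2:00:00:00
```

That shrinkage, combined with unauthenticated HTTP responses, is what made the enumeration practical.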
Thus, if an attacker is able to capture or intercept network traffic while the handset boots up, they would be able to view the content of requests made to the Portal, including the admin code, as shown below.

R7-2017-07.3

If an attacker was not listening to network traffic during handset boot, they could still determine the administration portal URL by MAC enumeration, as mentioned in R7-2017-07.1. Given that URL, the attacker could try various admin codes until they successfully logged in, as it does not appear that authentication attempts are limited.

Remediation

Fuze addressed R7-2017-07.1 on April 29, 2017 by requiring password authentication to access the TPN portal (http://mb.thinkingphones.com/tpn-portlet/mb/MACADDRESS), and R7-2017-07.2 on May 6, 2017 by encrypting traffic to the TPN portal. No user action is required to remediate these two issues. Hashed passwords were pushed out by Fuze to customer handsets during a daily required update check. Handsets were also configured to use TLS for future communication with the portal at that time. After this update was pushed, Fuze's servers were configured to deny unauthenticated requests, as well as requests made over HTTP. If any handsets did not receive these updates, users would not be able to perform some actions from the handset directly, such as re-assigning to a new user. This may impact a small number of users, who should work with Fuze support to resolve the issue. Phone re-assignment and other configuration changes can still be made and pushed from the Fuze server side. More importantly, if a handset did happen to be offline during the initial update push, once back online it would still be able to download firmware updates and essential configuration updates, including those related to SIP and TLS requirements. Fuze addressed R7-2017-07.3 on May 6, 2017 by rate-limiting authentication attempts to the administration portal.
In addition, MAC enumeration to find the administration portal URLs is no longer possible given the authentication requirement. No user action is required to remediate this issue, as the change was made to Fuze's servers.

Vendor Statement

Rapid7 is a Fuze customer and a highly valued voice in ensuring that Fuze is continuously improving the security of its voice, video, and messaging service. As users of the entire Fuze platform, Rapid7’s team identified security weaknesses and responsibly disclosed them to the Fuze security team. In this case, while the exposure was a limited set of customer data, Fuze took immediate action upon receiving notification by Rapid7, and remediated the vulnerabilities with its handset provisioning service, in full, within two weeks. Fuze has no evidence of any bad actors exploiting this vulnerability to compromise customer data. Fuze is grateful to Rapid7 for its continued partnership in responsibly sharing security information, and believes in its larger mission to normalize the vulnerability disclosure process across the entire software industry. -- Chris Conry, CIO of Fuze

Disclosure Timeline

Wed, Apr 12, 2017: Issues discovered by Rapid7
Tue, Apr 25, 2017: Details disclosed to Fuze
Sat, Apr 29, 2017: R7-2017-07.1 fixed by Fuze
Sat, May 6, 2017: R7-2017-07.2 and R7-2017-07.3 fixed by Fuze
Tue, May 23, 2017: Disclosed to CERT/CC
Fri, May 26, 2017: CERT/CC and Rapid7 decided no CVEs are warranted, since these issues exist on the vendor's side and customers do not need to take action.
Tue, Aug 22, 2017: Public disclosure

You've Got 0-Day!


Hey all, it feels like it's been forever since I wrote a blog post that wasn't about some specific disaster currently consuming the Internet, so I just wanted to drop a note here about how I'll be speaking at UNITED 2017, Rapid7's annual security summit in Boston September 11-14. Specifically, I'll be closing out the Research and Collaborate track at UNITED on a topic near and dear to my heart: the vagaries of vulnerability disclosure. Vuln disclosure is a funny business; when you're on the receiving side, it's at best some unwelcome news about some bug in your product that's putting your customers at risk. If you're on the giving side, it's pretty much an invitation for angry letters from CTOs and their attorneys. So why bother? Turns out, despite all the emotional pain associated with it, reasonable vulnerability disclosure is pretty much the most effective tool we have to make the internet-connected products and services we produce and use that much stronger in the face of an increasingly hostile public network. We need vuln disclosure conversations in order to get better at what we do, since it's literally impossible to write, assemble, package, and deliver software of any complexity completely vulnerability-free on the first try. So, the goal of this talk is to share some stories about my experiences in vuln handling from both sides. As director of research here at Rapid7, I'm often the first point of contact for software and technology vendors when one of our researchers uncovers a vulnerability. On the flip side, I also get notifications about Rapid7 product bugs from security@rapid7.com, so I spend a fraction of my work life helping to get those bits of nastiness resolved. If you're looking for tips and advice on how to handle vulnerability disclosures—either as a discoverer, or as someone responsible for patching shipping software—then I hope my experiences will give you some insight into how this surprisingly emotion-driven business of disclosure works. 
Haven't yet signed up to join us at UNITED this year? Register here.

Remote Desktop Protocol (RDP) Exposure


The Remote Desktop Protocol, commonly referred to as RDP, is a proprietary protocol developed by Microsoft that is used to provide a graphical means of connecting to a network-connected computer. RDP client and server support has been present in varying capacities in nearly every Windows version since NT. Outside of Microsoft's offerings, there are RDP clients available for most other operating systems. If the nitty-gritty of protocols is your thing, Wikipedia's Remote Desktop Protocol article is a good start on your way to a trove of TechNet articles. RDP is essentially a protocol for dangling your keyboard, mouse, and display for others to use. As you might expect, a juicy protocol like this has a variety of knobs used to control its security capabilities, including controlling user authentication, what encryption is used, and more. The default RDP configuration on older versions of Windows left it vulnerable to several attacks when enabled; however, newer versions have upped the game considerably by requiring Network Level Authentication (NLA) by default. If you are interested in reading more about securing RDP, UC Berkeley has put together a helpful guide, and Tom Sellers, prior to joining Rapid7, wrote about specific risks related to RDP and how to address them. RDP's history from a security perspective is varied.
Dating back to 1999, there have been 20 Microsoft security updates specifically related to RDP and at least 24 separate CVEs:

MS99-028: Terminal Server Connection Request Flooding Vulnerability
MS00-087: Terminal Server Login Buffer Overflow Vulnerability
MS01-052: Invalid RDP Data Can Cause Terminal Service Failure
MS02-051: Cryptographic Flaw in RDP Protocol can Lead to Information Disclosure
MS05-041: Vulnerability in Remote Desktop Protocol Could Allow Denial of Service
MS09-044: Vulnerabilities in Remote Desktop Connection Could Allow Remote Code Execution
MS11-017: Vulnerability in Remote Desktop Client Could Allow Remote Code Execution
MS11-061: Vulnerability in Remote Desktop Web Access Could Allow Elevation of Privilege
MS11-065: Vulnerability in Remote Desktop Protocol Could Allow Denial of Service
MS12-020: Vulnerabilities in Remote Desktop Could Allow Remote Code Execution
MS12-036: Vulnerability in Remote Desktop Could Allow Remote Code Execution
MS12-053: Vulnerability in Remote Desktop Could Allow Remote Code Execution
MS13-029: Vulnerability in Remote Desktop Client Could Allow Remote Code Execution
MS14-030: Vulnerability in Remote Desktop Could Allow Tampering
MS14-074: Vulnerability in Remote Desktop Protocol Could Allow Security Feature Bypass
MS15-030: Vulnerability in Remote Desktop Protocol Could Allow Denial of Service
MS15-067: Vulnerability in RDP Could Allow Remote Code Execution
MS15-082: Vulnerabilities in RDP Could Allow Remote Code Execution
MS16-017: Security Update for Remote Desktop Display Driver to Address Elevation of Privilege
MS16-067: Security Update for Volume Manager Driver

In more recent times, the Esteemaudit exploit was found as part of the ShadowBrokers leak targeting RDP on Windows 2003 and XP systems, and was perhaps the reason for the most recent RDP vulnerability addressed in CVE-2017-0176.
RDP is disabled by default for all versions of Windows but is very commonly exposed in internal networks for ease of use in a variety of duties like administration and support. I can't think of a place where I've worked where it wasn't used in some capacity. There is no denying the convenience it provides. RDP also finds itself exposed on the public internet more often than you might think. Depending on how RDP is configured, exposing it on the public internet ranges from suicidal on the weak end to not-too-unreasonable on the other. It is easy to simply suggest that proper firewall rules or ACLs restricting RDP access to all but trusted IPs is sufficient protection, but all that extra security only gets in the way when Bob-from-Accounting's IP address changes weekly. Sure, a VPN might be something that RDP could hide behind and be considerably more secure, but you could also argue that a highly secured RDP endpoint on the public internet is comparable security-wise to a VPN.  And when your security-unsavvy family members or friends need help from afar, enabling RDP is definitely an option that is frequently chosen. There have also been reports that scammers have been using RDP as part of their attacks, often convincing unwary users to enable RDP so that “remote support” can be provided.  As you can see and imagine, there are all manner of ways that RDP could end up exposed on the public internet, deliberately or otherwise. It should come as no surprise, then, to learn that we've been doing some poking at the global exposure of RDP on the public IPv4 internet as part of Rapid7 Labs' Project Sonar. Labs first looked at the abuse of RDP from a honeypot's perspective as part of the Attackers Dictionary research published last year. Around the same time, in early 2016, Sonar observed 10.8 million supposedly open RDP endpoints. 
As part of the research for Rapid7's 2016 National Exposure Index, we observed 9 million and 9.4 million supposedly open RDP endpoints in our two measurements in the second quarter of 2016. More recently, as part of the 2017 National Exposure Index, in the first quarter of 2017, Sonar observed 7.2 million supposedly open RDP endpoints. Exposing an endpoint is one thing, but actually exposing the protocol in question is where the bulk of the risk comes from. As part of running Sonar, we frequently see a variety of honeypots, tarpits, IPs or other security devices that will make it appear as if an endpoint is open when it really isn't—or when it really isn't speaking the protocol you are expecting. As such, I'm always skeptical of these initial numbers. Surely there aren't really 7-10 million systems exposing RDP on the public internet. Right?

Recently, we launched a Sonar study in order to shed more light on the number of systems actually exposing RDP on the public internet. We built on the previous RDP studies, which were simple zmap SYN scans, by following up with a full connection to each IP that responded positively and attempting the first in a series of protocol exchanges that occur when an RDP client first contacts an RDP server. This simple, preliminary protocol negotiation mimics what modern RDP clients perform and is similar to what Nmap uses to identify RDP. This 19-byte RDP negotiation request should elicit a response from almost every valid RDP configuration it encounters, from the default (less secure) settings of older RDP versions to the NLA and SSL/TLS requirements of newer defaults. We analyzed the responses, tallying any that appeared to be from RDP-speaking endpoints, counting both error messages indicating possible client or server-side configuration issues as well as success messages. Of 11 million open 3389/TCP endpoints, 4.1 million responded in such a way that they were speaking RDP in some manner or another.
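For the curious, the 19-byte negotiation request is small enough to build by hand. Below is a rough sketch (field layout from the public MS-RDPBCGR specification; the function names and the choice of requested protocols are our own illustration, not Sonar's actual tooling):

```python
import socket
import struct

def build_rdp_negotiation_request(requested_protocols=0x00000003):
    """Build the 19-byte X.224 Connection Request carrying an RDP_NEG_REQ.

    requested_protocols=0x03 proposes SSL/TLS (0x01) and CredSSP/NLA (0x02),
    which is roughly what a modern RDP client offers.
    """
    # RDP_NEG_REQ (little-endian): type=0x01, flags=0, length=8, protocols
    neg_req = struct.pack("<BBHI", 0x01, 0x00, 0x0008, requested_protocols)
    # X.224 Connection Request TPDU: length indicator, code 0xE0 (CR),
    # dst-ref, src-ref, class 0  (big-endian / network order)
    x224_cr = struct.pack(">BBHHB", 6 + len(neg_req), 0xE0, 0x0000, 0x0000, 0x00)
    # TPKT header: version 3, reserved, total length (4 + 7 + 8 = 19)
    tpkt = struct.pack(">BBH", 0x03, 0x00, 4 + len(x224_cr) + len(neg_req))
    return tpkt + x224_cr + neg_req

def probe_rdp(host, port=3389, timeout=5):
    """Send the probe and return the raw response bytes (illustrative only)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_rdp_negotiation_request())
        return s.recv(1024)

# Usage (against a host you are authorized to test):
#   response = probe_rdp("192.0.2.10")
```

Any endpoint that answers this with a well-formed TPKT/X.224 response is speaking RDP in some fashion, regardless of which security protocol it ultimately insists on.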
This number is shockingly high when you remember that this protocol is effectively a way to expose keyboard, mouse, and ultimately a Windows desktop over the network. Furthermore, any RDP-speaking endpoints discovered by this Sonar study are not applying basic firewall rules or ACLs to protect this service, which calls into question whether any of the other basic security practices have been applied to these endpoints. Given the myriad ways that RDP could end up exposed on the public internet as observed in this recent Sonar study, it is hard to say at first glance why any one country would have more RDP exposed than another, but clearly the United States and China have something different going on than everyone else: Looked at from a different angle, by examining the organizations that own the IPs with exposed RDP endpoints, things start to become much clearer: The vast majority of these providers are known to be cloud, virtual, or physical hosting providers where remote access to a Windows machine is a frequent necessity; it's no surprise, therefore, that they dominate exposure. We can draw further conclusions by examining the RDP responses we received. Amazingly, over 83% of the RDP endpoints we identified indicated that they were willing to proceed with CredSSP as the security protocol, implying that the endpoint is willing to use one of the more secure protocols to authenticate and protect the RDP session. A small handful in the few thousand range selected SSL/TLS. Just over 15% indicated that they didn't support SSL/TLS (despite our also proposing CredSSP…) or that they only supported the legacy "Standard RDP Security", which is susceptible to man-in-the-middle attacks. Over 80% of exposed endpoints supporting common means for securing RDP sessions is rather impressive. Is this a glimmer of hope for the arguably high number of exposed RDP endpoints?
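Tallying which security protocol each server selected comes down to decoding its X.224 Connection Confirm payload. A hedged sketch of such a classifier (constants from the public MS-RDPBCGR specification; the function name and the string labels are illustrative, not the study's actual code):

```python
import struct

# selectedProtocol values carried in an RDP_NEG_RSP (MS-RDPBCGR)
PROTOCOLS = {
    0x00000000: "Standard RDP Security",
    0x00000001: "SSL/TLS",
    0x00000002: "CredSSP",
    0x00000008: "CredSSP with Early User Authorization",
}
TYPE_RDP_NEG_RSP = 0x02      # server accepted one of the proposed protocols
TYPE_RDP_NEG_FAILURE = 0x03  # server rejected the proposal

def classify_rdp_response(data):
    """Classify a raw response to the 19-byte RDP negotiation request."""
    if len(data) < 11 or data[0] != 0x03:
        return "not RDP"    # no TPKT header: something else on the port
    if len(data) < 19:
        # Legacy servers may answer with a bare X.224 CC and no negotiation
        # payload: "Standard RDP Security" only.
        return "Standard RDP Security (no negotiation payload)"
    # Negotiation structure sits after TPKT (4 bytes) + X.224 CC (7 bytes)
    ntype, _flags, _length, value = struct.unpack_from("<BBHI", data, 11)
    if ntype == TYPE_RDP_NEG_RSP:
        return PROTOCOLS.get(value, "unknown protocol %#x" % value)
    if ntype == TYPE_RDP_NEG_FAILURE:
        return "negotiation failure, code %d" % value
    return "unexpected payload"
```

Responses in the "CredSSP" bucket correspond to the 83% figure above; "negotiation failure" responses (e.g. a server demanding NLA from a client that didn't offer it) still confirm an RDP speaker, which is why errors were counted alongside successes.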
Areas for potential future research could include:

- Security protocols and supported encryption levels. Nmap has an NSE script that will enumerate the security protocols and encryption levels available for RDP. While 83% of the RDP-speaking endpoints support CredSSP, this does not mean that they don't also support less secure options; it just means that if a client is willing, they can take the more secure route.
- When TLS/SSL or CredSSP are involved, are organizations following best practices with regard to certificates, including self-signed certificates (perhaps leading to MitM?), expiration, and weak algorithms?
- Exploring the functionality of RDP in non-Microsoft client and server implementations.

Rapid7's InsightVM and Metasploit have fingerprinting coverage to identify RDP, and InsightVM has vulnerability coverage for all of the above-mentioned RDP vulnerabilities. Interested in this RDP research? Have ideas for more? Want to collaborate? We'd love to hear from you, either in the comments below or at research@rapid7.com.

Copyright Office Calls For New Cybersecurity Researcher Protections

On Jun. 22, the US Copyright Office released its long-awaited study on Sec. 1201 of the Digital Millennium Copyright Act (DMCA), and it has important implications for independent cybersecurity researchers. Mostly the news is very positive. Rapid7 advocated extensively for researcher protections to be built…

On Jun. 22, the US Copyright Office released its long-awaited study on Sec. 1201 of the Digital Millennium Copyright Act (DMCA), and it has important implications for independent cybersecurity researchers. Mostly the news is very positive. Rapid7 advocated extensively for researcher protections to be built into this report, submitting two sets of detailed comments—see here and here—to the Copyright Office with Bugcrowd, HackerOne, and Luta Security, as well as participating in official roundtable discussions. Here we break down why this matters for researchers, what the Copyright Office's study concluded, and how it matches up to Rapid7's recommendations.

What is DMCA Sec. 1201 and why does it matter to researchers?

Sec. 1201 of the DMCA prohibits circumventing technological protection measures (TPMs, like encryption, authentication requirements, region coding, user agents) to access copyrighted works, including software, without permission of the owner. That creates criminal penalties and civil liability for independent security research that does not obtain authorization for each TPM circumvention from the copyright holders of software. This hampers researchers' independence and flexibility. While the Computer Fraud and Abuse Act (CFAA) is more famous and feared by researchers, liability for DMCA Sec. 1201 is arguably broader because it applies to accessing software on devices you may own yourself, while the CFAA generally applies to accessing computers owned by other people. To temper this broad legal restraint on unlocking copyrighted works, Congress built in two types of exemptions to Sec. 1201: permanent exemptions for specific activities, and temporary exemptions that the Copyright Office can grant every three years in its "triennial rulemaking" process. The permanent exemption to the prohibition on circumventing TPMs for security testing is quite limited – in part because researchers are still required to get prior permission from the software owner.
The temporary exemptions go beyond the permanent exemptions. In Oct. 2015 the Copyright Office granted a very helpful exemption to Sec. 1201 for good faith security testing that circumvents TPMs without permission. However, this temporary exemption will expire at the end of the three-year exemption window. In the past, once a temporary exemption expired, advocates had to start from scratch in re-applying for another temporary exemption. The temporary exemption is set to expire Oct. 2018, and the renewal process will ramp up in the fall of this year.

Copyright Office study and Rapid7's recommendations

The Copyright Office announced a public study of Sec. 1201 in Dec. 2015. The Copyright Office undertook this public study to weigh legislative and procedural reforms to Sec. 1201, including the permanent exemptions and the three-year rulemaking process. The Copyright Office solicited two sets of public comments and held a roundtable discussion to obtain feedback and recommendations for the study. At each stage, Rapid7 provided recommendations on reforms to empower good faith security researchers while preserving copyright protection against infringement – though, it should be noted, there were several commenters opposed to reforms for researchers on IP protection grounds. Broadly speaking, the conclusions reached in the Copyright Office's study are quite positive for researchers and largely tracked the recommendations of Rapid7 and other proponents of security research. Here are four key highlights:

Authorization requirement: As noted above, the permanent exemption for security testing under Sec. 1201(j) is limited because it still requires researchers to obtain authorization to circumvent TPMs. Rapid7's recommendation is to remove this requirement entirely because good faith security research does not infringe copyright, yet an authorization requirement compromises independence and speed of research. The Copyright Office's study recommended [at pg.
76] that Congress make this requirement more flexible or remove it entirely. This is arguably the study's most important recommendation for researchers.

Multi-factor test: The permanent exemption for security testing under Sec. 1201(j) also conditions researchers' liability protection in part on the results being used "solely" to promote the security of the computer owner, and on the results not being used in a manner that violates copyright or any other law. Rapid7's recommendations are to remove "solely" (since research can be performed for the security of users or the public at large, not just the computer owner), and not to penalize researchers if their research results are used by unaffiliated third parties to infringe copyright or violate laws. The Copyright Office's study recommended [at pg. 79] that Congress remove the "solely" language, and either clarify or remove the provision penalizing researchers when research results are used by third parties to violate laws or infringe copyright.

Compliance with all other laws: The permanent exemption for security testing only applies if the research does not violate any other law. Rapid7's recommendation is to remove this caveat, since research may implicate obscure or wholly unrelated federal/state/local regulations, those other laws have their own enforcement mechanisms to pursue violators, and removing liability protection under Sec. 1201 would only have the effect of compounding the penalties. Unfortunately, the Copyright Office took a different approach, tersely noting [at pg. 80] that it is unclear whether the requirement to comply with all other laws impedes legitimate security research. The Copyright Office stated it welcomes further discussion during the next triennial rulemaking, and Rapid7 may revisit this issue then.

Streamlined renewal for temporary exemptions: As noted above, temporary exemptions expire after three years.
In the past, proponents have had to start from scratch to renew a temporary exemption – a process that involves structured petitions, multiple rounds of comments to the Copyright Office, and countering the arguments of opponents to the exemption. For researchers that want to renew the temporary security testing exemption, but that lack resources and regulatory expertise, this is a burdensome process. Rapid7's recommendation is for the Copyright Office to presume renewal of previously granted temporary exemptions unless there is a material change in circumstances that no longer justifies granting the exemption. In its study, the Copyright Office committed [at pg. 143] to streamlining the paperwork required to renew already granted temporary exemptions. Specifically, the Copyright Office will ask parties requesting renewal to submit a short declaration of the continuing need for an exemption, and whether there has been any material change in circumstances voiding the need for the exemption, and then the Copyright Office will consider renewal based on the evidentiary record and comments from the rulemaking in which the temporary exemption was originally granted. Opponents of renewing exemptions, however, must start from scratch in submitting evidence that a temporary exemption harms the exercise of copyright.

Conclusion—what's next?

In the world of policy, change typically occurs over time in small (often hard-won) increments before becoming enshrined in law. The Copyright Office's study is one such increment. For the most part, the study is making recommendations to Congress, and it will ultimately be up to Congress (which has its own politics, processes, and advocacy opportunities) to adopt or decline these recommendations. The Copyright Office's study comes at a time when the House Judiciary Committee is broadly reviewing copyright law with an eye towards possible updates.
However, copyright is a complex and far-reaching field, and it is unclear when Congress will actually take action. Nonetheless, the Copyright Office's opinion on these issues will carry significant weight in Congress' deliberations, so it would have been a heavy blow if the Copyright Office's study had instead called for tighter restrictions on security research. Importantly, the Copyright Office's new, streamlined process for renewing already granted temporary exemptions will take effect without Congress' intervention. The streamlined process will be in place for the next "triennial rulemaking," which begins in late 2017 and concludes in 2018, and which will consider whether to renew the temporary exemption for security research. This is a positive, concrete development that will reduce the administrative burden of applying for renewal and increase the likelihood of continued protections for researchers. The Copyright Office's study noted that "Independent security test[ing] appears to be an important component of current cybersecurity practices". This recognition and subsequent policy shifts on the part of the Copyright Office are very encouraging. Rapid7 believes that removing legal barriers to good faith independent research will ultimately strengthen cybersecurity and innovation, and we hope to soon see legislative reforms that better balance copyright protection with legitimate security testing.

Rapid7 issues comments on NAFTA renegotiation

In April 2017, President Trump issued an executive order directing a review of all trade agreements. This process is now underway: The United States Trade Representative (USTR) – the nation's lead trade agreement negotiator – formally requested public input on objectives for the renegotiation of the North…

In April 2017, President Trump issued an executive order directing a review of all trade agreements. This process is now underway: The United States Trade Representative (USTR) – the nation's lead trade agreement negotiator – formally requested public input on objectives for the renegotiation of the North American Free Trade Agreement (NAFTA). NAFTA is a trade agreement between the US, Canada, and Mexico that covers a huge range of topics, from agriculture to healthcare. Rapid7 submitted comments in response, focusing on 1) preventing data localization, 2) alignment of cybersecurity risk management frameworks, 3) protecting strong encryption, and 4) protecting independent security research. Rapid7's full comments on the renegotiation of NAFTA are available here.

1) Preserving global free flow of information – preventing data localization

Digital goods and services are increasingly critical to the US economy. By leveraging cloud computing, digital commerce offers significant opportunities to scale globally for individuals and companies of all sizes – not just large companies or tech companies, but any transnational company that stores customer data. However, regulations abroad that disrupt the free flow of information, such as "data localization" (requirements that data be stored in a particular jurisdiction), impede both trade and innovation. Data localization erodes the capabilities and cost savings that cloud computing can provide, while adding the significant costs and technical burdens of segregating data collected from particular countries, maintaining servers locally in those countries, and navigating complex geography-based laws. The resulting fragmentation also undermines the fundamental concept of a unified and open global internet.
Rapid7's comments [pages 2-3] recommended that NAFTA should 1) Prevent compulsory localization of data, and 2) Include an express presumption that governments would minimize disruptions to the flow of commercial data across borders.

2) Promote international alignment of cybersecurity risk management frameworks

When NAFTA was originally negotiated, cybersecurity was not the central concern that it is today. Cybersecurity is presently a global affair, and the consequences of malicious cyberattack or accidental breach are not constrained by national borders. Flexible, comprehensive security standards are important for organizations seeking to protect their systems and data. International interoperability and alignment of cybersecurity practices would benefit companies by enabling them to better assess global risks, make more informed decisions about security, hold international partners and service providers to a consistent standard, and ultimately better protect global customers and constituents. Stronger security abroad will also help limit the spread of malware contagion to the US. We support the approach taken by the National Institute of Standards and Technology (NIST) in developing the Cybersecurity Framework for Critical Infrastructure. The process was open, transparent, and carefully considered the input of experts from the public and private sector. The NIST Cybersecurity Framework is now seeing impressive adoption among a wide range of organizations, companies, and government agencies – including some critical infrastructure operators in Canada and Mexico. Rapid7's comments [pages 3-4] recommended that NAFTA should 1) recognize the importance of international alignment of cybersecurity standards, and 2) require the Parties to develop a flexible, comprehensive cybersecurity risk management framework through a transparent and open process.
3) Protect strong encryption

Reducing opportunities for attackers and identifying security vulnerabilities are core to cybersecurity. The use of encryption and security testing are key practices in accomplishing these tasks. International regulations that require weakening of encryption or prevent independent security testing ultimately undermine cybersecurity. Encryption is a fundamental means of protecting data from unauthorized access or use, and Rapid7 believes companies and innovators should be able to use the encryption protocols that best protect their customers and fit their service model – whether that protocol is end-to-end encryption or some other system. Market access rules requiring weakened encryption would create technical barriers to trade and put products with weakened encryption at a competitive disadvantage with uncompromised products. Requirements to weaken encryption would impose significant security risks on US companies by creating diverse new attack surfaces for bad actors, including cybercriminals and unfriendly international governments. Rapid7's comments [page 5] recommended that NAFTA forbid Parties from conditioning market access for cryptography in commercial applications on the transfer of decryption keys or alteration of the encryption design specifications.

4) Protect independent security research

Good faith security researchers access software and computers to identify and assess security vulnerabilities. To perform security testing effectively, researchers often need to circumvent technological protection measures (TPMs) – such as encryption, login requirements, region coding, user agents, etc. – controlling access to software (a copyrighted work). However, this activity can be chilled by Sec. 1201 of the Digital Millennium Copyright Act (DMCA) of 1998, which forbids circumvention of TPMs without the authorization of the copyright holder.
Good faith security researchers do not seek to infringe copyright, or to interfere with a rightsholder's normal exploitation of protected works. The US Copyright Office recently affirmed that security research is fair use and granted this activity, through its triennial rulemaking process, a temporary exemption from the DMCA's requirement to obtain authorization from the rightsholder before circumventing a TPM to safely conduct security testing on lawfully acquired (i.e., not stolen or "borrowed") consumer products. Some previous trade agreements have closely mirrored the Digital Millennium Copyright Act's (DMCA) prohibitions on unauthorized circumvention of TPMs in copyrighted works. This approach replicates internationally the overbroad restrictions on independent security testing that the US is now scaling back. Newly negotiated trade agreements should aim to strike a more modern and evenhanded balance between copyright protection and good faith cybersecurity research. Rapid7's comments [page 6] recommended that any anti-circumvention provisions of NAFTA should be accompanied by provisions exempting security testing of lawfully acquired copyrighted works.

Better trade agreements for the Digital Age?

Data storage and cybersecurity have undergone considerable evolution since NAFTA was negotiated more than a quarter century ago. To the extent that renegotiation may better address trade issues related to digital goods and services, we view the modernization of NAFTA and other agreements as potentially positive. The comments Rapid7 submitted regarding NAFTA will likely apply to other international trade agreements as they come up for renegotiation. We hope the renegotiations result in a broadly equitable and beneficial trade regime that reflects the new realities of the digital economy.

WannaCry Update: Vulnerable SMB Shares Are Widely Deployed And People Are Scanning For Them

WannaCry Overview Last week the WannaCry ransomware worm, also known as Wanna Decryptor, Wanna Decryptor 2.0, WNCRY, and WannaCrypt started spreading around the world, holding computers for ransom at hospitals, government offices, and businesses. To recap: WannaCry exploits a vulnerability in the Windows Server…

WannaCry Overview

Last week the WannaCry ransomware worm, also known as Wanna Decryptor, Wanna Decryptor 2.0, WNCRY, and WannaCrypt, started spreading around the world, holding computers for ransom at hospitals, government offices, and businesses. To recap: WannaCry exploits a vulnerability in the Windows Server Message Block (SMB) file sharing protocol. It spreads to unpatched devices directly connected to the internet and, once inside an organization, to those machines and devices behind the firewall as well. For full details, check out the blog post: Wanna Decryptor (WannaCry) Ransomware Explained. Since last Friday morning (May 12), there have been several other interesting posts about WannaCry from around the security community. Microsoft provided specific guidance to customers on protecting themselves from WannaCry. MalwareTech wrote about how registering a specific domain name triggered a kill switch in the malware, stopping it from spreading. Recorded Future provided a very detailed analysis of the malware's code. However, the majority of reporting about WannaCry in the general news has been that while MalwareTech's domain registration has helped slow the spread of WannaCry, a new version that avoids that kill switch will be released soon (or is already here) and that this massive cyberattack will continue unabated as people return to work this week. In order to understand these claims and monitor what has been happening with WannaCry, we have used data collected by Project Sonar and Project Heisenberg to measure the population of SMB hosts directly connected to the internet, and to learn about how devices are scanning for SMB hosts.

Part 1: In which Rapid7 uses Sonar to measure the internet

Project Sonar regularly scans the internet on a variety of TCP and UDP ports; the data collected by those scans is available for you to download and analyze at scans.io.
WannaCry exploits a vulnerability in devices running Windows with SMB enabled, which typically listens on port 445. Using our most recent Sonar scan data for port 445 and the recog fingerprinting system, we have been able to measure the deployment of SMB servers on the internet, differentiating between those running Samba (the open source SMB implementation used on Linux and other systems) and actual Windows devices running vulnerable versions of SMB. We find that there are over 1 million internet-connected devices that expose SMB on port 445. Of those, over 800,000 run Windows, and — given that these are nodes running on the internet exposing SMB — it is likely that a large percentage of these are vulnerable versions of Windows with SMBv1 still enabled (other researchers estimate up to 30% of these systems are confirmed vulnerable, but that number could be higher). We can look at the geographic distribution of these hosts using the following treemap (ISO3C labels provided where legible): The United States, Asia, and Europe have large pockets of Windows systems directly exposed to the internet while others have managed to be less exposed (even when compared to their overall IPv4 blocks allocation). We can also look at the various versions of Windows on these hosts: The vast majority of these are server-based Windows operating systems, but there is also an unhealthy mix of Windows desktop operating systems in there, some quite old. The operating system version levels also run the gamut of the Windows release history timeline. Using Sonar, we can get a sense for what is out there on the internet offering SMB services. Some of these devices are researchers running honeypots (like us), and some of these devices are other research tools, but the vast majority represent actual devices configured to run SMB on the public internet.
We can see them with our light-touch Sonar scanning, and other researchers with more invasive scanning techniques have been able to positively identify that infection rates are hovering around 2%.

Part 2: In which Rapid7 uses Heisenberg to listen to the internet

While Project Sonar scans the internet to learn about what is out there, Project Heisenberg is almost the inverse: it listens to the internet to learn about scanning activity. Since SMB typically runs on port 445, and the WannaCry malware scans port 445 for potential targets, if we look at incoming connection attempts on port 445 to Heisenberg nodes as shown in Figure 4, we can see that scanning activity spiked briefly on 2017-05-10 and 2017-05-11, then increased quite a bit on 2017-05-12, and has stayed at elevated levels since. Not all traffic to Heisenberg on port 445 is an attempt to exploit the SMB vulnerability that WannaCry targets (MS17-010). There is always scanning traffic on port 445 (just look at the activity from 2017-05-01 through 2017-05-09), but a majority of the traffic captured between 2017-05-12 and 2017-05-14 was attempting to exploit MS17-010 and likely came from devices infected with the WannaCry malware. To determine this, we matched the raw packets captured by Heisenberg on port 445 against sample packets known to exploit MS17-010. Figure 5 shows the number of unique IP addresses scanning for port 445, grouped by hour between 2017-05-10 and 2017-05-16. The black line shows that at the same time that the number of incoming connections increased (2017-05-12 through 2017-05-14), the number of unique IP addresses scanning for port 445 also increased. Furthermore, the orange line shows the number of new, never-before-seen IPs scanning for port 445. From this we can see that a majority of the IPs scanning for port 445 between 2017-05-12 and 2017-05-14 were new scanners.
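The unique-versus-new distinction behind those two lines can be computed with a running set of previously seen sources. A minimal sketch (the (hour, source IP) record format here is hypothetical, not Heisenberg's actual schema):

```python
from collections import defaultdict

def tally_scanners(records):
    """Given (hour, src_ip) connection records in time order, return, per
    hour, (count of distinct source IPs, count of never-before-seen IPs)."""
    seen = set()                # every IP observed in any earlier record
    unique = defaultdict(set)   # hour -> distinct IPs active that hour
    new = defaultdict(int)      # hour -> IPs making their first-ever appearance
    for hour, ip in records:
        if ip not in unique[hour]:
            unique[hour].add(ip)
            if ip not in seen:
                seen.add(ip)
                new[hour] += 1
    return {h: (len(unique[h]), new[h]) for h in unique}

# Toy example: IPs "a" and "b" appear in hour h1; "a" returns and "c" is
# first seen in hour h2.
records = [
    ("h1", "a"), ("h1", "b"), ("h1", "a"),
    ("h2", "a"), ("h2", "c"),
]
print(tally_scanners(records))  # {'h1': (2, 2), 'h2': (2, 1)}
```

A surge in which the "new" count tracks the "unique" count, as observed between 2017-05-12 and 2017-05-14, indicates fresh scanners joining in rather than known scanners merely ramping up.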
Finally, we see scanning activity from 157 different countries in the month of May, and scanning activity from 133 countries between 2017-05-12 and 2017-05-14. Figure 6 shows the top 20 countries from which we have seen scanning activity, ordered by the number of unique IPs from those countries. While we have seen the volume of scans on port 445 increase compared to historical levels, it appears that the surge in scanning activity seen between 2017-05-12 and 2017-05-14 has started to tail off.

So what?

Using data collected by Project Sonar, we have been able to measure the deployment of vulnerable devices across the internet, and we can see that there are many of them out there. Using data collected by Project Heisenberg, we have seen that while scanning for devices that expose port 445 has been observed for quite some time, the volume of scans on port 445 has increased since 2017-05-12, and a majority of those scans are specifically looking to exploit MS17-010, the SMB vulnerability that the WannaCry malware looks to exploit. MS17-010 will continue to be a vector used by attackers, whether from the WannaCry malware or from something else. Please follow Microsoft's advice and patch your systems. If you are a Rapid7 InsightVM or Nexpose customer, or you are running a free 30-day trial, here is a step-by-step guide on how you can scan your network to find all of the assets in your organization that are potentially at risk.

Coming Soon

If this sort of information about internet-wide measurements and analysis is interesting to you, stay tuned for the National Exposure Index 2017. Last year, we used Sonar scans to evaluate the security exposure of all the countries of the world based on the services they exposed on the internet. This year, we have run our studies again, we have improved our methodology and infrastructure, and we have new findings to share.
Related: Find all of our WannaCry-related resources here
[Blog] Using Threat Intelligence to Mitigate Wanna Decryptor (WannaCry)

Under the Hoodie: Actionable Research from Penetration Testing Engagements

Today, we're excited to release Rapid7's latest research paper, Under the Hoodie: Actionable Research from Penetration Testing Engagements, by Bob Rudis, Andrew Whitaker, and Tod Beardsley, with loads of input and help from the entire Rapid7 pentesting team. This paper covers the often occult art of…

Today, we're excited to release Rapid7's latest research paper, Under the Hoodie: Actionable Research from Penetration Testing Engagements, by Bob Rudis, Andrew Whitaker, and Tod Beardsley, with loads of input and help from the entire Rapid7 pentesting team. This paper covers the often occult art of penetration testing, and seeks to demystify the process, techniques, and tools that pentesters use to break into enterprise networks. By drawing on real, qualified data from the real-life engagements of dozens of pentesters in the field, we're able to suss out the most common vulnerabilities that are exploited, the most common network misconfigurations that are leveraged, and the most effective methods we've found to compromise high-value credentials.

Finding: Detection is Everything

Probably the most actionable finding we discovered is that most organizations that conduct penetration testing exercises have a severe lack of usable, reliable intrusion detection capabilities. Over two-thirds of our pentesters completely avoided detection during the engagement. This is especially concerning given that most assessments don't put a premium on stealth; due to constraints in time and scope, pentesters generate an enormous amount of malicious traffic. In an ideal network, these would be setting off alarm bells everywhere. Most engagements end with recommendations to implement some kind of incident detection and response, regardless of which specific techniques for compromise were used.

Finding: Enterprise Size and Industry Doesn't Matter

When we started this study, we expected to find quantitative differences between small networks and large networks, and between different industries.
After all, you might expect that a large, financial-industry enterprise of over 1,000 employees would be better equipped to detect and defend against unwelcome attackers, given the security resources available to it and required by various compliance regimes and regulatory requirements. Or, you might believe that a small, online-only retail startup would be more nimble and more familiar with the threats facing its business. Alas, this isn't the case. As it turns out, the detection and prevention rates are nearly identical between large and small enterprises, and no industry seemed to fare any better or worse when it came to successful compromises. This is almost certainly because IT infrastructure pretty much everywhere is built from the same software and hardware components. Thus, all networks tend to be vulnerable to the same common misconfigurations and share the same vulnerability profiles when patch management isn't firing at 100%. There are certainly differences in the details -- especially when it comes to custom-designed web applications -- but even those tend to be powered by the same sorts of frameworks and components.

The Human Touch

Finally, if you're not really into reading a bunch of stats and graphs, we have a number of "Under the Hoodie" sidebar stories, pulled from real-life engagements. For example, while discussing common web application vulnerabilities, we're able to share a story of how a number of otherwise lowish-severity, external web application issues led to the eventual compromise of the entire internal back-end network.
Not only are these stories fun to read, they do a pretty great job of illustrating how unrelated issues can conspire on an attacker's behalf to lead to surprising levels of unauthorized access.I hope you take a moment to download the paper and take a look at our findings; I don't know of any other research out there that explores the nuts and bolts of penetration testing in quite the depth or breadth that this report provides. In addition, we'll be covering the material at our booth at the RSA security conference next week in San Francisco, as well as hosting a number of "Ask a Pentester" sessions. Andrew and I will both be there, and we love nothing more than connecting with people who are interested in Rapid7's research efforts, so definitely stop by.

Snakes Masquerading as Vines

We spend a lot of time identifying trustworthiness in our day-to-day lives. We constantly evaluate trustworthiness in both the people that we meet and in the products and services that we decide to interact with. Imagine that you're like Tarzan in the jungle; you're trying…

We spend a lot of time identifying trustworthiness in our day-to-day lives. We constantly evaluate trustworthiness in both the people that we meet and in the products and services that we decide to interact with. Imagine that you're like Tarzan in the jungle; you're trying to navigate your way through products and services using the vines that hang in your path. Each vine either helps or hinders your path forward. Some are stronger than others and help you swing a far distance quickly and effectively (angel patterns). Others are actually snakes masquerading as vines: you reach out to grab hold and instead get bitten, releasing your grip and falling to the ground (dark patterns). As a user swinging through the jungle of products and services, it's easy to mistake a snake for a vine and end up lost on the ground. As designers, we need to do everything we can to make the vines of angel patterns obvious and remove the dark-pattern snakes from the user's path. Like any new relationship, using new software starts with a little bit of anxiety, felt the first time a user engages with your product. A product with clear, honest messaging and transparent communications can reduce this anxiety; we call that an angel pattern. There are many such angel patterns for achieving a trustworthy experience. Read the original article published in User Experience Magazine to learn how we apply these patterns.

Research Report: Vulnerability Disclosure Survey Results

When cybersecurity researchers find a bug in product software, what's the best way for the researchers to disclose the bug to the maker of that software? How should the software vendor receive and respond to researchers' disclosure? Questions like these are becoming increasingly important as…

When cybersecurity researchers find a bug in product software, what's the best way for the researchers to disclose the bug to the maker of that software? How should the software vendor receive and respond to researchers' disclosure? Questions like these are becoming increasingly important as more software-enabled goods - and the cybersecurity vulnerabilities they carry - enter the marketplace. But more data is needed on how these issues are being dealt with in practice. Today we helped publish a research report [PDF] that investigates attitudes and approaches to vulnerability disclosure and handling. The report is the result of two surveys – one for security researchers, and one for technology providers and operators – launched as part of a National Telecommunications and Information Administration (NTIA) “multistakeholder process” on vulnerability disclosure. The process split into three working groups: one focused on building norms/best practices for multi-party, complex disclosure scenarios; one focused on building best practices and guidance for disclosure relating to “cyber safety” issues; and one focused on driving awareness and adoption of vulnerability disclosure and handling best practices. It is this last group, the “Awareness and Adoption Working Group,” that devised and issued the surveys in order to understand what researchers and technology providers are doing on this topic today, and why. Rapid7 - along with several other companies, organizations, and individuals - participated in the project (in full disclosure, I am co-chair of the working group) as part of our ongoing focus on supporting security research and promoting collaboration between the security community and technology manufacturers. The surveys, issued in April, investigated the reality around awareness and adoption of vulnerability disclosure best practices.
I blogged at the time about why the surveys were important: in a nutshell, while the topic of vulnerability disclosure is not new, adoption of recommended practices is still seen as relatively low. The relationship between researchers and technology providers/operators is often characterized as adversarial, with friction arising from a lack of mutual understanding. The surveys were designed to uncover whether these perceptions are exaggerated, outdated, or truly indicative of what's really happening. In the latter instance, we wanted to understand the needs or concerns driving behavior. The survey questions focused on past or current behavior for reporting or responding to cybersecurity vulnerabilities, and processes that worked or could be improved. One quick note – our research efforts were somewhat imperfect because, as my data scientist friend Bob Rudis is fond of telling me, we effectively surveyed the internet (sorry, Bob!). This was really the only pragmatic option open to us; however, it did result in a certain amount of selection bias in who took the surveys. We made a great deal of effort to promote the surveys as far and wide as possible, particularly through vertical sector alliances and information sharing groups, but we expect that respondents have likely dealt with vulnerability disclosure in some way in the past. Nonetheless, we believe the data is valuable, and we're pleased with the number and quality of responses. There were 285 responses to the vendor survey and 414 to the researcher survey. View the infographic here [PDF].

Key findings

Researcher survey:
- The vast majority of researchers (92%) generally engage in some form of coordinated vulnerability disclosure. When they have gone a different route (e.g., public disclosure), it has generally been because of frustrated expectations, mostly around communication.
- The threat of legal action was cited by 60% of researchers as a reason they might not work with a vendor to disclose.
- Only 15% of researchers expected a bounty in return for disclosure, but 70% expected regular communication about the bug.

Vendor survey:
- Vendor responses were generally separable into “more mature” and “less mature” categories. Most of the more mature vendors (between 60 and 80%) used all the processes described in the survey.
- Most “more mature” technology providers and operators (76%) look internally to develop vulnerability handling procedures, with smaller proportions looking to their peers or to international standards for guidance.
- More mature vendors reported that a sense of corporate responsibility or the desires of their customers were the reasons they had a disclosure policy.
- Only one in three surveyed companies considered and/or required suppliers to have their own vulnerability handling procedures.

Building on the data for a brighter future

With the rise of the Internet of Things, we are seeing unprecedented levels of complexity and connectivity in technology, introducing cybersecurity risk in all sorts of new areas of our lives. Adopting robust mechanisms for identifying and reporting vulnerabilities, and building productive models for collaboration between researchers and technology providers/operators, has never been so critical. It is our hope that this data can help guide future efforts to increase awareness and adoption of recommended disclosure and handling practices. We have already seen some very significant evolutions in the vulnerability disclosure landscape – for example, the DMCA exemption for security research; the FDA post-market guidance; and proposed vulnerability disclosure guidance from NHTSA. Additionally, in the past year, we have seen notable names in defense, aviation, automotive, and medical device manufacturing and operating all launch high-profile vulnerability disclosure and handling programs.
These steps are indicative of an increased level of awareness and appreciation of the value of vulnerability disclosure, and each paves the way for yet more widespread adoption of best practices. The survey data itself offers a hopeful message in this regard - many of the respondents indicated that they clearly understand and appreciate the benefits of a coordinated approach to vulnerability disclosure and handling. Importantly, both researchers and more mature technology providers indicated a willingness to invest time and resources into collaborating so they can create more positive outcomes for technology consumers. Yet, there is still a way to go. The data also indicates that, to some extent, there are still perception and communication challenges between researchers and technology providers/operators, the most worrying of which is that 60% of researchers indicated concern over legal threats. Responding to these challenges, the report advises that: “Efforts to improve communication between researchers and vendors should encourage more coordinated, rather than straight-to-public, disclosure. Removing legal barriers, whether through changes in law or clear vulnerability handling policies that indemnify researchers, can also help. Both mature and less mature companies should be urged to look at external standards, such as ISOs, and further explanation of the cost-savings across the software development lifecycle from implementation of vulnerability handling processes may help to do so.” The bottom line is that more work needs to be done to drive continued adoption of vulnerability disclosure and handling best practices. If you are an advocate of coordinated disclosure – great! – keep spreading the word. If you have not previously considered it, now is the perfect time to start investigating it. ISO 29147 is a great starting point, or take a look at some of the example policies, such as those from the Department of Defense or Johnson & Johnson.
If you have questions, feel free to post them here in the comments or contact community [at] rapid7 [dot] com. As a final thought, I would like to thank everyone that provided input and feedback on the surveys and the resulting data analysis - there were a lot of you and many of you were very generous with your time. And I would also like to thank everyone that filled in the surveys - thank you for lending us a little insight into your experiences and expectations. ~ @infosecjen

Signal to Noise in Internet Scanning Research

We live in an interesting time for research related to Internet scanning. There is a wealth of data and services to aid in research. Scanning related initiatives like Rapid7's Project Sonar, Censys, Shodan, Shadowserver or any number of other public/semi-public projects have been around…

We live in an interesting time for research related to Internet scanning. There is a wealth of data and services to aid in research. Scanning-related initiatives like Rapid7's Project Sonar, Censys, Shodan, Shadowserver, or any number of other public/semi-public projects have been around for years, collecting massive troves of data. The data and the services built around it have been used for all manner of research. In cases where existing scanning services and data cannot answer burning security research questions, it is not unreasonable to slap together some minimal infrastructure to perform Internet-wide scans. Mix the appropriate amounts of zmap or masscan with some carefully selected hosting/cloud providers, a dash of automation, and a crash course in the legal complexities of "scanning," and the questions you ponder over morning coffee can have answers by day's end.

So, from one perspective, there is an abundance of signal. Data is readily available. There is, unfortunately, a significant amount of noise that must be dealt with. Dig even slightly deep into almost any data produced by these scanning initiatives and you'll have a variety of problems to contend with that can waylay researchers. For example, there are a variety of snags related to the collection of the scan data that could influence the results of research:

- Natural fluctuation of IPs and endpoint reachability due to things like DHCP, mobile devices, or misconfiguration.
- When blacklists or opt-out lists are utilized to allow IP "owners" to opt out of a given project's scanning efforts, how big is this blacklist? What IPs are in it? How has it changed since the last scan?
- Are there design issues or bugs in the system used to collect the scan data in the first place that influenced the scan results?
- During a given study, were there routing or other connectivity issues that affected the reachability of targets?
- Has this data already been collected? If so, can that data be used instead of performing an additional scan?

Worse, even in the absence of any problems related to the collection of the scan data, the data itself is often problematic:

- Size. Scans of even just a single port and protocol can result in a massive amount of data. For example, a simple HTTP GET request to every 80/TCP IPv4 endpoint currently results in a compressed archive of over 75GB. Perform deeper HTTP/1.1 vhost scans and you'll quickly have to contend with a terabyte or more. Data of this size requires special consideration when it comes to storage, transfer, and processing.
- Variety. From Internet-connected bovine milking machines to toasters, *** toys, and appliances, an increasingly large number of "smart" or "intelligent" devices are being connected to the Internet, exposing services in places you might not expect them. For example, pick any TCP port and you can guarantee that some non-trivial number of the responses will be from HTTP services of one type or another. These potentially unexpected responses may need to be carefully handled during data analysis.
- Oddities. There is not a single TCP or UDP port that wouldn't yield a few thousand responses, regardless of how seemingly random the port may be. 12345/TCP? 1337/UDP? 65535/TCP? Sure. You can believe that there will be something out there responding on that port in some way. Oftentimes these responses are the result of some security device between the scan source and destination. For example, there is a large ISP that responds to any probe on any UDP port with an HTTP 404 response over UDP. There is a vendor with products and services used to combat DDoS that does something similar, responding to any inbound TCP connection with HTTP responses.

Lastly, there is the issue of focus. It is very easy for research based on Internet scanning data to quickly venture off course and become distracted.
There is seemingly no end to the number of strange things connected in strange ways to the public IP space, all of which will tempt the typically curious researcher. Be careful out there!
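The size and oddity problems described above can be tamed somewhat by streaming archives rather than loading them whole. Below is a minimal sketch, assuming a gzipped, line-delimited JSON scan archive where each record carries the raw response in a "data" field; the field names and file layout here are hypothetical, since real Sonar, Censys, or zmap output formats vary by study:

```python
import gzip
import json
from collections import Counter

def summarize_responses(path):
    """Stream a gzipped, line-delimited JSON scan archive and tally
    response types without loading the whole file into memory.

    Assumes each line is a JSON object with a "data" field holding the
    raw response payload (a hypothetical layout for illustration).
    """
    tally = Counter()
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            try:
                record = json.loads(line)
            except ValueError:
                # Massive scan archives routinely contain damaged lines.
                tally["malformed"] += 1
                continue
            payload = record.get("data", "")
            # Oddity check: HTTP banners showing up where we didn't
            # ask for HTTP (e.g., DDoS middleboxes answering anything).
            if payload.startswith("HTTP/"):
                tally["unexpected_http"] += 1
            else:
                tally["other"] += 1
    return tally
```

Because the archive is decompressed and parsed one line at a time, memory use stays flat even for the multi-gigabyte files described above, and the "unexpected_http" tally surfaces exactly the kind of out-of-place HTTP responders that can skew results for a non-HTTP port study.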

Featured Research

National Exposure Index 2017

The National Exposure Index is an exploration of data derived from Project Sonar, Rapid7's security research project that gains insights into global exposure to common vulnerabilities through internet-wide surveys.

Learn More

Toolkit

Make Your SIEM Project a Success with Rapid7

In this toolkit, get access to Gartner's report “Overcoming Common Causes for SIEM Solution Deployment Failures,” which details why organizations are struggling to unify their data and find answers from it. Also get the Rapid7 companion guide with helpful recommendations on approaching your SIEM needs.

Download Now

Podcast

Security Nation

Security Nation is a podcast dedicated to covering all things infosec – from what's making headlines to practical tips for organizations looking to improve their own security programs. Host Kyle Flaherty has been knee-deep in the security sector for nearly two decades. At Rapid7 he leads a solutions-focused team with the mission of helping security professionals do their jobs.

Listen Now