Rapid7 Blog


Container Security Assessment in InsightVM


Earlier in the year, in this blog post on modern network coverage and container security in InsightVM, we shared Rapid7’s plans to better understand and assess the modern, ever-changing network with Docker and container security. We began by introducing discovery of Docker hosts and images, as well as vulnerability assessment and secure configuration for Docker hosts. With these capabilities you can see where Docker technology lives in your environment and the exposure of your Docker hosts. Visibility into your modern infrastructure, including vulnerabilities on individual container images, is invaluable. Today we’re happy to announce the next stage of container security capabilities in InsightVM: container image assessment and visualization.

Container image visibility

InsightVM is built to provide visibility into your modern infrastructure; it’s the only solution that directly integrates with Azure, AWS, and VMware to automatically monitor your dynamic environments for new assets and vulnerabilities. Now, this visibility extends to vulnerabilities residing within Docker container images. When performing scans for vulnerabilities, InsightVM collects configuration information about Docker hosts and the images deployed on each host. One of the new ways InsightVM makes this information available is through Liveboards, a dashboard view that is updated in real time. You can add the Containers Dashboard to get a quick view, or add container-specific cards to create your own views. The new cards give you insight into the potential risk posed by containers in your environment, answering questions such as:

- How many container hosts exist in my environment?
- Which specific assets are container hosts?
- How many of the container images in my environment have been assessed for vulnerabilities?
- What are the most commonly deployed container images?

Expanding a card, we can see details of the assets that have been identified as Docker hosts.
You’ll notice new filters available, allowing you to tailor your visualizations based on container image metadata. We can also drill into individual hosts and view the container images that reside on each host. InsightVM also provides simple visibility into container images themselves: here we see a view of vulnerabilities on packages. From this view we can also explore the specifics of the layers that compose a container image. With InsightVM, getting visibility into container images is easy. However, most development teams working with containers make heavy use of container repositories.

Automatically assessing container registries

To get visibility at scale into the risks containers present in your environment, InsightVM offers integration with container registries, providing visibility into container images hosted in public and private registries. Here we see a list of registries connected to InsightVM. InsightVM is configured by default with connections to the Docker Hub and Quay.io registries, and additional connections may be created. Registries can contain many images; InsightVM automatically assesses the container images within a registry that are deployed in your network. You can be assured that when an image from the repository is deployed in your network, InsightVM will provide visibility into the vulnerabilities and configuration of the image. You can also assess or re-assess images as needed.

These capabilities make Rapid7 a great partner for securing your application development infrastructure; we can now help you:

- Assess and secure container images in InsightVM;
- Scan production applications for vulnerabilities with InsightAppSec;
- Monitor container usage and deployment with InsightOps;
- Get a penetration test of your application environment with actionable advice; and
- Build out a secure software development life cycle with expert guidance.

For more detailed information on using these capabilities in InsightVM, see our help page here.
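As a rough way to see the same layer-level information outside InsightVM, you can dump an image's layer commands with the Docker CLI and look for layers that install OS packages, which is a common way vulnerable software enters an image. This is only an illustrative sketch, not InsightVM's assessment logic: the grep patterns and the canned sample input (standing in for `docker history` output so the demo runs without a Docker host) are assumptions.

```shell
#!/bin/sh
# Sketch: flag image layers that install OS packages. On a real Docker
# host you would feed this from:
#   docker history --no-trunc --format '{{.CreatedBy}}' IMAGE
# The printf below stands in for that output so the sketch is self-contained.

flag_package_layers() {
  # Count layer commands that invoke a package manager install.
  grep -cE 'apt-get install|apk add|yum install'
}

printf '%s\n' \
  '/bin/sh -c apt-get update && apt-get install -y openssl' \
  '/bin/sh -c #(nop) COPY app /app' \
| flag_package_layers
```

Layers flagged this way are where an image-level vulnerability assessment will usually find its package findings.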
And of course, if you haven’t done so already, get a trial of InsightVM today and start assessing!

Patch Tuesday - September 2017


It's a big month, with Microsoft patching 85 separate vulnerabilities, including the two Adobe Flash Player Remote Code Execution (RCE) fixes bundled with the Edge and Internet Explorer 11 updates. Continuing recent trends, the bulk of Critical RCE vulnerabilities are client-side, primarily in Edge, IE, and Office. Microsoft has also released patches for today's branded public disclosure, "BlueBorne", which is a collection of vulnerabilities affecting the Bluetooth stacks of multiple vendors. The Microsoft-specific issue is CVE-2017-8628, a spoofing vulnerability that could allow a man-in-the-middle attack when in physical proximity to an affected system. In terms of exploitability, CVE-2017-8759 (a flaw in the way the .NET Framework processes untrusted input) is the most urgent, as it is already known to be exploited in the wild. Any attacker able to persuade a user to open a maliciously crafted document or application will be able to take control of affected systems with the same privileges as the user. Among the Office vulnerabilities, CVE-2017-8742, CVE-2017-8743, and CVE-2017-8744 are memory corruption vulnerabilities that could lead to RCE, which Microsoft has classified as likely to be exploited. Administrators should prioritize rolling out .NET fixes to workstations, then any relevant Windows 10 (which bundles Edge) and IE updates, followed by the Microsoft Office and system-level patches. As usual, there are also server-side patches that need to be applied. SharePoint sees a fix for an XSS vulnerability (CVE-2017-8629), as well as for two RCE vulnerabilities that also apply to Office Online Server (CVE-2017-8631 and CVE-2017-8743). Exchange Server also gets some love with fixes for CVE-2017-11761 and CVE-2017-8758 (Information Disclosure and Privilege Escalation, respectively). Of course, standard Windows Server systems are also getting critical fixes, such as the one for CVE-2017-0161, an RCE in NetBIOS over TCP/IP (NetBT).

Apache Struts S2-052 (CVE-2017-9805): What You Need To Know


Apache Struts, Again? What’s Going On?

Yesterday’s Apache Struts vulnerability announcement describes an XML deserialization issue in the popular Java framework for web applications. Deserialization of untrusted user input, also known as CWE-502, is a somewhat well-known vulnerability pattern, and I would expect crimeware kits to incorporate this vulnerability well before most enterprises have committed to a patch, given the complications that this patch introduces.

What’s The Catch?

The problem with deserialization vulnerabilities is that oftentimes, application code relies precisely on the unsafe deserialization routines being exploited. Therefore, anyone who is affected by this vulnerability needs to go beyond merely applying a patch and restarting the service, since the patch can change how the underlying application treats incoming data. Apache mentions this in the "Backward Compatibility" section of S2-052. An update note like "it is possible that some REST actions stop working" is enough to cause cold sweats for IT operations folks who need to both secure their infrastructure and ensure that applications continue to function normally.

What Can I Do?

Organizations that rely on Apache Struts to power their websites need to start that application-level testing now, so as to avoid becoming the next victims in a wave of automated attacks that leverage this vulnerability. Remote code execution means everything from defacements to ransoms and everything in between. In the meantime, Rapid7’s product engineering teams are working up coverage for organizations to detect, verify, and remediate this critical issue. A Metasploit module is in progress and will be released shortly to help validate any patching or other mitigations.
InsightVM customers with content version "Wednesday 6th September 2017" or later (check Administration > General to confirm your content version) can determine whether they have a vulnerable version of Apache Struts present on Unix hosts in their environment by performing an authenticated scan. The vulnerability ID is struts-cve-2017-9805, should you wish to set up a scan template with just this check enabled. It has also been tagged with 'Rapid7 Critical.' An unauthenticated check for CVE-2017-9805 is available for InsightVM and Nexpose under the same ID, struts-cve-2017-9805. This check does not remotely execute code; instead, it detects the presence of the vulnerable component against the root and default showcase URIs of Apache Struts instances. In addition to these specific updates, we’ve also produced a quick guide with step-by-step instructions on how InsightVM and Nexpose can be used to discover, assess, and track remediation of critical vulnerabilities, including this Apache Struts vuln. Not an InsightVM customer? Download a free 30-day trial today to get started.

Should I Panic?

Yes, you should panic. For about two minutes. Go ahead and get it out of your system. Once that’s done, though, the work of evaluating the Apache Struts patch and how it’ll impact your business needs to get started. We can’t stress enough the impact here: Java deserialization nearly always leads to point-and-click remote code execution in the context of the web service, and patching against new deserialization bugs carries some risk of altering the intended logic for your specific web application. It’s not a great situation to be in, but it’s surmountable. If you have any questions about this issue, feel free to comment below, or get in touch with your regular Rapid7 support contacts.
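To illustrate the shape of a non-executing presence check like the one described above, the sketch below posts a harmless XML body to a Struts REST endpoint and looks for a reflected deserialization error. This is emphatically NOT Rapid7's actual check logic: the target URL, XML body, and exception markers are all illustrative assumptions, and the demo runs against a canned response body so no live host is touched.

```shell
#!/bin/sh
# Hedged sketch of a benign spot-check for CVE-2017-9805. A vulnerable
# XStream-backed REST handler typically reflects a deserialization
# exception when handed unexpected XML.

probe() {  # usage: probe <url> -- requires network access to the target
  curl -s -m 10 -X POST -H 'Content-Type: application/xml' \
       --data '<probe/>' "$1"
}

looks_vulnerable() {  # reads an HTTP response body on stdin
  grep -qiE 'com\.thoughtworks\.xstream|CannotResolveClassException'
}

# Offline demo against a canned response body (no live host needed):
sample='com.thoughtworks.xstream.mapper.CannotResolveClassException: probe'
if printf '%s\n' "$sample" | looks_vulnerable; then
  echo 'response contains an XStream deserialization marker'
fi
```

In practice you would pipe `probe "$URL"` into `looks_vulnerable`; a match suggests the vulnerable component is present and warrants a proper authenticated scan.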

Vulnerability Management Market Disruptors


Gartner’s recent vulnerability management report provides a wealth of insight into vulnerability management (VM) tools and advice for how to build effective VM programs. Although VM tools and capabilities have changed since the report’s last iteration in 2015, interestingly one thing hasn’t: Gartner’s analysis of potential disruptors to VM tools and practices. Great minds think alike, as we’ve been heavily investing in these areas to help our customers overcome these persistent challenges. We’ve made numerous enhancements to our vulnerability management solutions (InsightVM and Nexpose) since that 2015 report to address both current and emerging vulnerability management challenges.

New Asset Types

Gone are the days when you could just count the number of servers and desktops in your network and be confident that any changes in between quarterly scans would be minimal. Now, networks are constantly changing thanks to virtual machines, IoT, and containers. Nexpose was always a leader in technology integrations, and InsightVM is even more closely integrated with modern infrastructure. InsightVM is the only vulnerability management tool that has direct integration with VMware to automatically discover and assess these devices as they’re spun up; the Insight Agent is also easily clonable, so you can integrate an agent into any gold image for automatic deployment. This means that even if your network is constantly changing as VMs are spun up and down, we’ve automatically got you covered. IoT devices are a trickier beast, and Rapid7 is one of the leaders in IoT security research: our recently released hardware bridge brings the power of Metasploit to IoT penetration testing, enabling research and security testing of a wide range of IoT devices.
Finally, InsightVM currently lets you discover containers in your environment, and we’re working on the ability to actively assess containers and container images, providing visibility into another area that many security teams struggle with.

Bring Your Own Devices

BYOD has been the buzzword of buzzwords for a number of years now, but as consumer and corporate adoption continues to rise (powered by mobile productivity apps like messaging tools, mobile CRM apps, etc.), the combined attack surface increases, and the line between what’s personal and what’s corporate blurs. Gartner has released several reports on the topic and recognizes that this is a continuing challenge for vulnerability management. InsightVM makes it easy to get visibility into that attack surface and assess employee devices. We can discover mobile devices that connect to ActiveSync, providing visibility into corporate device ownership so security teams can see where their risk is. Rapid7 Insight Agents can be deployed to any remote laptop, providing continuous monitoring for any device, even if it never connects to the corporate network. Agents can be installed as part of your gold laptop images so that they’re automatically deployed to new employees. With InsightVM, you don’t have to worry about losing track of people working from home, or about replacement laptops becoming security holes that are never scanned.

Cloud Computing

Gartner lists cloud computing as an issue related to the loss of control of infrastructure and even of the devices to be scanned. We find the biggest challenge with cloud services is visibility: cloud instances are often spun up and down rapidly, and the details don’t always make their way to security, giving them only a small inkling of the true footprint and attack surface of their AWS or Azure environments. Similar to our integration with VMware, InsightVM integrates with AWS and Azure to automatically detect new devices as they’re spun up or down.
InsightVM also makes it easy to deploy agents to new cloud devices by embedding them into a gold image. To aid in visibility, you can import tags from Azure into InsightVM, so security teams can report on the same groupings that their IT and development teams use. Thus, security teams can be confident in understanding their changing attack surface as rapidly as new devices are deployed.

Large Volumes of Data

With all of the above factors drastically increasing the scope of vulnerability management, data management and analysis become more important. Even if a tool can gather vulnerability data from every part of your network, you’re never going to have time to fix everything; how do you prioritize what to fix first, and how do you get a holistic view of your security program’s progress? This challenge is why we launched InsightVM and the Insight platform in general; by leveraging the cloud for data analysis, we can provide features like live customizable dashboards and remediation tracking without weighing down customer networks. It also lets us more rapidly deploy new features, like dashboard cards and built-in ticketing integrations with ServiceNow and JIRA.

Vulnerability Prioritization

According to Gartner, “A periodic scan of a 100,000-node network often yields from 1 million to as many as 10 million findings (some legitimate and some false or irrelevant).” Given the limited resources that virtually every security team faces, it’s increasingly difficult to figure out what to spend time on, especially given that some systems are more important from a business context than others. Understanding how attackers think and behave has always been one of Rapid7’s strengths, and we pass this on to our customers with InsightVM.
Our risk scoring leverages CVSS and amplifies it by factoring in exploit exposure, malware exposure, and vulnerability age to provide a much more granular risk score of 1-1000, enabling customers to focus on the vulnerabilities that make it easiest for an attacker to break in. Combined with the ability to tag certain assets as critical to automatically prioritize them in remediation, we automate the often-manual process of trying to figure out what to fix first. InsightVM has been built to tackle the future of vulnerability management head-on, so that customers never have to worry about falling behind the curve and opening gaps in their security posture. For more information, Gartner customers can download the report, and try out InsightVM today!
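Rapid7's actual risk model is proprietary, but the idea of amplifying a 0-10 CVSS base score into a more granular 1-1000 scale using exploit exposure, malware exposure, and vulnerability age can be sketched with a toy weighting. Every coefficient below is invented purely for illustration; this is not the formula InsightVM uses.

```shell
#!/bin/sh
# Purely illustrative weighting -- NOT Rapid7's actual (proprietary) model.
# score CVSS EXPLOIT MALWARE AGE_DAYS  -> prints a 1-1000 risk score.
score() {
  awk -v cvss="$1" -v exploit="$2" -v malware="$3" -v age_days="$4" '
    BEGIN {
      s = cvss * 100                 # scale 0-10 CVSS to a 0-1000 baseline
      if (exploit) s *= 1.2          # a public exploit exists
      if (malware) s *= 1.1          # used by malware kits
      s *= 1 + (age_days / 3650)     # older vulnerabilities drift upward
      if (s > 1000) s = 1000         # clamp into the 1-1000 band
      if (s < 1)    s = 1
      printf "%d\n", int(s + 0.5)
    }'
}

score 7.5 1 0 365   # a year-old CVSS 7.5 vulnerability with a public exploit
```

The point of any such amplification is the same as described above: two vulnerabilities with identical CVSS scores can pose very different real-world risk once exploitability and age are factored in.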

Patch Tuesday - August 2017


It was a busy month, with a total of 48 security issues fixed. All of these have a severity of Critical or Important, with Remote Code Execution vulnerabilities again figuring highly, particularly for Microsoft Edge. There were also a few publicly disclosed vulnerabilities that were fixed, including CVE-2017-8633 (Privilege Escalation in Windows Error Reporting). None of the disclosed vulnerabilities have publicly known exploits as of this writing. Another critical Adobe Flash Player RCE vulnerability has been fixed (ADV170010). Also of note were a few revisions to CVE-2017-0071, CVE-2017-0228, and CVE-2017-0299 that will require the installation of the July (CVE-2017-0071) and August (CVE-2017-0228 and CVE-2017-0299) patches to ensure you are fully protected. We were waiting to see whether Microsoft would release any patches for the recently disclosed SMBLoris vulnerability, but they don't seem to have taken any action to fix it in this round of patches. Finally, this is the first time we have seen vulnerabilities patched in the Linux subsystem under Windows. Since its introduction, it was only a matter of time: CVE-2017-8627 (DoS) and CVE-2017-8622 (Privilege Escalation) are the first of their kind.

R7-2017-13 | CVE-2017-5243: Nexpose Hardware Appliance SSH Enabled Obsolete Algorithms


Summary

Nexpose physical appliances shipped with an SSH configuration that allowed obsolete algorithms to be used for key exchange and other functions. Because these algorithms are enabled, attacks involving authentication to the hardware appliances are more likely to succeed. We strongly encourage current hardware appliance owners to update their systems to harden their SSH configuration using the steps outlined under "Remediation" below. In addition, Rapid7 is working with the appliance vendor to ensure that future appliances will only allow desired algorithms. This vulnerability is classified as CWE-327 (Use of a Broken or Risky Cryptographic Algorithm). Given that the SSH connection to the physical appliances uses the 'administrator' account, which has sudo access on the appliance, the CVSS base score for this issue is 8.5.

Credit

Rapid7 warmly thanks Liam Somerville for reporting this vulnerability to us, as well as providing information throughout the investigation to help us resolve the issue quickly.

Am I affected?

All physical (hardware) appliances are affected. Virtual appliances (downloadable virtual machines) are NOT affected.

Vulnerability Details

Nexpose Physical Appliances

The default SSH configuration of the hardware appliance enables potentially problematic algorithms which are considered obsolete.
KEX algorithms: diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1, diffie-hellman-group1-sha1, ssh-dss, ecdsa-sha2-nistp256, ssh-ed25519

Encryption algorithms: arcfour256, arcfour128, aes128-cbc, 3des-cbc, blowfish-cbc, cast128-cbc, aes192-cbc, aes256-cbc, arcfour, rijndael-cbc@lysator.liu.se

MAC algorithms: hmac-md5-etm@openssh.com, hmac-sha1-etm@openssh.com, umac-64-etm@openssh.com, hmac-ripemd160-etm@openssh.com, hmac-sha1-96-etm@openssh.com, hmac-md5-96-etm@openssh.com, hmac-md5, hmac-sha1, umac-64@openssh.com, hmac-ripemd160, hmac-ripemd160@openssh.com, hmac-sha1-96, hmac-md5-96

These are supported by the version of OpenSSH used on the appliance, and should be disabled via explicit configuration that enables only desired algorithms.

Nexpose Virtual Appliances

The software appliances (downloadable virtual machines) are NOT affected by this issue. They specify desired algorithms, only allowing those generally recommended.

Remediation - Updated 2017/06/02

Before making any updates, first verify that your appliance is running Ubuntu 14.04 or above. You can determine the version by running "lsb_release -r". If on 14.04, you should see output like "Release: 14.04".

If the appliance is running Ubuntu 12.04 or below: OS upgrade required

Please reach out to Rapid7 support for more information on upgrading. Ubuntu 12.04 reached End of Life in April 2017, and we strongly encourage you to update to a supported version. DO NOT continue with the changes below if you are not on 14.04 or above, as some of the configuration options will not be supported by older versions of OpenSSH.

If the appliance is running Ubuntu 14.04 or above

The version of OpenSSH on base Ubuntu 14.04 ("OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014") will support the configuration changes below. If you updated from 12.04, you may want to update OpenSSH to the latest available version to ensure you have all available security patches, but it is not required for this change.
You can check your OpenSSH version by running "ssh -V". The current latest for 14.04 is "OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8".

Note on verification and existing connections

Do not close the SSH session you use to make the configuration change until you first attempt a new connection and verify you are able to connect. This will enable you to stay connected in the event there is a problem with the edit and you need to revert and review, as active SSH sessions should not be closed even across service restarts. If you skip this step, you may lose the ability to connect over SSH, potentially meaning you need physical access or other means to fix the issue.

Configuration change

Administrators need to edit the /etc/ssh/sshd_config file on their Nexpose appliance. Before changing the configuration file, copy it (e.g. "sudo cp /etc/ssh/sshd_config /home/administrator") in case there is a problem during editing. Add the following lines (based on the guidelines available here) to the end of the file:

# Enable only modern ciphers, key exchange, and MAC algorithms
# https://wiki.mozilla.org/Security/Guidelines/OpenSSH#Modern_.28OpenSSH_6.7.2B.29
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com

Please be careful to copy the entirety of the lines above, which may scroll horizontally in your browser. You can also copy from this gist. Depending on the version of SSH you have installed, there may be other configuration lines in that file for "KexAlgorithms", "Ciphers", and "MACs". If that is the case, remove or comment out (add a "#" to the beginning of the line) those lines so that the desired configuration you add is sure to be respected.
Editing this file will require root access, so the appliance default "administrator" user, or another user with permission to sudo, will need to perform this step. After updating the configuration file, verify that the changes made match the configuration above. This is important, as missing part of the configuration may result in a syntax error on service restart, and a loss of connectivity. You can run this command and compare the three output lines with the configuration block above:

egrep "KexAlgorithms|Ciphers|MACs" /etc/ssh/sshd_config

After verifying the configuration change, restart the SSH service by running "service ssh restart". Once that completes, verify you can still connect via an SSH client to the appliance in a separate terminal. Do not close the original terminal until you've successfully connected with a second terminal. This change should not impact connections from Nexpose instances to the physical appliance (SSH is not used for this communication). The main impact is shoring up access by SSH clients such that they cannot connect to the appliance using obsolete algorithms. We apologize for any inconvenience, and would like to warmly thank the customers that worked with us to test and troubleshoot these remediations.

Disclosure Timeline

Wed, May 10, 2017: Vulnerability reported to Rapid7
Wed, May 17, 2017: Vulnerability confirmed by Rapid7
Tue, May 23, 2017: Rapid7 assigned CVE-2017-5243 for this issue
Wed, May 31, 2017: Disclosed to MITRE
Wed, May 31, 2017: Public disclosure
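The configuration steps above can be rehearsed safely before touching a live appliance. The sketch below applies the same edits to a scratch copy of sshd_config and verifies them with the same egrep check; it assumes GNU sed (as on Ubuntu) and never restarts any service.

```shell
#!/bin/sh
# Rehearsal of the remediation against a scratch copy of sshd_config.
# Nothing here touches the live SSH service.
set -e
CFG=$(mktemp)
# Start from the real file if readable, otherwise an empty scratch file.
cp /etc/ssh/sshd_config "$CFG" 2>/dev/null || : > "$CFG"

# Comment out any pre-existing algorithm lines so the new ones win
# (GNU sed; the \| alternation is a GNU BRE extension).
sed -i 's/^\(KexAlgorithms\|Ciphers\|MACs\)/#&/' "$CFG"

# Append the modern algorithm set from the advisory.
cat >> "$CFG" <<'EOF'
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
EOF

# Exactly three directives should now be active.
egrep -c '^(KexAlgorithms|Ciphers|MACs) ' "$CFG"
```

On the appliance itself you would make the same edit to /etc/ssh/sshd_config (after backing it up), run the egrep verification, and only then restart SSH as described above.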

CVE-2017-5242: Nexpose/InsightVM Virtual Appliance Duplicate SSH Host Key


Today, Rapid7 is notifying Nexpose and InsightVM users of a vulnerability that affects certain virtual appliances. While this issue is relatively low severity, we want to make sure that our customers have all the information they need to make informed security decisions regarding their networks. If you are a Rapid7 customer and have any questions about this issue, please don't hesitate to contact your customer success manager (CSM) or your usual support contact. We apologize for any inconvenience this may cause our customers. We take our customers' security very seriously and strive to provide full transparency and clarity so users can take action to protect their assets as soon as practicable.

Description of CVE-2017-5242

Nexpose and InsightVM virtual appliances downloaded between April 5th, 2017 and May 3rd, 2017 contain identical SSH host keys. Normally, a unique SSH host key should be generated the first time a virtual appliance boots. A malicious user with privileged access to one of these vulnerable virtual appliances could retrieve the SSH host private key and use it to impersonate another user's vulnerable appliance. In order to do so, an attacker would also need to redirect traffic from the victim's appliance to the attacker's appliance. Likewise, an attacker that can capture SSH traffic between a victim's client machine and the victim's virtual appliance could decrypt this traffic. In either attack scenario, an attacker would need to gain a privileged position on a victim's network in order to capture or redirect network traffic. Since our virtual appliances are rarely exposed directly to the internet, this added complexity makes it a relatively low-risk vulnerability.

Am I affected?
Customers can determine whether their virtual appliance is affected by running the following command:

stat /etc/ssh/ssh_host_* | grep Modify

Modify: 2017-04-29 13:20:13.684650643 -0700
Modify: 2017-04-29 13:20:13.684650643 -0700
Modify: 2017-04-29 13:20:13.724650642 -0700
Modify: 2017-04-29 13:20:13.724650642 -0700
Modify: 2017-04-29 13:20:13.764650641 -0700
Modify: 2017-04-29 13:20:13.764650641 -0700
Modify: 2017-04-29 13:20:13.592650647 -0700
Modify: 2017-04-29 13:20:13.592650647 -0700

Affected virtual appliances contain SSH host keys generated between April 5th, 2017 and May 3rd, 2017. If the modified date for any of the SSH host keys falls in this range, then the virtual appliance is affected and the remediation steps below should be completed.

Remediation

Customers should either download and deploy the latest virtual appliance or regenerate SSH host keys using these commands:

/bin/rm -v /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
/etc/init.d/ssh restart

Post-remediation

After regenerating the SSH host keys, customers will see a "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!" notice the next time they SSH to the virtual appliance. Customers should run the following command on the client they use to SSH to the virtual appliance:

ssh-keygen -R <Virtual_Appliance_FQDN_or_IP>

Resources

The latest virtual appliances are available at: https://community.rapid7.com/docs/DOC-2595

Additional details to resolve the "REMOTE HOST IDENTIFICATION HAS CHANGED!" warning can be found at: https://www.cyberciti.biz/faq/warning-remote-host-identification-has-changed-error-and-solution/
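The manual date comparison above can be scripted: convert each key's mtime to epoch seconds and test it against the affected window. This sketch assumes GNU coreutils (stat -c, date -d, as on the Ubuntu-based appliances); the key path in the demo is a scratch file, since on a real appliance you would loop over /etc/ssh/ssh_host_*.

```shell
#!/bin/sh
# Check whether an SSH host key's modification time falls inside the
# affected window (April 5 - May 3, 2017). GNU coreutils assumed.
affected() {
  mtime=$(stat -c %Y "$1")
  start=$(date -d '2017-04-05' +%s)
  end=$(date -d '2017-05-04' +%s)   # exclusive bound: start of May 4
  [ "$mtime" -ge "$start" ] && [ "$mtime" -lt "$end" ]
}

# Demo against a scratch file stamped inside the window:
KEY=$(mktemp)
touch -d '2017-04-29 13:20:13' "$KEY"
if affected "$KEY"; then
  echo "affected: regenerate host keys"
fi
```

On an appliance, something like `for k in /etc/ssh/ssh_host_*; do affected "$k" && echo "$k: affected"; done` would flag any key needing regeneration.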

Preparing for GDPR


GDPR is coming… If your organisation does business with Europe, or more specifically does anything with the Personal Data of EU citizens who aren't dead (i.e. Natural Persons), then, just like us, you're going to be in the process of living the dream that is preparing for the General Data Protection Regulation. For many organisations, this is going to be a gigantic exercise: even if you have implemented processes and technologies to meet current regulations, there is still additional work to be done. Penalties for infringements of GDPR can be incredibly hefty; they are designed to be dissuasive. Depending on the type of infringement, the fine can be up to €20 million or 4% of your worldwide annual turnover, whichever is the higher amount. Compliance is not optional, unless you fancy being fined eye-watering amounts of money, or you really don't have any personal data of EU citizens within your control. The Regulation applies from May 25th, 2018. That's the day from which organisations will be held accountable, and depending on which news website you choose to read, many organisations are far from ready at the time of writing this blog. Preparing for GDPR is likely to be a cross-functional exercise, as Legal, Risk & Compliance, IT, and Security all have a part to play. It's not a small amount of regulation to read and understand, either (are they ever?): there are 99 Articles and 173 Recitals. I expect if you're reading this, it's because you're hunting for solutions, services, and guidance to help you prepare. Whilst no single software or services vendor can act as a magic bullet for GDPR, Rapid7 can certainly help you cover some of the major security aspects of protecting Personal Data. In addition to having solutions to help you detect attackers earlier in the attack chain, and service offerings that can help you proactively test your security measures, we can also jump into the fray if you do find yourselves under attack.
Processes and procedures, training, and technology and services all have a part to play in GDPR. Having a good channel partner to work with during this time is vital, as many will be able to provide you with the majority of what's needed. For some organisations, changes to roles and responsibilities are required too, such as appointing a Data Protection Officer and nominating representatives within the EU to be points of contact.

So what do I need to do?

If you're just beginning your GDPR compliance quest, I'd recommend you take a look at this guide, which will get you started in your considerations. Additionally, having folks attend training so that they can understand and learn how to implement GDPR is highly recommended: spending a few pounds/euros/dollars, etc. on training now can save you from the costly infringement fines later on down the line. There are many courses available; in the UK I recently took this foundation course, but do hunt around to find the best classroom or virtual courses that make sense for your location and teams. Understanding where Personal Data physically resides, the categories of Personal Data you control and/or process, how and by whom it is accessed, and how it is secured are all areas that you have to deal with when complying with GDPR. Completing Privacy Impact Assessments is a good step here. Processes for access control, incident detection and response, breach notification, and more will also need review or implementation. Being hit with a €20 million fine is not something any organisation will want to be subject to; depending on the size of your organisation, a fine of this magnitude could easily be a terminal moment. There is some good news: demonstrating compliance, mitigating risk, and ensuring a high level of security are factors that are considered if you are unfortunate enough to experience a data breach.
But ideally, not being breached in the first place is best, as I'm sure you'd agree, so this is where your security posture comes in. Article 5, which lists the six principles of processing personal data, states that personal data must be processed in an appropriate manner as to maintain security. This principle is covered in more detail by Article 32, which you can read more about here.

Ten Recommendations for Securing Your Environment

1. Encrypt data – both at rest and in transit. If you are breached, but the Personal Data is rendered unintelligible to the attacker, then you do not have to notify the Data Subjects (see Article 34 for more on this). There are lots of solutions on the market today – have a chat with your channel partner to see what options are best for you.

2. Have a solid vulnerability management process in place, across the entire ecosystem. If you're looking for best practices recommendations, do take a look at this post. Ensuring ongoing confidentiality, integrity and availability of systems is part of Article 32 – if you read Microsoft's definition of a software vulnerability, it speaks to these three aspects.

3. Backups. Backups. Backups. Please make backups. Not just in case of a dreaded ransomware attack; they are good housekeeping anyway in case of things like storage failure, asset loss, natural disaster, or even a full cup of coffee over the laptop. If you don't currently have a backup vendor in place, Code42 have some great offerings for endpoints, and there are a plethora of server and database options available on the market today. Disaster recovery should always be high on your list regardless of which regulations you are required to meet.

4. Secure your web applications. Privacy-by-design needs to be built in to processes and systems – if you're collecting Personal Data via a web app and still using HTTP/clear text, then you're already going to have a problem.

5. Pen tests are your friend. Attacking your systems and environment to understand your weak spots will tell you where you need to focus, and it's better to go through this exercise as a real-world scenario now than wait for a 'real' attacker to get into your systems. You could do this internally using tools like Metasploit Pro, and you could employ a professional team to perform regular external tests too. Article 32 says that you need to have a process for regularly testing, assessing, and evaluating the effectiveness of security measures. Read more about penetration testing in this toolkit.

6. Detect attackers quickly and early. Finding out that you've been breached ~5 months after it first happened is an all too common scenario (current stats from Mandiant say that the average is 146 days after the event). Almost two-thirds of organisations told us that they have no way of detecting compromised credentials, which has topped the list of leading attack vectors in the Verizon DBIR for the last few years. User Behaviour Analytics provides you with the capability to detect anomalous user account activity within your environment, so you can investigate and remediate fast.

7. Lay traps. Deploying deception technologies, like honeypots and honey credentials, is a proven way to spot attackers as they start to poke around in your environment and look for methods to access valuable Personal Data.

8. Don't forget about cloud-based applications. You might have some approved cloud services deployed already, and unless you've switched off the internet it's highly likely that there is a degree of shadow IT (a.k.a. unsanctioned services) happening too. Making sure you have visibility across sanctioned and unsanctioned services is a vital step to securing them, and the data contained within them.

9. Know how to prioritise and respond to the myriad of alerts your security products generate on a daily basis. If you have a SIEM in place that's great, providing you're not getting swamped by alerts from the SIEM, and that you have the capability to respond 24x7 (attackers work evenings and weekends too). If you don't have a current SIEM (or the time or budget to take on a traditional SIEM deployment project), or you are finding it hard to keep up with the number of alerts you're currently getting, take a look at InsightIDR – it covers a multitude of bases (SIEM, UBA and EDR), is up and running quickly, and generates alert volumes that are reasonable for even the smallest teams to handle. Alternatively, if you want 24x7 coverage, we also have a Managed Detection and Response offering which takes the burden away, and is your eyes and ears regardless of the time of day or night.

10. Engage with an incident response team immediately if you think you are in the midst of an attack. Accelerating containment and limiting damage requires fast action. Rapid7 can have an incident response engagement manager on the phone with you within an hour.

Security is just one aspect of the GDPR, for sure, but it's very much key to compliance. Rapid7 can help you ready your organisation – please don't hesitate to contact us or one of our partners if you are interested in learning more about our solutions and services. GDPR doesn't have to be GDP-argh!

Vulnerability Management: Best Practices

We are often asked by customers for recommendations on what they should be scanning, when they should be scanning, how they ensure remote devices don't get missed, and in some cases why they need to scan their endpoints (especially when they have counter-measures in place…

We are often asked by customers for recommendations on what they should be scanning, when they should be scanning, how they ensure remote devices don't get missed, and in some cases why they need to scan their endpoints (especially when they have counter-measures in place protecting the endpoints). This blog post is intended to help you understand why running regular scans is a vital part of a security program, and to give you options on how to best protect your ecosystem.

Q: What do I need to be scanning?

Scan everything. This may seem blunt or overly simplified, but if a device touches your ecosystem, then it should be scanned. Why? Because if you don't, you are losing visibility into the weaknesses in your infrastructure. This brings inherent, unquantifiable risk because you cannot see where the holes are that an attacker can use to access your organisation. Exploitable vulnerabilities exist across all operating systems and applications; if you are not scanning your entire ecosystem, including cloud and virtual, you are leaving these vulnerabilities as unknowns. Scanning everything does not mean that all systems or devices will be treated with the same level of criticality when it comes to prioritizing remediation actions.

Q: How frequently should I scan my ecosystem?

Our recommendation is to combine Insight Agents and regular scanning to get a live picture of your ecosystem at all times. Nexpose Now capabilities prevent your data from becoming stale, meaning you'll know where to focus your efforts on reducing risk at all times. Specifically, adaptive security within Nexpose Now automatically detects new devices as they join your network, so you never miss a network change. If you haven't had a chance to upgrade your vulnerability management program to include the live monitoring that comes with Nexpose Now and are still using traditional Nexpose, then scanning everything as frequently as possible is highly recommended.
Monthly scans to coincide with Patch Tuesday are good, but scanning more frequently certainly doesn't hurt. Customers often split up their scans to hit different segments at different times, but they'll cover the whole environment on a monthly or bi-weekly basis. More details on scan configuration can be found here.

Q: How do I ensure my remote workers aren't missed?

Most organisations have a number of remote workers, some of whom hardly ever connect to the internal network, but still have access to certain applications when they are on the road. It can be tricky to ensure their devices don't get missed during scans and patching. Remote workers bring additional risk, as they often keep sensitive data local to their devices for ease of access when they are travelling, and frequently connect to unsecured Wi-Fi. Therefore, on the occasions when they do venture into the office, their devices are potential grenades. You really don't want to miss these folks. The best way to ensure you have visibility into these devices is to use our Insight Agent, which can connect back to Nexpose Now as long as the device has internet access. You can learn more about how Rapid7 can solve your remote workforce challenges here.

Q: Why are endpoints important? Can I just scan my servers?

Endpoints run operating systems and applications that have vulnerabilities, meaning they can be breached just as easily as servers — if not more so. Endpoints are more likely to have a connection to the internet and generally have users attached to them. Users often introduce security risks, either due to a lack of care or, in some cases, through no fault of their own (e.g. unknowingly connecting to a compromised website). Endpoints can have sensitive data saved locally while also accessing resources on the network. Users can also introduce security risks by connecting removable media and other USB devices to endpoints.
Furthermore, attackers have been increasingly focusing on using endpoints as an initial entry point in an attack. We've become very good at spending millions of dollars on firewalls and defense-in-depth tools to protect servers, so attackers have moved to the weakest link that remains: users and their endpoints. Almost every major breach in the news begins with a phishing or spear phishing attack, and these all exploit endpoints. As mentioned above, any device you do not scan brings unquantifiable risk to your ecosystem. Scan or use Insight Agents across all your devices: endpoints, servers, virtual, remote, and cloud.

Q: But I've got countermeasures in place!

Good. Countermeasures — and a good security policy — are really important. These could include host or network IPS, a strong security configuration on the endpoints, plus things like access control policies and strict settings for remote users to ensure they always connect to your VPN before accessing the internet. That doesn't mean you shouldn't scan devices for vulnerabilities *and* validate that your countermeasures are working. There have been multiple instances of vulnerabilities in security software itself, not to mention operating system and application vulnerabilities, as well as malware that affects configuration settings and a device's security policy. If you don't have a way to see which vulnerabilities are on a device, then you are leaving a door open for attackers. The best way to test that your countermeasures are working properly is to simulate an attack and make sure they catch it; many customers use Metasploit Pro to test their security controls, or our professional services to simulate a full-scale attack and help plan how to improve compensating controls.

Additional questions?

If you would like to discuss best practices further, we would love to talk with you. If you are already a customer, your Customer Success Manager is a great resource.
We can also provide services engagements to help you implement or invigorate your security program. If you're interested in receiving training on how to make the most of Nexpose, we have options available to you as well. Contact us through your CSM or Rapid7.com and let us know how we can help.
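As noted above, scanning everything does not mean treating every asset equally when prioritizing remediation. A minimal sketch of one common approach is to weight each finding's CVSS score by an asset-criticality rating; all asset names, CVE identifiers, and weights below are invented for illustration and are not Nexpose data:

```python
# Hypothetical scan results: each asset carries a 1-5 criticality rating
# assigned by the organization, plus its CVSS-scored findings.
ASSETS = [
    {"name": "db-server-01", "criticality": 5,
     "vulns": [{"cve": "CVE-2016-0001", "cvss": 9.8},
               {"cve": "CVE-2016-0002", "cvss": 4.3}]},
    {"name": "kiosk-17", "criticality": 1,
     "vulns": [{"cve": "CVE-2016-0003", "cvss": 9.1}]},
]

def prioritized(assets):
    """Flatten findings and sort by CVSS weighted by asset criticality."""
    findings = [
        (asset["name"], v["cve"], v["cvss"] * asset["criticality"])
        for asset in assets
        for v in asset["vulns"]
    ]
    return sorted(findings, key=lambda f: f[2], reverse=True)

for name, cve, score in prioritized(ASSETS):
    print(f"{score:6.1f}  {name}  {cve}")
```

With this weighting, even a medium-severity bug on a critical database server outranks a critical bug on a throwaway kiosk, which is the point: severity alone is not a remediation queue.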

12 Days of Haxmas: Giving the Gift of Bad News

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts…

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

This holiday season, eager little hacker girls and boys around the world will be tearing open their new IoT gadgets and geegaws, and setting to work on evading tamper-evident seals, proxying communications, and reversing firmware, in search of a HaXmas miracle of 0day. But instead of exploiting these newly discovered vulnerabilities, many will instead notice their hearts growing three sizes larger, and wish to disclose these new vulns in a reasonable and coordinated way in order to bring attention to the problem and ultimately see a fix for the discovered issues. In the spirit of HaXmas, then, I'd like to take a moment to talk directly to the good-hearted hackers out there about how one might go about disclosing vulnerabilities in a way that maximizes the chances that your finding will get the right kind of attention.

Keep It Secret, Keep It Santa

First and foremost, I'd urge any researcher to consider the upsides of keeping your disclosure confidential for the short term. While it might be tempting to tweet a 140-character summary publicly to the vendor's alias, dropping this kind of bomb on the social media staff of an electronics company is kind of a jerk move, and only encourages an adversarial relationship from there on out. In the best case, the company most able to fix the issue isn't likely to work with you once you've published, and in the worst, you might trigger a defensive reflex where the vendor refuses to acknowledge the bug at all.
Instead, consider writing a probing email to the company's email aliases of security@, secure@, abuse@, support@, and info@, along the lines of, "Hi, I seem to have found a software vulnerability with your product, who can I talk to?" This is likely to get a human response, and you can figure out from there who to talk to about your fresh new vulnerability.

The Spirit of Giving

You could also go a step further and check the vendor's website to see if they offer a bug bounty for discovered issues, or even peek in on HackerOne's community-curated directory of security contacts and bug bounties. For example, searching for Rapid7 gives a pointer to our disclosure policies, contact information, and PGP key. However, be careful when deciding to participate in a bug bounty. While the vast majority of bounty programs out there are well-intentioned, some come with an agreement that you will never, ever, ever, in a million years, ever disclose the bug to anyone else, ever — even if the vendor doesn't deign to acknowledge or fix the issue. This can leave you in a sticky situation, even if you end up getting paid out: if you agree to terms like that, you can limit your options for public disclosure down the line if the fix is non-existent or incomplete. Because of these kinds of constraints, I tend to avoid bug bounties, and merely offer up the information for free. It's totally okay to ask about a bounty program, of course, but be sure that you're not phrasing your request in a way that can be read as an extortion attempt — that can be taken as an attack, and again, trigger a negative reaction from the vendor.

No Reindeer Games

In the happy case where you establish communications with the vendor, it's best to be as clear and as direct as possible. If you plan to publish your findings on your blog, say so, and offer exactly what and when you plan to publish.
Giving vendors deadlines — in a friendly, non-threatening, matter-of-fact way — turns out to be a great motivator for getting your issue prioritized internally. Be prepared to negotiate around the specifics, of course — you might not know exactly how to fix a bug, and how long that'll take, and the moment you disclose, they probably don't, either. Most importantly, though, try to avoid over-playing your discovery. Consider what an adversary actually has to do to exploit the bug — maybe they need to be physically close by, or already have an authorized account, or something like that. Being upfront with those details can help frame the risk to other users, and can tamp down irrational fears about the bug. Finally, try to avoid blaming the vendor too harshly. Bugs happen — it's inherent in the way we write, assemble, and ship software for general-purpose computers. Assume the vendor isn't staffed with incompetents and imbeciles, and that they actually do care about protecting their customers. Treating your vendor with respect will engender a pretty typical honey-versus-vinegar effect, and you're much more likely to see a fix quickly.

Sing It Loud for All to Hear

Assuming you've hit your clearly stated disclosure deadline, it's time to publish your findings. Again, you're not trying to shame the vendor with your disclosure — you're helping other people make better-informed decisions about the security of their own devices, giving other researchers a specific, documented case study of a vulnerability discovered in a shipping product, and teaching the general public about How Security Works. Again, effectively communicating the vulnerability is critical. Avoid generalities, and offer specifics — screenshots, step-by-step instructions on how you found it, and ideally, a Metasploit module to demonstrate the effects of an exploit.
Doing this helps other researchers completely understand your unique findings and perhaps apply your learnings to their own efforts. Ideally, there's a fix already available and distributed, and if so, you should clearly state that early on in your disclosure. If there isn't, though, offer up some kind of solution to the problem you've discovered. Nearly always, there is a way to work around the issue through some non-default configuration, or a network-level defense, or something like that. Sometimes the best advice is to avoid using the product altogether, but that tends to be the last resort.

Happy HaXmas!

Given the recently enacted DMCA research exemption on consumer devices, I do expect to see an uptick in disclosed issues that center around consumer electronics. This is ultimately a good thing — when people tinker with their own devices, they are more empowered to make better decisions on how a technology can actually affect their lives. The disclosure process, though, can be almost as challenging as the initial hackery of finding and exploiting the vulnerabilities in the first place. You're dealing with emotional people who are often unfamiliar with the norms of security research, and you may well be the first security expert they've talked to. Make the most of your newfound status as a security ambassador, and try to be helpful when delivering your bad news.

Vulnerability Categories and Severity Levels: "Informational" Vulnerabilities vs. True Vulnerabilities

A question that often comes up when looking at vulnerability management tools is, “how many vulnerability checks do you have?” It makes sense on the surface; after all, fewer vulnerability checks = less coverage = missed vulnerabilities during a scan, right? As vulnerability researchers would tell you,…

A question that often comes up when looking at vulnerability management tools is, “how many vulnerability checks do you have?” It makes sense on the surface; after all, fewer vulnerability checks = less coverage = missed vulnerabilities during a scan, right? As vulnerability researchers will tell you, it's not that simple: just as not all vulnerabilities are created equal, neither are vulnerability checks.

How “True” Vulnerability Checks Work

At Rapid7 we pride ourselves on generating “true” vulnerability checks, which leverage vulnerability information right from the source: the vendor. Our content is composed of two fundamental components: fingerprinting and vulnerability check data. Researchers spend considerable effort in order to give our expert system the capability to accurately identify vendor products such as applications and operating systems. “True” vulnerability checks are executed within our expert system, which uses these fingerprints to determine characteristics for each asset it encounters, then compares those characteristics against our vulnerability check data to identify any vulnerabilities. Looking at vulnerability check count alone is a meaningless metric, as security vendors could easily inflate this number by spreading their check logic across multiple check files. There is only a finite number of ways to test for the presence of a vulnerability, and these are most often prescribed by the vendor.

“Informational” Vulnerabilities

This brings us to what vendors usually describe as “informational vulnerabilities.” In the act of doing a vulnerability scan (especially during credentialed scans), a vulnerability scanner gleans a ton of useful information that doesn't necessarily have a CVSS score or real risk, such as installed software, open ports, and general information about what a system is and how it operates.
A common way vendors show these findings to users is by making them “informational” or “potential” vulnerabilities, categorizing them in the same way they categorize CVSS-scored issues. Most scanners that do this thankfully make it easy to filter out informational vulnerabilities from “real” ones so you can focus on the vulnerabilities with actual risk; however, it still leads to several issues:

- Users that are new to vulnerability management may not understand what is informational and what isn't, leaving those vulnerabilities in reports and making it appear that their scan is catching much more than others (when in reality the actual vulnerability information is likely very similar).
- There's no industry standard for classifying “informational” vulnerabilities like there is for CVSS-scored “real” vulnerabilities. This leaves it to the vendor's discretion what they consider pertinent information.
- There's a huge amount of incidental information that can be gathered from a vulnerability scan; labeling ALL of it as vulnerabilities is impractical, and so is leaving out data by labeling only SOME of it. It's a lose-lose situation.
- Thanks to the above point, vendors often tout their total number of vulnerability checks as proof of their superiority over each other, without pointing out that a sizeable chunk of these checks are largely irrelevant to prioritizing important vulnerabilities.

The Nexpose Approach

Nexpose doesn't have any informational vulnerabilities. For example, identifying that the target has a resolvable FQDN isn't something you will find in our vulnerability list. This is simply a characteristic of the target, not a vulnerability, and is therefore found on the asset details page.
We know that no one wants to be bogged down with irrelevant vulnerabilities or spend extra time filtering out information they don't need; that's why we focus on making it easy to filter down your assets to identify relevant information and report on assets based on these filters. Need to see all assets that are virtual machines (yes, believe it or not, being a virtual machine is classified as a vulnerability in some tools!)? Simply create a dynamic asset group to automatically filter your assets down to just virtual machines, a group that updates automatically as new devices are added. Strip away informational vulns, and you'll be surprised at how many real vulnerability checks are left over. In the end, the number of vulnerability checks isn't much of a differentiator anymore; as those new Sprint commercials say, it's 2016, and every enterprise-level vulnerability scanner has pretty similar coverage across even uncommon types of assets. Vendors that tout the number of checks as a differentiator often do it because they know they have more informational checks than their competition, and conveniently fail to mention that a sizeable chunk of these would never be used in actual remediation, only slowing down your security team and giving you more irrelevant 1,000-page reports.
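To make the fingerprint-versus-check-data idea concrete, here is a minimal sketch of version-based vulnerability matching. This is an illustration of the general technique, not Nexpose's actual implementation; the product names, version numbers, and check entries are all hypothetical:

```python
# Hypothetical check data: each check names a product and the version in
# which the flaw was fixed, plus a CVSS score. Incidental facts (open
# ports, FQDNs) never appear here, so they can never become "findings."
CHECKS = [
    {"id": "ssh-old-version", "product": "ExampleSSH",
     "fixed_in": (7, 4), "cvss": 7.8},
    {"id": "web-old-version", "product": "ExampleHTTPd",
     "fixed_in": (2, 5), "cvss": 5.3},
]

def assess(fingerprints):
    """Return IDs of checks whose product matches a fingerprinted
    install older than the version that fixed the flaw."""
    results = []
    for fp in fingerprints:
        for check in CHECKS:
            if (fp["product"] == check["product"]
                    and fp["version"] < check["fixed_in"]):
                results.append(check["id"])
    return results

# An asset fingerprinted during a scan: one outdated product, one patched.
asset = [
    {"product": "ExampleSSH",   "version": (7, 2)},
    {"product": "ExampleHTTPd", "version": (2, 6)},
]
print(assess(asset))  # only the out-of-date ExampleSSH install matches
```

The point of the sketch is that the finding list is driven entirely by fingerprint-to-check comparison, so there is simply no category for "informational" results to inflate the count.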

Research Report: Vulnerability Disclosure Survey Results

When cybersecurity researchers find a bug in product software, what's the best way for the researchers to disclose the bug to the maker of that software? How should the software vendor receive and respond to researchers' disclosure? Questions like these are becoming increasingly important as…

When cybersecurity researchers find a bug in product software, what's the best way for the researchers to disclose the bug to the maker of that software? How should the software vendor receive and respond to researchers' disclosure? Questions like these are becoming increasingly important as more software-enabled goods – and the cybersecurity vulnerabilities they carry – enter the marketplace. But more data is needed on how these issues are being dealt with in practice. Today we helped publish a research report [PDF] that investigates attitudes and approaches to vulnerability disclosure and handling. The report is the result of two surveys – one for security researchers, and one for technology providers and operators – launched as part of a National Telecommunications and Information Administration (NTIA) “multistakeholder process” on vulnerability disclosure. The process split into three working groups: one focused on building norms/best practices for multi-party complex disclosure scenarios; one focused on building best practices and guidance for disclosure relating to “cyber safety” issues; and one focused on driving awareness and adoption of vulnerability disclosure and handling best practices. It is this last group, the “Awareness and Adoption Working Group,” that devised and issued the surveys in order to understand what researchers and technology providers are doing on this topic today, and why. Rapid7 – along with several other companies, organizations, and individuals – participated in the project (in full disclosure, I am co-chair of the working group) as part of our ongoing focus on supporting security research and promoting collaboration between the security community and technology manufacturers. The surveys, issued in April, investigated the reality around awareness and adoption of vulnerability disclosure best practices.
I blogged at the time about why the surveys were important: in a nutshell, while the topic of vulnerability disclosure is not new, adoption of recommended practices is still seen as relatively low. The relationship between researchers and technology providers/operators is often characterized as adversarial, with friction arising from a lack of mutual understanding. The surveys were designed to uncover whether these perceptions are exaggerated, outdated, or truly indicative of what's really happening. In the latter instance, we wanted to understand the needs or concerns driving behavior. The survey questions focused on past or current behavior for reporting or responding to cybersecurity vulnerabilities, and processes that worked or could be improved. One quick note – our research efforts were somewhat imperfect because, as my data scientist friend Bob Rudis is fond of telling me, we effectively surveyed the internet (sorry Bob!). This was really the only pragmatic option open to us; however, it did result in a certain amount of selection bias in who took the surveys. We made a great deal of effort to promote the surveys as far and wide as possible, particularly through vertical sector alliances and information sharing groups, but we expect respondents have likely dealt with vulnerability disclosure in some way in the past. Nonetheless, we believe the data is valuable, and we're pleased with the number and quality of responses. There were 285 responses to the vendor survey and 414 to the researcher survey. View the infographic here [PDF].

Key findings

Researcher survey

- The vast majority of researchers (92%) generally engage in some form of coordinated vulnerability disclosure. When they have gone a different route (e.g., public disclosure) it has generally been because of frustrated expectations, mostly around communication.
- The threat of legal action was cited by 60% of researchers as a reason they might not work with a vendor to disclose.
- Only 15% of researchers expected a bounty in return for disclosure, but 70% expected regular communication about the bug.

Vendor survey

- Vendor responses were generally separable into “more mature” and “less mature” categories. Most of the more mature vendors (between 60 and 80%) used all the processes described in the survey.
- Most “more mature” technology providers and operators (76%) look internally to develop vulnerability handling procedures, with smaller proportions looking at their peers or at international standards for guidance.
- More mature vendors reported that a sense of corporate responsibility or the desires of their customers were the reasons they had a disclosure policy.
- Only one in three surveyed companies considered and/or required suppliers to have their own vulnerability handling procedures.

Building on the data for a brighter future

With the rise of the Internet of Things, we are seeing unprecedented levels of complexity and connectivity for technology, introducing cybersecurity risk in all sorts of new areas of our lives. Adopting robust mechanisms for identifying and reporting vulnerabilities, and building productive models for collaboration between researchers and technology providers/operators, has never been so critical. It is our hope that this data can help guide future efforts to increase awareness and adoption of recommended disclosure and handling practices. We have already seen some very significant evolutions in the vulnerability disclosure landscape – for example, the DMCA exemption for security research, the FDA post-market guidance, and proposed vulnerability disclosure guidance from NHTSA. Additionally, in the past year, we have seen notable names in defense, aviation, automotive, and medical device manufacturing and operating all launch high-profile vulnerability disclosure and handling programs.
These steps are indicative of an increased level of awareness and appreciation of the value of vulnerability disclosure, and each paves the way for yet more widespread adoption of best practices. The survey data itself offers a hopeful message in this regard - many of the respondents indicated that they clearly understand and appreciate the benefits of a coordinated approach to vulnerability disclosure and handling. Importantly, both researchers and more mature technology providers indicated a willingness to invest time and resources into collaborating so they can create more positive outcomes for technology consumers. Yet, there is still a way to go. The data also indicates that to some extent, there are still perception and communication challenges between researchers and technology providers/operators, the most worrying of which is that 60% of researchers indicated concern over legal threats. Responding to these challenges, the report advises that: “Efforts to improve communication between researchers and vendors should encourage more coordinated, rather than straight-to-public, disclosure. Removing legal barriers, whether through changes in law or clear vulnerability handling policies that indemnify researchers, can also help. Both mature and less mature companies should be urged to look at external standards, such as ISOs, and further explanation of the cost-savings across the software development lifecycle from implementation of vulnerability handling processes may help to do so.” The bottom line is that more work needs to be done to drive continued adoption of vulnerability disclosure and handling best practices. If you are an advocate of coordinated disclosure – great! – keep spreading the word. If you have not previously considered it, now is the perfect time to start investigating it. ISO 29147 is a great starting point, or take a look at some of the example policies such as the Department of Defense or Johnson and Johnson. 
If you have questions, feel free to post them here in the comments or contact community [at] rapid7 [dot] com. As a final thought, I would like to thank everyone that provided input and feedback on the surveys and the resulting data analysis - there were a lot of you and many of you were very generous with your time. And I would also like to thank everyone that filled in the surveys - thank you for lending us a little insight into your experiences and expectations. ~ @infosecjen

Vulnerability Management: Live Assessment and the Passive Scanning Trap

With the launch of Nexpose Now in June, we've talked a lot about the “passive scanning trap” and “live assessment” in comparison. You may be thinking: what does that actually mean?  Good question. There has been confusion between continuous monitoring and continuous vulnerability assessment – and…

With the launch of Nexpose Now in June, we've talked a lot about the “passive scanning trap” and “live assessment” in comparison. You may be thinking: what does that actually mean? Good question. There has been confusion between continuous monitoring and continuous vulnerability assessment – and I'd like to propose that a new term, “continuous risk monitoring,” be used instead, which is where Adaptive Security and Nexpose Now fit. The goal of a vulnerability management program is to understand your risk from vulnerabilities and manage it effectively, based upon what is acceptable to your organization.

First ask, “What does ‘continuous monitoring' actually mean?” “Continuous” admits that our networks, and the systems on them, are not static. System configurations change, users install stuff, admins deploy things. Users move around the building, plug into network jacks, or leave stuff plugged in. “Monitoring” speaks to the need to answer the questions “What is on my network?” and “Are the systems on my network patched and configured in a way we are comfortable with?” Because these things are changing continuously, we need to be able to monitor them continuously to be secure.

Then ask, “How are other folks using this ‘continuous monitoring' concept?” There are different definitions from best practices and regulatory standards that use the word “continuous,” like the SANS (now CIS) Critical Security Controls and NIST [PDF]. The definitions vary. SANS says “Run automated vulnerability scanning tools against all systems on the network on a weekly or more frequent basis and deliver prioritized lists of the most critical vulnerabilities to each responsible system administrator along with risk scores”.
NIST says “Information security continuous monitoring (ISCM) is defined as maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.” With that said, the intent behind “continuous” is the same: to provide you with actionable, as-close-to-real-time-as-possible visibility into risk in your environment, to ultimately reduce your risk of a breach. (Side note: Rapid7 was also recently recognized as the top company for meeting the SANS top 20 controls, so this is just one of 19 controls we can help with!)

Many Approaches Available

There are different approaches to continuous risk monitoring, ranging from running back-to-back vulnerability scans, to passively finding vulnerabilities using network traffic, to running event-driven vulnerability assessments.

Back-to-Back Scans

This approach is basically running an endless loop of vulnerability scans, so that when one scan finishes you immediately start another. While this approach ensures that you regularly refresh the full picture of the risk on your network, anything that changes between the time a scan starts and ends is a potential blindspot in your risk posture. Not only is this noisy and expensive from a network bandwidth perspective, but a risky asset could join and be removed during this window without your knowledge.

Passively identifying vulnerabilities using network traffic

The other approach to continuously monitoring risk is to place network sniffers throughout your network to find vulnerability risk. This approach sounds pretty good; however, it is limited because it relies only on clear-text traffic on the network. The volume of vulnerabilities it can identify is limited when compared to active vulnerability scanning, and it is more likely to generate false positives that need to be tracked down and explained to your IT organization.
Buyers should also be aware that network traffic is increasingly encrypted – Google is even rewarding sites that use HTTPS with better search rankings – and this limits the visibility of data that can be used for vulnerability assessment. Because of these limitations it's tough to use passive vulnerability scanning alone as true continuous monitoring; you still need active vulnerability scanning in order to have an actionable view of your risk posture. Which is fine, but the deployment architecture is eerily similar to IDS, and would be duplicated if you already have an IDS deployed in your environment. Many organizations have made the upgrade from classic IDS to IPS, because if you are going to go through the effort of sniffing network traffic, you might as well have a solution that can actually prevent an attack from happening instead of just knowing about it. What's even more interesting is that Gartner says “In 2015, 40% of enterprises have a standalone IPS deployed. However, it is decreasing down to 30% by the end of 2017.” That seems odd, right? Well, IPS technology is getting baked into next-generation firewalls, which are becoming a more and more popular choice for enterprises.

This is the trap that most people fall into: thinking they can rely on “passive scanning” to do continuous monitoring, when they a) often have very similar capabilities already baked into their next-generation security tools, and b) are overloaded with false positives that provide more noise than actual monitoring. This is what led us to a new approach.

A Live Approach to Vulnerability Management: Adaptive Security and Nexpose Now

The Adaptive Security approach, which was released with Nexpose 6, is a dynamic, event-driven, automated workflow approach that provides real-time visibility into the changes that occur in your network between vulnerability scans. These adaptive security features provide actionable insight into the impact of those changes on your organization's risk.
Dynamic data collection is made possible by the Nexpose integration with asset sources like DHCP and VMware to identify when an asset joins the network. The automated actions workflow enables instant scanning of these assets, as well as tagging them and/or adding them to a site. Thus, when a new asset or vulnerability appears on the network, Nexpose can automatically assess it and add it to your reports, without any additional deployment and with minimal impact on network performance – and it provides vulnerability insight and actionable information only for the events you want to track, so there's no alert fatigue. Now this can be coupled with Nexpose's Liveboards to get an instantly updating scoreboard of how your environment is doing. Integrating a new subnet into your network after an acquisition? Adaptive Security will instantly scan it and you'll see how it affects your overall risk in (near) real time. New critical vulnerability come out over the weekend? Walk into the office on Monday with a list of all affected assets and the ability to assign remediation to the right IT group. Check out this blog post for more information on Adaptive Security. Ready to get started? Download a free trial of Nexpose to test drive the new adaptive security features!
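The event-driven pattern described above can be sketched in a few lines. This is an illustrative toy, not the Nexpose API: every name here (AssetEvent, trigger_scan, tag_asset, on_asset_event) is hypothetical, and the placeholder functions stand in for the product's automated actions.

```python
# Sketch of an event-driven assessment workflow, in the spirit of the
# Adaptive Security approach described above. All names (AssetEvent,
# trigger_scan, tag_asset, on_asset_event) are hypothetical, not the
# Nexpose API.
from dataclasses import dataclass

@dataclass
class AssetEvent:
    source: str   # e.g. "dhcp" or "vmware"
    ip: str

scanned, tagged = [], []

def trigger_scan(ip):
    scanned.append(ip)        # placeholder: launch a targeted scan of one asset

def tag_asset(ip, tag):
    tagged.append((ip, tag))  # placeholder: tag the asset / add it to a site

def on_asset_event(event):
    """Assess only what changed, as soon as it changes -- no full rescan."""
    trigger_scan(event.ip)
    tag_asset(event.ip, f"discovered-via-{event.source}")

# A DHCP lease fires an event: the new asset is assessed immediately,
# instead of waiting for the next full sweep of the network.
on_asset_event(AssetEvent(source="dhcp", ip="10.0.0.77"))
```

The design point is that work is proportional to change: one new DHCP lease triggers one targeted scan, rather than another pass over the whole subnet.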

Vulnerability Assessment Reports in Nexpose: The Right Tool for the Right Job

Nexpose supports a variety of complementary reporting solutions that allows you to access, aggregate, and take action upon your scan data. However, knowing which solution is best for the circumstance can sometimes be confusing, so let's review what's available to help you pick the right…

Nexpose supports a variety of complementary reporting solutions that allow you to access, aggregate, and take action upon your scan data. However, knowing which solution is best for the circumstance can sometimes be confusing, so let's review what's available to help you pick the right tool for the job.

I want to pull a vulnerability assessment report out of Nexpose. What are my options?

Web Interface

The Nexpose web interface provides a quick and easy way to navigate through your data. You can drill down and navigate through cross-references, and tables support exporting to CSV. Dashboards are a more flexible and configurable way to organize and visualize the data, and printable reports support more comprehensive aggregation. The web interface is best suited for ad-hoc exploratory analysis of data.

Dashboards

Dashboards provide a rich way to visualize and analyze your data in real time. Dashboards in Nexpose Now are highly configurable, flexible, and adaptable to your reporting needs. Cards in the dashboard are easy to use and can be exported to CSV, but are not natively printable or distributable outside of the web interface. Built-in and/or custom report templates are a better option for scheduled distribution and printing.

Built-in Report Templates

Built-in vulnerability assessment report templates allow configurable reporting for common use cases, such as prioritizing remediation, providing an overview of remediation progress, auditing results, etc. Each template allows simple user-interface configuration of the scope of the report, as well as scheduling, distribution, and other settings that can make automated workflows simple to execute. Built-in report templates are the first feature you should use to get familiar with Nexpose reporting capabilities, formats, etc. Built-in report templates may also be configured and generated through the external XML-based application programming interface (API) for even more control.
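As a rough illustration of driving report generation through that XML-based API, the sketch below only builds the request payload. The element and attribute names (ReportGenerateRequest, session-id, report-id) follow the legacy Nexpose XML API 1.1, and the session ID and endpoint are placeholders; verify all of them against your console's API documentation before relying on this.

```python
import xml.etree.ElementTree as ET

def report_generate_request(session_id, report_id):
    """Build the XML payload that asks the console to generate a
    previously configured report. Names follow the legacy Nexpose
    API 1.1 -- verify against your console's API documentation."""
    req = ET.Element("ReportGenerateRequest",
                     {"session-id": session_id, "report-id": str(report_id)})
    return ET.tostring(req, encoding="unicode")

payload = report_generate_request("0123456789abcdef", 42)
# The payload would then be POSTed over HTTPS to the console's
# XML API endpoint, typically https://<console>:3780/api/1.1/xml.
print(payload)
```

Scheduling such a call from a cron job is one lightweight way to fold report generation into an existing automation pipeline.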
If you are satisfied with the level of control and configuration, but would like alternate printable templates, consider using custom report templates.

Custom Report Templates

Custom report templates extend the built-in report templates with various additional reports. Several are available here on the community, but you may also engage with the Rapid7 professional services team to customize the building and deployment of a report specifically suited to your needs. This option is ideal when your organization has little SQL expertise or other reporting infrastructure in place.

SQL Query Export

SQL Query Export provides fine-grained control over the data output into a CSV-formatted report. Raw SQL queries against the Reporting Data Model allow any combination, slicing, and intersection of data that is required. This lightweight option is best when the scale of the report is limited and the CSV format is ideal for consumption. SQL Query Export works well with ad-hoc API reporting and other scripting-oriented solutions. For large-scale deployments that want efficient, indexed access to raw data, consider using Data Warehouse Export instead.

Data Warehouse Export

The Data Warehouse Export feature allows Nexpose to perform an extract, transform, and load (ETL) process into an external data warehouse. The export supports a highly optimized, indexed, and efficient dimensional model that any business intelligence (BI) tool can easily connect to. If you are familiar with a BI tool or your organization already has access to one, then warehousing may be a good fit. The data warehouse export runs on regularly scheduled intervals and as such will have some latency before data is available in the warehouse. The data warehouse is best suited for large-scale enterprise deployments where hundreds of reports may be generated on a daily basis. The more active your organization is at reporting, the more benefit you get from the warehouse.
However, the data warehouse does require a separately managed and installed PostgreSQL instance to export into, and it does not natively provide built-in capabilities such as role-based access control, distribution, or scheduling. BI tools can be used to provide these report management capabilities, such as Tableau, Qlik, Pentaho, Domo, JasperReports Server, and many others.

How do I know which reporting solution is right for me?

The following chart highlights some key similarities and differences between the various reporting solutions, which you can use to help select the reporting capabilities best for you and your organization.

|                          | Web Interface | Dashboards | Built-in Reports         | Custom Reports | SQL Query Export | Data Warehouse Export |
|--------------------------|---------------|------------|--------------------------|----------------|------------------|-----------------------|
| Output Format            | CSV           | CSV        | CSV, HTML, PDF, RTF, XML | PDF, HTML, RTF | CSV              | SQL                   |
| Distribution (e.g. SMTP) |               |            | ✔                        | ✔              | ✔                |                       |
| Scheduling               |               |            | ✔                        | ✔              | ✔                |                       |
| Access Control           | ✔             | ✔          | ✔                        | ✔              | ✔                |                       |
| Printable Output Format  |               |            | ✔                        | ✔              |                  |                       |
| Customizable Output      |               | ✔          |                          | ✔              | ✔                | ✔                     |
| API                      |               |            | ✔                        | ✔              | ✔                |                       |
| Localizable              |               |            | ✔                        | ✔              | ℹ                | ℹ                     |
| Enterprise Scalability   |               |            | ✔                        |                | ℹ                | ✔                     |
| Raw Data Access          |               |            |                          |                | ℹ                | ✔                     |
| JDBC/ODBC Access         |               |            |                          |                | ✔                | ✔                     |

✔ Full support　ℹ Partial support (varies)
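Once the dimensional model lands in the warehouse, any SQL-speaking BI tool can query it. The sketch below shows that query pattern; note that the real warehouse is a PostgreSQL instance, and sqlite3 is used here only to keep the example self-contained. The table and column names (dim_asset, fact_asset, riskscore) are illustrative of a star schema, not a guaranteed match for the exported schema, so verify them against your warehouse.

```python
import sqlite3

# Stand-in warehouse: the real export target is PostgreSQL; sqlite3 keeps
# this sketch runnable with no external setup. Table/column names are
# illustrative of a dimensional (star) model -- verify against your schema.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_asset (asset_id INTEGER PRIMARY KEY,
                        ip_address TEXT, host_name TEXT);
CREATE TABLE fact_asset (asset_id INTEGER,
                         vulnerabilities INTEGER, riskscore REAL);
INSERT INTO dim_asset VALUES (1, '10.0.0.5', 'db01'), (2, '10.0.0.9', 'web01');
INSERT INTO fact_asset VALUES (1, 42, 3100.0), (2, 7, 450.0);
""")

# A typical BI-style question: which assets carry the most risk?
# Facts join to dimensions on the shared surrogate key.
rows = db.execute("""
    SELECT da.host_name, da.ip_address, fa.vulnerabilities, fa.riskscore
    FROM fact_asset fa
    JOIN dim_asset da ON da.asset_id = fa.asset_id
    ORDER BY fa.riskscore DESC
""").fetchall()

for host, ip, vulns, risk in rows:
    print(f"{host} ({ip}): {vulns} vulns, risk {risk}")
```

Because the model is indexed and dimensional, the same join pattern scales to the hundreds-of-reports-per-day deployments the warehouse is built for.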

Never Fear, Vulnerability Disclosure is Here

Last week, some important new developments in the way the US government deals with hackers were unveiled: the first ever vulnerability disclosure policy of the Department of Defense. Touted by Secretary Ash Carter as a ‘see something, say something' policy for the digital domain,…

Last week, some important new developments in the way the US government deals with hackers were unveiled: the first ever vulnerability disclosure policy of the Department of Defense. Touted by Secretary Ash Carter as a ‘see something, say something' policy for the digital domain, this not only provides guidelines for reporting security holes in DoD websites, it also marks the first time since hacking became a felony offense over 30 years ago that there is a legal, safe channel for helpful hackers who want to help secure DoD websites to do so without fear of legal prosecution.

This is historic.

We don't often see this type of outreach to the hacker community from most private organizations, let alone the US government. In a survey of the Forbes Global 2000 companies last year, only 6% had a public method to report a security vulnerability. That means 94% of well-funded companies that spend millions on security today, buying all kinds of security products and conducting “industry best practice” security measures and scans, have not yet established a clear public way to tell them about the inevitable: that their networks or products still have security holes, despite their best efforts.

The simple fact that all networks and all software can be hacked isn't a surprise to anyone, especially not attackers. Breach after breach is reported in the news. Yet the software and services upon which the Internet is built have never had the broad and consistent benefit of friendly hacker eyes reporting security holes to get them fixed. Holes go unplugged, and breaches continue. This is because instead of creating open-door policies for vulnerability disclosure, too many organizations would rather postpone having to deal with it, often until it's too late.
Instead, helpful hackers who “see something” often don't “say something” because they're afraid it might land them in jail. I myself, as a hacker, have observed and not reported vulnerabilities in the past that I stumbled upon outside of a paid penetration test, because of that very real fear. I've built vulnerability disclosure programs at two major vendors (Symantec and Microsoft), in order to shield employees of those companies who found other organizations' vulnerabilities from the concerns that they may face an angry bug recipient alone.

Even then, wearing the mighty cape of a mega corporation, I and others trying to disclose security holes to other organizations encountered the same range of reactions that independent researchers face: from ignoring the report, to full-on legal threats, and one voicemail that I wish I'd saved because I learned new swear words from it. For me, that is rare. But what wasn't rare was the fear that often fuels those negative reactions from organizations that haven't had a lot of experience dealing with vulnerability disclosure.

Fear, as they say in Dune, is the mind-killer. Organizations must not fear the friendly hacker, lest they let the little death bring total oblivion. There is no excuse for organizations letting fear of working with hackers prevent them from doing so for defense. There is no excuse for lacking a vulnerability disclosure policy, in any organization, private or public sector. The only barrier is building the capabilities to handle what can be daunting: facing the world of hackers. Big companies like Google, Apple, and Microsoft have had to deal with this issue for a very long time, and have worked out systems that work for them. But what about smaller organizations? What about other industries outside of the tech sector? What about IoT?
And what about governments, who must walk the line between getting the help they need from the hacker community and accidentally giving free license to nation-states to hack them with an overly permissive policy? There are guidelines for this process in the public domain, too many to list. 2017 will mark my ninth year attending ISO standards meetings, where I've dedicated myself to helping create the standards for ISO 29147 (vulnerability disclosure) and ISO 30111 (vulnerability handling processes). Until April of 2016, neither of these standards was available for free. Now the essential one to start with, ISO 29147, is available for download from ISO at no cost. Most people don't even know it exists, let alone that it's now free. But both standards act as a guide for best practices, not a roadmap for an organization to start building their vulnerability disclosure program, bit by bit.

Enter the first Vulnerability Coordination Maturity Model – a description of five capability areas in vulnerability disclosure that I designed to help organizations gauge their readiness to handle incoming vulnerability reports. These capabilities go beyond engineering, and look at an organization's overall strengths in executive support, communication, analytics, and incentives. The VCMM provides an initial baseline, and a way forward, for any organization, small or large, public or private, that wants to confront its fear of working with friendly hackers, in defense against very unfriendly attackers. The model was built over my years of vulnerability disclosure experience, on all sides of that equation.
I've worked in open source and closed source companies, as the hacker finding the vulnerability, as the vendor responding to incoming vulnerabilities, and as the coordinator between multiple vendors on issues that affected shared libraries, many years before Heartbleed was a common term heard around the dinner table. I was fortunate to be able to present this Vulnerability Coordination Maturity Model at the Rapid7 UNITED Summit a few weeks ago, and my company was honored to work directly with the Department of Defense on this latest bit of Internet history. And though I'm known for creating Microsoft's first ever bug bounty programs, and advised the Pentagon on their first ever bug bounty program, my work now focuses more heavily on core vulnerability disclosure capability-building, and on helping organizations overcome their fears in dealing with hackers.

The way I see it, if 94% of the Forbes Global 2000 is still lagging behind the US government in its outreach to helpful hackers, my work is best done far earlier in an organization's life than when they are ready to create cash incentives for bugs. In fact, not everyone is ready for bug bounties, not public ones anyway, unless they have the fundamentals of vulnerability disclosure in place. But that's a topic for another day.

Today, as we bear witness to a significant positive shift in the US government's public work with hackers, I'm filled with hope. Hope that the DoD's new practice of vulnerability disclosure programs and bounties will expand as a successful model to the rest of the US government; hope that other governments will start doing this too; hope that the rest of the Forbes Global 2000 will catch up; and hope for every interconnected device looming on the Internet of Things to come. Today, we have no time to fear our friends, no matter where in the world or on the Internet they come from. There is no room for xenophobia when it comes to defending the Internet.
Together, we must act as defenders without borders, without fear.
