Rapid7 Blog

Endpoints  

Announcing Microsoft Azure Asset Discovery in InsightVM

Almost every security or IT practitioner is familiar with the ascent and continued dominance of Amazon Web Services (AWS). But you only need to peel back a layer or two to find Microsoft Azure growing its own market share and establishing its position as the most-used, most-likely-to-renew public cloud provider. Azure is a force to be reckoned with. Many organizations benefit from this friendly competition and not only adopt Azure but increasingly use both Azure and AWS.

In this context, security teams are often caught on the swinging end of the rope: a small shake at the top of the rope triggers big swings at the bottom. A credit card is all that is needed to spin up new VMs, but as security teams know, the effort to secure the resulting infrastructure is not trivial.

Built for modern infrastructure

One way you can keep pace is by using a Rapid7 Scan Engine from the Azure Marketplace. You can make use of a pre-configured Rapid7 Scan Engine within your Azure infrastructure to gain visibility into your VMs from within Azure itself. Another way is to use the Rapid7 Insight Agent on your VM images within Azure. With agents, you get visibility into your VMs as they spin up.

This sounds great in a blog post, but since assets in Microsoft Azure are virtual, they come and go without much fanfare. Remember the bottom-of-the-rope metaphor? You're there now. Security needs visibility to identify vulnerabilities in infrastructure to get on the path to remediation, but this is complicated by a few questions:

Do you know when a VM is spun up? How can you assess risk if the VM appears outside your scan window?
Do you know when a VM is decommissioned? Are you reporting on VMs that no longer exist?
Do you know what a VM is used for? Is your reporting simply a collection of VMs, or do those VMs mean something to your stakeholders?

You might struggle to answer these questions if you employ tools that weren't designed with the behavior of modern infrastructure in mind.

Automatically discover and manage assets in Azure

InsightVM and Nexpose, our vulnerability management solutions, offer a new discovery connection that communicates directly with Microsoft Azure. If you know about our existing discovery connection to AWS, you'll find this familiar, but we've added new powers to fit the behavior of modern infrastructure:

Automated discovery: Detect when assets in Azure are spun up and trigger visibility when you need it using Adaptive Security.
Automated cleanup: When VMs are destroyed in Azure, automatically remove them from InsightVM/Nexpose. Keep your inventory clean and your license consumption cleaner.
Automated tag synchronization: Synchronize Azure tags with InsightVM/Nexpose to give meaning to the assets discovered in Azure. Eliminate manual effort to keep asset tags consistent.

Getting started

First, you'll need to configure Azure to allow InsightVM/Nexpose to communicate with it directly. Follow the step-by-step guide in the Azure Resource Manager docs. Specifically, you will need the following pieces of information to set up your connection:

Application ID and Application Secret Key
Tenant ID

(A short sketch of using these credentials with the Azure SDK appears at the end of this post.) Once you have this information, navigate to Administration > Connections > Create. Select Microsoft Azure from the dropdown menu. Enter a Connection name, your Tenant ID, Application ID, and Application Secret Key (a.k.a. Authentication Key). Next, select a Site to contain the assets discovered from Azure. You can control which assets to import with Azure tags. Azure uses a key:value format for tags.
If you want to enter multiple tags, use a comma as a delimiter, e.g., Class:Database,Type:Production. Check Import tags to import all tags from Azure. If you don't want to import all tags in Azure, you can specify exactly which ones to import. The tags on a VM in Azure will be imported and associated automatically with assets as they are discovered. When there are changes to tag assignment in Azure, InsightVM/Nexpose will automatically synchronize tag assignments. Finally, as part of the synchronization, when VMs are destroyed within Azure the corresponding asset in InsightVM/Nexpose will be deleted automatically, ensuring your view remains as fresh and current as your modern infrastructure.

Great success! Now what...?

If you've made it this far, you have your Azure assets synchronized with InsightVM/Nexpose, and you might even have a handful of tags imported. Here are a few ideas to consider when looking to augment your kit:

Create an Azure Liveboard: Use Azure tags as filtering criteria to create a tailored dashboard.
Scan the site, or schedule a scan of a subset of the site.
Create Dynamic Asset Groups using tags to subdivide and organize assets.
Create an automated action to trigger a scan on assets that haven't been assessed.

All of our innovations are built side-by-side with our customers through the Rapid7 Voice program. Please contact your Rapid7 CSM or sales representative if you're interested in helping us make our products better. Not a customer of ours? Try a free 30-day trial of InsightVM today.
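To make the credential pieces concrete, here is a minimal sketch of enumerating Azure VMs and their tags with the same service-principal values (Tenant ID, Application ID, Application Secret Key) the connection asks for. It only illustrates what the discovery connection automates for you, not how InsightVM implements it, and it assumes the azure-identity and azure-mgmt-compute Python packages plus a placeholder subscription ID.

# Illustrative only: enumerate Azure VMs and their key:value tags
# with a service principal. Assumes: pip install azure-identity azure-mgmt-compute
from azure.identity import ClientSecretCredential
from azure.mgmt.compute import ComputeManagementClient

# The same three values the InsightVM/Nexpose connection asks for.
credential = ClientSecretCredential(
    tenant_id="YOUR-TENANT-ID",            # Tenant ID
    client_id="YOUR-APPLICATION-ID",       # Application ID
    client_secret="YOUR-APPLICATION-KEY",  # Application Secret Key
)
compute = ComputeManagementClient(credential, "YOUR-SUBSCRIPTION-ID")

# List every VM in the subscription along with its tags (if any).
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.tags or {})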

The CIS Critical Security Controls Explained - Control 6: Maintenance, Monitoring and Analysis of Audit Logs

In your organizational environment, audit logs are your best friend. Seriously. This is the sixth blog in the series based on the CIS Critical Security Controls. I'll be taking you through Control 6: Maintenance, Monitoring and Analysis of Audit Logs, to help you understand the need to nurture this friendship and how it can bring your information security program to a higher level of maturity while helping you gain visibility into the deep, dark workings of your environment. In the case of a security event or incident, real or perceived, whether it takes place due to one of the NIST-defined incident threat vectors or falls into the "Other" category, having the data available to investigate and effectively respond to anomalous activity in your environment is not only beneficial but necessary.

What this Control Covers

This control has six sections which cover everything from NTP configuration, to verbose logging of traffic from network devices, to how the organization can best leverage a SIEM for a consolidated view and action points, and how often reports need to be reviewed for anomalies. There are many areas where this control runs alongside or directly connects to some of the other controls discussed in other CIS Critical Control blog posts.

How to Implement It

Initial implementation of the different aspects of this control ranges in complexity from a "quick win" to full configuration of log collection, maintenance, alerting and monitoring.

Network Time Protocol: Here's your quick win. By ensuring that all hosts on your network are using the same time source, event correlation can be accomplished in a much more streamlined fashion. We recommend leveraging the various NTP pools that are available, such as those offered from pool.ntp.org. Having your systems check in to a single regionally available server on your network, which has obtained its time from the NTP pool, will save you hours of chasing down information. A quick way to verify that a host's clock agrees with the pool is sketched below.
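The following is a minimal drift-check sketch, assuming the third-party ntplib package; the one-second threshold is an arbitrary illustration, not guidance from the control itself.

# Minimal clock-drift check against an NTP pool server.
# Assumes: pip install ntplib
import ntplib

MAX_DRIFT_SECONDS = 1.0  # arbitrary illustrative threshold

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

# response.offset is the estimated difference between the local clock
# and the server's time, in seconds.
if abs(response.offset) > MAX_DRIFT_SECONDS:
    print(f"Clock drift of {response.offset:.3f}s exceeds {MAX_DRIFT_SECONDS}s")
else:
    print(f"Clock within tolerance (offset {response.offset:.3f}s)")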
Reviewing and Alerting: As you can imagine, there is the potential for a huge amount of data to be sent over to your SIEM for analysis and alerting. Knowing what information to capture and retain is a huge part of the initial and ongoing configuration of the SIEM. Fine-tuning of alerts is a challenge for a lot of organizations. What is a critical alert? Who should receive these, and how should they be alerted? What qualifies as a potential security event? SIEM manufacturers and managed service providers have their pre-defined criteria and, for the most part, are able to effectively define clear use cases for what should be alerted upon; however, your organization may have additional needs. Whether these needs are the result of compliance requirements or of needing to keep an eye on a specific critical system for anomalous activity, defining your use cases and ensuring that alerts are sent for the appropriate level of concern, and to the appropriate resources, is key to avoiding alert fatigue. Events that may not require immediate notification still have to be reviewed. Most regulatory requirements state that logs should be reviewed "regularly" but remain vague on what this means. A good rule of thumb is to have logs reviewed on a weekly basis, at a minimum. While your SIEM may have the analytical capabilities to draw correlations, there will undoubtedly be items you find that require action.

What should I be collecting?

There is a lot of technology out there to "help" secure your environment: everything from Active Directory auditing tools, which allow you to pull nicely formatted and predefined reports, to network configuration management tools. There are all flavors out there doing the same thing that your SIEM tool can do with appropriately managed alerting and reporting. It should be able to be a one-stop shop for your log data. In a perfect world, where storage isn't an issue, each of the following would have security-related logs sent to the SIEM:

Network gear: switches, routers, firewalls, wireless controllers and their APs
3rd-party security support platforms: web proxy and filtration, anti-malware solutions, endpoint security platforms (HBSS, EMET), identity management solutions, IDS/IPS
Servers: special emphasis on any system that maintains an identity store, including all Domain Controllers in a Windows environment; application servers; database servers; web servers; file servers – yes, even in the age of cloud storage, file servers are still a thing, and access (allowed or denied) needs to be logged and managed
Workstations: all security log files

This list is by no means exhaustive, and even at the level noted we are talking about large volumes of information. This information needs a home, and that home needs to be equipped with adequate storage and alerting capabilities. Local storage is an alternative, but it will not provide the correlation, alerting or retention capabilities of a full-blown SIEM implementation. There has been some great work done in helping organizations refine what information to include in log collections. Here are a few resources I have used:

SANS - https://www.sans.org/reading-room/whitepapers/auditing/successful-siem-log-management-strategies-audit-compliance-33528
NIST SP 800-92 - http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-92.pdf
Malware Archeology - https://www.malwarearchaeology.com/cheat-sheets/

Read more on the CIS Critical Security Controls:

The CIS Critical Security Controls Explained - Control 1: Inventory of Authorized and Unauthorized Devices
The CIS Critical Security Controls Explained - Control 2: Inventory of Authorized and Unauthorized Software
The CIS Critical Security Controls Explained – Control 3: Secure Configurations for Hardware & Software
The CIS Critical Security Controls Explained – Control 4: Continuous Vulnerability Assessment & Remediation
The CIS Critical Security Controls Explained – Control 5: Controlled Use of Administrative Privilege

Live Vulnerability Monitoring with Agents for Linux...and more

A few months ago, I shared news of the release of the macOS Insight Agent. Today, I'm pleased to announce the availability of the Linux Agent within Rapid7's vulnerability management solutions. The arrival of the Linux Agent completes the trilogy that Windows and macOS began in late 2016. For Rapid7 customers, all that really matters is that you've got new capabilities to add to your kit.

Introducing Linux Agents

Take advantage of the Linux Agent to:

Get a live view into your exposures: Automatically collect data from your endpoints and seamlessly update your Liveboards, which are always populated with real-time data without the need to hit refresh or rescan.
Get visibility into remote workers: Remote workers rarely, or in some cases never, connect to the corporate network and often miss scheduled scan windows. Our lightweight agents can be deployed to monitor risks posed by the mobile workforce.
Eliminate restricted asset blind spots: Some assets are just too critical to the business to be actively scanned. With our agents, you'll get visibility into assets with strict vulnerability scanning restrictions, while removing the need to manage credentials to gain access.
Get visibility into elastic or ephemeral assets by building the Insight Agent into your base machine images or VM templates.

Of course, Linux isn't a monolithic OS like Windows or macOS. In order for our customers to get the widest possible coverage, Linux Insight Agents support an array of distributions:

Debian 7.0 - 8.2
CentOS 5.2 - 7.3
Red Hat Enterprise Linux (RHEL) Client 5.2 - 7.3
Red Hat Enterprise Linux (RHEL) Server 5.2 - 7.3
Red Hat Enterprise Linux (RHEL) Workstation 5.2 - 7.3
Oracle Enterprise Linux (OEL) Server 5.2 - 7.3
Ubuntu 11.04 - 16.10
Fedora 17 - 25
SUSE Linux Enterprise Server (SLES) 11 - 12
SUSE Linux Enterprise Desktop (SLED) 11 - 12
openSUSE LEAP 42.1 - 42.2
Amazon Linux

With such a diverse list, we hope you're able to find a match for your environment. Ready to get started? Check out the steps to download and install, and you'll be up and running in no time. (If you're scripting a rollout, see the distribution-check sketch at the end of this post.)

...and more

If you've read this far, you may be wondering: "Hey, what about the '...and more' promised in the title?" Since the release of Insight Agents for vulnerability management in late 2016, we've received great feedback from our customers. In particular, we heard that customers liked the visibility they were able to attain but found the management capabilities lacking. With our most recent release, we've brought management capabilities to your Assets with Agents. You can now treat your Assets with Agents just like any other asset in your system. You are now able to:

Add Assets with Agents to groups
Tag Assets with Agents
Run standard reporting from the Console on Assets with Agents
Correlate using Asset Linking
Apply vulnerability exceptions

All of your Assets with Agents will be synchronized from the Insight Platform into an automatically created "Rapid7 Insight Agents" site, so you'll always know where to find them. I hope you grab a moment to give these new tools a spin and let us know what you think! All of our innovations are built side-by-side with our customers through the Rapid7 Voice program. Please contact your Rapid7 CSM or sales representative if you're interested in helping us make our products better. Download a free 30-day trial of InsightVM.
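For scripted rollouts, here is a minimal sketch, using only the Python standard library, that reads /etc/os-release to report a host's distribution and version so you can compare it against the supported list above. It is a hypothetical helper for convenience, not part of the agent installer.

# Report a Linux host's distribution and version from /etc/os-release,
# for comparison against the supported-distribution list above.
def read_os_release(path="/etc/os-release"):
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    return info

if __name__ == "__main__":
    info = read_os_release()
    print(info.get("NAME", "unknown"), info.get("VERSION_ID", "unknown"))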

Addressing the issue of misguided security spending

It's the $64,000 question in security – both figuratively and literally: where do you spend your money? Some people vote, at least initially, for risk assessment. Some for technology acquisition. Others for ongoing operations. Smart security leaders will cover all of the above and more. It's interesting, though: according to a recent study, the 2017 Thales Data Threat Report, security spending is still a bit skewed. For instance, compliance is the top driver of security spending. One would think that business risk and common sense would be the core drivers, but we all know how the world works. The Thales study also found that network and endpoint security were respondents' top spending priorities, yet 30 percent of respondents say their organizations are 'very vulnerable' or 'extremely vulnerable' to security attacks. So, people are spending money on security solutions that may not be addressing their true challenges. Perhaps more email phishing testing needs to be performed; I'm finding that to be one of the most fruitful exercises anyone can do to improve their security program, as long as it's being done the right way. Maybe more or better security assessments are required. Only you – and the team of people in charge of security – will know what's best.

The mismatch of security priorities and spending is something I see all the time in my work. Security policies are documented, advanced technologies are implemented, and executives assume that all is well with security given all the effort and money being spent. Yet, ironically, in so many cases not a single vulnerability scan has been run, much less a formal information risk assessment performed. Perhaps testing has been done, but it wasn't the right type of testing. Or the right technologies have been installed, but their implementation is sloppy or under-managed. This mismatch is an issue that's especially evident in healthcare (i.e., the HIPAA compliance checkbox) but affects businesses large and small across all industries. It's the classic case of putting the cart before the horse. I strongly believe in the concept of "you cannot secure what you don't acknowledge." But you first have to properly acknowledge the issues – not just buy into them because they're "best practice." Simply going through the motions and spending money on security will make you look busy and perhaps demonstrate to those outside of IT and security that something is being done to address your information risks. But that's not necessarily the right thing to do. The bottom line: don't spend that hard-fought $64,000 on security just for the sake of security. Step back. Know what you've got, understand how it's truly at risk, and then, and only then, do something about it. Look at the bigger picture of security – what it means for your organization and how it can best be addressed based on your specific needs rather than what someone else is eager to sell you.

12 Days of HaXmas: The Gift of Endpoint Visibility and Log Analytics

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the "gifts" we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Machine-generated log data is probably the simplest and one of the most-used data sources for everyday use cases such as troubleshooting, monitoring, security investigations... the list goes on. Since log data records exactly what happens in your software over time, it is extremely useful for understanding what caused an outage or security vulnerability. With technologies like InsightOps, it can also be used to monitor systems in real time by looking at live log data, which can contain anything from resource usage information, to error rates, to user activity. So, in short, when used for the right job, log data is extremely powerful... until it's NOT!

When is it not useful to look at logs? When your logs don't contain the data you need. How many times during an investigation have your logs contained enough information to point you in the right direction, but then fallen short of giving you the complete picture? Unfortunately, it is quite common to run out of road when looking at log data; if only you had recorded 'user logins', or some other piece of data that was important with hindsight, you could figure out what user installed some malware and your investigation would be complete. Log data, by its very nature, provides an incomplete view of your system, and while log and machine data is invaluable for troubleshooting, investigations and monitoring, it is generally at its most powerful when used in conjunction with other data sources. If you think about it, knowing exactly what to log up front to give you 100% code or system coverage is like trying to predict the future. Thus, when problems arise or investigations are underway, you may not have the complete picture you need to identify the true root cause.

So our gift to you this HaXmas is the ability to generate log data on the fly through our new endpoint technology, InsightOps, which enables you to fill in any missing information during troubleshooting or investigations. InsightOps is pioneering the ability to generate log data on the fly by allowing end users to ask questions of their environment and returning answers in the form of logs. Essentially, it will allow you to create synthetic logs which can be combined with your traditional log data - giving you the complete picture! It also gives you all this information in one place (so no need to combine a bunch of different IT monitoring tools to get all the information you need). You will be able to ask anything from 'what processes are running on every endpoint in my environment' to 'what is the memory consumption' of a given process or machine. In fact, our vision is to allow users to ask any question that might be relevant for their environment, such that you will never be left in the dark and never again have to say 'if only I had logged that.' (A toy sketch of what such a synthetic log event might look like follows at the end of this post.) Interested in trying InsightOps for yourself? Sign up here: https://www.rapid7.com/products/insightops/ Happy HaXmas!
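To make the "synthetic log" idea concrete, here is a toy sketch that answers the question "what processes are running on this endpoint?" and emits the answer as JSON log events. It is purely illustrative of the concept, not how InsightOps is implemented, and it assumes the third-party psutil package.

# Toy "synthetic log" generator: answer a question about the endpoint
# and emit the answer as JSON log events. Assumes: pip install psutil
import json
import time

import psutil

def running_processes_as_log_events():
    events = []
    for proc in psutil.process_iter(["pid", "name"]):
        events.append({
            "timestamp": time.time(),
            "question": "what processes are running?",
            "pid": proc.info["pid"],
            "process": proc.info["name"],
        })
    return events

for event in running_processes_as_log_events():
    print(json.dumps(event))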

macOS Agent in Nexpose Now

As we look back on a super 2016, it would be easy to rest on one's laurels and wax poetic about the halcyon days of the past year. But at Rapid7 the winter holidays are no excuse for slowing down: the macOS Rapid7 Insight Agent is now available within Nexpose Now.

Live Monitoring for macOS

Earlier this year, we introduced Live Monitoring for Endpoints with the release of a Windows agent for use with Nexpose Now. The feedback from the Community has been great (and lively!), and now we're back with another round. Recall that by adding agents into your threat and vulnerability management routine, you can:

Get a live view into your exposures: Automatically collect data from your endpoints and seamlessly integrate it into Nexpose Now, so your Liveboards are always populated with real-time data without the need to hit refresh or rescan.
Get visibility into remote workers: Remote workers rarely, or in some cases never, connect to the corporate network and often miss scheduled scan windows. Our lightweight agents can be deployed to monitor risks posed by the mobile workforce.
Eliminate restricted asset blind spots: Some assets are just too critical to the business to be actively scanned. With our agents, you'll get visibility into assets with strict scanning restrictions, while removing the need to manage credentials to gain access.

These same powers may now be pointed at your macOS population. macOS adoption has been on the rise for years. Windows adoption is not in danger of being eclipsed, but many customers need visibility into their pockets of macOS machines within the environment. This makes sense: when IT can't always mandate a common hardware platform, entire business units adopt what works for them, and C-suite executives use the hardware they desire; a security team simply needs visibility into what's weak on the systems that mean the most to them.

Getting Started

Just like its Windows counterpart, the macOS agent is easy to install (interactive or silent), easy to manage (directly from Liveboards), and most importantly performs its duty with minimal resource consumption and no user interference. Ready to get started? Here's how: First, navigate to your Liveboards and, if you haven't done so already, add an Agent card. Click on the Manage Agents link and then the Download Mac Agent button. Run the installer package on your Macs of choice and you've taken a first step into a larger world. (A sketch of scripting a silent install appears at the end of this post.) The Rapid7 Insight Agent takes care of the rest, performing initial and regular data collection and securely transmitting the data back to Nexpose Now for assessment. All of this takes place whether the user is connected to your network or just the internet, reducing the effort for you to get the visibility you need. We expect every organization may deploy or configure things a little differently, so we've provided more information and a FAQ on Rapid7 Insight Agents. tl;dr: at launch, the macOS Agent is compatible with macOS Yosemite 10.10 and onwards.

You keep using that word...

Since launching Nexpose Now early in the year and following up with Live Monitoring for Endpoints and Remediation Workflow, we've received questions on the minor but obvious (Beta) label visible within some parts of Nexpose Now. While on the topic of new capabilities, we thought we'd take the opportunity to share some of the Q&A with you all.

What is in (Beta) in Nexpose Now? Remediation Workflow and Live Monitoring for Endpoints are the two current features that have this label applied.
We've opened up these new capabilities to all users of Nexpose Now without restriction.

Why is <feature> Beta? We want to get new capabilities into your hands as soon as possible, so you can start getting value and provide feedback to Rapid7 on how we can improve. We continue to work on improvements that will make the user experience more seamless, more capable and more performant. Beta is used to let customers know Rapid7 is actively working to deliver value: more goodness to come!

Are you releasing untested functionality? All features are fully tested before being released. Users will get a high-quality experience across many workflows, with more features and workflows being added to the product based on the feedback we receive.

Is (Beta) functionality supported? Yes. Features offered in Beta form are fully supported by Rapid7 Technical Support.

May I use these features in production? Yes. That is why we've released them into the world, so they may deliver their intended value to you NOW.

Haven't tried Nexpose Now but are interested? Check out our Help page to learn how to get started with Nexpose Now. All of our innovations are built side-by-side with our customers through the Rapid7 Voice program. Please contact your Rapid7 CSM or sales representative if you're interested in helping us make our products better.
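For admins who prefer to push the package rather than click through the installer, here is a minimal sketch of a silent install driven from Python using macOS's built-in installer command. The package path is hypothetical; use whatever the Download Mac Agent button actually gives you, and run with root privileges.

# Hypothetical silent install of a downloaded .pkg on macOS using the
# built-in `installer` command (requires root privileges).
import subprocess

PKG_PATH = "/tmp/rapid7-insight-agent.pkg"  # hypothetical filename

result = subprocess.run(
    ["sudo", "installer", "-pkg", PKG_PATH, "-target", "/"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    raise SystemExit(f"Install failed: {result.stderr.strip()}")
print("Agent package installed.")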

SIEM Tools Aren't Dead, They're Just Shedding Some Extra Pounds

Security Information and Event Management (SIEM) is security's Schrödinger's cat. While half of today's organizations have purchased SIEM tools, it's unknown if the tech is useful to the security team... or if its heart is even beating or deployed. In response to this pain, people, mostly marketers, love to shout that SIEM is dead, and analysts are proposing new frameworks with SIEM 2.0/3.0, Security Analytics, User & Entity Behavior Analytics, and most recently Security Operations & Analytics Platform Architecture (SOAPA). However, SIEM solutions have also evolved from clunky beasts to solutions that can provide value without requiring multiple dedicated resources. While some really want SIEM dead, the truth is it still describes the vision we all share: reliably find insights from security data and detect intruders early in the attack chain. What's happened in this battle of survival of the fittest is that certain approaches and models simply weren't working for security teams and the market. What exactly has SIEM lost in this sweaty regimen of product development exercise? Three key areas have been tightened and toned to make the tech something you actually want to use.

No More Hordes of Alerts without User Behavior Context

User Behavior Analytics. You'll find this phrase at every SIEM vendor's booth, and occasionally in their technology as well. Why? This entire market segment explosion spawned from two major pain points in legacy SIEM tech: (1) too many false-positive, non-contextual alerts, and (2) failure to detect stealthy, non-malware attacks, such as the use of stolen credentials and lateral movement. By tying every action on the network to the users and assets behind them, security teams spend less time retracing user activity to validate and triage alerts, and can detect stealthy, malicious behavior earlier in the attack chain. Applying UBA to SIEM data results in higher-quality alerts and faster investigations, as teams spend less time retracing IPs to users and running tedious log searches.

Detections Now Cover Endpoints Without Heavy Lifting

Endpoint Detection and Response. This is another super-hot technology of 2016, and while not every breach originates from the endpoint, endpoints are often an attacker's starting point and provide crucial information during investigations. There are plenty of notable behaviors that, if detected, are reliable signs of "investigate-me" behavior. A couple of examples:

Log deletion
First-time admin action (or detection of privilege exploit; a toy sketch appears at the end of this post)
Lateral movement

Any SIEM that doesn't offer built-in endpoint detection and visibility, or at the very least automated ways to consume endpoint data (and not just anti-virus scans!), leaves gaps in coverage across the attack chain. Without endpoint data, it's very challenging to have visibility into traveling and remote workers or to detect an attacker before critical assets are breached. It can also complicate and slow incident investigations, as endpoint data is critical for a complete story. A standard investigation workflow consults the relevant data sources at each step. Incident investigations are hard. They require both incident response expertise (how many breaches have you been a part of?) and data manipulation skills to get the information you need.
If you can't search for endpoint data from within your SIEM, that slows down the process and may force you to physically access the endpoint to dig deeper. Leading SIEMs today offer a combination of agents or an endpoint scan to ingest this data, detect local activity, and have it available for investigations. We do all of this and supplement our endpoint detections with deception technology, which includes decoy honey credentials that are automatically injected into memory to better detect pass-the-hash and credential attacks.

Drop the Fear, Uncertainty, and Doubt About Data Consumption

There are a lot of things that excite me, for example, the technological singularity, autonomous driving, loading my mind onto a Westworld host. You know what isn't part of that vision? Missing and incomplete data. Today's SIEM solutions derive their value from centralizing and analyzing everything. If customers need to weigh the value of inputting one data set against another, that results in a fractured, frustrating experience. Fortunately, this too is now a problem of the past. There are a couple of factors behind these winds of change. Memory capacity continues to expand at close to a Moore's Law pace, which is fantastic, as our log storage needs are heavier than ever before. Vendors now offer mature cloud architectures that can securely store and retain log data to meet any compliance need, along with faster search and correlation than most on-premise deployments can dream about. The final shift, and one that's currently underway today, is with vendor pricing. Today's models revolve around events per second and data volume indexed. But what's the point of considering endpoint, cloud, and log data if the inevitable data volume balloon means the org can't afford to do so? We've already tackled this challenge, and customers have been pleased with it. Over the next few years, new and legacy vendors alike will shed existing models to reflect the demand for sensible data pricing that finally arms incident investigators with the data and context they need.

There's a lot of pain with existing SIEM technology – we've experienced it ourselves, from customers, and from every analyst firm we've spoken with. However, that doesn't mean the goal isn't worthy or that the technology has failed to adapt. Can you think of other ways SIEM vendors have collectively changed their approach over the years? Share it in the comments! If you're struggling with an existing deployment and are looking to augment or replace it, check out our webcast, "Demanding More From Your SIEM," for recommendations and our approach to the SIEM you've always wanted.
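As a toy illustration of the "first-time admin action" idea above, the sketch below keeps a per-user set of admin actions seen so far and flags any action a user has never performed before. Real products build far richer baselines over time; the event format here is invented for the example.

# Toy "first-time admin action" detector: flag an admin action the first
# time a given user is seen performing it. Event format is invented.
from collections import defaultdict

seen_actions = defaultdict(set)  # user -> admin actions already observed

def check_event(user, action):
    if action not in seen_actions[user]:
        seen_actions[user].add(action)
        return f"ALERT: first-time admin action '{action}' by {user}"
    return None

events = [
    ("alice", "clear_event_log"),
    ("alice", "clear_event_log"),   # second occurrence: no alert
    ("bob", "create_local_admin"),
]
for user, action in events:
    alert = check_event(user, action)
    if alert:
        print(alert)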

Announcing InsightOps - Pioneering Endpoint Visibility and Log Analytics

Our mission at Rapid7 is to solve complex security and IT challenges with simple, innovative solutions. Late last year Logentries joined the Rapid7 family to help drive this mission. The Logentries technology itself had been designed to reveal the power of log data to the world and had built a community of 50,000 users on the foundations of our real-time, easy-to-use yet powerful log management and analytics engine. Today we are excited to announce InsightOps, the next generation of Logentries.

InsightOps builds on the fundamental premise that in a world where systems are increasingly distributed, cloud-based and made up of connected/smart devices, log and machine data is inherently valuable for understanding what is going on, be that from a performance perspective, when troubleshooting customer issues or when investigating security threats. However, InsightOps also builds on a second fundamental premise: log data is very often an incomplete view of your system, and while log and machine data is invaluable for troubleshooting, investigations and monitoring, it is generally at its most powerful when used in conjunction with other data sources. If you think about it, knowing exactly what to log up front to give you 100% code or system coverage is like trying to predict the future. Thus, when problems arise or investigations are underway, you may not have the complete picture you need to identify the true root cause.

To solve this problem, InsightOps allows users to ask questions of specific endpoints in your environment. The endpoints return answers to these questions, in seconds, in the form of log events such that they can be correlated with your existing log data. I think of it as being able to generate 'synthetic logs' on the fly - logs designed to answer your questions as you investigate or need vital missing information. How often have you said during troubleshooting or an investigation, "I wish I had logged that..."? Now you can ask questions in real time to fill in the missing details, e.g., "who was the last person to have logged into this machine?"

InsightOps combines both log data and endpoint information so that users can get a more complete understanding of their infrastructure and applications through a single solution. InsightOps delivers this IT data in one place and thus avoids the need for IT professionals to jump between several disparate tools in order to get a more complete picture of their systems. By the way, this is the top pain point IT professionals have reported across lots and lots of conversations that we have had, and continue to have, with our large community of users. To say I am excited about this is an understatement. I've been building and researching log analytics solutions for more than 10 years, and I truly believe the power provided by combining logs and endpoints will be a serious game changer for anybody who utilizes log data as part of their day-to-day responsibilities, be that for asset management, infrastructure monitoring, maintaining compliance or simply achieving greater visibility, awareness and control over your IT environment.

InsightOps will also provide some awesome new capabilities beyond our new endpoint technology, including:

Visual Search: Visual search is an exciting new way of searching and analyzing trends in your log data by interacting with auto-generated graphs. InsightOps will automatically identify key trends in your logs and will visualize these when in visual search mode.
You can interact with these to filter your logs, allowing you to search and look for trends in your log data without having to write a single search query.

New Dashboards and Reporting: We have enhanced our dashboard technology, making it easier to configure dashboards as well as providing a new, slicker look and feel. Dashboards can also be exported to our report manager, where you can store and schedule reports, which can be used to provide a view of important trends, e.g., reporting to management or for compliance reporting purposes.

Data Enrichment: Providing additional context and structuring log data can be invaluable for easier analysis and ultimately to drive more value from your log and machine data. InsightOps enhances your logs by enriching them in two ways: (1) by combining endpoint data with your traditional logs to provide additional context, and (2) by normalizing your logs into a common JSON structure so that it is easier for users to work with them, run queries, build dashboards, etc. (A small sketch of this kind of normalization appears at the end of this post.)

As always, check it out and let us know what you think - we are super excited to lead the way into the next generation of log analytics technologies. You can apply for access to the InsightOps beta program here: https://www.rapid7.com/products/insightops/beta-request
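To illustrate the normalization idea (and only to illustrate it; this is not InsightOps's actual parser), here is a minimal sketch that turns a classic sshd syslog line into a common JSON structure. The field names and pattern are invented for the example.

# Toy normalization: parse a classic sshd syslog line into JSON.
# Field names and the pattern are invented for illustration.
import json
import re

PATTERN = re.compile(
    r"(?P<month>\w{3}) +(?P<day>\d+) (?P<time>[\d:]+) (?P<host>\S+) "
    r"sshd\[\d+\]: Failed password for (?P<user>\S+) from (?P<src_ip>\S+)"
)

def normalize(raw_line):
    match = PATTERN.search(raw_line)
    if not match:
        return {"raw": raw_line}  # keep unparsed lines instead of dropping them
    event = match.groupdict()
    event["event_type"] = "failed_login"
    return event

line = "Mar  3 09:14:07 web01 sshd[4321]: Failed password for admin from 203.0.113.9 port 55712 ssh2"
print(json.dumps(normalize(line)))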

User Behavior Analytics and Privacy: It's All About Respect

When I speak with prospects and customers about incident detection and response (IDR), I'm almost always discussing the technical pros and cons. Companies look to Rapid7 to combine user behavior analytics (UBA) with endpoint detection and log search to spot malicious behavior in their environment. It's an effective approach: an analytics engine that triggers based on known attack methods as well as users straying from their normal behavior results in high-fidelity detection. Our conversations center on technical features and objections – how can we detect lateral movement, or what does the endpoint agent do, and how can we manage it? That's the nature of technical sales, I suppose. I'm the sales engineer, and the analysts and engineers that I'm speaking with want to know how our stuff works. The content can be complex at times, but the nature of the conversation is simple.

An important conversation that is not so simple, and that I don't have often enough, is a discussion on privacy and IDR. Privacy is a sensitive subject in general, and over the last 15 years (or more), the security community has drawn battle lines between privacy and security. I'd like to talk about the very real privacy concerns that organizations have when it comes to the data collection and behavioral analysis that is the backbone of any IDR program. Let's start by listing some of the things that make employers and employees leery about incident detection and response:

It requires collecting virtually everything about an environment. That means which systems users access and how often, which links they visit, interconnections between different users and systems, where in the world users log in from – and so forth. For certain solutions, this can extend to recording screen actions and messages between employees.
Behavioral analysis means that something is always "watching," regardless of the activity.
A person needs to be able to access this data and sift through it relatively unrestricted.

I've framed these bullets in an intentionally negative light to emphasize the concerns. In each case, the entity that either creates or owns the data does not have total control or doesn't know what's happening to the data. These are many of the same concerns privacy advocates have when large-scale government data collection and analysis comes up. Disputes regarding the utility of collection and analysis are rare. The focus is on what else could happen with the data, and the host of potential abuses and misuses available. I do not dispute these concerns – but I contend that they are much more easily managed in a private organization. Let's recast the bullets above into questions an organization needs to answer.

Which parts of the organization will have access to this system?

Consider first the collection of data from across an enterprise. For an effective IDR program, we want to pull authentication logs (centrally and from endpoints – don't forget those local users!), DNS logs, DHCP logs, firewall logs, VPN, proxy, and on and on. We use this information to profile "normal" for different users and assets, and then call out the aberrations. If I log into my workstation at 8:05 AM each morning and immediately jump over to ESPN to check on my fantasy baseball team (all strictly hypothetical, of course), we'll be able to see that in the data we're collecting. It's easy to see how this makes employees uneasy. Security can see everything we're doing, and that's none of their business! I agree with this sentiment.
However, taking a magnifying glass to typical user behavior, such as websites visited or messages sent, isn't the most useful data for the security team. It might be interesting to a human resources department, but this is where checks and balances need to start. An information security team looking to bring in real IDR capabilities needs to take a long, hard look at its internal policies and decide what to do with information on user behavior. If I were running a program, I would make a big point of keeping this data restricted to security and out of the hands of HR. It's not personal, HR – there's just no benefit to allowing witch hunts to happen. It'll distract from the real job of security and alienate employees. One of the best alerting mechanisms in every organization isn't technology, it's the employees. If they think that every time they report something it's going to put a magnifying glass on every inane action they take on their computer, they're likely to stop speaking up when weird stuff happens. Security gets worse when we start using data collected for IDR purposes for non-IDR use cases.

Who specifically will have access, to what information, and how will that be controlled?

What about people needing unfettered access to all of this data? For starters, it's absolutely true. When Bad Things™ are detected, at some point a human is going to have to get into the data, confirm it, and then start to look at more data to begin the response. Consider the privacy implications, though; what is to stop a person from arbitrarily looking at whatever they want, whenever they want, from this system? The truth is organizations deal with this sort of thing every day anyway. Controlling access to data is a core function of many security teams already, and it's not technology that makes these decisions. Security teams, in concert with the many and varied business units they serve, need to decide who has access to all of this data and, more importantly, regularly re-evaluate that level of access. This is a great place for a risk or privacy officer to step in and act as a check as well. I would not treat access into this system any differently than other systems. Build policy, follow it, and amend it regularly.

Back to if I were running this program: I would borrow pretty heavily from successful vulnerability management exception-handling processes. Let's say there's a vulnerability in your environment that you can't remediate, because a business-critical system relies on it. In this case, we would put in an exception for the vulnerability. We justify the exception with a reason, place a compensating control around it, get management sign-off, and tag an expiration date so it isn't ignored forever. Treat access into this system as an "exception," documenting who is getting access and why, and define a period in which access will be either re-evaluated or expire, forcing the conversation again. An authority outside of security, such as a risk or privacy officer, should sign off on the process and individual access. (A minimal sketch of such an exception record appears at the end of this post.)

Under what circumstances will this system be accessed, and what are the consequences for abusing that access?

There need to be well-defined consequences for those who violate the rules and policies set forth around a good incident detection and response system. In the same way that security shouldn't allow HR to perform witch hunts unrelated to security, the security team shouldn't go on fishing trips (only phishing and hunts). Trawls through data need to be justified.
This is for the same reasons as in the HR case: alienating our users hurts everyone in the long run. Reasonable people are going to disagree over what is acceptable and what is not, and may even disagree with themselves. One Rapid7 customer I spoke with talked about using an analytics tool to track down a relatively basic financial scam going on in their email system. They were clearly justified in both extracting the data and further investigating that user's activity inside the company. "In an enterprise," they said, "I think there should be no reasonable expectation of privacy – so any privacy granted is a gift. Govern yourself accordingly." Of course, not every organization will have this attitude. The important thing here is to draw a distinct line for day-to-day use, and note what constitutes justification for crossing that line. That information should be documented and made readily available, not just in a policy that employees have to accept but never read. Take the time to have the conversation and engage with users. This is a great way to generate goodwill and hear out common objections before a crisis comes up, rather than in the middle of one or after. Despite the above practitioner's attitude toward privacy in an enterprise, they were torn: "I don't like someone else having the ability to look at what I'm doing, simply because they want to." If we, the security practitioners, have a problem with this, so do our users. Let's govern ourselves accordingly.

Technology based upon data collection and analysis, like user behavior analytics, is powerful and enables security teams to quickly investigate and act on attackers. The security-versus-privacy battle lines often get drawn here, but that's not a new battle and there are plenty of ways to address concerns without going to war. Restrict the use of tools to security, track and control who has access, and make sure the user population understands the purpose and rules that will govern the technology. A security organization that is transparent in its actions and receptive to feedback will find its work to be much easier.
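To make the borrowed exception-handling idea concrete, here is a minimal sketch of an access-exception record with an expiration date, using only the Python standard library; the field names are invented for illustration.

# Minimal access-exception record with expiry, mirroring the vulnerability
# exception process described above. Field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class AccessException:
    who: str            # person granted access
    justification: str  # why access is needed
    approved_by: str    # authority outside security, e.g., a privacy officer
    expires: date       # forces re-evaluation instead of indefinite access

    def is_active(self, today=None):
        return (today or date.today()) <= self.expires

grant = AccessException(
    who="analyst.jane",
    justification="IR on-call rotation",
    approved_by="privacy.officer",
    expires=date(2017, 12, 31),
)
print(grant.is_active())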

Live Monitoring with Endpoint Agents

At the beginning of summer, we announced some major enhancements to Nexpose, including Live Monitoring, Threat Exposure Analytics, and Liveboards, powered by the Insight Platform. These capabilities help organizations using our vulnerability management solution to spot changes as they happen and prioritize risks for remediation. We've also been working on a new way for organizations to get a real-time view into their exposures. Rapid7 Insight Agents (Beta), along with our active scanning and Adaptive Security capabilities, allow you to monitor your network and endpoints for risks. This week we're opening up this new capability to all Nexpose Enterprise and Ultimate users.

5 Reasons why you should try Rapid7 Insight Agents (Beta)

1. Get a live view into exposures: Our agents automatically collect data from your endpoints and seamlessly integrate it into Nexpose Now, so your Liveboards are always populated with real-time data without the need to hit refresh or rescan.

2. Endpoint security for remote workers: Remote workers rarely, or in some cases never, connect to the corporate network and often miss scheduled scan windows. Our lightweight agents can be deployed to monitor risks posed by the mobile workforce.

3. Eliminate restricted asset blind spots: Some assets are just too critical to the business to be actively scanned. With our agents, you'll get visibility into assets with strict scanning restrictions, while removing the need to manage credentials to gain access.

4. Track and manage agents centrally: Monitor the status of your agents from your Liveboards to identify any discrepancies or errors that require attention. You can also see when the last data collection occurred and which agents are currently online or offline.

5. One agent to rule them all: The same agent is used for all solutions on the Insight Platform, including Nexpose Now and InsightIDR, so you only need a single endpoint agent for both vulnerability management and endpoint threat detection.

To start using Rapid7 Insight Agents, you'll need to log in to Nexpose and opt in to Nexpose Now. If you have already opted in to Nexpose Now, click on Manage Agents on one of the Agents Liveboard cards. This takes you to the Agents page, where you can download the Windows agent installer and monitor your agents. All of our innovations are built side-by-side with our customers through the Rapid7 Voice program. Please contact your Rapid7 CSM or sales representative if you're interested in helping us make our products better.

[Q&A] User Behavior Analytics as Easy as ABC Webcast

Earlier this week, we had a great webcast all about User Behavior Analytics (UBA). If you'd like to learn why organizations are benefiting from UBA, including how it works, top use cases, and pitfalls to avoid, along with a demo of Rapid7 InsightIDR, check out the on-demand webcast, User Behavior Analytics: As Easy as ABC, or the UBA Buyer's Tool Kit. During the InsightIDR demo, which includes top SIEM, UBA, and EDR capabilities in a single solution, we had a lot of attendee questions (34!). We grouped the majority of questions into key themes, with seven Q&As listed below. Want more? Leave a comment!

1. Is [InsightIDR] a SIEM?

Yes. We call InsightIDR the SIEM you've always wanted, armed with the detection you'll always need. Built hand-in-hand with incident responders, our focus is to help you reliably find intruders earlier in the attack chain. This is accomplished by integrating with your existing network and security stack, including other log aggregators. However, unlike traditional SIEMs, we require no hardware, come prebuilt with behavior analytics and intruder traps, and monitor endpoints and cloud solutions – all without having to dedicate multiple team members to the project.

2. Is InsightIDR a cloud solution?

Yes. InsightIDR was designed to equip security teams with modern data processing without the significant overhead of managing the infrastructure. Your log data is aggregated on-premise through an Insight Collector, then securely sent to our multi-tenant analytics cloud, hosted on Amazon Web Services. More information on the Insight Platform cloud architecture.

3. Does InsightIDR assist with PCI or SOX compliance, or would I need a different Rapid7 solution?

Not with every requirement, but many, including tricky ones. As InsightIDR helps you detect and investigate attackers on your network, it can help with many unique compliance requirements. The underlying user behavior analytics will save you time retracing user activity (who had what IP?), as well as increase the efficiency of your existing stack (over the past month, which users generated the most IPS alerts?). Most notably, you can aggregate, store, and create dashboards out of your log data to solve tricky requirements like "Track and Monitor Access to Network Resources and Cardholder Data." More on how InsightIDR helps with PCI compliance.

4. Is it possible to see all shadow cloud SaaS solutions used by our internal users?

Yes. InsightIDR gets visibility into cloud services in two ways: (1) direct API integrations with leading services, such as Office 365, Salesforce, and Box, and (2) analyzing firewall, web proxy, and DNS traffic. Through the latter, InsightIDR will identify hundreds of cloud services, giving your team visibility into what's really happening on the network. (A toy sketch of the traffic-matching idea appears at the end of this post.)

5. Where does InsightUBA leave off and InsightIDR begin?

InsightIDR includes everything in InsightUBA, along with major developments in three key areas:

Fully searchable data set
Endpoint interrogation and hunting
Custom compliance dashboards

For a deeper breakdown, check out "What's the difference between InsightIDR & InsightUBA?"

6. Can we use InsightIDR/UBA with Nexpose?

Yes! Nexpose and InsightIDR integrate to provide visibility and security detection across assets and the users behind them. With this combination, you can see exactly which users have which vulnerabilities, putting a face and context to the vuln.
If you dynamically tag assets in Nexpose as critical, such as those in the DMZ or containing a software package unique to domain controllers, those are automatically tagged in InsightIDR as restricted assets. Restricted assets in InsightIDR come with a higher level of scrutiny – you'll receive an alert for notable behavior like lateral movement, endpoint log deletion, and anomalous admin activity.

7. If endpoint devices are not joined to the domain, can the agents collect endpoint information to send to InsightIDR?

Yes. From working with our pen testers and incident response teams, we realize it's essential to have coverage for the endpoint. We suggest customers deploy the Endpoint Scan for the main network, which provides incident detection without having to deploy and manage an agent. For remote workers and critical assets not joined to the domain, our Continuous Agent is available, which provides real-time detection, endpoint interrogation, and even a built-in intruder trap, honey credentials, to detect pass-the-hash and other password attacks.

Huge thanks to everyone who attended the live or on-demand webcast – please share your thoughts below. If you want to discuss whether InsightIDR is right for your organization, request a free guided demo here.
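To illustrate the traffic-analysis approach from question 4 (and only to illustrate; this is not InsightIDR code), here is a toy sketch that matches DNS query logs against a short, invented list of SaaS domains to surface shadow cloud usage.

# Toy shadow-SaaS spotter: match DNS queries against known SaaS domains.
# The domain list and log entries are invented for illustration.
KNOWN_SAAS = {
    "dropbox.com": "Dropbox",
    "box.com": "Box",
    "salesforce.com": "Salesforce",
}

dns_queries = [
    ("10.0.4.12", "www.dropbox.com"),
    ("10.0.4.12", "intranet.example.local"),
    ("10.0.7.33", "na3.salesforce.com"),
]

for client_ip, hostname in dns_queries:
    for domain, service in KNOWN_SAAS.items():
        # Match the domain itself or any subdomain of it.
        if hostname == domain or hostname.endswith("." + domain):
            print(f"{client_ip} used {service} ({hostname})")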

Detecting Stolen Credentials Requires Endpoint Monitoring

If you are serious about detecting advanced attackers using compromised credentials on your network, there is one fact that you must come to terms with: you need to somehow collect data from your endpoints. There is no way around this fact. It is not only because the most likely way these attackers will initially access your network is via an endpoint. Yes, that is true, but there are also behaviors, both simple and stealthy, that can only be detected if you have access to data on the systems themselves. Let me explain with the help of Matt Damon.

Monitor the endpoints or miss the activity

A year and a half ago, the InsightIDR team and I published a technical paper that walks through a series of scenarios in which centralized logs show either no trace of lateral movement taking place or an indecipherable amount of information about what actually took place. To quickly summarize the findings laid out in that paper: if you are looking to evade detection, it is very easy to do so by stealing passwords and hashes from endpoints and using them to access other endpoints in an organization. Attackers can typically move from one system to another as fast or slow as they wish, because no user behavior analytics solution will see them unless the organization has configured its endpoints to forward event logs to its SIEM or installed a more flexible software agent to do it. You must have endpoint data to stand a chance of detecting the early indicators of an intruder; more to the point, you need it to detect the attack at all. If you are not monitoring the endpoints in your organization, the odds that you would identify anomalous activity before the attacker reaches a critical server are extremely small.

Think of it like Jason Bourne in the Bourne trilogy. The only times they were close to catching him were when he:

Passed through a customs checkpoint - this is like detecting an attacker first reaching your network
Approached his final target - this is the same as trying to spot data exfiltration from your critical server containing PCI data or intellectual property of some kind

If you are monitoring your endpoints, it is like having a view into every possible car that Bourne could steal while moving about the country between those two points. If you want to increase your chances of detecting attackers earlier and at more stages of their attack, you need a method of detecting them as they move from endpoint to endpoint. I described a few examples in an earlier blog, but we are always adding new indicators for this detection. Related content: learn more about the steps attackers must take to steal data, and how to detect intruders earlier in their hunt for a mission target.

InsightIDR makes it easy with both proven Nexpose and newer Rapid7 technology

Partnerships and integrations are great for getting more value from a tight security budget, but when we recognized just how mandatory it is that InsightIDR monitor endpoints, we took advantage of existing Rapid7 resources: we re-purposed our proven Nexpose scan technology to offer negligible-bandwidth, agent-less endpoint monitoring to every InsightIDR customer. Think of it as a Nexpose scan without the burden of looking for tens of thousands of vulnerabilities. The scan is extremely targeted at the data that helps us identify the indicators of lateral movement, as if you could just have every car in the country look for Jason Bourne's face and report back, rather than having to search through millions of hours of video feeds.
If you also want to watch the endpoints as they venture outside your perimeter, you understand a core reason why we developed our continuous agent. Then, as you use more Rapid7 products, you will realize benefits from the same agent [but that's a topic for another blog]. So if you start talking to a user behavior analytics vendor that promises to detect compromised accounts and advanced attacks, ask them how they collect the endpoint data to do so. If the response is that you should simply push all of the data into your SIEM and they will take care of it, prepare yourself for another high-maintenance process of custom scripts or configuration settings on every endpoint to forward the right data to that central place. Plus, SIEM vendors may charge you a lot of money for that endpoint data; Rapid7 doesn't. Why give customers a reason not to send important data to their detection solution? If you want to learn more about InsightIDR and use it to detect attacks on your organization, watch this on-demand demo. We won't rely on someone else to collect the data you need.

Attackers Love When You Stop Watching Your Endpoints, Even For A Minute

One of the plagues of the incident detection space is the bias of functional fixedness. The accepted thought is that your monitoring is only effective for systems that are within the perimeter and communicating directly with the domain controller. And, the logic continues, when they…

One of the plagues of the incident detection space is the bias of functional fixedness. The accepted thought is that your monitoring is only effective for systems that are within the perimeter and communicating directly with the domain controller. And, the logic continues, when they are away from this trusted realm, your assets are protected only by the preventive software running on them. Given the continuous rise of remote workers (telecommuting rose 79 percent from 2005 to 2012), it's now time to demand that detection solutions monitor all of your endpoints. By default.

There's no time your assets are more susceptible than when they're outside the perimeter

Anyone who's experienced driving the remote countryside of the British Isles will share stories of sheep crossing, blocking, and otherwise invading the narrow roads that require delicate navigation. Think about the risk management involved in the sheep farmer's strategy to make a profit from these herds. Only so much can be spent on fencing and training sheepdogs [or the more experimental guardian llama] to protect the flock from predators and cars before the cost eats too far into the reward side of the equation. So shepherds augment the protections on their meadows with distinct spray-paint tags to identify their sheep in the wide world, and hold out hope the sheep aren't fatally injured, stolen, or worse, diseased before returning to the safety of the meadow.

Your flock of company laptops is not so different (yes, in this analogy the laptops are the sheep, not the people). To maximize productivity, you hand them out with the knowledge that they are likely to increase the risk to your organization. While they are connected to the network, you have various perimeter devices monitoring and blocking traffic headed their way, but there is always a limit to how much you're willing to lock them down when they're not on the VPN, because it prevents work from being accomplished. This means you are accepting that, much more likely than the intriguing USB storage device left outside your company's headquarters, an IT-provisioned laptop will be victimized by a phishing attack or innocently downloaded malware while connected to an outside network. Then, when it inevitably reconnects to your network, attackers have access without ever having been near your perimeter. They are the diseased sheep, and in this case they have access to a trusted asset and eventually the rest of your flock (ok, enough sheep).

Even the most sophisticated detection is ineffective when it can't see

And why are your various detection technologies rendered ineffective here? The answer is simple: they cannot see those laptops. Whether you are monitoring network traffic, log data, or endpoints, you typically see nothing happening on your remote laptops or between them and the open internet. If Ethan Hunt in [the first of many] Mission: Impossible had merely needed to access a CIA operative's laptop, and could have waited until that operative went back into the ridiculous room with weight sensors, humidistats, and audio sensors to connect the compromised laptop to the lonely desktop containing the NOC list, it would have been a lot less dramatic, but much more realistic. Monitoring for malicious behavior and monitoring for human perspiration are equally useless if the pre-approved laptop is already compromised when it rejoins the network and comes back into your purview.
Allowing you to build a complex proxy to leverage the monitoring you purchased is not a “solution”

The vast majority of detection solutions available today have workarounds that make it theoretically possible to monitor your remote endpoints, but reaching a functional state typically requires a great deal of effort and ingenuity. You might need to force all remote browser traffic through a corporate web proxy, or implement a proxy for the endpoint agents to communicate back to their central server. Suddenly, the product you've purchased is demanding a lot more from you just to use its core functionality across all the assets you're paying to monitor. Imagine if an airline promised you inflight WiFi so you could reply to that influx of email or simply keep up with Season 2 of Daredevil, but it came with an accessory antenna you had to hold up manually whenever the satellite was out of reach.

You are going to keep giving your employees laptops, so monitoring them should be the default

So if we all accept that organizations are going to continue providing laptops, we should also agree that you shouldn't have to simply write them off as an accepted risk, nor maintain your own communication system for keeping them in view. The standard package for your detection solution should include the flexibility to see your assets whether they are on or off the traditional network. Why would you invest so much in detection and not include these high-value target systems by default? Every InsightUBA and InsightIDR customer has the option to deploy the Rapid7 continuous agent on its endpoints. Assets that never leave the office will always be monitored via scan, but we designed and built this continuous agent so that you could still watch for concerning behavior on your organization's assets when they're being used on a home network, at a coffee shop, or at a tropical resort. If the continuous agent can contact your Insight Collector, it will communicate through it. If it cannot reach any of your organization's Collectors, it will communicate directly with your instance on the Insight platform. No extra work. No additional fees. This is meant to be simple and standard because you should have as few gaps in visibility as possible. If you'd like to learn more, check out an on-demand demo, or get a free guided demo to see how InsightIDR can benefit your organization.
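To illustrate the shape of that failover, here is a rough sketch in Python. The real agent's transport, retry logic, and endpoints are Rapid7's own and certainly differ; every URL and name below is hypothetical.

import urllib.request

# Hypothetical endpoints; the actual agent configuration differs.
COLLECTORS = [
    "https://collector1.example.corp:8080",
    "https://collector2.example.corp:8080",
]
PLATFORM = "https://insight.example.com"

def deliver(payload: bytes) -> str:
    """Try each known Collector first; fall back to the Insight platform."""
    for url in COLLECTORS + [PLATFORM]:
        try:
            request = urllib.request.Request(url, data=payload, method="POST")
            with urllib.request.urlopen(request, timeout=5):
                return url  # delivered through this hop
        except OSError:
            continue  # unreachable or refused; try the next hop
    raise ConnectionError("no Collector or platform endpoint reachable")

The design point is that this decision belongs inside the agent, not in a proxy you have to build and babysit yourself.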

IDC says 70% of successful breaches originate on the endpoint

This is part 2 of a blog post series on a new IDC infographic covering new data on compromised credentials and incident detection. Check out part 1 now if you missed it. Most organizations focus on their server infrastructure when thinking about security – a fact…

This is part 2 of a blog post series on a new IDC infographic covering new data on compromised credentials and incident detection. Check out part 1 now if you missed it.

Most organizations focus on their server infrastructure when thinking about security - a fact we often see in our Nexpose user base, where many companies only scan their servers. However, IDC finds that 70% of successful breaches originate on the endpoint. This does not necessarily imply insider threats; rather, it is a sign that phishing is prevalent, cheap, and surprisingly effective at compromising machines. Given this compelling data, I strongly urge security professionals responsible for vulnerability management to consider scanning their endpoints to spot and remediate vulnerabilities in browsers, office suites, and other typical endpoint software, reducing the risk of endpoint compromise. At the risk of over-emphasizing the point: the recent JP Morgan breach, which exposed data on half of U.S. households and millions of small businesses, started with a compromised endpoint.

Incident detection must therefore also take endpoints into account. With Rapid7 InsightIDR, we detect endpoint compromises using an agentless scanning technology built on the fast, efficient, and proven technology of Rapid7 Nexpose. In addition, InsightIDR helps protect against phishing emails and detects post-compromise lateral movement on the network, giving you many chances to detect and respond to attackers as soon as they gain access to your network.

User credentials are the weakest link

The number one attack vector for breaches remains credentials. These are often obtained through the following means:

Social engineering the help desk
Trying default passwords
Guessing passwords
Installing keylogging malware
Phishing users
Accessing orphaned accounts

Protecting against these attack vectors is hard, but there are several ways to test whether your environment is vulnerable. For example, Rapid7 Nexpose includes vulnerability checks that test for known default credentials, giving you visibility into your weaknesses and enabling you to protect yourself. Rapid7 Metasploit has great new functionality for testing for weak and reused credentials as part of a penetration test. This can highlight issues where passwords are shared across account types and trust zones. It also exposes common security issues, such as the use of one local administrator account password across the entire organization, which lets our penetration testers own an entire network in a heartbeat using pass-the-hash attacks.

While prevention is necessary, no network is flawless, so detecting attackers using compromised credentials is quickly becoming a critical part of any security program. Compromised credentials are leveraged in three out of four breaches, but they are hard to detect because attackers look like bona fide users to most monitoring solutions. Rapid7 InsightIDR was built specifically to detect the stealthy use of compromised credentials across your domain accounts, local accounts, and the cloud. It integrates with leading SIEMs and threat intelligence solutions such as Splunk, HP ArcSight ESM, and FireEye TAP.

IDC's recommendations on how to protect your organization

IDC makes six recommendations to help protect against these risks:

Re-allocate budget from prevention to detection: Nobody suggests that you should end your prevention efforts.
Prevention continues to be necessary, but you must now also assume that you will be breached, and expand your focus to detection and response.

Monitor user behavior: Users are at the heart of your operation. They produce value for your organization and are the origin of your productivity. This makes them a huge target for attackers, who know that they hold the keys to the kingdom. Security analytics solutions such as Rapid7 InsightIDR can help you detect and investigate malicious user behavior, whether it comes from an insider threat or an attacker masquerading as an internal user. Best of all: if you already have a SIEM, deploying this technology becomes even faster.

Get visibility into unmanaged cloud applications: Whether or not your organization is an avid user of cloud services, your users probably are. Rapid7 InsightIDR customers are always surprised to discover how many of their users turn out to have cloud applications installed, even when it's against company policy. In organizations using enterprise-grade cloud services such as Salesforce.com or Amazon Web Services, InsightIDR's direct integration with key cloud providers also helps you detect and correlate logon activities that don't originate from your network, dramatically improving detection capabilities and security visibility.

Monitor endpoints (including mobile devices!): Monitoring endpoints is critical to detecting local account compromises and other malicious activity, and the same is true for mobile devices. Rapid7 InsightIDR can detect compromises of mobile devices even in BYOD environments by integrating with key choke points, such as mail servers.

Eliminate default passwords: Our penetration testers frequently get access to a network because someone forgot to change a default credential. While this is an easy mistake to make, this lack of basic security hygiene can have dire consequences. Rapid7 Nexpose can help you identify default passwords on all of your hosts so you can swap them out. (A toy version of such a check is sketched at the end of this post.)

Harden endpoints: Hardening your endpoints encompasses several things. Scan your endpoints for client-side vulnerabilities that could be leveraged in phishing attacks, and remediate them. You should also look at deploying exploitation prevention toolkits, which are available for free for Windows and other platforms, and ensure that your mass-malware endpoint solution is installed, active, and up to date. Rapid7 ControlsInsight is a great beacon to help you track how effective your endpoint and server controls are today and where you can get the biggest bang for your buck. It also helps you track progress in improving controls and show your management the positive impact you have on your organization's security posture.

If you'd like to check out some of the Rapid7 solutions we discussed in this blog post, please fill in our contact us form. You can also download Nexpose and Metasploit directly, or request a free, guided InsightIDR demo on the Rapid7 website.
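As a footnote to the eliminate-default-passwords recommendation above, here is a toy sketch of what such a check boils down to. It is not how Nexpose implements its credential checks; the hosts and credential pairs are hypothetical, and you should only ever point something like this at systems you own.

import requests  # third-party HTTP client: pip install requests

# Hypothetical device list and vendor-default credential pairs.
HOSTS = ["https://10.0.0.10", "https://10.0.0.11"]
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

for host in HOSTS:
    for user, password in DEFAULT_CREDS:
        try:
            # HTTP basic auth against a management interface; verify=False
            # only because lab gear commonly uses self-signed certificates.
            response = requests.get(host, auth=(user, password),
                                    timeout=5, verify=False)
        except requests.RequestException:
            break  # host unreachable, skip the remaining guesses
        if response.status_code == 200:
            print(f"[!] {host} still accepts default credential {user}:{password}")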

How to use Nexpose to find all assets affected by DROWN

Introduction DROWN is a cross-protocol attack against OpenSSL. The attack uses export cipher suites and SSLv2 to decrypt TLS sessions. SSLv2 was developed by Netscape and released in February 1995. Due to it containing a number of security flaws, the protocol was completely redesigned and…

Introduction

DROWN is a cross-protocol attack against OpenSSL. The attack uses export cipher suites and SSLv2 to decrypt TLS sessions. SSLv2 was developed by Netscape and released in February 1995. Because it contained a number of security flaws, the protocol was completely redesigned, and SSLv3 was released in 1996. Even though SSLv2 was declared obsolete over 20 years ago, there are still servers supporting the protocol. What's both fascinating and devastating about the DROWN attack is that servers not supporting SSLv2 can also be vulnerable if they use the same RSA key as a server that does support SSLv2. Since SSL/TLS is application agnostic, it is possible to decrypt HTTPS traffic between clients and a server that doesn't support SSLv2 if that server is using the same RSA key as, for example, an email server that does support SSLv2.

We have implemented a DROWN vulnerability check in Nexpose to detect whether an endpoint is vulnerable to the attack because it allows SSLv2 connections. The check has the Nexpose ID ssl-cve-2016-0800. To find other services that don't support SSLv2 but are also vulnerable to DROWN because they use the same RSA key as a vulnerable endpoint, we need to use the power of all the data collected by Nexpose during a scan.

Generate the SQL query

There are a few steps we have to complete to generate our DROWN report. First, we need to get the vulnerability ID used internally by Nexpose. We can get it from the dim_vulnerability table using the Nexpose ID:

SELECT vulnerability_id
FROM dim_vulnerability
WHERE nexpose_id = 'ssl-cve-2016-0800'

Now that we have the vulnerability ID, we need to find all the vulnerable assets and get their certificate fingerprints. The certificate fingerprint is stored in the dim_asset_service_configuration table, and all the vulnerabilities for an asset are stored in the fact_asset_vulnerability_instance table. We ensure we only collect certificate fingerprints from vulnerable endpoints by matching the port of the vulnerability instance with the port of the service configuration:

SELECT dasc.value
FROM dim_asset_service_configuration dasc
JOIN fact_asset_vulnerability_instance favi USING (asset_id)
WHERE favi.vulnerability_id = (SELECT vulnerability_id FROM dim_vulnerability WHERE nexpose_id = 'ssl-cve-2016-0800')
  AND dasc.name = 'ssl.cert.sha1.fingerprint'
  AND dasc.port = favi.port

Finally, we put it all together and select all assets that use the vulnerable certificates:

WITH drown_vulnerability AS (
    SELECT vulnerability_id
    FROM dim_vulnerability
    WHERE nexpose_id = 'ssl-cve-2016-0800'
)
SELECT da.ip_address, dasc.port, dasc.value
FROM dim_asset_service_configuration dasc
JOIN dim_asset da USING (asset_id)
WHERE dasc.value IN (
    SELECT dasc.value
    FROM dim_asset_service_configuration dasc
    JOIN fact_asset_vulnerability_instance favi USING (asset_id)
    WHERE favi.vulnerability_id = (SELECT vulnerability_id FROM drown_vulnerability)
      AND dasc.name = 'ssl.cert.sha1.fingerprint'
      AND dasc.port = favi.port
)
ORDER BY dasc.value, da.ip_address, dasc.port

Generate a report of vulnerable endpoints

After a scan of our site, we can see that we have 44 instances of the vulnerability. The report is generated by selecting SQL Query Export as the report template and pasting in the SQL query we generated above. This gives us a CSV file of the exported data, which shows that we actually have 70 endpoints affected by the DROWN attack.

Remediation steps

Start by disabling SSLv2 on the endpoints that have it enabled, and generate new certificates with new private keys for the affected endpoints.
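Once you have the CSV in hand, a few lines of scripting make the shared-key relationships easy to read. A hedged sketch, assuming the three columns selected above (ip_address, port, value) keep those names in the export:

import csv
from collections import defaultdict

# Group exported rows by certificate fingerprint (the "value" column) so that
# every endpoint sharing an RSA key with an SSLv2-enabled server is listed together.
by_fingerprint = defaultdict(list)

with open("drown_report.csv", newline="") as f:
    for row in csv.DictReader(f):
        by_fingerprint[row["value"]].append(f"{row['ip_address']}:{row['port']}")

for fingerprint, endpoints in sorted(by_fingerprint.items()):
    print(f"{fingerprint} ({len(endpoints)} affected endpoints)")
    for endpoint in endpoints:
        print(f"  {endpoint}")

Each fingerprint group with more than one endpoint is a case where the shared key itself must be replaced, not just SSLv2 disabled on one server.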
