Rapid7 Blog

Incident Detection  

SIEM Market Evolution And The Future of SIEM Tools


There’s a lot to be learned by watching a market like SIEM adapt as technology evolves, both for the attackers and the analysts.

Rapid7 and NISC work together to help customers with detection and response


Rapid7 and NISC will work together to provide Managed Detection and Response (MDR) services to the NISC member base, powered by the Rapid7 Insight platform and Rapid7 Security Operation Centers (SOCs).

PCI DSS Dashboards in InsightIDR: New Pre-Built Cards


No matter how much you mature your security program and reduce the risk of a breach, you still need to report across the company and, periodically, to auditors. We want to make that part as easy as possible. We built InsightIDR as a SaaS SIEM on top of our proven User Behavior Analytics (UBA) technology to address your incident detection and response needs. Late last year, we added the ability to create custom dashboards and reports through the Card Library and the Log Entry Query Language (LEQL). Now, we’ve added seven pre-built cards that align directly to PCI DSS v3.2, to help you find important behaviors and communicate them across the company, the board, and external auditors. Let’s walk through a quick overview of the seven cards and how they tie to the requirements in PCI DSS v3.2.

1.3.5: Denied Connection Attempts

PCI Requirement 1 covers installing and maintaining a firewall configuration to protect cardholder data. InsightIDR can easily ingest and visualize all of your security data, and with our cloud architecture, you don’t need to worry about housing and maintaining a datastore, even as your organization grows through global offices or acquisitions. This card covers a standard, important use case: identifying anomalies and trends in your firewall data. In this case, the card runs the query “where(connection_status=DENY) groupby(source_address)” over your firewall log data.

4.1c: Potential Insecure Connections

It’s important to identify traffic destined for port 80, or the use of outdated SSL/TLS, especially for traffic around the CDE. This can help identify misconfigurations and ensure that, per Requirement 4, transmission of cardholder data is encrypted. As with all cards, you can click the gear at the top right to pivot into log search for more context around any particular IP address.

7.1.2b & 8.1.4: Users Accessing the CDE

Identifying which users have accessed the PCI environment is important, as is digging a layer deeper. When did they last access the CDE, and from what asset? This is all important context when identifying the use of compromised credentials. If the credentials of Emanuel Osborne, who has access to the cardholder environment, are used to log in from a completely new asset, should your team be worried? We think so, and that’s why our pre-built detections will automatically alert you. From this card, you can pivot to log search to identify the date of last access. In the top global search, any name can be entered to show all of the assets where those credentials have been used (a new-asset logon is tracked as a notable behavior).

8.1.1: Shared/Linked Accounts in the CDE

Sharing credentials among multiple people is dangerous, as it makes it much more difficult to retrace behavior and identify compromise. This card draws from asset authentication data to identify when the source account is not the destination (where(sourceaccount != destinationaccount) groupby(destinationaccount)), so your team can proactively reduce this risk, especially for the critical CDE.

8.1.3a: Monitor Deactivated Accounts

Similarly, it’s important to know when deactivated accounts are re-enabled and used to access the CDE; many InsightIDR alerts focus on this attack vector, as we’ve found that disabled and service accounts are common targets for lateral movement. Related: See how InsightIDR allows you to detect and investigate lateral movement. This card highlights users with accounts deactivated over the last 30 days.

10.2.4: Highlight Relevant Log Events
10.2.5a: Track & Monitor Authentications

Ah, the beefy Requirement 10: track and monitor access to network resources and cardholder data. This is where InsightIDR shines. All of your disparate log data is centralized (Req. 10.2) to detect malicious behavior across the attack chain (Req. 10.6). With the standard subscription, the data is stored and fully searchable for 13 months (Req. 10.7). These two cards highlight failed and successful authentications, so you can quickly spot anomalies and dig deeper. If you’ve been using InsightIDR for a few months, you already know that we surface important authentication events to you in the form of alerts and notable events. These cards make it easier to share your findings and current posture outside the team. For a comprehensive list of how InsightIDR can help you maintain PCI compliance, check out our PCI DSS v3.2 guide here. If you don’t have InsightIDR, check out our interactive product tour to see how it can help unify your data, detect across the attack chain, and prioritize your search with security analytics.
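The LEQL queries above run inside InsightIDR, but the grouping logic behind the shared-account card is easy to reason about. As a rough illustration only (the event structure and field names below are hypothetical stand-ins, not InsightIDR's actual schema), the same idea could be sketched in Python:

```python
from collections import Counter

def shared_account_logons(auth_events):
    # Mirrors the LEQL idea:
    #   where(sourceaccount != destinationaccount) groupby(destinationaccount)
    # i.e., count logons where the authenticating account differs from the
    # account actually used, grouped by the destination account.
    counts = Counter()
    for event in auth_events:
        if event["source_account"] != event["destination_account"]:
            counts[event["destination_account"]] += 1
    return dict(counts)

events = [
    {"source_account": "eosborne", "destination_account": "svc_cde"},
    {"source_account": "jdoe", "destination_account": "svc_cde"},
    {"source_account": "jdoe", "destination_account": "jdoe"},
]
print(shared_account_logons(events))  # {'svc_cde': 2}
```

Destination accounts that accumulate logons from many distinct source accounts are exactly the shared or linked accounts the 8.1.1 card is meant to surface.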

SIEM Security Tools: Four Expensive Misconceptions


Why modern SIEM security solutions can save you from data and cost headaches. If you want to reliably detect attacks across your organization, you need to see all of the activity that's happening on your network. More importantly, that activity needs to be filtered and prioritized by risk, across assets and users, to help you report on how the team is measurably chipping away at Risk Mountain™. Today, the only solution capable of flexibly ingesting, correlating, and visualizing data from a sprawling tool stack is a SIEM. SIEMs don't get a lot of love; some might say their deployment felt like a data lake glacier, where budget dollars flowed in, never to leave. Advances in SIEM tools and customer pain are converging, as organizations look to cut losses on stagnant deployments and try a new approach. In this post, let's cover four misconceptions that you won't have to suffer from with today's nimble and adaptive SIEMs.

MISCONCEPTION #1: SIEMs are complex, unwieldy tools that take months to deploy and a large dedicated staff to keep running.

REALITY: Cloud architecture makes SIEM deployment quicker and maintenance easier than ever before. More SIEM security tools today offer cloud deployment as an option, so there is no longer a need for a large initial hardware investment. In addition, SIEM providers now include pre-built analytics in their solutions, so security teams don't need to spend recurring hours setting up and refining detection rules as analysts comb through more and more data. The simpler setup of SIEMs running in the cloud, combined with pre-built analytics, means that an organization can get started with SIEM security technology in just a few days instead of months, and won't have to continually add staff to keep the SIEM running effectively.
When choosing a SIEM, define the use cases you'd like the deployment to tackle and consider a proof of concept (POC) before making a purchase; you'll have better expectations for success and see how quickly it can identify threats and risk.

MISCONCEPTION #2: As SIEMs ingest more data, data processing costs skyrocket into the exorbitant.

REALITY: Not all SIEMs come with burdensome costs as deployment size increases. Traditional SIEM pricing models charge by the quantity of data processed or indexed, but this model penalizes customers. SIEMs become more effective at detecting attacks as more data sources are added over time, especially those that can identify attacker behaviors. As a result, any pricing model that discourages you from adding data sources could hamstring your SIEM's efficacy. Work with your SIEM vendor to determine what data sets you need today and may need in the future, so you can scale effectively without getting burned.

MISCONCEPTION #3: SIEMs aren't great at detection. They should primarily be used once you know where to look.

REALITY: SIEMs with modern analytics can be extremely effective at detecting real-world attack behaviors in today's complex environments. Related to misconception #2 above, if you can't process as many data sources as possible, such as endpoints, networks, and cloud services, then you are potentially limiting your SIEM's ability to detect anomalies and attacks in your environment. In fact, many traces of attackers require the comprehensive data sets fed into a SIEM. Two examples are detecting the use of stolen passwords and lateral movement, extremely common behaviors once an attacker has access to the network. At Rapid7, we detect these by first linking together IP Address > Asset > User data and then using graph mining and entity relationship modeling to track what is "normal" in each environment. Outside of SIEMs and User Behavior Analytics (UBA) solutions, this is incredibly hard to detect.
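Rapid7 hasn't published the internals of that correlation pipeline, but the basic IP Address > Asset > User join can be illustrated with a toy sketch (the lease and logon tables below are invented stand-ins for DHCP and authentication data, not a real product API):

```python
def attribute_event(source_ip, dhcp_leases, last_logons):
    # Step 1: resolve the event's source IP to an asset via DHCP lease data.
    asset = dhcp_leases.get(source_ip)
    # Step 2: resolve the asset to the user most recently logged on to it.
    user = last_logons.get(asset)
    return {"ip": source_ip, "asset": asset, "user": user}

dhcp_leases = {"10.0.4.12": "LAPTOP-EOSBORNE"}
last_logons = {"LAPTOP-EOSBORNE": "eosborne"}

print(attribute_event("10.0.4.12", dhcp_leases, last_logons))
# {'ip': '10.0.4.12', 'asset': 'LAPTOP-EOSBORNE', 'user': 'eosborne'}
```

Once network events are attributed to users rather than raw IPs, baselining "normal" per user and per peer group becomes possible, which is what makes stolen-credential and lateral-movement detection tractable.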
In a nutshell: SIEM security tools need that data to be effective, so if you restrict the data coming in, they won't be as effective. A SIEM with modern analytics will be capable of detecting real-world attack behaviors earlier in the attack chain.

MISCONCEPTION #4: SIEMs can ingest and centralize log files and network data, but have limited coverage for cloud services and remote workers.

REALITY: Today's SIEMs can and should account for data coming in from the cloud and endpoints. Network-only data sources may be the norm for more traditional SIEMs on the market, but newer SIEMs also pull in data from endpoints and cloud services to make sure you're detecting attacker behavior no matter where it occurs. Just as the perimeter has shifted from the corporate network walls to the individual user, SIEMs have had to adapt to collect data from everywhere these users work, namely endpoints and cloud services. Make sure any SIEM security solution you're considering can integrate these additional data sources, not just traditional log files and network data. At Rapid7, we feel strongly that customers shouldn't have to deal with these past pitfalls, and this mindset is expressed throughout InsightIDR, our solution for incident detection and response. On Gartner's Peer Insights page, we've been recognized by customers for resetting expectations around time to value and ease of use: "We are able to monitor many sources with a very small security team and provide our clients with the peace of mind usually only achieved with large security departments." "[InsightIDR]… on its own, mitigated against 75% of identified threats within our organisation, but with the simplicity of use even my granny could get to grips with." Want to try InsightIDR at your organization? Start with our on-demand 20-minute demo here, or contact us; we want to learn about your challenges and provide you with answers.

Under the Hoodie: Actionable Research from Penetration Testing Engagements


Today, we're excited to release Rapid7's latest research paper, Under the Hoodie: Actionable Research from Penetration Testing Engagements, by Bob Rudis, Andrew Whitaker, and Tod Beardsley, with loads of input and help from the entire Rapid7 pentesting team. This paper covers the often occult art of penetration testing and seeks to demystify the process, techniques, and tools that pentesters use to break into enterprise networks. By drawing on real, qualified data from the real-life experiences of dozens of pentesters in the field, we're able to suss out the most common vulnerabilities that are exploited, the most common network misconfigurations that are leveraged, and the most effective methods we've found to compromise high-value credentials.

Finding: Detection is Everything

Probably the most actionable finding we discovered is that most organizations that conduct penetration testing exercises have a severe lack of usable, reliable intrusion detection capabilities. Over two-thirds of our pentesters completely avoided detection during the engagement. This is especially concerning given that most assessments don't put a premium on stealth; due to constraints in time and scope, pentesters generate an enormous amount of malicious traffic. In an ideal network, these would be setting off alarm bells everywhere. Most engagements end with recommendations to implement some kind of incident detection and response, regardless of which specific techniques for compromise were used.

Finding: Enterprise Size and Industry Don't Matter

When we started this study, we expected to find quantitative differences between small networks and large networks, and between different industries.
After all, you might expect a large, financial-industry enterprise of over 1,000 employees to be better equipped to detect and defend against unwelcome attackers, given the security resources available to it and required by various compliance regimes and regulatory requirements. Or, you might believe that a small, online-only retail startup would be more nimble and more familiar with the threats facing its business. Alas, this isn't the case. As it turns out, the detection and prevention rates are nearly identical between large and small enterprises, and no industry seemed to fare any better or worse when it came to successful compromises. This is almost certainly because IT infrastructure pretty much everywhere is built from the same software and hardware components. Thus, all networks tend to be vulnerable to the same common misconfigurations and have the same vulnerability profiles when patch management isn't firing at 100%. There are certainly differences in the details -- especially when it comes to custom-designed web applications -- but even those tend to be powered by the same sorts of frameworks and components.

The Human Touch

Finally, if you're not really into reading a bunch of stats and graphs, we have a number of "Under the Hoodie" sidebar stories, pulled from real-life engagements. For example, while discussing common web application vulnerabilities, we share a story of how a number of otherwise lowish-severity, external web application issues led to the eventual compromise of the entire internal back-end network.
Not only are these stories fun to read, but they also do a pretty great job of illustrating how unrelated issues can conspire on an attacker's behalf to produce surprising levels of unauthorized access. I hope you take a moment to download the paper and look at our findings; I don't know of any other research out there that explores the nuts and bolts of penetration testing in quite the depth or breadth this report provides. In addition, we'll be covering the material at our booth at the RSA security conference next week in San Francisco, as well as hosting a number of "Ask a Pentester" sessions. Andrew and I will both be there, and we love nothing more than connecting with people who are interested in Rapid7's research efforts, so definitely stop by.

Incident Detection and Investigation - How Math Helps But Is Not Enough


I love math. I am even going to own up to having been a "mathlete" and looking forward to the annual UVM Math Contest in high school. I pursued a degree in engineering, so I can now more accurately say that I love applied mathematics, which has a much different goal than pure mathematics. Taking advanced developments in pure mathematics and applying them to various industries in a meaningful manner often takes years or decades. In this post, I want to provide the necessary context for math to add a great deal of value to security operations, but also explain the limitations and issues that arise when it is relied upon too heavily.

A primer on mathematics-related buzzwords in the security industry

There are always new buzzwords used to describe security solutions in the hope that they will grab your attention, but often the specific detail of what's being offered is lost or missing. Let's start with my least favorite buzzphrase:

Big Data Analytics - This term is widely used today, but it is imprecise and means different things to different people. It is intended to mean that a system can process and analyze data at a speed and scale that would have been impossible a decade ago. But that, too, is vague. Given the amount of data generated by security devices today, the scale of continually growing networks, and the speed with which attackers move, being able to crunch enormous amounts of data is a valuable capability for your security vendors to have, but that capability tells you very little about the value derived from it. If someone tries to sell you their product because it uses Cassandra or MongoDB or another of the dozens of NoSQL database technologies in combination with Hadoop or another map/reduce technology, your eyes should gloss over, because what matters is how these technologies are being used. Dig deeper and ask "so your platform can process X terabytes in Y seconds, but how does that specifically help me improve the security of my organization?"
Next, let me explain a few of the more specific, but still oversold, math-related (and data science) buzzwords:

Machine Learning is all about defining algorithms flexible and adaptive enough to learn from historical data and adjust to changes in a given dataset over time. Some people prefer to call it pattern recognition because it uses clusters of like data and advanced statistical comparisons to predict what would happen if the monitored group were to continue behaving in a reasonably close manner to that previously observed. The main benefit of this field for security is the possibility of distinguishing the signal from the noise when sifting through tons of data, whether using clustering, prediction models, or something else.

Baselining is a part of machine learning that is actually quite simple to explain. Given a significant sample of historical data, you can establish various baselines that show a normal level of any given activity or measurement. The value of baselining comes from detecting when a measured value deviates significantly from the established historical baseline. A simple example is credit card purchases: consider an average credit card user who is found to spend between $600 and $800 per week. This is the baseline for credit card spending for this person.

Anomaly Detection refers to the area of machine learning that identifies the events or other measurements in a dataset which are significantly different from an established pattern. These detected events are called "outliers", like the Malcolm Gladwell book. Finding anomalous behavior on your network does not inherently mean you have found risky activity, just that these events differ from the vast majority of historically seen events in the organization's baseline. To extend the example above: if the credit card user spends $650 one week and $700 the next, that's in line with previous spending patterns. Even spending $575 or $830 is outside the established baseline, but not much cause for concern. Detecting an anomaly would be finding that the same user spent over $4,000 in a week. That is an uncharacteristic amount to spend, and the purchases that week should probably be reviewed, but it doesn't immediately mean fraud was committed.

Artificial Intelligence is not exactly a mathematics term, but it sometimes gets used as a buzzphrase by security vendors as a synonym for "machine learning". Most science fiction movies focus on the potentially negative consequences of creating artificial intelligence, but the goal is to create machines that can learn, reason, and solve problems the way the awesome brains of animals can today. "Deep Blue" and "Watson" showed this field's progress for chess and quiz shows, respectively, but those technologies were applied to games with set rules and still needed significant teams to manage them. If someone uses this phrase to describe their solution, run, because all other security vendors would be out of business if this advanced research could be consistently applied to motivated attackers who play by no such set of rules when trying to steal from you.

Peer Group Analysis is actually as simple as choosing very similar actors (peers) that do, or are expected to, act in very similar manners, then using these groups of peers to identify when one outlier begins to exhibit behavior significantly different from its peers. Peer groups can be similar companies, similar assets, individuals with similar job titles, people with historically similar browsing patterns, or pretty much any cluster of entities with a commonality. The power of peer groups is to compare new behavior against the new behavior of similar actors rather than expecting the historical activity of a single actor to continue in perpetuity.
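To make the credit-card example concrete, here is a minimal z-score-style baseline check. The three-standard-deviation threshold and the weekly window are arbitrary assumptions for illustration, not how any particular product scores anomalies:

```python
import statistics

def is_anomalous(history, new_value, k=3.0):
    # Baseline: mean and standard deviation of historical observations.
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    # Outlier: more than k standard deviations from the baseline mean.
    return abs(new_value - mean) > k * stdev

weekly_spend = [600, 650, 700, 800, 750, 620, 780]  # the $600-$800 baseline

print(is_anomalous(weekly_spend, 830))   # False: beyond the range, but close
print(is_anomalous(weekly_spend, 4000))  # True: wildly uncharacteristic
```

Note that $830 is flagged as normal even though it sits outside the observed range, while $4,000 trips the threshold, matching the intuition in the example: deviation has to be significant, not merely present, before it deserves review.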
Make sure the next time someone starts bombarding you with these terms that they can explain why they are using them and the results you are going to see.

Mathematics will trigger new alerts, but you could just trade one kind of noise for another

The major benefit that user behavior analytics promises security teams today is the ability to stop relying on the rules and heuristics primarily used for detection in their IPS, SIEM, and other tools. Great! Less work for the security team to maintain and research the latest attack, right? It depends. The time you currently spend writing and editing rules in your monitoring solutions could very well be taken over by training the analytics, adjusting thresholds, tweaking the meaning of "high risk" versus "low risk", and any number of modifications that are not technically rule setting. If you move from rules and heuristics to automated anomaly detection and machine learning, there is no question that you are going to see outliers and risky behaviors that you previously did not. Your rules were most likely aimed at identifying patterns that your team somehow knows indicate malicious activity, and anomaly detection tools should not be restricted by the knowledge of your team. However, not involving the knowledge of your team means that a great deal of the outliers identified will be legitimate to your organization, so instead of having to sift through thousands of false positives that broke a yes/no rule, you will have thousands of false positives on a risk scale from low to high. I have three examples of the kind of false positives that can occur because human beings are not broadly predictable:

Rare events - Certain events occur in our lives that cause significant changes in behavior, and I don't mean having children. When someone changes roles in your organization, they are most likely going to immediately look strange in comparison to their established peer group. Similarly, if your IT staff stays late to patch servers every time a major vulnerability (with graphics and a buzz-name!) is released, some of the most critical administrators and systems in the organization are now straying from any established baselines.

Periodic events - Someone taking vacation is unlikely to skew your alerting, because the algorithms should be tuned to account for a week without activity, but what about annual audits for security, IT, accounting, etc.? What about the ongoing change in messaging systems and collaboration tools that constantly leads to data moving through different servers?

Rare actors - There are always going to be individuals with no meaningful peer; whether it is a server that is accessed by nearly every user in the organization (without their knowledge), like IIS servers, or a user that does extremely unique, cutting-edge research, like basically everyone on the Rapid7 Research team, mathematics has not reached the point where it can determine enough meaningful patterns to predict the behavior of some portion of the organization that you need to monitor.

Aside from dealing with a change in the noise, there is the very real risk that by relying too heavily on canned analytics to detect attacks, you can easily leave yourself open to manipulation. If I believe that your organization is using "big data analytics" as most are, I can pre-emptively start to poison the baseline for what is considered normal by triggering events on your network that set off alerts but appear to be false positives upon closer investigation. Then, having forced this activity into some form of baseline, it can be used as an attack vector. This is the challenge that scientists always run into when observing humans: anyone who knows they are being observed can choose to act differently than they otherwise would, and you won't know. A final note on anomalies is that a great deal of them are going to be stupid behavior.
That's right, I guarantee that negligence is a much more common cause of risky activity in your organization than malice, but an unsupervised machine learning approach will not know the difference.

InsightIDR blends mathematics with knowledge of attacker behavior

This post is not meant to say that applied mathematics has no place in incident detection or investigation. On the contrary, the Rapid7 Data Science team is continuously researching data samples for meaningful patterns to use in both areas. We just believe that you need to apply the science behind these buzzwords appropriately. I would summarize our approach in three ways:

A blend of techniques: At times, simple alerts are necessary, because the activity should either never occur in an organization or occurs so rarely that the security team wants to hear about it. The best example of this is granting someone domain administrator privileges; incident response teams always want to know when a new king of the network has been crowned. Some events cannot be assumed good when a solution is baselining or "learning normal", so there should be an extremely easy way for the security team to indicate which activities are permitted to take place in that specific organization.

Add domain expertise: Adding security domain knowledge is not unique to Rapid7, but thanks to our research, penetration testing, and Metasploit teams, the breadth and depth of our familiarity with the tools attackers use and their stealth techniques is unmatched in the market. We continually use this in our analyses of large datasets to find new indicators of compromise, visualizations, and kinds of data that we will add to InsightIDR. Plus, if we cannot get the new data from your SIEM or existing data source, we will build tools like our endpoint monitor or no-maintenance honeypots to go out there and get the data.

Use outliers differently: Almost every user behavior analytics product on the market uses its algorithms to produce an enormous list of events sorted by each one's risk score. We believe in alerting infrequently, so that you can trust an alert is something worth investigating. Outliers? Anomalies? We are going to expose them and help you explore the massive amount of data to hopefully discover unwanted activity, but specific outliers have to pass our own tests for significance and noise level before we will turn them into alerts. Additionally, we will help you look through the data in the context of an investigation, because it can often add clarity to the traditional "search and compare" methods your teams are likely using in your SIEM. So if you want to drop mathematics into your network, flip a switch, and let its artificial intelligence magically save you from the bad guys, we are not the solution for you. Sadly, though, no solution out there is going to fulfill that desire any time soon. If you want to learn more about the way InsightIDR does what I described here, please check out our on-demand demo. We think you will appreciate our approach.

12 Days of HaXmas: Designing Information Security Applications Your Way


Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 days of blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them. Are you a busy information security professional who prefers bloated web applications, fancy interactions, unnecessary visuals, and overloaded screens that are difficult to make sense of? No…I didn't think so! Being a designer, I cringe when I see that sort of stuff, and it's something we avoid at all costs at Rapid7. You don't even have to be a designer to dislike it. My mantra mirrors that of Derek Featherstone, who said, “Create the minimum viable interaction by providing the most valuable piece of information for decision-making as early as possible.” And focusing on good design is the gift I bring to you this HaXmas! To bring you this gift, I am always learning about new ways to solve the problems that you and your teams face on a day-to-day basis. That learning comes from many sources, including our customers, books, webinars, blog posts, and events. One notable event this year was the aptly named An Event Apart, held in Boston. An event what? An Event Apart is a tech conference for designers and developers to learn, and to be inspired by, the latest design trends and coding techniques that improve the way we deliver applications. While other conferences tend to focus only on design, this conference does much more by bringing a variety of topics under one umbrella, including coding and web and mobile app design. To that end, every speaker at An Event Apart is pretty famous in our world; it was great to see them in real life! Three days and twelve presentations later, my head was swimming with ideas.
But the most important themes I brought away were to:

- Design the priority
- Speed it up
- Be more compassionate

Let's look at each of these concepts one by one and see how they apply to the way we designed InsightIDR, Rapid7's incident detection and response tool, which allows security teams to detect intruders earlier in the attack chain.

Design the priority

At the conference, Ethan Marcotte, the father of responsive design, said, “Design the priority, not the layout.” Ethan mentioned this because designers tend to consider the layout of an application screen first. Unfortunately, this approach has a tendency to throw off the signal-to-noise ratio. Jeffrey Zeldman agreed with Ethan when he said, “Design your system to serve your content, not the other way around.” This concept has really come to the forefront with the Mobile First approach from Luke Wroblewski, who argues that "mobile forces you to focus". Well, I argue that you do not need to be mobile to focus! This concept is just as important on a 27” screen as it is on a 5” screen. As we design InsightIDR, we design the priority, not the layout, by helping our customers focus on the right content. As you can see in the InsightIDR design to the left, the KPIs are placed in order of importance, with date and trending information, allowing our customers to prioritize their next actions as they protect their organizations. This results in a better user experience and time saved for other tasks.

Speed it up

According to Jeffrey Zeldman, the applications we build need to be fast. Very. Fast. Common sense, I hear you say, and I agree completely. But that's no easy thing when you are collecting, analyzing, and sorting the amount of information that InsightIDR captures. Can we sit back and assume that our customers would understand if it takes a few seconds for a page to load? Not at all!
Yesenia Perez-Cruz, design director at Vox Media, suggests that organizations need to plan more strategically to decrease the file size of web application pages and, with it, page load times. We have taken Jeffrey's and Yesenia's message to heart, as we strive to ensure the pages and content within InsightIDR load as quickly as possible, so you can get your job done faster.

Be more compassionate

Being compassionate by standing in the shoes of the people we design for might seem like a no-brainer. After all, the ‘U' stands for ‘User' in my job title ‘UX Designer.' Yet, some designers do not take the time to actually speak with the people they are designing for. But at Rapid7, I speak with customers about their security needs through our customer voice program on a regular basis. The customers who have signed up for the program have a say in the features we design, and they get to see those designs early so they can, in effect, co-design with us by letting us know how to modify the designs to make them more effective. Only then can I and the rest of the UX team at Rapid7 truly design for you. In this respect, as Patty Toland, a regular An Event Apart speaker, says, “Design consistency isn't pixels; it is purpose.”

Wrapping up

At Rapid7, I am always learning about design, about our customers' needs, and about the future of information security. So, if you are in Boston and hear someone on the T softly say, “Create the minimum viable interaction by providing the most valuable piece of information for decision-making as early as possible,” that will probably be me as I go to work. On a more serious note, if you have not done so already, make sure you sign up for our Voice Program to see what's in the works, and have a say in what we do and how we do it.
Here are a few links to that program if you are interested:

Rapid7 Voice: https://www.rapid7.com/about/rapid7-voice/
Rapid7 Voice email: Rapid7Voice [at] rapid7 [dot] com

I look forward to speaking with you in the near future, as we work together to design the next version of InsightIDR! Thanks for reading, and have a wonderful HaXmas!

Kevin Lin, UX Designer II, Rapid7

Image credits:
First image: An Event Apart (©eventifier.com, @heyoka)
Second image: InsightIDR

SIEM Tools Aren't Dead, They're Just Shedding Some Extra Pounds


Security Information and Event Management (SIEM) is security's Schrödinger's cat. While half of today's organizations have purchased SIEM tools, it's unknown if the tech is useful to the security team… or if its heart is even beating or deployed. In response to this pain, people, mostly marketers, love to shout that SIEM is dead, and analysts are proposing new frameworks with SIEM 2.0/3.0, Security Analytics, User & Entity Behavior Analytics, and most recently Security Operations & Analytics Platform Architecture (SOAPA).

However, SIEM solutions have also evolved from clunky beasts to solutions that can provide value without requiring multiple dedicated resources. While some really want SIEM dead, the truth is it still describes the vision we all share: reliably find insights from security data and detect intruders early in the attack chain. What's happened in this battle of survival of the fittest is that certain approaches and models simply weren't working for security teams and the market.

What exactly has SIEM lost in this sweaty regimen of product development exercise? Three key areas have been tightened and toned to make the tech something you actually want to use.

No More Hordes of Alerts without User Behavior Context

User Behavior Analytics. You'll find this phrase at every SIEM vendor's booth, and occasionally in their technology as well. Why? This entire market segment explosion spawned from two major pain points in legacy SIEM tech: (1) too many false-positive, non-contextual alerts, and (2) a failure to detect stealthy, non-malware attacks, such as the use of stolen credentials and lateral movement.

By tying every action on the network to the users and assets behind them, security teams spend less time retracing user activity to validate and triage alerts, and can detect stealthy, malicious behavior earlier in the attack chain.
Applying UBA to SIEM data results in higher quality alerts and faster investigations, as teams spend less time retracing IPs to users and running tedious log searches.

Detections now Cover Endpoints Without Heavy Lifting

Endpoint Detection and Response. This is another super-hot technology of 2016, and while not every breach originates from the endpoint, endpoints are often an attacker's starting point and provide crucial information during investigations. There are plenty of notable behaviors that, if detected, are reliable signs of “investigate-me” behavior. A couple of examples:

- Log Deletion
- First Time Admin Action (or detection of privilege exploit)
- Lateral Movement

Any SIEM that doesn't offer built-in endpoint detection and visibility, or at the very least, automated ways to consume endpoint data (and not just anti-virus scans!), leaves gaps in coverage across the attack chain. Without endpoint data, it's very challenging to have visibility into traveling and remote workers or detect an attacker before critical assets are breached. It can also complicate and slow incident investigations, as endpoint data is critical for a complete story. The below highlights a standard investigation workflow along with the relevant data sources to consult at each step.

Incident investigations are hard. They require both incident response expertise (how many breaches have you been a part of?) and data manipulation skills to get the information you need. If you can't search for endpoint data from within your SIEM, that slows down the process and may force you to physically access the endpoint to dig deeper. Leading SIEMs today offer a combination of agents or an endpoint scan to ingest this data, detect local activity, and have it available for investigations.
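Detections like the examples above boil down to simple rules over endpoint event streams. A rough sketch in Python (the event records, field names, and process lists here are illustrative assumptions, not any vendor's actual schema):

```python
# Sketch: flag endpoint events matching reliable "investigate-me" behaviors.
# Event records, field names, and process lists are illustrative assumptions.

def detect_endpoint_behaviors(events, known_admins_by_asset):
    """Yield (rule_name, event) pairs for suspicious endpoint activity."""
    for e in events:
        # Log deletion: Windows event ID 1102 records the security
        # audit log being cleared -- rarely legitimate.
        if e.get("event_id") == 1102:
            yield ("log_deletion", e)
        # First-time admin action: an account running an admin tool on an
        # asset where it is not a known administrator.
        if e.get("process") in {"net.exe", "psexec.exe"} and \
                e["user"] not in known_admins_by_asset.get(e["asset"], set()):
            yield ("first_time_admin_action", e)

events = [
    {"event_id": 1102, "user": "svc_web", "asset": "web01", "process": None},
    {"event_id": 4688, "user": "bob", "asset": "hr02", "process": "psexec.exe"},
]
admins = {"hr02": {"alice"}}
alerts = [rule for rule, _ in detect_endpoint_behaviors(events, admins)]
```

In practice rules like these would run against a baseline built from weeks of history, so "first time" genuinely means the first occurrence for that user on that asset.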
We do all of this and supplement our endpoint detections with Deception Technology, which includes decoy Honey Credentials that are automatically injected into memory to better detect pass-the-hash and credential attacks.

Drop the Fear, Uncertainty, and Doubt About Data Consumption

There are a lot of things that excite me, for example, the technological singularity, autonomous driving, loading my mind onto a Westworld host. You know what isn't part of that vision? Missing and incomplete data. Today's SIEM solutions derive their value from centralizing and analyzing everything. If customers need to weigh the value of inputting one data set against another, that results in a fractured, frustrating experience. Fortunately, this too is now a problem of the past.

There are a couple of factors behind these winds of change. Storage capacity continues to expand at close to a Moore's Law pace, which is fantastic, as our log storage needs are heavier than ever before. Vendors now offer mature cloud architectures that can securely store and retain log data to meet any compliance need, along with faster search and correlation than most on-premises deployments can dream about. The final shift, and one that's currently underway, is in vendor pricing. Today's models revolve around events per second and data volume indexed. But what's the point of considering endpoint, cloud, and log data if the inevitable data volume balloon means the org can't afford to do so?

We've already tackled this challenge, and customers have been pleased with it. Over the next few years, new and legacy vendors alike will shed existing models to reflect the demand for sensible data pricing that finally arms incident investigators with the data and context they need. There's a lot of pain with existing SIEM technology – we've experienced it ourselves, heard it from customers, and heard it from every analyst firm we've spoken with.
However, that doesn't mean the goal isn't worthy, or that the technology has continually failed to adapt. Can you think of other ways SIEM vendors have collectively changed their approach over the years? Share it in the comments! If you're struggling with an existing deployment and are looking to augment or replace it, check out our webcast, “Demanding More From Your SIEM,” for recommendations and our approach to the SIEM you've always wanted.

Web Shells 101: Detection and Prevention


2016 has been a big year for information security, as we've seen attacks by both cybercriminals and state actors increase in size and public awareness, and the Internet of Things come into its own as a field of study. But today we'd like to talk about a very old (but no less dangerous) type of attacker tool – web shells – and new techniques Rapid7 is developing for identifying them quickly and accurately.

What is a Web Shell?

Web shells are web-based applications that provide a threat actor with the ability to interact with a system – anything from file access and upload to the ability to execute arbitrary code on the exploited server. They're written in a variety of languages, including PHP, ASP, Java, and JavaScript, although the most common is PHP (since the majority of systems support PHP). Once they're in your system, the threat actor can use them to steal data or credentials, gain access to more important servers in the network, or as a conduit to upload more dangerous and extensive malware.

Why should I care?

Because web shells can hit pretty much anyone. They most commonly show up on small business web presences, particularly WordPress-powered sites, as WordPress plugins and themes are a favored target for web shell authors (since vulnerabilities show up in them all the time). WordPress isn't, of course, alone – virtually all web applications are released with vulnerabilities from time to time. So if you have a website that accepts and stores any kind of user input, from forum posts to avatar images, now is a fine time to learn about web shells, because you could very well be vulnerable to them.

How Web Shells Work and How They're Used

The first step with a web shell is uploading it to a server, from which the attacker can then access it.
This “installation” can happen in several ways, but the most common techniques involve exploiting a vulnerability in the server's software, getting access to an administrator portal, or taking advantage of an improperly configured host. As an example, Rapid7's Incident Response Team has dealt with several engagements where the attackers took advantage of a vulnerability in a third-party plugin used by a customer's CMS, enabling them to upload a simple PHP web shell.

Once a web shell is uploaded, it's used to exploit the system. What this looks like differs from actor to actor, and from web shell to web shell, because shells can come with a variety of capabilities. Some are very simple: they open a connection to the outside world, allowing an actor to drop in more precise or malicious code, and then execute whatever they receive. Others are more complex and come with database or file browsers, letting the attacker rifle through your code and data from thousands of miles away. Whatever the design, web shells can be both extremely dangerous and common – US-CERT has identified them as a regularly used tool of both cyber-criminals and Advanced Persistent Threats (APTs). If they're not detected and eliminated, they can provide an attacker with not only a solid, persistent backdoor into your environment but potentially root access, depending on what they compromise.

Web Shell Detection

Web shells aren't new, and people have spent a lot of time working to detect and halt them. Once the breach of a system is discovered, it's fairly straightforward (although time-consuming) to just go through the server looking at the upload and modification dates of files relative to the discovery date, and manually check suspicious-looking uploads to see if they're the source of the problem. But what about detecting web shells before they're used to cause harm? There are a couple of ways of doing it.
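That timeline review is easy to partially automate. A minimal sketch, assuming you can enumerate files and their modification times under the web root (the extension list and 30-day window are arbitrary choices for the example):

```python
import os

def scripts_modified_near(file_mtimes, discovered_at, window_days=30,
                          extensions=(".php", ".asp", ".jsp")):
    """file_mtimes: iterable of (path, mtime-in-epoch-seconds) pairs.
    Returns script files modified within `window_days` before the discovery
    timestamp, newest first -- candidates for manual review."""
    cutoff = discovered_at - window_days * 86400
    hits = [(p, m) for p, m in file_mtimes
            if p.lower().endswith(extensions) and cutoff <= m <= discovered_at]
    return sorted(hits, key=lambda h: h[1], reverse=True)

def walk_mtimes(web_root):
    """Yield (path, mtime) for every file under the web root."""
    for dirpath, _, names in os.walk(web_root):
        for name in names:
            path = os.path.join(dirpath, name)
            yield path, os.path.getmtime(path)
```

For example, `scripts_modified_near(walk_mtimes("/var/www"), time.time())` would surface recently changed PHP files for review; keep in mind attackers sometimes backdate file timestamps, so this is a triage aid, not proof.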
One approach is to have an automated system look at the contents of newly uploaded or changed files and see if they match a known web shell, just as antivirus software does with other forms of malware. This works well if an attacker is using a known web shell, but quickly falls apart when confronted with custom code. Another technique is to use pattern matching to look for code fragments (down to the level of individual function calls) that are commonly malicious, such as calls out to the system to manipulate files or open connections. The problem there is that web shell authors are fully aware of this technique, and deliberately write their code in a very opaque and confusing way that makes pattern matching extraordinarily difficult to do with any real accuracy.

A Better Way

If we can detect web shells, we can stop them, and if we can stop them, we can protect our customers – but as you see, all the existing approaches have some pretty severe drawbacks, meaning they miss a lot. Rapid7 Labs has been working on a system that uses data science to classify web shell threats based on static and dynamic analysis of PHP files. In a static analysis context, our classifier looks for both dangerous-looking function calls and file signatures, plus coding techniques that developers simply wouldn't use if they were writing legitimate, production-ready code – things that only appear when the developer is trying to hide their purpose. In a dynamic analysis context, the potentially malicious file is executed on a monitored, standalone system so our classifier can see what it does. The results from both these methods are then fed into a machine learning model, which predicts whether the file is malicious or not. The accuracy rate has been extremely promising, with the system detecting 99% of the hundreds of web shells we've tested it on, including custom, single-use shells, with only a 1% false-positive rate.
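To illustrate the static-analysis half of such a system, here is a toy feature extractor. The patterns and feature names below are invented for the example; they are not Rapid7's actual classifier features:

```python
import re

# Function calls and obfuscation tricks that rarely appear together in
# legitimate production PHP -- each match count becomes a classifier feature.
SUSPICIOUS_PATTERNS = {
    "dynamic_eval":  re.compile(r"\beval\s*\(", re.I),
    "command_exec":  re.compile(r"\b(system|exec|shell_exec|passthru)\s*\(", re.I),
    "decode_chain":  re.compile(r"base64_decode\s*\(\s*(str_rot13|gzinflate)", re.I),
    "variable_func": re.compile(r"\$\w+\s*\(\s*\$_(GET|POST|REQUEST)", re.I),
}

def static_features(php_source):
    """Count pattern hits; a downstream ML model would consume this vector."""
    return {name: len(rx.findall(php_source))
            for name, rx in SUSPICIOUS_PATTERNS.items()}

sample = '<?php eval(base64_decode(str_rot13($_POST["x"]))); ?>'
features = static_features(sample)
```

Each count becomes one dimension of a feature vector; the dynamic-analysis results would contribute further features before the machine learning model scores the file.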
The result is that (to broadly generalize), if our AR team is faced with 1,000 files, they only need to manually check 10. (For those ML-nerds out there, yes we've checked for over-fitting.) In the future we hope to use the system to pre-emptively detect web shells, identifying and isolating them before they exploit the system. Until that point, it's being used by our managed detection and response team, letting them identify the source of customer breaches far more quickly than teams relying solely on traditional, arduous and error-prone manual methods. Oliver Keyes, Senior Data Scientist Tim Stiller, Senior Software Engineer

Introspective Intelligence: What Makes Your Network Tick, What Makes It Sick?


In my last blog post, we reviewed the most prevalent detection strategies and how we can best implement them. This post dives into understanding how to catch what our other systems missed, using attacker behavior analytics and anomaly detection to improve detection.

Understand Your Adversary – Attack Methodology Detection

Contextual intelligence feeds introduce higher fidelity and the details needed to gain insight into patterns of attacker behavior. Attackers frequently rotate tools and infrastructure to avoid detection, but when it comes to tactics and techniques, they often stick with what works. The methods they use to deliver malware, perform reconnaissance, and move laterally in a network do not change significantly.

A thorough understanding of attacker methodology leads to the creation and refinement of methodology-based detection techniques. Knowledge of applications targeted by attackers enables more focused monitoring of those applications for suspicious behaviors, thus optimizing the effectiveness and efficiency of an organization's detection program. An example of application anomaly-based detection is web shells on IIS systems: it is anomalous for IIS to spawn a command prompt, and the execution of “whoami.exe” and “net.exe” indicates likely reconnaissance activity. By understanding the methods employed by attackers, we generate detections that will identify activity without relying on static indicators such as hashes or IPs. In this case we are using the low likelihood of IIS running CMD, and the rare occurrence of CMD executing ‘whoami' and ‘net [command]', to drive our detection of potential attacker activity.

Additionally, attackers must reconnoiter networks both internally and externally to identify target systems and users.
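The IIS example reduces to a parent/child process relationship rule. A minimal sketch, with assumed log field names:

```python
# Sketch: methodology-based rules over process-creation logs.
# Field names and process lists are illustrative assumptions.
WEB_SERVER_PROCS = {"w3wp.exe"}            # IIS worker process
SHELLS = {"cmd.exe", "powershell.exe"}
RECON_COMMANDS = {"whoami.exe", "net.exe"}

def methodology_alerts(process_events):
    """Yield (rule_name, event) when a process tree matches attacker methodology."""
    for e in process_events:
        parent, child = e["parent"].lower(), e["process"].lower()
        # A web server should almost never spawn an interactive shell.
        if parent in WEB_SERVER_PROCS and child in SHELLS:
            yield ("webserver_spawned_shell", e)
        # Shells running recon commands suggest hands-on-keyboard activity.
        if parent in SHELLS and child in RECON_COMMANDS:
            yield ("shell_ran_recon", e)

process_events = [
    {"parent": "w3wp.exe", "process": "cmd.exe", "asset": "iis01"},
    {"parent": "cmd.exe", "process": "whoami.exe", "asset": "iis01"},
]
alerts = [name for name, _ in methodology_alerts(process_events)]
```

Note that neither rule references a hash, domain, or IP: the detection survives even when the attacker rotates tools and infrastructure.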
Reviewing logs for unusual user-to-system authentication events and suspicious processes (for example, ‘dsquery', ‘net dom', ‘ping –n 1', and ‘whoami'), especially over abbreviated time periods, can provide investigative leads to identify internal reconnaissance.

Even without a constant stream of real-time data from endpoints, we can model behavior and identify anomalies based upon the frequency of an item across a group of endpoints. By gathering data on persistent executables across a network, for example, we can perform frequency analysis and identify rare or unique entries for further analysis. Simple techniques like frequency analysis will often reveal investigative leads from prior (or even current) compromises, and can be applied to all manner of network and endpoint data.

Expanding beyond a reliance primarily on traditional static indicator-based detection and adding a focus on attacker behavior increases the likelihood of detecting previously unknown malware and skilled attackers. A culmination of multiple detection strategies is necessary for full coverage and visibility: proactive detection technology successfully blocks known-bad activity, contextual intelligence assists in identifying less common malware and attackers, and methodology-based evidence gathered from thorough observation provides insight into potential indicators of compromise.

Use the Knowledge You Have

IT and security staff know their organization's users and systems better than anyone else. They work diligently on their networks every day, ensuring uptime of critical components, enablement of user actions, and expedient resolution of problems. Their inherent knowledge of the environment provides incredible depth of detection capabilities. In fact, IT support staff are frequently the first to know something is amiss, regardless of whether the problem is caused by attacker activity.
Compromised systems may often exhibit unusual symptoms and become problematic for users, who report the problems to their IT support staff.

Environment-specific threat detection is Rapid7's specialty. Our InsightIDR platform continuously monitors user activity, authentication patterns, and process activity to spot suspicious behavior. By tracking user authentication history, we can identify when a user authenticates to a new system, over a new protocol, and from a new IP. By tracking the processes executed on each system, we can identify if a user is running applications that deviate from their normal patterns or if they are running suspicious commands (based on our knowledge of attacker methodology). Combining user authentication with process execution history ensures that even if an attacker accesses a legitimate account, his tools and reconnaissance techniques will give him away. Lastly, by combining this data with threat intelligence from previous findings, industry feeds, and attacker profiles, we ensure that we prioritize high-fidelity investigative leads and reduce overall detection time, enabling faster and more effective response.

Let's walk through an example: Bob's account is compromised internally. After compromising the system, an attacker would execute reconnaissance commands that are historically dissimilar to Bob's normal activity. Bob does not typically run ‘whoami' on the command line or execute psexec, nor has Bob ever executed a PowerShell command – those behaviors are investigative elements that individually are not significant enough to alert on, but in aggregate present a trail of suspicious behavior that warrants an investigation.

Knowledge of your environment and what is statistically ‘normal' per user and per system enables a ‘signature-less' addition to your detection strategy.
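The "new system, new protocol, new IP" tracking described above is essentially set membership against per-user history. A simplified sketch (the record shape is an assumption):

```python
def first_time_auth_events(history, new_events):
    """history: user -> set of (asset, protocol) pairs already observed.
    Yields events where a user touches an asset/protocol combination for
    the first time, then folds the new observation into the baseline."""
    for e in new_events:
        key = (e["asset"], e["protocol"])
        seen = history.setdefault(e["user"], set())
        if key not in seen:
            yield e   # novel behavior: an investigative lead, not yet an alert
        seen.add(key)

history = {"bob": {("wks-bob", "kerberos")}}
events = [
    {"user": "bob", "asset": "wks-bob", "protocol": "kerberos"},   # baseline
    {"user": "bob", "asset": "fileserver01", "protocol": "ntlm"},  # novel
]
novel = list(first_time_auth_events(history, events))
```

Individually these leads are weak signals; as the Bob example shows, it is the aggregate trail of novel behaviors that warrants an investigation.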
Regardless of the noisy and easily bypassed malware hashes, domains, IPs, IDS alerts, firewall blocks, and proxy activity your traditional detection technology provides, you can identify attacker activity and ensure that you are not missing events due to stale or inaccurate intel.

Once you have identified an attack based on user and system anomaly detection, extract useful indicator data from your investigation and build your own ‘known-bad' threat feed. By adding internal indicators to your traditional detection systems, you gain greater intel context and can simplify the detection of attacker activity throughout your environment. Properly combining detection strategies dramatically increases the likelihood of attack detection and provides you with the context you need to differentiate between ‘weird', ‘bad', and ‘there goes my weekend'.
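As a closing illustration, the frequency analysis of persistent executables described earlier fits in a few lines (the inventory format is an assumption):

```python
from collections import Counter

def rare_persistent_executables(inventory, max_hosts=2):
    """inventory: hostname -> set of persistent executable paths.
    Returns executables present on at most `max_hosts` hosts -- the rare
    or unique entries worth a closer look."""
    counts = Counter()
    for host, executables in inventory.items():
        counts.update(set(executables))   # count each host at most once
    return {exe: n for exe, n in counts.items() if n <= max_hosts}

inventory = {
    "wks01": {r"c:\windows\system32\svchost.exe", r"c:\programdata\update.exe"},
    "wks02": {r"c:\windows\system32\svchost.exe"},
    "wks03": {r"c:\windows\system32\svchost.exe"},
}
rare = rare_persistent_executables(inventory, max_hosts=1)
```

Here the ubiquitous svchost.exe drops out, while the executable persisting on only one workstation surfaces as an investigative lead.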

Introspective Intelligence: Understanding Detection Techniques


To provide insight into the methods devised by Rapid7, we'll need to revisit the detection methods implemented across InfoSec products and services and how we apply data differently. Rapid7 gathers volumes of threat intelligence on a daily basis – from new penetration testing tools, tactics, and procedures in Metasploit, to vulnerability detections in Nexpose, to user behavior anomalies in InsightIDR. By continuously generating, refining, and applying threat intelligence, we enable more robust detection strategies to identify adversaries wherever they may hide.

Slicing Through the Noise

There are many possible combinations of detection strategies deployed in enterprise environments, with varying levels of efficacy. At a minimum, most organizations have deployed Anti-Virus (AV) software and firewalls, and mature organizations may have web proxies, email scanners, and intrusion detection systems (IDS). These "traditional" detection technologies are suitable for blocking "known-bad" activity, but they provide little insight into the origin, purpose, and intent of detections. Additionally, many of these techniques falter against uncommon threats due to a lack of applicable rulesets or detection context.

Consider an AV detection for Mimikatz, a well-known credential dumper. Mimikatz may be detected by AV; however, standard AV detection alerts do not provide the background information required to accurately understand or prioritize the threat. The critical context in this scenario is that the presence of Mimikatz typically indicates an active, human attacker rather than an automated commodity malware infection. Additionally, a Mimikatz detection indicates that an attacker has already circumvented perimeter defenses, has the administrator rights required to dump credentials, and is moving laterally through your environment.
Without a thorough understanding or explanation of the samples your detection technologies identify as malicious, you do not have the information required to understand the severity of detections. Responders who are not armed with appropriate context cannot differentiate or prioritize low, medium, and high severity events, and they often resort to chasing commodity malware and low severity alerts.

Adding Context – Intelligence Implementation

Many organizations integrate ‘threat feeds' into their existing technology to compensate for the lack of context and to increase detections for less common threats. Threat feeds come in many forms, from open source community-driven lists to paid private feeds. The effectiveness of these feeds strongly depends on a number of factors:

- Intel type (hash, IP, domain, contextual, strategic)
- Implementation
- Indicator age
- Intelligence source

When consuming intelligence feeds, context remains the critical element – feeds containing only hashes, domains, and IPs are the least effective form of threat intelligence due to the ease with which an attacker can modify infrastructure and tools. It is important to understand why a particular indicator has been associated with attacker activity, how old the intelligence is (as domains, IPs, and tools are often rotated by attackers), and how widely the intelligence has been disseminated (does the attacker know that we know?). We routinely work in environments wherein the customers have enabled every open source threat intel feed and every IDS rule available in their detection products, and they chase thousands of false positives daily. Effective threat intelligence application requires diligence, review, and active research into the origin, age, and type of indicators coming in through threat feeds. Contextual intelligence feeds provide customers not only with indicators of compromise but also a thorough explanation of the attacker's use of infrastructure, tools, and particular methodologies.
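Indicator age in particular is easy to act on programmatically. A sketch of aging out stale indicators, with made-up retention windows that reflect how quickly attackers rotate each indicator type:

```python
from datetime import datetime, timedelta

# Illustrative retention windows: raw network indicators go stale quickly
# as attackers rotate infrastructure; file hashes stay meaningful longer.
MAX_AGE = {
    "ip": timedelta(days=30),
    "domain": timedelta(days=90),
    "hash": timedelta(days=365),
}

def fresh_indicators(feed, now):
    """Keep only indicators younger than the window for their type."""
    default = timedelta(days=90)
    return [i for i in feed
            if now - i["first_seen"] <= MAX_AGE.get(i["type"], default)]

feed = [
    {"type": "ip", "value": "203.0.113.9", "first_seen": datetime(2016, 1, 1)},
    {"type": "hash", "value": "d41d8cd9...", "first_seen": datetime(2016, 11, 1)},
]
active = fresh_indicators(feed, now=datetime(2016, 12, 1))
```

The right windows are a judgment call per source and per indicator type; the point is that a feed without any aging policy steadily accumulates false positives.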
Feeds containing contextual information are far more effective for successful threat detection. For example:

MALWARE DETECTED: FUZZY KOALA BACKDOOR
The ‘Fuzzy Koala Backdoor' is a fully-functional remote access utility that communicates to legitimate, compromised servers over DNS using a custom binary protocol. This backdoor provides file upload, file download, command execution, and VNC-type capabilities. The ‘Fuzzy Koala Backdoor' is typically delivered via spearphishing emails containing Office documents with malicious macros, and is sent via the ‘EvilSpam' mail utility.
Files Created:
%systemdrive%\programdata\iexplore.exe
%systemdrive%\programdata\[a-z]{6}%UUID%.dll
Persistence:
HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Shell=explorer.exe,%systemdrive%\programdata\iexplore.exe
Network Indicators:
Domains: SuperCoolEngineeringConference.com

With that context, a successful detection team can:

- Look for other anomalous DNS traffic matching the attacker's protocol to catch additional domains
- Look for unusual emails containing documents with macros, including header data provided by the attacker's mail client
- Identify systems on which Office applications spawned child processes
- Identify file-based and registry-based indicators of compromise
- Monitor for traffic to the legitimate compromised domain

Similarly, a successful incident detection and response team will build additional strategies to identify underlying attacker techniques and cycle out stale static indicators to minimize false positives. Traditional detection mechanisms, including contextual intelligence feeds, provide security teams the ability to identify and respond to threats in the wild. In our next blog post we'll discuss approaches for finding previously-unseen malware and attacker activity using hunting and anomaly detection.
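File indicators like the ones in this write-up are patterns rather than exact paths, so hunting for them across an endpoint file inventory amounts to a regex match. A sketch (regexes adapted from the fictional example above, with %UUID% expanded to the standard 36-character hyphenated form):

```python
import re

# File indicators from the (fictional) write-up, expressed as regexes.
FILE_INDICATORS = [
    re.compile(r"\\programdata\\iexplore\.exe$", re.I),
    re.compile(r"\\programdata\\[a-z]{6}[0-9a-f-]{36}\.dll$", re.I),
]

def match_file_indicators(file_paths):
    """Return inventory paths matching any file-based indicator."""
    return [p for p in file_paths
            if any(rx.search(p) for rx in FILE_INDICATORS)]

paths = [
    r"c:\programdata\iexplore.exe",
    r"c:\programdata\qwerty1a2b3c4d-1111-2222-3333-444455556666.dll",
    r"c:\program files\internet explorer\iexplore.exe",  # legitimate binary
]
hits = match_file_indicators(paths)
```

Anchoring the match to the full parent directory keeps the legitimate iexplore.exe in Program Files from triggering on the decoy name alone.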

User Behavior Analytics and Privacy: It's All About Respect


When I speak with prospects and customers about incident detection and response (IDR), I'm almost always discussing the technical pros and cons. Companies look to Rapid7 to combine user behavior analytics (UBA) with endpoint detection and log search to spot malicious behavior in their environment. It's an effective approach: an analytics engine that triggers based on known attack methods as well as users straying from their normal behavior results in high fidelity detection. Our conversations center on technical features and objections – how can we detect lateral movement, or what does the endpoint agent do, and how can we manage it? That's the nature of technical sales, I suppose. I'm the sales engineer, and the analysts and engineers that I'm speaking with want to know how our stuff works. The content can be complex at times, but the nature of the conversation is simple.

An important conversation that is not so simple, and that I don't have often enough, is a discussion on privacy and IDR. Privacy is a sensitive subject in general, and over the last 15 years (or more), the security community has drawn battle lines between privacy and security. I'd like to talk about the very real privacy concerns that organizations have when it comes to the data collection and behavioral analysis that is the backbone of any IDR program. Let's start by listing off some of the things that make employers and employees leery about incident detection and response:

- It requires collecting virtually everything about an environment. That means which systems users access and how often, which links they visit, interconnections between different users and systems, where in the world users log in from – and so forth. For certain solutions, this can extend to recording screen actions and messages between employees.
- Behavioral analysis means that something is always “watching,” regardless of the activity.
- A person needs to be able to access this data, and sift through it relatively unrestricted.

I've framed these bullets in an intentionally negative light to emphasize the concerns. In each case, the entity that either creates or owns the data does not have total control or doesn't know what's happening to the data. These are many of the same concerns privacy advocates have when large-scale government data collection and analysis comes up. Disputes regarding the utility of collection and analysis are rare. The focus is on what else could happen with the data, and the host of potential abuses and misuses available. I do not dispute these concerns – but I contend that they are much more easily managed in a private organization. Let's recast the bullets above into questions an organization needs to answer.

Which parts of the organization will have access to this system?

Consider first the collection of data from across an enterprise. For an effective IDR program, we want to pull authentication logs (centrally and from endpoints – don't forget those local users!), DNS logs, DHCP logs, firewall logs, VPN, proxy, and on and on. We use this information to profile “normal” for different users and assets, and then call out the aberrations. If I log into my workstation at 8:05 AM each morning and immediately jump over to ESPN to check on my fantasy baseball team (all strictly hypothetical, of course), we'll be able to see that in the data we're collecting. It's easy to see how this makes employees uneasy. Security can see everything we're doing, and that's none of their business! I agree with this sentiment. However, taking a magnifying glass to typical user behavior, such as websites visited or messages sent, isn't the most useful data for the security team. It might be interesting to a human resources department, but this is where checks and balances need to start.
An information security team looking to bring in real IDR capabilities needs to take a long, hard look at its internal policies and decide what to do with information on user behavior. If I were running a program, I would make a big point of keeping this data restricted to security and out of the hands of HR. It's not personal, HR – there's just no benefit to allowing witch hunts to happen. It'll distract from the real job of security and alienate employees. One of the best alerting mechanisms in every organization isn't technology, it's the employees. If they think that every time they report something it's going to put a magnifying glass on every inane action they take on their computer, they're likely to stop speaking up when weird stuff happens. Security gets worse when we start using data collected for IDR purposes for non-IDR use cases.

Who specifically will have access, to what information, and how will that be controlled?

What about people needing unfettered access to all of this data? For starters, it's absolutely true: when Bad Things™ are detected, at some point a human is going to have to get into the data, confirm it, and then start to look at more data to begin the response. Consider the privacy implications, though: what is to stop a person from arbitrarily looking at whatever they want, whenever they want, from this system?

The truth is organizations deal with this sort of thing every day anyway. Controlling access to data is a core function of many security teams already, and it's not technology that makes these decisions. Security teams, in concert with the many and varied business units they serve, need to decide who has access to all of this data and, more importantly, regularly re-evaluate that level of access. This is a great place for a risk or privacy officer to step in and act as a check as well. I would not treat access into this system any differently than other systems: build policy, follow it, and amend it regularly.
Back to what I would do if I were running this program: I would borrow pretty heavily from successful vulnerability management exception handling processes. Let's say there's a vulnerability in your environment that you can't remediate, because a business-critical system relies on it. In this case, we would put an exception in for the vulnerability: we justify the exception with a reason, place a compensating control around it, get management sign-off, and tag an expiration date so it isn't ignored forever. Treat access into this system as an "exception," documenting who is getting access and why, and define a period in which access will be either re-evaluated or expire, forcing the conversation again. An authority outside of security, such as a risk or privacy officer, should sign off on the process and individual access.

Under what circumstances will this system be accessed, and what are the consequences for abusing that access?

There need to be well-defined consequences for those who violate the rules and policies set forth around a good incident detection and response system. In the same way that security shouldn't allow HR to perform witch hunts unrelated to security, the security team shouldn't go on fishing trips (only phishing trips and hunts). Trawls through data need to be justified, for the same reasons as in the HR case: alienating our users hurts everyone in the long run. Reasonable people are going to disagree over what is acceptable and what is not, and may even disagree with themselves. One Rapid7 customer I spoke with talked about using an analytics tool to track down a relatively basic financial scam going on in their email system. They were clearly justified in both extracting the data and further investigating that user's activity inside the company. "In an enterprise," they said, "I think there should be no reasonable expectation of privacy – so any privacy granted is a gift. Govern yourself accordingly." Of course, not every organization will have this attitude.
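The exception-style access grant described above – a documented justification, outside sign-off, and a hard expiration that forces re-evaluation – could be recorded as simply as this. The `AccessException` class and all of its fields are hypothetical, a minimal sketch of the process rather than any product's actual data model.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AccessException:
    """One documented grant of access into the IDR data store."""
    grantee: str
    justification: str
    approved_by: str          # an authority outside security, e.g. a privacy officer
    granted_on: date
    review_after_days: int = 90

    def expires_on(self) -> date:
        return self.granted_on + timedelta(days=self.review_after_days)

    def is_active(self, today: date) -> bool:
        # An expired exception isn't silently renewed -- it forces the
        # access conversation to happen again.
        return today < self.expires_on()

exc = AccessException(
    grantee="jsmith",
    justification="Tier-2 incident responder; needs raw log search",
    approved_by="privacy-officer",
    granted_on=date(2016, 10, 1),
)
print(exc.is_active(date(2016, 11, 1)))  # within the 90-day window
print(exc.is_active(date(2017, 2, 1)))   # past expiry: re-evaluate or revoke
```

The point isn't the code – a spreadsheet would do – but that every grant carries a reason, a named approver, and a date on which it stops being valid by default.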
The important thing here is to draw a distinct line for day to day use, and note what constitutes justification for crossing that line. That information should be documented and be made readily available, not just in a policy that employees have to accept but never read. Take the time to have the conversation and engage with users. This is a great way to generate goodwill and hear out common objections before a crisis comes up, rather than in the middle of one or after. Despite the above practitioner's attitude towards privacy in an enterprise, they were torn. “I don't like someone else having the ability to look at what I'm doing, simply because they want to.” If we, the security practitioners, have a problem with this, so do our users. Let's govern ourselves accordingly. Technology based upon data collection and analysis, like user behavior analytics, is powerful and enables security teams to quickly investigate and act on attackers. The security versus privacy battle lines often get drawn here, but that's not a new battle and there are plenty of ways to address concerns without going to war. Restrict the use of tools to security, track and control who has access, and make sure the user population understands the purpose and rules that will govern the technology. A security organization that is transparent in its actions and receptive to feedback will find its work to be much easier.

Warning: This blog post contains multiple hoorays! #sorrynotsorry

Hooray for crystalware! I hit a marketer's milestone on Thursday – my first official award ceremony, courtesy of the folks at Computing Security Awards, which was held at The Cumberland Hotel in London. Staying out late on a school night when there's a 16 month old…

Hooray for crystalware! I hit a marketer's milestone on Thursday – my first official award ceremony, courtesy of the folks at Computing Security Awards, which was held at The Cumberland Hotel in London. Staying out late on a school night when there's a 16-month-old teething toddler in the house definitely took its toll the following morning, but the tiredness was definitely softened by the sweet knowledge that we'd left the award ceremony brandishing some crystalware. In the two categories where Rapid7 solutions were shortlisted as finalists – SME Security Solution of the Year (Nexpose) and Best New Product of the Year (InsightIDR) – we were awarded winner and runner-up respectively. What's particularly cool about the Computing Security Awards is that the majority of awards, including the two we were up for, are voted for by the general public, so receiving these accolades is very special to us. We'd like to say an absolutely massive THANK YOU to everyone who voted for our products – we are truly very grateful for your support.

Hooray for Nexpose!

Nexpose storming to the win in the SME category, a space that isn't always top of mind for some security vendors, really validates for me how well designed and engineered the product is. Our customers come in all shapes and sizes, and the maturity of their vulnerability management programs varies just as much, but Nexpose caters for all. In SMEs the concept of a dedicated security team is certainly less common. More often than not we see that IT teams have security as just one of their many disciplines – so they need a vulnerability management tool which is easy to use, and which allows them to quickly prioritise remediation efforts with live data that's relevant to their environment. Nexpose determines and constantly updates vulnerability risk scoring using RealRisk – scoring vulnerabilities from 1-1000, thus removing the nightmare of having umpteen hundred 'criticals' which are seemingly all equal.
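RealRisk itself is Rapid7's proprietary scoring model, so the snippet below is purely an illustration of the underlying idea: a granular 1-1000 score turns a pile of equally "critical" findings into an ordered remediation queue. The vulnerability records here are made up.

```python
# Three findings that a coarse severity scale would treat as interchangeable.
# A granular risk score (here, an invented 1-1000 value) breaks the tie and
# gives the team an unambiguous order to work in.
vulns = [
    {"id": "CVE-A", "severity": "critical", "risk_score": 912},
    {"id": "CVE-B", "severity": "critical", "risk_score": 623},
    {"id": "CVE-C", "severity": "critical", "risk_score": 884},
]

# Sort descending by risk score to build the remediation queue.
queue = sorted(vulns, key=lambda v: v["risk_score"], reverse=True)
print([v["id"] for v in queue])  # ['CVE-A', 'CVE-C', 'CVE-B']
```

By severity alone all three tie; with a continuous score, the highest-risk item is unambiguously first.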
Liveboards (because dashboards don't actually dash – they should really be called meanderboards) provide admins with real-time data – you know at all times exactly how well you are winning at remediating. If you're reading this blog and you're thinking about implementing a new VM solution, you should download a free trial here and experience it in action for yourself.

Hooray for InsightIDR!

InsightIDR receiving an honourable mention in the Best New Product category makes Sam very happy. This product was frankly one of the main reasons I came to work for Rapid7. When I first heard of it back in March my interest was immediately sparked, as I'd never seen anything quite like it. I've worked in incident response in a previous life, and have seen a vast number of organisations really struggle to find answers when they are in the unfortunate situation of a cyberattack. Some didn't even know they'd been under attack until they received notification from a third party. Incidents would regularly go on for many days, with teams having to work around the clock under great pressure to balance business continuity and incident response – the juggling act from hell. More often than not, investigations and root cause analysis reports would take months and months, and would frequently be lacking in detail. If you can't see what's happening, you can't properly respond, and you have pretty much zero chance of taking away any solid learnings from the event. InsightIDR solves these problems by combining SIEM, EDR and UBA capabilities, which means it detects attacks early in the attack chain, finds compromised credentials, and provides a clear investigation timeline. It's truly an amazing piece of kit, and I know that every incident I ever worked on would undoubtedly have had a better outcome had InsightIDR been in place at the time. Seeing in this case will definitely result in believing – I'd heartily recommend you arrange a demo today.

Hooray for Integrated Solutions!
So before I give a shout out to the incredible people behind these two superb products, there's one further piece of good news: you can now integrate [PDF] them too!

Hooray for Moose!

Our people, our "Moose", who design, build, test, sell, support and of course market (obvs.) these products are all the winners here. I don't use the term 'incredible' lightly, either – I am privileged to have represented them at the awards ceremony. We have an amazing team across the globe, jam-packed with smart, creative, brilliant people. Our solutions are a testament to the work they do; their combined knowledge solves difficult customer problems, providing insight to security professionals all over the world. Congratulations, Moose – you are a bloody awesome bunch! Thanks again to everyone who voted for our solutions, and a big cheers to the folks at Computing Security, who held a brilliant awards bash. We hope to see you again next year!

Looking for a Managed Detection & Response Provider? You'll Need These 38 Evaluation Questions

Managed Detection and Response (MDR) services are still a relatively new concept in the security industry. Just recently, Gartner published their first Market Guide on Managed Detection & Response, which further defines the MDR Services market. MDR Services combines human expertise with tools to provide…

Managed Detection and Response (MDR) services are still a relatively new concept in the security industry. Just recently, Gartner published their first Market Guide on Managed Detection & Response, which further defines the MDR services market. MDR services combine human expertise with tools to provide 24/7 monitoring and alerting, as well as remote incident investigation and response that can help you detect and remediate threats. We like to think of MDR services as an army of cyber guardians. Do you want an army of cyber guardians? Who doesn't? The challenge is finding the right army, one that knows how to protect your unique organization. Unlike some vendor selections, this isn't a matter of just checking the boxes. It's important to ask a lot of questions, collect evidence, and do a thorough evaluation of the talent and technology that will be used to close the gaps in your incident detection and response practice. Don't worry – it can be done!

To help you with your selection, our security experts put together a list of 38 vital questions you should ask each vendor during your search for the perfect partner. These questions cover nine categories that are critical to detecting and responding to threats, including provider expertise, communication processes, how the deployment works, and how the service can be tailored. The list also covers the full attack chain, with specific questions around threat detection, incident response, and remediation and mitigation. I won't go into detail with the full list right here, but I've pulled some of my favorites:

- Does the solution propose to detect known and unknown threats by applying several threat detection methodologies? Which ones?
- Can the solution detect threats across multiple platforms? How?
- Does the solution propose to notify you about attacker campaigns against your industry? Request an example.
- Does the solution include endpoint technology for higher fidelity validation?
- What is the SLA for reporting a threat within your environment?
- Is information provided that is digestible by both executive and technical customer contacts?
- Does it include business-focused remediation recommendations and mitigation techniques?

As you can see, the questions are pretty thorough. However, any potential partner that's worth your time should be able to answer these questions quickly and confidently. Be sure to trust your gut and avoid any answers that seem fishy (or phishy). The extra diligence now will go a long way in your organization's ongoing security health. Are you ready to recruit your army? Check out the full evaluation brief at: https://information.rapid7.com/choosing-a-managed-detection-and-response-provider.html

You can also learn more about Rapid7's MDR service, Analytic Response, at https://www.rapid7.com/services/analytic-response.jsp

UNITED 2016: Want to share your experience?

Key trends. Expert advice. The latest techniques and technology. UNITED 2016 is created from the ground up to provide the insight you need to drive your security program forward, faster. This year, we're also hoping you can provide us with the insight we need to…

Key trends. Expert advice. The latest techniques and technology. UNITED 2016 is created from the ground up to provide the insight you need to drive your security program forward, faster. This year, we're also hoping you can provide us with the insight we need to make our products and services even better. That's why we're running two UX focus groups on November 1, 2016. We'd love to see you there – after all, your feedback is what keeps our solutions ever-evolving.

UX Focus Group: Help us make Nexpose Now even better

Stale results. False alerts. Windows of wait. We heard your issues with traditional scanning and released Nexpose Now to help you resolve them. Now that you've been using it for several months, we'd love to know: how's it going? Actually, we have way more questions than that, but they're all in the service of making sure Nexpose Now is meeting – or exceeding – your needs. And the only person who can tell us that is you! So please join us for this 1.5-hour focus group, where you – along with other Nexpose Now users – can share your list of loves and loathes. It's the perfect opportunity to speak with Rapid7, as well as your peers, about your Nexpose Now experience, so we can help make it even more exceptional.

UX Focus Group: Creating personalized and exceptional experiences

Here at Rapid7, we think we've done some pretty great stuff, but we also know we can do some things even better. Though, frankly, what we think doesn't really matter – as a Rapid7 customer, the only opinion we care about is yours. And we want it! Why? Well, as our favorite customer experience author John A. Goodman put it, "We can solve only the problems we know about." So join this 1.5-hour focus group and let us know: from the first time you heard about our solutions to the last time you used them, what's worked well and what could work better?
Your participation will really help us to understand the experience from your perspective, and how we can further personalize and improve that experience moving forward.

Want in?

Saving a seat in our focus groups is easy. If you haven't yet registered for UNITED, you can register for a UX session while registering for the conference. If you have already registered for UNITED, just head back to the conference registration page, enter the email address you used to register – along with your confirmation number – and tack on the session that makes sense for you. Space is limited, so act soon! We are looking forward to seeing you!

Ger Joyce
Senior UX Researcher, Rapid7

Featured Research

National Exposure Index 2017

The National Exposure Index is an exploration of data derived from Project Sonar, Rapid7's security research project that gains insights into global exposure to common vulnerabilities through internet-wide surveys.



Make Your SIEM Project a Success with Rapid7

In this toolkit, get access to Gartner's report “Overcoming Common Causes for SIEM Solution Deployment Failures,” which details why organizations are struggling to unify their data and find answers from it. Also get the Rapid7 companion guide with helpful recommendations on approaching your SIEM needs.



Security Nation

Security Nation is a podcast dedicated to covering all things infosec – from what's making headlines to practical tips for organizations looking to improve their own security programs. Host Kyle Flaherty has been knee-deep in the security sector for nearly two decades. At Rapid7 he leads a solutions-focused team with the mission of helping security professionals do their jobs.
