Rapid7 Blog

Incident Response  

The Legal Perspective of a Data Breach

The following is a guest post by Christopher Hart, an attorney at Foley Hoag and a member of Foley Hoag's cybersecurity incident response team. This is not meant to constitute legal advice; instead, Chris offers helpful guidance for building an incident preparation and breach response framework in your own organization.

A data breach is a business crisis that requires both a quick and a careful response. From my perspective as a lawyer, I want to provide the best advice and assistance I possibly can to help minimize the costs (and stress) that arise from a security incident. When I get a call from someone saying they think they've had a breach, the first thing I'm often asked is, "What do I do?" My response is usually something like, "Investigate." The point is that before the legal questions can be answered and the legal response can be crafted, the scope of the incident needs to be understood as fully as possible.

I typically think of data breaches as having three parts: planning, managing, and responding.

Planning is about policy-making and incident preparation. Sometimes, the calls I get when there is a data breach involve conversations I'm having for the first time; that is, the client has not thought ahead of time about what would happen in a breach situation and how she might need to respond. But sometimes they come from clients with whom I have already worked to develop an incident response plan. To plan effectively for a breach, think about the following questions: What do you need to do to minimize the possibility of a breach? What would you need to do if and when a breach occurs? Developing a response plan allows you to identify the members of a crisis management team (your forensic consultant, your legal counsel, your public relations expert) and create a system to take stock of your data management. I can't emphasize enough how important this stage is. Often, clients still think of data breaches as technical IT issues.
But the trend I am seeing now, and the advice I often give, is to think of data security as a risk management issue. That means not confining the question of data security to the tech staff, but having key players throughout the organization weigh in, from the boardroom on down. Thinking about data security as a form of managing risk is a powerful way of preparing for and mitigating the worst-case scenario.

Managing involves investigating the breach, patching it and restoring system security, notifying affected individuals, notifying law enforcement authorities as necessary and appropriate, and taking whatever other steps might be necessary to protect anyone affected. A good plan will lead to better management. When people call me (or anyone on my firm's Cybersecurity Incident Response Team, a group of lawyers at Foley Hoag who specialize in data breach response) about data breaches, they are often calling about how to manage this step. But this is only one part of a much broader and deeper picture of data breach response.

Responding can involve investigation and litigation. If you've acted reasonably and used best practices to minimize the possibility of a breach; if you've quickly and reasonably complied with your legal obligations; and if you've done all you can to protect consumers, then not only have you minimized the damage from a breach (which is good for your company and for the individuals affected), but you've also minimized your risks in possible litigation. In any event, this category involves responding to inquiries and investigation demands from state and federal authorities, responding to complaints from individuals and third parties, and generally engaging in litigation until the disputes have been resolved. This can be a frustratingly time-consuming and expensive process.

This should give you a good overall picture of how I, or any lawyer, thinks about data security incidents.
I hope it helps give you a framework for thinking about data security in your own organization. Need assistance? Check out Rapid7's incident response services if you'd like help developing or implementing an incident response plan at your organization.

Running an Effective Incident Response Tabletop Exercise

Are you ready for an incident? Are you confident that your team knows the procedures, and that the procedures are actually useful? An incident response tabletop exercise is an excellent way to answer these questions. Below, I've outlined some steps to help ensure success for your scenario-based threat simulations.

First, identify your audience. This will help inform which type of exercise you'll want to run. Will it be an executive exercise or technical in nature? It does not make sense to invite your entire C-suite to a technical exercise, just as it would not go over well to have your technical incident responders drive an exercise that focuses on executive oversight and compliance. This does not mean there cannot be overlap (some exercises can combine both executive and technical aspects), but the exercise must be managed closely to ensure it's a good use of time for everyone. You can also involve counsel at this point. Legal counsel provides invaluable guidance and advice for navigating an increasingly complex regulatory environment.

Now that your scope and audience have been set, it is time to define your scenario. This is where many exercises go off the rails. You must set a realistic scenario that truly exercises your organization. Remember, this is a time when you will be pulling together many people who have cleared off their schedules for a few hours. Make it worth their time. Use the maturity of your organization's incident response (IR) capabilities and the threats to your business to help guide the selection of a scenario. For instance, a defense contractor will not have much need to practice a case of adware infection on a handful of machines, and a restaurant will not greatly benefit from preparing for a nation-state threat. You have to find the sweet spot to ensure a successful exercise. It should not be out of your team's reach, yet it also shouldn't be a softball.
And if you intend to conduct multiple exercises over time (which we highly recommend), you will want to keep the audience engaged and ensure they do not dread the effort.

With the audience set and the scenario defined, you can move into scripting the exercise itself. While it is good to set an outline for the time you have everyone together, leave enough flexibility to improvise when needed. This phase of your planning should not involve every potential exercise participant. Limit it to a handful of trusted agents. This is not a case in which more is better. Having all participants help write the test only ensures that the results will be artificially inflated and the assessment will be inaccurate. Unwavering candor is necessary to help the organization truly know where it stands in its preparedness. You would much rather discover deficiencies during practice than during a live event.

Now that you have fully prepared, the steps that remain are executing the exercise and reporting the results. You should not be afraid to call out areas for improvement in your program. Narrow your assessment down to specific facets of incident response; we like to look at clients' incident response plans, their adherence to those plans, coordination among IR teams, communications (internal and external), and technical analysis. As mentioned before, it is helpful to go over the results with your legal counsel and seek advice on how to proceed with improvements. At the end of the day, you may not be able to address every finding. As with any aspect of security, your decisions on what to address and how to go about it should be risk-based.

We wish you luck in assessing your program, and our experts are happy to help when needed. You can learn more about the role tabletop exercises play in incident response by watching this Whiteboard Wednesday, and be sure to keep an eye out for more posts around the role legal counsel plays in IR.
If you are interested in partnering with Rapid7 to help you develop a robust incident response plan at your organization, check out our incident response services.

12 Days of HaXmas: Designing Information Security Applications Your Way

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 days of blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the "gifts" we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Are you a busy information security professional who prefers bloated web applications, fancy interactions, unnecessary visuals, and overloaded screens that are difficult to make sense of? No... I didn't think so! As a designer, I cringe when I see that sort of stuff, and it's something we avoid at all costs at Rapid7. You don't even have to be a designer to dislike it. My mantra mirrors that of Derek Featherstone, who said, "Create the minimum viable interaction by providing the most valuable piece of information for decision-making as early as possible." And focusing on good design is the gift I bring to you this HaXmas!

To bring you this gift, I am always learning about new ways to solve the problems that you and your teams face on a day-to-day basis. That learning comes from many sources, including our customers, books, webinars, blog posts, and events. One notable event this year was the aptly named An Event Apart, held in Boston. An event what? An Event Apart is a tech conference for designers and developers to learn, and to be inspired by, the latest design trends and coding techniques that improve the way we deliver applications. While other conferences tend to focus only on design, this conference does much more by bringing a variety of topics under one umbrella, including coding and web and mobile app design. To that end, every speaker at An Event Apart is pretty famous in our world; it was great to see them in real life! Three days and twelve presentations later, my head was swimming with ideas.
But the most important themes I brought away were to:

- Design the priority
- Speed it up
- Be more compassionate

Let's look at each of these concepts one by one and see how they apply to the way we designed InsightIDR, Rapid7's incident detection and response tool, which allows security teams to detect intruders earlier in the attack chain.

Design the priority

At the conference, Ethan Marcotte, the father of Responsive Design, said, "Design the priority, not the layout." Ethan mentioned this because designers tend to consider the layout of an application screen first. Unfortunately, this approach has a tendency to throw off the signal-to-noise ratio. Jeffrey Zeldman agreed with Ethan when he said, "Design your system to serve your content, not the other way around." This concept has really come to the forefront with the Mobile First approach from Luke Wroblewski, who argues that "mobile forces you to focus." Well, I argue that you do not need to be mobile to focus! This concept is just as important on a 27" screen as it is on a 5" screen.

As we design InsightIDR, we design the priority, not the layout, by helping our customers focus on the right content. As you can see in the InsightIDR design to the left, the KPIs are placed in order of importance, with date and trending information, allowing our customers to prioritize their next actions as they protect their organizations. This results in a better user experience, and time saved for other tasks.

Speed it up

According to Jeffrey Zeldman, the applications we build need to be fast. Very. Fast. Common sense, I hear you say, and I agree completely. But that's no easy thing when you are collecting, analyzing, and sorting the amount of information that InsightIDR captures. Can we sit back and assume that our customers would understand if it takes a few seconds for a page to load? Not at all!
Yesenia Perez-Cruz, design director at Vox Media, suggests that organizations need to plan more strategically to decrease the file size of web application pages and thereby improve load times. We have taken Jeffrey's and Yesenia's message to heart, as we strive to ensure the pages and content within InsightIDR load as quickly as possible, so you can get your job done faster.

Be more compassionate

Being compassionate by standing in the shoes of the people we design for might seem like a no-brainer. After all, the 'U' stands for 'User' in my job title, 'UX Designer.' Yet some designers do not take the time to actually speak with the people they are designing for. At Rapid7, I speak with customers about their security needs through our customer voice program on a regular basis. The customers who have signed up for the program have a say in the features we design, and they get to see those designs early so they can, in effect, co-design with us by letting us know how to modify the designs to make them more effective. Only then can I and the rest of the UX team at Rapid7 truly design for you. In this respect, as Patty Toland, a regular An Event Apart speaker, says, "Design consistency isn't pixels; it is purpose."

Wrapping up

At Rapid7, I am always learning about design, about our customers' needs, and about the future of information security. So, if you are in Boston and hear someone on the T softly say, "Create the minimum viable interaction by providing the most valuable piece of information for decision-making as early as possible," that will probably be me on my way to work. On a more serious note, if you have not done so already, make sure you sign up for our Voice Program to see what's in the works, and have a say in what we do and how we do it.
Here are a few links to that program if you are interested:

Rapid7 Voice: https://www.rapid7.com/about/rapid7-voice/
Rapid7 Voice email: Rapid7Voice [at] rapid7 [dot] com

I look forward to speaking with you in the near future, as we work together to design the next version of InsightIDR! Thanks for reading, and have a wonderful HaXmas!

Kevin Lin, UX Designer II, Rapid7

Image credits: First image: An Event Apart (©eventifier.com, @heyoka). Second image: InsightIDR.

User Behavior Analytics and Privacy: It's All About Respect

When I speak with prospects and customers about incident detection and response (IDR), I'm almost always discussing the technical pros and cons. Companies look to Rapid7 to combine user behavior analytics (UBA) with endpoint detection and log search to spot malicious behavior in their environment. It's an effective approach: an analytics engine that triggers based on known attack methods, as well as users straying from their normal behavior, results in high-fidelity detection. Our conversations center on technical features and objections: how can we detect lateral movement, or what does the endpoint agent do, and how can we manage it? That's the nature of technical sales, I suppose. I'm the sales engineer, and the analysts and engineers that I'm speaking with want to know how our stuff works. The content can be complex at times, but the nature of the conversation is simple.

An important conversation that is not so simple, and that I don't have often enough, is a discussion of privacy and IDR. Privacy is a sensitive subject in general, and over the last 15 years (or more), the security community has drawn battle lines between privacy and security. I'd like to talk about the very real privacy concerns that organizations have when it comes to the data collection and behavioral analysis that is the backbone of any IDR program.

Let's start by listing some of the things that make employers and employees leery about incident detection and response:

- It requires collecting virtually everything about an environment. That means which systems users access and how often, which links they visit, interconnections between different users and systems, where in the world users log in from, and so forth. For certain solutions, this can extend to recording screen actions and messages between employees.
- Behavioral analysis means that something is always "watching," regardless of the activity.
- A person needs to be able to access this data, and sift through it relatively unrestricted.

I've framed these bullets in an intentionally negative light to emphasize the concerns. In each case, the entity that either creates or owns the data does not have total control, or doesn't know what's happening to the data. These are many of the same concerns privacy advocates have when large-scale government data collection and analysis comes up. Disputes regarding the utility of collection and analysis are rare; the focus is on what else could happen with the data, and the host of potential abuses and misuses available. I do not dispute these concerns, but I contend that they are much more easily managed in a private organization. Let's recast the bullets above into questions an organization needs to answer.

Which parts of the organization will have access to this system?

Consider first the collection of data from across an enterprise. For an effective IDR program, we want to pull authentication logs (centrally and from endpoints; don't forget those local users!), DNS logs, DHCP logs, firewall logs, VPN, proxy, and on and on. We use this information to profile "normal" for different users and assets, and then call out the aberrations. If I log into my workstation at 8:05 AM each morning and immediately jump over to ESPN to check on my fantasy baseball team (all strictly hypothetical, of course), we'll be able to see that in the data we're collecting.

It's easy to see how this makes employees uneasy. Security can see everything we're doing, and that's none of their business! I agree with this sentiment. However, taking a magnifying glass to typical user behavior, such as websites visited or messages sent, isn't the most useful data for the security team. It might be interesting to a human resources department, but this is where checks and balances need to start.
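To make the profiling idea above concrete, here is a minimal sketch (illustrative only, not how any Rapid7 product actually works) of baselining a user's login hour from auth-log events and flagging outliers; the event shape and the three-sigma threshold are assumptions for the example:

```python
from collections import defaultdict
from statistics import mean, pstdev

def build_baselines(events):
    """events: iterable of (user, login_hour) pairs from auth logs."""
    hours = defaultdict(list)
    for user, hour in events:
        hours[user].append(hour)
    # Profile "normal" per user as mean login hour plus spread.
    return {u: (mean(h), pstdev(h)) for u, h in hours.items()}

def is_anomalous(baselines, user, hour, threshold=3.0):
    """Flag logins far outside the user's historical pattern."""
    if user not in baselines:
        return True  # a never-before-seen user is worth a look
    mu, sigma = baselines[user]
    # Floor sigma at 1 hour so very regular users aren't over-flagged.
    return abs(hour - mu) / max(sigma, 1.0) > threshold

history = [("alice", 8), ("alice", 8), ("alice", 9), ("alice", 8)]
baselines = build_baselines(history)
print(is_anomalous(baselines, "alice", 8))   # the usual 8 AM login
print(is_anomalous(baselines, "alice", 23))  # a late-night login stands out
```

The same profile-then-deviate shape extends to hosts accessed, processes run, and source geographies.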
An information security team looking to bring in real IDR capabilities needs to take a long and hard look at its internal policies and decide what to do with information on user behavior. If I were running a program, I would make a big point of keeping this data restricted to security and out of the hands of HR. It's not personal, HR; there's just no benefit to allowing witch hunts to happen. It'll distract from the real job of security and alienate employees. One of the best alerting mechanisms in every organization isn't technology; it's the employees. If they think that every time they report something it's going to put a magnifying glass on every inane action they take on their computer, they're likely to stop speaking up when weird stuff happens. Security gets worse when we start using data collected for IDR purposes for non-IDR use cases.

Who specifically will have access, to what information, and how will that be controlled?

What about people needing unfettered access to all of this data? For starters, it's absolutely true: when Bad Things™ are detected, at some point a human is going to have to get into the data, confirm it, and then start to look at more data to begin the response. Consider the privacy implications, though: what is to stop a person from arbitrarily looking at whatever they want, whenever they want, from this system?

The truth is, organizations deal with this sort of thing every day anyway. Controlling access to data is a core function of many security teams already, and it's not technology that makes these decisions. Security teams, in concert with the many and varied business units they serve, need to decide who has access to all of this data and, more importantly, regularly re-evaluate that level of access. This is a great place for a risk or privacy officer to step in and act as a check as well. I would not treat access into this system any differently than other systems. Build policy, follow it, and amend it regularly.
Back to what I would do if I were running this program: I would borrow pretty heavily from successful vulnerability management exception-handling processes. Let's say there's a vulnerability in your environment that you can't remediate because a business-critical system relies on it. In this case, we would put in an exception for the vulnerability. We justify the exception with a reason, place a compensating control around it, get management sign-off, and tag an expiration date so it isn't ignored forever. Treat access into this system as an "exception," documenting who is getting access and why, and define a period in which access will be either re-evaluated or expire, forcing the conversation again. An authority outside of security, such as a risk or privacy officer, should sign off on the process and on individual access.

Under what circumstances will this system be accessed, and what are the consequences for abusing that access?

There need to be well-defined consequences for those who violate the rules and policies set forth around a good incident detection and response system. In the same way that security shouldn't allow HR to perform witch hunts unrelated to security, the security team shouldn't go on fishing trips (only phishing and hunts). Trawls through data need to be justified, for the same reasons as in the HR case: alienating our users hurts everyone in the long run.

Reasonable people are going to disagree over what is acceptable and what is not, and may even disagree with themselves. One Rapid7 customer I spoke with talked about using an analytics tool to track down a relatively basic financial scam going on in their email system. They were clearly justified in both extracting the data and further investigating that user's activity inside the company. "In an enterprise," they said, "I think there should be no reasonable expectation of privacy, so any privacy granted is a gift. Govern yourself accordingly." Of course, not every organization will have this attitude.
The important thing here is to draw a distinct line for day-to-day use, and to note what constitutes justification for crossing that line. That information should be documented and made readily available, not just buried in a policy that employees have to accept but never read. Take the time to have the conversation and engage with users. This is a great way to generate goodwill and hear out common objections before a crisis comes up, rather than in the middle of one or after. Despite the above practitioner's attitude towards privacy in an enterprise, they were torn: "I don't like someone else having the ability to look at what I'm doing, simply because they want to." If we, the security practitioners, have a problem with this, so do our users. Let's govern ourselves accordingly.

Technology based on data collection and analysis, like user behavior analytics, is powerful and enables security teams to quickly investigate and act on attackers. The security-versus-privacy battle lines often get drawn here, but that's not a new battle, and there are plenty of ways to address concerns without going to war. Restrict the use of tools to security, track and control who has access, and make sure the user population understands the purpose and rules that will govern the technology. A security organization that is transparent in its actions and receptive to feedback will find its work to be much easier.
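The exception-style access grant described earlier could be modeled along these lines; this is a minimal sketch with hypothetical field names, not a reference to any particular tool:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AccessException:
    """A documented, time-boxed grant of access into IDR data."""
    analyst: str
    justification: str
    approved_by: str  # an authority outside security, e.g. a privacy officer
    expires: date

    def is_active(self, today):
        # Expired grants force the access conversation to happen again.
        return today < self.expires

grant = AccessException(
    analyst="jdoe",
    justification="Tier-2 triage of an active email-fraud investigation",
    approved_by="privacy-officer",
    expires=date(2017, 3, 1),
)
print(grant.is_active(date(2017, 2, 1)))  # True: within the grant window
print(grant.is_active(date(2017, 6, 1)))  # False: must be re-justified
```

The expiration date is the point: access is never granted in perpetuity, so the justification conversation recurs on a schedule.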

Looking for a Managed Detection & Response Provider? You'll Need These 38 Evaluation Questions

Managed Detection and Response (MDR) services are still a relatively new concept in the security industry. Just recently, Gartner published its first Market Guide on Managed Detection & Response, which further defines the MDR services market. MDR services combine human expertise with tools to provide 24/7 monitoring and alerting, as well as remote incident investigation and response, to help you detect and remediate threats.

We like to think of MDR services as an army of cyber guardians. Do you want an army of cyber guardians? Who doesn't? The challenge is finding the right army, one that knows how to protect your unique organization. Unlike some vendor selections, this isn't a matter of just checking the boxes. It's important to ask a lot of questions, collect evidence, and do a thorough evaluation of the talent and technology that will be used to close the gaps in your incident detection and response practice. Don't worry. It can be done!

To help you with your selection, our security experts put together a list of 38 vital questions you should ask each vendor during your search for the perfect partner. These questions cover nine categories that are critical to detecting and responding to threats, including provider expertise, communication processes, how the deployment works, and how the service can be tailored. The list also covers the full attack chain with specific questions around threat detection, incident response, and remediation and mitigation.

I won't go into detail with the full list right here, but I've pulled some of my favorites:

- Does the solution propose to detect known and unknown threats by applying several threat detection methodologies? Which ones?
- Can the solution detect threats across multiple platforms? How?
- Does the solution propose to notify you about attacker campaigns against your industry? Request an example.
- Does the solution include endpoint technology for higher-fidelity validation?
- What is the SLA for reporting a threat within your environment?
- Is information provided that is digestible by both executive and technical customer contacts?
- Does it include business-focused remediation recommendations and mitigation techniques?

As you can see, the questions are pretty thorough. However, any potential partner that's worth your time should be able to answer them quickly and confidently. Be sure to trust your gut and avoid any answers that seem fishy (or phishy). The extra diligence now will go a long way toward your organization's ongoing security health.

Are you ready to recruit your army? Check out the full evaluation brief at: https://information.rapid7.com/choosing-a-managed-detection-and-response-provider.html

You can also learn more about Rapid7's MDR service, Analytic Response, at https://www.rapid7.com/services/analytic-response.jsp

UNITED 2016: Want to share your experience?

Key trends. Expert advice. The latest techniques and technology. UNITED 2016 is created from the ground up to provide the insight you need to drive your security program forward, faster. This year, we're also hoping you can provide us with the insight we need to make our products and services even better. That's why we're running two UX focus groups on November 1, 2016. We'd love to see you there; after all, your feedback is what keeps our solutions ever-evolving.

UX Focus Group: Help us make Nexpose Now even better

Stale results. False alerts. Windows of wait. We heard your issues with traditional scanning and released Nexpose Now to help you resolve them. Now that you've been using it for several months, we'd love to know: how's it going? Actually, we have way more questions than that, but they're all in the service of making sure Nexpose Now is meeting (or exceeding) your needs. And the only person who can tell us that is you! So please join us for this 1.5-hour focus group, where you, along with other Nexpose Now users, can share your list of loves and loathes. It's the perfect opportunity to speak with Rapid7, as well as your peers, about your Nexpose Now experience, so we can help make it even more exceptional.

UX Focus Group: Creating personalized and exceptional experiences

Here at Rapid7, we think we've done some pretty great stuff, but we also know we can do some things even better. Though, frankly, what we think doesn't really matter; as a Rapid7 customer, the only opinion we care about is yours. And we want it! Why? Well, as our favorite customer experience author John A. Goodman put it, "We can solve only the problems we know about." So join this 1.5-hour focus group and let us know: from the first time you heard about our solutions to the last time you used them, what's worked well and what could work better?
Your participation will really help us to understand the experience from your perspective, and how we can further personalize and improve that experience moving forward.

Want in?

Saving a seat in our focus groups is easy. If you haven't yet registered for UNITED, you can register for a UX session while registering for the conference. If you have already registered for UNITED, just head back to the conference registration page, enter the email address you used to register, along with your confirmation number, and tack on the session that makes sense for you. Space is limited, so act soon! We are looking forward to seeing you!

Ger Joyce
Senior UX Researcher, Rapid7

Malware and Advanced Threat Protection: A User-Host-Process Model

[Editor's Note: This is a sneak peek at what Tim will be presenting at UNITED 2016 in November. Learn more and secure your pass at http://www.unitedsummit.org!]

In today's big data and data science age, you need to think outside the box when it comes to malware and advanced threat protection. For the Analytic Response team at our 24/7 SOC in Alexandria, VA, we use three levels of user behavior analytics to identify and respond to threats. The model is defined as User-Host-Process, or UHP. Using this model and its supporting datasets allows our team to quickly neutralize and protect against advanced threats with a high confidence rate.

What is the User-Host-Process Model?

The UHP model supports our incident response and SOC analysts by adding context to every finding and pinpointing anomalous behavior. At its essence, it asks three main questions:

- What users are on the network?
- What hosts are they accessing?
- What processes are users running on those hosts?

This model also includes several enrichment sources, such as operating system anomalies, whitelisting, and known evil, to help in the decision-making process. Once these datasets are populated, the output from the model can be applied in a variety of different ways.

For example, most modern SIEM solutions alert if a user logs in from a new, foreign country IP address. If you need to validate the alert armed only with log files, you'd be hard-pressed to confirm whether the activity is malicious or benign. Our Analytic Response team uses the UHP model to automatically bring in contextual data on users, hosts, and processes to help validate the alert.
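As a rough illustration of that validation step (a sketch, not the Analytic Response implementation), the per-layer context checks can be combined into a single escalation decision:

```python
def needs_investigation(context):
    """context: boolean deviation signals derived from U/H/P baselines."""
    signals = [
        context.get("new_source_ip", False),          # User layer
        context.get("rare_destination_host", False),  # Host layer
        context.get("unusual_process_chain", False),  # Process layer
    ]
    # Two or more independent deviations escalate the alert to an analyst.
    return sum(signals) >= 2

alert = {
    "new_source_ip": True,
    "rare_destination_host": True,
    "unusual_process_chain": False,
}
print(needs_investigation(alert))  # True: User and Host layers both deviate
```

The signal names and the two-signal threshold are assumptions for the example; a real pipeline would weight and tune these per environment.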
Here are some example artifacts:

- User account information: account creation, Active Directory attributes, accessed hosts, public IPs...
- Host information: destination host purpose, location, owner, operating system, service pack, criticality, sensitivity...
- Process information: process name, process ID, parent process ID, path, hashes, arguments, children, parents, execution order, network connections...

With this supporting data, we build a profile for each user or artifact found. Circling back to our example, "user logged in from a new IP address in a foreign country," we can add this context:

- Does the user typically log in and behave in this way (day/time of login, process execution order, duration of login)?
- How often does the user run these particular processes (common, unique, rare)?
- How common is this user's authentication onto this system?
- How often have these processes executed on this system?

Armed with UHP model data, we have a baseline of user activity to aid in threat validation. If this user has never logged in from this remote IP, seldom logs into the destination system, and their process execution chain deviates from historical activity, we know that this alert needs further investigation.

Analyzing Malware, the UHP Way

Adhering to a UHP model means that for every executable, important metadata and artifacts are collected not only during execution, but also from the static binary. When you're able to compare binary commonality, arguments, execution frequency, and other lower-level attributes, you have additional context for making nuanced decisions about suspected malware. For example, the question "How unique is a process?" has several layers.
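The baseline questions above can be sketched in code. This is a minimal illustration with invented field names and thresholds, not Rapid7's implementation:

```python
# Sketch: flag a login as anomalous when it deviates from a user's baseline.
# Event fields ("source_ip", "host", "hour") and thresholds are assumptions
# made for illustration only.
from collections import Counter

def build_baseline(events):
    """Summarize one user's historical logins into simple frequency tables."""
    return {
        "source_ips": Counter(e["source_ip"] for e in events),
        "hosts": Counter(e["host"] for e in events),
        "hours": Counter(e["hour"] for e in events),
    }

def anomaly_reasons(baseline, login):
    """Return human-readable reasons this login deviates from the baseline."""
    reasons = []
    if login["source_ip"] not in baseline["source_ips"]:
        reasons.append("never seen source IP")
    if baseline["hosts"][login["host"]] < 2:
        reasons.append("rarely accessed host")
    if baseline["hours"][login["hour"]] == 0:
        reasons.append("unusual login hour")
    return reasons

history = [
    {"source_ip": "10.0.0.5", "host": "FILESRV01", "hour": 9},
    {"source_ip": "10.0.0.5", "host": "FILESRV01", "hour": 10},
    {"source_ip": "10.0.0.5", "host": "FILESRV01", "hour": 9},
]
suspect = {"source_ip": "203.0.113.77", "host": "DC01", "hour": 3}
print(anomaly_reasons(build_baseline(history), suspect))
# -> ['never seen source IP', 'rarely accessed host', 'unusual login hour']
```

A login matching the baseline returns an empty list, so only deviations surface for triage.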
Let's look at four:

- Process commonality on a single asset: the single-host baseline.
- Process commonality at an organizational level: across all of my assets, how many are running this process?
- Process commonality at an industry/sector level: across organizations in the same vertical, how common is this process?
- Process commonality across all available datasets.

To be most effective, the User-Host-Process model applies multiple datasets to a specific question to aid in validation. So in the event that the "U" (user) dataset finds no anomalies, the Host layer is applied next, and finally the Process layer is applied to find anomalies.

Use Case: Webshell

Rapid7 was called in to assist on an incident response engagement involving potential unauthorized access and suspicious activity on a customer's public-facing web server. The customer had deployed a system running Windows Internet Information Services (IIS) to serve static and dynamic web pages for their clients. We started the engagement by pulling data on the users in the environment, the hosts, and real-time process executions to build up the UHP model. While in this case the User and Host models didn't detect any initial anomalies, the real-time process tracking, cross-process attributes, baselines, and context models were able to identify suspicious command-line execution spawned by the parent process w3wp.exe, the IIS process responsible for running the web server. Using this data, we pivoted to the web logs, which identified the suspicious web shell being accessed from a remote IP address. From there we were able to thoroughly remediate the attack.

Summary

The Analytic Response team uses models such as UHP to help automate alert validation and add context to findings. Adding datasets from external sources such as VirusTotal, NSRL, and IP-related tools infuses further context into the alerts, increasing analyst confidence and slashing incident investigation times.
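Process commonality at the single-asset and organizational levels reduces to a simple ratio. A minimal sketch, with an invented inventory:

```python
# Sketch: process commonality at the organizational level.
# The asset inventory below is invented for illustration.
def commonality(inventory, process):
    """Fraction of assets in the inventory observed running the given process."""
    running = sum(1 for procs in inventory.values() if process in procs)
    return running / len(inventory)

org = {
    "HOST-A": {"svchost.exe", "chrome.exe"},
    "HOST-B": {"svchost.exe", "w3wp.exe"},
    "HOST-C": {"svchost.exe", "mimikatz.exe"},
    "HOST-D": {"svchost.exe", "chrome.exe"},
}
print(commonality(org, "svchost.exe"))   # 1.0  -> ubiquitous, likely benign
print(commonality(org, "mimikatz.exe"))  # 0.25 -> rare, worth investigating
```

The same ratio computed against an industry-wide or global dataset gives the outer layers of the question.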
For each of our Analytic Response customers, we take into account their unique user, host, and process profiles. By applying the UHP model during alert triage, hunting, and incident response, we can identify and protect against advanced threats and malware in your enterprise quickly and accurately. If you'd like to learn more about Analytic Response, check out our Service Brief [PDF]. If you need Incident Response services, we're always available: 1-844-RAPID-IR.

The Calm Heroes Fighting Cyber Crime

The call everyone had been waiting for came in: the shuffleboard table arrived, and was ready to be brought upstairs and constructed! The team had been hard at work all morning in the open-style office space with conference rooms and private offices along the perimeter.…

The call everyone had been waiting for came in: the shuffleboard table had arrived and was ready to be brought upstairs and constructed! The team had been hard at work all morning in the open-style office space, with conference rooms and private offices along the perimeter. The Security Operations Center (SOC), with computers, many monitors, and an open layout, was behind a PIN-activated door. The team wanted something fun to do in the office when they took a break from defending networks. My office-mates for the week were casually dressed in jeans and either t-shirts or button-downs, and they were sweating while laughing and strategizing about how to get a 20-foot shuffleboard table up two flights of stairs and into the office. About five minutes later, the shuffleboard table parts were placed in the open space in the office, and the team was back downstairs figuring out how to dispose of the wood and other protective covering that came with it. They were calm and happy, the consistent mood throughout the week even when larger puzzles arose. The next morning, the table was fully assembled and tests were underway to correct the slope. What does a shuffleboard table have to do with my trip to Alexandria and the team I visited? The shuffleboard assembly showed me a lot about how some of the best problem solvers work together to get the job done. The team quickly, quietly, and efficiently solves problems regularly, and they have a lot of fun doing so. They work well together: they collaborate together, eat together, smoke together, and joke together. One way that they mark their success: you never heard about the incident that they solved, it's just solved, much like how they built the shuffleboard table. One minute, there were many parts in a box that needed to be brought up the stairs and constructed. A day later, there was a shuffleboard table set up and the packaging had been recycled.
Most of the time, however, this teamwork is put toward solving some of the largest, most complicated cybersecurity breaches and problems. Everyone on the team has a distinct role, and they rely on each other to creatively problem-solve. These are the crime fighters that you don't see or hear. So, how do they do it? They divide and conquer. The team is broken up into three smaller teams: an analytic response team, an incident response team, and a threat intelligence team. Their knowledge and collaboration enable quicker threat detection and response and a deep, unparalleled understanding of the threat landscape, user behavior, and attacker behavior. What are these three different teams, and how are they not duplicative?

Analytic Response

The Analytic Response team is a group of people who work in the security operations center and continuously keep an organization's environment safe. The combination of people and technology in Analytic Response acts as a set of "detectors" in the environment. With this team monitoring, detecting, and responding to what's going on in your environment, when an incident comes up you gain an understanding of what is happening and how serious it is. There are three tiers of analysts in the SOC, and each has a different role in detecting and responding. They make it possible to detect and respond to threats in hours instead of months. These people eat, sleep, and breathe problem solving, and they do so calmly and with ease. Many of these analysts have been coding and participating in hacking events since they were young and have a lot of experience spotting anomalies.

Incident Response

The Incident Response team is another subset of this larger IDR ecosystem. This group helps teams come up with proactive strategies so that they have a response program in place before anything goes wrong.
They are also the boots on the ground if there's an issue; as the team lead put it, "we're the people you don't want to see at your organization." When the Incident Response team is called in unexpectedly, it's because there's a cyber-incident that needs to be solved, immediately. They examine and make sense of the virtual crime scene.

Threat Intelligence

The Threat Intelligence team analyzes information on threats and generates intelligence that feeds both analytic and incident response, giving all of the teams situational awareness of emerging and evolving threats. Our leader of the threat intelligence practice is a former Marine Corps network warfare analyst. Threat intelligence helps defenders understand threats and their implications and speeds decision-making in the most urgent situations. The three teams that make up Rapid7's broader IDR Services all support each other and make the result better for the customer. They may seem like three distinct teams, but they all come together to solve problems quickly and create a vast body of knowledge to be used by all. The analytic response team is made more efficient by threat intelligence, and the incident response team helps customers experiencing major incidents while utilizing the work done by both other teams. They are an integrated, fun, quirky team that calmly and easily solves problems… and they also find time for shuffleboard! Learn more about Analytic Response here.

10 Years Later: What Have We Learned About Incident Response?

When we take a look at the last ten years, what's changed in attacker methodology, and how has it changed our response? Some old-school methods continue to find success - attackers continue to opportunistically exploit old vulnerabilities and use weak/stolen credentials to move around…

When we take a look at the last ten years, what's changed in attacker methodology, and how has it changed our response? Some old-school methods continue to find success: attackers still opportunistically exploit old vulnerabilities and use weak or stolen credentials to move around the network. However, the work of the good guys, reliably detecting and responding to threats, has shifted to accommodate an attack surface that now includes mobile devices, cloud services, and a global workforce that expects access to critical information anywhere, anytime. Today, failure anywhere from incident detection to remediation not only puts your critical data at risk, but can result in an attacker overstaying their welcome. We discussed this topic with our incident response teams, who have responded to hundreds of breaches, to develop a new whitepaper that shares how incident response has changed and how they prioritize strategic initiatives today. It comes with a framework we use with customers today to measure and improve security programs. Download your copy of A Decade of Incident Response: IDR Evolution & Evaluation here.

Incident Detection & Response, Then and Now

Since 2006, every step in breach response has continued to evolve; this infographic highlights key differences. For example, breach readiness was once an afterthought to availability and optimizing the speed of business processes. Previously, there was little chance of falling victim to a sophisticated targeted attack leveraging a combination of vulnerabilities, compromised credentials, and malware. But today, IT teams are expected to prepare thoroughly in the event of a breach, implementing network defense-in-depth and organizing and restricting data along least-privilege principles. If we look back a decade, it was much easier to retrace how and where an incident occurred and respond accordingly.
Today's IR pros must combine expertise in a growing list of areas, from forensics to incident management, and ensure breach response covers everything from technical analysis to getting the business back up and running. On the other hand, containment and recovery have continued to improve over the past decade. Thanks to well-rehearsed programs, combined with system image and data restoration processes, IT can return a user's machine in just a day. Security teams can contain threats remotely and use technology to provide scrutiny over previously compromised users and assets.

Incident Response Maturity

You can find out more on all of this in the infographic and the new Rapid7 whitepaper, A Decade of Incident Response. Too many security professionals are concerned with how their programs compare to those of their peers. This is the wrong approach. As you evolve your security program, worry only about one thing: how your program measures up against your attackers. In the paper, you're asked seven questions to determine the maturity of your incident detection and response program. We've based this framework on decades of Rapid7 industry experience, and we think it'll provide a great place to start evaluating where you need to make changes. Want to learn more about Rapid7's technology and services for incident detection and response? Check out InsightIDR, which combines the best capabilities of UBA, SIEM, and EDR to relentlessly detect attacks across your network.

Eric Sun

From the trenches: Breaches, Stories, Simple Security Solutions - from MacAdmins at PSU

Over the last few months, Jordan Rogers and I have been speaking about the benefits of doing the basics right in information security. Reducing noise, avoiding the waste of precious budget dollars on solutions that will not be used to their fullest, as well as…

Over the last few months, Jordan Rogers and I have been speaking about the benefits of doing the basics right in information security. Reducing noise, avoiding the waste of precious budget dollars on solutions that will not be used to their fullest, and improving the overall security of your enterprise are all goals that can be achieved with some of these simple tips. We presented a hybrid Mac/Windows version of this talk at the MacAdmins conference at PSU, where it was filmed and uploaded to YouTube. Take a look if you'd like to hear the combined perspective of an incident response person and a blue team person, information from real problems we observed, and recommendations on how to mitigate those issues! From the trenches: Breaches, Stories, Simple Security Solutions - YouTube

Applying Poker Theory to Incident Detection & Response

Editors Note: Calling Your Bluff: Behavior Analytics in Poker and Incident Detection was really fun and well received, so here's an encore!Hold'em & Network Security: Two Games of Incomplete InformationWhen chatting about my past poker experience, there's one statement that pops up time and…

Editor's Note: Calling Your Bluff: Behavior Analytics in Poker and Incident Detection was really fun and well received, so here's an encore!

Hold'em & Network Security: Two Games of Incomplete Information

When chatting about my past poker experience, there's one statement that pops up time and time again: "So… as a 'pro'… you probably bluff a lot." A bluff is a bet made knowing that if called, you have no chance to win at showdown. At its core, a bluff is an attack: betting and raising in an effort to win, or some may say "steal," the pot. Deciding to attack your opponent is always a risk. Your target merely needs to call to drag in the pot; if your opponent has a strong hand, bluffing can be the equivalent of burning money.

[Photo: Tom Dwan, one of the most intuitive and aggressive hand-readers of all time.]

Similar to poker, an intruder on your network is also making decisions based on incomplete information. He or she doesn't have perfect information on your vulnerabilities, incumbent technology, or the security stack you have in place. Getting an initial foothold on the network involves risk; he or she is forced to leave behind traces in order to make headway. So how do attackers know when to bluff, and how does this relate to incident detection and response? For poker, in theory, it's quite easy. Imagine that, magically, you could see the other person's hole cards: you'd never make a mistake. Not only could you extract optimal value every time you had a better hand, you could also attack with impunity whenever your opponent holds a weak hand (say, a weak pair or worse). When deciding to bluff, then, your own cards don't matter, and two tenets hold true:

- Attack players who fold too much, or adapt poorly to a high level of aggression.
- Analyze opponent behavior to read their hand and identify signs of weakness.

How Attackers Choose Their Methods

In security, it's much easier to launch an attack.
First of all, there's no getting stared down eye-to-eye after you throw in the payload salvo. Second, attackers are largely opportunistic and motivated by quick financial gain. Instead of developing an intricate plan against a single target, attackers can knock on thousands of doors for a quick win. This means attackers:

- Target organizations with both monetizable data and an immature security program.
- Use previously successful signs of weakness (e.g., what's worked in the past?).

In both endeavors, the aggressor usually has only one good shot before the defenses go up. A poorly executed attack greatly reduces the chance of succeeding again. With this mindset, how do you improve your security program to detect an attack? At Rapid7, instead of comparing your security program to similar organizations, we recommend modeling to the Attack Chain, pictured below. Following this model, here are three best practices to improve your security program:

1. Ensure you have detection for previously successful attack vectors. If it ain't broke, don't fix it. In this year's Verizon DBIR, 63% of confirmed breaches involved weak, default, or stolen credentials, a range that spans stolen credentials to weak passwords and policies (non-expiring passwords, anyone?).

2. Detect earlier in the attack chain. If you only get an alert when unauthorized access of your critical assets occurs, that's very late in the game. By detecting an intruder during initial compromise and attacker reconnaissance, your team can catch the attack earlier, ideally before monetarily valuable data is breached.

3. Have coverage for each of the steps appropriate to your security bandwidth. From Rapid7 research, our white- and black-hat teams, and the Metasploit project, we've found that organizations are adequately identifying malware, but leave gaps in detecting credential-based attacks, endpoint activity (including malicious local lateral movement), and cloud services.
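As a toy illustration of detecting a credential-based vector early in the chain, one could flag sources that fail logins against many distinct accounts (password spraying). The event schema and threshold below are invented assumptions, not any product's format:

```python
# Sketch: flag source IPs attempting failed logins against many distinct
# accounts, a common signature of password spraying. Fields and the
# threshold are illustrative assumptions.
from collections import defaultdict

def spraying_sources(auth_events, account_threshold=3):
    """Return source IPs with failed attempts against >= threshold accounts."""
    attempts = defaultdict(set)
    for e in auth_events:
        if e["result"] == "FAIL":
            attempts[e["source_ip"]].add(e["account"])
    return {ip for ip, accts in attempts.items() if len(accts) >= account_threshold}

events = [
    {"source_ip": "198.51.100.9", "account": a, "result": "FAIL"}
    for a in ("alice", "bob", "carol", "dave")
] + [{"source_ip": "10.0.0.8", "account": "alice", "result": "FAIL"}]
print(spraying_sources(events))  # {'198.51.100.9'}
```

A single user mistyping their own password stays below the threshold, so only the many-accounts pattern alerts.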
A true detection-in-depth approach should identify anomalous behavior at each step in the chain, for a variety of attack vectors, across the network ecosystem. Of course, that's easier said than done. From our annual Incident Detection & Response survey, we found that (1) organizations have strained security teams, (2) they are plagued with too many alerts from their existing technology, and (3) incident investigations, especially for false positives, just take too much time. At Rapid7, we focus on detecting and stopping intruders anywhere they go in your ecosystem. Our team's experience and research on attacker methodology can assist your security team whether you have gaps in people, process, or technology. If this piqued your interest, check out our recent research report on how intruders attack passwords, The Attacker's Dictionary: Auditing Criminal Credential Attacks.

Attackers Prey on Incident Response Bottlenecks

Organizations are taking too long to detect attacks. The average length of time to detect an intruder ranges from 2 months to 229 days across many reports and anecdotal evidence from publicized breaches supports these numbers. This means that attackers are taking advantage of the…

Organizations are taking too long to detect attacks. The average length of time to detect an intruder ranges from two months to 229 days across many reports, and anecdotal evidence from publicized breaches supports these numbers. This means that attackers are taking advantage of the challenges inherent to the flood of information bombarding your incident response team every day. This is a problem that we need to address by improving the process with better tools.

The incident handling process is similar to continuous flow manufacturing

Continuous flow manufacturing has various nodes that are continuously receiving an item and passing it on to the next stage. Incidents follow a similar pattern:

Alerted to potential incident --> Alert is triaged to strip false positives --> Incident is analyzed --> Action is taken to remediate

An incident is not detected simply because an alert is triggered. It needs to reach the final stage, where the issue is confirmed and the impact is understood, before detection can be claimed. Because of this similarity, it reminds me of a mandatory reading in most business schools, The Goal. This book looked at the manufacturing process according to a "Theory of Constraints," which views any manageable system as being limited in achieving more of its goals by a very small number of constraints. These constraints are commonly referred to as bottlenecks because no matter how many items try to flow through them, there is a limit to how much can pass at any one time (like the neck of a bottle). If you want to respond to incidents faster and speed attack detection, you need to make sure that each stage of your incident handling process is not a daunting bottleneck. The problem is that, in most organizations, there are two bottlenecks that need to be eliminated if we are to improve response time from months to hours.
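The Theory of Constraints point can be made concrete with a tiny sketch: in a serial pipeline, end-to-end throughput is capped by the slowest stage, no matter how fast the others run. The per-stage rates here are invented numbers for illustration:

```python
# Sketch: throughput of a serial incident-handling pipeline is limited by
# its slowest stage (the bottleneck). Rates are invented for illustration.
stages = {              # alerts per hour each stage can handle
    "alerting": 500,
    "triage": 20,       # the human bottleneck
    "analysis": 40,
    "remediation": 60,
}
bottleneck = min(stages, key=stages.get)
throughput = stages[bottleneck]
print(bottleneck, throughput)  # triage 20
```

Doubling the speed of any non-bottleneck stage leaves throughput unchanged; only widening the bottleneck itself helps, which is the book's central argument.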
IR Bottleneck #1: Alert triage is a decision made without time for proper deliberation because of the noise

All too often, a significant incident response bottleneck is the first human action: alert triage. Whether in a 24-hour SOC or an ad hoc small team, the number of noisy security tools currently in use causes CSIRTs to see thousands of alerts each day, which leads them to both (a) ignore a great many alerts based on extremely quick decisions with little context, and (b) take a long time to parse an alert, confirm it as an incident, and begin the investigation. In every massive breach made public, someone in the media jumps to point out that an alert was triggered but no action was taken. I wrote about alert fatigue a great deal here, but the security solutions that value quantity of alerts over quality are as much to blame for incident response teams missing the right alerts as the teams and processes built around those tools. If too many alerts are being generated for a team to spend even a minute apiece understanding them, the true incidents are never going to get investigated quickly enough. The InsightIDR team invests heavily in new user behavior analytics detection capabilities that are designed to alert infrequently. We also ingest the alerts from your other security products and reduce the noise by adding context about the responsible user and trends within the data generated. Our goal is to reduce the number of alerts on such a scale, from thousands per day to tens per day, that you can triage them all without rushing and still have time for your many other responsibilities. Our customers have told us that we are delivering on this promise with 20 to 50 alerts per day in the larger organizations, but our aim is to keep the volume low while detecting more and more concerning behaviors.
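The noise-reduction idea, rolling raw alerts up into a few context-rich items, can be sketched like this. Field names and the rollup key are illustrative assumptions, not how InsightIDR works internally:

```python
# Sketch: collapse raw alerts into per-user, per-rule incidents with owner
# context attached, shrinking thousands of rows into a few triage-ready
# items. Schema and lookup tables are invented for illustration.
from collections import defaultdict

def rollup(alerts, user_lookup):
    """Group alerts by (responsible user, rule) and count them."""
    incidents = defaultdict(list)
    for a in alerts:
        owner = user_lookup.get(a["asset"], "unknown")
        incidents[(owner, a["rule"])].append(a)
    return [
        {"user": user, "rule": rule, "count": len(group)}
        for (user, rule), group in incidents.items()
    ]

alerts = [{"asset": "WKS-17", "rule": "beaconing"} for _ in range(250)]
print(rollup(alerts, {"WKS-17": "jsmith"}))
# [{'user': 'jsmith', 'rule': 'beaconing', 'count': 250}]
```

Instead of 250 identical rows, the analyst triages one item that already names the responsible user.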
IR Bottleneck #2: Incident analysis requires a very high level of skill due to the available tools

Even more frequently, the investigation process for an incident takes a great deal of time. Tying all related information together to paint a picture of the broader compromise and map out the impact is primarily done by taking one piece of information from a noisy alert, such as an IP address, and conducting dozens of searches through mounds of collected data. There are three reasons that this frequently becomes a bottleneck:

- Not having the right data. Given the bottleneck in alert triage, not every incident is identified within thirty days, but most solutions only keep data searchable for this period of time. Even if you can obtain data from far enough in the past, every attack follows a different path, so you may not have collected data from the sources (e.g., endpoints) necessary to map out an intruder's path through your network.

- Not knowing what questions to ask of your data. There is a well-known concern that only a very small number of highly skilled incident response professionals know what to look for in advanced attacks. This means that either a select few organizations have multiple experts at incident analysis, or a large number of organizations have one expert each. The truth is somewhere in the middle, but without improving the tools and methods, the learning curve to create more of these experts has a troubling slope.

- High-speed answers without the right questions. Existing software solutions have focused too much on speeding manual searches through the data and too little on surfacing noteworthy patterns and relevant context in the search results that would help less experienced incident responders gain the understanding necessary to move to remediation faster.
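The "dozens of searches from one indicator" workflow can be automated in miniature: take one indicator and fan it out across every log source at once. The log schemas below are invented for illustration:

```python
# Sketch: pivot from a single indicator (here an IP address) across several
# log sources in one pass, instead of running many manual searches.
# Dataset names and row schemas are invented assumptions.
def pivot(indicator, datasets):
    """Return, per dataset, the rows containing the indicator in any field."""
    hits = {}
    for name, rows in datasets.items():
        matched = [r for r in rows if indicator in r.values()]
        if matched:
            hits[name] = matched
    return hits

datasets = {
    "firewall": [{"src": "203.0.113.7", "dst": "10.0.0.4", "action": "allow"}],
    "vpn":      [{"user": "jsmith", "src": "203.0.113.7"}],
    "dns":      [{"query": "example.org", "client": "10.0.0.9"}],
}
print(pivot("203.0.113.7", datasets))
```

One call surfaces that the IP appears in both firewall and VPN logs, and the VPN hit immediately names a user to investigate next.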
InsightIDR provides a variety of tools for our customers to immediately query endpoints, search logs, or explore patterns in behavior, reducing this sizable investigation bottleneck. We believe we need to provide better ways to analyze data from any time or source and tie it to the users and assets involved, so that you can fully scope an incident and determine its impact on the organization. If you want to see how InsightIDR can help you minimize your bottlenecks, you can watch our brief video or register for a free, guided demo of InsightIDR. I think you'll quickly see how we can improve your incident handling flow.

What Makes SIEMs So Challenging?

I've been at the technical helm for dozens of demonstrations and evaluations of our incident detection and investigation solution, InsightIDR, and I've been running into the same conversation time and time again: SIEMs aren't working for incident detection and response.  At least, they aren't working…

I've been at the technical helm for dozens of demonstrations and evaluations of our incident detection and investigation solution, InsightIDR, and I've been running into the same conversation time and time again: SIEMs aren't working for incident detection and response. At least, they aren't working without a large investment of time, effort, and resources to configure, tune, and maintain the deployment. Most organizations don't have the recommended three to five dedicated SIEM caretakers, and the result is a system that produces a high volume of false positives that are difficult to investigate. Why is this happening, and what makes it so hard? Let's visualize the incident detection and response process from start to finish: raw log ingestion to incident containment (Points A & B below). Between Point A and Point B, we need to correlate the logs, produce meaningful alerts, and act on those alerts. This gives us three interdependent components as we move from Point A (ingesting logs) to Point B (containing an incident): the yellow segment represents the data flowing into the system and how the system correlates it; the green segment represents the system's ability to detect malicious behavior; and the blue section represents the admin's ability to react to alerts and contain incidents. The idea is, the higher the quality of data the system has to work with, the farther right the yellow bar expands. On the other end of the spectrum, the better the incident response capabilities are, the further left the blue section extends. The ultimate goal is to eliminate as much of the green section as possible, reducing the amount of time spent on building and maintaining rules. Let's take a look at how SIEMs are tackling the challenge:

Quality of Data

The length of the yellow segment corresponds to the quality of data, meaning how well the system is able to ingest and correlate logs from various sources without human help.
Correlation is the key here: stronger correlation between various pieces of log data (IPs, assets, users, accounts, etc.) makes it much easier for the system and the admin to understand how different log events interrelate. Strong correlation means less work on the administrator's part for building rules, because the system is automatically correlating data and the admin is only responsible for defining alerts based on anomalous or malicious behavior. On the other hand, if data is arriving in the system in raw log format with no pre-built correlation, then the admin will need to define correlation rules before he or she can build detection rules; without correlation, alerts aren't contextual and end up producing a deafening amount of noise. To illustrate:

Another major factor here is coverage. Most SIEM solutions can handle internal logs, but enterprise cloud solutions (e.g., AWS, Office 365, Okta) are increasingly common, enabling employees to work from home more easily. These remote users and cloud services are ignored by most SIEM solutions. Additionally, very few organizations that I've spoken to are gathering data from all their endpoints, which makes it impossible to detect more sophisticated attacker techniques such as lateral movement, event log deletion, and local privilege escalation.

Time to Build Detection Rules

The green segment's job is basically to bridge the gap between the data in the system and the capabilities of the incident response team. Ideally, this section is nonexistent, meaning the admin doesn't need to spend any time building or maintaining rules; he or she spends all their time responding to and containing incidents. Realistically though, every organization will, at the very least, have some unique rules to build based on internal policies, network architecture, and what they consider sensitive data.
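The kind of pre-built correlation described above amounts to a pair of joins: IP to asset (e.g., via DHCP leases), then asset to owner (e.g., via directory data). A minimal sketch with invented field names and lookup tables:

```python
# Sketch: enrich a raw event by joining IP -> asset (DHCP lease data) and
# asset -> owner (directory data), so the alert names a user rather than
# just an IP. All tables and field names are invented for illustration.
def correlate(event, dhcp_leases, asset_owners):
    """Attach asset and user context to a raw IP-keyed event."""
    asset = dhcp_leases.get(event["ip"], "unknown-asset")
    user = asset_owners.get(asset, "unknown-user")
    return {**event, "asset": asset, "user": user}

dhcp = {"10.0.0.42": "LAPTOP-9"}
owners = {"LAPTOP-9": "mchen"}
print(correlate({"ip": "10.0.0.42", "rule": "tor-exit-traffic"}, dhcp, owners))
# {'ip': '10.0.0.42', 'rule': 'tor-exit-traffic', 'asset': 'LAPTOP-9', 'user': 'mchen'}
```

Without these joins, the detection rule author has to rebuild this mapping by hand before any alert is meaningful, which is exactly the extra work described above.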
Even so, if we're able to provide some detection rules without configuration, leaving only a handful of organization-specific rules to be configured, we've successfully cut down the green section. Unfortunately, extensive manual configuration is unavoidable with SIEM solutions: every log source, every correlation rule, every dashboard, and every alert has to be set up by the admin. Per Gartner, this results in an average setup time of six months just to get basic use cases and workflows configured, with a full deployment taking upwards of three years. Oh, and these statistics assume you have three to five individuals dedicated, full time, to deploying, maintaining, and tuning the SIEM. Not many security teams that I talk to have the resources to dedicate even a single person to their SIEM, let alone five, which leaves us here:

Capabilities of Incident Response

The last piece of this equation is the capabilities of the incident response team. Like the quality-of-data segment, we actually want this portion to be as extensive as possible: the larger the blue section is, the better poised the IR team is to quickly contain incidents. There are a lot of variables here, but the key factors are security tools and human resources. As most organizations have a fairly static headcount, our goal is to provide useful tools to help facilitate incident response. Good tools make it easier for an incident responder to understand the what, where, and who of a security incident: what happened, where in the network it occurred, and who was involved. This gives the incident response team the information they need to swiftly and successfully contain the incident. The detection rules created earlier produce an alert; the more finely tuned and contextual the alert is, the easier it is for the incident responders to identify the users and/or assets that have been compromised.
Incident responders typically use log search tools to find other affected users and assets; only once all compromised targets have been identified can the IR team move on to containment, resetting passwords and pulling assets offline. Finally, the team can analyze the attack to find the root cause and remediate appropriately.

What's the bottom line?

There are two key metrics SIEM systems are supposed to address: time to detect and time to contain. According to the Verizon Data Breach Investigations Report (DBIR), time to detect a breach is still measured in months. Time to contain a breach once it has been detected is still measured in days or weeks, and almost a quarter of breaches take months to contain. This is largely because traditional SIEM technologies require admins to build, maintain, analyze, and respond to alerts. If the past ten years of SIEM technology have taught us anything, it's that the problems are still there, and we need a new approach.

How does InsightIDR help?

InsightIDR provides an out-of-the-box analytics and correlation engine that ingests logs from about 70 different security and IT solutions, correlates the data, and provides the context needed to produce meaningful alerts. Over 60 pre-built, contextual alerts give admins an advanced system for incident detection without the overhead required to build, maintain, and tune alerting rules. The investigations feature gives incident responders an immediate understanding of the what, where, and who involved in an incident, along with the ability to run deeper forensic interrogations and correlate data from all log sources to further investigate an incident.

Attackers Thrive On Chaos; Don't Be Blind To It

Many find it strange, but I really enjoy chaos. It is calming to see so many problems around me in need of solutions. For completely different reasons, attackers love the chaos within our organizations: it leaves a lot of openings for gaining access and remaining undetected within the noise. Rapid7 has always focused on reducing the weaknesses chaos introduces. Dr. Ian Malcolm taught us in Jurassic Park that you cannot control chaos; instead, we strive to help you reduce and understand its impact. Chaos in modern companies is largely an issue of scale. There is so much computing power in use, creating so much data, that no one could be expected to manually find every vulnerability, missing control, or misconfiguration amid this rapidly growing and disconnected set of data. The solutions in Rapid7's Threat Exposure Management suite are wholly focused on automating the discovery of these exposures and providing quick remediation steps. Incident response teams have to solve a completely different problem brought on by the same chaotic expansion of data: understanding activity. Detection solutions are often built for specific attack types. When new methods of threat detection are invented, they are often highly effective against malware; organizations gradually recognize their effectiveness and deploy them to prevent and detect a large number of attacks. As this security technology becomes universally adopted, attackers get creative and transition away from the many methods these solutions so effectively stop. The result is a great many disparate solutions that remain effective against specific actions which will never completely disappear.
When teams attempt to pool alerts and information across all of these solutions, the sheer number of sources leaves many incident responders sitting in front of eight or more monitors, with a view similar to the security-guard command centers navigated by the criminal protagonists in heist movies like The Score. The tools available are failing to bring sufficient order. For a decade now, incident response teams have trusted SIEM solutions that were not even built for incident responders. They were built for IT professionals, auditors, and data miners, so they dealt with chaos in one way: centralization. Pooling all activity data in a single place finally gave incident responders something to monitor, and if the team includes experts in networking and incident analysis, it can operate effectively. This is primarily done by continually building custom scripts to identify known risky behavior and indicators of compromise. However, combating this multiplying chaos with faster processing doesn't help you understand it better. Big Data technologies are now needed just to maintain the status quo as attackers explore new technology. You can expand your staff to include data scientists to help reduce the alerting noise, but that only automates the analysis of data you already collect and know how to explain. Like trying to master a competitive sport without ever scrimmaging or learning from other teams, this can work, but there has to be a better way. We built InsightIDR for incident responders facing this evolving chaos. We consistently heard about the challenge of recognizing legitimate, undesirable, and malicious behavior amid the growing chaos.
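The custom indicator-of-compromise scripts mentioned above tend to follow the same basic shape: scan centralized log lines against a hand-maintained list of known-bad values. A toy sketch (the indicator values and log lines are invented for illustration):

```python
import re

# Toy indicator list; real teams pull these from threat-intel feeds.
IOC_IPS = {"203.0.113.7", "198.51.100.23"}   # known-bad IPs (documentation ranges)
IOC_PATTERN = re.compile(r"psexec|mimikatz", re.IGNORECASE)

def scan_line(line: str):
    """Return a reason string if a log line matches a known indicator."""
    for ip in IOC_IPS:
        if ip in line:
            return f"known-bad IP {ip}"
    if IOC_PATTERN.search(line):
        return "suspicious tool name"
    return None

log_lines = [
    "2017-03-01T12:00:01 accepted connection from 203.0.113.7",
    "2017-03-01T12:00:05 user jdoe ran notepad.exe",
    "2017-03-01T12:00:09 service installed: PSEXESVC (psexec)",
]

# Keep every line that matched, paired with the reason it matched.
hits = [(line, reason) for line in log_lines if (reason := scan_line(line))]
```

The limitation is exactly the one described above: a script like this only catches indicators someone already knew to list, so the maintenance burden grows with every new technique.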
So much time is spent maintaining data collection, creating rules, and adjusting views of the centralized data that security teams are often left with insufficient time to analyze the tens of thousands of alerts across their many dashboards. We automate the collection, the attribution to users, and the normalization of user behavior, so your team can focus on analyzing incidents instead of manually checking networking protocols, constructing algorithms, and writing scripts on top of your centralized data. By combining the knowledge of Rapid7's research, development, and incident response teams with that of every customer we bring on board, we are continually testing and adapting the detection capabilities of every organization involved. Because the range of current attacker techniques demands a detection-in-depth strategy, we integrate with the trusted solutions that detect known malware through malicious network signatures and application characteristics, layering user behavior analytics on top of all of it; we also supplement that with effective detection for the many behaviors that don't involve malware, such as stolen credential use or lateral movement across endpoints and your managed cloud services. With all of these alerts in a single place, we can reduce the noise and simplify the process of scoping an incident in your organization of expanding chaos. If you want to learn more about how we can do this for your company, check out our Incident Detection and Response page. I think you'll find we have the services and solutions to quickly improve incident response at your organization.
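Normalizing a user's behavior, as described above, boils down to learning each user's baseline and flagging values far outside it. A toy z-score sketch (the metric, counts, and threshold are all invented for illustration, not InsightIDR's actual model):

```python
from statistics import mean, stdev

# Toy per-user daily logon counts; a real baseline covers many behaviors.
history = {
    "jdoe":   [3, 4, 3, 5, 4, 3, 4],
    "asmith": [1, 1, 2, 1, 1, 1, 2],
}
today = {"jdoe": 4, "asmith": 19}   # asmith's count is far off baseline

def anomalies(history, today, z_threshold=3.0):
    """Flag users whose value today sits more than z_threshold
    standard deviations above their own historical mean."""
    flagged = []
    for user, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        z = (today[user] - mu) / sigma if sigma else float("inf")
        if z > z_threshold:
            flagged.append(user)
    return flagged

print(anomalies(history, today))
```

The point of baselining per user is that 19 logons is anomalous for asmith but might be routine for a service account, so a single global threshold would either miss it or drown the team in noise.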

Positive Secondary Effects: Incident Response Teams Benefit From Cloud Applications

We primarily hear the term "secondary effects" after natural disasters: "an earthquake causes a gas line to rupture and a fire ensues" or "a volcano erupts and the sulfur cloud shuts down all flights across the Atlantic." But there are also positive secondary effects. If developed properly, cloud applications bring with them secondary effects that turn singular events into benefits for the whole customer community. Since I work for a security company, I cannot write a blog post about cloud applications without addressing the obvious concern so many people have.

Cloud Security

The first question every security professional should ask a vendor when considering a cloud application is "How have you secured your cloud?" It is simply proper due diligence. The first step Rapid7 took when building a security product on Amazon Web Services (AWS) was to lay out a strategy for making it secure and scalable; we did a great deal of research and recognized that if you go to the necessary lengths (as we did), a cloud application can be just as secure as an on-premises app. Despite this forethought, I still run into a great many security pros who would prefer to have the data reside on one of their own servers so they can truly control it. This is a natural feeling in the security industry, because we have been taught to be paranoid and trust no one, but I liken it to why people feel safer driving a car than flying on a plane controlled by someone else, despite the fact that a pilot specializes in and agonizes over flying the way we do over securing your data. The next time you discover that a rogue server was set up in your organization without informing the security team, remember that security is the primary focus of our InsightIDR development team. Always ask your vendors to walk you through the steps they have taken to secure your data in the cloud, so that you can take advantage of the secondary effects I describe below.
Secondary Effect #1 - Quick reaction to change

One of the most common pains I hear from incident response teams (internal or MSSP) is keeping the data flowing into their SIEM. The SIEM vendors are not to blame here; whether you are talking about firewalls, VPNs, or other devices containing valuable information, the export formats vary widely and change without notice. The worst-case scenario: a concerning event occurs and the investigation warrants a look at the firewall logs, but those logs have not been reaching the SIEM since the last firewall software update silently changed the logging format. Now, InsightIDR does not have a Nostradamus-like power to predict which vendor will change formats without notice, but it can take your team out of the silo that makes every team feel the same pain. Let me explain: once a single InsightIDR customer has set up, say, a Fortinet FortiGate firewall as an event source, the parser and collection method are there for all new customers, who simply choose it from a drop-down list and connect in seconds via syslog, a shared folder, or otherwise. This means your team does not have to remember how it set up parsing of Fortinet logs at a previous organization, or ever even see the logs in raw form. This helps with initial setup time, but it really helps when firewall data suddenly stops reaching the InsightIDR cloud because the format has changed. One Rapid7 customer will get a notification that data has stopped flowing, and the InsightIDR team's quick update to the parsing logic for that event source means we will typically support the new format before other IR teams end up with a gap in their data. The more organizations in the community, the lower the likelihood that your team even notices that a log format has changed.
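The failure mode described above, where a silent format change breaks parsing, can be sketched as a parser that tries its known formats and counts misses, since a spike in misses is the signal that a vendor changed something. The regexes and log lines below are invented for illustration and bear no relation to Fortinet's real formats:

```python
import re

# Two illustrative log formats; real vendor formats differ and change without notice.
FORMATS = [
    re.compile(r'date=(?P<date>\S+) srcip=(?P<src>\S+) action=(?P<action>\S+)'),
    re.compile(r'(?P<date>\S+) (?P<src>\S+) -> \S+ \[(?P<action>\w+)\]'),
]

def parse(line):
    """Try every known format; return parsed fields or None if none match."""
    for fmt in FORMATS:
        m = fmt.search(line)
        if m:
            return m.groupdict()
    return None

lines = [
    'date=2017-03-01 srcip=10.0.0.5 action=deny',
    'date=2017-03-01 src=10.0.0.5 proto=tcp verdict{deny}',  # new, unsupported format
]

parsed, failures = [], 0
for line in lines:
    record = parse(line)
    if record:
        parsed.append(record)
    else:
        failures += 1  # a spike here is the signal that a format changed
```

In the shared-parser model the blog describes, adding one regex to `FORMATS` on the vendor side fixes the gap for every customer at once, instead of each team rediscovering the break independently.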
Secondary Effect #2 - Detection that learns faster than any single person

Though it is not voiced as frequently as changing log formats, I have heard a great many security leaders describe the challenge of finding seasoned incident response experts. Typically it comes in statements like "I worked at company X when they were breached and helped the team build detection for the techniques used there" or "I have a fantastic ArcSight guy and I don't know what I would do without him." Neither of these statements is a bad thing, but not every organization with security concerns can make them. In fact, I consistently hear that it is challenging for CSOs to fill even the openings that are already approved. This is where InsightIDR's customer base gets the benefit of what I call a collective mind. We generate a lot of our detection capabilities with our customers, whether it is a customer saying "I ran X with WCE and didn't get an alert" or Rapid7 creating a new alert and finding a few customers willing to test it and make sure it brings value. If attackers used a certain trick to stay hidden in 2011, we will build a way to detect it; more importantly, if they use a new technique in 2014 and a single person (a Rapid7 employee, a customer, or even a friend) learns of it, the entire InsightIDR customer base gets the benefit of detecting that action in the future. Attackers share techniques with one another, so the only way to keep up is to do the same in our detection.

Secondary Effect #3 - User behavior analytics across a larger dataset

There are a lot of "security analytics" companies emerging these days, so I feel it important to point out the biggest challenge you will have with on-premises solutions: they only analyze the data in the silos I mentioned above.
The ability to reduce noise and correlate the right disparate data for detection grows slowly when there is no collective mind on which research and proactive analysis can be performed. The on-premises model assumes a security analytics solution already knows what data is important before it is installed, and that rapidly learning new use cases and adapting is unimportant. This is why we believe it was necessary to build in the cloud, despite the natural resistance some security teams still have. If you want to take advantage of these secondary effects at your organization, please request a demo. We will happily step through the way we have secured our cloud.
