Rapid7 Blog

SIEM  

SIEM Market Evolution And The Future of SIEM Tools

There’s a lot to be learned by watching a market like SIEM adapt as technology evolves, both for the attackers and the analysts.

InsightIDR Now Supports Multi-Factor Auth and Data Archiving

InsightIDR is now part of the Rapid7 platform. Learn more about our platform vision and how it enables you to have the SIEM solution you've always wanted.

Want to try InsightIDR in Your Environment? Free Trial Now Available

InsightIDR, our SIEM powered by user behavior analytics, is now available to try in your environment. This post shares how it can help your security team.

PCI DSS Dashboards in InsightIDR: New Pre-Built Cards

No matter how much you mature your security program and reduce the risk of a breach, your life includes the need to report across the company, and periodically, to auditors. We want to make that part as easy as possible. We built InsightIDR as a SaaS SIEM on top of our proven User Behavior Analytics (UBA) technology to address your incident detection and response needs. Late last year, we added the ability to create custom dashboards and reports through the Card Library and the Log Entry Query Language (LEQL). Now, we’ve added seven pre-built cards that align directly to PCI DSS v3.2 to help you find important behaviors and communicate them across the company, the board, and external auditors. Let’s walk through a quick overview of the seven cards and how they tie to the requirements in PCI DSS v3.2.

1.3.5: Denied Connection Attempts

PCI Requirement 1 covers installing and maintaining a firewall configuration to protect cardholder data. InsightIDR can easily ingest and visualize all of your security data, and with our cloud architecture, you don’t need to worry about housing and maintaining a datastore, even as your organization grows with global offices or acquisitions. The above card is a standard, important use case for identifying anomalies and trends in your firewall data. In this case, the card runs the query “where(connection_status=DENY) groupby(source_address)” over your firewall log data.

4.1c: Potential Insecure Connections

It’s important to identify traffic destined for port 80, or the use of outdated SSL/TLS, especially for traffic around the CDE. This can help identify misconfigurations and ensure that, per Requirement 4, transmission of cardholder data is encrypted. As with all cards, you can click on the top-right gear to pivot into Log Search for more context around any particular IP address.

7.1.2b & 8.1.4: Users Accessing the CDE

Identifying which users have accessed the PCI environment is important, as is digging a layer deeper: when did they last access the CDE, and from what asset? This is all important context when identifying the use of compromised credentials. If the creds for Emanuel Osborne, who has access to the cardholder environment, are used to log in from a completely new asset, should your team be worried? We think so—and that’s why our pre-built detections will automatically alert you. From this card, you can pivot to Log Search to identify the date of last access. In the top global search, any name can be entered to show you all of the assets where those credentials have been used (new asset logon is tracked as a notable behavior).

8.1.1: Shared/Linked Accounts in the CDE

Credentials being shared by multiple people is dangerous, as it makes it much more difficult to retrace behavior and identify compromise. This card draws from asset authentication data to identify when the source account is not the destination account (where(sourceaccount != destinationaccount) groupby(destinationaccount)), so your team can proactively reduce this risk, especially for the critical CDE.

8.1.3a: Monitor Deactivated Accounts

Similar to the above, it’s important to know when deactivated accounts are re-enabled and used to access the CDE—many InsightIDR alerts focus on this attack vector, as we’ve found that disabled and service accounts are common targets for lateral movement. Related: See how InsightIDR allows you to detect and investigate lateral movement. This card highlights users with accounts deactivated over the last 30 days.
10.2.4: Highlight Relevant Log Events
10.2.5a: Track & Monitor Authentications

Ah, the beefy Requirement 10: track and monitor access to network resources and cardholder data. This is where InsightIDR shines. All of your disparate log data is centralized (Req. 10.2) to detect malicious behavior across the attack chain (Req. 10.6). With the standard subscription, the data is stored and fully searchable for 13 months (Req. 10.7). These two cards highlight failed and successful authentications, so you can quickly spot anomalies and dig deeper. If you’ve been using InsightIDR for a few months, you already know that we’ll surface important authentication events to you in the form of alerts and notable events. These cards make it easier to share your findings and current posture outside the team.

For a comprehensive list of how InsightIDR can help you maintain PCI compliance, check out our PCI DSS v3.2 guide here. If you don’t have InsightIDR, check out our interactive product tour to see how it can help unify your data, detect across the attack chain, and prioritize your search with security analytics.
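If you like to prototype this kind of grouping outside the product, the logic behind a card such as 1.3.5 is easy to reason about. Below is a minimal Python sketch, not InsightIDR code, that mimics what “where(connection_status=DENY) groupby(source_address)” does; the field names mirror the LEQL above, and the sample firewall records are invented for illustration.

```python
from collections import Counter

# Invented sample firewall log entries; real data would come from your firewall's log feed.
firewall_events = [
    {"source_address": "10.0.4.12",  "destination_port": 443,  "connection_status": "ACCEPT"},
    {"source_address": "203.0.113.7", "destination_port": 22,   "connection_status": "DENY"},
    {"source_address": "203.0.113.7", "destination_port": 3389, "connection_status": "DENY"},
    {"source_address": "10.0.4.99",  "destination_port": 80,   "connection_status": "DENY"},
]

# Equivalent of: where(connection_status=DENY) groupby(source_address)
denied = (e for e in firewall_events if e["connection_status"] == "DENY")
counts = Counter(e["source_address"] for e in denied)

# Print the grouped counts, most active sources first, much like the dashboard card charts them.
for source, total in counts.most_common():
    print(f"{source}: {total} denied connection(s)")
```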

More Answers, Less Query Language: Bringing Visual Search to InsightIDR

Sitting down with your data lake and asking it questions has never been easy. In the infosec world, there are additional layers of complexity. Users are bouncing between assets, services, and geographical locations, with each monitoring silo producing its own log files and slivers of the complete picture. From a human perspective, distilling this data requires two unique skillsets:

Incident Response: Is this anomalous activity a false positive, a misconfiguration, or true malicious behavior?
Data Manipulation: What search query should I construct to get what I need? Do I need to build a custom rule for this, or report on this statistic?

We’ve built InsightIDR with the goal of reducing friction and complexity on both of these fronts. On the incident response side, you’re armed with a dossier of user behavior analytics across network, endpoint, and cloud services to make faster, informed decisions. You can now also enjoy Visual Search, which aims to lower the complexity of writing queries and making sense of your wealth of log data. Visual Search was first released in InsightOps, our solution for IT infrastructure monitoring and troubleshooting. It’s had a great reception, and we’re proud that it’s now a shared service also available in InsightIDR. Visual Search identifies anomalies, allows for flexible drill-downs, and helps you build queries without using the Log Entry Query Language (LEQL).

Your First Visual Search

In InsightIDR, start by heading to Log Search. You’ll notice that we’ve refreshed the look and feel—we’re continuously improving the speed and responsiveness of the search technology. A breakdown of the updated interface: activate Visual Search by selecting it under the Mode dropdown. At this point, three cards will auto-populate, proactively identifying anomalies from your data. For each data set, we brainstormed with security teams, including our own, to map out interesting starter queries. You can click on the gear to edit, copy, or remove a card. This is the same architecture as the cards in Dashboards, so the suggested queries can improve your LEQL skills and help you see your data differently.

From here, you can click into any of the bars or data points on a card to drill further. For example, for the “Group by destination_port” card, we can click on the 5666 bar. It automatically performs the search query where(destination_port=5666). Visual Search is a great first step in highlighting “where to look”. As each data set is enriched with user and location data, this feature really highlights the user behavior analytics core in InsightIDR. These cards wouldn’t be possible to populate from the raw log data alone. By proactively identifying anomalies tailored to each data set, and guiding you towards LEQL search strings, you can find answers while gaining skill along the way.

If you don’t have InsightIDR, but would like to know how customers are using the combined UBA+SIEM+EDR capabilities, head over to our interactive product tour to explore top use cases.
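To picture the card-then-drill-down flow described above, here is a small, hypothetical Python sketch of the same two steps: group raw events by destination port, then filter to a single port, which is what clicking the 5666 bar effectively does with where(destination_port=5666). The records and field names are assumptions for illustration, not InsightIDR internals.

```python
from collections import Counter

# Assumed sample of parsed log entries; in practice these come from your centralized log search.
events = [
    {"destination_port": 5666, "source_asset": "nagios-01", "user": "svc_monitor"},
    {"destination_port": 5666, "source_asset": "nagios-01", "user": "svc_monitor"},
    {"destination_port": 443,  "source_asset": "laptop-42", "user": "eosborne"},
    {"destination_port": 3389, "source_asset": "jump-01",   "user": "admin_jdoe"},
]

# Step 1: the card view, grouping by destination_port to see which ports dominate.
by_port = Counter(e["destination_port"] for e in events)
print("Events by destination port:", dict(by_port))

# Step 2: the drill-down, equivalent of where(destination_port=5666).
drill_down = [e for e in events if e["destination_port"] == 5666]
for e in drill_down:
    print(f"port 5666 traffic from {e['source_asset']} as {e['user']}")
```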

SIEM Security Tools: Four Expensive Misconceptions

Why modern SIEM security solutions can save you from data and cost headaches. If you want to reliably detect attacks across your organization, you need to see all of the activity that's happening on your network. More importantly, that activity needs to be filtered and prioritized by risk, across assets and users, to help you report on how the team is measurably chipping away at Risk Mountain™. Today, the only solution capable of flexibly ingesting, correlating, and visualizing data from a sprawling tool stack is a SIEM solution. SIEMs don't get a lot of love; some might say their deployment felt like a data lake glacier, where budget dollars flowed in, never to leave. Advances in SIEM tools and customer pain are converging, as organizations look to cut losses on stagnant deployments and try a new approach. In this post, let's cover four misconceptions that you won't have to suffer from with today's nimble and adaptive SIEMs.

MISCONCEPTION #1: SIEMs are complex, unwieldy tools that take months to deploy and a large dedicated staff to keep running.

REALITY: Cloud architecture makes SIEM deployment quicker and maintenance easier than ever before. More SIEM security tools today offer cloud deployment as an option, so there is no longer the need for a large initial hardware investment. In addition, SIEM providers now include pre-built analytics in their solutions, so security teams don't need to spend recurring hours setting up and refining detection rules as analysts comb through more and more data. The simpler setup of SIEMs running in the cloud, combined with pre-built analytics, means that an organization can get started with SIEM security technology in just a few days instead of months, and that it won't have to continually add staff to keep the SIEM up and running effectively. When choosing a SIEM, define the use cases you'd like the deployment to tackle and consider a Proof of Concept (POC) before making a purchase; you'll have better expectations for success and see how quickly it can identify threats and risk.

MISCONCEPTION #2: As SIEMs ingest more data, data processing costs skyrocket into the exorbitant.

REALITY: Not all SIEMs come with burdensome cost as deployment size increases. Traditional SIEM pricing models charge by the quantity of data processed or indexed, but this model penalizes the marketplace. SIEMs become more effective at detecting attacks as more data sources are added over time, especially those that can identify attacker behaviors. As a result, any pricing model that discourages you from adding data sources could hamstring your SIEM's efficacy. Work with your SIEM vendor to determine what data sets you need today and may need in the future, so you can scale effectively without getting burned.

MISCONCEPTION #3: SIEMs aren't great at detection. They should primarily be used once you know where to look.

REALITY: SIEMs with modern analytics can be extremely effective at detecting real-world attack behaviors in today's complex environments. Related to misconception number two above, if you can't process as many data sources as possible—such as endpoints, networks, and cloud services—then you are potentially limiting your SIEM's ability to detect anomalies and attacks in your environment. In fact, there are many traces of attackers that require the comprehensive data sets fed into a SIEM. Two examples are detecting the use of stolen passwords and lateral movement, extremely common behaviors once an attacker has access to the network.
At Rapid7, we detect this by first linking together IP Address > Asset > User data and then using graph mining and entity relationship modeling to track what is “normal” in each environment. Outside of SIEMs and User Behavior Analytics (UBA) solutions, this is incredibly hard to detect. In a nutshell: SIEM security tools need that data to be effective, so if you restrict the data coming in, they won't be as effective. A SIEM with modern analytics will be capable of detecting real-world attack behaviors earlier in the attack chain.

MISCONCEPTION #4: SIEMs can ingest and centralize log files and network data, but have limited coverage for cloud services and remote workers.

REALITY: Today's SIEMs can and should account for data coming in from the cloud and endpoints. Network-only data sources may be the norm for more traditional SIEMs on the market, but newer SIEMs also pull in data from endpoints and cloud services to make sure you're detecting attacker behavior no matter where it may occur. Just as the perimeter has shifted from the corporate network walls to the individual user, SIEMs have had to adapt to collect more data from everywhere these users work, namely endpoints and cloud services. Make sure any SIEM security solution you're considering can integrate these additional data sources, not just traditional log files and network data.

At Rapid7, we feel strongly that customers shouldn't have to deal with these past pitfalls, and this mindset is expressed throughout InsightIDR, our solution for incident detection and response. On Gartner's Peer Insights page, we've been recognized by customers for resetting expectations around time to value and ease of use:

“We are able to monitor many sources with a very small security team and provide our clients with the peace of mind usually only achieved with large security departments.”

“[InsightIDR]… on its own, mitigated against 75% of identified threats within our organisation, but with the simplicity of use even my granny could get to grips with.”

Want to try InsightIDR at your organization? Start with our on-demand 20-minute demo here, or contact us; we want to learn about your challenges and provide you with answers.
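The attribution step mentioned above, linking IP Address > Asset > User before any behavioral modeling happens, can be pictured with a short sketch. This is not Rapid7's implementation; it is a simplified illustration with made-up DHCP and authentication records showing how a raw network event gets tied back to a person.

```python
# Made-up lookup tables; in a real deployment these are built continuously
# from DHCP leases and authentication logs.
dhcp_leases = {"10.0.4.12": "LAPTOP-42"}           # IP address -> asset
last_logon = {"LAPTOP-42": "emanuel.osborne"}      # asset -> most recent interactive user

def attribute(event):
    """Enrich a raw network event with the asset and user behind the source IP."""
    asset = dhcp_leases.get(event["source_ip"], "unknown-asset")
    user = last_logon.get(asset, "unknown-user")
    return {**event, "asset": asset, "user": user}

raw_event = {"source_ip": "10.0.4.12", "destination": "198.51.100.9", "action": "outbound-smb"}
print(attribute(raw_event))
# -> the same event, now carrying 'asset': 'LAPTOP-42' and 'user': 'emanuel.osborne'
```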

Preparing for GDPR

GDPR is coming… If your organisation does business with Europe, or more specifically does anything with the Personal Data of EU Citizens who aren't dead (i.e. Natural Persons), then, just like us, you're going to be in the process of living the dream that is preparing for the General Data Protection Regulation. For many organisations, this is going to be a gigantic exercise: even if you have implemented processes and technologies to meet current regulations, there is still additional work to be done.

Penalties for infringements of GDPR can be incredibly hefty; they are designed to be dissuasive. Depending on the type of infringement, the fine can be €20 million or 4% of your worldwide annual turnover, whichever is the higher amount. Compliance is not optional, unless you fancy being fined eye-watering amounts of money, or you really don't have any personal data of EU citizens within your control. The Regulation applies from May 25th 2018. That's the day from which organisations will be held accountable, and depending on which news website you choose to read, many organisations are far from ready at the time of writing this blog.

Preparing for GDPR is likely to be a cross-functional exercise, as Legal, Risk & Compliance, IT, and Security all have a part to play. It's not a small amount of regulation (are they ever?) to read and understand either: there are 99 Articles and 173 Recitals. I expect if you're reading this, it's because you're hunting for solutions, services, and guidance to help you prepare. Whilst no single software or services vendor can act as a magic bullet for GDPR, Rapid7 can certainly help you cover some of the major security aspects of protecting Personal Data. In addition to having solutions to help you detect attackers earlier in the attack chain, and service offerings that can help you proactively test your security measures, we can also jump into the fray if you do find yourselves under attack. Processes and procedures, training, technology, and services all have a part to play in GDPR. Having a good channel partner to work with during this time is vital, as many will be able to provide you with the majority of aspects needed. For some organisations, changes to roles and responsibilities are required too, such as appointing a Data Protection Officer and nominating representatives within the EU to be points of contact.

So what do I need to do?

If you're just beginning your GDPR compliance quest, I'd recommend you take a look at this guide, which will get you started in your considerations. Additionally, having folks attend training so that they can understand and learn how to implement GDPR is highly recommended: spending a few pounds/euros/dollars, etc. on training now can save you from the costly infringement fines later on down the line. There are many courses available; in the UK I recently took this foundation course, but do hunt around to find the best classroom or virtual courses that make sense for your location and teams. Understanding where Personal Data physically resides, the categories of Personal Data you control and/or process, how and by whom it is accessed, and how it is secured are all areas that you have to deal with when complying with GDPR. Completing Privacy Impact Assessments is a good step here. Processes for access control, incident detection and response, breach notification, and more will also need review or implementation.
Being hit with a €20 million fine is not something any organisation will want to be subject to. Depending on the size of your organisation, a fine of this magnitude could easily be a terminal moment. There is some good news: demonstrating compliance, mitigating risk, and ensuring a high level of security are factors that are considered if you are unfortunate enough to experience a data breach. But ideally, not being breached in the first place is best, as I'm sure you'd agree, so this is where your security posture comes in. Article 5, which lists the six principles of processing personal data, states that personal data must be processed in an appropriate manner so as to maintain security. This principle is covered in more detail by Article 32, which you can read more about here.

Ten Recommendations for Securing Your Environment

1. Encrypt data, both at rest and in transit. If you are breached, but the Personal Data is rendered unintelligible to the attacker, then you do not have to notify the Data Subjects (see Article 34 for more on this). There are lots of solutions on the market today; have a chat with your channel partner to see what options are best for you. (A minimal illustration of encryption at rest appears at the end of this post.)

2. Have a solid vulnerability management process in place, across the entire ecosystem. If you're looking for best-practice recommendations, do take a look at this post. Ensuring ongoing confidentiality, integrity, and availability of systems is part of Article 32; if you read Microsoft's definition of a software vulnerability, it speaks to these three aspects.

3. Backups. Backups. Backups. Please make backups. Not just in case of a dreaded ransomware attack; they are good housekeeping anyway in case of things like storage failure, asset loss, natural disaster, even a full cup of coffee over the laptop. If you don't currently have a backup vendor in place, Code42 have some great offerings for endpoints, and there are a plethora of server and database options available on the market today. Disaster recovery should always be high on your list regardless of which regulations you are required to meet.

4. Secure your web applications. Privacy-by-design needs to be built in to processes and systems; if you're collecting Personal Data via a web app and still using http/clear text, then you're already going to have a problem.

5. Pen tests are your friend. Attacking your systems and environment to understand your weak spots will tell you where you need to focus, and it's better to go through this exercise as a real-world scenario now than wait for a ‘real' attacker to get in to your systems. You could do this internally using tools like Metasploit Pro, and you could employ a professional team to perform regular external tests too. Article 32 says that you need to have a process for regularly testing, assessing, and evaluating the effectiveness of security measures. Read more about penetration testing in this toolkit.

6. Detect attackers quickly and early. Finding out that you've been breached roughly five months after it first happened is an all too common scenario (current stats from Mandiant say that the average is 146 days after the event). Almost two-thirds of organisations told us that they have no way of detecting compromised credentials, which has topped the list of leading attack vectors in the Verizon DBIR for the last few years. User Behaviour Analytics provides you with the capabilities to detect anomalous user account activity within your environment, so you can investigate and remediate fast.

7. Lay traps.
Deploying deception technologies, like honey pots and honey credentials, is a proven way to spot attackers as they start to poke around in your environment and look for methods to access valuable Personal Data.

8. Don't forget about cloud-based applications. You might have some approved cloud services deployed already, and unless you've switched off the internet it's highly likely that there is a degree of shadow IT (a.k.a. unsanctioned services) happening too. Making sure you have visibility across sanctioned and unsanctioned services is a vital step to securing them, and the data contained within them.

9. Know how to prioritise and respond to the myriad of alerts your security products generate on a daily basis. If you have a SIEM in place, that's great, providing you're not getting swamped by alerts from the SIEM, and that you have the capability to respond 24x7 (attackers work evenings and weekends too). If you don't have a current SIEM (or the time or budget to take on a traditional SIEM deployment project), or you are finding it hard to keep up with the number of alerts you're currently getting, take a look at InsightIDR; it covers a multitude of bases (SIEM, UBA, and EDR), is up and running quickly, and generates alert volumes that are reasonable for even the smallest teams to handle. Alternatively, if you want 24x7 coverage, we also have a Managed Detection and Response offering which takes the burden away and is your eyes and ears regardless of the time of day or night.

10. Engage with an incident response team immediately if you think you are in the midst of an attack. Accelerating containment and limiting damage requires fast action. Rapid7 can have an incident response engagement manager on the phone with you within an hour.

Security is just one aspect of the GDPR, for sure, but it's very much key to compliance. Rapid7 can help you ready your organisation; please don't hesitate to contact us or one of our partners if you are interested in learning more about our solutions and services. GDPR doesn't have to be GDP-argh!
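As promised in recommendation 1 above, here is a concrete illustration of encrypting data at rest. It is a minimal sketch using the widely available Python cryptography library, not a full key-management strategy; the filename and plaintext are placeholders.

```python
# Minimal illustration of encrypting a file at rest with symmetric encryption.
# Requires the "cryptography" package. Key management (secure storage, rotation,
# access control) is deliberately out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, keep this in a secrets manager, not on disk
fernet = Fernet(key)

plaintext = b"personal-data-goes-here"          # placeholder content
ciphertext = fernet.encrypt(plaintext)

with open("records.enc", "wb") as f:            # placeholder filename
    f.write(ciphertext)

# Later, with access to the key, the data can be recovered:
with open("records.enc", "rb") as f:
    assert fernet.decrypt(f.read()) == plaintext
```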

Incident Detection and Investigation - How Math Helps But Is Not Enough

I love math. I am even going to own up to having been a "mathlete" and looking forward to the annual UVM Math Contest in high school. I pursued a degree in engineering, so I can now more accurately say that I love applied mathematics, which has a much different goal than pure mathematics. Taking advanced developments in pure mathematics and applying them to various industries in a meaningful manner often takes years or decades. In this post, I want to provide the necessary context for math to add a great deal of value to security operations, but also explain the limitations and issues that will arise when it is relied upon too heavily.

A primer on mathematics-related buzzwords in the security industry

There are always new buzzwords used to describe security solutions with the hope that they will grab your attention, but often the specific detail of what's being offered is lost or missing. Let's start with my least favorite buzzphrase:

Big Data Analytics - This term is widely used today, but is imprecise and means different things to different people. It is intended to mean that a system can process and analyze data at a speed and scale that would have been impossible a decade ago. But that too is vague. Given the amount of data generated by security devices today, the scale of continually growing networks, and the speed with which attackers move, being able to crunch enormous amounts of data is a valuable capability for your security vendors to have, but that capability tells you very little about the value derived from it. If someone tries to sell you their product because it uses Cassandra or MongoDB or another of the dozens of NoSQL database technologies in combination with Hadoop or another map/reduce technology, your eyes should gloss over, because what matters is how these technologies are being used. Dig deeper and ask "so your platform can process X terabytes in Y seconds, but how does that specifically help me improve the security of my organization?"

Next, let me explain a few of the more specific, but still oversold, math-related (and data science) buzzwords:

Machine Learning is all about defining algorithms flexible and adaptive enough to learn from historical data and adjust to the changes in a given dataset over time. Some people prefer to call it pattern recognition because it uses clusters of like data and advanced statistical comparisons to predict what would happen if the monitored group were to continue behaving in a reasonably close manner to that previously observed. The main benefit of this field for security is the possibility of distinguishing the signal from the noise when sifting through tons of data, whether using clustering, prediction models, or something else.

Baselining is a part of machine learning that is actually quite simple to explain. Given a significant sample of historical data, you can establish various baselines that show a normal level of any given activity or measurement. The value of baselining comes from detecting when a measured value deviates significantly from the established historical baseline. A simple example is credit card purchases: consider an average credit card user who is found to spend between $600 and $800 per week. This is the baseline for credit card spending for this person.

Anomaly Detection refers to the area of machine learning that identifies the events or other measurements in a dataset which are significantly different from an established pattern. These detected events are called "outliers", like the Malcolm Gladwell book.
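To make baselining and anomaly detection concrete, here is a toy Python sketch using the credit card illustration above; the weekly spend figures are invented, and a production system would use far richer statistics than a simple z-score.

```python
import statistics

# Invented weekly spend history for one cardholder (the established baseline).
weekly_spend = [620, 710, 680, 750, 640, 790, 700, 660]

mean = statistics.mean(weekly_spend)
stdev = statistics.stdev(weekly_spend)

def assess(amount, threshold=3.0):
    """Flag a new weekly total whose z-score is far outside the baseline."""
    z = (amount - mean) / stdev
    return "anomaly" if abs(z) > threshold else "within baseline"

print(assess(700))    # typical week: within baseline
print(assess(830))    # slightly outside the usual range, but not alarming: within baseline
print(assess(4000))   # uncharacteristic spend worth reviewing: anomaly
```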
Finding anomalous behavior on your network does not inherently mean you have found risky activity, just that these events differ from the vast majority of historically seen events in the organization's baseline. To extend the example above: if the credit card user spends $650 one week and $700 the next, that's in line with previous spending patterns. Even spending $575 or $830 is outside the established baseline, but not much cause for concern. Detecting an anomaly would be finding that the same user spent over $4,000 in a week. That is an uncharacteristic amount to spend, and the purchases that week should probably be reviewed, but it doesn't immediately mean fraud was committed.

Artificial Intelligence is not exactly a mathematics term, but it sometimes gets used as a buzzphrase by security vendors as a synonym for "machine learning". Most science fiction movies focus on the potentially negative consequences of creating artificial intelligence, but the goal is to create machines that can learn, reason, and solve problems the way the awesome brains of animals can today. "Deep Blue" and "Watson" showed this field's progress for chess and quiz shows, respectively, but those technologies were being applied to games with set rules and still needed significant teams to manage them. If someone uses this phrase to describe their solution, run, because all other security vendors would be out of business if this advanced research could be consistently applied to motivated attackers that play by no such set of rules when trying to steal from you.

Peer Group Analysis is actually as simple as choosing very similar actors (peers) that do, or are expected to, act in very similar manners, then using these groups of peers to identify when one outlier begins to exhibit behavior significantly different from its peers. Peer groups can be similar companies, similar assets, individuals with similar job titles, people with historically similar browsing patterns, or pretty much any cluster of entities with a commonality. The power of peer groups is to compare new behavior against the new behavior of similar actors rather than expecting the historical activity of a single actor to continue in perpetuity. Make sure the next time someone starts bombarding you with these terms that they can explain why they are using them and the results that you are going to see.

Mathematics will trigger new alerts, but you could just trade one kind of noise for another

The major benefit that user behavior analytics promises to security teams today is the ability to stop relying on the rules and heuristics primarily used for detection in their IPS, SIEM, and other tools. Great! Less work for the security team to maintain and research the latest attack, right? It depends. The time that you currently spend writing and editing rules in your monitoring solutions could very well be taken over by training the analytics, adjusting thresholds, tweaking the meaning of "high risk" versus "low risk", and any number of modifications that are not technically rule setting. If you move from rules and heuristics to automated anomaly detection and machine learning, there is no question that you are going to see outliers and risky behaviors that you previously did not. Your rules were most likely aimed at identifying patterns that your team somehow knows indicate malicious activity, and anomaly detection tools should not be restricted by the knowledge of your team.
However, not involving the knowledge of your team means that a great deal of the outliers identified will be legitimate to your organization, so instead of having to sift through thousands of false positives that broke a yes/no rule, you will have thousands of false positives on a risk scale from low to high. I have three examples of the kinds of false positives that can occur because human beings are not broadly predictable:

Rare events - Certain events occur in our lives that cause significant changes in behavior, and I don't mean having children. When someone changes roles in your organization, they are most likely going to immediately look strange in comparison to their established peer group. Similarly, if your IT staff stays late to patch servers every time a major vulnerability (with graphics and a buzz-name!) is released, this is now some of the most critical administrators and systems in the organization straying from any established baselines.

Periodic events - Someone taking vacation is unlikely to skew your alerting because the algorithms should be tuned to account for a week without activity, but what about annual audits for security, IT, accounting, etc.? What about the ongoing change in messaging systems and collaboration tools that constantly lead to data moving through different servers?

Rare actors - There are always going to be individuals with no meaningful peer; whether it is a server that is accessed by nearly every user in the organization (without their knowledge), like IIS servers, or a user that does extremely unique, cutting-edge research, like basically everyone on the Rapid7 Research team, mathematics has not reached the point where it can determine enough meaningful patterns to predict the behavior of some portion of the organization that you need to monitor.

Aside from dealing with a change in the noise, there is the very real risk that by relying too heavily on canned analytics to detect attacks, you can easily leave yourself open to manipulation. If I believe that your organization is using "big data analytics" as most are, I can pre-emptively start to poison the baseline for what is considered normal by triggering events on your network that set off alerts, but appear to be false positives upon closer investigation. Then, having forced this activity into some form of baseline, it can be used as an attack vector. This is the challenge that scientists always run into when observing humans: anyone that knows they are being observed can choose to act differently than they otherwise would, and you won't know. A final note on anomalies is that a great deal of them are going to be stupid behavior. That's right, I guarantee that negligence is a much more common cause of risky activity in your organization than malice, but an unsupervised machine learning approach will not know the difference.

InsightIDR blends mathematics with knowledge of attacker behavior

This post is not meant to say that applied mathematics has no place in incident detection or investigation. On the contrary, the Rapid7 Data Science team is continuously researching data samples for meaningful patterns to use in both areas. We just believe that you need to apply the science behind these buzzwords appropriately.
I would summarize our approach in three ways:

A blend of techniques: At times, simple alerts are necessary because the activity either should never occur in an organization or occurs so rarely that the security team wants to hear about it. The best example of this is providing someone with domain administrator privileges; incident response teams always want to know when a new king of the network has been crowned (a minimal sketch of this kind of alert appears at the end of this post). Some events cannot be assumed good when a solution is baselining or "learning normal", so there should be an extremely easy way for the security team to indicate which activities are permitted to take place in that specific organization.

Add domain expertise: Adding security domain knowledge is not unique to Rapid7, but thanks to our research, penetration test, and Metasploit teams, the breadth and depth of our familiarity with the tools attackers use and their stealth techniques is unmatched in the market. We continually use this in our analyses of large datasets to find new indicators of compromise, visualizations, and kinds of data that we will add to InsightIDR. Plus, if we cannot get the new data from your SIEM or existing data source, we will build tools like our endpoint monitor or no-maintenance honeypots to go out there and get the data.

Use outliers differently: Almost every user behavior analytics product in the market is using its algorithms to produce an enormous list of events sorted by each one's risk score. We believe in alerting infrequently, so that you can trust it is something worth investigating. Outliers? Anomalies? We are going to expose them and help you to explore the massive amount of data to hopefully discover unwanted activity, but the specific outliers have to pass our own tests for significance and noise level before we will turn them into alerts. Additionally, we will help you look through the data in the context of an investigation, because it can often add clarity to the traditional "search and compare" methods that your teams are likely using in your SIEM.

So if you want to drop mathematics into your network, flip a switch, and let its artificial intelligence magically save you from the bad guys, we are not the solution for you. Sadly, though, no solution out there is going to fulfill your desire any time soon. If you want to learn more about the way InsightIDR does what I described here, please check out our on-demand demo. We think you will appreciate our approach.
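As one hypothetical example of the "simple alert" category above, the sketch below watches directory change events for additions to highly privileged groups. The event format and group names are assumptions for illustration; this is not InsightIDR's detection logic.

```python
# Hypothetical directory-change events; a real feed would come from domain controller logs.
PRIVILEGED_GROUPS = {"Domain Admins", "Enterprise Admins"}

events = [
    {"action": "group_member_added", "group": "Sales", "member": "jsmith", "by": "helpdesk01"},
    {"action": "group_member_added", "group": "Domain Admins", "member": "eosborne", "by": "it_admin"},
]

def alerts(stream):
    """Yield an alert whenever someone is granted membership in a privileged group."""
    for e in stream:
        if e["action"] == "group_member_added" and e["group"] in PRIVILEGED_GROUPS:
            yield f"ALERT: {e['member']} added to {e['group']} by {e['by']}; verify this was approved"

for a in alerts(events):
    print(a)
```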

12 Days of HaXmas: Rudolph the Machine Learning Reindeer

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Sam the snowman taught me everything I know about reindeer [disclaimer: not actually true], so it only seemed logical that we bring him back to explain the journey of machine learning. Wait, what? You don't see the correlation between reindeer and machine learning? Think about it, that movie had everything: Yukon Cornelius, the Bumble, and of course, Rudolph himself. And thus, in sticking with the theme of HaXmas 2016, this post is all about the gifts of early SIEM technology, “big data”, and a scientific process.

SIEM and statistical models – Rudolph doesn't get to play the reindeer games

Just as Rudolph had conformist Donner's gift of a fake black nose promising to cover his glowing monstrosity [and it truly was impressive to craft this perfect deception technology with hooves], information security had the gift of early SIEM technology promising to analyze every event against known bad activity to spot malware. The banking industry had just seen significant innovation in the use of statistical analysis [a sizable portion of what we now call “analytics”] for understanding the normal in both online banking and payment card activities. Tracking new actions and comparing them to what is typical for the individual takes a great deal of computing power, and early returns in replicating fraud prevention's success were not good. SIEM had a great deal working against it when everyone suddenly expected a solution designed solely for log centralization to easily start empowering more complex pattern recognition and anomaly detection. After having witnessed, as consumers, the fraud alerts that can come from anomaly detection, executives started expecting the same from their team of SIEM analysts. Except, there were problems: the events within an organization vary a great deal more than the login, transfer, and purchase activities of the banking world; the fraud detection technology was solely dedicated to monitoring events in other, larger systems; and SIEM couldn't handle both data aggregation and analyzing hundreds of different types of events against established norms. After all, my favorite lesson from data scientists is that “counting is hard”. Keeping track of the occurrence of every type of event for every individual takes a lot of computing power and understanding of each type of event. After attempting to define alerts for transfer size thresholds, port usage, and time-of-day logins, no one understood that services like Skype using unpredictable ports and the most privileged users regularly logging in late to resolve issues would cause a bevy of false positives. This forced most incident response teams to banish advanced statistical analysis to the island of misfit toys, like an elf who wants to be a dentist.

“Big Data” – Yukon Cornelius rescues machine learning from the Island of Misfit Toys

There is probably no better support group friend for the bizarre hero, Yukon Cornelius, than “big data” technologies. Just as NoSQL databases, like Mongo, and map-reduce technologies, like Hadoop, were marketed as the solution to every conceivable challenge, Yukon proudly announced his heroism to the world.
Yukon carried a gun he never used, even when fighting the Bumble, and “big data” technology is similarly varied: each kind needs to be weighed against less expensive options for each problem. When jumping into a solution technology-first, most teams attempting to harness “big data” technology came away with new hardware clusters and mixed results; insufficient data and security experts miscast as data experts still prevent returns on machine learning from happening today. However, with the right tools and the right training, data scientists and software engineers have used “big data” to rescue machine learning [or statistical analysis, for the old school among you] from its unfit-for-here status. Thanks to “big data”, all of the original goals of statistical analysis in SIEMs are now achievable. This may have led to hundreds of security software companies marketing themselves as “the machine learning silver bullet”, but you just have to decide when to use the gun and when to use the pick axe. If you can cut through the hype to decide when the analytics are right and for which problems machine learning is valuable, you can be a reason that both Hermey, the dentist, and Rudolph, the HaXmas hero, reach their goal.

Visibility - only Santa Claus could get it from a glowing red nose

But just as Rudolph's red nose didn't make Santa's sleigh fly faster, machine learning is not a magic wand you wave at your SIEM to make eggnog pour out. That extra foggy Christmas Eve couldn't have been foggy all over the globe [or it was more like the ridiculous Day After Tomorrow], but Santa knows how to defy physics to reach the entire planet in a single night, so we can give him the benefit of the doubt and assume he knew when and where he needed a glowing red ball to shine brighter than the world's best LED headlights. I know that I've held a pretty powerful Maglite in the fog and still couldn't see a thing, so I wouldn't be able to get around Mt. Washington armed with a glowing reindeer nose. Similarly, you can't just hand a machine learning toolkit to any security professional and expect them to start finding the patterns they should care about across those hundreds of data types mentioned above. It takes the right tools, an understanding of the data science process, and enough security domain expertise to apply machine learning to the attacker behaviors well hidden within our chaotically normal environments. Basic anomaly detection and the baselining of users and assets against their peers should be embedded in modern SIEM and EDR solutions to reveal important context and unusual behavior. It's the more focused problems and data sets, such as pattern recognition within the characteristics of a website or PHP file, that demand the deliberate development of machine learning algorithms.

SIEM Tools Aren't Dead, They're Just Shedding Some Extra Pounds

Security Information and Event Management (SIEM) is security's Schrödinger's cat. While half of today's organizations have purchased SIEM tools, it's unknown if the tech is useful to the security team… or if its heart is even beating or deployed. In response to this pain, people, mostly marketers, love to shout that SIEM is dead, and analysts are proposing new frameworks with SIEM 2.0/3.0, Security Analytics, User & Entity Behavior Analytics, and most recently Security Operations & Analytics Platform Architecture (SOAPA). However, SIEM solutions have also evolved from clunky beasts to solutions that can provide value without requiring multiple dedicated resources. While some really want SIEM dead, the truth is it still describes the vision we all share: reliably find insights from security data and detect intruders early in the attack chain. What's happened in this battle of survival of the fittest is that certain approaches and models simply weren't working for security teams and the market. What exactly has SIEM lost in this sweaty regimen of product development exercise? Three key areas have been tightened and toned to make the tech something you actually want to use.

No More Hordes of Alerts without User Behavior Context

User Behavior Analytics. You'll find this phrase at every SIEM vendor's booth, and occasionally in their technology as well. Why? This entire market segment explosion spawned from two major pain points in legacy SIEM tech: (1) too many false-positive, non-contextual alerts, and (2) a failure to detect stealthy, non-malware attacks, such as the use of stolen credentials and lateral movement. By tying every action on the network to the users and assets behind them, security teams spend less time retracing user activity to validate and triage alerts, and can detect stealthy, malicious behavior earlier in the attack chain. Applying UBA to SIEM data results in higher-quality alerts and faster investigations, as teams spend less time retracing IPs to users and running tedious log searches.

Detections now Cover Endpoints Without Heavy Lifting

Endpoint Detection and Response. This is another super-hot technology of 2016, and while not every breach originates from the endpoint, endpoints are often an attacker's starting point and provide crucial information during investigations. There are plenty of notable behaviors that, if detected, are reliable signs of “investigate-me” behavior. A couple of examples:

Log Deletion
First Time Admin Action (or detection of privilege exploit; a simple sketch of this idea appears at the end of this post)
Lateral Movement

Any SIEM that doesn't offer built-in endpoint detection and visibility, or at the very least automated ways to consume endpoint data (and not just anti-virus scans!), leaves gaps in coverage across the attack chain. Without endpoint data, it's very challenging to have visibility into traveling and remote workers or detect an attacker before critical assets are breached. It can also complicate and slow incident investigations, as endpoint data is critical for a complete story. The below highlights a standard investigation workflow along with the relevant data sources to consult at each step. Incident investigations are hard. They require both incident response expertise (how many breaches have you been a part of?) and also data manipulation skills to get the information you need.
If you can't search for endpoint data from within your SIEM, that slows down the process and may force you to physically access the endpoint to dig deeper. Leading SIEMs today offer a combination of Agents or an Endpoint Scan to ingest this data, detect local activity, and have it available for investigations. We do all of this and supplement our endpoint detections with Deception Technology, which includes decoy Honey Credentials that are automatically injected into memory to better detect pass-the-hash and credential attacks.

Drop the Fear, Uncertainty, and Doubt About Data Consumption

There are a lot of things that excite me, for example, the technological singularity, autonomous driving, loading my mind onto a Westworld host. You know what isn't part of that vision? Missing and incomplete data. Today's SIEM solutions derive their value from centralizing and analyzing everything. If customers need to weigh the value of inputting one data set against another, that results in a fractured, frustrating experience. Fortunately, this too is now a problem of the past. There are a couple of factors behind these winds of change. Storage capacity continues to expand at close to a Moore's Law pace, which is fantastic, as our log storage needs are heavier than ever before. Vendors now offer mature cloud architectures that can securely store and retain log data to meet any compliance need, along with faster search and correlation than most on-premise deployments can dream about. The final shift, and one that's currently underway today, is with vendor pricing. Today's models revolve around Events per Second and Data Volume Indexed. But what's the point of considering endpoint, cloud, and log data if the inevitable data volume balloon means the org can't afford to do so? We've already tackled this challenge, and customers have been pleased with it. Over the next few years, new and legacy vendors alike will shed existing models to reflect the demand for sensible data pricing that finally arms incident investigators with the data and context they need.

There's a lot of pain with existing SIEM technology; we've experienced it ourselves, from customers, and from every analyst firm we've spoken with. However, that doesn't mean the goal isn't worthy or that the technology has continually failed to adapt. Can you think of other ways SIEM vendors have collectively changed their approach over the years? Share it in the comments! If you're struggling with an existing deployment and are looking to augment or replace, check out our webcast, “Demanding More From Your SIEM”, for recommendations and our approach to the SIEM you've always wanted.
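To illustrate one of the notable behaviors listed above, "First Time Admin Action", here is a small, hypothetical sketch of the underlying idea: keep a per-user record of administrative actions already seen, and flag the first occurrence of a new one. The field names and events are invented; this is not how InsightIDR implements the detection.

```python
from collections import defaultdict

# Remember which admin actions each user has performed before. In practice this state
# would be persisted and warmed up over a learning period, not kept in memory.
seen_admin_actions = defaultdict(set)

def check(event):
    """Return an alert string the first time a user performs a given admin action."""
    if not event.get("is_admin_action"):
        return None
    action = event["action"]
    if action not in seen_admin_actions[event["user"]]:
        seen_admin_actions[event["user"]].add(action)
        return f"First time admin action: {event['user']} performed '{action}' on {event['asset']}"
    return None

# Invented endpoint events for illustration.
events = [
    {"user": "jdoe", "asset": "FILESRV-01", "action": "clear_event_log", "is_admin_action": True},
    {"user": "jdoe", "asset": "FILESRV-01", "action": "clear_event_log", "is_admin_action": True},
]

for e in events:
    alert = check(e)
    if alert:
        print(alert)   # fires only on the first occurrence
```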

Cyber Threat Intelligence: How Do You Incorporate it in Your InfoSec Strategy?

In the age of user behavior analytics, next-gen attacks, polymorphic malware, and reticulating anomalies, is there a time and place for threat intelligence? Of course there is! But – and it seems there is always a ‘but' with threat intelligence – it needs to be carefully applied and managed so that it truly adds value and not just noise. In short, it needs to actually be intelligence, not just data, in order to be valuable to an information security strategy. We used to have the problem of not having enough information. Now we have an information overload. It is possible to gather data on just about anything you can think of, and while that can be a great thing (if you have a team of data scientists on standby), most organizations simply find themselves facing an influx of information that is overwhelming at best and contradictory at worst. Threat intelligence can help solve that problem.

What is Threat Intelligence?

As Rick Holland and I mentioned in our talk at UNITED Summit 2016, there are a variety of definitions and explanations for threat intelligence, ranging in size from a paragraph to a field manual. Here's the distilled definition: “Threat Intelligence helps you make decisions about how to prevent, detect, and respond to attacks.” That's pretty simple, isn't it? But it covers a lot of ground. The traditional role of intelligence is to inform policy makers. It doesn't dictate a particular decision, but informs them with what they need to make critical decisions. The same concept applies to threat intelligence in information security, and it can benefit everyone from a CISO to a vulnerability management engineer to a SOC analyst. All of those individuals have decisions to make about the information security program, and threat intelligence arms them with relevant, timely information that will help them make those decisions. If intelligence is making it harder for you to make decisions, then it is not intelligence.

When Threat Intelligence Fails

Threat intelligence can be a polarizing topic: you hate it or you love it. Chances are that if you hate it, you've probably been burned by threat feeds containing millions of indicators from who-knows-where, had to spend hours tracking down information from a vendor report with absolutely no relevance to your network, or are simply fed up with the clouds of buzzwords that distract from the actual job of network defense. If you love it, you probably haven't been burned, and we want to keep it that way. Threat intelligence fails for a variety of reasons, but the number one reason is irrelevance. Threat feeds with millions of indicators of uncertain origin are not likely to be relevant. Sensationalized threat actor reports with little detail but lots of fear, uncertainty, and doubt (FUD) are not likely to be relevant. Stay away from these, or the likelihood that you end up crying under your desk increases. So how DO you find what is relevant? That starts with understanding your organization and what you are protecting, and then seeking out threat intelligence about attacks and attackers related to those things. This could mean focusing on attackers that target your vertical or the types of data you are protecting. It could mean researching previously successful attacks on the systems or software that you use. By taking the time to understand more about the source and context behind your threat intelligence, you'll save a ton of pain later in the process.
The Time and Place for Threat Intelligence

Two of the most critical factors for threat intel are just that: time and place. If you're adding hundreds of thousands of indicators with no context and no expiration date, that will result in waves of false positives that dilute any legitimate alerts that are generated. With cloud architectures today, vendors have the ability to anonymously collect feedback from customers, including whether alerts generated by the intel are false positives or not. This crowdsourcing can serve as a feedback loop to continuously improve the quality of intelligence. For example, with this list, 16 organizations are using it, 252 alerts have been generated across the community, and none have been marked as false positives. The description also contains enough context to help defenders know how to respond to any alerts generated. This has served as valuable threat intelligence.

The second half is place: different intelligence should be applied differently in your organization. Strategic intelligence, such as annual trend reports or warnings about targeted threats to your industry, is meant to help inform decision makers. More technical intelligence, such as network-based indicators, can be used as firewall rules to prevent threats from impacting your network. Host-based indicators, especially those from your own incidents or from organizations similar to yours, can be used to detect malicious activity on your network. This is why you need to know exactly where your intelligence comes from; without that, proper application is a serious challenge. Your own incident experience is one of the best sources of relevant intelligence – don't let it go to waste! To learn about how you can add threat intelligence into InsightIDR, check out the Solution Short below.

Threat intelligence isn't as easy as plugging a threat feed into your SIEM. Integrating threat intelligence into your information security program involves (1) understanding your threat profile, (2) selecting appropriate intelligence sources, and (3) contextually applying it to your environment. However, once completed, threat intelligence will serve a very valuable role in protecting your network. Intelligence helps us understand the threats we face – not only identifying them as they happen, but understanding the implications of those threats and responding accordingly. Intelligence enables us to become persistent and motivated defenders, learning and adapting each step of the way.
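The "time and place" point above can be pictured with a short sketch: indicators carry context and an expiration date, and stale ones are ignored when matching events. The indicator values, dates, and field names below are made up for illustration and are not a specific product's format.

```python
from datetime import date

# Invented threat indicators; real ones would come from a curated, relevant feed.
indicators = [
    {"value": "198.51.100.23", "type": "ip", "context": "C2 server from a phishing campaign",
     "expires": date(2017, 6, 1)},
    {"value": "203.0.113.50", "type": "ip", "context": "old scanning host",
     "expires": date(2016, 1, 1)},   # expired; should no longer generate alerts
]

def match(event_ip, today):
    """Return context for any unexpired indicator matching the event's IP."""
    for ind in indicators:
        if ind["value"] == event_ip and ind["expires"] >= today:
            return ind["context"]
    return None

today = date(2017, 3, 15)
print(match("198.51.100.23", today))   # -> context string, worth an alert
print(match("203.0.113.50", today))    # -> None, the indicator has aged out
```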

Displace SIEM "Rules" Built for Machines with Custom Alerts Built For Humans

If you've ever been irritated with endpoint detection being a black box and SIEM detection putting the entire onus on you, don't think you had unreasonable expectations; we have all wondered why solutions were only built at such extremes. As software has evolved and our…

If you've ever been irritated with endpoint detection being a black box and SIEM detection putting the entire onus on you, don't think you had unreasonable expectations; we have all wondered why solutions were only built at such extremes. As software has evolved, and our base expectations with it, a lot more people have started to wonder why it requires so many hours of training just to make solutions do what they are designed to do. Defining a SIEM rule is the perfect example – crafting the right query and adding it to detection lore can take up to an hour, which is fine if you have nothing else to do all day.

Writing a SIEM rule in legacy systems harms security teams by demanding expertise

The training London black cab drivers endure has been examined a great deal in recent years, and for good reason: memorizing the ridiculous layout of London streets for a year before working eliminates "how do we get there?" annoyances and expands a region of the cabby's brain. However, requiring this level of expertise has been a massive barrier to entry for new drivers, and with the advent of GPS devices ["sat nav" to the angry London taxi drivers], somewhat unnecessary. In the taxi world, this has led to Uber providing rides to consumers at a third of the price. Similarly, when ArcSight [who define "simple" differently than I do] and QRadar were first deployed as the "single pane of glass" for the well-staffed organizations who could make them effective, it took more than six months to develop the skills and expertise necessary to translate the foreign language of many logs into meaningful rules for detection. Now that cloud solutions and continuous deployment have made collective learning possible, it feels impractical for Splunk or AlienVault experts to first translate the logs into a language your SIEM can understand, and then use this event format to define each and every one of the alerts that'll trigger. In this case, the negative impact isn't the cost of your services, but rather a decrease in how quickly your security team can adapt to new attacker techniques.

Understanding a SIEM rule and the corresponding alert makes every non-expert look for the translator

Whenever a customer walks me through the process of triaging and analyzing an alert, it reminds me of the effort to debug the satellite communication terminals I developed at Raytheon in 1999. We were running FORTRAN on x386 chips and breakpoints weren't a possibility, so the raw assembly code we traced through resembled the chirps and beeps of R2D2 until you spent a lot of time with it. It wasn't until you'd acquired this level of expertise that you'd understand how to backtrack from a bizarre message on a tiny screen through five to ten different CALL and GO TO statements until the mistake in the code presented itself. Just as I was forced to translate an output back to the raw machine data and then to the actual code written by someone else, today's SIEM analysts have to translate the alert's accompanying data to the actual behavior identified, and then to the reason it warranted a rule back when someone else wrote it. It certainly doesn't feel like you've got the information you need to take action; some digging is required before you gain the necessary insight. C-3PO would be really helpful here to immediately explain every alert in plain English.
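As a rough illustration of that translation burden – with entirely made-up field names and a made-up log format, not any vendor's actual schema – here's what "teaching" a SIEM to understand a firewall log before a single rule can fire might look like:

import re

# One vendor's raw firewall line (format invented for this example).
RAW = "2016-10-21T09:14:02Z host=fw01 action=DENY src=10.1.2.3 dst=198.51.100.20 dport=80"

# The "translation": a parser that turns the raw line into the event format rules expect.
PATTERN = re.compile(
    r"(?P<timestamp>\S+) host=(?P<host>\S+) action=(?P<action>\S+) "
    r"src=(?P<source_address>\S+) dst=(?P<destination_address>\S+) dport=(?P<destination_port>\d+)"
)

def normalize(raw_line):
    match = PATTERN.match(raw_line)
    return match.groupdict() if match else None

def denied_connection_rule(event):
    """A hand-written rule that only works once the translation above exists."""
    return event is not None and event["action"] == "DENY"

event = normalize(RAW)
if denied_connection_rule(event):
    print("ALERT: denied connection from", event["source_address"])

Every step of this, from the regex to the field names, is expertise an analyst has to acquire before the rule itself even becomes interesting.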
If your team is going to get more time to do the important work, you need custom alerts for humans, not machines

InsightIDR comes with dozens of useful alerts for anomalous and attacker-like behavior across log and endpoint events, but this extreme alone is not enough. Switching from the rules-only approach of SIEM to the anomalies-only approach of other User Behavior Analytics (UBA) solutions is too dramatic a shift, and that's why you need a solution with both. This is why Rapid7 customers can write custom alerts for the events that concern their organizations. If they want to feed this intelligence to our teams, we're thrilled, and we may add it to the alerts every new customer gets after testing its noise level. But right now, you can assign a junior analyst to make sure you have the alerts you need, because we've made it dramatically easier to capture the alert in the first place. Deciding to alert whenever someone authenticates from North Korea, or when it looks like someone is streaming The Night Of from HBO Go, will feel like you've been handed a sat nav on the day you interview to drive in London. My first experience debugging Java was a dream after the process I had learned with FORTRAN. Even I can write these custom alerts, and even I can understand what it means when someone else's alert triggers. If you want to learn more about the way InsightIDR does what I described here, please check out our on-demand demo. We think you will appreciate our approach.
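For a sense of how little "translation" a human-readable alert should need, here's a toy sketch of the North Korea example above – the event fields and logic are assumptions for illustration, not InsightIDR's actual alerting API:

BLOCKED_COUNTRIES = {"KP"}  # assumption: any authentication from North Korea is worth an alert

def custom_alert(event):
    """Return a plain-English message if the event is notable, otherwise None."""
    if event.get("type") == "authentication" and event.get("geo_country") in BLOCKED_COUNTRIES:
        return "%s authenticated from %s via %s - review immediately" % (
            event["user"], event["geo_country"], event["source_address"])
    return None

# Example event, already enriched with the user and geolocation (made-up fields).
event = {"type": "authentication", "user": "jsmith",
         "geo_country": "KP", "source_address": "203.0.113.50"}

message = custom_alert(event)
if message:
    print(message)

The rule reads the way you'd say it out loud, which is the whole point.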

Warning: This blog post contains multiple hoorays! #sorrynotsorry

Hooray for crystalware! I hit a marketer's milestone on Thursday – my first official award ceremony, courtesy of the folks at Computing Security Awards, which was held at The Cumberland Hotel in London. Staying out late on a school night when there's a 16 month old…

Hooray for crystalware! I hit a marketer's milestone on Thursday – my first official award ceremony, courtesy of the folks at Computing Security Awards, which was held at The Cumberland Hotel in London. Staying out late on a school night when there's a 16-month-old teething toddler in the house definitely took its toll the following morning, but the tiredness was definitely softened by the sweet knowledge that we'd left the award ceremony brandishing some crystalware. In the two categories where Rapid7 solutions were shortlisted as finalists – SME Security Solution of the Year (Nexpose) and Best New Product of the Year (InsightIDR) – we were awarded winner and runner-up respectively. What's particularly cool about the Computing Security Awards is that the majority of awards, including the two we were up for, are voted for by the general public, so receiving these accolades is very special to us. We'd like to say an absolutely massive THANK YOU to everyone who voted for our products; we are truly very grateful for your support.

Hooray for Nexpose!

Nexpose storming to the win in the SME category, a space that isn't always top of mind for some security vendors, really validates for me how well designed and engineered the product is. Our customers come in all shapes and sizes, and the maturity of their vulnerability management programs varies just as much, but Nexpose caters for all. In the SME space, the concept of a dedicated security team is certainly less common. More often than not we see that IT teams have security as just one of their many disciplines – so they need a vulnerability management tool which is easy to use, and which allows them to quickly prioritise remediation efforts with live data that's relevant to their environment. Nexpose determines and constantly updates vulnerability risk scoring using RealRisk – scoring vulnerabilities from 1-1000, thus removing the nightmare of having umpteen hundred 'criticals' which are seemingly all equal. Liveboards (because dashboards don't actually dash – they should really be called meanderboards) provide admins with real-time data – you know at all times exactly how well you are winning at remediation. If you're reading this blog and you're thinking about implementing a new VM solution, you should download a free trial here and experience it in action for yourself.

Hooray for InsightIDR!

InsightIDR receiving an honourable mention in the Best New Product category makes Sam very happy. This product was frankly one of the main reasons I came to work for Rapid7. When I first heard of it back in March my interest was immediately sparked, as I'd never seen anything quite like it. I've worked in incident response in a previous life, and have seen a vast number of organisations really struggle to find answers when they are in the unfortunate situation of a cyberattack. Some didn't even know they'd been under attack until they received notification from a third party. Incidents would regularly go on for many days, with teams having to work around the clock under great pressure to balance business continuity and incident response, which is the juggling act from hell. More often than not, investigations and Root Cause Analysis reports would take months and months, and would frequently be lacking in detail. If you can't see what's happening, you can't properly respond, and you have pretty much zero chance of taking away any solid learnings from the event.
InsightIDR solves these problems by combining SIEM, EDR and UBA capabilities, which means it detects attacks early in the attack chain, finds compromised credentials, and provides a clear investigation timeline. It's truly an amazing piece of kit, and I know that every incident I ever worked on would undoubtedly have had a better outcome had InsightIDR been in place at the time. In this case, seeing really will result in believing – I'd heartily recommend you arrange a demo today.

Hooray for Integrated Solutions!

So before I give a shout-out to the incredible people behind these two superb products, there's one further piece of good news: you can now integrate [PDF] them too!

Hooray for Moose!

Our people, our "Moose", who design, build, test, sell, support and of course market (obvs.) these products are all the winners here. I don't use the term 'incredible' lightly either – I am privileged to have represented them at the awards ceremony. We have an amazing team across the globe, jam-packed with smart, creative, brilliant people. Our solutions are testament to the work they do; their combined knowledge solves difficult customer problems, providing insight to security professionals all over the world. Congratulations Moose – you are a bloody awesome bunch! Thanks again to everyone who voted for our solutions, and a big cheers to the folks at Computing Security who held a brilliant awards bash. We hope to see you again next year!

Demanding More from Your SIEM Tools [Webcast Summary]

Do you suffer from too many vague and un-prioritized incident alerts? What about ballooning SIEM data and deployment costs as your organization expands and ingests more data? You're not alone. Last week, over a hundred infosec folks joined us live for Demanding More out of…

Do you suffer from too many vague and un-prioritized incident alerts? What about ballooning SIEM data and deployment costs as your organization expands and ingests more data? You're not alone. Last week, over a hundred infosec folks joined us live for Demanding More out of Your SIEM.

Content Shared in the Webcast

In Gartner's February 2016 report, "Security Information and Event Management Architecture and Operational Processes," Anton Chuvakin and Augusto Barros recommend a "Run-Watch-Tune" model in order to achieve a "SIEM Win". For those with a Gartner subscription, check out the full report here. While some SIEM vendors recommend 10 full-time analysts for a 24/7 SIEM deployment, at least three full-time employees should serve as the foundation of your deployment. A breakdown of core Run, Watch, and Tune responsibilities:

Run: Maintain operational status, monitor uptime, optimize application and system performance.

We recommend: Take stock of your existing network and security stack – are there more data sources you should be integrating? From talking to customers and our Incident Detection & Response research, the top gaps in SIEM integrations are:

- DHCP: This integration provides a crucial User-Asset-IP link and powers most User Behavior Analytics solutions today.
- Endpoint Data: If local authentications aren't centrally logged, attackers can laterally move between endpoints and go undetected by the SIEM. See 5 Ways Attackers can Evade a SIEM.
- Cloud Services: Leading cloud services such as Office 365, Google Apps, and Salesforce expose APIs with audit data, but many SIEMs don't take advantage of this data.

Watch: Using the SIEM for security monitoring and incident investigation.

We recommend: Today's organizations are getting way too many alerts – here's a poll taken during the webcast. Most security teams have to jump between multiple tools during investigations, are getting too many alerts, and are struggling to identify stealthy attacks, such as the use of compromised credentials and lateral movement, that don't require malware to be successful. Most organizations are alerted on unauthorized access to critical assets, but at that point, intruders are already at Mission Target in the Attack Chain. By mapping your detections to the Attack Chain, you can find intruders earlier and kick them out before data exfiltration occurs.

Tune: Customize SIEM content, create rules for specific business use-cases.

We recommend: Building queries requires specialized SIEM skills and experience manipulating large data sets, a scarce skillset that differs from incident investigation and response experience. If you've just been handed the reins to an existing SIEM deployment, it's worth the time to do a rule review. While technology like User Behavior Analytics provides robust detection for today's top attack vectors behind breaches, custom work is still necessary to meet specific business needs, such as compliance or a company-specific detection.

What I Learned from the Audience

Throughout the talk, we asked a few questions to learn from the audience. 71% currently have a SIEM, 11% don't, and 18% don't but are looking to purchase. Current satisfaction with their existing SIEM for Incident Detection and Response was across the board, with answers ranging from 4-8 on a scale of 1-10. The biggest concern was data costs, the pricing model behind traditional SIEM solutions.

Top questions from our Q&A:

1. What is the best way to detect pass-the-hash techniques over servers?
The key data source is endpoint event logs. Only local authentication logs contain both the source and destination asset. For a full technical breakdown, check out our whitepaper, Why You Need to Detect More than Pass the Hash, with best practices on how to identify the use of compromised credentials.

2. Is there a way to see all InsightIDR integrations on your website?
Yes – to see the full list, which ranges from network events and endpoint data to existing log aggregators or SIEMs and more, check out the Insight Platform Supported Event Sources doc here.

3. Is there an [InsightIDR] integration with Nexpose or Metasploit?
Yes! Nexpose, our vulnerability management solution, integrates with InsightIDR to provide visibility and security detection across assets and the users behind them. This provides three key benefits:

- Put a "face" to your vulnerabilities
- Automatically place vulnerable assets under greater scrutiny
- Flag users that use actively exploitable assets

Learn more about the Nexpose-InsightIDR integration here. InsightIDR also integrates with Metasploit to track the success of phishing campaigns on your users.

I Want More from My SIEM Deployment: Why InsightIDR?

InsightIDR works by integrating with your existing network and security stack, including log aggregators and SIEMs. The first step is unifying your technology and leveraging SIEM, UBA, and EDR capabilities to leave attackers with nowhere to hide. InsightIDR can augment or replace your existing SIEM deployment. Organizations that use InsightIDR in sync with their SIEM especially enjoy:

- User Behavior Analytics: Alerts show the actual users and assets affected, not just an IP address. InsightIDR automatically correlates the millions of events generated every day to the users behind them, highlighting notable behaviors to accelerate incident validation and investigations.
- Endpoint Detection & Visibility: The blend of the Insight Agent and Endpoint Scan means detection and real-time queries for critical assets and endpoints, even off the corporate network. InsightIDR focuses on detecting intruders earlier in the Attack Chain, meaning you'll be alerted on local lateral movement, privilege escalation, log deletion, and other suspicious behavior happening on your endpoints.
- 10x Faster Incident Investigations: The security team can bring real-time user behavior, log search, and endpoint data together in a single visual timeline. No more jumping between disparate log files, retracing user activity across multiple IPs, or requiring physical access to the endpoint to answer questions.

If you'd like to learn more, Demanding More from Your SIEM shows a live InsightIDR demo, complete with Q&A from an engaged audience. Or contact us for a free guided demo!
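Since the DHCP gap comes up so often, here is a simplified sketch of why that data matters – the lease and authentication records below are invented for illustration, not any product's schema. DHCP ties an IP to an asset at a point in time, and local authentication logs tie the asset to a user:

from datetime import datetime

# DHCP lease events: (lease start, IP address, hostname). Note the IP gets re-assigned.
dhcp_leases = [
    (datetime(2016, 10, 20, 8, 0), "10.1.2.3", "LAPTOP-042"),
    (datetime(2016, 10, 20, 13, 30), "10.1.2.3", "LAPTOP-117"),
]

# Local authentication events: (time, hostname, user).
authentications = [
    (datetime(2016, 10, 20, 8, 5), "LAPTOP-042", "jsmith"),
]

def user_for_ip(ip, at_time):
    """Find which asset held the IP at that moment, then who last logged into it."""
    leases = [l for l in dhcp_leases if l[1] == ip and l[0] <= at_time]
    if not leases:
        return None
    host = max(leases, key=lambda l: l[0])[2]          # most recent lease wins
    logons = [a for a in authentications if a[1] == host and a[0] <= at_time]
    return logons[-1][2] if logons else None

print(user_for_ip("10.1.2.3", datetime(2016, 10, 20, 9, 0)))   # jsmith, via LAPTOP-042
print(user_for_ip("10.1.2.3", datetime(2016, 10, 20, 14, 0)))  # None: new asset, no logon seen yet

Without the lease history, activity from that IP in the afternoon would be wrongly attributed to whoever held it that morning.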

InsightIDR & Nexpose Integrate for Total User & Asset Security Visibility

Rapid7's Incident Detection and Response and Vulnerability Management solutions, InsightIDR and Nexpose, now integrate to provide visibility and security detection across assets and the users behind them. Combining the pair provides massive time savings and simplifies incident investigations by highlighting risk across your network ecosystem…

Rapid7's Incident Detection and Response and Vulnerability Management solutions, InsightIDR and Nexpose, now integrate to provide visibility and security detection across assets and the users behind them. Combining the pair provides massive time savings and simplifies incident investigations by highlighting risk across your network ecosystem without writing queries or digging through logs.

Nexpose proactively identifies and prioritizes weak points on your network, while InsightIDR helps find unknown threats with user behavior analytics, prioritizes where to look with SIEM capabilities, and combines endpoint detection and visibility to leave attackers with nowhere to hide. Let's look at three specific benefits: (1) putting a "face" to your vulnerabilities, (2) automatically placing vulnerable assets under greater scrutiny, and (3) flagging users that use actively exploitable assets.

User Context for Your Vulnerabilities

InsightIDR integrates with your existing network and security infrastructure to create a baseline of your users' activity. By correlating all activity to the users behind it, you're alerted to attacks notoriously hard to detect, such as compromised credentials and lateral movement. When InsightIDR ingests the results of your Nexpose vulnerability scans, vulnerabilities are added to each user's profile. When you search by employee name, asset, or IP address, you get a complete look at their user behavior.

How this saves you time:

- See who is affected by what vulnerability – this helps you get buy-in to remediate a vulnerability by putting a face and context on it. ("The CFO has this vulnerability on their laptop – let's prioritize remediation.")
- Have instant context on the user(s) behind an asset, so you accelerate incident investigations and can see if the attacker laterally moved beyond that endpoint.
- Proactively reduce your exposed attack surface by verifying key players are not vulnerable.

Automatic Security Detection for Critical Assets

In Nexpose, you can dynamically tag assets as critical. For example, they may be in the IP range of the DMZ or contain a particular software package/service unique to domain controllers. Combined with InsightIDR, that context extends to the users that access these assets. When InsightIDR ingests scan results, assets tagged as critical are labeled in InsightIDR as Restricted Assets. This integration helps you automatically place vulnerable assets under greater detection scrutiny.

Some examples of alerts for Restricted Assets:

- First authentication from an unfamiliar source asset: InsightIDR doesn't just alert on the IP address, but whenever possible, shows the exact users involved.
- An unauthorized user attempts to log in: This can include a contractor or compromised employee attempting to access a financial server.
- A unique or malicious process hash is run on the asset: A single Insight Agent deployed on your endpoints performs both vulnerability scanning and endpoint detection. Our vision is to reliably find intruders earlier in the attack chain, which includes identifying every process running on your endpoints. We run these process hashes against the wisdom of 50 virus scanners to identify malicious processes, as well as identify unique processes for further investigation.
- Lateral movement (both local and domain): Once inside your organization's network, intruders typically run a network scan to identify high-value assets. They then laterally move across the network, leaving behind backdoors and stealing higher-privilege credentials.
- Endpoint log deletion: After compromising an asset, attackers look to delete system logs in order to hide their tracks. This is a high-confidence indicator of compromise.
- Anomalous admin activity, including privilege escalation: After gaining access to an asset or endpoint, attackers use privilege escalation exploits to gain admin access, allowing them to dump creds or attempt pass-the-hash. We identify and alert on anomalous admin activity across your ecosystem.

Identifying Users that Use Exploitable Assets

Many Nexpose customers purchase Metasploit Pro to validate their vulnerabilities and test if assets can be actively exploited in the wild. As an extension of the critical asset functionality above, customers that own all three products can automatically tag assets that are exploited by Metasploit as critical, and thus mark these as restricted assets in InsightIDR. This ensures that assets which are easy to breach are placed under higher scrutiny until the exploitable vulnerabilities are patched.

Configuring the InsightIDR-Nexpose Integration

If you have InsightIDR and Nexpose, setting up the Event Source is easy.

1. In Nexpose, set up a Global Admin.
2. In InsightIDR, on the top right Data Collection tab -> Setup Event Source -> Add Event Source.
3. Add the information about the Nexpose Console (Server IP & Port).
4. Add the credentials of the newly created Global Admin.

And you're all set! If you have any questions, reach out to your Customer Success Manager or Support. Don't have InsightIDR and want to learn how the technology relentlessly hunts threats? Check out an on-demand 20 minute demo here.

Nathan Palanov contributed to this post.
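To make the "restricted asset" idea tangible, here is a toy sketch – invented scan results and owner mappings, not the products' internals – of joining vulnerability findings to the users behind each asset and flagging exploitable assets for extra detection scrutiny:

# Invented scan results and asset-to-owner mapping, for illustration only.
scan_results = [
    {"asset": "fin-srv-01", "vuln": "example-critical-vuln", "exploitable": True},
    {"asset": "LAPTOP-042", "vuln": "example-low-vuln", "exploitable": False},
]

asset_owners = {
    "fin-srv-01": ["svc_finance", "dba_admin"],
    "LAPTOP-042": ["jsmith"],
}

restricted_assets = set()
for finding in scan_results:
    finding["users"] = asset_owners.get(finding["asset"], [])   # put a "face" on the finding
    if finding["exploitable"]:
        restricted_assets.add(finding["asset"])                  # watch this asset more closely

for finding in scan_results:
    print(finding["asset"], finding["vuln"], "users:", ", ".join(finding["users"]))
print("restricted assets:", sorted(restricted_assets))

The join itself is trivial; the value is that an investigator sees people and risk in one place instead of cross-referencing two consoles.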

Featured Research

National Exposure Index 2017

The National Exposure Index is an exploration of data derived from Project Sonar, Rapid7's security research project that gains insights into global exposure to common vulnerabilities through internet-wide surveys.

Learn More

Toolkit

Make Your SIEM Project a Success with Rapid7

In this toolkit, get access to Gartner's report “Overcoming Common Causes for SIEM Solution Deployment Failures,” which details why organizations are struggling to unify their data and find answers from it. Also get the Rapid7 companion guide with helpful recommendations on approaching your SIEM needs.

Download Now

Podcast

Security Nation

Security Nation is a podcast dedicated to covering all things infosec – from what's making headlines to practical tips for organizations looking to improve their own security programs. Host Kyle Flaherty has been knee-deep in the security sector for nearly two decades. At Rapid7 he leads a solutions-focused team with the mission of helping security professionals do their jobs.

Listen Now