In my last blog post, we reviewed the most prevalent detection strategies and how best to implement them. This post dives into catching what our other systems missed, using attacker behavior analytics and anomaly detection to improve detection.

Understand Your Adversary – Attack Methodology Detection

Contextual intelligence feeds introduce higher fidelity and the details needed to gain insight into patterns of attacker behavior. Attackers frequently rotate tools and infrastructure to avoid detection, but when it comes to tactics and techniques, they often stick with what works. The methods they use to deliver malware, perform reconnaissance, and move laterally in a network do not change significantly.

A thorough understanding of attacker methodology leads to the creation and refinement of methodology-based detection techniques. Knowledge of the applications targeted by attackers enables more focused monitoring of those applications for suspicious behaviors, optimizing the effectiveness and efficiency of an organization's detection program. One example of application anomaly-based detection is identifying webshells on IIS systems:

It is anomalous for IIS to spawn a command prompt, and the execution of 'whoami.exe' and 'net.exe' indicates likely reconnaissance activity. By understanding the methods employed by attackers, we can generate detections that identify this activity without relying on static indicators such as hashes or IPs. In this case, we use the low likelihood of IIS running cmd.exe and the rare occurrence of cmd.exe executing 'whoami' and 'net [command]' to drive our detection of potential attacker activity.
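The logic above can be sketched in a few lines. This is a minimal illustration, not any particular product's rule engine: the event schema, field names, and process lists are assumptions made for the example.

```python
# Flag an IIS worker process (w3wp.exe) spawning a shell, and recon
# commands executed from such a shell. Event format is hypothetical.
IIS_WORKER = "w3wp.exe"
SHELLS = {"cmd.exe", "powershell.exe"}
RECON = {"whoami.exe", "net.exe"}

def flag_iis_webshell(events):
    """events: dicts with parent/child process names and PIDs.
    Returns (label, event) pairs worth investigating."""
    alerts = []
    iis_shell_pids = set()
    for ev in events:
        if ev["parent"] == IIS_WORKER and ev["child"] in SHELLS:
            iis_shell_pids.add(ev["child_pid"])
            alerts.append(("iis-spawned-shell", ev))
        elif ev["parent_pid"] in iis_shell_pids and ev["child"] in RECON:
            alerts.append(("recon-from-iis-shell", ev))
    return alerts

events = [
    {"parent": "w3wp.exe",     "parent_pid": 101, "child": "cmd.exe",    "child_pid": 202},
    {"parent": "cmd.exe",      "parent_pid": 202, "child": "whoami.exe", "child_pid": 303},
    {"parent": "explorer.exe", "parent_pid": 55,  "child": "cmd.exe",    "child_pid": 404},
]
print([label for label, _ in flag_iis_webshell(events)])
# -> ['iis-spawned-shell', 'recon-from-iis-shell']
```

Note that a user launching cmd.exe from explorer.exe produces no alert; only the improbable IIS-to-shell chain does.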

Additionally, attackers must reconnoiter networks both internally and externally to identify target systems and users. Reviewing logs for unusual user-to-system authentication events and suspicious processes (for example, 'dsquery', 'net dom', 'ping -n 1', and 'whoami'), especially when they cluster in short time periods, can provide investigative leads for identifying internal reconnaissance.
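One way to surface that clustering is a sliding-window count of distinct recon tools per user. The window length, process list, and threshold below are illustrative assumptions, not tuned values:

```python
from collections import defaultdict

RECON_PROCS = {"dsquery.exe", "net.exe", "ping.exe", "whoami.exe"}
WINDOW = 300      # seconds; threshold values here are illustrative
MIN_DISTINCT = 3  # distinct recon tools within the window

def recon_leads(events):
    """events: (timestamp_seconds, user, process_name) tuples.
    Flags users who run several distinct recon tools in a short window."""
    by_user = defaultdict(list)
    for ts, user, proc in sorted(events):
        if proc in RECON_PROCS:
            by_user[user].append((ts, proc))
    flagged = []
    for user, hits in by_user.items():
        for start_ts, _ in hits:
            window = {p for t, p in hits if start_ts <= t < start_ts + WINDOW}
            if len(window) >= MIN_DISTINCT:
                flagged.append(user)
                break
    return flagged

events = [
    (0,   "mallory", "whoami.exe"),
    (60,  "mallory", "net.exe"),
    (120, "mallory", "dsquery.exe"),
    (0,   "alice",   "ping.exe"),   # a lone ping is not a lead
]
print(recon_leads(events))  # -> ['mallory']
```

A lone 'ping' is everyday noise; three distinct recon tools in five minutes is a lead worth an analyst's time.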

Even without a constant stream of real-time data from endpoints, we can model behavior and identify anomalies based upon the frequency of an item across a group of endpoints. By gathering data on persistent executables across a network, for example, we can perform frequency analysis and identify rare or unique entries for further analysis. Simple techniques like frequency analysis will often reveal investigative leads from prior (or even current) compromises, and can be applied to all manner of network and endpoint data.
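Frequency analysis of persistence data can be as simple as counting how many hosts exhibit each item and surfacing the outliers. A minimal sketch, assuming an inventory of persistent executables per host (the data shape and example paths are hypothetical):

```python
from collections import Counter

def rare_persistence(host_items, max_hosts=1):
    """host_items: {hostname: iterable of persistent executables (path
    or hash)}. Returns entries present on at most `max_hosts` hosts --
    the rare outliers worth manual review."""
    counts = Counter()
    for host, items in host_items.items():
        counts.update(set(items))  # count each host at most once per item
    return sorted(item for item, n in counts.items() if n <= max_hosts)

fleet = {
    "host1": ["C:\\Windows\\svchost.exe", "C:\\Temp\\evil.exe"],
    "host2": ["C:\\Windows\\svchost.exe"],
    "host3": ["C:\\Windows\\svchost.exe"],
}
print(rare_persistence(fleet))  # -> ['C:\\Temp\\evil.exe']
```

The executable common to every host is almost certainly benign; the one present on a single host is exactly the kind of investigative lead frequency analysis exists to find.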

Expanding beyond a reliance primarily on traditional static indicator-based detection and adding a focus on attacker behavior increases the likelihood of detecting previously unknown malware and skilled attackers. A culmination of multiple detection strategies is necessary for full coverage and visibility: proactive detection technology successfully blocks known-bad, contextual intelligence assists in identifying less common malware and attackers, and methodology-based evidence gathered from thorough observation provides insight into potential indicators of compromise.

Use the Knowledge You Have

IT and security staff know their organization's users and systems better than anyone else. They work diligently on their networks every day, ensuring uptime of critical components, enablement of user actions, and expedient resolution of problems. Their inherent knowledge of the environment provides incredible depth of detection capability. In fact, IT support staff are frequently the first to know something is amiss, regardless of whether the problem is caused by attacker activity. Compromised systems often exhibit unusual symptoms and become problematic for users, who then report the problems to their IT support staff.

Environment-specific threat detection is Rapid7's specialty. Our InsightIDR platform continuously monitors user activity, authentication patterns, and process activity to spot suspicious behavior. By tracking user authentication history, we can identify when a user authenticates to a new system, over a new protocol, or from a new IP. By tracking the processes executed on each system, we can identify whether a user is running applications that deviate from their normal patterns or running suspicious commands (based on our knowledge of attacker methodology). Combining user authentication with process execution history ensures that even if an attacker accesses a legitimate account, their tools and reconnaissance techniques will give them away. Lastly, by combining this data with threat intelligence from previous findings, industry feeds, and attacker profiles, we ensure that we prioritize high-fidelity investigative leads and reduce overall detection time, enabling faster and more effective response.
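The "new system, new protocol, new source IP" baseline can be sketched as a per-user history of previously seen values. This is an illustrative sketch of the general technique, not InsightIDR's implementation; the facet names and event fields are assumptions:

```python
def novel_auth_facets(history, event):
    """history: {user: {facet: set of previously seen values}}.
    Returns which facets of this authentication (destination system,
    protocol, source IP) are new for this user, then records them."""
    facets = ("system", "protocol", "src_ip")
    seen = history.setdefault(event["user"], {f: set() for f in facets})
    novel = [f for f in facets if event[f] not in seen[f]]
    for f in facets:
        seen[f].add(event[f])
    return novel

history = {}
baseline = {"user": "bob", "system": "FILESRV01", "protocol": "smb", "src_ip": "10.0.0.5"}
novel_auth_facets(history, baseline)   # first sighting: every facet is new
odd = {"user": "bob", "system": "DC01", "protocol": "rdp", "src_ip": "10.0.0.5"}
print(novel_auth_facets(history, odd))  # -> ['system', 'protocol']
```

A real deployment would add a learning period so the baseline stabilizes before novel facets are treated as signal, but the core idea is this simple: compare each authentication against what that user has done before.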

Let's walk through an example in which Bob's account is compromised internally:

After compromising the system, an attacker would execute reconnaissance commands that are historically dissimilar to Bob's normal activity. Bob does not typically run 'whoami' on the command line or execute psexec, nor has Bob ever executed a PowerShell command. Individually, those behaviors are not significant enough to alert on, but in aggregate they present a trail of suspicious behavior that warrants an investigation.
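Aggregating weak signals like these is often implemented as a simple weighted score against a threshold. The signal names, weights, and threshold below are entirely illustrative assumptions:

```python
# None of these observations alerts on its own; a combined score above
# a threshold opens an investigation. Weights are hypothetical.
WEIGHTS = {
    "first_whoami":     1,  # user has never run 'whoami' before
    "psexec_execution": 2,  # user has never run psexec before
    "first_powershell": 2,  # user has never run a PowerShell command
}
THRESHOLD = 4

def lead_score(observations):
    """Sum the weights of observed weak signals for one user."""
    return sum(WEIGHTS.get(obs, 0) for obs in observations)

bob = ["first_whoami", "psexec_execution", "first_powershell"]
score = lead_score(bob)
print(score, score >= THRESHOLD)  # -> 5 True
```

Any single observation scores below the threshold, so it generates no alert on its own; the aggregate trail crosses it and surfaces as an investigative lead.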

Knowledge of your environment and of what is statistically 'normal' per user and per system enables a 'signature-less' addition to your detection strategy. Beyond the noisy and easily bypassed malware hashes, domains, IPs, IDS alerts, firewall blocks, and proxy activity that your traditional detection technology provides, you can identify attacker activity and ensure that you are not missing events due to stale or inaccurate intel.

Once you have identified an attack based on user and system anomaly detection, extract useful indicator data from your investigation and build your own 'known-bad' threat feed. By adding internal indicators to your traditional detection systems, you gain greater intel context and simplify the detection of attacker activity throughout your environment. Properly combining detection strategies dramatically increases the likelihood of attack detection and provides you with the context you need to differentiate between 'weird', 'bad', and 'there goes my weekend'.
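An internal 'known-bad' feed can start as nothing more than indicator sets that detection checks events against. A minimal sketch, with indicator types, example values, and the event shape all assumed for illustration:

```python
from collections import defaultdict

# Indicators extracted from closed investigations, keyed by type.
feed = defaultdict(set)

def record_finding(indicator_type, value):
    """indicator_type: e.g. 'hash', 'ip', 'domain' (illustrative)."""
    feed[indicator_type].add(value.lower())

def check_event(event):
    """event: {indicator_type: value}. Returns matching known-bad pairs."""
    return [(t, v.lower()) for t, v in event.items() if v.lower() in feed[t]]

record_finding("ip", "203.0.113.7")  # documentation-range example IP
record_finding("hash", "D41D8CD98F00B204E9800998ECF8427E")
print(check_event({"ip": "203.0.113.7", "domain": "example.com"}))
# -> [('ip', '203.0.113.7')]
```

The value of the loop is that each investigation makes the next detection cheaper: indicators you earned the hard way become instant matches everywhere else in the environment.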