One of my favorite tweets-turned-blog-posts of last year came from Microsoft security’s John Lambert: “Defenders think in lists, attackers think in graphs.” Though it certainly doesn’t sum up every challenge of being a defender, it drummed up some interesting conversation (and controversy) on Twitter. Plus, as a nice, pithy statement, it has a good ring to it. ;)
In the blog, John describes a graph of security dependencies: nodes are assets (e.g. endpoints, network devices, applications, user accounts) and edges are the relationships that could form an attack path. (For those of you already confused: by ‘graph’ here he means graph in the mathematical sense, a set of nodes and edges, not a chart you’d place in a PowerPoint deck.)
As an abstract concept, the idea of ‘defending in graphs’ is useful for a number of reasons:
- A graph abstraction allows for multiple hierarchies based on distance, whereas a list imposes only a single ordering.
- As John points out, attackers don’t start with pre-built lists of high-value targets.
- From the vantage point of their current context, attackers will probe to expand their reach via edges in a security dependency graph.
For example: an attacker is far more likely to target a domain controller associated with a workstation they have already compromised than one off in a yet-undiscovered network segment.
The Dangers Of Linear Thinking
Too much ‘list thinking’, John warns, can be dangerous. Consider his scenario, where weak credentials on a low-priority R&D workstation (a node) are shared with a high-priority application server:
The credentials form an edge, providing an attack path. If you work from just a ‘list’ of high-value assets and prioritize applying defensive layers from there, you overlook the ‘pivot path’ from that little-used R&D workstation. When constructing your defenses, that workstation must be considered with the same vigilance as the application server.
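John’s scenario can be made concrete with a small sketch. The asset names and edges below are hypothetical, and the traversal is a plain breadth-first search; the point is that a foothold on a ‘low-priority’ node reaches high-value assets through edges a list would never show:

```python
from collections import deque

# Hypothetical security dependency graph: nodes are assets,
# edges are relationships an attacker could traverse.
edges = {
    "rd-workstation": ["app-server"],   # shared weak credentials
    "app-server": ["database"],         # service account access
    "hr-laptop": ["file-share"],
    "database": [],
    "file-share": [],
}

def attack_paths(graph, start):
    """Enumerate every asset reachable from an initial foothold,
    recording the pivot path taken to reach each one."""
    paths = {start: [start]}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in paths:
                paths[neighbor] = paths[node] + [neighbor]
                queue.append(neighbor)
    return paths

# A foothold on the "low-priority" R&D workstation reaches the
# high-value database via the shared-credential edge.
print(attack_paths(edges, "rd-workstation"))
```

A defender working from a ranked list of the same assets would harden the database and application server first; the graph view shows the R&D workstation sits on the path to both.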
Another reason list thinking can be dangerous is that it trains defenders with the wrong mental model. I see this frequently with overuse of the ‘kill chain’, which is a great reference model but, when misinterpreted, encourages the wrong kind of rigid thinking.
The kill chain is not a playbook that attackers must strictly follow. We’ve already seen a number of real-world cases where attackers skip malware for persistence, for example.
Graph Thinking and Why Incident Response Triage Is Like Debugging
Graph thinking is required not just for risk assessments, but also for security investigations. To optimize for speed and efficiency, defenders must learn the same skills software engineers learn for debugging. Debugging skills are hard to learn precisely because they require ‘graph thinking’: to fix a problem in code or trace down an error in a production installation, there is often no fixed list of steps that will lead to a resolution.
Why? Because the root cause is often unclear, and engineers must use their time efficiently to get to a solution. If an engineer were to perform every single diagnostic check possible, debugging processes would routinely take days or weeks.
Rather, an engineer starts from observed information, formulates educated guesses, and performs follow-on tasks to verify or discard theories about the root cause, pursuing and abandoning various ‘branches’ along the way. The process resembles the OODA (Observe, Orient, Decide, Act) model, which traces a cyclical graph, not a linear checklist.
When triaging a security event, an analyst thinks the same way. Not every ‘horizontal port scan’ alarm triggers a time-intensive and costly forensic deep-dive into the host. A trained security professional evaluates data in the source event and in follow-on enrichment steps (e.g. what is the role of the source and destination hosts? Is the source internal? Is this traffic typical for this device, or something new?) to follow certain branches of the investigation and eliminate others: one branch may eventually lead to examining data on the host, another to the quick conclusion of a false positive.
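The branching evaluation above can be sketched as a simple decision function. The event fields, checks, and verdicts here are illustrative assumptions, not a prescribed playbook; what matters is that each check prunes or escalates a branch rather than marching through a fixed checklist:

```python
def triage_port_scan(event):
    """Pick the next investigative branch for a 'horizontal port scan'
    alarm. Fields and verdicts are hypothetical examples."""
    # Branch 1: known scanners (e.g. vulnerability management) are benign.
    if event.get("source_role") == "authorized-scanner":
        return "close: false positive"
    # Branch 2: external sources scanning inward should be blocked upstream.
    if not event.get("source_internal", False):
        return "verify perimeter blocking, then close"
    # Branch 3: traffic that is typical for this device needs no deep dive.
    if event.get("typical_traffic", False):
        return "close: expected behavior"
    # Otherwise: a new internal scanner warrants a closer look at the host.
    return "escalate: collect host data"

print(triage_port_scan({"source_internal": True, "typical_traffic": False}))
```

Only the last branch incurs the expensive host forensics; the earlier checks let an analyst (or an automated enrichment step) discard the others in seconds.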
Sometimes, as part of a triage process, you really do just need to run through a checklist of basic and repetitive tasks. Wherever you find that a repeatable checklist is really what you want, consider security automation as a way to accelerate those sub-processes.
Training Defenders To Think in Graphs
Sophisticated defenders must be trained to think critically about the relationships between the evidence at hand and the follow-on investigative actions to take: security processes that look more like graphs than checklists.
This doesn’t mean you can’t document and share processes, or standardize on common operating procedures. Some of these processes may even look like checklists, especially at the micro level. But how often do we fall into the easy path of using forms and linear checklists when what we really need are processes that look more like ‘call graphs’, with branching structures?
Too often security products and tools encourage linear thinking, and no wonder: it’s really easy to make a checklist. Are you using event consoles and incident response workflow systems that encourage agile, composable playbooks, or are they training your team to think in lists? Just something to think about.
Augmenting Your Incident Response Processes
No matter your model of defense, security orchestration and automation can handle many of the follow-on steps of an investigation. Read up on security automation best practices to learn more about what it is and how it helps defenders tackle alerts and security events faster and with better accuracy.