
Having been involved in information security for the last 15 years, I've had the opportunity to meet some really amazing people and to view the industry through their eyes. I've been toying with the idea of a blog series where I interview some of the people I've had the privilege to meet, hopefully introducing some of my readers to the awesome research that's being done. I've decided to call the blog series "Dangerous Things", which is a reference to the fact that so many of us in this industry are fascinated by things that go boom - whether that be fast cars, martial arts, firearms, or exploits (or all of the above).

The first installment of Dangerous Things is an interview with Dan Guido, co-founder of a new venture named Trail of Bits.  I was originally introduced to Dan by my business partner Tas, Rapid7's co-founder and CTO.  The Malware Exposure features of Nexpose 5.0 were inspired in part by conversations between Tas and Dan at Security Confab last year. We invited Dan to speak at the 2011 UNITED Summit in San Francisco to present his research and to participate in panels - if you haven't read Dan's research yet, I recommend you check it out!

Thanks and enjoy the interview. If you have any follow-up questions for Dan Guido, please post them here and I will do my best to hound Dan until he answers them.

Disclaimer: The opinions represented on this blog belong to the people being interviewed, and are not necessarily representative of my views or the views of Rapid7.

CL: Can you tell us a little bit about yourself and what you do for a living?

DG: I'm a co-founder of Trail of Bits, an intelligence-driven security firm that helps organizations make better strategic defense decisions. People tend to find our approach to security unique because we guide organizations to identify and respond to current attacks, rather than broadly address software vulnerabilities. At Trail of Bits, we acknowledge that it's not possible to fix all of an organization's vulnerabilities and that it's far more productive to focus on minimizing the effectiveness of attacks in a tangible and measurable way instead. We came to this belief after research that Dino Dai Zovi and I published last year and after ten years of watching Alex Sotirov obliterate any technology he stared at long enough.

In addition to my work at Trail of Bits, I'm a Hacker in Residence at NYU-Poly where I oversee student research and teach classes in Application Security and Vulnerability Analysis, the two capstone courses in the NYU-Poly security program.

CL: Your presentation "An Intelligence-Driven Approach to Malware" was very well received at the UNITED Conference.  Can you summarize your thesis for people who haven't seen the presentation?

DG: "Attackers are resource-constrained too."

In order to scale, different classes of threats become dependent on specific processes to achieve their goals. In this presentation, I investigated the workflows that mass malware groups have built out and identified the points in them that are most vulnerable to disruption. With this information, I can evaluate the precise impact, or lack of impact, of any given security decision an organization can make. This is fundamentally different from the due-diligence approach to security that most organizations use today, and it produces actual metrics for the effectiveness of a security program. Throughout the presentation, I introduced, defined, and walked through re-usable analytical techniques that viewers can apply to other threats they care about. If you're interested in seeing this presentation, all of our research can be found on our website at www.trailofbits.com.

CL: Does the principle of limited resources apply equally well to criminal groups, hacktivists, and nation states? Without rehashing the entire presentation, can you give a couple examples from each type of threat about where the likely resource constraints are?

DG: All the groups that you mentioned have limited resources and have to struggle with problems of scale. These constraints influence the tactics, techniques, and procedures (TTPs) that each group adopts and what they are able to achieve.

Let's take hacktivist groups as an example because we've seen many public examples of their work over the last year, and this lets us more easily draw conclusions about them. Thinking about hacktivists is particularly fun since they uniquely have no path to financial remuneration for their attacks. In this way, they're closely related to open-source projects and are similarly constrained by the people they can attract and the talent those people have. Since anyone can contribute to an open-source project, it would seem like their resources are infinite, but in reality we know they have the arduous task of convincing people to work for them for free. This is why open-source software hasn't really destroyed everything else, and it's one of the reasons why hacktivist groups don't display the level of operational sophistication that, say, APT or financial crime groups do.

In terms of their TTPs, this plays out in a variety of ways with hacktivist groups. Since hacktivist groups don't care as much about exactly what they're breaking into, they're perfectly fine trying low-overhead attacks like SQL injection, and wherever they get in, they get in. It's much less targeted and more opportunistic because they're not going after anything specific, they're going after the entire organization (see: http://attrition.org/security/rant/sony_aka_sownage.html). Hacktivist groups don't need as much sophistication as APT or other attack groups, but they can be equally effective in the embarrassment they cause. For their goals this is enough, and attacks that require infrastructure and time to develop and deploy, like client-side attack campaigns, zero-day browser exploits, or long-term persistence, are infrequently used by hacktivist threats. If you understand this, then you can understand how to best defend against them.

CL: Setting aside the hype about APT, there are still organizations out there which really DO face determined, sophisticated adversaries.  Does your research offer any advice beyond the mass malware threat?

DG: Of course! In fact, I've used these techniques primarily against APT groups and wanted to demonstrate that the same approach was applicable to other threats as well. I chose mass malware because they're an easy threat to beat up on, data about their operations is widely available, and they represent an impasse that an organization needs to overcome before it can effectively take on a more advanced adversary like APT.

Many of these techniques are incredibly well documented by Eric M. Hutchins et al. in a paper they released late last year, and I would recommend that anyone interested in this topic read their paper after seeing my presentation.

Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains.

http://papers.rohanamin.com/wp-content/uploads/papers.rohanamin.com/2011/08/iciw2011.pdf

CL: Sometimes it seems like IT security pros are losing hope.  I talk to many practitioners who are discouraged and overwhelmed.  The task of patch management, which seems simple on the surface, is a huge amount of work for most organizations.  You are one of the rare few who seem to offer some hope for defenders - is that true, and if so, what is your message for the doom and gloom crowd?

DG: I think the key problem here is that people are unable to connect the work that they're doing with the impact it has on attackers. That leads them to focus enormous amounts of resources on, for example, patching everything in sight rather than going about it more strategically or defending against their adversaries in an entirely different and less resource-intensive way. If you're able to connect the product of your work with its impact, you're going to be considerably more effective at describing and selling security initiatives within your organization. The way to do that is with attacker data and with attacker scenarios derived from actual events.

CL: In your work, where do you see IT security organizations wasting the most time and money?

DG: Organizations waste the most time and money scrambling to identify the defenses that work and the defenses that don't after they've experienced their first major incident. The common wisdom of doing your best at security and then hoping that nothing happens is misleading and counterproductive. Instead, organizations should identify attacks that their peers have experienced and talk to an expert to simulate these on paper or for real. The output of such an exercise is a set of actionable metrics about the effectiveness and gaps of the processes and technologies inside your organization. It's only after you know what you're defending against that you can make educated decisions about security and avoid the marketing-driven snake oil and ineffective best practices that are so prevalent in this industry.

Said another way, designing defenses without ever having them evaluated by a good attacker is kind of like learning one of those martial arts that look more like dancing than fighting. They look nice, but when you get into a fight, your dance kungfu isn't going to keep you from getting your ass kicked.

CL: You have mentioned that organizations should view the desktop environment as one big public attack surface. Why is that and how do you see trends like VDI and application sandboxing affecting this?

DG: We built DMZs to tightly contain and control access to our firms' most critical assets, but over the last 10 years those critical assets have come to reside on systems just as directly connected to the internet but without such protections: our desktop computers. For an attacker, these systems are effortless to interact with from outside the perimeter and can be precisely targeted to specific individuals when necessary. E-mail, social networking, even targeted advertisements on general purpose websites allow attackers to directly interact with the systems that hold your critical assets today.

Sandboxing and Virtual Desktop Infrastructure (VDI) are steps in the right direction and allow organizations to gradually separate their assets from applications that are under direct attacker control, with the eventual goal of total separation. I would caution that many VDI solutions are built for ease of management rather than security and don't typically isolate applications well enough to prevent attacks. Organizations should ask for reviews of such technologies by teams capable of performing real exploitation before relying on them as a barrier. This is in contrast to application sandboxes like those in Google Chrome and Adobe Reader X, which have undergone such study and have had a demonstrable mitigating effect on exploitation in the wild.

It's unfortunate that widespread implementation of these technologies will take quite a long time. In the near to medium term, sandboxing will have little to no impact on attackers' abilities to perform successful attacks. For instance, APT did not suddenly disappear or even radically change strategies when Adobe Reader X was released, and it won't when the next application is sandboxed either. It's going to take years.

CL: What would be your top 3 practical and easy recommendations for typical IT security organizations?

DG:

  1. Have a major compromise, but make sure it happens on paper. Most organizations are completely unaware of how an actual attack looks until one happens to them, but one of the most surprising developments of last year was the wealth of compromises that occurred in full view of the public. At this point, we know almost every step of how Google, RSA, Sony, and others were compromised: what if these same attackers had targeted your company instead?
  2. Acknowledge that mass malware continues to abuse old vulnerabilities with unreliable exploit code. Enable simple memory protections like Data Execution Prevention (DEP), and consider the Enhanced Mitigation Experience Toolkit (EMET) or Google Chrome Frame if you are unable to switch to a newer browser altogether. In particular, I like that these technologies provide an effective toggle switch that organizations can use when the risk of exploitation by a zero-day or other vulnerability is increased. Long term, organizations should understand that the needs of their intranet browser and their internet browser are diverging and they need a more secure, constantly updated browser that moves at “internet speed” to browse the web safely.
  3. Identify the common methods that malware uses to persist and detect them with standard desktop management or security tools. In practice, there are a very small number of locations in the registry and on disk that malware likes to start from, and those entries typically have a variety of other characteristics that give them away: they're usually unsigned, impersonating Microsoft binaries in the wrong locations, composed of random filenames, or only found on one or two hosts in an organization at a time (see the sketch after this list).
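
To make the third recommendation concrete, here is a minimal sketch of what a first pass over common autorun locations could look like. It assumes a Windows host with Python available, checks only a handful of the registry run keys that mass malware commonly abuses, and uses a crude user-writable-directory heuristic; the key list and heuristic are illustrative assumptions, not Dan's actual tooling. A real deployment would lean on desktop management or security tools, verify digital signatures, and compare entries across hosts as he describes.

```python
import winreg

# A few registry autorun locations commonly abused for persistence.
# Illustrative subset only; real tooling covers many more locations.
RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\RunOnce"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

# User-writable directories that legitimate autorun entries rarely launch from.
SUSPICIOUS_DIRS = ("\\appdata\\", "\\temp\\", "\\users\\public\\")

def autorun_entries():
    """Yield (key_path, value_name, command) for each autorun entry found."""
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key not present on this host
        index = 0
        while True:
            try:
                name, command, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            yield path, name, command
            index += 1
        winreg.CloseKey(key)

def looks_suspicious(command):
    """Flag commands that launch from user-writable directories."""
    return isinstance(command, str) and any(
        d in command.lower() for d in SUSPICIOUS_DIRS
    )

if __name__ == "__main__":
    for path, name, command in autorun_entries():
        marker = "SUSPICIOUS" if looks_suspicious(command) else "ok"
        print(f"[{marker}] {path}\\{name}: {command}")
```

Even a crude pass like this surfaces the small set of start locations Dan mentions; checking signatures and flagging binaries that impersonate Microsoft files in the wrong paths would be the natural next steps.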

CL: We've chatted about some of your research-in-progress. Can you tell us what you're working on next, and what are some of your predictions for attack patterns over the next 18 months?

DG: Coming up in April, I'll be publishing an intelligence-driven analysis of mobile phone exploits with Mike Arpaia, an ex-coworker of mine from iSEC Partners. We're comprehensively mapping out all of the exploits that exist for Android and iOS and we'll be using that data to chart a course for the future of mobile malware. This presentation should accurately describe the attacks that enterprises are likely to experience if they roll out these devices to their workforce as well as evaluate the effectiveness of current mobile security products.

As for predictions over the next 18 months, I think it's important to break them out by threat group, so here are three of mine for APT, mass malware, and hacktivist groups:

  1. As adoption of application sandboxes increases, APT groups will demonstrate the ability to break out of them. They will not target specific application sandbox implementations; rather, they will rely on more generic Windows kernel exploits. If this happens, it validates that the architecture and implementation of the sandbox in the targeted application are effective, since the exploit developer found it easier to avoid the sandbox than to attack it head-on.
  2. Mass malware groups will continue their operations unchanged. Their ability to exploit client-side vulnerabilities will decline due to increased adoption of modern web browsers and their own inability to perform customized exploit development. They will lack the necessary skills to take advantage of kernel exploits inadvertently disclosed in APT attacks. Instead, they will continue to innovate on social engineering, and this will become their dominant infection vector over the long term.
  3. The continued, seemingly at-will success of hacktivist groups in compromising organizations will cause companies to question their investments in security and to demand greater justification of the effectiveness of proposed products and services. Hacktivists will continue to use SQL injection, remote file includes, and other remotely accessible web application flaws as their primary attack vector. Hacktivist groups will avoid client-side attack campaigns, like those used by APT groups, since they are too slow to gain access and require significantly more investment in infrastructure and coordination.

CL: You also teach security at NYU-Poly.  What recommendations do you have for students wanting to enter the security industry?

DG: Learn to code, develop big projects, and do at least some of them in C. Participate in capture the flag competitions and war games. Disregard social media and what the security industry thinks is cool right now. Code repositories, research projects, and CTF standings are the certifications you want to have. Attend local security events, meet people in-person, and demonstrate your competence to them.

I tried to collect all my thoughts about this for my students on my course website: http://pentest.cryptocity.net/careers

CL: What security-related studies or papers have you found surprising and illuminating over the last year? How has your thinking changed?

DG: I thought 2011 was a great year for security research, particularly for our understanding of attacker behavior and capabilities. If you're going to read any papers or presentations from the last year, I would recommend the following:

  1. Eric Hutchins et al.'s paper on Intelligence-Driven Security. This paper describes the approach of Intelligence-Driven Defense, defines a common language for practitioners to use, and walks the reader through a scenario with an attack that was previously observed. http://papers.rohanamin.com/wp-content/uploads/papers.rohanamin.com/2011/08/iciw2011.pdf
  2. UCSD's Click Trajectories. Replace "value chain" with "kill chain" and you might get deja vu after reading the last paper. The UCSD folks use different language, but they further demonstrated the effectiveness of an intelligence-driven approach against professional spammers -- a threat that most accept as a reality of existing on the internet these days. http://cseweb.ucsd.edu/~savage/papers/Oakland11.pdf
  3. Dino Dai Zovi's Attacker Math 101. In this presentation, Dino describes a body of analytical techniques for charting the future of exploitation of a given platform. He describes a language for modeling the actions and incentives of an exploit developer and then applies those techniques to one of the most active exploitation communities today: iOS jail-breakers. http://blog.trailofbits.com/2011/08/09/attacker-math-101/
  4. Microsoft's Mitigating Software Vulnerabilities. Writing exploits is incredibly hard. Many people don't understand this and simply equate knowledge of a vulnerability with the development of an exploit. Remember that your attackers are resource-constrained too. This paper will help you understand just how many resources one needs to expend to overcome modern memory protections like those offered by Microsoft's developer toolchain. http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=26788

CL: What sorts of research and data would you like to see coming from the industry?

DG: I think more people need to learn from and understand what attackers are using against them. In fact, I'd like to see it made an informal requirement for publishing in the future that papers describing defensive techniques evaluate their effectiveness against observed attacker behavior and capabilities. We have several good standards now. Authors can readily use intrusion kill chains, courses of action, or value chains, define their adversary, and provide a meaningful estimation of the utility of their contribution.

CL: What should we expect from Trail of Bits in the next few months? Are you going to be at RSA?

DG: We're entirely focused on product development right now so, if everything goes according to plan, the answer to your first question should be “not much.” Dino and I will be speaking more about our research and general approach to security at RSA, Blackhat EU, and SOURCE Boston, but we're holding off on announcing anything related to our product offerings until we're convinced that they're ready to ship. If you're interested in hearing more about what our company is up to, you can sign up for our mailing list on the Trail of Bits website and you'll be the first to know when we have something to release or when we're looking for beta testers.