Rapid7 Blog

Red Team  

Penetration Test vs. Red Team Assessment: The Age Old Debate of Pirates vs. Ninjas Continues


In a fight between pirates and ninjas, who would win? I know what you are thinking. “What in the world does this have to do with security?” Read on to find out, but first, make a choice: pirates or ninjas? Before making that choice, we must know the strengths and weaknesses of each:

Pirates
Strengths: Strong, Brute-Force Attack, Great at Plundering, Long-Range Combat
Weaknesses: Loud, Drunk (some say this could be a strength too), Can Be Careless

Ninjas
Strengths: Fast, Stealthy, Dedicated to Training, Hand-to-Hand/Sword Combat
Weaknesses: No Armor, Small

It comes down to which is more useful in different situations. If you are looking for treasure buried on an island and may run into the Queen's Navy, you probably do not want ninjas. If you are trying to assassinate someone, then pirates are probably not the right choice. The same is true of Penetration Tests and Red Team Assessments. Each has strengths and weaknesses and is suited to specific circumstances. To get the most value, first determine what your goals are, then decide which assessment best corresponds with those goals.

Penetration Testing

Penetration testing is usually rolled into one big umbrella with all security assessments. Many people do not understand the differences between a Penetration Test, a Vulnerability Assessment, and a Red Team Assessment, so they call them all Penetration Testing. This is a misconception: while they may have similar components, each one is different and should be used in a different context. At its core, real Penetration Testing means finding as many vulnerabilities and configuration issues as possible in the time allotted, and exploiting those vulnerabilities to determine the risk each one poses. This does not necessarily mean uncovering new vulnerabilities (zero days); more often it means looking for known, unpatched vulnerabilities.
Just like a Vulnerability Assessment, a Penetration Test is designed to find vulnerabilities and confirm they are not false positives. However, a Penetration Test goes further: the tester attempts to exploit the vulnerabilities. This can be done in numerous ways, and once a vulnerability is exploited, a good tester will not stop. They will continue to find and exploit other vulnerabilities, chaining attacks together, to reach their goal. Each organization is different, so this goal may change, but it usually includes access to Personally Identifiable Information (PII), Protected Health Information (PHI), and trade secrets. Sometimes this requires Domain Administrator access; often it does not, or Domain Administrator access is not enough.

Who needs a penetration test? Some governing authorities require it, such as SOX and HIPAA, but organizations already performing regular internal security audits and implementing security training and monitoring are likely ready for a penetration test.

Red Team Assessment

A Red Team Assessment is similar to a penetration test in many ways but is more targeted. The goal of the Red Team Assessment is NOT to find as many vulnerabilities as possible; it is to test the organization's detection and response capabilities. The red team will try to get in and access sensitive information in any way possible, as quietly as possible. The Red Team Assessment emulates a malicious actor who targets attacks and looks to avoid detection, similar to an Advanced Persistent Threat (APT). (Ugh! I said it…) Red Team Assessments are also normally longer in duration than Penetration Tests: a Penetration Test often takes place over one to two weeks, whereas a Red Team Assessment may run three to four weeks or longer and often involves multiple people. A Red Team Assessment does not look for multiple vulnerabilities, only for the vulnerabilities that will achieve its goals, which are often the same as those of a Penetration Test.
Methods used during a Red Team Assessment include Social Engineering (physical and electronic), Wireless, External, and more. A Red Team Assessment is NOT for everyone, though; it should be performed by organizations with mature security programs. These are organizations that have penetration tests done regularly, have patched most vulnerabilities, and have generally positive penetration test results.

A Red Team Assessment might play out like this: a member of the Red Team poses as a FedEx delivery driver and gains access to the building. Once inside, the team member plants a device on the network for easy remote access. The device tunnels out over a common port allowed outbound, such as port 80, 443, or 53 (HTTP, HTTPS, or DNS), and establishes a command and control (C2) channel to the Red Team's servers. Another team member picks up the C2 channel and pivots around the network, possibly using insecure printers or other devices to draw attention away from the planted device. The team then pivots toward its goal, taking its time to avoid detection. This is just one of innumerable ways a Red Team may operate, but it is a good example of some tests we have performed.

So... Pirates or Ninjas?

Back to pirates vs. ninjas. If you guessed that Penetration Testers are pirates and Red Teams are ninjas, you are correct. Is one better than the other? Often Penetration Testers and Red Teams are the same people, using different methods and techniques for different assessments. The true answer to Penetration Test vs. Red Team is just like pirates vs. ninjas: one is not necessarily better than the other. Each is useful in certain situations. You would not want to use pirates to perform stealth operations, and you would not want to use ninjas to sail the seas looking for treasure.
Similarly, you would not want to use a Penetration Test to judge how well your incident response works, and you would not want to use a Red Team Assessment to discover vulnerabilities.
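The tunneling step in the Red Team example above works because egress filtering rarely blocks outbound traffic on ports 80, 443, and 53. A quick way to see which outbound ports a network actually allows is a simple TCP egress check. The sketch below is illustrative, not a tool from the post; the function name and the `portquiz.net` target are my own choices:

```python
import socket

def egress_check(host, ports, timeout=2.0):
    """Return the subset of `ports` on which an outbound TCP
    connection to `host` succeeds (i.e., egress is allowed)."""
    reachable = []
    for port in ports:
        try:
            # create_connection raises OSError when the connection is
            # refused, times out, or is dropped by a firewall.
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass
    return reachable

# Hypothetical usage from inside the target network (requires internet access):
#   egress_check("portquiz.net", [53, 80, 443])
# portquiz.net is a public service that accepts TCP connections on any port,
# which makes it convenient for egress testing.
```

A real implant would then layer its C2 protocol (HTTPS requests, DNS queries, and so on) over whichever port is open, which is why inspecting the content of traffic on these "allowed" ports matters as much as which ports are open.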

All Red Team, All the Time


In last week's blog (which you should read now if you have not), I said: The core problem with security today isn't about technology. It's about misaligned incentives. We are trying to push security onto people, teams, and processes that just don't want it.

To be clear, it's not that people don't care. They say they want security, and I believe them. Or more precisely, part of their brain wants security. People who want to break a bad habit, lose weight, or stop smoking all want to achieve their goals, but other parts of their brain are in charge. What matters are their actions and behaviors. Outsiders will judge you by the results, not your efforts, goals, and intentions.

How do we bridge the gap between people and organizations wanting to be more secure, and actually being more secure? Thinking about the long-term effects, how do we get from where we are now to a world in which breaches are rare? As I dreamed in the previous blog post, it's all about incentives that move responsibility from people with “security” in their title to people everywhere in the organization. I don't have any great answers (other than my “All red team, all the time” dream), but I will offer a few characteristics of an organization that will be more likely to be secure by design.

Product teams have to care more about the security of the data they collect than the security team does. To use an analogy (which I admit is always fraught with peril): I simply must care about my own health more than my personal trainer and doctor do, because when something goes wrong, I'm the one who has the heart attack, not them. Today the incentives don't line up that way in infosec. Teams regularly ignore or override the advice of their security “doctor,” and when the incident happens, the security teams often bear the brunt of the incident response process.
Everyone with Product Manager as a title should be well versed in the attacker lifecycle, the black market value of the data they collect, and the legal impact of a breach, and should have a written runbook for when all that private data is dumped on torrents (among other scenarios). With product teams as equal partners in securing the data, security is more likely.

The CEO is the implicit Chief Security Officer. She has to set the tone for everyone. She has to brag to her VPs about how she's tightening up her personal security. She needs to be the first to update her laptop OS, to experiment with a new secure instant messaging system, and to request security report cards for the various teams. She has to require each VP and Director to formally explain what they are doing to improve security in their areas (as opposed to putting the sole burden on the security team). She should ask them to explain why they are collecting and holding customer and employee data for so long. What really matters is not what the CEO says to the security team about security; it's what the CEO says to everyone else about security when the security team isn't present. Small, continuous reinforcement is stronger than a single bold pronouncement.

Everyone thinks like an attacker. You are up against dedicated, human adversaries. After you make a move to improve security, your adversaries will decide what to do, if anything. When you start to think this way consistently, it gives you new perspective. Your company does a lot of work to pass the audits, build ISO or NIST controls, train people, roll out a new IDS system, refactor networks, implement an SDL, and many other hard, painful, expensive things. But when you view your results through the lens of an attacker, you may find that it's not enough; that it's necessary, but not sufficient. Or that you overinvested in one area, like Protect, at the expense of Detect, Respond, or Recover.
If you knew for a fact that you were going to be attacked tonight, what would you do differently? If you knew you had an intruder in your networks right now, what would you do? Thinking like an attacker doesn't devalue all the hard things you do to defend. It gives you the perspective to know whether your defenses are sufficient and balanced, and whether you've changed the incentives and economics for the adversary.

Those are a few characteristics that will lead to a more secure organization. I'm sure there are others. Let me be blunt: until those things happen, compromises and breaches are inevitable, because the incentives are misaligned.

Have a story or a dream for me about incentives that worked? Or went awry? Drop me a line on Twitter at @boblord. If you want to tell me confidentially, send me a DM! My settings allow you to DM me even if I don't follow you.
