Rapid7 Blog

Jen Ellis  



Patching CVE-2017-7494 in Samba: It's the Circle of Life


With the scent of scorched internet still lingering in the air from the WannaCry Ransomworm, today we see a new scary-and-potentially-incendiary bug hitting the Twitter news. The vulnerability - CVE-2017-7494 - affects versions 3.5 (released March 1, 2010) and onwards of Samba, the de facto standard for providing Windows-based file and print services on Unix and Linux systems. Check out Samba's advisory for more details. We strongly recommend that security and IT teams take immediate action to protect themselves.

Who is affected?

Many home and corporate network storage systems run Samba, and it is frequently installed by default on many Linux systems, making it possible that some users are running Samba without realizing it. Given how easy it is to enable Samba on Linux endpoints, even devices requiring it to be manually enabled will not necessarily be in the clear. Samba makes it possible for Unix and Linux systems to share files the same way Windows does.

While the WannaCry Ransomworm impacted Windows systems and was easily identifiable, with clear remediation steps, the Samba vulnerability will impact Linux and Unix systems and could present significant technical obstacles to obtaining or deploying appropriate remediations. These obstacles will most likely present themselves where devices are unmanaged by typical patch deployment solutions or don't allow OS-level patching by the user. As a result, we believe those systems may be likely conduits into business networks.

How bad is it?

The internet is not on fire yet, but there's a lot of potential for it to get pretty nasty. If there is a vulnerable version of Samba running on a device, and a malicious actor has access to upload files to that machine, exploitation is trivial. In a Project Sonar scan run today, Rapid7 Labs discovered more than 104,000 internet-exposed endpoints that appear to be running vulnerable versions of Samba on port 445.
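As a first triage step, the version string reported by `smbd --version` can be compared against the affected range. The helper below is our own illustrative sketch, not an official Samba or Rapid7 check; it assumes the fixed releases named in the Samba advisory (4.4.14, 4.5.10, and 4.6.4). Note that distributions frequently backport security fixes without changing the upstream version number, so a version match alone does not prove exposure.

```shell
# Sketch: classify a Samba version string against CVE-2017-7494.
# Affected: 3.5.0 onward; first fixed releases per the Samba advisory
# are 4.4.14, 4.5.10, and 4.6.4. Treat the result as a triage hint only,
# since distro packages may carry backported fixes.
samba_cve_2017_7494_status() {
    maj=$(printf '%s' "$1" | cut -d. -f1)
    min=$(printf '%s' "$1" | cut -d. -f2)
    pat=$(printf '%s' "$1" | cut -d. -f3)
    pat=${pat%%[!0-9]*}    # strip distro suffixes such as "11-Ubuntu"
    pat=${pat:-0}
    # Releases before 3.5 predate the vulnerable code path.
    if [ "$maj" -lt 3 ] || { [ "$maj" -eq 3 ] && [ "$min" -lt 5 ]; }; then
        echo "not affected"; return
    fi
    # First fixed release in each series, and anything newer.
    if [ "$maj" -gt 4 ]; then echo "patched"; return; fi
    if [ "$maj" -eq 4 ]; then
        if [ "$min" -ge 7 ]; then echo "patched"; return; fi
        if [ "$min" -eq 6 ] && [ "$pat" -ge 4 ];  then echo "patched"; return; fi
        if [ "$min" -eq 5 ] && [ "$pat" -ge 10 ]; then echo "patched"; return; fi
        if [ "$min" -eq 4 ] && [ "$pat" -ge 14 ]; then echo "patched"; return; fi
    fi
    # Everything from 3.5.x through the last unpatched 4.x releases.
    echo "vulnerable"
}

# Example: feed it the version smbd reports, e.g.
#   samba_cve_2017_7494_status "$(smbd --version | awk '{print $2}')"
samba_cve_2017_7494_status "4.4.13"    # prints: vulnerable
```

On a fleet, the same check can be run over inventory data exported from a configuration management system rather than by shelling into each host.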
Of those, almost 90% (92,570) are running versions for which there is currently no direct patch available. In other words, “We're way beyond the boundary of the Pride Lands.” (Sorry - we promise that's the last Lion King reference. Maybe.)

We've been seeing a significant increase in malicious traffic to port 445 since May 19th; however, the recency of the WannaCry vulnerability makes it difficult for us to attribute this directly to the Samba vulnerability. It should be noted that proof-of-concept exploit code has already appeared on Twitter, and we are seeing Metasploit modules making their way into the community. We will continue to scan for potentially vulnerable endpoints and will provide an update on numbers in the next few days.

RESEARCH UPDATE – 5/25/17 – We have now run a scan on port 139, which also exposes Samba endpoints. We found very similar numbers to those for the scan of port 445. On port 139, we found approximately 110,000 internet-exposed endpoints running vulnerable versions of Samba. Of these, about 91% (99,645) are running older, unsupported versions of Samba (pre-4.4).

What should you do to protect yourself?

The makers of Samba have provided a patch for versions 4.4 onwards. A workaround for unsupported and vulnerable older versions (3.5.x to 4.4.x) is available, and that same workaround can also be used for supported versions that cannot upgrade. We also recommend that users of older, affected versions upgrade to a more recent, supported version of Samba (4.4 or later) and then apply the available patch.

Organizations should review their official asset and configuration management systems to immediately identify vulnerable systems, and then perform comprehensive and regular full network vulnerability scans to identify misconfigured or rogue systems. Additionally, organizations should review their firewall rules to ensure that SMB/Samba network traffic is not allowed directly from the internet to their assets.
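For reference, the workaround described in Samba's advisory is a single parameter added to the [global] section of smb.conf, followed by a restart of smbd. The snippet below is illustrative; check the advisory and your vendor's guidance before relying on it, since the setting can disable functionality expected by Windows clients.

```ini
[global]
    # CVE-2017-7494 workaround per the Samba advisory: stops clients
    # from accessing named pipe endpoints, at the cost of some
    # functionality expected by Windows clients.
    nt pipe support = no
```

After making the change, restart smbd for it to take effect.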
Many network-attached storage (NAS) environments are used as network backup systems. A direct attack or worm would render those backups almost useless, so if patching cannot be done immediately, we recommend creating an offline copy of critical data as soon as possible. In addition, organizations should monitor all internal and external network traffic for increases in connections or connection attempts to Windows file sharing protocols.

How can Rapid7 help?

We are working on checks for Rapid7 InsightVM and Rapid7 Nexpose so customers can scan their environments for vulnerable endpoints and take mitigating action as quickly as possible. We also expect a module in the Metasploit Framework very soon, enabling security professionals to test the effectiveness of their mitigations and understand the potential impact of exploitation. We will notify users as soon as these solutions are available.

PRODUCT UPDATE – 5/25/17 – We have authenticated checks available for Samba CVE-2017-7494 in Rapid7 InsightVM and Rapid7 Nexpose. The authenticated checks relate to vendor-specific fixes as follows:

ubuntu-cve-2017-7494
debian-cve-2017-7494
freebsd-cve-2017-7494
oracle_linux-cve-2017-7494
redhat_linux-cve-2017-7494
suse-cve-2017-7494

PRODUCT UPDATE 2 – 5/25/17 – We now have both authenticated and unauthenticated remote checks in Rapid7 InsightVM and Rapid7 Nexpose. In the unauthenticated cases we use anonymous or guest login to gather the required information; on systems that are hardened against that kind of login, the authenticated remote check is available.

Not a Rapid7 customer? Scan your network with InsightVM to understand the impact this vulnerability has on your organization. We also have a step-by-step guide on how to scan for Samba CVE-2017-7494 using our vulnerability scanners.
PRODUCT UPDATE 3 – 5/25/17 – We now have a Metasploit module available for this vulnerability, so you can see whether you can be exploited via Samba CVE-2017-7494 and understand the impact of such an attack. Download Metasploit to try it out.

P.S. Yes, we know the lion is called Simba. But who doesn't love a gratuitous and tenuous cartoon lion reference?! Rowr.

Rapid7's Position on the U.S. Executive Order on Immigration


On Friday, January 27th, 2017, the White House issued an Executive Order entitled, “Protecting The Nation from Foreign Terrorist Entry into The United States.” As has been well-publicized, the Order suspends some immigration from seven Muslim-majority countries — Syria, Yemen, Sudan, Somalia, Iraq, Iran and Libya — for 90 days, halts the refugee program for 120 days, and suspends the admission of Syrian refugees indefinitely. Since being issued on Friday, it has resulted in thousands of people being stranded and detained, away from their homes and families, and facing immense uncertainty over their futures.

Below is the response that Rapid7's president and CEO, Corey Thomas, shared with media over the weekend:

“As a midsize company with a global customer and employee base, these actions increase fear, uncertainty, and the cost of running a business, without clearly articulated security benefits. I believe that this action not only risks serious harm to innocent lives, it also weakens the position of US companies over time, and thus weakens the US economy. I am supportive of thoughtful security measures that are clearly communicated and well executed; however, these executive actions do not meet that standard.

“We want to applaud and thank the Massachusetts senators and representatives that have taken a stand against this action.”

We hope for swift and positive resolution for all those adversely affected by this Order.

Research Report: Vulnerability Disclosure Survey Results


When cybersecurity researchers find a bug in product software, what's the best way for the researchers to disclose the bug to the maker of that software? How should the software vendor receive and respond to researchers' disclosure? Questions like these are becoming increasingly important as more software-enabled goods - and the cybersecurity vulnerabilities they carry - enter the marketplace. But more data is needed on how these issues are being dealt with in practice.

Today we helped publish a research report [PDF] that investigates attitudes and approaches to vulnerability disclosure and handling. The report is the result of two surveys – one for security researchers, and one for technology providers and operators – launched as part of a National Telecommunications and Information Administration (NTIA) “multistakeholder process” on vulnerability disclosure. The process split into three working groups: one focused on building norms/best practices for multi-party complex disclosure scenarios; one focused on building best practices and guidance for disclosure relating to “cyber safety” issues; and one focused on driving awareness and adoption of vulnerability disclosure and handling best practices. It is this last group, the “Awareness and Adoption Working Group,” that devised and issued the surveys in order to understand what researchers and technology providers are doing on this topic today, and why.

Rapid7 - along with several other companies, organizations, and individuals - participated in the project (in full disclosure, I am co-chair of the working group) as part of our ongoing focus on supporting security research and promoting collaboration between the security community and technology manufacturers. The surveys, issued in April, investigated the reality around awareness and adoption of vulnerability disclosure best practices.
I blogged at the time about why the surveys were important: in a nutshell, while the topic of vulnerability disclosure is not new, adoption of recommended practices is still seen as relatively low. The relationship between researchers and technology providers/operators is often characterized as adversarial, with friction arising from a lack of mutual understanding. The surveys were designed to uncover whether these perceptions are exaggerated, outdated, or truly indicative of what's really happening. In the latter instance, we wanted to understand the needs or concerns driving behavior. The survey questions focused on past or current behavior for reporting or responding to cybersecurity vulnerabilities, and processes that worked or could be improved.

One quick note – our research efforts were somewhat imperfect because, as my data scientist friend Bob Rudis is fond of telling me, we effectively surveyed the internet (sorry Bob!). This was really the only pragmatic option open to us; however, it did result in a certain amount of selection bias in who took the surveys. We made a great deal of effort to promote the surveys as far and wide as possible, particularly through vertical sector alliances and information sharing groups, but we expect respondents have likely dealt with vulnerability disclosure in some way in the past. Nonetheless, we believe the data is valuable, and we're pleased with the number and quality of responses. There were 285 responses to the vendor survey and 414 to the researcher survey. View the infographic here [PDF].

Key findings

Researcher survey

- The vast majority of researchers (92%) generally engage in some form of coordinated vulnerability disclosure. When they have gone a different route (e.g., public disclosure) it has generally been because of frustrated expectations, mostly around communication.
- The threat of legal action was cited by 60% of researchers as a reason they might not work with a vendor to disclose.
- Only 15% of researchers expected a bounty in return for disclosure, but 70% expected regular communication about the bug.

Vendor survey

- Vendor responses were generally separable into “more mature” and “less mature” categories. Most of the more mature vendors (between 60 and 80%) used all the processes described in the survey.
- Most “more mature” technology providers and operators (76%) look internally to develop vulnerability handling procedures, with smaller proportions looking at their peers or at international standards for guidance.
- More mature vendors reported that a sense of corporate responsibility or the desires of their customers were the reasons they had a disclosure policy.
- Only one in three surveyed companies considered and/or required suppliers to have their own vulnerability handling procedures.

Building on the data for a brighter future

With the rise of the Internet of Things, we are seeing unprecedented levels of complexity and connectivity for technology, introducing cybersecurity risk in all sorts of new areas of our lives. Adopting robust mechanisms for identifying and reporting vulnerabilities, and building productive models for collaboration between researchers and technology providers/operators, has never been so critical. It is our hope that this data can help guide future efforts to increase awareness and adoption of recommended disclosure and handling practices.

We have already seen some very significant evolutions in the vulnerability disclosure landscape – for example, the DMCA exemption for security research; the FDA post-market guidance; and proposed vulnerability disclosure guidance from NHTSA. Additionally, in the past year, we have seen notable names in defense, aviation, automotive, and medical device manufacturing and operating all launch high-profile vulnerability disclosure and handling programs.
These steps are indicative of an increased level of awareness and appreciation of the value of vulnerability disclosure, and each paves the way for yet more widespread adoption of best practices. The survey data itself offers a hopeful message in this regard - many of the respondents indicated that they clearly understand and appreciate the benefits of a coordinated approach to vulnerability disclosure and handling. Importantly, both researchers and more mature technology providers indicated a willingness to invest time and resources into collaborating so they can create more positive outcomes for technology consumers.

Yet, there is still a way to go. The data also indicates that to some extent, there are still perception and communication challenges between researchers and technology providers/operators, the most worrying of which is that 60% of researchers indicated concern over legal threats. Responding to these challenges, the report advises that:

“Efforts to improve communication between researchers and vendors should encourage more coordinated, rather than straight-to-public, disclosure. Removing legal barriers, whether through changes in law or clear vulnerability handling policies that indemnify researchers, can also help. Both mature and less mature companies should be urged to look at external standards, such as ISOs, and further explanation of the cost-savings across the software development lifecycle from implementation of vulnerability handling processes may help to do so.”

The bottom line is that more work needs to be done to drive continued adoption of vulnerability disclosure and handling best practices. If you are an advocate of coordinated disclosure – great! – keep spreading the word. If you have not previously considered it, now is the perfect time to start investigating it. ISO 29147 is a great starting point, or take a look at some of the example policies, such as those from the Department of Defense or Johnson & Johnson.
If you have questions, feel free to post them here in the comments or contact community [at] rapid7 [dot] com.

As a final thought, I would like to thank everyone who provided input and feedback on the surveys and the resulting data analysis - there were a lot of you, and many of you were very generous with your time. And I would also like to thank everyone who filled in the surveys - thank you for lending us a little insight into your experiences and expectations.

~ @infosecjen

Vulnerability Disclosure and Handling Surveys - Really, What's the Point?


Maybe I'm being cynical, but I feel like that may well be the thought that a lot of people have when they hear about two surveys posted online this week to investigate perspectives on vulnerability disclosure and handling. Yet despite my natural cynicism, I believe these surveys are a valuable and important step towards understanding the real status quo around vulnerability disclosure and handling, so the actions taken to drive adoption of best practices will be more likely to have impact. Hopefully this blog will explain why I feel this way.

Before we get into it, here are the surveys:

One for technology providers and operators: https://www.surveymonkey.com/r/techprovider
One for security researchers: https://www.surveymonkey.com/r/securityresearcher

A little bit of background…

In March 2015, the National Telecommunications and Information Administration (NTIA) issued a request for comment to “identify substantive cybersecurity issues that affect the digital ecosystem and digital economic growth where broad consensus, coordinated action, and the development of best practices could substantially improve security for organizations and consumers.” Based on the responses they received, they then announced that they were convening a “multistakeholder process concerning collaboration between security researchers and software and system developers and owners to address security vulnerability disclosure.”

This announcement was met by the deafening sound of groaning from the security community, many of whom have already participated in countless multistakeholder processes on this topic. The debate around vulnerability disclosure and handling is not new, and it has a tendency to veer towards the religious, with security researchers on one side, and technology providers on the other.
Despite this, there have been a number of good faith efforts to develop best practices so researchers and technology providers can work more productively together, reducing the risk on both sides, as well as for end-users. This work has even resulted in two ISO standards (ISO 29147 & ISO 30111) providing vulnerability disclosure and handling best practices for technology providers and operators. So why did the NTIA receive comments proposing this topic? And of all the things proposed, why did they pick this as their first topic?

In my opinion, it's for two main, connected reasons. Firstly, despite all the phenomenal work that has gone into developing best practices for vulnerability disclosure and handling, adoption of these practices is still very limited. Rapid7 conducts quite a lot of vulnerability disclosures, either for our own researchers, or on occasion for researchers in the Metasploit community that don't want to deal with the hassle. Anecdotally, we reckon we receive a response to these disclosures maybe 20% of the time. The rest of the time, it's crickets. In fact, at the first meeting of the NTIA process in Berkeley, Art Manion of the CERT Coordination Center commented that they've taken to sending registered snail mail as it's the only way they can be sure a disclosure has been received. It was hard to tell if that's a joke or true facts.

So adoption still seems to be a challenge, and maybe some people (like me) hope this process can help. Of course, the efforts that went before tried to drive adoption, so why should this one be any different? This brings me to the second of my reasons for this project, namely that the times have changed, and with them the context. In the past five years, we've seen a staggering number of breaches reported in the news; we've seen high-profile branded vulnerability disclosures dominate headlines and put security on the executive team's radar.
We've seen bug bounties starting to be adopted by the more security-minded companies. And importantly, we've seen the Government start to pay attention to security research – we've seen that in the DMCA exemption recently approved, the FDA post-market guidance being proposed, the FTC's presence at DEF CON, the Department of Defense's bug bounty, and of course, in the very fact that the NTIA picked this topic. None of these factors alone creates a turn of the tide, but combined, they just might provide an opportunity for us to take a step forward.

And that's what we're talking about here – steps. It's important to remember that complex problems are almost never solved overnight. The work done in this NTIA process builds on work conducted before: for example, the development of best practices; the disclosure of vulnerability research; efforts to address or fix those bugs; the adoption of bug bounties. All of these pieces make up a picture that reveals a gradual shift in the culture around vulnerability disclosure and handling. Our efforts, should they yield results, will also not be a panacea, but we hope they will pave the way for other steps forward in the future.

OK, but why do we need surveys?

As I said above, discussions around this tend to become a little heated, and there's not always a lot of empathy between the two sides, which doesn't make for great potential for finding resolution. A lot of this dialogue is fueled by assumptions.

My experience and resulting perspective on this topic stems from having worked on both sides of the fence – first as a reputation manager for tech companies (where my reaction to a vulnerability disclosure would have been to try to kill it with fire); and then more recently I have partnered with researchers to get the word out about vulnerabilities, or have coordinated Rapid7's efforts to respond to major disclosures in the community.
At different points I have responded with indignation on behalf of my tech company client, who I saw as being threatened by those Shady Researcher Types, and then later on behalf of my researcher friends, who I have seen threatened by those Evil Corporation Types. I say that somewhat tongue-in-cheek, but I do often hear that kind of dialogue coming from the different groups involved, and much worse besides. There are a lot of stereotypes and assumptions in this discussion, and I find they are rarely all that true.

I thought my experience gave me a pretty good handle on the debate and the various points of view I would encounter. I thought I knew the reality behind the hyperbolic discourse, yet I find I am still surprised by the things I hear.

For example, it turns out a lot of technology providers (both big and small) don't think of themselves as such, and so they are in the “don't know what they don't know” bucket. It also turns out a lot of technology operators are terrified of being extorted by researchers. I've been told that a few times, but had initially dismissed it as hyperbole, until an incredibly stressed security professional working at a non-profit and trying to figure out how to interpret an inbound from a researcher contacted me asking for help. When I looked at the communication from the researcher, I could absolutely understand his concern.

On the researcher side, I've been saddened by the number of people that tell me they don't want to disclose findings because they're afraid of legal threats from the vendor. Yet more have told me they see no point in disclosing to vendors because they never respond. As I said above, we can relate to that point of view! At the same time, we recently disclosed a vulnerability to Xfinity, and missed disclosing through their preferred reporting route (we disclosed to Xfinity addresses, and their recommendation is to use abuse@comcast.net).
When we went public, they pointed this out, and were actually very responsive and engaged regarding the disclosure. We realized that we've become so used to a lack of response from vendors that we stopped pushing ourselves to do everything we can to get one. If we care about reaching the right outcome to improve security – and we do – we can't allow ourselves to become defeatist.

My point here is that assumptions may be based on past experience, but that doesn't mean they are always correct, or even still correct in the current context. Assumptions, particularly erroneous ones, undermine our ability to understand the heart of the problem, which reduces our chances of proposing solutions that will work. Assumptions and stereotypes are also clear signs of a lack of empathy. How will we ever achieve any kind of productive collaboration, compromise, or cultural evolution if we aren't able or willing to empathize with each other? I rarely find that anyone is motivated by purely nefarious motives, and understanding what actually does motivate them and why is the key to informing and influencing behavior to effect positive change. Even if in some instances it means that it's your own behavior that might change. :)

So, about those surveys…

The group that developed the surveys – the Awareness and Adoption Group participating in the NTIA process (not NTIA itself) – is comprised of a mixture of security researchers, technology providers, civil liberties advocates, policy makers, and vulnerability disclosure veterans and participants. It's a pretty mixed group and it's unlikely we all have the same goals or priorities in participating, but I've been very impressed and grateful that everyone has made a real effort to listen to each other and understand each other's points of view. Our goal with the surveys is to do that on a far bigger scale so we can really understand a lot more about how people think about this topic.
Ideally we will see responses from technology providers and operators, and security researchers that would not normally participate in something like the NTIA process, as they are the vast majority and we want to understand their (your?!) perspectives. We're hoping you can help us defeat any assumptions we may have - the only hypothesis we hope to prove out here is that we don't know everything and can still learn.

So please do take the survey that relates to you, and please do share them and encourage others to do likewise:

Survey for technology providers and operators: https://www.surveymonkey.com/r/techprovider
Survey for security researchers: https://www.surveymonkey.com/r/securityresearcher

Thank you!

@infosecjen

12 Days of HaXmas: Political Pwnage in 2015


This post is the ninth in the series, "The 12 Days of HaXmas."

2015 was a big year for cybersecurity policy and legislation; thanks to the Sony breach at the end of 2014, we kicked the new year off with a renewed focus on cybersecurity in the US Government. The White House issued three legislative proposals, held a cybersecurity summit, and signed a new Executive Order, all before the end of February. The OPM breach and a series of other high-profile cybersecurity stories continued to drive a huge amount of debate on cybersecurity across the Government sphere throughout the year. Pretty much every office and agency is building a position on this topic, and Congress introduced more than 100 bills referencing “cybersecurity” during the year.

So where has the security community netted out at the end of the year in terms of policy and legislation? Let's recap some of the biggest cybersecurity policy developments…

Cybersecurity Information Sharing

This was Congress' top priority for cybersecurity legislation in 2015, and the TL;DR is that an info sharing bill was passed right before the end of the year. The idea of an agreed legal framework for cybersecurity information sharing has merit; however, the bill has drawn a great deal of fire over privacy concerns, particularly with regard to how intelligence and law enforcement agencies will use and share information.

The final bill was the result of more than five years of debate over various legislative proposals for information sharing, including three separate bills that went through votes in 2015 (two in the House and one in the Senate). In the end, the final bill was agreed through a conference process between the House and Senate, and included in the Omnibus Appropriations Act.

Despite this being the big priority for cybersecurity legislation, a common view in the security community seems to be that this is unlikely to have much impact in the near term.
This is partly because organizations with the resources and ability to share cybersecurity information, such as large financial or retail organizations, are already doing so. The liability limitation granted in the bill means they are able to continue to do this with more confidence. It's unlikely to draw new organizations into the practice, as participation has traditionally centered more on whether the business has the requisite in-house expertise, and a risk profile that makes security a headline priority for the business, rather than questions of liability. For many organizations that strongly advocated for legislation, a key goal was to get the government to improve its processes for sharing information with the private sector. It remains to be seen whether the legislation will actually help with this.

Right to Research

For those that have read any of my other posts on legislation, you probably know that protecting and promoting research is the primary purpose of Rapid7's (and my) engagement in the legislative arena. This year was an interesting year in terms of the discussion around the right to research…

The DMCA Rulemaking

The Digital Millennium Copyright Act (DMCA) prohibits the circumvention of technical measures that control access to copyrighted works, and thus it has traditionally been at odds with security research. Every three years, there is a “rulemaking” process for the DMCA whereby exemptions to the prohibition can be requested and debated through a multi-phase public process. All granted exemptions reset at this point, so even if your exemption has been passed before, it needs to be re-requested every three years. The idea of this is to help the law keep up with the changing technological landscape, which is sensible, but the reality is a pretty painful, protracted process that doesn't really achieve the goal.

In 2015, several requests for security research exemptions were submitted: two general ones, one for medical devices, and one for vehicles.
The Library of Congress, which oversees the rulemaking process, rolled these exemption requests together, and at the end of October it announced approval of an exemption for good faith security research on consumer-centric devices, vehicles, and implantable medical devices. Hooray!

Don't get too excited though – the language in the announcement sounded slightly as though the Library of Congress was approving the exemption against its own better judgment and with some heavy caveats, most notably that it won't come into effect for a year (apart from for voting machines, which you can start researching now). More on that here.

Despite that, this is a positive step in terms of acceptance for security research, and demonstrates increased support and understanding of its value in the government sector.

CFAA Reform

The Computer Fraud and Abuse Act (CFAA) is the main anti-hacking law in the US, and all kinds of problematic. The basic issues as they relate to security research can be summarized as follows:

- It's out of date – it was first passed in 1986, and despite some “updates” since then, it feels woefully out of date. One of the clearest examples of this is that the law talks about access to “protected computers,” which back in '86 probably meant a giant machine with an actual fence and guard watching over it. Today it means pretty much any device you use that is more technically advanced than a slide rule.
- It's ambiguous – the law hinges on the notion of “authorization” (you're either accessing something without authorization, or exceeding authorized access), yet this term is not defined anywhere, and hence there is no clear line of what is or is not permissible. Call me old fashioned, but I think people should be able to understand what is covered by a law that applies to them…
- It contains both civil and criminal causes of action. Sadly, most researchers I know have received legal threats at some point.
The vast majority have come from technology providers rather than law enforcement; the civil causes of action in the CFAA provide a handy stick for technology providers to wield against researchers when they are concerned about the negative consequences of a disclosure.

The CFAA is hugely controversial, with many voices (and dollars spent) on all sides of the debate, and as such, efforts to update it to address these issues have not yet been successful.

In 2015 though, we saw the Administration looking to extend the law enforcement authorities and penalties of the CFAA as a means of tackling cybercrime. This focus found resonance on the Hill, resulting in the development of the International Cybercrime Prevention Act, which was then abridged and turned into an amendment that its sponsors hoped to attach to the cybersecurity information sharing legislation. Ultimately, the amendment was not included with the bill that went to the vote, which was the right outcome in my opinion.

The interesting and positive thing about the process, though, was the diligence of staff in seeking out and listening to feedback from the security research community. The language was revised several times to address security research concerns. To those who feel that the government doesn't care about security research and doesn't listen, I want to highlight that the care and consideration shown by the White House, Department of Justice, and various Congressional offices through discussions around CFAA reform this year suggests that is not universally the case. Some people definitely get it, and they are prepared to invest the time to listen to our concerns and get the approach right.

It's Not All Positive News

Despite my comments above, it certainly isn't all plain sailing, and there are those in the government who fear that researchers may do more harm than good. We saw this particularly clearly with a vehicle safety bill proposal in the second half of the year, which would make car research illegal.
Unfortunately, the momentum for this was fed by fears over the way certain high-profile car research was handled this year.

The good news is that there were plenty of voices on the other side pointing out the value of research as the bill was debated in two Congressional hearings. As yet, this bill has not been formally introduced, and it's unlikely to be without a serious rewrite. Still, it behooves the security research community to consider how its actions may be viewed by those on the outside – are we really showing our good intentions in the best light? I have increasingly heard questions arise in the government about regulating research or licensing researchers. If we want to be able to address that kind of thinking in a constructive way and reach the best outcome, we have to demonstrate an ability to engage productively and responsibly.

Vulnerability Disclosure

Following high-profile vulnerability disclosures in 2014 (namely, Heartbleed and Shellshock), and much talk around bug bounties, challenges with multi-party coordinated disclosures, and best practices for so-called “safety industries” – where problems with technology can adversely impact human health and safety, e.g. medical devices, transportation, power grids, etc. – the topic of vulnerability disclosure was once again on the agenda. This time, it was the Obama Administration taking an interest, led by the National Telecommunications and Information Administration (NTIA, part of the Department of Commerce). They convened a public multi-stakeholder process to tackle the thorny and already much-debated topic of vulnerability disclosure. The project is still in relatively early stages, and could probably do with a few more researcher voices, so get involved! One of the inspiring things for me is the number of vendors that are new to thinking about these things and are participating.
Hopefully we will see them adopting best practices and leading the way for others in their industries.

At this stage, participants have split into four groups to tackle multiple challenges: awareness and adoption of best practices; multi-party coordinated disclosure; best practices for safety industries; and economic incentives. I co-lead the awareness and adoption group with the amazing Amanda Craig from Microsoft, and we're hopeful that the group will come up with some practical measures to tackle this challenge. If you're interested in more information on this issue specifically, you can email us.

Export Controls

Thanks to the Wassenaar Arrangement, in 2015, export controls became a hot topic in the security industry, probably for the first time since the Encryption Wars (Part I).

The Wassenaar Arrangement is an export control agreement amongst 41 nation states with a particular focus on national security issues – hence it pertains to military and dual-use technologies. In 2013, the members decided that this should include both intrusion and surveillance technologies (as two separate categories). From what I've seen, the surveillance category seems largely uncontested; however, the intrusion category has caused a great deal of concern across the security and other business communities.

This is a multi-layered concern – the core language that all 41 states agreed to raises concerns, and the US proposed rule for implementing the control raises additional ones. The good news is that the Bureau of Industry and Security (BIS) – the folks at the Department of Commerce who implement the control – and various other parts of the Administration have been highly engaged with the security and business communities on the challenges, and have committed to redrafting the proposed rule, implementing a number of exemptions to make it livable in the US, and opening a second public comment period in the new year.
All of which is actually kind of unheard of, and is a strong indication of their desire to get this right. Thank you to them!

Unfortunately, the bad news is that this doesn't tackle the underlying issues in the core language. The problem here is that the definition of what's covered is overly broad and limits sharing information on exploitation. This has serious implications for security researchers, who often build on each other's work, collaborate to reach better outcomes, and help each other learn and grow (which is also critical given the skills shortage we face in the security industry). If researchers around the world are not able to share cybersecurity information freely, we all become poorer and more vulnerable to attack.

There is additional bad news: more than 30 of the member states have already implemented the rule and seem to have fewer concerns over it, and this means the State Department, which represents the US at the Wassenaar discussions, is not enthusiastic about revisiting the topic and requesting that the language be edited or overturned. The Wassenaar Arrangement is based on all members arriving at consensus, so all must vote and agree when a new category is added, meaning the US agreed to the language and agreed to implement the rule. From State's point of view, we missed our window to raise objections of this nature and it's now our responsibility to find a way to live with the rule. Dissenters ask why the security industry wasn't consulted BEFORE the category was added.

The bottom line is that while the US Government can come up with enough exemptions in its implementation to make the rule toothless and not worth the paper it's written on, US companies will still be exposed to greater risk if the core language is not addressed.

As I mentioned, we've seen excellent engagement from the Administration on this issue and I'm hopeful we'll find a solution through collaboration.
Recently, we've also seen Congress start to pay close attention to this issue, which is also likely to help move the discussion forward:

- In December, 125 members of the House of Representatives signed a letter addressed to the White House asking it to step into the discussion around the intrusion category. That's a lot of signatories, and will hopefully encourage the White House to get involved in an official capacity. It also indicates that Wassenaar is potentially going to be a hot topic for Congress in 2016.
- Reflecting that, the House Committee on Homeland Security and the House Committee on Oversight and Government Reform are joining forces for a joint hearing on this topic in January.

The challenges with the intrusion technology category of the Wassenaar Arrangement highlight a hugely complex problem: how do we reap the benefits of a global economy, while clinging to regionalized nation-state approaches to governing that economy? How do you apply nation-state laws to a borderless domain like the internet? There are no easy answers to these questions, and we'll see the challenges continue to arise in many areas of legislation and policy this year.

Breach Notification

In the US, there was talk of a federal law to set one standard for breach notification. The US currently has 47 distinct state laws setting requirements for breach notification. For any business operating in multiple states, this creates confusion and administrative overhead. The goal for those that want a federal breach notification law is to simplify this by having one standard that applies across the entire country. In principle this sounds very sensible and reasonable. The problem is that the federal legislative process does not move quickly, and there is concern that by making this a federal law, it will not be able to keep up with changes in the security or information landscape, and thus consumers will end up worse off than they are today.
To address this concern, consumer protection advocates urge that the federal law not pre-empt state laws that set a higher standard for notification. However, this does not alleviate the core problem any breach notification bill is trying to get at – it just adds yet another layer of confusion for businesses. So I suspect it's unlikely we'll see a federal breach notification bill pass any time soon, but I wouldn't be surprised if we see this topic come up again in cybersecurity legislative proposals this year.

Across the pond, there was an interesting development on this topic at the end of the year – the European Union issued the Network and Information Security Directive, which, amongst other things, requires that operators of critical national infrastructure report breaches (interestingly, this is somewhat at odds with the US approach, where there is a law protecting critical infrastructure information from public disclosure). The EU directive is not a law – member states will now have to develop their own in-country laws to put it into practice. This will take some time, so we won't see a change straight away. My hope is that many of the member states will not limit their breach notification requirements to only organizations operating critical infrastructure – consumers should be informed if they are put at risk, regardless of the industry. Over time, this could mark a significant shift in the dialogue and awareness of security issues in Europe; today there seems to be a feeling that European companies are not being targeted as much as US ones, which seems hard to believe. It seems likely to me that part of the reason we don't hear about it so much is that the breach notification requirement is not there, and many victims of attacks keep it confidential.

Cyber Hygiene

This was a term I heard a great deal in government circles this year as policy makers tried to come up with ways of encouraging organizations to put basic security best practices in place.
The kinds of things on the list here would be patching, using encryption, that kind of thing. It's almost impossible to legislate for this in any meaningful way, partly because the requirements would likely already be out of date by the time a bill passed, and partly because you can't take a one-size-fits-all approach to security. It's more productive for governments to take a collaborative, educational approach and provide a baseline framework that can be adapted to an organization's needs. This is the approach the US takes with the NIST Framework (which is due for an update in 2016), and similarly CESG in the UK provides excellent non-mandated guidance.

There was some discussion around incentivizing adoption of security practices – we see this applied with liability limitation in the information sharing law. Similarly, there was an attempt at using this carrot to incentivize adoption of security technologies: the Department of Homeland Security (DHS) awarded FireEye certification under the SAFETY Act. This law is designed to encourage the use of anti-terrorism technologies by limiting liability for a terrorist attack. So let's say you run a football stadium and you deploy body scanners for everyone coming onto the grounds, but someone still manages to smuggle in and set off an incendiary device; you could be protected from liability because you were using the scanners and taking reasonable measures to stop an attack. In order for organizations to receive the liability limitation, the technology they deploy must be certified by DHS.

Now, when you're talking about terrorist attacks, you're talking about some very extreme circumstances, with very extreme outcomes, and something that statistically is rare (tragically not as rare as it should be). By contrast, cybercrime is extremely common, and can range vastly in its impact, so this is basically like comparing apples to flamethrowers.
On top of that, using a specific cybersecurity technology may be less effective than an approach that layers a number of security practices together, e.g. setting appropriate internal policies, educating employees, patching, air gapping, etc. Yet, if an organization has liability limitation because it is deploying a security technology, it may feel these other measures are unnecessarily costly and resource hungry. So there is a pretty reasonable concern that applying the SAFETY Act to cybersecurity may be counter-productive, and may actually encourage organizations to take security less seriously than they would without liability limitation.

Nonetheless, there was a suggestion that the SAFETY Act be amended to cover cybersecurity. Following a Congressional hearing on this, the topic has not raised its head again, but it may reappear in 2016.

Encryption

Unless you've been living under a rock for the past several months (which might not be a terrible choice, all things considered), you'll already be aware of the intense debate raging around mandating backdoors for encryption. I won't rehash it here, but have included it because it's likely to be The Big Topic for 2016. I doubt we'll see any real resolution, but expect to see much debate on this, both in the US and internationally.

~@infosecjen

12 Days of HaXmas: Rapid7 Gives to You... Free Professional Media Training (Pear Tree Not Included)

Ho ho ho, Merry HaXmas! For those of you new to this series, every year we mark the 12 days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year we're kicking the series off with something not altogether…

Ho ho ho, Merry HaXmas! For those of you new to this series, every year we mark the 12 days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year we're kicking the series off with something not altogether hackery, but it's a gift, see, so very appropriate for the season.

For the past couple of years, I've provided free media training at various security conferences, often as part of an I Am The Cavalry track, and often with the assistance of a reporter. Big thank yous and lots of adoration for SantaJen's helpers: Steve Ragan – my most frequent partner in crime – Paul Roberts, and Jim Finkle. In the spirit of giving that is synonymous with HaXmas, the purpose of this blog is to make that training freely available to anyone that's interested.

Why are we doing this? It's pretty simple really: I believe security professionals have important information to share, which can help individuals and organizations understand how they are at risk, and what they need to do to protect themselves. You could say that's a gift, and I reckon it's pretty valuable. The media can be a fantastic way of disseminating information broadly, and the good thing is that a lot of publications have dedicated security reporters these days. Unfortunately, that doesn't mean it's all smooth sailing.

The challenge comes in the details. Security pros are typically dealing with pretty complex and nuanced subject matter. Media is driven by attention-grabbing headlines and a need to feed the attention spans and limited knowledge of readers. As a reporter, you have to cater to people with a range of familiarity, understanding, and interest in the subject matter, even if you write for a specialist security title. There can be a vast distance between the deep technical knowledge of a security pro and the will-my-editor-like-it need of reporters, and that provides much opportunity for misunderstanding, misreporting, or oversharing.
NB: One thing I want to flag here is that my media training isn't about an adversarial relationship between spokesperson and reporter; it's about optimizing the engagement for a better result all the way around. We don't train people on this because we believe reporters are evilly conspiring against us. In fact, part of the reason I try to train with a reporter is to help build a greater understanding of their world, including their motivations, pressures, and challenges. The training does talk about how to navigate certain reporter "techniques," but often these actions arise unintentionally, or for valid reasons (e.g. a reporter going quiet on a call to catch up with their notes). You won't always encounter these techniques anyway, but if you do (and regardless of why they are used), you are better off knowing how to handle them.

So in a nutshell, the media training I deliver is designed to help security pros share the information they have in as impactful, non-FUDy, and helpful a way as possible. My goal is that we'll get better at making security relevant beyond our echo chamber, and in turn we'll help people understand it and protect themselves. Oh, and it probably doesn't hurt that getting good at briefing press helps our industry, and helps you as an individual build your career.

So what am I actually giving you? Having received several requests for my slides, I created a deck designed for people to “self-teach,” which you can download here. And yes, people have been known to pay me to media train their spokespeople, so this is free professional training, as promised in the title. The presentation is licensed for use under the Creative Commons BY 4.0 license, so feel free to share it. If you end up using it to build an amazing career as a media trainer, I'd appreciate a cut of your newfound riches.
[If you feel that this is not hackery enough to be considered an appropriate gift for HaXmas, you can think of it as me teaching you how to “hack the media for fame and profit,” which is the title I sometimes present under at cons.]

Want more? For those that want even more advice, Steve Ragan and Violet Blue have both written posts on interacting with media at conferences:

http://www.csoonline.com/article/2952395/security-awareness/a-primer-on-dealing-with-the-media-as-a-hacker-and-dealing-with-hackers-as-the-media.html

https://blog.rapid7.com/2015/07/22/the-black-hat-attendee-guide-guest-post-talking-to-the-media-press/

If you have specific questions, drop them into the comments section and I will try to answer them. If you have examples of putting the training into practice, I'd love to hear about it – let me know!

Merry HaXmas! ~@infosecjen

New DMCA Exemption is a Positive Step for Security Researchers

Today the Library of Congress officially publishes its rule-making for the latest round of exemption requests for the Digital Millennium Copyright Act (DMCA).  The advance notice of its findings revealed some good news for security researchers as the rule-making includes a new exemption to…

Today the Library of Congress officially publishes its rule-making for the latest round of exemption requests for the Digital Millennium Copyright Act (DMCA). The advance notice of its findings revealed some good news for security researchers as the rule-making includes a new exemption to the DMCA for security research:

“(i) Computer programs, where the circumvention is undertaken on a lawfully acquired device or machine on which the computer program operates solely for the purpose of good-faith security research and does not violate any applicable law, including without limitation the Computer Fraud and Abuse Act of 1986, as amended and codified in title 18, United States Code; and provided, however, that, except as to voting machines, such circumvention is initiated no earlier than 12 months after the effective date of this regulation, and the device or machine is one of the following:

(A) A device or machine primarily designed for use by individual consumers (including voting machines);

(B) A motorized land vehicle; or

(C) A medical device designed for whole or partial implantation in patients or a corresponding personal monitoring system, that is not and will not be used by patients or for patient care.
(ii) For purposes of this exemption, “good-faith security research” means accessing a computer program solely for purposes of good-faith testing, investigation and/or correction of a security flaw or vulnerability, where such activity is carried out in a controlled environment designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices or machines on which the computer program operates, or those who use such devices or machines, and is not used or maintained in a manner that facilitates copyright infringement.”

Basically, this means that good-faith security research on consumer-centric devices, motor vehicles, and implantable medical devices is no longer considered a violation of the DMCA (with caveats detailed below). It's a significant step forward for security research, reflecting a positive shift in the importance placed on research as a means of protecting consumers from harm.

A brief history of the DMCA

The DMCA was passed in 1998 and criminalizes efforts to circumvent technical controls that are designed to stop copyright infringement. It also criminalizes the production and dissemination of technologies created for the purpose of circumventing these technical controls. That's an incredibly simplified explanation of what the law does, and this is a good time for me to remind you that I'm not a lawyer.

The statute includes a number of exceptions that relate to security research – one for reverse engineering (section 1201(f)), encryption research (section 1201(g)), and security testing (section 1201(j)); however, these are very limited in what they allow. Acknowledging that technology moves fast, the statute also includes provisions for a new rule-making every three years, during which requests for new and additional exemptions can be made.
These are reviewed through a lengthy process that includes opportunities for support and opposition to the exemptions to be lodged with the Library of Congress. After reviewing these arguments, the Copyright Office makes a recommendation to the Library of Congress, which then issues a rule-making that either approves or rejects the submitted exemptions. Exemptions that are approved will automatically expire at the end of the three-year window (as opposed to the exceptions, which are permanent unless subject to change via legislative reform through Congress).

Today's rule-making is the product of the latest round of exemption requests. A number of submissions relating to research were filed – a couple for a broad security research exemption, one for medical devices, one for cars, and even something for tractors. The Library of Congress effectively rolled these into one exemption, which is why it covers consumer-centric devices, automobiles, and implantable medical devices.

What does the new exemption mean for security research?

Well firstly, it's an important acknowledgement of two things: 1) that research is critical for consumer protection, and 2) that laws like the DMCA can negatively impact research. This is significant not only in what it allows within the context of the DMCA, but also in that it sets a precedent and presents an opportunity for a broader discussion on these two points in the government arena.

In terms of what is specifically allowed now, users are able to circumvent technical protections to conduct research on consumer-centric devices, automobiles, and implantable medical devices (that are not or will not be used for patient care). This is not carte blanche though, and it's important to understand that.
There are a number of limits and questions raised by the language of the exemption:

- You are allowed to circumvent technical controls to conduct research, but you are NOT allowed to make, sell, or otherwise disseminate tools for circumventing these controls. So you can only conduct research to the extent that it doesn't require such tools.
- The exemption won't come into effect for a year. This is so other relevant agencies can update their policies. In his article on Boing Boing, Cory Doctorow points out that “the power to impose waiting times on exemptions at these hearings is not anywhere in the statute, is without precedent, and has no basis in law.” (Interestingly, the Library of Congress is excluding research on voting machines from the year delay.)
- It remains to be seen what the agencies referenced (Department of Transportation, Environmental Protection Agency, and the Food and Drug Administration) will do and how that will impact the way this exemption can be applied. It's probably fair to say the exemption's dissenters will be actively lobbying them to find a way to limit the impact of the exemption. It falls to those of us in the security research community to try to engage these agencies to ensure they understand why research is important, and to try to address any concerns they may have.
- The research must apply to consumer-centric devices (or cars, or implantable medical devices). What does that mean and where do you draw the line? For example, we regularly hear of research findings in SOHO routers or printers. These are devices designed for use in both home and work environments. Do they count as “primarily designed for use by individual consumers”? I really hope these kinds of devices are included in this classification, as they do represent a great deal of consumer risk.
It's also somewhat strange to me that we're not granting business users the same protections we're giving individual consumer users.

- The exemption does NOT allow for research on devices relating to critical infrastructure or nuclear power. It's understandable that these areas raise considerable concern, but at the same time, do we really want flaws in these systems to be left unmitigated? Doesn't that create more opportunities for bad actors to attack high-value targets, potentially with very serious repercussions?
- For medical devices, the research cannot be conducted on devices that are being, or will be, used for patient care. That seems pretty reasonable to me.

Also, it's important to remember that, as noted above, the rule-making resets every three years, so this exemption will be in effect for a maximum of two years before we have to reapply and go through the entire process again (because of the year delay the Library of Congress has imposed on the exemption).

But it IS a positive step?

Yes, despite these qualifiers and limitations, I believe it is a positive step. This is not just because in and of itself it enables more research to be conducted without concern of legal action, but also because it may indicate a bigger shift.

Just last week, I wrote a blog about a proposed legislative amendment that was significantly rewritten in response to feedback from the security research community. The TL;DR of that post is that it seems like a hugely positive step that the amendment's authors were prepared to engage and listen to the research community, and were concerned about avoiding negative consequences that would chill security research.

Couple that with this exemption being approved, and I continue to have hope that we're starting to see a shift in the way the government sector understands and values security research.
I'm also seeing a shift in the way the research community engages the government, and in how we're participating in discussions that will shape our industry.

It's not a silver bullet; given the complexity of the challenges we're addressing, we're not going to solve concerns around the right to research overnight. It's going to be a long path, but every step counts. And today, I think we may have taken “one giant leap” for research-kind.

~@infosecjen

Why I Don't Dislike the Whitehouse/Graham Amendment 2713

[NOTE: No post about legislation is complete without a lot of acronyms representing lengthy and forgettable names of bills. There are three main ones that I talk about in this post:CISA – the Cyber Information Sharing Act of 2015 – Senate bill that will…

[NOTE: No post about legislation is complete without a lot of acronyms representing lengthy and forgettable names of bills. There are three main ones that I talk about in this post:

- CISA – the Cyber Information Sharing Act of 2015 – Senate bill that will likely go to a vote soon. The bill aims to facilitate cybersecurity information sharing and create a framework for private and government participation.
- ICPA – the International Cybercrime Prevention Act of 2015 – proposed bill to extend law enforcement authorities and penalties in order to deter cybercrime. Proposed by Senators Graham and Whitehouse of the Senate Committee on the Judiciary.
- CFAA – the Computer Fraud and Abuse Act – main “anti-hacking” law in the US. Passed in 1986 and updated a number of times since then, the law is considered by many to be out of date and in dire need of reform.]

[UPDATE - 5.25.16] Senators Graham and Whitehouse have re-introduced this amendment as S.356, which is slated to be discussed at a meeting of the Senate Judiciary Committee tomorrow (May 26th). The bill is mostly unchanged since I wrote about it last year – the only change seems to be the removal of the section on “Stopping the Sale of Americans' Financial Information,” as that ended up being rolled into the Cybersecurity Act of 2015, which is what CISA eventually became when it passed at the end of 2015 as part of an omnibus spending bill.

Rapid7's position is also unchanged – we're still neutral on the language of the amendment itself and keen to see broader reform of the CFAA to address the underlying issues:

- Lack of clarity and definitions around “exceeding authorized access” and “accessing without authorization”
- Outdated terms (e.g. “protected computer”) and thresholds for prosecutions (e.g.
the value of accessed information being set at $5,000)
- Inclusion of civil causes of action, enabling technology providers to use the law to ward off researchers

Unless and until these core underlying issues are fixed, we do not intend to support any legislation that would extend authorities and penalties for CFAA prosecutions.

Finally, as detailed in the original post below, we agree that botnets are a very real problem, and creating clear guidelines, expectations, and requirements for law enforcement to address them is sensible. Still, we feel the section on “shutting down botnets” raises questions around how far the proposed authorities would extend. That section would allow the Government to compel companies via injunction to take unspecified actions against botnets. This might include demanding the company hack a computer controlling a botnet, force an update to damaged computers, or re-route damaged computers' internet traffic to a site where the user can download a patch. There are a number of questions around the implications of this for the owners of victim machines – could the Government use this authority to access information on those machines? If the machines are rendered unusable through action taken to shut down the botnet, what recourse is there for the owners?

In addition, the statute that the amendment would modify – 18 USC 1345 – is broad, and may permit any action, restraining order, or injunction that "is warranted to prevent a continuing and substantial injury to the United States or to any person or class of persons for whose protection the action is brought." One question thrown up by the recent (and ongoing) encryption debate is whether this could be used as the All Writs Act was in the Apple case. That might seem far-fetched, but it's unclear, and may be a question worth asking during the Judiciary Committee's meeting tomorrow.

[UPDATE - 10.21.15] CISA went to the floor for debate yesterday.
Today we heard that the Whitehouse/Graham Amendment will not advance to a vote with CISA. According to The Hill, "Whitehouse has some parliamentary options to bring the amendment to a vote still at his disposal but any attempt would likely be blocked by opponents of the provision."

[ORIGINAL POST - 10.20.15] After several years of debate around a cybersecurity information sharing bill, it looks like we're getting closer to something passing. Earlier this year, the House passed two information sharing bills with pretty strong support. The Senate-side proposal – the Cyber Information Sharing Act of 2015 (CISA) – has been considerably more controversial, but is likely to go to a vote very soon, possibly even in the next few days. A number of concerns have been raised around this bill – mostly around privacy, data handling, and the way the Government can use information that is shared. These issues are hopefully being tackled in a number of amendments; Senate leaders unofficially agreed to a limit of 22 amendments, made up of the manager's amendment (which is put forward by the bill's primary sponsors), plus 11 Democratic amendments and 10 Republican amendments. You can see a full list of the proposed amendments here, thanks to New America's fantastic round-up.

Included among these is Amendment 2713, proposed by Senators Whitehouse and Graham. This amendment has been controversial, and has raised alarm in the civil liberties and security research communities, resulting in a letter requesting that the amendment not go to a vote with CISA. While I was initially one of the most vocal over these concerns, I am now comfortable that the current language of the amendment, attached at the bottom of this post, does not have negative implications for researchers. (Caveat – this language is the latest available as of writing this on October 20, 2015.
I reserve the right to change my position if the language changes.)

A brief history of Amendment 2713

The amendment is an abridged version of the International Cybercrime Prevention Act (ICPA) (also known as the Graham/Whitehouse bill). The full bill proposed eight key areas of legislative updates to extend law enforcement authorities in order to reduce cybercrime; the CISA amendment cuts this down to four main areas of focus:

- Extending authority to prosecute the sale of US financial information overseas
- Formalizing authority for law enforcement to shut down botnets
- Increasing the penalties for cyberattacks on critical infrastructure
- Criminalizing the trafficking of botnets and the sale of exploits to bad actors

These feel like pretty reasonable goals to me. In particular, updating the law to tackle the burgeoning botnet problem makes sense, and I like the idea of a clear legal framework for shutting down botnets. Yet, for a long time I was very worried about both the full bill and the shorter, abridged version of the Amendment. In fact, I even testified on the bill to the Senate Judiciary Subcommittee on Crime and Terrorism. For those that want to see the entire hearing, there's a video here. This is probably the point at which I should remind people that I'm not a lawyer. (NOTE: the hearing was held before the bill was abridged and submitted as an amendment for CISA.)

Most of my testimony focuses on updates to the Computer Fraud and Abuse Act (CFAA), the main anti-hacking law in the US. In the case of the Amendment, three of the four sections – critical infrastructure, shutting down botnets, trafficking in botnets and exploits – relate to the CFAA. As I've written before, this law is a big concern as it chills security research, which, in my opinion, results in more opportunities for cybercriminals. The CFAA is woefully out of date and lacks clear boundaries for what is permissible – a problem exacerbated by the law containing both criminal and civil causes of action.
The TL;DR of my testimony is that I was concerned that, far from improving the situation with the CFAA, the ICPA would actually make matters worse for researchers. In relation to the Amendment, my particular concern was the language of Section 4 (though I also provided feedback on Sections 2 and 3). It was not much of a stretch for me to imagine a scenario in which a researcher disclosing a vulnerability finding at a conference might satisfy all the requirements of Section 4 as they were written in the original proposal, resulting in the potential for them to be prosecuted.

So what changed?

In a nutshell, the language changed. And it changed because the people writing it really LISTENED to our feedback.

Maybe you can sense my surprise here. Those in the security research community who are familiar with the CFAA have felt frustration over its lack of clarity and misuse for years, and in many cases we've come to expect the worst of the Government. Recently, though, we've started to see a change. We saw the Department of Justice proactively address the research community on the issue of CFAA prosecutions at Black Hat this year. We've seen various federal agencies, such as the FDA and FTC, call for researchers to help them update their position on, and understanding of, cybersecurity challenges. We've even seen the introduction of legislative proposals designed to support and protect research, such as the Breaking Down Barriers to Innovation Act introduced by Senator Wyden and Representative Polis. In short, we've seen the start of a huge shift in attitudes towards security research in the Government sector.

Don't get me wrong, it's not all sunshine and kittens. A new bill proposal “To provide greater transparency, accountability, and safety authority to the National Highway Traffic Safety Administration” explicitly makes research on cars illegal, which is, at best, short-sighted, and puts consumers at risk.
Similarly, the recent proposal for implementation of the Wassenaar Arrangement's export controls on intrusion software highlighted that we in the research community have some work to do to educate the Government on the value, role, and practices of security research – work which I'm happy to say is now underway.

Yet my experience working with the offices of Senators Whitehouse and Graham showed that we are making progress, and that there are people in the Government who genuinely value the role of security research in reducing cybercrime. Throughout their collaborative engagement, the staff mentioned numerous times that they wanted to ensure security research would not be harmed by unintended consequences of the language. OK, I know security people can – cough – sometimes be a little cynical – cough – so you're possibly thinking that words are all well and good, but the proof is in actions. I agree. So it was probably around the time that I was reviewing redraft number 5 or so that I started to believe they might be serious about this whole “no negative consequences” thing.

Yeah, I know I sound like I drank some funky Kool-Aid while I was on the Hill, but today an updated version of the Amendment – attached at the bottom of this post – was distributed on the Hill. Again, a quick reminder that it might change, and if so, my position may change too, but at the time of writing this, the current language is pretty well crafted. I've spent some time vetting it with both lawyers and security researchers, and we really tried to find ways that the language would cause problems for research, but it looks solid.
The drafters listened to all the weird examples and edge-case anecdotes we threw at them, and even the suspicious slurs against the concept of “prosecutorial discretion,” and they wrote language that holds up against this scrutiny and does not create negative consequences for research – at least as far as I can see (and remember, I'm not a lawyer). I'd like to pause here to thank the drafters for that diligence, and their grace in the face of some very blunt feedback.

So what does this mean for the security community?

In the short term, it means that a significantly better Amendment will go to the vote with CISA. I do wish it had gone through more public debate – it feels like that process was side-stepped – but if it's going to go to vote, I'm very glad that the language has been updated as it has. Any debate that takes place around CISA is likely to focus on the core challenges around privacy, the way the Government can use information that is shared, and the controversial issue of “countermeasures.” Resolution on those issues is enormously important, so it's good that the language of Amendment 2713 is not so concerning that it needs to steal focus from CISA. As an aside, Rapid7 is supportive of legislation that creates a clear framework for sharing cyber threat information, establishes meaningful protections for Personally Identifiable Information (PII), and clarifies the role and responsibilities of government entities in the process. We think CISA has work to do to get there, but we are hopeful that the discussion around the 22 amendments will result in a better outcome.

Back to Amendment 2713… If it passes, the most significant immediate outcome will be greater law enforcement authority for addressing botnets, which is a good thing, I hope. To be clear though, this process wasn't a silver bullet that miraculously fixed the CFAA.
If the Amendment passes, the CFAA will gain a new provision ((a)(8) for those that want to get nerdy about it), but it will still lack a definition of its core principle: “authorization.” That means it still lacks a clear line for what constitutes a violation and what does not. It will still be out of date, and will still contain both civil and criminal causes of action, providing a handy stick that defensive vendors will use to threaten researchers.

The process for addressing these issues in the CFAA will be considerably harder and will take far longer. It will likely be a bloody battle, as there are many angry voices on all sides of this debate and it has already raged for a long time. But there may be some light at the end of the tunnel if we can continue to work with legislators to find common ground. For me, that's the most significant takeaway from Amendment 2713 – hope for more productive collaboration, and by extension, hope for better cybersecurity legislation in the future.

~ @infosecjen

Rapid7's Comments on the Wassenaar Arrangement Proposed Rule for Controlling Exports of Intrusion Software

For the past two months, the Department of Commerce's Bureau of Industry and Security (BIS) has been running a public consultation to solicit feedback on its proposal for implementing export controls for intrusion software under the Wassenaar Arrangement. You can read about the proposal and…

For the past two months, the Department of Commerce's Bureau of Industry and Security (BIS) has been running a public consultation to solicit feedback on its proposal for implementing export controls for intrusion software under the Wassenaar Arrangement. You can read about the proposal and Rapid7's initial thoughts here. The consultation window closed on Monday, July 20th, and I'm excited that numerous companies and security researchers submitted comments. It's great to see so many engaging with the process and trying to ensure we achieve the right outcome.

I also commend BIS for their engagement with the community through this process – I don't think this is an easy knot for them to untangle. It's important to remember that while the US did not propose the addition of intrusion software to the Wassenaar Arrangement controls, as a member nation of the Arrangement, the US must still try to find a way to make it work (unless and until the members of the Arrangement vote to drop intrusion software from their control agreement). Basically, they're trying to make the best of a tough situation, and I believe they are striving to address the concerns of the community.

I expect we will see an updated proposal from BIS, and another public consultation period. This is an unusual measure, but warranted in this situation, and I believe it would demonstrate the desire of the Government to get the implementation right. Should we get a second consultation period, I hope even more organizations will join the discussion as the implications for their security and business become clearer.

In the meantime, attached (below) are the comments Rapid7 submitted for the consultation that just ended. Our CEO, Corey Thomas, will be speaking about some of the challenges outlined in our response at the upcoming meeting of the Information Systems Technical Advisory Committee (ISTAC), hosted by BIS. We hope to see you there.

~ @infosecjen

Wassenaar Arrangement - Frequently Asked Questions

The purpose of this post is to help answer questions about the Wassenaar Arrangement.  You can find the US proposal for implementing the Arrangement here, and an accompanying FAQ from the Bureau of Industry and Security (BIS) here. For Rapid7's take on Wassenaar, and…

The purpose of this post is to help answer questions about the Wassenaar Arrangement. You can find the US proposal for implementing the Arrangement here, and an accompanying FAQ from the Bureau of Industry and Security (BIS) here. For Rapid7's take on Wassenaar, and information on the comments we intend to submit to BIS, please read this companion piece. If you would like to propose a question to be added to this FAQ, please email us, or post it in the comments section below.

1. What is the Wassenaar Arrangement and who are its members?

The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (“Wassenaar” or the “Arrangement”) is a voluntary, multilateral export control regime whose member states exchange information on transfers of conventional weapons and dual-use goods and technologies. The Arrangement's purpose is to contribute to regional and international security and stability by promoting transparency and greater responsibility in transfers of conventional arms and dual-use goods and technologies (i.e., items with predominantly non-military applications that nonetheless may be useful for certain military purposes) to prevent destabilizing accumulations of those items. Wassenaar establishes lists of items for which member countries are to apply export controls. Member governments implement these controls to ensure that transfers of the controlled items do not contribute to the development or enhancement of military capabilities that undermine the goals of the Arrangement, and are not diverted to support such capabilities.
In addition, the Wassenaar Arrangement imposes certain reporting requirements on its member governments. The participating states of the Wassenaar Arrangement are Argentina, Australia, Austria, Belgium, Bulgaria, Canada, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Japan, Latvia, Lithuania, Luxembourg, Malta, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Republic of Korea, Romania, Russian Federation, Slovakia, Slovenia, South Africa, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom and United States.

2. What are the members' obligations under Wassenaar?

The Wassenaar control lists do not have binding legal force throughout the member states. All Wassenaar controls must be implemented through national legislation and policies to have effect, and the member states have full discretion with respect to whether and how the controls are implemented. Because member states enjoy discretion as to whether and how to implement the Wassenaar controls, there are variations in the specific export control laws and regulations among the members.

3. What are the Export Administration Regulations?

With respect to dual-use goods and technology, the United States generally implements the Wassenaar control lists through the Export Administration Regulations (“EAR”). The EAR regulate exports of commercial and “dual-use” goods, software and technology. These regulations are administered by the Commerce Department's Bureau of Industry and Security (“BIS”).

Exports of items controlled under the EAR may require a specific license from the Commerce Department, depending upon the reasons for control applicable to the particular items, the country of destination and the purposes for which the items will be used. In certain instances, a license exception may be available under the EAR.

4. What is an export?

Under the EAR, the term “export” is broadly defined.
It includes: (1) an actual physical shipment or transmission of controlled items out of the United States; and (2) any written, oral, or visual release or disclosure of controlled technology, information, or software to a non-U.S. person either inside or outside the United States. Therefore, transmissions to a non-U.S. person within the United States, e.g., a person working in a U.S. company or participating in a university research project involving controlled technology, are also covered by the EAR and may require a license. Such transmissions are called “deemed exports.” Non-U.S. persons include anyone other than a U.S. citizen, a lawful permanent resident of the United States (such as individuals with Green Cards), or a “protected individual” (e.g., refugees or persons seeking asylum). In addition, taking along controlled technology (e.g., laptops and software) during travel to a foreign country may also raise export control issues.

A software export under the EAR includes “any release of technology or software subject to the EAR in a foreign country,” or any release of “source code subject to the EAR to a foreign national.” The actions comprising a release of software and technology are broad, extending beyond the physical export of tangible goods or electronic transmissions. These actions include the visual inspection by foreign nationals, exchanges of information, or the application abroad of personal knowledge or technical experience acquired in the United States.

5. How will the proposed regulations relating to cybersecurity items affect the export licensing requirements for Metasploit and similar products?

On May 20, 2015, BIS published a proposed rule (the “Proposed Rule”) imposing a restrictive license requirement on exports, reexports, and transfers (in-country) of systems, equipment and software for the “generation, operation or delivery of intrusion software” (“Intrusion Items”) and Internet Protocol (IP) network communication surveillance systems or equipment (“Surveillance Items”), as well as related software that is specially designed for such items and technology for the development and production of such items. As published in the Proposed Rule, the terms “intrusion software” and “surveillance systems and equipment” are broadly defined and would restrict exports of many commercially available penetration testing and network monitoring products, including commercial versions of Metasploit.

Most of the cybersecurity items covered by the Proposed Rule are currently controlled as encryption items. However, the Proposed Rule would subject cybersecurity items with encryption functionality to overlapping regulatory requirements and increase the compliance burden for exporters, who would have to comply with the requirements of both the existing encryption controls and the Proposed Rule.

In addition, the Proposed Rule would render covered cybersecurity products ineligible for most license exceptions under the EAR, including License Exception ENC. License Exception ENC currently permits the export of encryption items to foreign subsidiaries of U.S. companies as well as deemed exports to foreign national employees of U.S. companies. However, the Proposed Rule would require an export license in those instances.

Specifically, under the Proposed Rule, exports of Metasploit would require a specific license for all destinations other than Canada.
BIS has indicated that it would review favorably license requests to certain destinations, including U.S. companies or subsidiaries not located in embargoed countries (currently, the Crimea region of Ukraine, Cuba, Iran, North Korea, Syria, and Sudan) or countries of national security concern (currently, Armenia, Azerbaijan, Belarus, Burma, Cambodia, China, Georgia, Iraq, Kazakhstan, North Korea, Kyrgyzstan, Laos, Libya, Macau, Moldova, Mongolia, Russia, Tajikistan, Turkmenistan, Ukraine, and Vietnam), commercial partners located in certain countries that are close allies of the U.S., and government end users in Australia, Canada, New Zealand, and the United Kingdom. However, BIS has also indicated that it would apply a presumption of denial to license applications for items that have or support “rootkit” or “zero-day” exploit capabilities. Depending on how these terms are applied, they could extend to Metasploit.

While BIS states that it anticipates “licensing broad authorizations to certain types of end users and destinations” to counterbalance the loss of the use of License Exception ENC, BIS has not specified any details of those authorizations.

6. How will the Proposed Rule relating to cybersecurity items affect research activities?

Research is not specifically addressed in the Proposed Rule, although BIS has stated that the intent is not to interfere with “non-proprietary research,” by which we interpret BIS to mean research activities that are intended to lead to the public identification and reporting of vulnerabilities and exploits (in contrast to bug bounties, or research relating to exploits that will be sold commercially). In the Federal Register notice announcing the Proposed Rule, BIS explained that the proposed controls on technology for the development of intrusion software would include proprietary research on the vulnerabilities and exploitation of computers and network-capable devices.
BIS has since elaborated that these proposed controls would regulate, among other things, proprietary (i.e., non-public) technology relating to the development, testing, evaluation, and productizing of exploits, zero-days and intrusion software.

Notably, research regarding exploits that incorporate encryption is currently (and will continue to be) regulated pursuant to the encryption controls under the EAR. These controls restrict the ability of researchers to transmit or publish exploits that utilize encryption if those exploits are not in the public domain. Under the EAR, the only way to report and publish exploits that utilize encryption without an export license is to make the exploit publicly available pursuant to License Exception Technology Software Unrestricted (TSU), whereby the exporter provides the U.S. Government with a "one-time" notification of the location of the publicly available encryption code prior to or at the time the code is placed in the public domain.

7. BIS has published an FAQ indicating that researchers can simply publish exploits without the need for a license and that published information is not subject to the EAR. Is this true?

Not entirely. BIS's Proposed Rule focuses on the command and delivery platforms for generating, operating, delivering and communicating with “intrusion software.” The Proposed Rule does not control any “intrusion software” itself.

Intrusion software that does not contain any encryption functionality is currently, and will remain, designated for export control purposes as “EAR99” and may be exported to most destinations without an export license. Consistent with BIS's FAQ, items that are designated EAR99 are no longer subject to the EAR once they are published.

However, most exploits utilize encryption functionality to avoid detection and to communicate with the command and delivery platform. Such exploits are subject to the encryption controls under the EAR.
As explained above, the only way to report and publish exploits that utilize encryption without an export license is to make the exploit publicly available pursuant to License Exception TSU.

8. How will the Proposed Rule relating to cybersecurity items affect open source versions of Metasploit?

The Proposed Rule will have no effect on open source versions of Metasploit, which will continue to be exempt from licensing requirements under the EAR. We intend to advocate for clear protections for activities relating to security research and to the public reporting of exploits.

9. What challenges will the Proposed Rule present to users of Metasploit and similar penetration testing products?

Currently, Metasploit and similar penetration testing products that utilize encryption are eligible for certain license exceptions under the EAR that permit such products to be used effectively in international environments. However, under the Proposed Rule, Intrusion and Surveillance Items will not be eligible for these license exceptions. As a result, activities relating to the use of these products that are currently authorized would require a specific license under the Proposed Rule. The following are two examples:

Hand carriage of Intrusion Software or Surveillance Items: The hand carriage of a computer and/or software outside of the United States constitutes an export. Currently, there are several license exceptions under the EAR that authorize individuals to hand carry a computer and/or software outside of the United States provided that the conditions and limitations associated with the license exceptions are followed. Under the Proposed Rule, however, individuals would not be able to rely on these license exceptions when traveling outside of the United States with Intrusion or Surveillance Items.
In such cases, individuals would be required to obtain a specific license in advance of their trip.

Internal Use and Dissemination of Intrusion or Surveillance Items: As noted above, penetration testing and other cybersecurity products that utilize encryption are currently subject to export restrictions under the EAR. Currently, the international deployment of such products with encryption functionality by U.S. companies and their overseas subsidiaries is authorized under a license exception. However, the Proposed Rule would eliminate this license exception, and U.S. companies would need to obtain export licenses to send cybersecurity products to their overseas facilities for internal use. We believe that imposing a licensing requirement on the internal deployment of products will lead to delays in the deployment and use of new and effective cybersecurity products.

Again, if you would like to propose a question to be added to this FAQ, please email us, or post it in the comments section below.

- @infosecjen

Response to the US Proposal for Implementing the Wassenaar Arrangement Export Controls for Intrusion Software

On May 20th 2015, the Bureau of Industry and Security (BIS) published its proposal for implementing new export controls under the Wassenaar Arrangement. These controls would apply to: systems, equipment or components specially designed for the generation, operation or delivery of, or communication with, intrusion…

On May 20th, 2015, the Bureau of Industry and Security (BIS) published its proposal for implementing new export controls under the Wassenaar Arrangement. These controls would apply to:

- systems, equipment or components specially designed for the generation, operation or delivery of, or communication with, intrusion software;
- software specially designed or modified for the development or production of such systems, equipment or components;
- software specially designed for the generation, operation or delivery of, or communication with, intrusion software;
- technology required for the development of intrusion software.

The controls also apply to surveillance software; however, in this post I will be focusing on the category above as it relates more closely to Rapid7, our customers, and our community.

The background

The Wassenaar Arrangement is not a new thing; it has been around since 1996. It's an export control understanding between 41 nation states and was originally devised to control the export of weapons and dual-use technologies (a broad term that seems to mean technology that can have both military and non-military uses). The Wassenaar's decision-making body – the Plenary – meets annually in December to discuss changes and implementations. During the December 2013 meeting, the group agreed the Arrangement should also cover intrusion and surveillance technologies. Since then, many of the 41 member nations have implemented the rules. This post will focus on the US implementation, as we are a US company and it is the US implementation which is currently open for consultation. We may update our accompanying FAQ with information on other international implementations in the future.

The US Government has not yet implemented this addition to the Wassenaar. In May, it published a proposal outlining the way the new rules would be implemented in the US, and invited feedback. The community has until Monday, July 20th 2015 to submit comments in response.
Rapid7 intends to participate in the consultation process and will submit comments. We encourage you to do likewise if you have concerns with the proposal.

Rapid7's take

In some ways the proposal raises more questions than it answers, and in seeming recognition of this, BIS recently published a supplemental FAQ. This does seem to have answered some of our questions, but not all. I've reproduced below some of the key questions we asked ourselves as we reviewed the proposal.

How will Wassenaar relate to Metasploit exports?

Since the BIS proposal came out, we have received many questions from customers and community members about how it will relate to the export of our penetration testing platform, Metasploit. We have been cautious in answering this question. It's clear from the way BIS defines the technology covered by the Proposed Rule that it WILL apply to Metasploit; however, it's currently unclear to what extent. It should be noted here that the proprietary versions of Metasploit (namely Metasploit Community and Metasploit Pro) are already subject to US export controls, and the Wassenaar Arrangement will extend, rather than replace, those controls.

Will it apply to the open source Metasploit Framework?

BIS's Wassenaar proposal doesn't explicitly tackle this, as the US plans to maintain the current status quo of how open source projects are addressed. Under the current US export controls – the Export Administration Regulations (EAR) – the Metasploit Framework is exempt from export control due to its open source status. The exemptions for open source are nuanced, so if you run a related open source project, please don't assume you will be exempt – you should seek legal counsel.

Can we export the commercial editions of Metasploit?

We can export commercial editions of Metasploit to Canada without complication.
For any other exports, it is a question of whether we would require a license or would be subject to a blanket denial, which would mean we could not sell Metasploit internationally at all. The blanket denials will be applied to technologies that include “zero day exploits” and “rootkits.” The proposal does not offer definitions for either term, leaving them open to multiple interpretations, which would create a murky regulatory environment for exporters attempting to comply with the new regulations. There is some discussion in the community around whether these categories should be included in the Proposed Rule at all, particularly with an emphasis on the impact on security research (more on that below). I expect/hope we will see a lot of comments submitted that make a case against the inclusion of these elements. In case these submissions are unsuccessful, our comments to BIS will include the following recommendations for definitions. As you'll see, we focused on whether exploits have been publicly disclosed:

Zero-Day Exploit: A software tool that takes advantage of a security vulnerability that is not publicly known. Security vulnerabilities will be deemed publicly known if: (1) they are the subject of a published notice or advisory that is generally available to the public, or (2) [45] days have passed since the vulnerability was reported to a software developer or a vulnerability reporting organization.

Rootkit: A non-public, post-exploit software tool that is primarily useful for maintaining control of a computer system without being detected, in a manner that is not authorized by the owner or system administrator of the computer system, after the computer system has been compromised.

If we can export the commercial editions under licenses, how will that work?

According to the proposal, vendors can apply for “broad authorizations to certain types of end users and destinations.” As yet, there is little information on how these licenses will work in practice.
It is likely to place a hefty burden on companies like Rapid7. We will be forced to devote increased resources to prepare license requests, and to comply with other potential regulatory requirements, such as enhanced reporting and pre-shipment notifications.  The costs associated with these compliance efforts will inevitably be passed on to consumers in the form of higher prices. In addition, customers will face delays while the Government processes license applications. The licensing burden will also put Rapid7 and other US companies at a disadvantage when compared to non-American and non-Wassenaar-state competitors who rely on Metasploit Framework, but have none of the regulatory obligations that American companies will have. How will Wassenaar impact Metasploit contributors? The commercial editions of Metasploit (and several other security testing solutions) are built on top of the open source Metasploit Framework. We have a fantastic community that contributes exploits to this from all around the world. We wanted to understand how the Wassenaar Arrangement would impact our international contributors given that in 2014 alone, a third of the Top 25 Metasploit contributors were based outside the US. There's no quick answer to this. According to BIS' FAQ, you can export the exploits themselves without issue. However, every participating nation state implements Wassenaar in their own way, consistent with their broader export approach. If you are a Metasploit contributor outside the US, we recommend you look into the export rules in your own country. How will Wassenaar impact our Metasploit customers? As noted above, penetration testing and other cybersecurity products that utilize encryption are currently subject to export restrictions under the EAR. This rule currently grants a license exception to US companies, authorizing them to transfer technologies like Metasploit for use in their overseas subsidiaries and facilities. 
Wassenaar will eliminate this license exception and US companies will need to obtain export licenses to send cybersecurity products to their overseas facilities for internal use. Rapid7 believes that the removal of this license exception unnecessarily interferes with the legitimate security activities of US companies. Imposing a licensing requirement on the internal deployment of products will lead to delays, and will chill the widespread use of new and effective products. The Proposed Rule should be revised to authorize exports of cybersecurity products to US companies and their overseas subsidiaries for their own internal use. How will Wassenaar impact penetration testers? Penetration testers rely on intrusion technologies to do their job, and many travel internationally in the course of their work. The hand carriage of a computer and/or software outside of the US constitutes an export. Currently, there are several license exceptions under the EAR that authorize individuals to hand carry a computer and/or software outside the US, provided that the conditions and limitations associated with the license exceptions are followed. Under the Proposed Rule, however, individuals would NOT be able to rely on these license exceptions when traveling outside of the United States with intrusion or surveillance items. In these cases, penetration testers (and other individuals) would be required to obtain a specific license in advance of their trip. How will Wassenaar impact security researchers? The regulatory and legislative pressures placed on security researchers are a major area of concern for Rapid7. It should be a concern for everyone, as research drives awareness of risks, enables consumers to make informed choices to protect themselves, and facilitates remediation efforts.
Research is not specifically addressed in the Proposed Rule, although BIS has stated that its intent is not to interfere with “non-proprietary research.” While undefined by BIS, we interpret this to mean research activities that are intended to lead to the public identification and reporting of vulnerabilities and exploits (in contrast to bug bounties, or research relating to exploits that will be sold commercially). In the Federal Register notice announcing the Proposed Rule, BIS explained that the proposed controls on technology for the development of intrusion software would include proprietary research on the vulnerabilities and exploitation of computers and network-capable devices. BIS has since elaborated that these proposed controls would regulate, among other things, proprietary (i.e., non-public) technology relating to the development, testing, evaluating, and productizing of exploits, zero days and intrusion software. Notably, research regarding exploits that incorporate encryption is currently (and will continue to be) regulated pursuant to the encryption controls under the EAR. These controls restrict the ability of researchers to transmit or publish exploits that utilize encryption if those exploits are not in the public domain. Under the EAR, the only way to report and publish exploits that utilize encryption without an export license is to make the exploit publicly available pursuant to License Exception Technology Software Unrestricted (TSU), whereby the exporter provides the U.S. Government with a "one-time" notification of the location of the publicly available encryption code prior to or at the time the code is placed in the public domain. The way I read all this, it does not look positive for researchers participating in international bug bounties, or those selling exploits. We hope BIS will revisit this and consider the impact on the general level of security for the internet, and for US and international organizations alike.
Many US businesses rely on software made outside the US, which will suffer if US researchers are discouraged from testing and validating their technologies.

Next steps

In the current political and technological climate, export controls for intrusion technologies feel inevitable to me. The proposed US implementation does raise some very serious concerns though, and security professionals should pay attention. Whatever your area of focus, as a security professional, the likelihood is that the Proposed Rule will impact you or your organization in some way. As stated above, we're working on a comment submission for BIS that will cover the points outlined in this post. We encourage you to do likewise, and if there is anything we can do to help, please email us, or you can post a comment below. We have an accompanying FAQ that we will add to as questions arise – please do let us know if you have a question you would be interested in us addressing. - @infosecjen

Will the Data Security and Breach Notification Act Protect Consumers?

Last week, the House Energy and Commerce Committee published a discussion draft of a proposed breach notification bill – the Data Security and Breach Notification Act of 2015. I'm a big fan of the principles at play here: as a consumer, I expect that if a…

Last week, the House Energy and Commerce Committee published a discussion draft of a proposed breach notification bill – the Data Security and Breach Notification Act of 2015. I'm a big fan of the principles at play here: as a consumer, I expect that if a company I have entrusted with my personally identifiable information (PII) has reason to believe that information has been compromised on their watch, they will tell me. I believe this kind of transparency is not only important, it should be a consumer right. I also support a single approach across all 50 US States. Having 47 different state laws to address breach notification is better than having none from a consumer protection standpoint, but it places a heavy burden on companies doing business in the US. It's time to simplify this approach with one consistent standard for the entire country. This is where the new bill proposal comes in, and it gets some things right in my opinion. But it also raises some questions and concerns, which I've outlined below. As usual, please remember: I'm not a lawyer!

Some Good Basics

Typically when thinking about data breach notification requirements there are several key points to cover, and I like how this bill proposal deals with a couple of them:

Thresholds for disclosure

The original proposal published by the White House in January indicated that ANY compromise of personal information should trigger a disclosure. That concerned me, because it meant that a researcher uncovering a vulnerability and accidentally accessing PII would result in an organization needing to disclose, and my worry was that would lead to increased vendor defensiveness, and an even stronger approach taken against researchers.
The bill proposal addresses this concern by stating that notification only occurs when: “the breach of security has resulted in, or will result in, identity theft, economic loss or economic harm, or financial fraud...” It should be noted that while that addresses my research disclosure concern, some consumer protection and privacy advocates will probably prefer this threshold not exist, and notification to occur whenever PII is accessed. It will be interesting to see whether this stays in the bill or not.

Considerations for impact on small businesses and non-profits

There is a very valid concern that a data breach notification statute creates a crippling burden for small businesses and non-profits that have limited resources and staff. These kinds of organizations may in many cases be the easiest targets for attackers, and the least able to deal with the fallout. We don't want to lose these kinds of organizations or stifle innovation and entrepreneurship. This proposal acknowledges this and makes appropriate allowances for these kinds of organizations. Generally it seems keen to make sure all requirements are proportionate to what can be reasonably expected of a business given its size and resources.

Room for Improvement

There are some parts that look to be going in the right direction, but could do with some tweaking. Before I get into them, I want to flag that this is a discussion draft of the proposal, and so I think the whole point is that people will read it and provide feedback and questions like the ones below. Hopefully going through this process will lead to a stronger eventual outcome.

The definition of “Personal Information”

This is lengthy, so I'm not going to reproduce it here, but it's on pages 20 and 21 of the proposal if you want to take a look. This covers a lot of the right things, but I think there are some important things missing – for example there's no reference to health or geo-location information.
Timeline and means of communication for disclosure

Here we see another departure from the White House proposal, which stated organizations would have up to 30 days to notify. This was a concern as some states have more stringent requirements and it would be a miss to see a federal law worsen the situation for those already covered. This proposal addresses that concern by stating that disclosure must be made: “as expeditiously as possible and without unreasonable delay, not later than 30 days after such covered entity has taken the necessary measures to determine the scope of the breach of security and restore the reasonable integrity, security, and confidentiality of the data system.” In theory this brings the proposal in line with the most stringent state laws for breach notification timing. The wording on when the clock starts is a little vague though – on restoration of “reasonable integrity, security, and confidentiality.” I think the challenge for me here is the word “reasonable” feels too open to interpretation, and full clean up can take a very long time. In terms of breach notification, I think the crucial elements are that you need to have regained control over your network and assets, and determined who is impacted, and how. I'd tweak the wording to more specifically call that out as the point when the clock starts. The piece around means of communication all seems pretty reasonable and straightforward, though I imagine companies won't like having to keep it posted on their site for 90 days.

References to cybersecurity measures

There are two areas that touch on this, one on the need for security measures and the other on the role of encryption. Let's start with the need for security measures: I work for a security company so it's not too shocking that I like that it pushes for security measures. My concern is that there are no real guidelines here to make this into a real requirement.
I'd love to see some specifics on what the requirement should be or what “appropriate for the size and complexity” means. There are some conversations starting to happen on the Hill around what sane basic security hygiene requirements might look like and this could feed in here. If you have thoughts on the kind of basics that could be mandated, please share in the comments below. The part on encryption comes in the definitions section (I'm on page 19 for those following along). There is a definition for encryption which ties further in to the definition of personal information: Creating an exception for encryption makes sense, but I am concerned that the way this is worded is too broad to ensure stringent practices are being followed. Not all encryption standards are created equal, after all.

Beyond Breach Notification

One thing that's interesting about this bill is that it's not JUST about breach notification. The title itself indicates that the bill seeks to go further and address broader data security concerns. It makes a start towards this with the section mentioned above where it sets a requirement for security measures to protect sensitive information. Hopefully some meat might be added to make that section more impactful. Another, more concerning place where we see this allusion to broader data security reach is in this section: This seems to indicate that the bill will trump any other state law relating to “the security of data in electronic form.” I'm not sure whether this is intentional. I understand that the bill needs to pre-empt state breach notification bills if it's to alleviate the strain on businesses, and that makes sense to me. But also pre-empting other kinds of data security laws seems unnecessary and strange. My concern is that this bill could inadvertently establish a dangerous precedent in how we view the responsibility and role of organizations in protecting their customers from cybersecurity threats in the future.
To give you an example, say a state had a law mandating certain security measures be taken by businesses, or perhaps a law pushing some form of liability for poor security practices and standards in code development, this bill could potentially nullify those laws given the way this section is currently worded. That would mean consumers couldn't benefit from the intended protections of such laws, which seems kind of at odds with the stated purpose of the bill, so I'm inclined to think this wording may be unintentionally broad. Hopefully we'll see it edited to focus more clearly on pre-emption for breach notification only. Will the Bill Protect Consumers? I think the bill has potential. Yes, in its current form it needs quite a bit of work, but I suppose that is the point of a discussion draft, and we will likely see some updates to the language currently being circulated. Tackling cybersecurity legislatively is never going to be simple, and this bill is effectively trying to do two things – mandate notification behavior AND address the need for security measures. I'm not sure it's able to do both well and also keep the bill simple and easy to apply. It will be interesting to see how the language evolves. Rapid7 will be providing feedback on the proposal to try to explain these concerns and get them addressed. If you're concerned about the potential outcome of this legislation, I encourage you to do likewise. It falls to those of us in the security community to take the lead on helping others understand our world and how best to navigate it. ~ @infosecjen

GHOST in the Machine - Is CVE-2015-0235 another Heartbleed?

CVE-2015-0235 is a remote code execution vulnerability affecting Linux systems using older versions of the GNU C Library (glibc versions less than 2.18). The bug was discovered by researchers at Qualys and named GHOST in reference to the gethostbyname function (and possibly because it…

CVE-2015-0235 is a remote code execution vulnerability affecting Linux systems using older versions of the GNU C Library (glibc versions less than 2.18). The bug was discovered by researchers at Qualys and named GHOST in reference to the gethostbyname function (and possibly because it makes for some nice puns). To be clear, this is NOT the end of the Internet as we know it, nor is it further evidence (after Stormaggedon) that the end of the world is nigh. It's also not another Heartbleed. But it is potentially nasty and you should patch and reboot your affected systems immediately. What's affected? Linux-based appliances from a variety of vendors are going to be impacted, though as with most library-level vulnerabilities, the attack surface is still largely unknown. If you use Linux-based appliances, check with your vendor to determine whether an update is available and needs to be applied. glibc is the GNU C Library, a core component of most Linux systems. The vulnerability impacts glibc versions released between November 10, 2000 and mid-2013, and therefore most Linux distributions from that period. This means that, similarly to Heartbleed, it affects a wide range of applications that happen to call the vulnerable API. The bug was fixed in 2013, but wasn't flagged as a security issue at the time (or since until now), so vendors using older branches of glibc didn't update the library. We recommend that you apply the latest patches available from your vendor and reboot the patched machine. Applying the update without a reboot may leave vulnerable services exposed to the network. How bad is it? Successful exploitation of this vulnerability can result in remote code execution, so it has the potential to be pretty bad. This issue can also be exploited locally in some cases, allowing an unprivileged user to gain additional access. In contrast to a vulnerability like Heartbleed, this issue is not always exploitable. In fact, in a general sense, this is not an easy bug to exploit.
Only one easily-exploitable case has been identified so far, though that may change as additional information comes to light. The one already identified is the Exim mail server. An attacker could abuse this vulnerability to execute arbitrary commands on an unpatched server. How can you test for it? This issue is difficult to test for, as the full attack surface is not yet known. As mentioned, the Exim mail server is one example of a vulnerable service and it is possible to test for the issue remotely, without authentication. In general, we recommend using a credentialed vulnerability scan to identify unpatched systems. Qualys says that they plan to release a Metasploit module targeting the Exim mail server – thank you – however, please note that this exploit depends on a non-default configuration being selected. The Nexpose update (5.12.0) scheduled for release tomorrow (Wednesday, Jan 28) will include a check for this vulnerability in relevant RHEL, CentOS, Ubuntu, Debian and SUSE distributions. What should you do about it? Patch immediately and reboot. Without a reboot, services using the old library will not be restarted. Ubuntu versions newer than 12.04 have already been upgraded to a non-vulnerable glibc library. Older Ubuntu versions (as well as other Linux distributions) are still using older versions of glibc and are either waiting on a patch or a patch is already available. Are Rapid7 solutions impacted by this? Our native code does not use the vulnerable function call, so the solutions themselves are not affected. However, if you are running Nexpose on an Ubuntu 12.04-based appliance, it is vulnerable, and we are investigating whether it can be exploited remotely and will provide an update. Again, we recommend patching immediately, and it's always sensible to ensure systems are not accessible from the public-facing internet unless they have to be. UserInsight used some of the impacted libraries.
Again, we know of no way that this could be remotely exploited but we are redeploying immediately based on a patched version of glibc. If you have any questions about this bug, please let us know. ~ @infosecjen
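As a rough triage aid for the patch guidance above, the glibc version a host reports (e.g. via `ldd --version`) can be compared against 2.18, where the upstream fix landed. This is a sketch of ours, not an official check – the function name is invented, and distributions routinely backport security fixes to older version numbers, so a hit means "check your vendor's advisory," not "confirmed vulnerable."

```python
def potentially_vulnerable(glibc_version: str) -> bool:
    """Return True if a glibc version string predates the 2.18 upstream
    fix for CVE-2015-0235. Distributions backport security patches, so
    this flags candidates for investigation, not confirmed findings."""
    major, minor = (int(part) for part in glibc_version.split(".")[:2])
    return (major, minor) < (2, 18)

# Base library versions shipped by two common distributions:
print(potentially_vulnerable("2.12"))  # RHEL 6 -> True
print(potentially_vulnerable("2.19"))  # Ubuntu 14.04 -> False
```

This is exactly why a credentialed scan is the recommended detection route: the installed package version, combined with vendor advisories, is far more reliable than probing services from the outside.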

How Do We De-Criminalize Security Research? AKA What's Next for the CFAA?

Anyone who read my breakdown on the President's proposal for cybersecurity legislation will know that I'm very concerned that both the current version of the Computer Fraud and Abuse Act (CFAA), and the update recently proposed by the Administration, have and will have a strong…

Anyone who read my breakdown on the President's proposal for cybersecurity legislation will know that I'm very concerned that both the current version of the Computer Fraud and Abuse Act (CFAA), and the update recently proposed by the Administration, have and will have a strong chilling effect on security research. You will also know that I believe that the security community can and must play a central role in changing it for the better. This post is about how we do that.

A quick recap

The CFAA is currently worded in a very vague way that not only creates confusion and doubt for researchers, but also allows for a very wide margin of prosecutorial discretion in the way the statute is applied. It contains both criminal and civil actions, the penalties for which are pretty harsh, and that increases the severity of the risk for researchers. Too often, we see the CFAA being used as a stick to threaten researchers by companies that are not willing or able to face up to their responsibilities, and don't want to be publicly embarrassed for not doing so. These factors have resulted in many researchers deciding not to conduct or disclose research, or being forced into not doing so. The new proposal is potentially worse. It makes the penalties even harsher and, while it does attempt to create more clarity on what is or is not fair game, it is worded in such a way that a great deal of research activity could be subject to legal action. For more details on that, look at the other post. Still, I believe that opening the CFAA for discussion is A Good Thing. It affords us an opportunity to highlight the issues and propose some solutions. This latter part is where we stumble; we are frequently more comfortable pointing out weaknesses and failures than recommending solutions. We must move beyond this if our industry is to survive, and if we ever hope to create a more secure ecosystem.
While I believe everyone will pay the price if we cannot solve this problem – in the form of an inherently insecure ecosystem that threatens our privacy, our economy, and potentially our safety – the more immediate risk of imprisonment or other penalties is carried by researchers. In other words, no one is going to care more about this issue or be more motivated to fix it than us. So how do we do that? I've spent the past year asking and being asked that question, and unfortunately my answer right now is that I don't know. Most people I know in the community agree with the basic premise of having an anti-hacking law of some kind, and we need to be careful that any effort to decriminalize research does not inadvertently create a backdoor in the law for criminals to abuse. Finding a solution is tough, but I have faith in the extraordinary perseverance and intelligence of our community, and I believe together we can find a solution. That sounds like a cheesy cop out. What I mean is that while I don't know in technical detail all the possible use cases that will test the law, I know great researchers that live them every day. And while I don't know how to write law or policy, I know smart, experienced lawyers that do, and that care about this issue. And though I'm learning how to navigate DC, there are amazing people already well-engaged there that recognize the problem and advocate for change. Collaboration then is the key.

Getting started

As I said, I've spent a lot of time discussing this problem and potential solutions and I thought sharing some of that thought process might help kick start a discussion – not on the problem, but on a potential solution. What we're likely looking at here is an exemption or “carve out” to the law for research. Below are some ways we might think of doing that, all of which have problems – and here I am guilty of my own crime of flagging the problem without necessarily having a solution.
Hopefully though this will stimulate discussion that leads to a proposed solution.

A role-based approach

One of the most common suggestions I've heard is that you exempt researchers based expressly on the fact that they ARE researchers. There are a few problems with this. Firstly, when I use the term “researcher” I mean someone that finds and discloses a security flaw. That could be a security professional, but it could just as easily be a student, child, or Joe Internet User who unintentionally stumbles on an issue while trying to use a website. People reporting findings for the first time have no way of establishing their credibility. Defining researchers is tough and likely defeats its own purpose. I've heard the idea of some kind of registration for researchers being kicked around, and those outside the community will often point to the legal or medical professions where there is a governing body within the community that sets a mutually agreed bar and polices it. I can feel many shuddering as they read that – ours is not a community that enjoys the concept of conformity or being told what to do. Even if that evolves over time, registration and self-government don't address the point above that ANYONE can be a researcher in the sense of uncovering a vulnerability. Then too there is the sad fact that some people may work as “white hat” security professionals during the day, but by night they wear a different colored hat altogether (or a balaclava if you believe the stock imagery strewn across the internet). If they commit a crime they should be punished for it accordingly and should not have a Get Out of Jail Free card just because they are a security professional by day.

A behavior-based approach

Perhaps the easiest way to recognize research then is through behavior.
There may be a set of activities we can point to and say “That is real research; that should be exempt.” The major challenge with this is that much researcher behavior may not be distinguishable from the initial stages of an attack. Both a researcher and a criminal may scan the internet looking to see where a certain vulnerability is in effect. Both a researcher and a criminal may access sensitive personally identifiable information through a glitch on a website. It seems to me that what they do with that information afterwards would indicate whether they are a criminal or not, and as an aside, I would have thought most criminal acts would be covered by other statutes, e.g. theft, destruction of property, fraud. This is not how this law currently works, but perhaps it merits further discussion. A problem with this could be that you would have to consider every possible scenario and set down rules for it, and that's simply not feasible. Still, I think investigating various scenarios and determining what behavior should be considered “safe” is a worthwhile exercise. If nothing else, it can help to clarify what is risky and what is not under the current statute. Uncertainty over this is one of the main factors chilling research today. This could potentially be addressed through an effort that creates guidelines for research behavior, allowing for effective differentiation between research and criminal activity. For example, as a community we could agree on thresholds for the number of records touched, number of systems impacted, or communication timelines. There are challenges with this approach too – for one thing we don't have great precedents of the community adopting standards like this. Secondly, even if we could see something like this endorsed by the Department of Justice and prosecutors, it would not protect researchers from civil litigation.
And then there is the potential of forcing a timeline for self-identification, which would raise the likelihood of incomplete or inconclusive research, and the probability of “cease and desist” notifications over meaningful and accurate disclosures.

A disclosure-based approach

This again is about behavior, but focused exclusively around disclosure. The first challenge with this stems from the fact that anyone can be a researcher. If you stumble on a discovery, can you be expected to know the right way to disclose? Can you expect students and enthusiasts to know? Before you get to that though, there is the matter of agreeing on a “right” way to disclose. Best practices and guidelines abound on this topic, but the community varies hugely in its views between full, coordinated, and private disclosures. Advocates of full disclosure will generally point to the head-in-the-ground or defensive response of the vast majority of vendors – unfortunately companies with an open attitude to research are still the exception, not the norm. And those companies are not the ones likely to sue you under the CFAA. This does raise one interesting idea – basing the proposal not just on how the researcher discloses, but also on how the vendor responds. In other words, a vendor would only be able to pursue a researcher if they had also satisfied various requirements for responding to the disclosure. This would at least spread the accountability so that it isn't solely on the shoulders of the researcher. Over time it would hopefully engender a more collaborative approach and we'd see civil litigations against researchers disappear. This is the approach proposed in a recent submission for a security research exemption to the Digital Millennium Copyright Act (DMCA).

An intent-based approach

This brings me to my last suggestion, and the one that I think the Administration tried to lean towards in its latest proposal.
One of the criticisms of the current CFAA has long been that it does not consider intent.  That's actually a bit of an over-simplification as it is always the job of the prosecutor to prove that the defendant was really up to no good. But essentially the statute doesn't contain any actual provision for intent, or mens rea for those craving a bit of Latin. This is the point at which I should remind you that I'm not a lawyer (I don't even play one on TV). However, to the limited degree that I understand this, I do want to flag that the legal concept of intent is NOT the same as the common usage understanding of it. It's not enough to simply say “I intended X.” or “I didn't intend Y” and expect that it will neutralize the outcome of your actions. Still, I've been a fan of an exemption based on intent for a while because, as I've already stated: 1) anyone can be a researcher, and 2) some of the activities of research and cybercrime will be the same. So I thought understanding intent was the only viable way to demarcate research from crime. It's a common legal concept, present in many laws, hence there being a nice Latin term for it.  And in law, precedent seems to carry weight, so I thought intent would be our way in. Unfortunately the new proposal highlights how hard this is to put into practice. It introduces the notion of acting “willfully”, which it defines as: “Intentionally to undertake an act that the person knows to be wrongful.” So now we have a concept of intent. But what does “wrongful” mean?  Does it mean I knowingly embarrassed a company through a disclosure, potentially causing negative impact to revenue and reputation? Does it mean I pressured the company to invest time and resources into patching an issue, again with a potential negative impact to the bottom line? 
If so, the vast majority of bona fide researchers will meet the criteria set out above to prove bad intent, as will media reporting on research, and anyone sharing the information over social media.

This doesn't necessarily mean we should abandon the idea of an intent-based approach. The very fact that the Administration introduced intent into its proposal indicates that there may be merit to pursuing it. It could be a question of needing to fine-tune the language and test the use cases, rather than giving up on it altogether. We may have the ability to clarify and codify what criteria demonstrate and document good intent. What do you think?

Next steps

It's time the security research community came up with its own proposal for improving the CFAA. It won't be easy; most of us have never done anything like this before, and we probably don't know enough Latin. But it's worth the effort of trying. Again, researchers bear the most immediate risk. And researchers are the ones who understand the issues and nuances best. It falls to this community, then, to lead the way on finding a solution.

The above are some initial ideas, but this by no means exhausts the conversation. What would you do? What have I not considered? (A lot, certainly.) How can we move this conversation forward to find OUR solution?

~ @infosecjen

Will the President's Cybersecurity Proposal Make Us More Secure?

Last week, President Obama proposed a number of bills to protect consumers and the economy from the growing threat of cybercrime and cyberattacks. Unfortunately in their current form, it's not clear that they will make us more secure. In fact, they may have the potential…

Last week, President Obama proposed a number of bills to protect consumers and the economy from the growing threat of cybercrime and cyberattacks. Unfortunately, in their current form, it's not clear that they will make us more secure. In fact, they may have the potential to make us more INsecure due to the chilling effect on security research. To explain why, I've run through each proposed bill in turn below, with my usual disclaimer that I'm not a lawyer.

Before we get into the details, I want to start by addressing the security community's anger and worry over the proposal, particularly the Law Enforcement Provisions piece. The community is right to be concerned about the proposals in their current form, but there is some good news here, and an important opportunity for both the Government and the security community.

Firstly, it's a positive sign that both the President and Congress are prioritizing cybersecurity and that we're seeing scrutiny and discussion of the issues. There seems to be alignment between the Government and the security community on a few things too: for example, I think we agree that there needs to be more collaboration and transparency, and a stronger focus on preventing and detecting cyberattacks. Creating consistency for data breach notification is also a sensible measure.

Lastly, I'm excited to see the Computer Fraud and Abuse Act (CFAA) being opened for updating. Yes, the current proposal raises a number of serious concerns, but so does the version that is actively being prosecuted and litigated today. The security research community has wanted to see updates to the CFAA for a long time, and this is our opportunity to engage in that process and influence legislators to make changes that really WILL make us more secure.

The Critical Role of Research

One thing I want to applaud in the President's position is the focus on prevention.
Specifically, the Administration is advocating sharing information on threat indicators to create best practices and help organizations mount a defense. This is certainly important. Understanding attackers and their methods is something we talk about a great deal at Rapid7, and we definitely agree it's a critical part of a company's security program.

If we want to prevent attacks, though, we need to do and know more. Opportunities for attackers abound across the internet within the technology itself, making effective defense an almost impossible task. Addressing this requires a shift in focus so we are building security into the very fabric of our systems and processes. We need flaws, misconfigurations, and vulnerabilities to be identified, analyzed, discussed, and addressed as quickly as possible. This is the only way we can meaningfully reduce the opportunity for attackers and increase cybersecurity.

Yet, at present, we do not encourage or support researchers and enable them to be effective. Rather, legislation like the CFAA creates confusion and fear, discouraging research efforts. Too often we see companies use this and other legislation as a stick to threaten (or beat) researchers with. (One thing to note here is that when I use the term “researcher,” I am referring variously to security professionals, enthusiasts, tinkerers, and even Joe Internet User, who accidentally stumbles on a vulnerability or misconfiguration. It's not easy to define what a security researcher is, which is one of the challenges with building a legislative carve-out for them.)

The defensive position described above is generally driven by conscientious business concerns for stability, revenue, reputation, and corporate liability. Though understandable, in the long term this approach only increases the risk exposure for the business and its customers.
We need to change this status quo and create more collaboration between security experts and businesses if we want to prevent and effectively respond to cyberattacks.

Updating the Computer Fraud and Abuse Act

There's a lot to discuss in the CFAA proposal, much of which raises concerns, so I'm just going to go through it all in turn as it appears in the proposal.

SEC 101 – Prosecuting Organized Crime Groups That Utilize Cyber Attacks

This is actually an amendment to the Racketeer Influenced and Corrupt Organizations Act (RICO), which basically allows the leaders of a criminal organization to be tried for the crimes undertaken by others within that enterprise. The proposed amendment adds violations of the CFAA (1030) as acts that can be subject to RICO. The concern with this is that the definition of “enterprise” is incredibly broad:

“Enterprise” includes any individual, partnership, corporation, association, or other legal entity, and any union or group of individuals associated in fact although not a legal entity;

The security industry is built on interaction and information sharing in online communities. We help each other tackle difficult technical challenges and make sense of data on a regular basis. If this work can be interpreted as an act of conspiracy, it will undermine our ability to effectively collaborate and communicate.

For a more specific example, let's consider Metasploit, an open source penetration testing framework designed to enable organizations to test their security against attacks they may experience in the wild. Rapid7 runs Metasploit, so if a Metasploit module is used in a crime, would that make the leadership of Rapid7 a party to that crime? Would other Metasploit contributors also be implicated? This concern is just as valid for any other open source security tool.
SEC 103 – Modernizing the Computer Fraud and Abuse Act

(a)(2)(B) – In response to requests from the legal and academic communities, and circuit splits over prosecutions, this amendment aims to clarify that a violation of Terms of Service IS a crime under the CFAA if the actor:

This is a big concern. Firstly, there is a general sense of disquiet over the idea of businesses being able to set and amend law as easily as they set and amend Terms of Service. From a security research point of view, though, there is more to why this is concerning. It is highlighted in the definitions section:

This essentially means any research activity a business does not like becomes illegal – and you have to know the organization has banned it. While that does create a burden on the organization to state this (in its Terms of Service), it effectively means the end of internet-wide scanning efforts, which can be hugely valuable in identifying threats and understanding the reach and impact of issues. The qualifiers are supposed to address this point, but do little to help. An organization can easily claim that the value of information uncovered in a research effort was more than $5,000. And government systems will, and should be, included in an internet-wide research effort.

(a)(6) – This is another area of serious concern:

There are four key parts to this:

1) “Willfully” – Interestingly, this word seems to be the Administration's attempt to introduce intent as a means of drawing a line between criminal acts and those that might appear the same but are actually bona fide. In other words, this piece might be key to separating research from criminal activity. The problem lies in the definition of “willfully,” and in the concept of “prosecutorial discretion.” The amendment defines “willfully” as follows:

“Intentionally to undertake an act that the person knows to be wrongful”

Unfortunately, this definition begs for another definition. What does “wrongful” mean?
A company embarrassed by a research disclosure could argue that the researcher intentionally caused injury to its reputation and customer confidence, and was therefore wrongful.

Regarding “prosecutorial discretion” – there are a lot of prosecutors, and they vary greatly in their level of technical and security understanding, and in their reasons for pursuing cases. It's the anomaly cases – the occasional extremes of questionable prosecutions – that drive the most press coverage, giving the security community a somewhat distorted view of the idea of prosecutorial discretion. As a result, there is little trust between prosecutors and security researchers, and it only takes the POSSIBILITY of prosecution for research to be chilled. In addition, the CFAA is both criminal and civil legislation, and we have seen motivated organizations take a more aggressive approach with their application of this law.

2) “Traffics” – This word is incredibly broadly defined. For example, in his blog post on the CFAA proposal, security researcher Rob Graham imagines a scenario in which an individual could be prosecuted for retweeting a tweet containing a link to a data dump. Coupled with the lack of clarity around having an intent to do wrong, there is a concern that the security community will be penalized purely for being inherently snarky, highly active on the internet, and interested in security information.

3) “Any other means of access” – In some cases this could refer to the public disclosure of a vulnerability, as it could be argued that it provides a “means of access.” If a researcher provides a proof-of-concept exploit for the vulnerability to highlight how it works, this would very likely be considered a means of access; likewise with exploit code provided for penetration testing purposes. This language effectively makes research disclosure illegal.
4) “Knowing or having reason to know that a protected computer would be accessed or damaged without authorization” – You can make an argument that anyone in the security community always has this knowledge. We know that there are cybercriminals and that they are attacking systems across the internet. If we disclose research findings, we know that some criminals may try to use them to their advantage, but we believe many are ALREADY using the vulnerability to their advantage without security professionals having a chance to defend against them. This is why disclosure is so important – it enables (sometimes forces) issues to be addressed so organizations can protect themselves and their customers.

(c) – Penalties. The amendment increases the penalties for CFAA violations. For researchers concerned about the issues above, the harsher penalties increase the risk and further discourage them from undertaking research or disclosing findings.

So, all in all, there are some serious concerns with the CFAA proposal, which can be summarized as potentially chilling security research to such a degree that it would seriously undermine US businesses and the internet as a whole. That sounds melodramatic, and perhaps it is, but I've heard from a great many researchers that the proposal would stop them conducting research altogether. The risk level would simply be too great. One challenge with finding the right approach for the CFAA is that research often looks much like nefarious activity, and it's hard to create law that allows for one and criminalizes the other. This is a challenge we MUST address.

The Personal Data Notification and Protection Act

We see a similar problem emerge with the Personal Data Notification and Protection Act, which aims to create consistency and clarity around breach notification requirements across the entire country (currently there are different laws on this in 47 individual states). The challenge here again resides in where research may look like a breach.
If a researcher accesses SPII in the course of finding and investigating a vulnerability, then once it is disclosed to the business in question, the company will need to go through the customer notification process. To protect themselves from this, we could see organizations discouraging researchers from testing their systems by taking a hard-line stance that they will be prosecuted.

Apart from this concern, I'm generally supportive of creating consistency for breach notification requirements. This has been needed for a while, and it should help both businesses and consumers to better understand their rights and what is expected of them in the case of a security incident. The specifics of the bill look reasonable: 30 days for notification from discovery of the incident is not optimal, but is OK. The terms laid out for exemptions and the parameters on the kind of data that represents SPII seem fair. I do agree, though, with the EFF's assertion that it would be better if this law were: “A ‘floor,' not a ‘ceiling,' allowing states like California to be more privacy protective and not depriving state attorneys general from being able to take meaningful action.”

Information Sharing

The emphasis on transparency in the breach notification piece was also present in an information sharing proposal, the latest in a long line of bills focused on encouraging the private sector to share security information with the Government. This proposal specifically asks for “cyber threat indicator” information to be shared via the National Cybersecurity and Communications Integration Center (NCCIC), and offers limited liability as an inducement.

This feels like the least impactful of the three bills to me. It's a voluntary program and offers no hugely compelling incentive for participation, other than liability limitation. This supposes that the only barrier to information sharing currently is fear of liability, and I'm not convinced that's accurate.
For example, it's often the case that organizations don't know what's going on in their environment from a security point of view, and don't know what to look for or where to start. A shortage of security skills exacerbates this problem. In addition, while I do think sharing this kind of information is potentially very valuable, I'm not convinced that doing so through the Government is the most efficient or effective way to realize this value, and I definitely don't think it promotes collaboration between security and business leaders. In fact, I'm concerned that it could create a privileged class of information that is tightly controlled, rather than open and accessible to all.

One thing this proposal does raise for me, though, is a question around liability. At present, in the relationship between a vendor and a researcher, all the liability rests on the shoulders of the researcher (and this weight increases under the new CFAA proposal). Researchers carry all the risk; one false move or poorly worded communication and they can be facing a criminal or civil action. The proposed information sharing bill doesn't address that, but it got me thinking… if the Government is prepared to offer limited liability to organizations reporting cybersecurity information, perhaps it could do something similar for researchers disclosing vulnerabilities….

Will the President's Cybersecurity Proposal Make Us More Secure?

If you've stuck with me this far, thank you and well done. As I said at the start of this piece, it's good to see cybersecurity being prioritized and discussed by the Government. I hope something good will come of it, and certainly I think data breach notification and information sharing are important positive steps if handled correctly. But as I've stated numerous times throughout this piece, I believe our best and only real chance of addressing the security challenge is identifying and fixing our vulnerabilities.
The bottom line is that we can't do that if researchers don't want to conduct or disclose research for fear of ending up in prison or losing their homes.

Again, it's important to remember that these are only initial proposals, and we're now entering a period of consultation during which various House and Senate committees will look at the goals and evaluate the language to see what will work and what won't. It's critical that the security community participates in this process in a constructive way. We need to remember that we are more immersed in security than most, and share our expertise to ensure the right approach is taken. We share a common goal with the Government: improving cybersecurity. We can only achieve it by working together.

Let me know what you think and how you're getting involved.

@infosecjen
