Rapid7 Blog

The Legal Perspective of a Data Breach

The following is a guest post by Christopher Hart, an attorney at Foley Hoag and a member of Foley Hoag’s cybersecurity incident response team. This is not meant to constitute legal advice; instead, Chris offers helpful guidance for building an incident preparation and breach response framework in your own organization.

A data breach is a business crisis that requires both a quick and a careful response. From my perspective as a lawyer, I want to provide the best advice and assistance I possibly can to help minimize the costs (and stress) that arise from a security incident. When I get a call from someone saying that they think they’ve had a breach, the first thing I’m often asked is, “What do I do?” My response is often something like, “Investigate.” The point is that normally, before the legal questions can be answered and the legal response can be crafted, as full a scope of the incident as possible first needs to be understood.

I typically think of data breaches as having three parts: planning, managing, and responding.

Planning is about policy-making and incident preparation. Sometimes, the calls that I get when there is a data breach involve conversations I’m having for the first time—that is, the client has not yet thought ahead of time about what would happen in a breach situation and how she might need to respond. But sometimes, they come from clients with whom I have already worked to develop an incident response plan. In order to plan effectively for a breach, think about the following questions: What do you need to do to minimize the possibility of a breach? What would you need to do if and when a breach occurs? Developing a response plan allows you to identify members of a crisis management team—your forensic consultant, your legal counsel, your public relations expert—and create a system to take stock of your data management. I can’t emphasize enough how important this stage is. Often, clients still think of data breaches as technical, IT issues. But the trend I am seeing now, and the advice I often give, is to think of data security as a risk management issue. That means not confining the question of data security to the tech staff, but having key players throughout the organization weigh in, from the boardroom on down. Thinking about data security as a form of managing risk is a powerful way of preparing for and mitigating the worst-case scenario.

Managing involves investigating the breach, patching it and restoring system security, notifying affected individuals, notifying law enforcement authorities as necessary and appropriate, and taking whatever other steps might be necessary to protect anyone affected. A good plan will lead to better management. When people call me (or anyone on my firm’s Cybersecurity Incident Response Team, a group of lawyers at Foley Hoag who specialize in data breach response) about data breaches, they are often calling about how to manage this step. But this is only one part of a much broader and deeper picture of data breach response.

Responding can involve investigation and litigation. If you’ve acted reasonably and used best practices to minimize the possibility of a breach; if you’ve quickly and reasonably complied with your legal obligations; and if you’ve done all you can to protect consumers, then not only have you minimized the damage from a breach—which is good for your company and for the individuals affected by the breach—but you’ve also minimized your risks in possible litigation. In any event, this category involves responding to inquiries and investigation demands from state and federal authorities, responding to complaints from individuals and third parties, and generally engaging in litigation until the disputes have been resolved. This can be a frustratingly time-consuming and expensive process.

This should give you a good overall picture of how I, or any lawyer, thinks about data security incidents. I hope it helps give you a framework for thinking about data security in your own organizations.

Need assistance? Check out Rapid7's incident response services if you need help developing or implementing an incident response plan at your organization.

Cybersecurity for NAFTA

When the North American Free Trade Agreement (NAFTA) was originally negotiated, cybersecurity was not a central focus. NAFTA came into force – removing obstacles to commercial trade activity between the US, Canada, and Mexico – in 1994, well before most digital services existed. Today, cybersecurity is a major economic force – itself a large industry and important source of jobs, as well as an enabler of broader economic health by reducing risk and uncertainty for businesses. Going forward, cybersecurity should be an established component of modernized trade agreements and global trade policy.

The Trump Administration is now modernizing NAFTA, with the first renegotiation round concluding recently. There are several key ways the US, Mexican, and Canadian governments can use this opportunity to advance cybersecurity. In this blog post, we briefly describe two of them: 1) aligning cybersecurity frameworks, and 2) protecting strong encryption. For more about Rapid7's recommendations on cybersecurity and trade, check out our comments on NAFTA to the US Trade Representative (USTR), or check out my upcoming presentation on this very subject at Rapid7's UNITED conference!

Align cybersecurity frameworks

Trade agreements should broadly align approaches to cybersecurity planning by requiring the parties to encourage voluntary use of a comprehensive, standards-based cybersecurity risk management framework. The National Institute of Standards and Technology's (NIST) Cybersecurity Framework for Critical Infrastructure ("NIST Cybersecurity Framework") is a model of this type of framework, and is already experiencing strong adoption in the U.S. and elsewhere. In addition to our individual comments to USTR, Rapid7 joined comments from the Coalition for Cybersecurity Policy and Law, and also organized a joint letter with ten other cybersecurity companies, urging USTR to incorporate this recommendation into NAFTA. International alignment of risk management frameworks would promote trade and cybersecurity by:

Streamlining trade of cybersecurity products and services. To oversimplify, think of a cybersecurity framework like a list of goals and activities – it is easier to find the right products and services if everyone is referencing a similar list. Alignment on a comprehensive framework would enable cybersecurity companies to map their products and services to the framework more consistently. Alignment can also help less mature markets know what specific cybersecurity goals to work toward, which will clarify the types of products they need to achieve these goals, leading to more informed investment decisions that hold service providers to consistent benchmarks.

Enabling many business sectors by strengthening cybersecurity. Manufacturing, agriculture, healthcare, and virtually all other industries are going digital, making computer security crucial for their daily operations and future success. Broader use of a comprehensive risk management framework can raise the baseline cybersecurity level of trading partners in all sectors, mitigating cyber threats that hinder commercial activity, fostering greater trust in services that depend upon secure infrastructure, and strengthening the system of international trade.

Helping address trade barriers and market access issues. Country-specific approaches to cyber regulation – such as data localization or requiring use of specific technologies – can raise market access issues or force ICT companies to make multiple versions of the same product. International alignment on interoperable, standards-based cybersecurity principles and processes would reduce unnecessary variation in regulatory approaches and help provide clear alternatives to cybersecurity policies that inhibit free trade.

To keep pace with innovation and evolving threats, prevent standards from reducing market access, and incorporate the input of private sector experts, the risk management framework should be voluntary, flexible, and developed in an industry-led and transparent process. For example, the NIST Cybersecurity Framework is voluntary and was developed through an open process in which anyone can participate. The final trade agreement text need not dictate the framework content beyond basic principles, but should instead encourage the development, alignment, and use of functionally similar cybersecurity frameworks.

Prohibit requirements to weaken encryption

Critical infrastructure, commerce, and individuals depend on encryption as a fundamental means of protecting data from unauthorized access and use. Market access rules requiring weakened encryption would create technical barriers to trade and put products with weakened encryption at a competitive disadvantage with uncompromised products. Requirements to weaken encryption can impose significant security risks on companies by creating diverse new attack surfaces for bad actors, including cybercriminals and unfriendly international governments – ultimately undermining the security of end users, businesses, and governments.

NAFTA should include provisions forbidding parties from conditioning market access for cryptography used for commercial applications on the transfer of private keys, algorithm specifications, or other design details. The final draft text of the Trans-Pacific Partnership (TPP) contained a similar provision – though Congress never ratified TPP, so it never came into force. Although this provision would be helpful to protect strong encryption, it would only apply to commercial activities. The current version of NAFTA contains exceptions for regulations undertaken for national security (as did TPP, in addition to clarifications that a nation's law enforcement agencies could still demand information pursuant to their legal processes). This may limit the overall protectiveness of the provision, but should also moderate concerns a nation might have about including encryption protection in the trade agreement.

This is just the beginning

The NAFTA parties have set an aggressive pace for negotiations, with the goal of agreeing on a final draft by the end of the year. However, the original agreement took years to finalize, and NAFTA covers many subjects that can attract political controversy. So NAFTA's timeline, and openness to incorporating new cybersecurity provisions, are not entirely clear. Nonetheless, the Trump Administration has indicated that both international trade and cybersecurity are priorities. Even as the NAFTA negotiations roll on, the Administration has begun examining the Korea-US trade agreement, and both new agreements and modernization of previous agreements are likely future opportunities. Trade agreements can last decades, so considering how best to embed cybersecurity priorities should not be taken lightly. Rapid7 will continue to work with private and public sector partners to strengthen cybersecurity and industry growth through trade agreements.

Copyright Office Calls For New Cybersecurity Researcher Protections

On Jun. 22, the US Copyright Office released its long-awaited study on Sec. 1201 of the Digital Millennium Copyright Act (DMCA), and it has important implications for independent cybersecurity researchers. Mostly the news is very positive. Rapid7 advocated extensively for researcher protections to be built into this report, submitting two sets of detailed comments—see here and here—to the Copyright Office with Bugcrowd, HackerOne, and Luta Security, as well as participating in official roundtable discussions. Here we break down why this matters for researchers, what the Copyright Office's study concluded, and how it matches up to Rapid7's recommendations.

What is DMCA Sec. 1201 and why does it matter to researchers?

Sec. 1201 of the DMCA prohibits circumventing technological protection measures (TPMs, like encryption, authentication requirements, region coding, and user agents) to access copyrighted works, including software, without permission of the owner. That creates criminal penalties and civil liability for independent security research that does not obtain authorization for each TPM circumvention from the copyright holders of software. This hampers researchers' independence and flexibility. While the Computer Fraud and Abuse Act (CFAA) is more famous and feared by researchers, liability under DMCA Sec. 1201 is arguably broader because it applies to accessing software on devices you may own yourself, while the CFAA generally applies to accessing computers owned by other people.

To temper this broad legal restraint on unlocking copyrighted works, Congress built two types of exemptions into Sec. 1201: permanent exemptions for specific activities, and temporary exemptions that the Copyright Office can grant every three years in its "triennial rulemaking" process. The permanent exemption from the prohibition on circumventing TPMs for security testing is quite limited – in part because researchers are still required to get prior permission from the software owner. The temporary exemptions go beyond the permanent exemptions. In Oct. 2015, the Copyright Office granted a very helpful exemption to Sec. 1201 for good faith security testing that circumvents TPMs without permission. However, this temporary exemption will expire at the end of the three-year exemption window. In the past, once a temporary exemption expired, advocates had to start from scratch in re-applying for another temporary exemption. The temporary exemption is set to expire Oct. 2018, and the renewal process will ramp up in the fall of this year.

Copyright Office study and Rapid7's recommendations

The Copyright Office announced a public study of Sec. 1201 in Dec. 2015, undertaken to weigh legislative and procedural reforms to Sec. 1201, including the permanent exemptions and the three-year rulemaking process. The Copyright Office solicited two sets of public comments and held a roundtable discussion to obtain feedback and recommendations for the study. At each stage, Rapid7 provided recommendations on reforms to empower good faith security researchers while preserving copyright protection against infringement – though, it should be noted, there were several commenters opposed to reforms for researchers on IP protection grounds. Broadly speaking, the conclusions reached in the Copyright Office's study are quite positive for researchers and largely track the recommendations of Rapid7 and other proponents of security research.
Here are four key highlights:

Authorization requirement: As noted above, the permanent exemption for security testing under Sec. 1201(j) is limited because it still requires researchers to obtain authorization to circumvent TPMs. Rapid7's recommendation is to remove this requirement entirely, because good faith security research does not infringe copyright, yet an authorization requirement compromises the independence and speed of research. The Copyright Office's study recommended [at pg. 76] that Congress make this requirement more flexible or remove it entirely. This is arguably the study's most important recommendation for researchers.

Multi-factor test: The permanent exemption for security testing under Sec. 1201(j) also conditions liability protection in part on researchers using the results "solely" to promote the security of the computer owner, and on the results not being used in a manner that violates copyright or any other law. Rapid7's recommendations are to remove "solely" (since research can be performed for the security of users or the public at large, not just the computer owner), and not to penalize researchers if their research results are used by unaffiliated third parties to infringe copyright or violate laws. The Copyright Office's study recommended [at pg. 79] that Congress remove the "solely" language, and either clarify or remove the provision penalizing researchers when research results are used by third parties to violate laws or infringe copyright.

Compliance with all other laws: The permanent exemption for security testing only applies if the research does not violate any other law. Rapid7's recommendation is to remove this caveat, since research may implicate obscure or wholly unrelated federal, state, or local regulations, those other laws have their own enforcement mechanisms to pursue violators, and removing liability protection under Sec. 1201 would only have the effect of compounding the penalties. Unfortunately, the Copyright Office took a different approach, tersely noting [at pg. 80] that it is unclear whether the requirement to comply with all other laws impedes legitimate security research. The Copyright Office stated that it welcomes further discussion during the next triennial rulemaking, and Rapid7 may revisit this issue then.

Streamlined renewal for temporary exemptions: As noted above, temporary exemptions expire after three years. In the past, proponents had to start from scratch to renew a temporary exemption – a process that involves structured petitions, multiple rounds of comments to the Copyright Office, and countering the arguments of opponents of the exemption. For researchers who want to renew the temporary security testing exemption but lack resources and regulatory expertise, this is a burdensome process. Rapid7's recommendation is for the Copyright Office to presume renewal of previously granted temporary exemptions unless there is a material change in circumstances that no longer justifies granting the exemption. In its study, the Copyright Office committed [at pg. 143] to streamlining the paperwork required to renew already granted temporary exemptions. Specifically, the Copyright Office will ask parties requesting renewal to submit a short declaration of the continuing need for an exemption and whether there has been any material change in circumstances voiding the need for the exemption, and then the Copyright Office will consider renewal based on the evidentiary record and comments from the rulemaking in which the temporary exemption was originally granted. Opponents of renewing exemptions, however, must start from scratch in submitting evidence that a temporary exemption harms the exercise of copyright.

Conclusion—what's next?

In the world of policy, change typically occurs over time in small (often hard-won) increments before becoming enshrined in law. The Copyright Office's study is one such increment. For the most part, the study makes recommendations to Congress, and it will ultimately be up to Congress (which has its own politics, processes, and advocacy opportunities) to adopt or decline these recommendations. The Copyright Office's study comes at a time when the House Judiciary Committee is broadly reviewing copyright law with an eye toward possible updates. However, copyright is a complex and far-reaching field, and it is unclear when Congress will actually take action. Nonetheless, the Copyright Office's opinion on these issues will carry significant weight in Congress' deliberations, so it would have been a heavy blow if the Copyright Office's study had instead called for tighter restrictions on security research.

Importantly, the Copyright Office's new, streamlined process for renewing already granted temporary exemptions will take effect without Congress' intervention. The streamlined process will be in place for the next "triennial rulemaking," which begins in late 2017 and concludes in 2018, and which will consider whether to renew the temporary exemption for security research. This is a positive, concrete development that will reduce the administrative burden of applying for renewal and increase the likelihood of continued protections for researchers.

The Copyright Office's study noted that "Independent security test[ing] appears to be an important component of current cybersecurity practices." This recognition and subsequent policy shifts on the part of the Copyright Office are very encouraging. Rapid7 believes that removing legal barriers to good faith independent research will ultimately strengthen cybersecurity and innovation, and we hope to soon see legislative reforms that better balance copyright protection with legitimate security testing.

Legislation to Strengthen IoT Marketplace Transparency

Senator Ed Markey (D-MA) is poised to introduce legislation to develop a voluntary cybersecurity standards program for the Internet of Things (IoT). The legislation, called the Cyber Shield Act, would enable IoT products that comply with the standards to display a label indicating a strong level of security to consumers – like an Energy Star rating for IoT. Rapid7 supports this legislation and believes greater transparency in the marketplace will enhance cybersecurity and protect consumers.

The burgeoning IoT marketplace holds a great deal of promise for consumers. But as the Mirai botnet made starkly clear, many IoT devices – especially at the inexpensive end of the market – are as secure as leaky boats. Rapid7's Deral Heiland and Tod Beardsley, among many others, have written extensively about IoT's security problems from a technical perspective.

Policymakers have recognized the issue as well and are taking action. Numerous federal agencies (such as FDA and NHTSA) have set forth guidance on IoT security as it relates to their areas of authority, and others (such as NIST) have long pushed for consistent company adoption of baseline security frameworks. In addition to these important efforts, we are encouraged that Congress is also actively exploring market-based means to bring information about the security of IoT products to the attention of consumers.

Sen. Markey's Cyber Shield Act would require the Dept. of Commerce to convene public and private sector experts to establish security benchmarks for select connected products. The working group would be encouraged to incorporate existing standards rather than create new ones, and the benchmarks would change over time to keep pace with evolving threats and expectations. The process, like that which produced the NIST Cybersecurity Framework, would be open for public review and comment. Manufacturers could voluntarily apply "Cyber Shield" labels to IoT products that meet the security benchmarks (as certified by an accredited testing entity).

Rapid7 supports voluntary initiatives to raise consumer awareness of product security, like that proposed in the Cyber Shield Act. Consumers are often not able to easily determine the level of security in products they purchase, so an accessible and reliable system is needed to help inform purchase decisions. As consumers evaluate and value IoT security more, competing manufacturers will respond by prioritizing secure design, risk management practices, and security processes. Consumers and the IoT industry can both benefit from this approach.

The legislation is not without its challenges, of course. To be effective, the security benchmarks envisioned by the bill must be clear and focused, rather than generally applicable to all connected devices. The program would need buy-in from security experts and responsible manufacturers, and consumers would need to learn how to spot and interpret Cyber Shield labels. But overall, Rapid7 believes Sen. Markey's Cyber Shield legislation could encourage greater transparency and security for IoT. Strengthening the IoT ecosystem will require a multi-pronged approach from policymakers, and we think lawmakers should consider incorporating this concept as part of the plan.

Rapid7 issues comments on NAFTA renegotiation

In April 2017, President Trump issued an executive order directing a review of all trade agreements. This process is now underway: The United States Trade Representative (USTR) – the nation's lead trade agreement negotiator – formally requested public input on objectives for the renegotiation of the North American Free Trade Agreement (NAFTA). NAFTA is a trade agreement between the US, Canada, and Mexico that covers a huge range of topics, from agriculture to healthcare. Rapid7 submitted comments in response, focusing on 1) preventing data localization, 2) alignment of cybersecurity risk management frameworks, 3) protecting strong encryption, and 4) protecting independent security research. Rapid7's full comments on the renegotiation of NAFTA are available here.

1) Preserving global free flow of information – preventing data localization

Digital goods and services are increasingly critical to the US economy. By leveraging cloud computing, digital commerce offers significant opportunities to scale globally for individuals and companies of all sizes – not just large companies or tech companies, but any transnational company that stores customer data. However, regulations abroad that disrupt the free flow of information, such as "data localization" (requirements that data be stored in a particular jurisdiction), impede both trade and innovation. Data localization erodes the capabilities and cost savings that cloud computing can provide, while adding the significant costs and technical burdens of segregating data collected from particular countries, maintaining servers locally in those countries, and navigating complex geography-based laws. The resulting fragmentation also undermines the fundamental concept of a unified and open global internet. Rapid7's comments [pages 2-3] recommended that NAFTA should 1) prevent compulsory localization of data, and 2) include an express presumption that governments would minimize disruptions to the flow of commercial data across borders.

2) Promote international alignment of cybersecurity risk management frameworks

When NAFTA was originally negotiated, cybersecurity was not the central concern that it is today. Cybersecurity is presently a global affair, and the consequences of malicious cyberattack or accidental breach are not constrained by national borders. Flexible, comprehensive security standards are important for organizations seeking to protect their systems and data. International interoperability and alignment of cybersecurity practices would benefit companies by enabling them to better assess global risks, make more informed decisions about security, hold international partners and service providers to a consistent standard, and ultimately better protect global customers and constituents. Stronger security abroad will also help limit the spread of malware contagion to the US. We support the approach taken by the National Institute of Standards and Technology (NIST) in developing the Cybersecurity Framework for Critical Infrastructure. The process was open, transparent, and carefully considered the input of experts from the public and private sectors. The NIST Cybersecurity Framework is now seeing impressive adoption among a wide range of organizations, companies, and government agencies – including some critical infrastructure operators in Canada and Mexico. Rapid7's comments [pages 3-4] recommended that NAFTA should 1) recognize the importance of international alignment of cybersecurity standards, and 2) require the Parties to develop a flexible, comprehensive cybersecurity risk management framework through a transparent and open process.

3) Protect strong encryption

Reducing opportunities for attackers and identifying security vulnerabilities are core to cybersecurity. The use of encryption and security testing are key practices in accomplishing these tasks. International regulations that require weakening of encryption or prevent independent security testing ultimately undermine cybersecurity. Encryption is a fundamental means of protecting data from unauthorized access or use, and Rapid7 believes companies and innovators should be able to use the encryption protocols that best protect their customers and fit their service model – whether that protocol is end-to-end encryption or some other system. Market access rules requiring weakened encryption would create technical barriers to trade and put products with weakened encryption at a competitive disadvantage with uncompromised products. Requirements to weaken encryption would impose significant security risks on US companies by creating diverse new attack surfaces for bad actors, including cybercriminals and unfriendly international governments. Rapid7's comments [page 5] recommended that NAFTA forbid Parties from conditioning market access for cryptography in commercial applications on the transfer of decryption keys or alteration of the encryption design specifications.

4) Protect independent security research

Good faith security researchers access software and computers to identify and assess security vulnerabilities. To perform security testing effectively, researchers often need to circumvent technological protection measures (TPMs) – such as encryption, login requirements, region coding, user agents, etc. – controlling access to software (a copyrighted work). However, this activity can be chilled by Sec. 1201 of the Digital Millennium Copyright Act (DMCA) of 1998, which forbids circumvention of TPMs without the authorization of the copyright holder. Good faith security researchers do not seek to infringe copyright, or to interfere with a rightsholder's normal exploitation of protected works. The US Copyright Office recently affirmed that security research is fair use and granted this activity, through its triennial rulemaking process, a temporary exemption from the DMCA's requirement to obtain authorization from the rightsholder before circumventing a TPM to safely conduct security testing on lawfully acquired (i.e., not stolen or "borrowed") consumer products. Some previous trade agreements have closely mirrored the DMCA's prohibitions on unauthorized circumvention of TPMs in copyrighted works. This approach replicates internationally the overbroad restrictions on independent security testing that the US is now scaling back. Newly negotiated trade agreements should aim to strike a more modern and evenhanded balance between copyright protection and good faith cybersecurity research. Rapid7's comments [page 6] recommended that any anti-circumvention provisions of NAFTA should be accompanied by provisions exempting security testing of lawfully acquired copyrighted works.

Better trade agreements for the Digital Age?

Data storage and cybersecurity have undergone considerable evolution since NAFTA was negotiated more than a quarter century ago.
To the extent that renegotiation may better address trade issues related to digital goods and services, we view the modernization of NAFTA and other agreements as potentially positive. The comments Rapid7 submitted regarding NAFTA will likely apply to other international trade agreements as they come up for renegotiation. We hope the renegotiations result in a broadly equitable and beneficial trade regime that reflects the new realities of the digital economy.

White House Cybersecurity Executive Order Summary

Yesterday President Trump issued an Executive Order on cybersecurity: “Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure.” The Executive Order (EO) appears broadly positive and well thought out, though it is just the beginning of a long process and not a sea change in itself. The EO directs agencies to come up with plans for securing and modernizing their networks; develop international cyber norms; work out a deterrence strategy against hacking; and reduce the threat of botnets – all constructive, overdue goals. Below are an overview, a highlight reel, and some additional takeaway thoughts.

Cybersecurity Executive Order Overview

Executive orders are issued only by the President, direct the conduct of executive branch agencies (not the private sector, legislature, or judiciary), and have the force of law. All public (not classified) EOs are published here. This cyber EO is the first major move the Trump White House (as distinct from other federal agencies) has made publicly on cybersecurity. The cyber Executive Order takes action on three fronts:

Federal network security. Directs agencies to take a risk management approach, adopt the NIST Framework, and favor shared services and consolidated network architectures (including for cloud and cybersecurity services).

Protecting critical infrastructure. Directs agencies to work with the private sector to protect critical infrastructure, incentivize more transparency on critical infrastructure cybersecurity, improve resiliency of communication infrastructure, and reduce the threat of botnets.

National preparedness and workforce development. Directs agencies to assess strategic options for deterring and defending against adversaries, report their international cybersecurity priorities, and promote international norms on cybersecurity.

Cybersecurity Executive Order Highlights

Federal network cybersecurity: The US will now manage cyber risks as an executive branch enterprise. The President is holding cabinet and agency heads accountable for implementing risk management measures commensurate with risks and magnitude of harm. [Sec. 1(c)(i)] Agencies are directed to immediately use the NIST Framework for Improving Critical Infrastructure Cybersecurity (NIST Framework) to manage their cyber risk. [Sec. 1(c)(ii)] DHS and OMB must report to the President, within 120 days, a plan to secure federal networks, address budgetary needs for cybersecurity, and reconcile all policies and standards with the NIST Framework. [Sec. 1(c)(iv)] Agencies must now show preference for shared IT services (including cloud and cybersecurity services) in IT procurement. [Sec. 1(c)(vi)(A)] This effort will be coordinated by the American Technology Council. [Sec. 1(c)(vi)(B)] The White House, in coordination with DHS and other agencies, must submit a report to the President, within 60 days, on modernizing federal IT, including transitioning all agencies to consolidated network architectures and shared IT services – with specific mention of cybersecurity services. [Sec. 1(c)(vi)(B)-(C)] Defense and intel agencies must submit a similar report for national security systems within 150 days. [Sec. 1(c)(vii)]

Critical infrastructure cybersecurity: Critical infrastructure includes power plants, oil and gas, the financial system, and other systems that would put national security at risk if damaged. The EO states that it is the government's policy to use its authorities and capabilities to support the cybersecurity risk management of critical infrastructure. [Sec. 2(a)] The EO directs DHS, DoD, and other agencies to assess authorities and opportunities to coordinate with the private sector to defend critical infrastructure. [Sec. 2(b)] DHS and DoC must submit a report to the President, within 90 days, on promoting market transparency of cyber risk management practices by critical infrastructure operators, especially those that are publicly traded. [Sec. 2(c)] DoC and DHS shall work with industry to promote voluntary actions to improve the resiliency of internet and communications infrastructure and “dramatically” reduce the threat of botnet attacks. [Sec. 2(d)] Agencies shall assess cybersecurity risks and mitigation capabilities related to the electrical sector and the defense industrial base (including the supply chain). [Sec. 2(e)-(f)]

National preparedness and workforce: The EO reiterates the US government's commitment to an open, secure Internet that fosters innovation and communication while respecting privacy and guarding against disruption. [Sec. 3(a)] Cabinet members must submit a report to the President, within 90 days, on options for deterring adversaries and protecting Americans from cyber threats. [Sec. 3(b)] Cabinet members must report to the President, within 45 days, on international cybersecurity priorities, including investigation, attribution, threat info sharing, response, etc., and must report to the President, within 90 days, on a strategy for international cooperation in cybersecurity. [Sec. 3(c)] Agencies must report to the President, within 120 days, on how to grow and sustain a workforce skilled in cybersecurity and related fields. [Sec. 3(d)(i)] The Director of National Intelligence must report to the President, within 60 days, on workforce development practices of foreign peers to compare long-term competitiveness in cybersecurity. [Sec. 3(d)(ii)] Agencies must report to the President, within 150 days, on US efforts to maintain an advantage in national-security-related cyber capabilities. [Sec. 3(d)(iii)]

The Executive Order is just the start

As you can see, the EO initially requires a lot of multi-agency reports, which we can expect to surface in coming months and which can then be used to craft official policy. There are opportunities for the private sector to provide input to the agencies as they develop those reports, though the 2-4 month timelines are pretty tight for such complex topics. But the reports are just the beginning of long processes to accomplish the goals set forth in the EO – it will take a lot longer than 60 days, for example, to fully flesh out and implement a plan to modernize federal IT. The long haul is beginning, and we won't know how transformative or effective this process will be for some time.

12 Days of HaXmas: Year-End Policy Comment Roundup

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

On the seventh day of HaXmas, the Cyber gave to me: a list of seven Rapid7 comments to government policy proposals! Oh, 'tis a magical season.

It was an active 2016 for Rapid7's policy team. When government agencies and commissions proposed rules or guidelines affecting security, we often submitted formal "comments" advocating for sound cybersecurity policies and greater protection of security researchers. These comments are typically a cross-team effort, reflecting the input of our policy, technical, and industry experts, and submitted with the goal of helping government better protect users and researchers and advance a strong cybersecurity ecosystem. Below is an overview of the comments we submitted over the past year. This list does not encompass the entirety of our engagement with government bodies, only the formal written comments we issued in 2016. Without further ado:

1. Comments to the National Institute of Standards and Technology (NIST), Feb. 23: NIST asked for public feedback on its Framework for Improving Critical Infrastructure Cybersecurity. The Framework is a great starting point for developing risk-based cybersecurity programs, and Rapid7's comments expressed support for the Framework. Our comments also urged updates to better account for user-based attacks and ransomware, to include vulnerability disclosure and handling policies, and to expand the Framework beyond critical infrastructure. We also urged NIST to encourage greater use of multi-factor authentication and more productive information sharing. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-nist-framework-022316.pdf

2. Comments to the Copyright Office, Mar. 3: The Copyright Office asked for input on its (forthcoming) study of Section 1201 of the DMCA. Teaming up with Bugcrowd and HackerOne, Rapid7 submitted comments that detailed how Section 1201 creates liability for good faith security researchers without protecting copyright, and suggested specific reforms to improve the Copyright Office's process of creating exemptions to Section 1201. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-bugcrowd--hackerone-joint-comments-to-us-copyright-office-s…

3. Comments to the Food and Drug Administration (FDA), Apr. 25: The FDA requested comments on its postmarket guidance for cybersecurity of medical devices. Rapid7 submitted comments praising the FDA's holistic view of the cybersecurity lifecycle, use of the NIST Framework, and recommendation that companies adopt vulnerability disclosure policies. Rapid7's comments urged FDA guidance to include more objective risk assessment and more robust security update guidelines. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-fda-draft-guidance-for-postmarket-management-of…

4. Comments to the Dept. of Commerce's National Telecommunications and Information Administration (NTIA), Jun. 1: NTIA asked for public comments for its (forthcoming) "green paper" examining a wide range of policy issues related to the Internet of Things. Rapid7's comprehensive comments detailed – among other things – specific technical and policy challenges for IoT security, including insufficient update practices, unclear device ownership, opaque supply chains, the need for security researchers, and the role of strong encryption. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-ntia-internet-of-things-rfc-060116.pdf

5. Comments to the President's Commission on Enhancing National Cybersecurity (CENC), Sep. 9: The CENC solicited comments as it drafted its comprehensive report on steps the government can take to improve cybersecurity in the next few years. Rapid7's comments urged the government to focus on known vulnerabilities in critical infrastructure, protect strong encryption from mandates to weaken it, leverage independent security researchers as a workforce, encourage adoption of vulnerability disclosure and handling policies, promote multi-factor authentication, and support formal rules for government disclosure of vulnerabilities. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-cenc-rfi-090916.pdf

6. Comments to the Copyright Office, Oct. 28: The Copyright Office asked for additional comments on its (forthcoming) study of Section 1201 reforms. This round of comments focused on recommending specific statutory changes to the DMCA to better protect researchers from liability for good faith security research that does not infringe on copyright. Rapid7 submitted these comments jointly with Bugcrowd, HackerOne, and Luta Security. The comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-bugcrowd-hackerone-luta-security-joint-comments-to-copyrigh…

7. Comments to the National Highway Traffic Safety Administration (NHTSA), Nov. 30: NHTSA asked for comments on its voluntary best practices for vehicle cybersecurity. Rapid7's comments recommended that the best practices prioritize security updating, encourage automakers to be transparent about cybersecurity features, and tie vulnerability disclosure and reporting policies to standards that facilitate positive interaction between researchers and vendors. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-nhtsa-cybersecurity-best-practices-for-modern-v…

2017 is shaping up to be an exciting year for cybersecurity policy. The past year made cybersecurity issues even more mainstream, and comments on proposed rules laid a lot of intellectual groundwork for helpful changes that can bolster security and safety. We are looking forward to keeping up the drumbeat for the security community next year. Happy Holidays, and best wishes for a good 2017 to you!

NCSAM: The Danger of Criminalizing Curiosity

This is a guest post from Kurt Opsahl, Deputy Executive Director and General Counsel of the Electronic Frontier Foundation.

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face.

It's been thirty years since Congress enacted the Computer Fraud and Abuse Act, the primary federal law criminalizing computer hacking. Since 1986, the CFAA has been amended many times, most often to make the law harsher and more expansive. While it was written to focus on what were then relatively rare networked computers, the ubiquity of the Internet means that today almost every machine more useful than a portable handheld calculator is a “protected computer” under the law.

As the anti-hacking law's scope expanded, so did the penalties. Today there is little room for hacking misdemeanors under the CFAA. Even where there is, there's virtually no crime that can't be upgraded to a felony by an aggressive U.S. Attorney through the simple expedient of tying the crime to a parallel state law. This ends up meaning that you might get sentenced to community service for vandalizing a printed billboard, but endure two years in prison for vandalizing a Web page.

There are many problems with the CFAA and its interpretation. The Department of Justice has used the anti-hacking law to criminalize violations of Terms of Service or iterating a URL. But one problem stands out: the over-criminalization of curiosity.

A few months before the CFAA originally passed, the e-zine Phrack published The Hacker Manifesto, in which The Mentor explained to the world his crime: curiosity. The Mentor, a hacker affiliated with the Legion of Doom, celebrated exploring and outsmarting, discovering a “door opened to a world” beyond the mendacity of the day-to-day.

Over the years of working on the EFF's Coders' Rights Project, we've had the honor of representing many smart and skilled security researchers, who've made considerable contributions to our collective security. We won't be naming names, but at least a few of these luminaries got their start in computer security from curiosity, figuring out at a young age how the machines in their lives worked, and how to make those machines work in unexpected ways. An intellectual adventure, but one fraught with the risk that these unexpected ways could cross the line into “access without authorization,” and then felony charges.

Perhaps the clearest example of the danger of criminalizing curiosity comes from the prosecution of Aaron Swartz. In 2011, a federal grand jury in Massachusetts charged Aaron Swartz with multiple felonies for his alleged computer intrusion to download a large number of academic journal articles from the MIT network. Swartz was instrumental in the development of key parts of modern technologies, including RSS, Creative Commons, and Reddit, getting his start at an early age. He was just 14 when he joined the working group that authored RSS 1.0. Yet for his curiosity, Swartz was facing a maximum sentence of decades in prison. Realistically, he would have had to serve less time, but even so he would have been saddled with felony convictions forever. Instead, Swartz took his own life, and the world lost a bright star who could have continued to bring great things to the Internet.

Paul Allen and Bill Gates provide the counterexample. In his autobiography, Allen told the story of when the two founders of Microsoft “got hold of” an administrator password at the Computer Center Corporation (which they called C-Cubed), a company that provided timeshare computers to the pair. With the cost of access mounting, eighth-grade Allen and Gates wanted to use the password to give themselves free access to the timeshare servers. However, C-Cubed discovered the hack. As Allen explained: “We hoped we'd get let off with a slap on the wrist, considering we hadn't done anything yet. But then the stern man said it could be ‘criminal’ to manipulate a commercial account. Bill and I were almost quivering.” But instead of calling the police, Dick Wilkinson, one of C-Cubed's partners, kicked them off the system for six weeks, letting them off with a warning. Eventually, C-Cubed exchanged free computer time for bug reports, allowing the kids to play and learn, so long as they carefully documented what led the computer to crash.

Computer security will always be difficult, and protecting our networks and devices requires the hard work of skilled security researchers to find, publish, and fix vulnerabilities. The skills that allow security researchers to discover vulnerabilities and develop proof of concept exploits are often rooted in satisfying their curiosity, sometimes breaking some rules along the way. Society may still want to discourage intentionally destructive behavior, and hold people accountable for any harm they may have caused, but certainly not every hack needs to be a felony, backed by stiff sentences and the lifelong encumbrances that come with being a felon. Allowing for infractions and misdemeanors, coupled with a recognition that a little mischief may be an early sign of someone who will be vital to protecting us all, would be a far better approach.

Kurt Opsahl is the Deputy Executive Director and General Counsel of the Electronic Frontier Foundation. In addition to representing clients on civil liberties, free speech and privacy law, Opsahl counsels on EFF projects and initiatives. Opsahl is the lead attorney on the Coders' Rights Project, and is representing several companies who are challenging National Security Letters. Additionally, Opsahl co-authored "Electronic Media and Privacy Law Handbook." Follow Kurt at @kurtopsahl and the EFF at @eff.

National Cybersecurity Awareness Month 2016 - This one's for the researchers

October was my favorite month even before I learned it is also National Cybersecurity Awareness Month (NCSAM) in the US and EU. So much the better – it is more difficult to be aware of cybersecurity in the dead of winter or the blaze of summer. But the seasonal competition with Pumpkin Spice Awareness is fierce.

To do our part each National Cybersecurity Awareness Month, Rapid7 publishes content that aims to inform readers about a particular theme, such as the role of executive leadership, and primers to protect users against common threats. This year, Rapid7 will use NCSAM to celebrate security research – launching blog posts and video content showcasing research and raising issues important to researchers. Rapid7 strongly supports independent research to identify and assess security vulnerabilities with the goal of correcting flaws. Such research strengthens cybersecurity and helps protect consumers by calling attention to flaws that software vendors may have ignored or missed. There are just too many vulnerabilities in complex code to expect vendors' internal security teams to catch everything. Independent researchers are antibodies in our immune system.

This NCSAM is an extra special one for security researchers for a couple of reasons. First, new legal protections for security research kick in under the DMCA later this month. Second, October 2016 is the 30th anniversary of a problematic law that chills beneficial security research – the CFAA.

DMCA exception – copyright gets out of the way (for a while)

This October 29th, a new legal protection for researchers will activate: an exemption from liability under Section 1201 of the Digital Millennium Copyright Act (DMCA). The result of a long regulatory battle, this helpful exemption will only last two years, after which we can apply for renewal.

Sec. 1201 of the DMCA prohibits circumventing a technological protection measure (TPM) that controls access to copyrighted works (including software). [17 USC 1201(a)(1)(A)] The TPMs can be anything that controls access to the software, such as weak encryption. Violators can incur civil and criminal penalties. Sec. 1201 can hinder security research by forbidding researchers from unlocking licensed software to probe for vulnerabilities. This problem prompted security researchers – including Rapid7 – to push the Copyright Office to create a shield for research from liability under Sec. 1201. The Copyright Office ultimately did so last October, issuing a rule that limits liability for circumventing TPMs on lawfully acquired (not stolen) consumer devices, medical devices, or land vehicles solely for the purpose of good faith security testing. The Copyright Office delayed activation of the exception for a year, so it takes effect this month. Rapid7 analyzed the exception in more detail here, and is pushing the Copyright Office for greater researcher protections beyond the exception.

The exception is a positive step for researchers, and another signal that policymakers are becoming more aware of the value that independent research can drive for cybersecurity and consumers. However, there are other laws – without clear exceptions – that create legal problems for good faith researchers.

Happy 30th, CFAA – time to grow up

The Computer Fraud and Abuse Act (CFAA) was enacted on October 16th, 1986 – 30 years ago. The internet was in its infancy in 1986, and platforms like social networking or the Internet of Things simply did not exist. Today, the CFAA is out of step with how technology is used. The CFAA's wide-ranging crimes can sweep in both ordinary internet activity and beneficial research. For example, as written, the CFAA threatens criminal and civil penalties for violations of a website's terms of service, a licensing agreement, or a workplace computer use agreement. [18 USC 1030(a)(2)(C)] People violate these agreements all the time – if they lie about their name on a social network, or they run unapproved programs on their work computer, or they violate terms of service while conducting security research to test whether a service has accidentally made sensitive information available on the open internet.

Another example: the CFAA threatens criminal and civil penalties for causing any impairment to a computer or information. [18 USC 1030(a)(5)] No harm is required. Any unauthorized change to data, no matter how innocuous, is prohibited. Even slowing down a website by scanning it with commercially available vulnerability scanning tools can violate this law.

Since 1986, virtually no legislation has been introduced to meaningfully address the CFAA's overbreadth – with Aaron's Law, sponsored by Rep. Lofgren and Sen. Wyden, being the only notable exception. Even courts are sharply split on how broad the CFAA should be, creating uncertainty for prosecutors, businesses, researchers, and the public. So for the CFAA's 30th anniversary, Rapid7 renews our call for sensible reform. Although we recognize the real need for computer crime laws to deter and prosecute malicious acts, Rapid7 believes balancing greater flexibility for researchers and innovators with law enforcement needs is increasingly important. As the world becomes more digital, computer users, innovators, and researchers will need greater freedom to use computers in creative or unexpected ways.

More coming for NCSAM

Rapid7 hopes National Cybersecurity Awareness Month will be used to enhance understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges researchers face. To celebrate research over the coming weeks, Rapid7 will – among other things – make new vulnerability disclosures, publish blog posts showcasing some of the best research of the year, and release videos that detail policy problems affecting research. Stay tuned, and a cheery NCSAM to you.

Rapid7 Supports Researcher Protections in Michigan Vehicle Hacking Law

Yesterday, the Michigan Senate Judiciary Committee passed a bill – S.B. 0927 – that forbids some forms of vehicle hacking, but includes specific protections for cybersecurity researchers. Rapid7 supports these protections. The bill is not law yet – it has only cleared a Committee…

Yesterday, the Michigan Senate Judiciary Committee passed a bill – S.B. 0927 – that forbids some forms of vehicle hacking, but includes specific protections for cybersecurity researchers. Rapid7 supports these protections. The bill is not law yet – it has only cleared a Committee in the Senate, but it looks poised to keep advancing in the state legislature. Our background and analysis of the bill is below.

In summary:

The amended bill offers legal protections for independent research and repair of vehicle computers. These protections do not exist in current Michigan state law.
The amended bill bans some forms of vehicle hacking that damage property or people, but we believe this was already largely prohibited under current Michigan state law.
The bill attempts to make penalties for hacking more proportional, but this may not be effective.

Background

Earlier this year, Michigan state Senator Mike Kowall introduced S.B. 0927 to prohibit accessing a motor vehicle's electronic system without authorization. The bill would have punished violations with a potential life sentence. As noted by press reports at the time, the bill's broad language made no distinction between malicious actors, researchers, or harmless access. The original bill is available here.

After S.B. 0927 was introduced, Rapid7 worked with a coalition of cybersecurity researchers and companies to detail concerns that the bill would chill legitimate research. We argued that motorists are safer as a result of independent research efforts that are not necessarily authorized by vehicle manufacturers. For example, in July 2015, researchers found serious security flaws in Jeep software, prompting a recall of 1.4 million vehicles. Blocking independent research to uncover vehicle software flaws would undermine cybersecurity and put motorists at greater risk.

Over a four-month period, Rapid7 worked extensively with Sen. Kowall's office and Michigan state executive agencies to minimize the bill's damage to cybersecurity research. We applaud their willingness to consider our concerns and suggestions. The amended bill passed by the Michigan Senate Judiciary Committee, we believe, will help provide researchers with greater legal certainty to independently evaluate and improve vehicle cybersecurity in Michigan.

The Researcher Protections

First, let's examine the bill's protections for researchers [Sec. 5(2)(B); pg. 6, lines 16-21]. Explicit protection for cybersecurity researchers does not currently exist in Michigan state law, so we view this provision as a significant step forward.

This provision says researchers do not violate the bill's ban on vehicle hacking if the purpose is to test, refine, or improve the vehicle – and not to damage critical infrastructure, other property, or injure other people. The research must also be performed under safe and controlled conditions. A court would need to interpret what qualifies as "safe and controlled" conditions – hacking a moving vehicle on a highway probably would not qualify, but we would argue that working in one's own garage likely limits the risks to other people and property sufficiently.

The researcher protections do not depend on authorization from the vehicle manufacturer, dealer, or owner.
However, because of the inherent safety risks of vehicles, Rapid7 would support a well-crafted requirement that researchers obtain authorization from the vehicle owner (as distinct from the manufacturer) for research that goes beyond passive signals monitoring.

The bill offers similar protections for licensed manufacturers, dealers, and mechanics [Sec. 5(2)(A); pg. 6, lines 10-15]. However, neither current state law nor the bill explicitly gives vehicle owners (who are not mechanics and are not conducting research) the right to access their own vehicle computers without manufacturer authorization. Since Michigan state law does not clearly give owners this ability today, the bill is not a step back here. Nonetheless, we would prefer the legislation make clear that it is not a crime for owners to independently access their own vehicle and device software.

The Vehicle Hacking Ban

The amended bill would explicitly prohibit unauthorized access to motor vehicle electronic systems to alter or use vehicle computers, but only if the purpose is to damage the vehicle, injure persons, or damage other property [Sec. 5(1)(c)-(d); pgs. 5-6, lines 23-8]. That is an important limit that should exclude, for example, passive observation of public vehicle signals or attempts to fix (as opposed to damage) a vehicle.

Although the amended bill would introduce a new ban on certain types of vehicle hacking, our take is that this activity was already illegal under existing Michigan state law. Current Michigan law – at MCL 752.795 – prohibits unauthorized access to "a computer program, computer, computer system, or computer network." The current state definition of "computer" – at MCL 752.792 – is already sweeping enough to encompass vehicle computers and communications systems. Since the law already prohibits unauthorized hacking of vehicle computers, it's difficult to see why this legislation is actually necessary. Although the bill's definition of "motor vehicle electronic system" is too broad [Sec. 2(11); pgs. 3-4, lines 25-3], its redundancy with current state law makes this legislation less of an expansion than if there were no overlap.

Penalty Changes

The amended bill attempts to bring some balance to sentencing under Michigan state computer crime law [Sec. 7(2)(A); pg. 8, line 11]. This provision essentially makes harmless violations of Sec. 5 (which includes the general ban on hacking, including vehicle hacking) a misdemeanor, as opposed to a felony. Current state law – at MCL 752.797(2) – makes all Sec. 5 violations felonies, which is potentially harsh for innocuous offenses. We believe that penalties for unauthorized hacking should be proportionate to the crime, so building additional balance into the law is welcome.

However, this provision is limited and contradictory. The Sec. 7 provision applies only to those who "did not, and did not intend to," acquire, alter, or use a computer or data, and only if the violation can be "cured without injury or damage." But to violate Sec. 5, a person must have intentionally accessed a computer to acquire, alter, or use a computer or data – so a person who did not do those things, or did not do them intentionally, did not violate Sec. 5 in the first place. It's unclear under what scenario Sec. 7 would kick in and provide a more proportionate sentence – but at least the provision does not appear to cause any harm.
We hope this provision can be strengthened and clarified as the bill moves through the Michigan state legislature.

Conclusion

On balance, we think the amended bill is a major improvement on the original, though not perfect. The most important improvements we'd like to see are:

Clarifying the penalty limitation in Sec. 7;
Narrowing the definition of "motor vehicle electronic system" in Sec. 2; and
Limiting criminal liability for owners that access software on vehicle computers they own.

However, the clear protections for independent researchers are quite helpful, and Rapid7 supports them. To us, the researcher protections further demonstrate that lawmakers are recognizing the benefits of independent research to advance safety, security, and innovation. The attempt at creating proportional sentences is also sorely needed and welcome, if inelegantly executed.

The amended bill is at a relatively early stage in the legislative process. It must still pass through the Michigan Senate and House. Nonetheless, it starts off on much more positive footing than it did originally. We intend to track the bill as it moves through the Michigan legislature and hope to see it improve further. In the meantime, we'd welcome feedback from the community.

Security vs. Security - Rapid7 supports strong encryption

A major area of focus in the current cybersecurity policy discussion is how growing adoption of encryption impacts law enforcement and national security, and whether new policies should be developed in response. This post briefly evaluates several potential outcomes of the debate, and provides Rapid7's…

A major area of focus in the current cybersecurity policy discussion is how growing adoption of encryption impacts law enforcement and national security, and whether new policies should be developed in response. This post briefly evaluates several potential outcomes of the debate, and provides Rapid7's current position on each.

Background

Rapid7 has great respect for the work of our law enforcement and intelligence agencies. As a cybersecurity company that constantly strives to protect our clients from cybercrime and industrial espionage, we appreciate law enforcement's role in deterring and prosecuting wrongdoers. We also recognize the critical need for effective technical tools to counter the serious and growing threats to our networks and personal devices. Encryption is one such tool. Encryption is a fundamental means of protecting data from unauthorized access or use. Commerce, government, and individual internet users depend on strong security for our communications. For example, encryption helps prevent unauthorized parties from reading sensitive communications – like banking or health information – traveling over the internet. Another example: encryption underpins certificates that demonstrate authenticity (am I who I say I am?), so that we can have high confidence that a digital communication – such as a computer software security update – is coming from the right source and not a man-in-the-middle attacker. The growing adoption of encryption for features like these has made users much safer than we would be without it.

Rapid7 believes companies and technology innovators should be able to use the encryption protocols that best protect their customers and fit their service model – whether that protocol is end-to-end encryption or some other system. However, we also recognize that this increased data security creates a security trade-off. Law enforcement will at times encounter encryption that it cannot break by brute force and for which only the user – not the software vendor – holds the key, and this will hinder lawful searches. The FBI's recently concluded effort to access the cell phone belonging to deceased terrorist Syed Farook of San Bernardino, California, was a case study in this very issue. Although the prevalence of systems currently secured with end-to-end encryption and no other means of access should not be overstated, law enforcement search attempts may be thwarted more often as communications evolve to use unbreakable encryption with greater frequency. This prospect has tempted government agencies to seek novel ways around encryption. While we do not find fault with law enforcement agencies attempting to execute valid search or surveillance orders, several of the options under debate for circumventing encryption pose broad negative implications for cybersecurity.

Weakening encryption

One option under discussion is a legal requirement that companies weaken encryption by creating a means of "exceptional access" to software and communications services that government agencies can use to unlock encrypted data. This option could take two forms – one in which the government agencies hold the decryption keys (unmediated access), and one in which the software creator or another third party holds the decryption keys (mediated access). Both models would impose significant security risks on the underlying software or service by creating attack surfaces for bad actors, including cybercriminals and unfriendly international governments.
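To make that attack-surface point concrete, here is a minimal, hypothetical sketch in Python (using the third-party cryptography package) of what "mediated" exceptional access amounts to in a simple hybrid encryption scheme: the message key is wrapped not only to the intended recipient but also to an escrow key, and whoever obtains that single escrow key can read every message. The key names and message contents are invented for illustration and are not drawn from any particular vendor's design.

```python
# Illustrative only: a toy hybrid encryption flow with an added escrow key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Normal hybrid encryption: a fresh symmetric key protects the message...
message_key = Fernet.generate_key()
ciphertext = Fernet(message_key).encrypt(b"sensitive banking details")

# ...and that symmetric key is wrapped to the intended recipient only.
wrapped_for_recipient = recipient_key.public_key().encrypt(message_key, oaep)

# "Exceptional access" adds a second wrapped copy for the escrow holder.
wrapped_for_escrow = escrow_key.public_key().encrypt(message_key, oaep)

# Anyone who obtains the escrow private key can now recover the message key
# and read the message; that single key becomes the new attack surface.
recovered_key = escrow_key.decrypt(wrapped_for_escrow, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'sensitive banking details'
```

The design choice the sketch highlights is that the escrow key must be defended forever, against both insiders and external attackers, for every message it can unwrap.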
For this reason, Rapid7 does not support a legal requirement that companies or developers undermine encryption to facilitate government access to encrypted data. The huge diversity of modern communications platforms and software architecture makes it impossible to implement a one-size-fits-all backdoor into encryption. Instead, to comply with a hypothetical mandate to weaken encryption, different companies are likely to build different types of exceptional access. Some encryption backdoors will be inherently more or less secure than others due to technical considerations, the availability of company resources to defend the backdoor against insider and external threats, the attractiveness of client data to bad actors, and other factors. The resulting environment would most likely be highly complex, vulnerable to misuse, and burdensome to businesses and innovators.

Rapid7 also shares concerns that requiring US companies to provide exceptional access to encrypted communications for US government agencies would lead to sustained pressure from many jurisdictions – both local and worldwide – for similar access. Companies or oversight bodies may face significant challenges in accurately tracking when, by whom, and under what circumstances client data is accessed – especially if governments have unmediated access to decryption keys. If US products are designed to be inherently insecure and "surveillance-ready," then US companies will face a considerable competitive disadvantage in international markets where more secure products are available.

Legal mandates to weaken encryption are unlikely to keep unbreakable encryption out of the hands of well-resourced criminals and terrorists. Open source software is commonly "forked," and it should be expected that developers will modify open source software to remove an encryption backdoor. Jurisdictions without an exceptional access requirement could still distribute closed source software with unbreakable encryption. As a result, the cybersecurity risks of weakened encryption are especially likely to fall on users who are not already security-conscious enough to seek out these workarounds. Intentionally weakening encryption or other technical protections ultimately undermines the security of end users, businesses, and governments.

That said, if companies or software creators voluntarily choose to build exceptional access mechanisms into their encryption, Rapid7 believes it is their right to do so. However, we would not recommend doing so, and we believe companies and creators should be as transparent as possible with their users about any such feature.

"Technical assistance" – compelled malware

Another option under debate is whether the government can force developers to build custom software that removes security features of the developers' products. This prospect arose in connection with the FBI's now-concluded bid to unlock Farook's encrypted iPhone to retrieve evidence for its terrorism investigation. In that case, a magistrate judge ordered Apple to develop and sign a custom build of iOS that would disable several security features preventing the FBI from using electronic means to quickly crack the phone's passcode via brute force. This custom version of iOS would have been deployed like a firmware update only to the deceased terrorist's iPhone, and Apple would have maintained control of both the iPhone and the custom iOS.
However, the FBI ultimately cracked the iPhone without Apple's assistance – with help, according to some reports, from a third party company – and asked the court to vacate its order against Apple. Still, it's possible that law enforcement agencies could again attempt to legally compel companies to hack their own products in the future. In the Farook case, the government had good reason to examine the contents of the iPhone, and clearly took steps to help prevent the custom software from escaping into the wild. This was not a backdoor or exceptional access to encryption as traditionally conceived, and not entirely dissimilar to cooperation Apple has offered law enforcement in the past for unencrypted older versions of iOS. Nonetheless, the legal precedent that would be set if a court compels a company or developer to create malware to weaken its own software could have broad implications that are harmful to cybersecurity. FBI Director James Comey confirmed in testimony before Congress that if the government succeeded in court against Apple, law enforcement agencies would likely use the precedent as justification to demand companies create custom software in the future. It's possible the precedent could be applied to a prolonged wiretap of users of an encrypted messaging service like WhatsApp, or a range of other circumstances. Establishing the limits of this authority would be quite important. If the government consistently compelled companies to create custom software to undermine the security of their own products, the effect could be a proliferation of company-created malware. Companies would need to defend their malware from misuse by both insiders and external threats while potentially deploying the malware to comply with many government demands worldwide, which – like defending an encryption backdoor – would place a considerable burden on companies. This outcome could reduce user trust in the security of vendor-issued software updates, even though it is generally critical for cybersecurity that users keep their software as up to date as possible. Companies may also design their products to be less secure from the outset, in anticipation of future legal orders to circumvent their own security. These scenarios raise difficult questions for cybersecurity researchers and firms like Rapid7. Government search and surveillance demands are frequently paired with gag orders that forbid the recipient (such as the individual user or a third party service provider) from discussing the demands. Could this practice impact public disclosure or company acknowledgment of a vulnerability when researchers discover a security flaw or threat signature originating from software a company is compelled to create for law enforcement? When would a company be free to fix its government-ordered vulnerability? Would cybersecurity firms be able to wholeheartedly recommend clients accept vendor software updates? Rapid7 does not support legal requirements – whether via legislation or court order – compelling companies to create custom software to degrade security. Creating secure software is very difficult under the best of circumstances, and forcing companies to actively undermine their own security features would undo decades of security lessons and practice.
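The user-trust concern above comes down to code signing. The following minimal sketch (in Python, using the cryptography package) is not any vendor's actual update mechanism; the key names and payloads are hypothetical. It shows why devices can normally trust updates, and why compelling a keyholder to sign weakened code erodes that trust: the cryptography still verifies, but what it vouches for no longer deserves the confidence users place in it.

```python
# Illustrative only: a device that installs updates signed by its vendor's key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_signing_key = Ed25519PrivateKey.generate()     # held by the vendor
device_trusted_key = vendor_signing_key.public_key()  # baked into the device

legitimate_update = b"firmware v2.1: security fixes, passcode limits intact"
signature = vendor_signing_key.sign(legitimate_update)

def device_will_install(update: bytes, sig: bytes) -> bool:
    """Install only updates whose signature verifies against the vendor key."""
    try:
        device_trusted_key.verify(sig, update)
        return True
    except InvalidSignature:
        return False

print(device_will_install(legitimate_update, signature))          # True
print(device_will_install(b"firmware v2.1, weakened", signature))  # False: tampered

# A court order cannot forge this signature cryptographically, but it can
# compel the keyholder to sign weakened code, which is why such orders raise
# doubts about the update channel itself.
```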
If the government were to compel companies to provide access to their products, Rapid7 believes it would be preferable to use tools already available to the companies (such as those Apple offered prior to iOS 8) in limited circumstances that do not put non-targeted users at risk. If a company has no means of cracking its products already available, the government should not compel the company to create custom software that undermines the products' security features. Software developers should also be free to develop patches or introduce more secure versions of their products to fix vulnerabilities at any time.

Government hacking and forensics

Finally, there is the option of government deploying its own tools to hack products and services to obtain information. End-to-end encryption provides limited protection when one of the endpoints is compromised. If government agencies do not compel companies to weaken their own products, they could exploit existing vulnerabilities themselves. As noted above, the government's exploitation of existing vulnerabilities was the outcome of the FBI's effort to compel Apple to provide access to Farook's iPhone. Government also turned to hacking or implanting malware in other contexts well before the Farook case. In many ways, this activity is to be expected. It is not an irrational priority for law enforcement agencies to modernize their computer penetration capabilities to be commensurate with savvy adversaries. A higher level of hacking and digital forensic expertise for law enforcement agencies should improve their ability to combat cybercriminals more generally.

However, this approach raises its own set of important questions related to transparency and due process. Upgrading the technological expertise of law enforcement agencies will take time, education, and resources. It will also require thoughtful policy discussions on what the appropriate rules for government hacking should be – there are few clear and publicly available standards for government use of malware. One potentially negative outcome would be government stockpiling of zero-day vulnerabilities for use in investigations, without disclosing the vulnerabilities to vendors or the public. The picture is clouded further when the government partners with third party organizations to hack on the government's behalf, as may have occurred in the case of Farook's iPhone – if the third party owns a software exploit, could IP or licensing agreements prevent the government from disclosing the vulnerability to the vendor? White House Cybersecurity Coordinator Michael Daniel noted there were "few hard and fast rules" for disclosing vulnerabilities, but pointed out that zero-day stockpiles put internet users at risk and would not be in the interests of national security. We agree and appreciate the default of vulnerability disclosure, but clearer rules on transparency and due process in the context of government hacking are becoming increasingly important.

No easy answers

We view the complex issue of encryption and law enforcement access as security versus security. To us, the best path forward is the one that provides the best security for the greatest number of individuals. To that end, Rapid7 believes that we should embrace the use of strong encryption without compelling companies to create software that undermines their products' security features.
We want the government to help prevent crime by working with the private sector to make communications services, commercial products, and critical infrastructure trustworthy and resilient. The foundation of greater cybersecurity will benefit us all in the future.

Harley Geiger
Director of Public Policy, Rapid7

Wassenaar Arrangement - Recommendations for cybersecurity export controls

The U.S. Departments of Commerce and State will renegotiate an international agreement – called the Wassenaar Arrangement – that would place broad new export controls on cybersecurity-related software. An immediate question is how the Arrangement should be revised. Rapid7 drafted some initial revisions to the Arrangement…

The U.S. Departments of Commerce and State will renegotiate an international agreement – called the Wassenaar Arrangement – that would place broad new export controls on cybersecurity-related software. An immediate question is how the Arrangement should be revised. Rapid7 drafted some initial revisions to the Arrangement language – described below and attached as a .pdf to this blog post. We welcome feedback on these suggestions, and we would be glad to see other proposals that are even more effective.

Background

When the U.S. Departments of Commerce and State agreed – with 40 other nations – to export controls related to "intrusion software" in 2013, their end goal was a noble one: to prevent malware and cyberweapons from falling into the hands of bad actors and repressive governments. As a result of the 2013 addition, the Wassenaar Arrangement requires restrictions on exports of "technology," "software," and "systems" that develop or operate "intrusion software." These items were added to the Wassenaar Arrangement's control list of "dual use" technologies – technologies that can be used maliciously or for legitimate purposes.

Yet the Arrangement's new cyber controls would impose burdensome new restrictions on much legitimate cybersecurity activity. Researchers and companies routinely develop proofs of concept to demonstrate a cybersecurity vulnerability, use software to refine and test exploits, and use penetration testing software – such as Rapid7's Metasploit Pro – to root out flaws by mimicking attackers. The Wassenaar Arrangement could (depending on how each country implements it) either require new licenses for each international export of such software, or prohibit international export altogether. This would create significant unintended negative consequences for cybersecurity, since cybersecurity is a global enterprise that routinely requires cross-border collaboration. Rapid7 submitted detailed comments to the Dept. of Commerce describing this problem in July 2015 (https://community.rapid7.com/servlet/JiveServlet/download/7173-27375/Rapid7 - Comments to BIS Proposed Cyber Rule_final.pdf), as did many other stakeholders. The Wassenaar Arrangement was also the subject of a Congressional hearing in January 2016. [For additional info, check out Rapid7's FAQ on the Wassenaar Arrangement – available here.]

Revising the Wassenaar Arrangement

To their credit, the Depts. of Commerce and State recognize the overbreadth of the Arrangement and are motivated to negotiate modifications to the core text. The agencies recently submitted agenda items for the next Wassenaar meeting – specifically, removal of the "technology" control, and placeholders for other controls. A big question now is what should happen under those placeholders – a placeholder does not necessarily mean that the agencies will ultimately renegotiate those items.

To help address this problem, Rapid7 drafted initial suggestions on how to revise the Wassenaar Arrangement, incorporating feedback from numerous partners. Rapid7's proposal builds on the good work of Mara Tam of HackerOne and her colleagues, as well as that of Sergey Bratus; one of the most important contributions of that work was to emphasize that authorization is a distinguishing feature of legitimate – as opposed to malicious – use of cybersecurity tools. Our suggested revisions can be broken down into three categories:

1) Exceptions to the Wassenaar Arrangement controls on "systems," "software," and "technology."
These are the items on which the Wassenaar Arrangement puts export restrictions. We suggest creating exceptions for software and systems designed to be installed by administrators or users for security enhancement purposes. These changes should help exclude many cybersecurity products from the Arrangement's controls, since such products are typically used only with authorization for the purpose of enhancing security – as compared with (for example) FinFisher, which is not designed for cybersecurity protection. It's worth noting that our language is not based solely on the intent of the exporter, since the proposed language requires the software to be designed for security purposes, which is a more objective and technical measure than intent alone. In addition, we agree with the Depts. of State and Commerce that the control on "technology" should be removed because it is especially overbroad. Here is the Wassenaar Arrangement text with our suggested revisions in red and strikethrough:

4.A.5. Systems, equipment, and components therefor, specially designed or modified for the generation, operation or delivery of, or communication with, "intrusion software".
Note: 4.A.5 does not apply to systems, equipment, or components specially designed to be installed or used with authorization by administrators, owners, or users for the purposes of asset protection, asset tracking, asset recovery, or 'ICT security testing'.

4.D.4. "Software" specially designed or modified for the generation, operation or delivery of, or communication with, "intrusion software".
Note: 4.D.4 does not apply to "software" specially designed to be installed or used with authorization by administrators, owners, or users for the purposes of asset protection, asset tracking, asset recovery, or 'ICT security testing'. "Software" shall be deemed "specially designed" where it incorporates one or more features designed to confirm that the product is used for security enhancement purposes. Examples of such features include, but are not limited to: a. A disabling mechanism that permits an administrator or software creator to prevent an account from receiving updates; or b. The use of extensive logging within the product to ensure that significant actions taken by the user can be audited and verified at a later date, and a means to protect the integrity of the logs.

4.E.1.a. "Technology" [...] for the "development," "production" or "use" of equipment or "software" specified by 4.A. or 4.D.
4.E.1.c. "Technology" for the "development" of "intrusion software".

2) Redefining "intrusion software."

Although the Wassenaar Arrangement does not directly control "intrusion software," the "intrusion software" definition underpins the Arrangement's controls on software, systems, and technology that operate or communicate with "intrusion software." Our goal here is to help narrow the definition of "intrusion software" to code that can be used for malicious purposes. To do this, we suggest redefining "intrusion software" as software specially designed to be run or installed without the authorization of the owner or administrator and to extract, modify, or deny access to a system or data without authorization. Here is the Wassenaar Arrangement text with our suggested revisions in red and strikethrough:

Cat 4 "Intrusion software"
1. "Software" a.
specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', or to be run or installed without the authorization of the user, owner, or 'administrator' of a computer or network-capable device, and b. performing any of the following: a.1. The unauthorized extraction of or denial of access to data or information from a computer or network-capable device, or the modification of system or user data; or b.2. The unauthorized modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions system or user data to facilitate access to data stored on a computer or network-capable device by parties other than parties authorized by the owner, user, or 'administrator' of the computer or network-capable device.

3) Exceptions to the definition of "intrusion software."

The above modification to the Arrangement's definition of "intrusion software" is not adequate on its own because exploits – which are routinely shared for cybersecurity purposes – are designed to be used without authorization. Therefore, we suggest creating two exceptions to the definition of "intrusion software." The first is to confirm that "intrusion software" does not include software designed to be installed or used with authorization for security enhancement. The second is to exclude software that is distributed to particular end users for the purpose of helping detect or prevent its unauthorized execution. Those end users include 1) organizations conducting research, education, or security testing, 2) computer emergency response teams (CERT), 3) creators or owners of products vulnerable to unauthorized execution of the software, or 4) an entity's subsidiaries or affiliates. So, an example: A German researcher discovers a vulnerability in a consumer software product, and she shares a proof-of-concept with 2) CERT, and 3) a UK company that owns the flawed product; the UK company then shares the proof-of-concept with 4) its Ireland-based subsidiary, and 1) a cybersecurity testing firm. The beneficial and commonsense information sharing outlined in this scenario would not require export licenses under our proposed language. Here is the Wassenaar Arrangement text with our suggested revisions in red and strikethrough:

Notes
1. "Intrusion software" does not include any of the following:
a. Hypervisors, debuggers or Software Reverse Engineering (SRE) tools;
b. Digital Rights Management (DRM) "software"; or
c. "Software" designed to be installed or used with authorization by manufacturers, administrators, owners, or users, for the purposes of asset protection, asset tracking, asset recovery, or 'ICT security testing'; or
d. "Software" that is distributed, for the purposes of helping detect or prevent its unauthorized execution, 1) To organizations conducting or facilitating research, education, or 'ICT security testing', 2) To Computer Emergency Response Teams, 3) To the creators or owners of products vulnerable to unauthorized execution of the software, or 4) Among and between an entity's domestic and foreign affiliates or subsidiaries.

Technical Notes
1. 'Monitoring tools': "software" or hardware devices that monitor system behaviours or processes running on a device.
This includes antivirus (AV) products, end point security products, Personal Security Products (PSP), Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS) or firewalls.
2. 'Protective countermeasures': techniques designed to ensure the safe execution of code, such as Data Execution Prevention (DEP), Address Space Layout Randomisation (ASLR) or sandboxing.
3. 'Authorization' means the affirmative or implied consent of the owner, user, or administrator of the computer or network-capable device.
4. 'Administrator' means an owner-authorized agent or user of a network, computer, or network-capable device.
5. 'Information and Communications Technology (ICT) security testing' means discovery and assessment of static or dynamic risk, vulnerability, error, or weakness affecting "software", networks, computers, network-capable devices, and components or dependencies therefor, for the demonstrated purpose of mitigating factors detrimental to safe and secure operation, use, or deployment.

This is a complex issue on several fronts. For one, it is always difficult to clearly distinguish between software and code used for legitimately beneficial versus malicious purposes. For another, the Wassenaar Arrangement itself is a convoluted international legal document with its own language, style, and processes. Our suggestions are a work in progress, and we may ultimately throw our support behind other, more effective language. We don't presume these suggestions are foolproof, and constructive feedback is certainly welcome. Time is relatively short, however, as meetings concerning the renegotiation of the Wassenaar Arrangement will begin again during the week of April 11th. It's also worth bearing in mind that even if many cybersecurity companies, researchers, and other stakeholders come to agreement on revisions, any final decisions will be made with the consensus of the 41 nations party to the Arrangement. Still, we hope suggesting this language helps inform the discussion. As written, the Arrangement could cause significant damage to legitimate cybersecurity activities, and it would be very unfortunate if that were not corrected.
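As an aside, one of the features our suggested note to 4.D.4 points to, a means to protect the integrity of product logs, can be illustrated with a simple hash chain: each entry commits to the hash of the previous entry, so later tampering is detectable. This is a minimal, hypothetical sketch of that general pattern in Python, not language from the Arrangement or a description of any particular product.

```python
# Illustrative only: a tamper-evident audit log built as a hash chain.
import hashlib
import json

def append_entry(log: list, action: str) -> None:
    """Append an entry that commits to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"action": action, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify_log(log: list) -> bool:
    """Recompute every hash and check the chain linkage end to end."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"action": entry["action"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

audit_log: list = []
append_entry(audit_log, "operator launched scan against authorized target")
append_entry(audit_log, "report exported")
print(verify_log(audit_log))                    # True: chain intact
audit_log[0]["action"] = "nothing happened"
print(verify_log(audit_log))                    # False: tampering detected
```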

Rapid7, Bugcrowd, and HackerOne file pro-researcher comments on DMCA Sec. 1201

On Mar. 3rd, Rapid7, Bugcrowd, and HackerOne submitted joint comments to the Copyright Office urging them to provide additional protections for security researchers. The Copyright Office requested public input as part of a study on Section 1201 of the Digital Millennium Copyright Act (DMCA). Our…

On Mar. 3rd, Rapid7, Bugcrowd, and HackerOne submitted joint comments to the Copyright Office urging them to provide additional protections for security researchers. The Copyright Office requested public input as part of a study on Section 1201 of the Digital Millennium Copyright Act (DMCA). Our comments to the Copyright Office focused on reforming Sec. 1201 to enable security research and protect researchers. Our comments are available here.

Background

Sec. 1201 of the DMCA prohibits circumventing technological protection measures (TPMs) to access copyrighted works, including software, without permission of the owner. That hinders a lot of security research, tinkering, and independent repair. Violations of Sec. 1201 can carry potentially stiff criminal and civil penalties. To temper this broad legal restraint on unlocking copyrighted works, Congress built in two types of exemptions to Sec. 1201: permanent exemptions for specific activities, and temporary exemptions that the Copyright Office can grant every three years. These temporary exemptions automatically expire at the end of the three-year window, and advocates for them must reapply every time the exemption window opens.

Sec. 1201 includes a permanent exemption to the prohibition on circumventing TPMs for security testing, but the exemption is quite limited – in part because researchers are still required to get prior permission from the software owner, as we describe in more detail below. Because the permanent exemption is limited, many researchers, organizations, and companies (including Rapid7) urged the Copyright Office to use its power to grant a temporary three-year exemption for security testing that would not require researchers to get prior permission. The Copyright Office did so in Oct. 2015, granting an exemption to Sec. 1201 for good faith security research that circumvents TPMs without permission. However, this exemption will expire at the end of the three-year exemption window, after which security researchers will have to start from zero in re-applying for another temporary exemption.

The Copyright Office then announced a public study of Sec. 1201 in Dec. 2015. The Copyright Office undertook this public study, as the Office put it, to assess the operation of Sec. 1201, including the permanent exemptions and the three-year rulemaking process. The study comes at a time when House Judiciary Committee Chairman Goodlatte is reviewing copyright law with an eye towards possible updates, so the Copyright Office's study may help inform that effort. Rapid7 supports the goal of protecting copyrighted works, but hopes to see legal reforms that reduce the overbreadth of copyright law so that it no longer unnecessarily restrains security research on software.

Overview of Comments

For its study, the Copyright Office asked a series of questions on Sec. 1201 and invited the public to submit answers. Below are some of the questions, and the responses we provided in our comments.

"Please provide any insights or observations regarding the role and effectiveness of the prohibition on circumvention of technological measures in section 1201(a)."

Our comments to the Copyright Office emphasized that Sec. 1201 adversely affects security research by forbidding researchers from unlocking TPMs to analyze software for vulnerabilities. We argued that good faith researchers do not seek to infringe copyright, but rather to evaluate and test software for flaws that could cause harm to individuals and businesses.
The risk of harm resulting from exploitation of software vulnerabilities can be quite serious, as Rapid7 Senior Security Consultant Jay Radcliffe described in 2015 comments to the Copyright Office. Society would benefit – and copyright interests would not be weakened – by raising awareness and urging correction of such software vulnerabilities.

"How should section 1201 accommodate interests that are outside of core copyright concerns[?]"

Our comments responded that the Copyright Office should consider non-copyright interests only when scaling back restrictions under Sec. 1201 – for example, the Copyright Office should weigh the chilling effect Sec. 1201 has on security research in determining whether to grant a research exemption to Sec. 1201. However, we argued that the Copyright Office should not consider non-copyright interests in denying an exemption, because copyright law is not the appropriate means of advancing non-copyright interests at the expense of activity that does not infringe copyright, like security research.

"Should section 1201 be adjusted to provide for presumptive renewal of previously granted exemptions—for example, when there is no meaningful opposition to renewal—or otherwise be modified to streamline the process of continuing an existing exemption?"

Our comments supported this commonsense concept. Currently, the three-year exemptions expire and must be re-applied for, which is a complex and resource-intensive process. We argued that a presumption of renewal should not hinge on a lack of "meaningful opposition," since the opposition to the 2015 security researcher exemption is unlikely to abate – though that opposition is largely based on concerns wholly distinct from copyright, like vehicular safety. Our comments also suggested that any presumption of renewal of exemptions to Sec. 1201 should be overcome only by a strong standard, such as a material change in circumstances.

"Please assess whether the existing categories of permanent exemptions are necessary, relevant, and/or sufficient. How do the permanent exemptions affect the current state of reverse engineering, encryption research, and security testing?"

Our comments said that Sec. 1201(j)'s permanent exemption for security testing was not adequate, for several reasons. The security testing exemption requires the testing to be performed for the sole purpose of benefiting the owner or operator of the computer system – meaning research undertaken for the benefit of software users or the public at large may not qualify. The security testing exemption also requires researchers to obtain authorization from owners or operators of computers prior to circumventing software TPMs – so the owners and operators can dictate the circumstances of any research that takes place, which may chill truly independent research. Finally, the security testing exemption only applies if the research violates no other laws – yet research can implicate many laws with legal uncertainty in different jurisdictions. These and other problems with Sec. 1201's permanent exemptions should give impetus for improvements – such as removing the requirements 1) that the researcher must obtain authorization before circumventing TPMs, 2) that the security testing must be performed solely for the benefit of the computer owner, and 3) that the research not violate any other laws.

We sincerely appreciate the Copyright Office conducting this public study of Sec. 1201 and providing the opportunity to submit comments.
Rapid7 submitted comments with HackerOne and Bugcrowd to demonstrate unity on the importance of reforming Sec. 1201 to enable good faith security research. Although the public comment period for this study is now closed, potential next steps include a second set of comments in response to any of the 60 organizations and individuals that provided input to the Copyright Office's study, as well as potential legislation or other Congressional action on Sec. 1201. For each next step, we will aim to work with our industry colleagues and other stakeholders to propose reforms that can protect both copyright and independent security research.

I've joined Rapid7!

Hello! My name is Harley Geiger and I joined Rapid7 as director of public policy, based out of our Washington, DC-area office. I actually joined a little more than a month ago, but there's been a lot going on! I'm excited to be a part…

Hello! My name is Harley Geiger and I joined Rapid7 as director of public policy, based out of our Washington, DC-area office. I actually joined a little more than a month ago, but there's been a lot going on! I'm excited to be a part of a team dedicated to making our interconnected world a safer place.

Rapid7 has demonstrated a commitment to helping promote legal protections for the security research community. I am a lawyer, not a technologist, and part of the value I hope to add is as a representative of security researchers' interests before government and lawmaking bodies – to help craft policies that recognize the vital role researchers play in strengthening digital products and services, and to help prevent reflexive anti-hacking regulations. I will also work to educate the public and other security researchers about the impact laws and legislation may have on cybersecurity.

Security researchers are on the front lines of dangerous ambiguities in the law. Discovering and patching security vulnerabilities is a highly valuable service – vulnerabilities can put property, safety, and dignity at risk. Yet finding software vulnerabilities often means using the software in ways the original coders do not expect or authorize, which can create legal issues. Unfortunately, many computer crime laws – like the Computer Fraud and Abuse Act (CFAA) – were enacted decades ago and make little distinction between beneficial security research and malicious hacking. And, due to the steady stream of breaches, there is constant pressure on policymakers to expand these laws even further.

I believe the issues currently facing security researchers also have broader societal implications that will grow in importance. Modern life is teeming with computers, but the future will be even more digitized. The laws governing our interactions with computers and software will increasingly control our interactions with everyday objects – including those we supposedly own – potentially chilling cybersecurity research, repair, and innovation when these activities should be broadly encouraged. We, collectively, will need greater freedom to oversee, modify, and secure the code around us than the law presently affords.

That is a major reason why the opportunity to lead Rapid7's public policy activities held a lot of appeal for me. I strongly support Rapid7's mission of making digital products and services safer for all users. In addition, it helped that I got to know Rapid7's leadership team years before joining. I first met Corey Thomas, Lee Weiner, and Jen Ellis while working on "Aaron's Law" for Rep. Zoe Lofgren in the US House of Representatives. After working for Rep. Lofgren, I was Senior Counsel and Advocacy Director at the Center for Democracy & Technology (CDT), where I again collaborated with Rapid7 on cybersecurity legislation. I've been consistently impressed by the team's overall effectiveness and dedication.

Now that I'm part of the team, I look forward to working with all of you to modernize how the law approaches security research and cybersecurity. Please let me know if you have ideas for collaboration or opportunities to spread our message. Thank you!

Harley Geiger
Director of Public Policy
Rapid7
@HarleyGeiger

12 Days of HaXmas: Political Pwnage in 2015

This post is the ninth in the series, "The 12 Days of HaXmas." 2015 was a big year for cybersecurity policy and legislation; thanks to the Sony breach at the end of 2014, we kicked the new year off with a renewed focus on…

This post is the ninth in the series, "The 12 Days of HaXmas." 2015 was a big year for cybersecurity policy and legislation; thanks to the Sony breach at the end of 2014, we kicked the new year off with a renewed focus on cybersecurity in the US Government. The White House issued three legislative proposals, held a cybersecurity summit, and signed a new Executive Order, all before the end of February. The OPM breach and a series of other high profile cybersecurity stories continued to drive a huge amount of debate on cybersecurity across the Government sphere throughout the year. Pretty much every office and agency is building a position on this topic, and Congress introduced more than 100 bills referencing "cybersecurity" during the year.

So where has the security community netted out at the end of the year in terms of policy and legislation? Let's recap some of the biggest cybersecurity policy developments…

Cybersecurity Information Sharing

This was Congress' top priority for cybersecurity legislation in 2015, and the TL;DR is that an info sharing bill was passed right before the end of the year. The idea of an agreed legal framework for cybersecurity information sharing has merit; however, the bill has drawn a great deal of fire over privacy concerns, particularly with regard to how intelligence and law enforcement agencies will use and share information.

The final bill was the result of more than five years of debate over various legislative proposals for information sharing, including three separate bills that went through votes in 2015 (two in the House and one in the Senate). In the end, the final bill was agreed through a conference process between the House and Senate, and included in the Omnibus Appropriations Act.

Despite this being the big priority for cybersecurity legislation, a common view in the security community seems to be that it is unlikely to have much impact in the near term. This is partly because organizations with the resources and ability to share cybersecurity information, such as large financial or retail organizations, are already doing so. The liability limitation granted in the bill means they are able to continue to do this with more confidence. It's unlikely to draw new organizations into the practice, as participation has traditionally centered more on whether the business has the requisite in-house expertise and a risk profile that makes security a headline priority, rather than on questions of liability. For many organizations that strongly advocated for legislation, a key goal was to get the government to improve its processes for sharing information with the private sector. It remains to be seen whether the legislation will actually help with this.

Right to Research

For those that have read any of my other posts on legislation, you probably know that protecting and promoting research is the primary purpose of Rapid7's (and my) engagement in the legislative arena. This year was an interesting one in terms of the discussion around the right to research…

The DMCA Rulemaking

The Digital Millennium Copyright Act (DMCA) prohibits the circumvention of technical measures that control access to copyrighted works, and thus it has traditionally been at odds with security research. Every three years, there is a "rulemaking" process for the DMCA whereby exemptions to the prohibition can be requested and debated through a multi-phase public process.
All granted exemptions reset at this point, so even if your exemption has been granted before, it needs to be re-requested every three years. The idea is to help the law keep up with the changing technological landscape, which is sensible, but the reality is a pretty painful, protracted process that doesn't really achieve that goal.

In 2015, several requests for security research exemptions were submitted: two general ones, one for medical devices, and one for vehicles. The Library of Congress, which oversees the rulemaking process, rolled these exemption requests together, and at the end of October it announced approval of an exemption for good faith security research on consumer-centric devices, vehicles, and implantable medical devices. Hooray!

Don't get too excited though – the language in the announcement sounded slightly as though the Library of Congress was approving the exemption against its own better judgment, and it came with some heavy caveats, most notably that it won't come into effect for a year (apart from for voting machines, which you can start researching now). More on that here. Despite that, this is a positive step in terms of acceptance for security research, and demonstrates increased support and understanding of its value in the government sector.

CFAA Reform

The Computer Fraud and Abuse Act (CFAA) is the main anti-hacking law in the US, and it is all kinds of problematic. The basic issues as they relate to security research can be summarized as follows:

It's out of date – it was first passed in 1986, and despite some "updates" since then, it feels woefully out of date. One of the clearest examples of this is that the law talks about access to "protected computers," which back in '86 probably meant a giant machine with an actual fence and guard watching over it. Today it means pretty much any device you use that is more technically advanced than a slide rule.

It's ambiguous – the law hinges on the notion of "authorization" (you're either accessing something without authorization, or exceeding authorized access), yet this term is not defined anywhere, and hence there is no clear line between what is and is not permissible. Call me old fashioned, but I think people should be able to understand what is covered by a law that applies to them…

It contains both civil and criminal causes of action – sadly, most researchers I know have received legal threats at some point. The vast majority have come from technology providers rather than law enforcement; the civil causes of action in the CFAA provide a handy stick for technology providers to wield against researchers when they are concerned about negative consequences of a disclosure.

The CFAA is hugely controversial, with many voices (and dollars spent) on all sides of the debate, and as such, efforts to update it to address these issues have not yet been successful. In 2015 though, we saw the Administration looking to extend the law enforcement authorities and penalties of the CFAA as a means of tackling cybercrime. This focus found resonance on the Hill, resulting in the development of the International Cybercrime Prevention Act, which was then abridged and turned into an amendment that its sponsors hoped to attach to the cybersecurity information sharing legislation. Ultimately, the amendment was not included with the bill that went to the vote, which was the right outcome in my opinion.

The interesting and positive thing about the process, though, was the diligence of staff in seeking out and listening to feedback from the security research community.
The language was revised several times to address security research concerns. To those who feel that the government doesn't care about security research and doesn't listen, I want to highlight that the care and consideration shown by the White House, Department of Justice, and various Congressional offices through discussions around CFAA reform this year suggests that is not universally the case. Some people definitely get it, and they are prepared to invest the time to listen to our concerns and get the approach right.

It's Not All Positive News

Despite my comments above, it certainly isn't all plain sailing, and there are those in the Government who fear that researchers may do more harm than good. We saw this particularly clearly with a vehicle safety bill proposal in the second half of the year, which would make car research illegal. Unfortunately, the momentum for this was fed by fears over the way certain high profile car research was handled this year. The good news is that there were plenty of voices on the other side pointing out the value of research as the bill was debated in two Congressional hearings. As yet, this bill has not been formally introduced, and it's unlikely to be without a serious rewrite. Still, it behooves the security research community to consider how its actions may be viewed by those on the outside – are we really showing our good intentions in the best light? I have increasingly heard questions arise in the government about regulating research or licensing researchers. If we want to be able to address that kind of thinking in a constructive way and reach the best outcome, we have to demonstrate an ability to engage productively and responsibly.

Vulnerability Disclosure

Following high profile vulnerability disclosures in 2014 (namely, Heartbleed and Shellshock), and much talk around bug bounties, challenges with multi-party coordinated disclosures, and best practices for so-called "safety industries" – where problems with technology can adversely impact human health and safety, e.g. medical devices, transportation, power grids, etc. – the topic of vulnerability disclosure was once again on the agenda. This time, it was the Obama Administration taking an interest, led by the National Telecommunications and Information Administration (NTIA, part of the Department of Commerce). They convened a public multi-stakeholder process to tackle the thorny and already much-debated topic of vulnerability disclosure. The project is still in relatively early stages, and could probably do with a few more researcher voices, so get involved! One of the inspiring things for me is the number of vendors that are new to thinking about these things and are participating. Hopefully we will see them adopting best practices and leading the way for others in their industries.

At this stage, participants have split into four groups to tackle multiple challenges: awareness and adoption of best practices; multi-party coordinated disclosure; best practices for safety industries; and economic incentives. I co-lead the awareness and adoption group with the amazing Amanda Craig from Microsoft, and we're hopeful that the group will come up with some practical measures to tackle this challenge.
If you're interested in more information on this issue specifically, you can email us.

Export Controls

Thanks to the Wassenaar Arrangement, export controls became a hot topic in the security industry in 2015, probably for the first time since the Encryption Wars (Part I).

The Wassenaar Arrangement is an export control agreement amongst 41 nation states with a particular focus on national security issues – hence it pertains to military and dual-use technologies. In 2013, the members decided that this should include both intrusion and surveillance technologies (as two separate categories). From what I've seen, the surveillance category seems largely uncontested; however, the intrusion category has caused a great deal of concern across the security and other business communities.

This is a multi-layered concern – the core language that all 41 states agreed to is problematic in itself, and the proposed US rule for implementing the control raises additional concerns. The good news is that the Bureau of Industry and Security (BIS) – the folks at the Department of Commerce who implement the control – and various other parts of the Administration have been highly engaged with the security and business communities on these challenges, and have committed to redrafting the proposed rule, implementing a number of exemptions to make it livable in the US, and opening a second public comment period in the new year. All of which is actually kind of unheard of, and is a strong indication of their desire to get this right. Thank you to them!

Unfortunately, the bad news is that this doesn't tackle the underlying issues in the core language. The problem is that the definition of what's covered is overly broad and limits the sharing of information on exploitation. This has serious implications for security researchers, who often build on each other's work, collaborate to reach better outcomes, and help each other learn and grow (which is also critical given the skills shortage we face in the security industry). If researchers around the world are not able to share cybersecurity information freely, we all become poorer and more vulnerable to attack.

There is additional bad news: more than 30 of the member states have already implemented the rule and seem to have fewer concerns over it, which means the State Department, which represents the US in the Wassenaar discussions, is not enthusiastic about revisiting the topic and requesting that the language be edited or overturned. The Wassenaar Arrangement is based on all members arriving at consensus, so all must vote and agree when a new category is added, meaning the US agreed to the language and agreed to implement the rule. From State's point of view, we missed our window to raise objections of this nature, and it's now our responsibility to find a way to live with the rule. Dissenters ask why the security industry wasn't consulted BEFORE the category was added.

The bottom line is that while the US Government can come up with enough exemptions in its implementation to make the rule toothless and not worth the paper it's written on, US companies will still be exposed to greater risk if the core language is not addressed. As I mentioned, we've seen excellent engagement from the Administration on this issue, and I'm hopeful we'll find a solution through collaboration.
Recently, we've also seen Congress start to pay close attention to this issue, which is likely to help move the discussion forward.

In December, 125 members of the House of Representatives signed a letter addressed to the White House asking it to step into the discussion around the intrusion category. That's a lot of signatories, and it will hopefully encourage the White House to get involved in an official capacity. It also indicates that Wassenaar is potentially going to be a hot topic for Congress in 2016. Reflecting that, the House Committee on Homeland Security and the House Committee on Oversight and Government Reform are joining forces for a joint hearing on this topic in January.

The challenges with the intrusion technology category of the Wassenaar Arrangement highlight a hugely complex problem: how do we reap the benefits of a global economy while clinging to regionalized, nation-state approaches to governing that economy? How do you apply nation-state laws to a borderless domain like the internet? There are no easy answers to these questions, and we'll see these challenges continue to arise in many areas of legislation and policy this year.

Breach Notification

In the US, there was talk of a federal law to set one standard for breach notification. The US currently has 47 distinct state laws setting requirements for breach notification, which creates confusion and administrative overhead for any business operating in multiple states. The goal for those who want a federal breach notification law is to simplify this by having one standard that applies across the entire country. In principle this sounds very sensible and reasonable. The problem is that the federal legislative process does not move quickly, and there is concern that a federal law would not be able to keep up with changes in the security or information landscape, leaving consumers worse off than they are today. To address this concern, consumer protection advocates urge that the federal law not pre-empt state laws that set a higher standard for notification. However, this does not alleviate the core problem any breach notification bill is trying to get at – it just adds yet another layer of confusion for businesses. So I suspect it's unlikely we'll see a federal breach notification bill pass any time soon, but I wouldn't be surprised if we see this topic come up again in cybersecurity legislative proposals this year.

Across the pond, there was an interesting development on this topic at the end of the year – the European Union issued the Network and Information Security Directive, which, amongst other things, requires operators of critical national infrastructure to report breaches (interestingly, this is somewhat at odds with the US approach, where there is a law protecting critical infrastructure from public disclosure). The EU directive is not itself a law – member states will now have to develop their own in-country laws to put it into practice. This will take some time, so we won't see a change straight away. My hope is that many of the member states will not limit their breach notification requirements to organizations operating critical infrastructure – consumers should be informed if they are put at risk, regardless of the industry. Over time, this could mark a significant shift in the dialogue and awareness of security issues in Europe; today there seems to be a feeling that European companies are not being targeted as much as US ones, which seems hard to believe.
It seems likely to me that part of the reason we don't hear about European breaches as much is that the breach notification requirement is not there, and many victims of attacks keep them confidential.

Cyber Hygiene

This was a term I heard a great deal in government circles this year as policymakers tried to come up with ways of encouraging organizations to put basic security best practices in place. The kinds of things on this list would be patching, using encryption, and similar basics. It's almost impossible to legislate for this in any meaningful way, partly because the requirements would likely already be out of date by the time a bill passed, and partly because you can't take a one-size-fits-all approach to security. It's more productive for governments to take a collaborative, educational approach and provide a baseline framework that can be adapted to an organization's needs. This is the approach the US takes with the NIST Framework (which is due for an update in 2016), and similarly CESG in the UK provides excellent non-mandated guidance.

There was some discussion around incentivizing adoption of security practices – we see this applied with liability limitation in the information sharing law. Similarly, there was an attempt at using this carrot to incentivize adoption of security technologies: the Department of Homeland Security (DHS) awarded FireEye certification under the SAFETY Act. This law is designed to encourage the use of anti-terrorism technologies by limiting liability for a terrorist attack. So let's say you run a football stadium and you deploy body scanners for everyone coming onto the grounds, but someone still manages to smuggle in and set off an incendiary device; you could be protected from liability because you were using the scanners and taking reasonable measures to stop an attack. In order for organizations to receive the liability limitation, the technology they deploy must be certified by DHS.

Now, when you're talking about terrorist attacks, you're talking about very extreme circumstances, with very extreme outcomes, and something that is statistically rare (tragically not as rare as it should be). By contrast, cybercrime is extremely common and can range vastly in its impact, so this is basically like comparing apples to flamethrowers. On top of that, using a specific cybersecurity technology may be less effective than an approach that layers a number of security practices together, e.g. setting appropriate internal policies, educating employees, patching, air gapping, etc. Yet if an organization has liability limitation because it is deploying a security technology, it may feel these other measures are unnecessarily costly and resource-hungry. So there is a pretty reasonable concern that applying the SAFETY Act to cybersecurity may be counter-productive and actually encourage organizations to take security less seriously than they might without liability limitation.

Nonetheless, there was a suggestion that the SAFETY Act be amended to cover cybersecurity. Following a Congressional hearing on this, the topic has not reared its head again, but it may reappear in 2016.

Encryption

Unless you've been living under a rock for the past several months (which might not be a terrible choice, all things considered), you'll already be aware of the intense debate raging around mandating backdoors for encryption. I won't rehash it here, but have included it because it's likely to be The Big Topic for 2016.
I doubt we'll see any real resolution, but expect to see much debate on this, both in the US and internationally.

~@infosecjen
