Rapid7 Blog

NCSAM  

NCSAM Security Crash Diet, Week 2: Social and Travel

Rapid7 guinea pig 'Olivia' describes her efforts during week two of her security 'crash diet' for National Cyber Security Awareness Month. This week focused on social sharing and travel security.

NCSAM: How Hackable Are You?

Rapid7 partnered with The Today Show to offer a fun, fast self-assessment quiz to determine individual cybersecurity risk levels. How hackable are you?

NCSAM Security Crash Diet, Week 1: Maintenance

One of Rapid7's employees tries a month of different 'security diets' in the spirit of National Cyber Security Awareness Month. Week one highlights the importance of maintenance.

NCSAM: A Personal Security Crash Diet

We're kicking off National Cyber Security Awareness Month by getting a Rapid7 employee to test out the practicality of common security advice. Follow along throughout October.

NCSAM: Understanding UDP Amplification Vulnerabilities Through Rapid7 Research

October is National Cyber Security Awareness Month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face.

When we began brainstorming ideas for NCSAM, I suggested something related to distributed denial of service (DDoS) attacks, specifically with a focus on the UDP amplification vulnerabilities that are typically abused as part of these attacks. Rarely a day goes by lately in the infosec world where you don't hear about DDoS attacks crushing the Internet presence of various companies for a few hours, days, weeks, or more. Even as I wrote this, DDoS attacks were on the front page of many major news outlets, a variety of services that I needed to write this very blog post were down because of DDoS, and I even heard DDoS discussed on the only radio station I am able to get where I live. Timely.

What follows is a brief primer on UDP amplification attacks and a look at the resources Rapid7 provides for understanding them further.

Background

A denial of service (DoS) vulnerability is about as simple as it sounds -- it exists when it is possible to deny, prevent, or in some way hinder access to a particular type of service. Abusing a DoS vulnerability usually involves an attack that consumes precious compute or network resources such as CPU, memory, disk, and network bandwidth. A DDoS attack is just a DoS attack on a larger scale, often using the resources of compromised devices on the Internet or other unwitting systems to participate in the attack.
A distributed, reflected denial of service (DRDoS) attack is a specialized variant of the DDoS attack that typically exploits UDP amplification vulnerabilities. These are often referred to as volumetric DDoS attacks, a more generic type of DDoS attack that specifically attempts to consume precious network resources.

A UDP amplification vulnerability occurs when a UDP service responds with more data or packets than the initial request that elicited the response(s). Combined with IP packet spoofing/forgery, attackers send a large number of spoofed UDP datagrams to UDP endpoints known to be susceptible to UDP amplification, using a source address corresponding to the IP address of the ultimate target of the DoS. In this sense, the forged packets cause the UDP service to "reflect" traffic back at the DoS target. The exact impact of the attack is a function of how many systems participate in the attack and their available network resources, the network resources available to the target, and the bandwidth and packet amplification factors of the UDP service in question.

A UDP service that returns 100 bytes of UDP payload in response to a 1-byte request is said to have a 100x bandwidth amplification factor. A UDP service that returns 5 UDP packets in response to 1 request is said to have a 5x packet amplification factor. Oftentimes a ratio is used in place of a factor; for example, a 5x amplification factor can also be said to have a 1:5 amplification ratio.

For more information, consider the following resources:

- US-CERT's alert on UDP-Based Amplification Attacks
- The Spoofer project from the Center for Applied Internet Data Analysis (CAIDA)

Sonar UDP Studies

Rapid7's Project Sonar has been performing a variety of UDP scans on a monthly basis and uploading the results to scans.io for consumption by the larger infosec/network community for nearly three years.
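For readers who like to see the arithmetic, the factor definitions above reduce to a few lines of Python (this sketch is mine, not part of any Rapid7 tooling):

```python
def amplification_factors(request_payload, response_payloads):
    """Compute (packet_factor, bandwidth_factor) for a single UDP probe.

    request_payload: bytes sent in the probe datagram.
    response_payloads: list of bytes objects, one per response datagram.
    """
    total_response_bytes = sum(len(p) for p in response_payloads)
    packet_factor = len(response_payloads)  # response packets per request
    bandwidth_factor = total_response_bytes / len(request_payload)
    return packet_factor, bandwidth_factor

# A service returning 100 bytes for a 1-byte probe: 100x bandwidth amplification.
packets, bandwidth = amplification_factors(b"\x00", [b"A" * 100])
print(packets, bandwidth)  # 1 100.0
```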
Infosec practitioners can use this raw scan data to research a variety of aspects related to UDP amplification vulnerabilities, including geographic/sector-specific patterns, amplification factors, etc. There are, however, some caveats:

- Although we do not currently have studies for all UDP services with amplification vulnerabilities, we have a fair number and are in the process of adding more.
- Not all of these studies specifically cover the UDP amplification vulnerabilities for the given service. Some happen to use other probe types more likely to elicit a response. In these cases, the existence of a response for a given IP simply means that it responded to our probe for that service and is likely running the service in question, but it doesn't necessarily imply that the IP is harboring a UDP amplification vulnerability.
- Our current usage of zmap is such that we will only record the first UDP packet seen in response to our probe. So, if a UDP service happens to suffer from a packet-based UDP amplification vulnerability, the Sonar data alone may not show the true extent.

Currently, Rapid7's Project Sonar has coverage for a variety of UDP services that happen to be susceptible to UDP amplification attacks. Dennis Rand, a security researcher from Denmark, recently reached out to us asking for us to provide regular studies of the qotd (17/UDP), chargen (19/UDP) and RIPv1 (520/UDP) services. When discussing his use cases for these and the other existing Sonar studies, Dennis had the following to add:

"I've been using the dataset from Rapid7 UDP Sonar for various research projects as a baseline and core part of the dataset in my research has been amazing. This data could be used by any ISPs out there to detect if they are potentially being part of the problem.
A simple task could be to set up a script that would pull the lists every month and then compare them against previous months; if at some point the numbers go way up, this could be an indication that you might have opened up something you should not have, or at least be aware of this fact in your internal risk assessment. Also it is awesome to work with people who are first of all doing this for free, at least seen from my perspective, but still being so open to helping out in the research, like adding new services to the dataset to help me span even wider in my research projects."

For each of the studies described below, the data provided on scans.io is gzip-compressed CSV with a header indicating the respective field values. For every host that responded, these are: the timestamp in seconds since the UNIX epoch, the source address and port of the response, the destination address and port (where Sonar does its scanning from), the IP ID, the TTL, and the hex-encoded UDP response payload, if any. Precisely how to decode the data for each of the studies listed below is an exercise currently left for the reader that may be addressed in future documentation, but for now the descriptions below in combination with Rapid7's dap should be sufficient.

DNS (53/UDP)

This study sends a DNS query to 53/UDP for the VERSION.BIND text record in the CHAOS class. In some situations this will return the version of ISC BIND that the DNS server is running, and in others it will just return an error. Data can be found here in files with the -dns-53.csv.gz suffix. In the most recent run of this study on 10/03/2016, there were 7,963,280 endpoints that responded.

NTP (123/UDP)

There are two variants of this study. The first sends an NTP version 2 MON_GETLIST_1 request, which will return a list of all recently connected NTP peers, generally up to 6 per packet with additional peers sent in subsequent UDP responses.
Responses for this study can be found here in files with the ntpmonlist-123.csv.gz suffix. The probe used in this study is the same as one frequently used in UDP amplification attacks against NTP. In the most recent run of this study on 10/03/2016, 1,194,146 endpoints responded.

The second variant of this study sends an NTP version 2 READVAR request and will return all of the internal variables known to the NTP process, which typically includes things like software version, information about the underlying OS or hardware, and data specific to NTP's timekeeping. The responses can be found here in files with the ntpreadvar-123.csv.gz suffix. In the most recent run of this study on 10/03/2016, 2,245,681 endpoints responded. Other UDP amplification attacks in NTP that continue to enable DDoS attacks are described in R7-2014-12.

NBNS (137/UDP)

This study has been described in greater detail here, but the summary is that it sends an NBNS name request. Most endpoints speaking NBNS will return a wealth of metadata about the node/service in question, including system and group/domain names and MAC addresses. This is the same probe that is frequently used in UDP amplification attacks against NBNS. The responses can be found here in files with the -netbios-137.csv.gz suffix. In the most recent run of this study on 10/03/2016, 1,768,634 endpoints responded.

SSDP/UPnP (1900/UDP)

This study sends an SSDP request that will discover the rootdevice service of most UPnP/SSDP-enabled endpoints. The responses can be found here in files with the -upnp-1900.csv.gz suffix. UDP amplification attacks against SSDP/UPnP often involve a similar request but for all services, often resulting in a 10x packet amplification and a 40x bandwidth amplification. In the most recent run of this study on 10/03/2016, 4,842,639 endpoints responded.

Portmap (111/UDP)

This study sends an RPC DUMP request to version 2 of the portmap service.
Most endpoints exposing 111/UDP that are running the portmap RPC service will return a list of all of the RPC services available on the node. The responses can be found here in files with the -portmap-111.csv.gz suffix. There are numerous ways to exploit UDP amplification vulnerabilities in portmap, including the same one used in the Sonar study, a portmap version 3 variant that is often more voluminous, and a portmap version 4 GETSTAT request. In the most recent run of this study on 10/03/2016, 2,836,710 endpoints responded.

Quote of the Day (17/UDP)

The qotd service is essentially the UNIX fortune command bound to a UDP socket, returning quotes/adages in response to any incoming 17/UDP datagram. Sonar's version of this study sends an empty UDP datagram to the port and records any responses, which is believed to be similar to the variant used in UDP amplification attacks. The responses can be found here in files with the -qotd-17.csv.gz suffix. In the most recent run of this newly added study on 10/21/2016, a mere 2,949 endpoints responded.

Character Generator (19/UDP)

The chargen service is a service from a time when the Internet was a wholly different place. The UDP variant of chargen will send a random number of bytes in response to any datagram arriving on 19/UDP. While most implementations stick to purely ASCII strings of random lengths between 0 and 512 bytes, some are much chattier, spewing MTU-filling gibberish, packet after packet. The responses can be found here in files with the -chargen-19.csv.gz suffix. In the most recent run of this newly added study on 10/21/2016, only 3,791 endpoints responded.
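All of the Sonar study files described above share the same gzip-compressed CSV layout. As a rough illustration of how one might decode it, here is a Python sketch; the header names and the sample record are hypothetical stand-ins, not taken from an actual scans.io file:

```python
import csv
import gzip
import io

# A sample record shaped like the fields described above (timestamp, source
# address/port, destination address/port, IP ID, TTL, hex-encoded payload).
# The column names and payload here are illustrative assumptions.
sample = (
    "timestamp_ts,saddr,sport,daddr,dport,ipid,ttl,data\n"
    "1475452800,192.0.2.10,53,198.51.100.5,12345,54321,57,0a0d626c61\n"
)
blob = gzip.compress(sample.encode())

# Real usage would be gzip.open("20161003-dns-53.csv.gz", "rt").
with gzip.open(io.BytesIO(blob), mode="rt") as fh:
    for row in csv.DictReader(fh):
        payload = bytes.fromhex(row["data"])  # decode the hex-encoded UDP payload
        print(row["saddr"], row["sport"], len(payload), "bytes")
```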
RIPv1 (520/UDP)

UDP amplification attacks against the Routing Information Protocol version 1 (RIPv1) involve sending a specially crafted request that will result in RIP responding with 20 bytes of data for every route it knows about, with up to 25 routes per response for a maximum response size of 504 bytes. RIP instances with more than 25 routes will split responses over multiple packets, adding packet amplification pains to the mix. The responses can be found here in files with the -ripv1-520.csv.gz suffix. In the most recent run of this newly added study on 10/21/2016, 17,428 endpoints responded.

Metasploit Modules

Rapid7's Metasploit has coverage for a variety of these UDP amplification vulnerabilities built into "scanner" modules available to both the base metasploit-framework as well as the Metasploit Community and Professional editions via:

- auxiliary/scanner/chargen/chargen_probe: probes endpoints for the chargen service, which suffers from a UDP amplification vulnerability inherent in its design.
- auxiliary/scanner/dns/dns_amp: in its default mode, sends an ANY query for isc.org to the target endpoints, which is similar to the query used while abusing DNS as part of DRDoS attacks.
- auxiliary/scanner/ntp/ntp_monlist: sends the NTP MON_GETLIST request, which will return all configured/connected NTP peers from the NTP endpoint in question. This behavior can be abused as part of UDP amplification attacks and is described in more detail in US-CERT TA14-013A and CVE-2013-5211.
- auxiliary/scanner/ntp/ntp_readvar: sends the NTP READVAR request, the response to which can be used as part of UDP amplification attacks in certain situations.
- auxiliary/scanner/ntp/ntp_peer_list_dos: utilizes the NTP PEER_LIST request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/ntp/ntp_peer_list_sum_dos: utilizes the NTP PEER_LIST_SUM request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/ntp/ntp_req_nonce_dos: utilizes the NTP REQ_NONCE request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/ntp/ntp_reslist_dos: utilizes the NTP GET_RESTRICT request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/ntp/ntp_unsettrap_dos: utilizes the NTP UNSETTRAP request to test the NTP endpoint for the UDP amplification vulnerability described in R7-2014-12.
- auxiliary/scanner/portmap/portmap_amp: can send three different portmap requests similar to those described previously, each of which has the potential to be abused for UDP amplification purposes.
- auxiliary/scanner/upnp/ssdp_amp: can send two different M-SEARCH requests that demonstrate UDP amplification vulnerabilities inherent in SSDP.

Each of these modules uses the Msf::Auxiliary::UDPScanner mixin to support scanning multiple hosts at the same time. Most send probes and analyze the responses with the Msf::Auxiliary::DRDoS mixin to automatically calculate any amplification based on a high-level analysis of the request/response datagram(s).
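As an aside, the two M-SEARCH variants mentioned for ssdp_amp differ mainly in their search target. A standard single-service SSDP discovery request looks roughly like the following Python sketch; this is an illustrative reconstruction, and the module's exact probes may differ:

```python
# Build an SSDP M-SEARCH discovery request over HTTPU. The single-service
# form targets only the root device; querying ssdp:all instead elicits one
# response per advertised service, which is where the amplification lies.
def build_msearch(search_target="upnp:rootdevice"):
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 1",
        f"ST: {search_target}",
        "",
        "",  # trailing blank line terminates the HTTPU request
    ]
    return "\r\n".join(lines).encode()

probe = build_msearch()                  # discovery for the root device only
abuse_probe = build_msearch("ssdp:all")  # one response per service
```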
Here is an example run of auxiliary/scanner/ntp/ntp_monlist:

msf auxiliary(ntp_monlist) > run
[+] 192.168.33.127:123 NTP monlist request permitted (5 entries)
[+] 192.168.32.139:123 NTP monlist request permitted (4 entries)
[+] 192.168.32.139:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): No packet amplification and a 37x, 288-byte bandwidth amplification
[+] 192.168.33.127:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): No packet amplification and a 46x, 360-byte bandwidth amplification
[+] 192.168.32.138:123 NTP monlist request permitted (31 entries)
[+] 192.168.33.128:123 NTP monlist request permitted (23 entries)
[+] 192.168.32.138:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 6x packet amplification and a 285x, 2272-byte bandwidth amplification
[+] 192.168.33.128:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 4x packet amplification and a 211x, 1680-byte bandwidth amplification
[+] 192.168.33.200:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 2x packet amplification and a 2x, 8-byte bandwidth amplification
[*] Scanned 256 of 512 hosts (50% complete)
[+] 192.168.33.251:123 NTP monlist request permitted (10 entries)
[+] 192.168.33.251:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 2x packet amplification and a 92x, 728-byte bandwidth amplification
[+] 192.168.33.254:123 - Vulnerable to NTP Mode 7 monlist DRDoS (CVE-2013-5211): 2x packet amplification and a 2x, 8-byte bandwidth amplification
[*] Scanned 512 of 512 hosts (100% complete)
[*] Auxiliary module execution completed
msf auxiliary(ntp_monlist) >

There is also the auxiliary/scanner/udp/udp_amplification module, recently added as part of metasploit-framework PR 7489, that is designed to help explore UDP amplification vulnerabilities and audit for the presence of existing ones.
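The amplification summaries in the output above can be reproduced conceptually with a few lines of Python. The real Msf::Auxiliary::DRDoS mixin is Ruby inside Metasploit and accounts for protocol overhead, so this sketch mirrors its spirit rather than its exact math:

```python
def describe_amplification(request_bytes, response_sizes):
    """Summarize amplification in the style of the scanner output above.

    request_bytes: size of the probe datagram in bytes.
    response_sizes: list of response packet sizes in bytes.
    """
    total = sum(response_sizes)
    pkt_amp = len(response_sizes)
    bw_amp = total // request_bytes if request_bytes else 0
    pkt_part = (f"{pkt_amp}x packet amplification"
                if pkt_amp > 1 else "No packet amplification")
    return f"{pkt_part} and a {bw_amp}x, {total}-byte bandwidth amplification"

# Two 440-byte responses to an 8-byte probe:
print(describe_amplification(8, [440, 440]))
```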
Nexpose Coverage

Rapid7's Nexpose product has coverage for all of the NTP vulnerabilities described above and in R7-2014-12, along with:

- netbios-nbstat-amplification
- dns-amplification
- chargen-amplification
- qotd-amplification
- quake-amplification
- steam-amplification
- upnp-ssdp-amplification
- snmp-amplification

Additional information about Nexpose's capabilities with regards to UDP amplification vulnerabilities can be found here.

Future Research

UDP amplification vulnerabilities have been lingering since the publication of RFC 768 in 1980, but only in the last couple of years have they really become a problem. Whether current and historical efforts to mitigate the impact of attacks involving UDP amplification have been effective is certainly debatable. Recent events have shown that only very well fortified assets can survive DDoS attacks, and UDP amplification still plays a significant role. It is our hope that the open dissemination of active scan data through projects like Sonar and the availability of tools for detecting the presence of this class of vulnerability will serve as a valuable tool in the fight against DDoS.

If you are interested in collaborating on Metasploit modules for detecting other UDP amplification vulnerabilities, submit a Metasploit PR. If you are interested in having Sonar perform additional relevant studies, have interests in related research, or have questions, we welcome your comments here as well as by reaching out to us at research@rapid7.com. Enjoy!

NCSAM: Coordinated Vulnerability Disclosure Advice for Researchers

This is a guest post from Art Manion, Technical Manager of the Vulnerability Analysis Team at the CERT Coordination Center (CERT/CC). CERT/CC is part of the Software Engineering Institute at Carnegie Mellon University.

October is National Cyber Security Awareness Month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face.

The CERT/CC has been doing coordinated vulnerability disclosure (CVD) since 1988. From the position of coordinator, we see the good, bad, and ugly from vendors, security researchers, and other stakeholders involved in CVD. In this post, I'm eventually going to give some advice to security researchers. But first, some background discussion about sociotechnical systems, the internet of things, and chilling effects.

While there are obvious technological aspects of the creation, discovery, and defense of vulnerabilities, I think of cybersecurity and CVD as sociotechnical systems. Measurable improvements will depend as much on effective social institutions ("stable, valued, recurring patterns of [human] behavior") as they will on technological advances. The basic CVD process itself -- discover, report, wait, publish -- is an institution concerned with socially optimal [PDF] protective behavior. This means humans making decisions, individually and in groups, with different information, incentives, beliefs, and norms.
Varying opinions about "optimal" explain why, despite three decades of debate and both offensive and defensive technological advances, vulnerability disclosure remains a controversial topic.

To add further complication, the rapid expansion of the internet of things has changed the dynamics of CVD and cybersecurity in general. Too many "things" have been designed with the same disregard for security associated with internet-connected systems of the 1980s, combined with the real potential to cause physical harm. The Mirai botnet, which set DDoS bandwidth records in 2016, used an order of magnitude fewer username and password guesses than the Morris worm did in 1988. Remote control attacks on cars and implantable medical devices have been demonstrated. The stakes involved in software security and CVD are no longer limited to theft of credit cards, account credentials, personal information, and trade or national secrets.

In pursuit of socially optimal CVD and with consideration for the new dynamics of IoT, I've become involved in two policy areas: defending beneficial security research and defining CVD process maturity. These two areas intersect when researchers choose CVD as part of their work, and that choice is not without risk to the researcher.

The security research community has valid and serious concerns about the chilling effects, real and perceived, of legal barriers and other disincentives [PDF] to performing research and disclosing results. On the other hand, there is a public policy desire to differentiate legitimate and beneficial security research from criminal activity.
The confluence of these two forces leads to the following conundrum: if rules for security researchers are codified -- even with broad agreement from researchers, whose opinions differ -- any research activity that falls out of bounds could be considered unethical or even criminal.

Codified rules could reduce the grey area created by "exceeds authorized access" and the steady supply of new vulnerabilities (often discovered by accessing things in interesting and unexpected ways). But with CVD, and most complex interactions, the exception is the rule, and CVD is full of grey. Honest mistakes, competing interests, language, time zone, and cultural barriers, disagreements, and other forms of miscommunication are commonplace. There is still too little information and too many moving parts to codify CVD rules in a fair and effective way.

Nonetheless, I see value in improving the quality of vulnerability reports and CVD as an institution, so here is some guidance for security researchers who choose the CVD option. Most of this advice is based on my experience at the CERT/CC for the last 15 years, which is bound to include some personal opinion, so caveat lector.

Be aware. In three decades of debate, a lot has been written about vulnerability disclosure. Read up, talk to your peers. If you're subject to U.S. law and read only one reference, it should be the EFF's Coders' Rights Project Vulnerability Reporting FAQ.

Be humble. Your vulnerability is not likely the most important out of 14,000 public disclosures this year. Please think carefully before registering a domain for your vulnerability. You might fully understand the system you're researching, or you might not, particularly if the system has significant, non-traditional-compute context, like implantable medical devices.

Be confident. If your vulnerability is legit, then you'll be able to demonstrate it, and others will be able to reproduce it.
You're also allowed to develop your reputation and brand, just not at a disproportionate cost to others.

Be responsible. CVD is full of externalities. Admit when you're wrong; make clarifications and corrections.

Be concise. A long, rambling report or advisory costs readers time and mental effort and is usually an indication of a weak or invalid report. If you've got a real vulnerability, demonstrate it clearly and concisely.

PoC||GTFO. This one goes hand in hand with being concise. Actual PoC may not be necessary, but provide clear evidence of the vulnerable behavior you're reporting and steps for others to reproduce it. Videos might help, but they don't cut it by themselves.

Be clear. Both with your intentions and your communications. You won't reach agreement with everyone, particularly about when to publish, but try to avoid surprises. Use ISO 8601 date/time formats. Use simple phrasing and avoid idiom.

Be professional. Professionals balance humility, confidence, candor, and caution. Professionals don't need to brag. Professionals get results. Let your work speak for itself; don't exaggerate.

Be empathetic. Vendors and the others you're dealing with have their own perspectives and constraints. Take some extra care with those who are new to CVD.

Minimize harm. Public disclosure is harmful. It increases risk to affected users (in the short term at least) and costs vendors and defenders time and money. The theory behind CVD is that in the long run, public disclosure is universally better than no disclosure. I'm not generally a fan of analogies in cybersecurity, but harm reduction in public health is one I find useful (informed by, but different than, this take). If your research reveals sensitive information, stop at the earliest point of proving the vulnerability, don't share the information, and don't keep it. Use dummy accounts when possible.

In a society increasingly dependent on complex systems, security research is important work, and the behavior of researchers matters.
At the CERT/CC, much of our CVD effort is focused on helping others build capability and improving institutions; thus, the advice above. We do offer a public CVD service, so if you've reached an impasse, we may be able to help.

Art Manion is the Technical Manager of the Vulnerability Analysis team in the CERT Coordination Center (CERT/CC), part of the Software Engineering Institute at Carnegie Mellon University. He has studied vulnerabilities and coordinated responsible disclosure efforts since joining CERT in 2001. After gaining mild notoriety for saying "Don't use Internet Explorer" in a conference presentation, Manion now focuses on policy, advocacy, and rational tinkering approaches to software security, including standards development in ISO/IEC JTC 1 SC 27 Security techniques. Follow Art at @zmanion and CERT at @CERT_Division.

NCSAM: The Danger of Criminalizing Curiosity

This is a guest post from Kurt Opsahl, Deputy Executive Director and General Counsel of the Electronic Frontier Foundation.

October is National Cyber Security Awareness Month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face.

It's been thirty years since Congress enacted the Computer Fraud and Abuse Act, the primary federal law criminalizing computer hacking. Since 1986, the CFAA has been amended many times, most often to make the law harsher and more expansive. While it was written to focus on what were then relatively rare networked computers, the ubiquity of the Internet means that today almost every machine more useful than a portable handheld calculator is a “protected computer” under the law.

As the anti-hacking law's scope expanded, so did the penalties. Today there is little room for hacking misdemeanors under the CFAA. Even where there is, there's virtually no crime that can't be upgraded to a felony by an aggressive U.S. Attorney through the simple expedient of tying the crime to a parallel state law. This ends up meaning that you might get sentenced to community service for vandalizing a printed billboard, but endure two years in prison for vandalizing a Web page.

There are many problems with the CFAA and its interpretation. The Department of Justice has used the anti-hacking law to criminalize violations of Terms of Service or iterating a URL. But one problem stands out: the over-criminalization of curiosity.
A few months before the CFAA originally passed, the e-zine Phrack published The Hacker Manifesto, in which The Mentor explained to the world his crime: curiosity. The Mentor, a hacker affiliated with the Legion of Doom, celebrated exploring and outsmarting, discovering a “door opened to a world” beyond the mendacity of the day-to-day.

Over the years of working on the EFF's Coders' Rights Project, we've had the honor of representing many smart and skilled security researchers who've made considerable contributions to our collective security. We won't be naming names, but at least a few of these luminaries got their start in computer security from curiosity, figuring out at a young age how the machines in their lives worked, and how to make those machines work in unexpected ways. An intellectual adventure, but one fraught with the risk that these unexpected ways could cross the line into “access without authorization,” and then felony charges.

Perhaps the clearest example of the danger of criminalizing curiosity comes from the prosecution of Aaron Swartz. In 2011, a federal grand jury in Massachusetts charged Swartz with multiple felonies for his alleged computer intrusion to download a large number of academic journal articles from the MIT network. Swartz was instrumental in the development of key parts of modern technologies, including RSS, Creative Commons, and Reddit, getting his start at an early age. He was just 14 when he joined the working group that authored RSS 1.0.

Yet for his curiosity, Swartz faced a maximum sentence of decades in prison. Realistically, he would have had to serve less time, but even so he would have been saddled with felony convictions forever. Instead, Swartz took his own life, and the world lost a bright star who could have continued to bring great things to the Internet.

Paul Allen and Bill Gates provide the counterexample.
In his autobiography, Allen told the story of when the two founders of Microsoft “got hold of” an administrator password at the Computer Center Corporation (which they called C-Cubed), a company that provided timeshare computers to the pair. With the cost of access mounting, eighth-graders Allen and Gates wanted to use the password to give themselves free access to the timeshare servers. However, C-Cubed discovered the hack. As Allen explained: “We hoped we'd get let off with a slap on the wrist, considering we hadn't done anything yet. But then the stern man said it could be ‘criminal' to manipulate a commercial account. Bill and I were almost quivering.” But instead of calling the police, Dick Wilkinson, one of C-Cubed's partners, kicked them off the system for six weeks, letting them off with a warning. Eventually, C-Cubed exchanged free computer time for bug reports, allowing the kids to play and learn, so long as they carefully documented what led the computer to crash. Computer security will always be difficult, and protecting our networks and devices requires the hard work of skilled security researchers to find, publish and fix vulnerabilities. The skills that allow security researchers to discover vulnerabilities and develop proof of concept exploits are often rooted in satisfying their curiosity, sometimes breaking some rules along the way. Society may still want to discourage intentionally destructive behavior, and hold people accountable for any harm they may have caused, but certainly not every hack needs to be a felony, backed by stiff sentences and the lifelong encumbrances that come with being a felon. Allowing for infractions and misdemeanors, coupled with the recognition that a little mischief may be an early sign of someone who will one day be vital to protecting us all, would let the law punish real harm without criminalizing curiosity. Kurt Opsahl is the Deputy Executive Director and General Counsel of the Electronic Frontier Foundation.
In addition to representing clients on civil liberties, free speech and privacy law, Opsahl counsels on EFF projects and initiatives. Opsahl is the lead attorney on the Coders' Rights Project, and is representing several companies who are challenging National Security Letters. Additionally, Opsahl co-authored "Electronic Media and Privacy Law Handbook." Follow Kurt at @kurtopsahl and the EFF at @eff.

Stop, collaborate and listen... (...and think, and connect)

Since its inception, our wonderful connected world has been a battleground for cybercriminals vs law enforcement and security professionals, who are locked into a twisted dance of punches and counterpunches as the arena in which they fight evolves around them. We continue to connect more…

Since its inception, our wonderful connected world has been a battleground for cybercriminals vs. law enforcement and security professionals, who are locked into a twisted dance of punches and counterpunches as the arena in which they fight evolves around them. We continue to connect more and more Things, providing new and elaborate opportunities for attackers to launch their weapons of mass disruption. Not everything is awesome, but you are part of a team! Somewhere down the line, if you're connected you're going to be (or have already been) affected – whether it's a device you own being pwned, or your account being compromised on a third party system. Cybercrime doesn't care which language(s) you speak, or where you pay your taxes; your data and information have value, either directly or indirectly (I can pretty much guarantee that someone reading this will have at some point re-used a web account password on their corporate network account). As cybercrime naturally transcends traditional borders, a consolidated global effort is required to combat this global foe. And yes, it needs reiterating – We Are All Responsible – you can't reap the benefits of the internet without playing a part in keeping it safe. You don't necessarily have to be an expert either – Team Global Security, which you are a part of (welcome to all of our new members!), has some very strong players in its ranks, and regardless of your level of expertise you do have an important part to play. Awareness, vigilance and frankly Just Not Being Bloody Stupid (yeah I'm looking at you, with the re-used password on your corporate account – go and change it right now, thanks) are all important ways in which you can help the cause. You have the security industry and profession on your side, and your government too. That's pretty solid backing I'd say.
If you've ever uttered the words “the government should be doing something about this” then you'll be pleased to know when it comes to Cyber Security there are multiple collaborative initiatives happening Right Now. “Wow, that IS awesome!” I hear you say. Yes. Very Awesome Indeed. So what's going on? As I type this blog, the U.S. is in the midst of the 13th annual National Cyber Security Awareness Month – a joint venture between the National Cyber Security Alliance (NCSA) and the U.S. Department of Homeland Security (DHS). Every week in October has a theme [PDF], covering everything from securing critical infrastructure to how to practice good security habits on your personal devices. If you're of a Twitter persuasion, take a look at the #ncsam or #cyberaware tweets to get information and advice from industry gurus, vendors and businesses. Or if you love our blog (and of course you do), check out the series we have going. And whilst this is billed as a U.S. party, Team Global Security can absolutely benefit from the event. Across the pond in the UK, the big news is the opening of the National Cyber Security Centre. Whilst many of the NCSC team will operate from GCHQ in Cheltenham, around half of the 700 staff will be based in some rather stunning London offices close to Buckingham Palace. Via four key objectives, the centre aims to be the beating heart of the Government's strategy for the UK to become “the safest place to live and work online”. These objectives cover a multitude of areas, ranging from the all-important knowledge sharing through to being front and centre on critical national cyber security issues:

- To understand the cyber security environment, share knowledge, and use that expertise to identify and address systemic vulnerabilities.
- To reduce risks to the UK by working with public and private sector organisations to improve their cyber security.
- To respond to cyber security incidents to reduce the harm they cause to the UK.
- To nurture and grow our national cyber security capability, and provide leadership on critical national cyber security issues.

The centre opening coincided with the launch of a new website, which is an excellent resource for both people and organisations in the UK, and for the wider global audience too. In Singapore, the government recently announced the formation of GovTech – a new agency established to “transform public service delivery with citizen-centric services and products.” Security naturally falls under the remit of the agency - GovTech will also play a critical role in overseeing the public sector's ICT infrastructure, putting in place policies for critical infrastructure and cybersecurity to enable the operation of a secure and resilient Smart Nation. No matter whether you're a citizen of the US, the UK, Singapore, or somewhere else entirely, there is plenty of information, advice and best practice sitting at your fingertips. Global issues need a global response, and these initiatives are vital efforts to help us all enjoy this wonderful connected world.

Rapid7 has your back

If you think your organisation would benefit from some cyber security awareness training, maybe it's time to book in a pen test, or you'd like some help with your overall security program – we're happy to help you. Do you need more foot soldiers to help you fight the good fight? Your army of cyber guardians is ready for enlistment [PDF]. Our team is your team – let us know how we can be of assistance.

NCSAM: You Should Use a Password Manager

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial…

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face. This year, NCSAM is also focused on taking steps towards online safety, including how to have more secure accounts. In 2016, just like in most of the last 15 years, we learned new information about recent and not so recent data breaches at large organizations, during which sensitive account information was made public. Essentially, these breaches have unearthed data on what puts accounts at higher risk for a breach. Putting aside the concerns about non-password account information being made public, one of the factors that determines how bad a data breach is for users is the format of leaked passwords. Are they plaintext? Plaintext passwords are just the actual password that a user would type. For example, the password "taco" is stored as "taco" and when made public, can be used by an attacker right away. Have they been hashed? Hashed passwords are mathematical one way transformations of the original password, meaning that it is easy to transform the password into a hash, but given a hash, it's very difficult to recover the original password. For example, the password "taco" is stored as "f869ce1c8414a264bb11e14a2c8850ed" when hashed with the MD5 hash algorithm, and the attacker must recover the original password from this hash in order to use it. Have they been salted and hashed? Hashed passwords are good, but there are several tools and methods that can be used to try to reveal the original password. 
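The three storage formats described here can be sketched with Python's standard hashlib. This is a minimal illustration only: MD5 appears because the post's examples use it, the salt-concatenation format shown is an assumption, and real systems should use a purpose-built password hash such as bcrypt, scrypt, or Argon2.

```python
import hashlib

password = "taco"

# Plaintext storage: usable by an attacker the moment it leaks.
stored_plaintext = password

# Hashed storage: a one-way transformation -- easy to compute, hard to
# invert, but still vulnerable to precomputed hash dictionaries.
stored_hashed = hashlib.md5(password.encode()).hexdigest()

# Salted + hashed storage: extra data is mixed in before hashing, so a
# generic precomputed dictionary no longer matches. (The salt value and
# the order of concatenation here are illustrative assumptions.)
salt = "salsa"
stored_salted = hashlib.md5((salt + password).encode()).hexdigest()

print(stored_plaintext)
print(stored_hashed)
print(stored_salted)
```

Looking up the unsalted digest in a public hash dictionary recovers "taco" immediately; the salted digest forces the attacker to rebuild a dictionary for that specific salt.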
There are even dictionaries that connect hashes back to their original passwords. Submitting "f869ce1c8414a264bb11e14a2c8850ed" to http://md5.gromweb.com/ reveals that the word "taco" was used to generate that hash. Adding a "salt" to a password means adding extra data to it before it gets hashed. For example, the password "taco" is combined with the word "salsa" before being hashed, and the resulting hash is stored as "6b8dc43f9be3051e994cafdabadc2398". Now, an attacker looking up the hash "6b8dc43f9be3051e994cafdabadc2398" in a dictionary won't find anything, and will be forced to create a new dictionary, which, ideally, is time-consuming. Have they been hashed with a well-studied, unbroken algorithm? The MD5 algorithm has known attacks against it, so it is a good idea to use another algorithm. Have they been hashed multiple times? Or with a computationally expensive algorithm? Or with a memory-expensive algorithm? These and other questions get into the nitty gritty of how passwords can be stored securely so that they are of little use to an attacker once they are made public. Luckily, there are plenty of resources for security engineers to follow in order to make their sites more secure, and in particular, their storage of passwords more secure even if they are disclosed. Dropbox has an interesting post about how they store passwords, and this talk by Alec Muffet from Facebook, which describes their methods for storing passwords, is really interesting. In fact, there is an entire conference dedicated to passwords and the engineering that goes into keeping them secure. This site tracks published details about password storage policies of various sites, and this presentation provides the motivation for doing so. That's great, but I'm not a security engineer, what do I need to know about passwords? There is an unending list of articles, blog posts, how-to guides and comics written about passwords. Passwords are going away. Passwords will eventually go away.
Passwords are here to stay. Passwords are insecure. Two factor authentication will save us all. Biometrics will save us all. Whatever your opinion, you probably have multiple accounts with multiple websites, and ideally you're using multiple passwords. It's a good idea to recognize that whether or not the sites you use are doing a good job of protecting your passwords, you too can take steps to make your password use more secure. If you take nothing else away from this post, remember to set up a password manager (there are many), actually use it to create different passwords for each account you have, routinely look into whether your account information has been leaked recently, and if it has, change the password associated with that account. What's the big deal? If you have an account with an online service, like an email provider, a social network, or an ecommerce site, then it is very likely that you have a password associated with that account. It's also likely that you have more than a few accounts, and having so many accounts you have most likely been tempted to use the same or similar usernames and passwords across accounts. While there are clear benefits (among some privacy / tracking drawbacks) to having a consistent identity across services (ironicjen182@gmail.com, ironicjen182@facebook.com, ironicjen182@totallylegitonlinebusiness.biz), there are clear drawbacks to using the same password across services, mainly that if one of these services is attacked and account information is leaked, your accounts with identical or similar usernames at the other services could be vulnerable to misuse by an attacker. Ok, but who cares? It's just my (hotmail | twitter | ebay | farmersonly) account. You should care: these accounts paint a very detailed picture of who you are and what you do. For instance, your email has a record of emails you have sent and of those sent to you, and from that an attacker can learn a surprising amount about you.
With email providers that offer effectively unlimited email storage and provide little incentive for users to erase emails, it's nearly impossible for a user to be sure that nothing useful to an attacker is buried somewhere inside. Furthermore, your email (and social media accounts) are effectively an extension of you. When an attacker has control of your account, emails, tweets, and snaps sent from your account are accepted as coming from you, and attackers can take advantage of those assumptions and the trust that you've built up with your contacts. For example, consider the Stranded Traveler Scam, in which an attacker sends a message to one or more of your contacts claiming to be in a bad situation somewhere far away, and if they could just wire some money, things would surely work out. There are news reports about these types of scams all the time (2011, 2011, 2012, 2013, 2014, 2015, 2016). Because the email has come from your account and bears your name, your relatives, friends and coworkers are more likely to believe it is actually you writing the message than a scammer. Similar attacks involve sending malware in attachments and requesting wire transfers for family members or executives, or requesting W-2 forms for employees. None of these attacks require the takeover of your account, but they are certainly strengthened by it. Really, how often does this happen? Can't I just deal with it when I hear about it on the news? You could do that, and it would be better than not doing anything at all, but breaches that leak account information happen surprisingly frequently, and they don't always make the news that you read. Sometimes, we don't learn about them for weeks or years after they happen, meaning that passwords you used a long time ago may have been known to attackers for a while before you were made aware of a breach. Is my password really out there? Sometimes. Maybe. It's hard to say. Often, sites will hash passwords before they are stored.
However, different sites use different hash methods, which have different security characteristics, and some methods previously believed to be secure are no longer considered so. Shouldn't these sites be more secure? That would be nice, but data security is a difficult and quickly changing field, and not every site prioritizes security as highly as you might like. Fine, what should I do? You should do a few things:

- Use a password manager. There are many password managers available to you, like LastPass, 1Password, KeepassX, or if you're into the command line, try out pass.
- Use a different password for every account you have. Now that you have a password manager storing all your passwords, there's no need to reuse passwords.
- Use complex passwords. Most password managers can create long random strings of letters, numbers and symbols for you. Since the password manager stores these passwords and you don't have to remember them, there's no need to use simple or short passwords.
- Keep an eye on sites that catalog leaked account information. Have a look from time to time at sites that keep track of leaked accounts to see if your account has been leaked. haveibeenpwned.com is usually kept up to date and is easy to use.
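The "complex passwords" advice is easy to approximate with Python's standard secrets module, which is roughly what a password manager's generator does under the hood. This is a sketch, not a substitute for a real password manager:

```python
import secrets
import string

def generate_password(length=20):
    """Return a random password drawn from letters, digits, and symbols.

    secrets uses a cryptographically secure random source, unlike the
    random module, which is unsuitable for passwords.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # 20 random characters; different every run
```

Because the manager remembers the result for you, there is no reason to make it short or memorable.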

NCSAM: Independent Research and IoT

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial…

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face.

2016 can be characterized as the year that IoT security research took off, both here at Rapid7 and across the security researcher community. While IoT security research was certainly being conducted in years past, it was contentious within our community as "too easy" or simply "stunt hacking." But as the body of research in this space grows along with the prevalence of IoT, it's become obvious that this sort of "easy" hacking is critical to not only the security of these devices in and of themselves, but to the safety of the people near these devices. After all, the hallmarks of IoT are the ability to interact with the real, physical space around the devices, and the ability to communicate with other devices. Security failures, therefore, can both directly affect the humans who are relying on them and open attack vectors against nearby devices and the networks they are connected to. With that in mind, I'd like to take a moment to consider the more noteworthy events in IoT security, and how, together, they make the case that this year is when we stopped considering IoT security "junk hacking," and started taking these things seriously.

IoT Security in 2016

In January, Brian Knopf announced that the "I Am the Cavalry" security research advocacy group will be publishing an open, collaboration-driven cybersecurity rating system for IoT devices, in contrast to Underwriters Laboratories' proprietary standards.
This isn't the first announced strategy to comprehensively rate the security of IoT devices, but it's certainly the most open. Transparency is crucial in security research and testing, since it allows for independent verification, reproducibility, and accountability.

In March, researcher Wish Wu of Trend Micro reported a pair of vulnerabilities in a Qualcomm component of the Android operating system which, if exploited, can allow a malicious application to grab root-level permissions. Wu presented these findings and others in May at the Hack in the Box security conference. While these bugs have been patched by Google, these findings remain significant for two reasons. One, Android is rapidly becoming the de facto standard operating system for the Internet of Things -- not just smartphones -- so bugs in Android can have downstream effects in everything from television set-top boxes, to children's toys, to medical devices. Second, and more worrying, many of these devices do not have the capability to get timely security updates or to upgrade their operating systems. Given the lack of moving parts in these machines, they can chug along for years without patches.

In May, researcher Earlence Fernandes and others at the University of Michigan demonstrated several vulnerabilities in Samsung's SmartThings platform for home automation, which centered around abusing and subverting applications to gain privileges, including a remote attack to reset a door lock PIN. Design issues like over-privileged applications are certainly not limited to Samsung's software engineering, but are commonplace in IoT infrastructure. Oftentimes, the services and applications that ship on IoT devices aren't designed with privilege constraints in mind.
If the service does what it's supposed to do with administrative / root privileges, there isn't a lot of incentive to design it to work in a less permissive model, especially when designers aren't considering the motives or opportunity of a malicious user. In other words, while most traditional computers have pretty solid user privilege models, IoT devices often ignore this fundamental security concept.

In September, the Michigan Senate Judiciary Committee passed a bill that forbids some forms of "car hacking," but importantly, includes protections for research activities. These exemptions weren't included in the original text of the bill, but thanks to the efforts of Rapid7's Harley Geiger, we've seen a significant change in the way that legislators in Michigan view automotive cyber security and the value of security research there. While this bill is not yet law, the significance of this shift in thinking can't be overstated.

Also in September, some of the early fears of widespread IoT-based insecurity manifested in the highly public IoT-powered DDoS attack against journalist Brian Krebs. This attack was made possible, in part, by the massive population of unpatched, unmanaged, and unsecured home routers, a topic explored way back in January by yours truly. Of course, I'm not saying I called it, but... I kinda called it. 🙂

Some closing pictures

Who doesn't love a good Google Trends graph? We can see from the below that interest in the "IoT Security" search term has doubled since the beginning of 2016, and I'd be surprised to see it hit any significant decline in the years to come. While much of this interest is pretty doomy and/or gloomy, it's healthy to be considering IoT security today, and I'm glad that IoT appears to be getting the respect and serious attention it deserves in the security research community. It's only through the efforts of individual, focused security researchers that we'll be able to get a handle on the issues that bedevil the IoT-scape.
Otherwise, we're looking at a future as envisioned by Joy of Tech.

National Cybersecurity Awareness Month 2016 - This one's for the researchers

October was my favorite month even before I learned it is also National Cybersecurity Awareness Month (NCSAM) in the US and EU. So much the better – it is more difficult to be aware of cybersecurity in the dead of winter or the blaze of…

October was my favorite month even before I learned it is also National Cybersecurity Awareness Month (NCSAM) in the US and EU. So much the better – it is more difficult to be aware of cybersecurity in the dead of winter or the blaze of summer. But the seasonal competition with Pumpkin Spice Awareness is fierce. To do our part each National Cybersecurity Awareness Month, Rapid7 publishes content that aims to inform readers about a particular theme, such as the role of executive leadership, and primers to protect users against common threats. This year, Rapid7 will use NCSAM to celebrate security research – launching blog posts and video content showcasing research and raising issues important to researchers. Rapid7 strongly supports independent research to identify and assess security vulnerabilities with the goal of correcting flaws. Such research strengthens cybersecurity and helps protect consumers by calling attention to flaws that software vendors may have ignored or missed. There are just too many vulnerabilities in complex code to expect vendors' internal security teams to catch everything. Independent researchers are antibodies in our immune system.

This NCSAM is an extra special one for security researchers for a couple reasons. First, new legal protections for security research kick in under the DMCA later this month. Second, October 2016 is the 30th anniversary of a problematic law that chills beneficial security research – the CFAA.

DMCA exception – copyright gets out of the way (for a while)

This October 29th, a new legal protection for researchers will activate: an exemption from liability under Section 1201 of the Digital Millennium Copyright Act (DMCA). The result of a long regulatory battle, this helpful exemption will only last two years, after which we can apply for renewal. Sec. 1201 of the DMCA prohibits circumventing a technological protection measure (TPM) to copyrighted works (including software).
[17 USC 1201(a)(1)(A)] The TPMs can be anything that controls access to the software, such as weak encryption. Violators can incur civil and criminal penalties. Sec. 1201 can hinder security research by forbidding researchers from unlocking licensed software to probe for vulnerabilities. This problem prompted security researchers – including Rapid7 – to push the Copyright Office to create a shield for research from liability under Sec. 1201. The Copyright Office ultimately did so last October, issuing a rule that limits liability for circumventing TPMs on lawfully acquired (not stolen) consumer devices, medical devices, or land vehicles solely for the purpose of good faith security testing. The Copyright Office delayed activation of the exception for a year, so it takes effect this month. Rapid7 analyzed the exception in more detail here, and is pushing the Copyright Office for greater researcher protections beyond the exception.

The exception is a positive step for researchers, and another signal that policymakers are becoming more aware of the value that independent research can drive for cybersecurity and consumers. However, there are other laws – without clear exceptions – that create legal problems for good faith researchers.

Happy 30th, CFAA – time to grow up

The Computer Fraud and Abuse Act (CFAA) was enacted on October 16th, 1986 – 30 years ago. The internet was in its infancy in 1986, and platforms like social networking or the Internet of Things simply did not exist. Today, the CFAA is out of step with how technology is used. The CFAA's wide-ranging crimes can sweep in both ordinary internet activity and beneficial research. For example, as written, the CFAA threatens criminal and civil penalties for violations of a website's terms of service, a licensing agreement, or a workplace computer use agreement.
[18 USC 1030(a)(2)(C)] People violate these agreements all the time – if they lie about their name on a social network, or they run unapproved programs on their work computer, or they violate terms of service while conducting security research to test whether a service has accidentally made sensitive information available on the open internet.

Another example: the CFAA threatens criminal and civil penalties for causing any impairment to a computer or information. [18 USC 1030(a)(5)] No harm is required. Any unauthorized change to data, no matter how innocuous, is prohibited. Even slowing down a website by scanning it with commercially available vulnerability scanning tools can violate this law. Since 1986, virtually no legislation has been introduced to meaningfully address the CFAA's overbreadth – with Aaron's Law, sponsored by Rep. Lofgren and Sen. Wyden, being the only notable exception. Even courts are sharply split on how broad the CFAA should be, creating uncertainty for prosecutors, businesses, researchers, and the public. So for the CFAA's 30th anniversary, Rapid7 renews our call for sensible reform. Although we recognize the real need for computer crime laws to deter and prosecute malicious acts, Rapid7 believes balancing greater flexibility for researchers and innovators with law enforcement needs is increasingly important. As the world becomes more digital, computer users, innovators, and researchers will need greater freedom to use computers in creative or unexpected ways.

More coming for NCSAM

Rapid7 hopes National Cybersecurity Awareness Month will be used to enhance understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges researchers face. To celebrate research over the coming weeks, Rapid7 will – among other things – make new vulnerability disclosures, publish blog posts showcasing some of the best research of the year, and release videos that detail policy problems affecting research.
Stay tuned, and a cheery NCSAM to you.

Cyber Security Awareness Month: Coordinated Disclosure & Working with the Security Community

October is promoted as cyber security awareness month in the US and across the European Union. We're all for increasing awareness of security issues and threats, so we're in, but we know our average SecurityStreet reader likely works in information security and is already “…

October is promoted as cyber security awareness month in the US and across the European Union. We're all for increasing awareness of security issues and threats, so we're in, but we know our average SecurityStreet reader likely works in information security and is already “aware.”

Last year we did this through a series of primers designed to help you educate your users on the risks they face daily, and how to protect themselves. If that sounds of interest, it's not too late to make use of them to educate your users about phishing, mobile threats, basic password hygiene, avoiding cloud crises, and the value of vigilance.

This year we're focusing on the executive team. Given the number of high profile breaches in the past year alone, the C-suite and Board are starting to pay attention to cyber security and the potential business risk in terms of liability, loss of reputation, and revenue impact. Over the next few weeks, we'll be covering why security matters now; data custodianship; building security into the corporate culture through policies and user education; how organizations can make security into a strength and advantage; and crisis communications and response. This week, let's talk about coordinated disclosure and working with the security community.

There are no guarantees in the safety, security, and resilience of technology. Even with airtight security teams, policies, processes, and tools operating at peak efficiency, there will always be newly discovered software vulnerabilities. Your organization simply can't test for every scenario and predict everything that might happen; there will never be enough money or time. Security inevitably becomes a balancing act. If you're too cavalier about it, you're inviting disaster. On the flip-side, you can invest as much as possible in as many security audits as you can afford, but sooner or later you still have to ship product to stay in business.
It's usually only a matter of time until your company receives word from a researcher somewhere that they've found a security flaw in your product, service, or website. This is an opportunity for you and your business to demonstrate your leadership and commitment to customer experience and care. It all comes down to how you respond to the research. It's not unreasonable to have imperfections in your code, and oddly enough, it's often easier for third parties to spot these issues. This could be for any number of reasons: they're less immersed in the detail; they have different background experience or unique knowledge and skill sets; they identify a user scenario you did not foresee. Some flaws are found accidentally, as was the case with five-year-old Kristoffer Von Hassel, who wanted to play his father's Xbox games. However a researcher makes their discovery, it will likely lead to an awkward conversation, and you may feel defensive having poured resources and man hours into development. What happens next—specifically how your organization responds to this kind of feedback—will mark you as either a great company that responds to its imperfections in a healthy, productive way... or one that would rather put its customers and intellectual property at risk by denying the problem exists, or worse, by trying to take legal action against the researcher.

To distinguish your company as one that truly cares about its customers, leverage the research community to make your products the best they can be. Researchers are effectively providing you with advanced product testing, which not only helps you ensure you are not putting your customers at risk, it can also help you drive innovation and demonstrate agility and leadership to your customers, prospects, and the industry as a whole. Building security into your offerings provides a competitive advantage and helps you increase customer loyalty and satisfaction.
In other words, you can flip the bad news of a vulnerability disclosure on its head and use it as an opportunity to get customer attention and show that you take their security seriously. The opposite can be true too, of course: if you ignore or even try to take action against a researcher who finds a vulnerability in your product, you might be seen as more keen to cover up an issue than to actually fix the problem.

Below are some suggestions on how to work effectively with researchers. Being open to the security community and their feedback is a great first step; we recommend you build on this with internal processes to streamline your response and increase the efficacy of communications. This is a coordinated disclosure program. Some questions to get you started:

Do you have a written policy on your website detailing your organization's position on working with researchers, as well as a description of how your company receives and processes security feedback?

Do you have an email address where researchers can send you their vulnerability findings? We recommend having a PGP key or some other kind of encryption in place so people can submit this sensitive information to you privately and securely.

Once a researcher sends you vulnerability information, do you have an internal system in place that ensures the right people get this information, and are those people empowered to take action?

Have you considered a bug bounty program?

You don't have to reinvent the wheel here if you don't have the internal resources to get all of the above steps up and running. There are a number of websites dedicated to helping your organization work with the security community, such as HackerOne and BugCrowd, where researchers can submit their findings safely. As we mentioned in last week's blog post, the key thing to remember is that you are a custodian of your customers' data, not an owner.
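One lightweight way to publish the policy and contact details raised in the questions above is a security.txt file served from your site's /.well-known/ directory, a convention that emerged after this post and was later standardized as RFC 9116. All the URLs and addresses below are placeholders, but they show the shape of the file:

```
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/disclosure-policy
Preferred-Languages: en
Expires: 2026-01-01T00:00:00Z
```

Researchers increasingly check this location first, so even a minimal file like this lowers the barrier to reporting a vulnerability to you instead of publishing it.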
There are various levels of engagement an organization can have with the security community, and you will need to find the right level to suit your resources and needs. Even at a bare minimum, positive interactions with the security community and vulnerability disclosures demonstrate that you care about your customers, and that reflects well on your organization as a whole.

-- @mvarmazis

National Cyber Security Awareness Month: The Value of Vigilance


Today is the last day of October 2013, and so, sadly, this is our last NCSAM primer blog. We're hitting on a number of potential threats in this one to help drive the core point home: users need to be vigilant, not just with regard to their physical security, but also the security of their information and the systems used to access and store it.

For those that are new to this series, a quick recap: every week this month we have created a short primer piece that could be copied and pasted into an email to send around your organization. The goal is to promote better security awareness and thoughtfulness amongst your users by educating them on the risks they face and how to protect themselves. So far we've covered phishing, mobile threats, basic password hygiene, and avoiding cloud crises.

In all of these posts, the underlying message is the need for vigilance, so I thought we'd really hammer the point home with this final post. I considered creating a primer that would just say BE VIGILANT in enormous, flashing letters, but I figured you probably already have one of those, right?

So, here's your 5th and final primer. Thank you to those who have followed the entire series. We hope they have been valuable and useful.

Be Vigilant!

Since you were a child you have likely been taught about obvious risks to your person, and encouraged to adopt certain types of vigilant behavior to protect yourself, to the point that it's become second nature. You look both ways before crossing the road. You don't stick your hand into boiling water. You wear a seatbelt when driving. And of course we tell children not to talk to strangers. We are encouraged at every point to take our physical safety seriously and to protect ourselves. Yet we do not widely exercise the same degree of vigilance when it comes to our online safety, despite having reached a point where we all practically exist online.
Your bank is online, your friends are online, and all of your data is online, from photos and videos to birth records and mortgage agreements. Your level of exposure is immense, both on a personal and professional level, and while you may never see your attacker or feel their knife at your back, they certainly have the potential to cause you serious injury. We need to adopt the same kind of vigilance to protect ourselves when we're engaging with technology as we would walking down a dark street late at night. Be aware, consider the risks, and limit your vulnerability.

How, specifically? The main ways to do this were covered in the previous emails we sent, concerning phishing, mobile threats, passwords, and cloud applications. Here are a few other things to consider:

Don't visit shady websites! Sometimes it is tempting to visit a site that promises to show you how to see who is looking at your Facebook profile, or how to make money while you sleep, or how to find love right now, tonight, but when faced with such an opportunity, resist it. Sites like this probably won't deliver, and are likely to lead to other shady parts of the internet, or include malicious software.

Don't give information out to strangers. This should be a familiar one: beware "stranger danger." Don't simply trust that people are who they say they are, and don't give information out without first verifying that it's OK to do so. For example, if someone calls in to the company and asks you for information, it could seem pretty innocent, but you could be arming them with what they need to launch a phishing attack that may provide an entry point to compromising our systems.

Don't connect to untrusted networks. If you're working in a public place, be suspicious of the wifi. Connect to the VPN as soon as possible. Ensure your internet at home is password protected and change the password regularly.

Don't accept flash drives or USB keys from people you don't know. They could carry some kind of malicious software that could infect your computer.

Turn off Bluetooth when you're not using it. An attacker can use it to connect to your device and access your information without your knowledge.

Lock your computer when you step away. This goes for any environment, but especially when you are in public. Leaving your laptop on a table while you are logged in is analogous to leaving your car keys in the door.

Above all, remember that the IT and security team is here if you have any questions or concerns. And BE VIGILANT!

National Cyber Security Awareness Month: Avoiding Cloud Crisis


As you'll know if you've been following our National Cyber Security Awareness Month blog series, we're focusing on user awareness. We believe that these days every user in your environment represents a point on your perimeter; any may be targeted by attackers and any could create a security issue in a variety of ways, from losing their phone to clicking on a malicious link.

Each week through October we've provided a simple email primer on a topic affecting users' security. We hope these emails can be easily copied and pasted to send around your organization to help educate users on the risk. We've already covered some of the big obvious topics: phishing, passwords, and mobile risks. That brings us to cloud applications.

According to the Ponemon Institute, 35% of security leaders say SaaS applications are not evaluated for security prior to deployment. And those are the security leaders that know it's happening; chances are there are more that don't. A few years ago, Rapid7 ran its own internal audit to see where we stood with cloud applications; we found more deployed than expected, by a factor of about 10x. It's easy for individual departments to sign up for an app and start using it without needing IT support, and that's exactly what had been happening, potentially exposing us to unknown and unmanaged risk. We now have policies to ensure the security team is included in any vendor selection, and that all vendors meet our security requirements. If you don't have policies in place, we strongly recommend you do your own internal audit and determine how you will manage the risk.

In the meantime, here's the email for your users…

What is the Cloud?

"Cloud" basically means a technological solution you're subscribing to online. That covers an incredibly diverse range of things. For example: online data storage like Dropbox, marketing automation and tracking like Marketo, and customer relationship management like Salesforce.com.
Cloud applications are designed to be very quick to deploy and easy to manage, and as a result, the chances are that your department is already using some kind of cloud service. The challenge here is that you don't know how good the security of the solution you're buying may be. That can be a big problem if any corporate information is being handled by the service. For example, if you use an online data storage service like Dropbox, SugarSync or Google Drive, and that service gets compromised by an attacker, that attacker could get access to any information you stored on the site. Likewise, if you use an online human resources tool such as TribeHR, BambooHR or iEmployee, and it gets compromised, your employees' personally identifiable information (PII) could be at risk.

Not only is this a problem for those directly affected, but the company as a whole is impacted. It is a legal requirement that PII for both employees and customers be protected, so any incident exposing it could result in fines or other penalties. And there are also reputational implications and the loss of trust. Other types of corporate data, such as intellectual property, are also valuable and need to be protected to defend the way we do business.

How can you protect yourself?

No one expects you to be an expert on security, but we do request that you be vigilant, familiarize yourself with company policies, and if in doubt, reach out to the IT or security team. In the case of cloud applications, bear in mind that although they may seem very polished and professional, you have no way of knowing what level of risk they are actually exposing you and our company to. Here are some basic ways of minimizing that risk:

Work with IT/Security

When you start to think about using a new service, bring the IT and security team into the process. We can work with you to identify potential options based on your needs and budget, and then we can vet the candidates for you.
We know the questions to ask and what to look for to ensure you get all the benefits without a lot of extra risk.

Don't store information online without permission

When you use a cloud solution you may find that you start putting data in there as a matter of course. This is how you get value out of the solution, but have you considered what kind of data you're storing there? Or how the vendor is storing and protecting that data? We have a responsibility to keep that data safe, but a third-party vendor may not feel they share that responsibility. Check with IT and we will tell you whether it's safe to store information online.

Don't use personal cloud storage for work

It's very tempting: you use an online storage service for your media and documents at home, you already have an account set up, and you need to be able to access company information so you can work wherever you are. Using your personal account seems like an obvious solution, but it isn't. Ask IT for a solution and we will suggest some company-approved approaches and get you up and running.

Don't share permissions for company files

It's a standard practice to restrict who can access certain types of information in the company based on role. This helps keep the information safe. In the same way, you should check with a manager or IT before sharing access to files that are stored in the cloud.

Don't share passwords and other access credentials

It's very common for teams to share credentials for cloud services. This is an inherently insecure behavior and can encourage other equally insecure behavior such as emailing credentials, writing them down, or using very weak, easy-to-guess passwords. All of these activities increase the risk associated with using cloud services and should be avoided.
Please familiarize yourself with our email on basic password hygiene if you have not already done so. If you are considering a cloud purchase, or are already using some cloud services we may not know about, please do contact the IT team. And stay vigilant, team!

National Cyber Security Awareness Month: Basic Password Hygiene


Throughout October, we're creating basic emails you can send to the users in your company to help educate them on information security issues that could affect them in the workplace. Each email provides some information on the issue itself, and some easy steps on how to protect themselves. Check out the first two posts, providing primers on phishing and mobile security.

This week, we're tackling passwords. Many view passwords as a sub-optimal means of securing information in a day and age where we do everything from banking to dating online, and need to protect hundreds of different applications and services using strings of symbols that we need to remember. It isn't just our personal lives that are highly connected; many people in your organization likely have more than five passwords just to protect services accessed for work. I guarantee that some of them are reusing passwords across different applications, writing them down, and even sharing them with each other.

Unfortunately, passwords are a part of our reality, and quite possibly the best option when it comes to providing a manageable, enforceable control for all users. As we've recently seen with Apple Touch ID, biometrics aren't necessarily a better option. Tokens get lost or stolen and aren't practical for the majority of services. And at least passwords offer some basic legal protections.

So we're stuck with passwords for the time being, and your users need to understand why they're important and how to use them to best effect. Here's your email primer…

Why are passwords important?

Having a password is the most basic level of protection you can have for the information you are storing in services or applications, be it your personal Facebook account, your online banking site, or your company's customer tracking system. The problem is that everything is online now, and everything needs a password. So it's tempting to make your password simple and easy to remember.
Perhaps you have a go-to password that you've used for everything since college. Or maybe you write your password down so you don't forget it. If you do any of those things, you're probably in the majority, not the minority. Creating long, complex passwords that are unique for every service you use is a challenge, and remembering them all is near impossible. The problem is that simple, easy-to-remember passwords are also easy to "crack." That's probably why a major study found that 76% of network intrusions (aka breaches) in 2012 involved weak or stolen passwords.

Once attackers have your password, they have access to your account and any information stored in it. From there, they may be able to do all sorts of things, and what was intended as a form of protection could become a threat in itself. For example, if you use the same password across multiple sites, once an attacker has compromised your information on an unimportant one, they can turn around and use it on a site you do care about. Or say you use different passwords, but the same security questions. They could find the information for your security questions and then set up a fake "change password" request using your information and actually lock you out of an important account.

Bottom line: passwords are an important security measure for every aspect of your life, including work.

How can you protect yourself?

There are a number of things you can do to reduce your risk and increase the protection offered by passwords.

Make passwords long and complex. Try to make your password more than 12 characters long and use at least one lower case character, one upper case character, one number, and one special character. Shamefully, not all sites have enabled this yet, so it may not always be possible, but do it where you can. Try stringing unconnected words together and mixing up the letters, numbers, and special characters to make them extra hard to guess.

Don't reuse passwords.
It is very difficult to remember unique passwords across everything. You can tackle this by using a password manager such as KeePass or LastPass, which securely stores your passwords; all you need to remember is the master password for the manager itself. If you do reuse passwords across sites, be vigilant for any suspicious activity and, at the first sign of trouble, change the password on any other sites where it was used.

Regularly change your password. Passwords should be changed every 8-12 weeks. Yes, it's a hassle, but if an attacker has gained access without you knowing, it stops them from being able to keep coming back over and over again.

Use two-factor authentication. Where possible, favor services that offer two-factor authentication and enable it. The way this typically works is that it combines something you know (your password) with something you have (e.g. a generated code sent to your phone) to provide a double layer of protection.

Never use a default password. Many devices and applications come with default passwords set up. You need to change these as soon as possible during your setup process. Using a default password is the same as using no password at all.
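To make the "long and complex" and "string unconnected words together" advice concrete, here is a small Python sketch using the standard library's secrets module, which is designed for security-sensitive randomness. The word list here is a tiny stand-in for illustration; a real passphrase generator would draw from a vetted list of several thousand words, such as the EFF's diceware lists:

```python
import secrets
import string

# Stand-in word list for illustration only. A real generator would use a
# vetted list of thousands of words; the list's size is what provides the
# passphrase's strength.
WORDS = ["correct", "horse", "battery", "staple", "orange", "tundra",
         "velvet", "marble", "cactus", "python", "anchor", "quartz"]

def make_passphrase(num_words=4):
    """String unconnected words together, joined with hyphens."""
    return "-".join(secrets.choice(WORDS) for _ in range(num_words))

def make_random_password(length=16):
    """Draw a password from lower/upper case letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_passphrase())
print(make_random_password())
```

The key design point is using secrets.choice rather than the random module: the latter is predictable and unsuitable for generating credentials. In practice, of course, the simplest route for most users is to let a password manager generate and store these for them.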
