Rapid7 Blog

Announcing Microsoft Azure Asset Discovery in InsightVM

Almost every security or IT practitioner is familiar with the ascent and continued dominance of Amazon Web Services (AWS). But you only need to peel back a layer or two to find Microsoft Azure growing its own market share and establishing its position as the most-used, most-likely-to-renew public cloud provider. Azure is a force to be reckoned with. Many organizations benefit from this friendly competition and not only adopt Azure but increasingly use both Azure and AWS. In this context, security teams are often caught on the swinging end of the rope: a small shake at the top of the rope triggers big swings at the bottom. A credit card is all that is needed to spin up new VMs, but as security teams know, the effort to secure the resulting infrastructure is not trivial.

Built for modern infrastructure

One way you can keep pace is by using a Rapid7 Scan Engine from the Azure Marketplace: a pre-configured Rapid7 Scan Engine within your Azure infrastructure gives you visibility into your VMs from within Azure itself. Another way is to use the Rapid7 Insight Agent on your VM images within Azure. With Agents, you get visibility into your VMs as they spin up.

This sounds great in a blog post, but since assets in Microsoft Azure are virtual, they come and go without much fanfare. Remember the bottom-of-the-rope metaphor? You're there now. Security needs visibility to identify vulnerabilities in infrastructure to get on the path to remediation, but this is complicated by a few questions:

- Do you know when a VM is spun up? How can you assess risk if the VM appears outside your scan window?
- Do you know when a VM is decommissioned? Are you reporting on VMs that no longer exist?
- Do you know what a VM is used for? Is your reporting simply a collection of VMs, or do those VMs mean something to your stakeholders?

You might struggle to answer these questions if you employ tools that weren't designed with the behavior of modern infrastructure in mind.
Automatically discover and manage assets in Azure

InsightVM and Nexpose, our vulnerability management solutions, offer a new discovery connection to communicate directly with Microsoft Azure. If you know our existing discovery connection to AWS, you'll find this familiar, but we've added new powers to fit the behavior of modern infrastructure:

- Automated discovery: Detect when assets in Azure are spun up and trigger visibility when you need it using Adaptive Security.
- Automated cleanup: When VMs are destroyed in Azure, automatically remove them from InsightVM/Nexpose. Keep your inventory clean and your license consumption cleaner.
- Automated tag synchronization: Synchronize Azure tags with InsightVM/Nexpose to give meaning to the assets discovered in Azure. Eliminate manual efforts to keep asset tags consistent.

Getting started

First, you'll need to configure Azure to allow InsightVM/Nexpose to communicate with it directly. Follow the step-by-step guide in the Azure Resource Manager docs. Specifically, you will need the following pieces of information to set up your connection:

- Application ID and Application Secret Key
- Tenant ID

Once you have this information, navigate to Administration > Connections > Create. Select Microsoft Azure from the dropdown menu. Enter a Connection name, your Tenant ID, Application ID, and Application Secret Key (a.k.a. Authentication Key).

Next, select a Site to contain the assets discovered from Azure. You can control which assets you want to import with Azure tags. Azure uses a Key:Value format for tags. If you want to enter multiple tags, use a comma as a delimiter, e.g., Class:Database,Type:Production. Check Import tags to import all tags from Azure. If you don't care to import all tags in Azure, you can specify exactly which ones to import. The tags on the VM in Azure will be imported and associated automatically with assets as they are discovered.
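The tag filter idea above is simple enough to sketch. Below is a hedged Python illustration, not InsightVM code: a hypothetical helper that turns a comma-delimited list of Key:Value pairs into a lookup, plus one plausible matching rule for deciding whether a VM should be imported. Function names and the matching semantics are assumptions for illustration.

```python
# Sketch (not Rapid7 code): parse a comma-delimited list of Azure-style
# Key:Value tag filters and decide whether a VM's tags match them.

def parse_tag_filters(spec: str) -> dict:
    """Turn 'Class:Database,Type:Production' into {'Class': 'Database', 'Type': 'Production'}."""
    filters = {}
    for pair in spec.split(","):
        key, _, value = pair.strip().partition(":")
        if key:
            filters[key] = value
    return filters

def vm_matches(vm_tags: dict, filters: dict) -> bool:
    """One plausible rule: import a VM only if it carries every requested tag/value."""
    return all(vm_tags.get(k) == v for k, v in filters.items())

filters = parse_tag_filters("Class:Database,Type:Production")
print(vm_matches({"Class": "Database", "Type": "Production", "Owner": "dba"}, filters))  # True
print(vm_matches({"Class": "Web"}, filters))  # False
```

Whether the product treats multiple filters as AND or OR is a product detail; the sketch simply shows the Key:Value shape of the data being synchronized.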
When there are changes to tag assignment in Azure, InsightVM/Nexpose will automatically synchronize tag assignments. Finally, as part of the synchronization, when VMs are destroyed within Azure, the corresponding asset in InsightVM/Nexpose will be deleted automatically, ensuring your view remains as fresh and current as your modern infrastructure.

Great success! Now what...?

If you've made it this far, you have your Azure assets synchronized with InsightVM/Nexpose, and you might even have a handful of tags imported. Here are a few ideas to consider when looking to augment your kit:

- Create an Azure Liveboard: Use Azure tags as filtering criteria to create a tailored dashboard.
- Scan the site, or schedule a scan of a subset of the site.
- Create Dynamic Asset Groups using tags to subdivide and organize assets.
- Create an automated action to trigger a scan on assets that haven't been assessed.

All of our innovations are built side-by-side with our customers through the Rapid7 Voice program. Please contact your Rapid7 CSM or sales representative if you're interested in helping us make our products better. Not a customer of ours? Try a free 30-day trial of InsightVM today.

[Cloud Security Research] Cross-Cloud Adversary Analytics

Introducing Project Heisenberg Cloud

Project Heisenberg Cloud is a Rapid7 Labs research project with a singular purpose: understand what attackers, researchers, and organizations are doing in, across, and against cloud environments. This research is based on data collected from a new, Rapid7-developed honeypot framework called Heisenberg, along with internet reconnaissance data from Rapid7's Project Sonar.

Internet-scale reconnaissance with cloud-inspired automation

Heisenberg honeypots are a modern take on the seminal attacker detection tool. Each Heisenberg node is a lightweight, highly configurable agent that is centrally deployed using well-tested tools, such as Terraform, and controlled from a central administration portal. Virtually any honeypot code can be deployed to Heisenberg agents, and all agents send back full packet captures for post-interaction analysis. One of the main goals of Heisenberg is to understand attacker methodology. All interaction and packet capture data is synchronized to a central collector, and all real-time logs are fed directly into Rapid7's Logentries for live monitoring and historical data mining.

Insights into cloud configs and attacker methodology

Rapid7 and Microsoft deployed multiple Heisenberg honeypots in every "zone" of six major cloud providers - Amazon, Azure, Digital Ocean, Rackspace, Google, and Softlayer - and examined the service diversity in each of these environments and the types of connections attackers, researchers, and organizations are initiating within, against, and across these environments. To paint a picture of the services offered in each cloud provider, the research teams used Sonar data collected during Rapid7's 2016 National Exposure study.
Some highlights include:

- The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
- 22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
- Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
- Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate - 86% and 74%, respectively - than the other four cloud providers in this study.
- A wide range of attacks were detected, including Shellshock, SQL injection, PHP webshell injection, and credential attacks against SSH, Telnet, and remote framebuffer services (e.g. VNC, RDP & Citrix).
- Our honeypots caught "data mashup" businesses attempting to use the cloud to mask illegal content scraping activity.

Read More

For more detail on our initial findings with Heisenberg Cloud, please click here to download our report or here for slides from our recent UNITED conference presentation.

Acknowledgements

We would like to thank Microsoft and Amazon for engaging with us through the initial stages of this research effort, and, as indicated above, we hope they and other cloud hosting providers will continue to do so as we move forward with the project.
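Heisenberg's agents are far more sophisticated - centrally deployed, with full packet capture - but the core idea of a honeypot (listen, record everything, never serve a real service) can be sketched in a few lines. This is an illustrative toy in Python, not Heisenberg code; names and behavior are assumptions:

```python
# Toy honeypot sketch: accept TCP connections, log the peer address and the
# first bytes sent, then close. Real Heisenberg agents capture full packets
# and ship them to a central collector.
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, log=None, max_conns=1):
    """Listen on (host, port); record (peer_ip, first_payload) per connection."""
    if log is None:
        log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(5)
    bound_port = srv.getsockname()[1]  # port 0 means "pick an ephemeral port"

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            conn.settimeout(2)
            try:
                data = conn.recv(1024)  # grab whatever the scanner sends first
            except socket.timeout:
                data = b""
            log.append((addr[0], data))
            conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return bound_port, log, t

# Usage: poke the toy honeypot and see the interaction recorded.
port, log, t = run_honeypot()
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.0\r\n\r\n")
client.close()
t.join(timeout=5)
print(log)  # e.g. [('127.0.0.1', b'GET / HTTP/1.0\r\n\r\n')]
```

The research scales this pattern across every zone of six providers, which is what makes the cross-cloud comparison of scanning and attack traffic possible.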

Overcome Nephophobia - Don't be a Shadow IT Ostrich!

Every cloud...

When I was much younger and we only had three TV channels, I used to know a lot of Names of Things. Lack of necessity and general old age has meant I've now long since forgotten most of them (but thanks to Google, my second brain, I can generally "remember" them again as long as there's data available). Dinosaurs, trees, wild flowers, and clouds were all amongst the subject matters in which my five-year-old self was a bit of an expert. I would point at the sky and wow my parents with my meteorological prowess, all learnt from the pages of a book. Good times. These days I can manage about three cloud names off the top of my head before reaching for the Internet. Cirrus, stratus, cumulonimbus (OK, I had to double-check the last one).

Failing memory aside, I still love clouds, and frankly there's little that beats a decent sunset - which wouldn't be anywhere near as good without some clouds. So assuming you're still reading and not googling cloud names (because it can't just be me), I'd like you to think of a cloud please - an actual one, not a digital one. Chances are it's all fluffy and white, the cumulus (oh yeah) type. Of all the words I could use to describe a cumulus cloud, "scary" isn't one of them. But did you know that nephophobia - the irrational fear of clouds - is a real condition? Nephophobics struggle to look up into the sky, and in some cases won't even look at a picture of a cloud. Any phobia by its very nature is debilitating, leaving the sufferer feeling anxious at best, or totally unable to function at worst. I live with a six-foot strapping arachnophobe who is reduced to a gibbering wreck at anything larger than a money spider.

Digital Nephophobia

Nephophobia exists in our digital world too. Use of the cloud is written off and immediately written into policy. "We don't use the cloud" is something I've heard far too frequently.
And sometimes "don't" is more "can't" (blocked from doing so by government regulation) or "won't" (we just don't want to, we don't trust it), but actually "do... but don't know it" is more often the reality. This is where anxiety caused by the cloud is at its most valid - lack of visibility into the cloud services your users are already using (aka Shadow IT) is frankly terrifying for anyone concerned with data privacy or data security. I recently met with an IT security manager of a global network, who rightly said, "If you're not providing the services your users need and expect, then whether you like it or not you are probably being exposed to Shadow IT." Pretending it's not happening won't make it go away either, as many a mauled ostrich will merrily testify.

Digital Therapy

Many phobia therapies involve facing the fear head on. Now, I'm not suggesting that the best medicine to cure digital nephophobia is to burn the "we don't use the cloud" policy and open up your network to every cloud service available - far from it. First of all, it's vital to understand what is really happening within your environment now - which cloud services your users have been using without your knowledge. From there you can work out which cloud services you should be formally provisioning, which you should be monitoring, and which you should be locking down. Perform the due diligence - any cloud vendor worth their salt will be able to provide you with the reassurance that their service is secured, with in-depth details of how it is secured, what happens to your data in transit and at rest, how it is segmented from other organisations' data, who has access, and more.

Set yourself free

Once you've worked out what you need, and are confident in the service provider's security processes (which are likely to be on par with or indeed even better than those in your own network), the weight of digital nephophobia will begin to lift.
The benefits of using the cloud are considerable - a huge reduction in provisioning, administration, and maintenance overheads for a start. The speed with which you can provide new services compared to the old world of doing it all in-house is staggering - how many times have you heard users moan about how long it takes IT to bring in a new service? Speaking of moaning - how about those 79 bajillion helpdesk tickets and IMs and calls that come in because The Server's Down... Again? Distant memories - uptime is another benefit of embracing cloud services. You'll be in good company too - organisations from every vertical are using the cloud: financial institutions, governments, healthcare, defense, manufacturing, charities; the list goes on and on.

Tackling Shadow IT is the first step in the journey from nephophobe to nephophile

Our aforementioned ostrich friend wants to be a lesson to you. If you can't see where your problems are, you can't begin to do something about them, and if you bury your head in the sand you are in dire risk of becoming lion lunch. Visibility into cloud services, whether they are sanctioned or shadow IT services, is a string that every IT security professional needs to have in their bow. InsightIDR gives you that string (and a whole bunch more too!) - at the tips of your fingers lies a wealth of information on which cloud apps are being accessed, who is using them, when they are being used, and how frequently. And you don't have to code a bunch of complex queries to access this information - the interactive dashboard has it all.

Want to learn more about how InsightIDR gives organisations insight into cloud services, user behaviour, and accelerates incident investigations by over 20x (told you there were more bow strings available!)? We'd love to show you a demo. And if you would like to know more about our approach to cloud platform security, you can read all about it right here.

Weekly Metasploit Wrapup

Silence is golden

Taking screenshots of compromised systems can give you a lot of information that might otherwise not be readily available. Screenshots can also add a bit of extra spice to what might be an otherwise dry report. For better or worse, showing people that you have a shell on their system often doesn't have much impact. Showing people screenshots of their desktop can evoke a visceral reaction that can't be ignored. Plus, it's always hilarious seeing Microsoft Outlook open to the phishing email that got you a shell. In OSX, this can be accomplished with the module post/osx/capture/screenshot. Prior to this week's update, doing so would trigger that annoying "snapshot" sound, alerting your victim to their unfortunate circumstances. After a small change to that module, the sound is now disabled so you can continue hacking on your merry way, saving the big reveal for some future time when letting them know of your presence is acceptable.

Check your sums before you wreck your sums

Sometimes you just want to know if a particular file is the same as what you expect or what you've seen before. That's exactly what checksums are good at. Now you can run several kinds of checksums from a meterpreter prompt with the new checksum command. Its first argument is the hash type, e.g. "sha1" or "md5", and the rest are remote file names.

Metadata is best data, everyone knows this

As more and more infrastructure moves to the cloud, tools for dealing with the various cloud providers become more useful. If you have a session on an AWS EC2 instance, the new post/multi/gather/aws_ec2_instance_metadata module can grab EC2 metadata, which "can include things like SSH public keys, IPs, networks, user names, MACs, custom user data and numerous other things that could be useful in EC2 post-exploitation scenarios." Of particular interest in that list is custom user data.
People put all kinds of ridiculous things in places like that, and I would guess that there is basically a 100% probability that the EC2 custom user data field has been used to store usernames and passwords.

Magical ELFs

For a while now, msfvenom has been able to produce ELF library (.so) files with the elf-so format option. Formerly, these only worked with the normal linking system, i.e., when an executable loads the library from /usr/lib or wherever; due to a couple of otherwise unimportant header fields, they didn't work with LD_PRELOAD. For those who are unfamiliar with LD_PRELOAD, it's a little bit of magic that allows the linker to load a library implicitly rather than as a result of the binary saying it needs that library. This mechanism is often used for debugging, so you can stub out functions or make them behave differently when you're trying to track down a tricky bug. It's also super useful for hijacking functions. This use case provides lots of fun shenanigans you can do to create a userspace rootkit, but for our purposes, it's often enough simply to run a payload, so a command like this:

LD_PRELOAD=./mettle.so /bin/true

will result in a complete mettle session running inside a /bin/true process.

New Modules

Exploit modules (1 new)

- Windows Capcom.sys Kernel Execution Exploit (x64 only) by OJ Reeves and TheWack0lian

Auxiliary and post modules (3 new)

- ColoradoFTP Server 1.3 Build 8 Directory Traversal Information Disclosure by RvLaboratory and h00die
- MYSQL Directory Write Test by AverageSecurityGuy
- Gather AWS EC2 Instance Metadata by Jon Hart

Get it

As always, you can update to the latest Metasploit Framework with a simple msfupdate, and the full diff since the last blog post is available on GitHub: 4.12.28...4.12.30. To install fresh, check out the open-source-only Nightly Installers, or the binary installers, which also include the commercial editions.
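The meterpreter checksum command described above runs on the remote host, but the underlying idea is just standard hashing. Here's a local Python equivalent for comparing a file you've pulled back against a known-good digest; the file path and expected digest in the usage comment are illustrative.

```python
# Local analogue of meterpreter's `checksum md5 <file>`: hash a file in
# chunks (so large files don't need to fit in memory) and compare digests.
import hashlib

def checksum(hash_type: str, path: str) -> str:
    """hash_type is e.g. 'md5' or 'sha1', mirroring the meterpreter command."""
    h = hashlib.new(hash_type)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def checksum_bytes(hash_type: str, data: bytes) -> str:
    """Same thing for an in-memory buffer."""
    return hashlib.new(hash_type, data).hexdigest()

print(checksum_bytes("md5", b"hello"))  # 5d41402abc4b2a76b9719d911017c592

# Usage sketch: verify a grabbed binary matches what you saw before.
# expected = "5d41402abc4b2a76b9719d911017c592"
# assert checksum("md5", "loot/binary.bin") == expected
```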

[Q&A] User Behavior Analytics as Easy as ABC Webcast

Earlier this week, we had a great webcast all about User Behavior Analytics (UBA). If you'd like to learn why organizations are benefiting from UBA, including how it works, top use cases, and pitfalls to avoid, along with a demo of Rapid7 InsightIDR, check out the on-demand webcast User Behavior Analytics: As Easy as ABC, or the UBA Buyer's Tool Kit. During the InsightIDR demo, which includes top SIEM, UBA, and EDR capabilities in a single solution, we had a lot of attendee questions (34!). We grouped the majority of questions into key themes, with seven Q&As listed below. Want more? Leave a comment!

1. Is InsightIDR a SIEM?

Yes. We call InsightIDR the SIEM you've always wanted, armed with the detection you'll always need. Built hand-in-hand with incident responders, our focus is to help you reliably find intruders earlier in the attack chain. This is accomplished by integrating with your existing network and security stack, including other log aggregators. However, unlike traditional SIEMs, we require no hardware, come prebuilt with behavior analytics and intruder traps, and monitor endpoints and cloud solutions - all without having to dedicate multiple team members to the project.

2. Is InsightIDR a cloud solution?

Yes. InsightIDR was designed to equip security teams with modern data processing without the significant overhead of managing the infrastructure. Your log data is aggregated on-premise through an Insight Collector, then securely sent to our multi-tenant analytics cloud, hosted on Amazon Web Services. More information on the Insight Platform cloud architecture is available.

3. Does InsightIDR assist with PCI or SOX compliance, or would I need a different Rapid7 solution?

Not with every requirement, but many, including tricky ones. As InsightIDR helps you detect and investigate attackers on your network, it can help with many unique compliance requirements.
The underlying user behavior analytics will save you time retracing user activity (who had what IP?), as well as increase the efficiency of your existing stack (over the past month, which users generated the most IPS alerts?). Most notably, you can aggregate, store, and create dashboards out of your log data to solve tricky requirements like "Track and Monitor Access to Network Resources and Cardholder Data." More on how InsightIDR helps with PCI compliance.

4. Is it possible to see all shadow cloud SaaS solutions used by our internal users?

Yes. InsightIDR gets visibility into cloud services in two ways: (1) direct API integrations with leading services, such as Office 365, Salesforce, and Box, and (2) analyzing firewall, web proxy, and DNS traffic. Through the latter, InsightIDR will identify hundreds of cloud services, giving your team visibility into what's really happening on the network.

5. Where does InsightUBA leave off and InsightIDR begin?

InsightIDR includes everything in InsightUBA, along with major developments in three key areas:

- Fully searchable data set
- Endpoint interrogation and hunting
- Custom compliance dashboards

For a deeper breakdown, check out "What's the difference between InsightIDR & InsightUBA?"

6. Can we use InsightIDR/UBA with Nexpose?

Yes! Nexpose and InsightIDR integrate to provide visibility and security detection across assets and the users behind them. With this combination, you can see exactly which users have which vulnerabilities, putting a face and context to the vuln. If you dynamically tag assets in Nexpose as critical, such as those in the DMZ or containing a software package unique to domain controllers, those are automatically tagged in InsightIDR as restricted assets. Restricted assets in InsightIDR come with a higher level of scrutiny - you'll receive an alert for notable behavior like lateral movement, endpoint log deletion, and anomalous admin activity.

7. If endpoint devices are not joined to the domain, can the agents collect endpoint information to send to InsightIDR?

Yes. From working with our pen testers and incident response teams, we realize it's essential to have coverage for the endpoint. We suggest customers deploy the Endpoint Scan for the main network, which provides incident detection without having to deploy and manage an agent. For remote workers and critical assets not joined to the domain, our Continuous Agent is available, which provides real-time detection, endpoint interrogation, and even a built-in intruder trap, honey credentials, to detect pass-the-hash and other password attacks.

Huge thanks to everyone who attended the live or on-demand webcast - please share your thoughts below. If you want to discuss whether InsightIDR is right for your organization, request a free guided demo here.
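The traffic-analysis method from question 4 boils down to matching observed domains against a catalog of known services. A minimal Python sketch follows; the tiny domain catalog and function names here are invented for illustration (the real product's catalog covers hundreds of services):

```python
# Sketch: map DNS query logs to known cloud/SaaS services by domain suffix.
# The catalog below is illustrative, not InsightIDR's actual list.
CLOUD_SERVICES = {
    "dropbox.com": "Dropbox",
    "salesforce.com": "Salesforce",
    "box.com": "Box",
}

def service_for(domain: str):
    """Return the service name if any suffix of the queried domain is known."""
    parts = domain.lower().rstrip(".").split(".")
    for i in range(len(parts) - 1):
        suffix = ".".join(parts[i:])
        if suffix in CLOUD_SERVICES:
            return CLOUD_SERVICES[suffix]
    return None

def shadow_it_report(dns_log):
    """dns_log: iterable of (user, queried_domain). Returns {service: {users}}."""
    report = {}
    for user, domain in dns_log:
        svc = service_for(domain)
        if svc:
            report.setdefault(svc, set()).add(user)
    return report

log = [("alice", "www.dropbox.com"), ("bob", "na1.salesforce.com"), ("alice", "intranet.local")]
print(shadow_it_report(log))  # {'Dropbox': {'alice'}, 'Salesforce': {'bob'}}
```

Suffix matching is the key trick: scanners and apps rarely query the bare apex domain, so `na1.salesforce.com` still rolls up to Salesforce.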

Incident Detection Needs to Account for Disruptive Technologies

Since InsightIDR was first designed, there has been a noteworthy consistency: it collects data from your legacy networking infrastructure, the mobile devices accessing your resources, and your cloud infrastructure. This is because we believe that you need to monitor users wherever they have access to the network to accurately detect misuse and abuse of company resources, be they malicious or negligent in origin. This doesn't mean tiptoeing around employee privacy, but it does mean that you have to assume that a productive workforce is going to continually adopt new technology.

Monitor the established

No monitoring or detection solution can help your organization contain attacks if it isn't flexible enough to collect data from your organization's more traditional infrastructure. This means processing syslog, but it also means creative methods of collecting data from Microsoft applications that don't support syslog output, and it means creating both low-bandwidth dissolving agents and continuous agents to collect data from the many endpoints that attackers and malware target. It is not about collecting all of the data, but rather collecting all of the relevant data. While your organization has likely virtualized a great deal of its server infrastructure, it is probably not approaching a point where servers and endpoints have been abandoned, so effective detection needs to monitor them appropriately. Once InsightIDR is set up to do this (in a few hours), a lot of value is added by attributing all activity back to the actual users responsible and using this to separate the normal from the anomalous.

Monitor the disruptive

We have almost reached a time when it feels strange to label BYOD and cloud as "disruptive," but there are a great many organizations that are still coming to terms with the tremendous value they inevitably bring.
There are a lot of point solutions out there that help you monitor one or the other in isolation, and there is good reason: attackers are looking for any way into your network. Whether it is by leveraging WordPress to launch malware, cloning a user's mobile device to gain access, or exfiltrating stolen data to Dropbox, attackers have embraced disruptive technologies, so effective detection needs to do the same. Each vendor is different, but as customers of enterprise cloud applications, we need to demand administrative tools like auditing, at a minimum. As an employee, I don't want to go back to a time when I had to put down my mobile phone and get on the VPN from a PC to securely share a file with a coworker. Given my experience in the security market, I am guessing that a great deal of your workforce feels the same, minus the "securely" qualifier. InsightIDR was built to monitor activity on connected mobile devices and in your organization's cloud applications. Every major cloud solution vendor has recognized the need for its customers to monitor their cloud infrastructure just as they would their internal servers and devices.

Combine for the full picture

Point user behavior analytics solutions that monitor your legacy infrastructure, mobile devices, or cloud applications bring value, but they also bring more work. We believe that you need to monitor and correlate activity across all of them if you want to reduce the noise and effectively detect today's attacks. Collecting data in separate silos leads to more alerts and longer investigations to close them. Start with something as simple as recognizing where in the world a user is: VPN data can tell you this, and some solutions tout the ability to detect a user with simultaneous VPN sessions from two points on the globe, but isn't that an edge case?
If you see where a user was when connecting to Office 365 and where their mobile device was recently connecting, you can detect a problem when the first VPN session is established from the other side of the globe. Expand this model to scenarios other than geo-location, and you can detect incidents faster and apply enough context to quickly understand what caused them, significantly shortening your response times. If you work in a modern organization and want to effectively detect incidents across its resources, please contact us to schedule an InsightIDR demo. I think you'll find it very flexible.
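The geo-location correlation above can be sketched as a simple "impossible travel" check: given two authentication events with locations and timestamps, flag the pair if covering the distance would require implausible speed. This is a hedged illustration, not InsightIDR's actual logic; the coordinates and speed threshold are assumptions.

```python
# Sketch: "impossible travel" detection across two login events from
# different sources (e.g. an Office 365 login and a VPN session).
import math

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_KMH = 1000.0  # illustrative threshold, roughly airliner speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def impossible_travel(ev1, ev2):
    """ev = (lat, lon, epoch_seconds). True if the pair implies implausible speed."""
    (la1, lo1, t1), (la2, lo2, t2) = ev1, ev2
    dist = haversine_km(la1, lo1, la2, lo2)
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:
        return dist > 1.0  # simultaneous logins from far-apart places
    return dist / hours > MAX_PLAUSIBLE_KMH

# Office 365 login in Boston, then a VPN session from Singapore an hour later:
boston = (42.36, -71.06, 0)
singapore = (1.35, 103.82, 3600)
print(impossible_travel(boston, singapore))  # True
```

Correlating the mobile device's recent location as a third event is just more pairs through the same check, which is why pooling sources cuts both noise and missed detections.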

Positive Secondary Effects: Incident Response Teams Benefit From Cloud Applications

We primarily hear the term "secondary effects" after natural disasters: "an earthquake causes a gas line to rupture and a fire ensues" or "a volcano erupts and the sulfur cloud shuts down all flights across the Atlantic," but there are a lot of positive secondary effects out there. If developed properly, cloud applications bring with them secondary effects of singular events that benefit the customer community. Since I work for a security company, I cannot write a blog post about cloud applications without addressing the obvious concern so many people have.

Cloud Security

The first question that every security professional needs to ask a vendor when considering a cloud application is "How have you secured your cloud?" because it is just proper due diligence. The first step that Rapid7 took when building a security product in Amazon Web Services (AWS) was to lay out the strategy to make it secure and scalable, so we did a great deal of research and recognized that if you go to the necessary lengths (as we did), a cloud application can be just as secure as an on-premise app. Despite this forethought, I still run into a great many security pros who would prefer to have the data reside on one of their servers so they can truly control it. This is a natural feeling that we have in the security industry, because we have been taught to be paranoid and trust no one, but I liken it to why people feel safer driving a car than flying on a plane controlled by someone else, despite the fact that a pilot specializes and agonizes over flying the way that we do over securing your data. The next time you discover that a rogue server was set up in your organization without informing the security team, remember that security is the primary focus of our InsightIDR development team. Please always ask your vendors to walk you through the steps they have taken to secure your data in the cloud, so that you can take advantage of the secondary effects I will describe below.
Secondary Effect #1 - Quick reaction to change

One of the most common pains that I hear from incident response teams (internal or MSSP) is keeping the data flowing into their SIEM. The SIEM vendors are not to blame here; whether you are talking about firewalls, VPNs, or whatever other devices containing valuable information, the export formats vary widely and change without notice. The worst possible scenario here is that a concerning event occurs and the investigation warrants a look at firewall logs to quickly learn what happened, but those logs have not been reaching the SIEM since the last firewall software update (silently) changed the logging format. Now, InsightIDR does not have the Nostradamus-like power to predict which vendor will change formats without notice, but it can take your team out of the silo that makes every team feel the same pain. Let me explain: once a single InsightIDR customer has set up, say, a Fortinet FortiGate firewall as an event source, the parser and collection method are there for all new customers to simply choose from a drop-down list and connect in seconds via syslog, a shared folder, or otherwise. This means that your team does not have to remember how they set up parsing of Fortinet logs at their last organizations or ever even see the logs in raw form. This helps with initial setup time, but it really helps when the firewall data suddenly stops reaching the InsightIDR cloud because the parsing rules have changed. One of Rapid7's customers will get a notification that data has stopped flowing, but the quick reaction of the InsightIDR team to update the parsing logic for this event source means that we will typically support the new format before other IR teams end up with a gap in data. The more organizations in the community, the lower the likelihood that your team even notices that log formats have changed.
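The drop-down of shared event sources behaves like a parser registry: each source type maps to one parsing function, and fixing a format change in that one function fixes it for every customer. A hypothetical Python sketch (the source-type name and the simplified key=value format are illustrative assumptions, not InsightIDR internals):

```python
# Sketch: a shared parser registry. Each event-source type registers one
# parser; updating that parser benefits every deployment that selects it.
PARSERS = {}

def register(source_type):
    """Decorator that files a parser under its event-source name."""
    def wrap(fn):
        PARSERS[source_type] = fn
        return fn
    return wrap

@register("fortinet_fortigate")
def parse_fortigate(line: str) -> dict:
    """Parse key=value tokens, the general shape of FortiGate syslog (simplified)."""
    fields = {}
    for token in line.split():
        key, _, value = token.partition("=")
        if value:
            fields[key] = value.strip('"')
    return fields

def parse(source_type: str, line: str) -> dict:
    """Dispatch a raw log line to the registered parser for its source type."""
    return PARSERS[source_type](line)

event = parse("fortinet_fortigate", "srcip=10.0.0.5 dstip=8.8.8.8 action=deny")
print(event)  # {'srcip': '10.0.0.5', 'dstip': '8.8.8.8', 'action': 'deny'}
```

When a vendor silently changes its format, only the registered parser changes; every team downstream keeps its data flowing without ever touching raw logs.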
Secondary Effect #2 - Detection that learns faster than any single person

Though it is not a pain voiced as frequently as changing log formats, I have heard a great many security leaders describe the challenge of finding seasoned incident response experts. Typically, it comes in statements like "I worked at company X when they were breached and helped the team build detection for the techniques used there" or "I have a fantastic ArcSight guy and I don't know what I would do without him". Neither of these statements is a bad thing, but not every organization with security concerns can make those claims. In fact, I consistently hear that it is challenging for CSOs to fill the openings that are already approved. This is where InsightIDR's customer base gets the benefit of what I call a collective mind. We generate a lot of our detection capabilities with our customers, whether it is simply a customer saying "I ran X with WCE and didn't get an alert" or Rapid7 creating a new alert and finding a few customers who want to test it out and make sure it brings value. If attackers used a certain trick to stay hidden in 2011, we will build a way to detect it; more importantly, if they use a new technique in 2014 and a single person (a Rapid7 employee, a customer, or even a friend) learns of it, the entire InsightIDR customer base will get the benefit of detecting that action in the future. Attackers are sharing techniques with one another, so our only possible way to keep up is to do the same in our detection.

Secondary Effect #3 - User behavior analytics across a larger dataset

There are a lot of "security analytics" companies emerging these days, so I feel it important to point out the biggest challenge you will have with on-premise solutions: they only analyze the data in the silos I mentioned above.
Reducing noise and correlating the right disparate data for detection across various sources grows slowly when there is no collective mind on which research and proactive analysis can be performed. The siloed approach assumes that a security analytics solution already knows what data is important before install, and that it is not important to rapidly learn new use cases and adapt. This is why we believe it was necessary to build in the cloud, despite the natural resistance that some security teams still have. If you want to take advantage of these secondary effects at your organization, please start the process and request a demo. We will happily step through the way we have secured our cloud.

Nexpose Receives AWS Certification

Rapid7's Nexpose just became the first Threat Exposure Management solution to complete AWS' new rigorous pre-authorized scanning certification process! Normally, a customer must request permission from AWS support to perform vulnerability scans. This request must be made for each vulnerability scan engine or penetration testing…

Rapid7's Nexpose just became the first Threat Exposure Management solution to complete AWS' new rigorous pre-authorized scanning certification process! Normally, a customer must request permission from AWS support to perform vulnerability scans. This request must be made for each vulnerability scan engine or penetration testing tool and renewed every 90 days. The new pre-authorized Nexpose scan engine streamlines the process: when a pre-authorized scan engine is launched from the AWS Marketplace, permission is granted instantly. This AWS certification effort is a proof point of our continued dedication to securing organizations' data and reducing their risk, and to ensuring our solutions address real customer needs and market trends. Cloud is increasingly an essential part of today's modern business networks and an area in which our customers invest. In October 2015, IDC reported that spend on public cloud IT infrastructure was on track to increase by 29.6% year over year, totaling $20.5 billion (1). The new AWS certification underscores our commitment to ease of use and provides customers with assets in AWS the same level of security and experience as an on-premise deployment. Organizations can easily gain visibility into their entire attack surface, regardless of where an asset sits. The new Nexpose certification means that customers can simply use our pre-authorized AMI to scan their AWS assets without any of the authorization or permissions required for non-authorized solutions.

Learn more:

How to use and set up: Nexpose Scan Engine on the AWS Marketplace
Pre-authorized AMI: Nexpose Scan Engine (Pre-authorized) on AWS Marketplace

(1) IDC's Worldwide Quarterly Cloud IT Infrastructure Tracker, October 2015.

Nexpose Scan Engine on the AWS Marketplace

Update September 2017: For even more enhanced capabilities, check out the AWS Web Asset Sync Discovery Connection. Rapid7 is excited to announce that you can now find a Nexpose Scan Engine AMI on the Amazon Web Services Marketplace making it simple to deploy a pre-authorized…

Update September 2017: For even more enhanced capabilities, check out the AWS Web Asset Sync Discovery Connection. Rapid7 is excited to announce that you can now find a Nexpose Scan Engine AMI on the Amazon Web Services Marketplace, making it simple to deploy a pre-authorized Nexpose Scan Engine to scan your AWS assets!

What is an AMI?

An Amazon Machine Image (AMI) allows you to launch a virtual server in the cloud. This means you can deploy Nexpose Scan Engines via the Amazon Marketplace without having to go through the process of configuring and installing them yourself.

What are the benefits?

The Marketplace includes a specially configured Nexpose Scan Engine that is pre-authorized for scanning AWS assets. This gives Rapid7 customers the ability to scan AWS assets immediately, or on a recurring schedule, without having to contact Amazon in advance for permission, a process that can take a number of days. Using a Nexpose Scan Engine deployed within the AWS network also allows you to scan private IP addresses and collect information that may not be available via public IP addresses (such as internal databases). Additionally, scanning private IPs eliminates the need to pay for Elastic IPs.

How do I deploy a pre-authorized Scan Engine?

Current Nexpose customers can deploy the pre-authorized Nexpose Scan Engine as a remote scan engine for scanning AWS assets only. When creating your AWS discovery connection, simply check the box denoting that your scan engine is in the AWS network. You'll need a set of IAM credentials with permission to list assets in your AWS account. A minimal IAM policy to allow this looks like:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "NexposeScanEngine",
        "Effect": "Allow",
        "Action": [
          "ec2:DescribeInstances",
          "ec2:DescribeImages",
          "ec2:DescribeAddresses"
        ],
        "Resource": ["*"]
      }]
    }

The pre-authorized scan engine must use the "engine-to-console" communication direction.
This means the Scan Engine will initiate communication with the Nexpose Console. Preparing your Nexpose Console to pair with a pre-authorized Scan Engine is simple:

Ensure the pre-authorized Scan Engine can communicate with your Nexpose Console on port 40815. You may need to open a firewall port to allow this.
Generate a temporary shared secret on your console. This is used to authorize the Scan Engine. A shared secret can be generated from the Administration -> Scan Options -> Engines -> manage screen. Scroll to the bottom and use the Generate button. Keep this page open; you'll need the secret when launching your Scan Engine.

Now you are ready to deploy your pre-authorized Nexpose Scan Engine. Sign into your AWS console and navigate to the Nexpose Scan Engine (Pre-authorized) AWS Marketplace listing. You must use EC2 user data to tell your engine how to pair with your console. Follow these steps to launch the engine:

Click Continue on the AWS Marketplace listing.
Accept the terms using the Accept Software Terms button. It can take up to 10 minutes for Amazon to process your request. You'll receive an email from Amazon when you can launch the AMI.
After you receive the email, refresh the Marketplace page. You should see several blue "Launch with EC2 Console" buttons.
Click the Launch with EC2 Console button in your desired AWS region.
Proceed with the normal process of launching an EC2 instance. When you get to the Instance Details screen, expand the Advanced Details section. Provide the following EC2 user data, replacing the bracketed sections with information about your Nexpose Console:

    NEXPOSE_CONSOLE_HOST=<hostname or ip of your console>
    NEXPOSE_CONSOLE_PORT=40815
    NEXPOSE_CONSOLE_SECRET=<shared secret generated earlier>

Finish launching the EC2 instance.

Once the instance boots, it can take 10-15 minutes to pair with the console.
Verify the engine pairs with the console via the engine listing in the console (Administration -> Scan Options -> Engines -> manage). With this one-time configuration set, you can create a schedule to scan your AWS assets.
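If you script your launches instead of using the EC2 console, the user-data block from the steps above can be generated programmatically. A minimal sketch, assuming the pairing keys shown earlier; the helper function itself is hypothetical, and the resulting string could be passed to a launcher such as boto3's `run_instances` via its `UserData` parameter:

```python
def build_nexpose_user_data(console_host, shared_secret, console_port=40815):
    """Render the EC2 user data that tells a pre-authorized
    Nexpose Scan Engine how to pair with its console."""
    if not console_host or not shared_secret:
        raise ValueError("console host and shared secret are required")
    return "\n".join([
        f"NEXPOSE_CONSOLE_HOST={console_host}",
        f"NEXPOSE_CONSOLE_PORT={console_port}",
        f"NEXPOSE_CONSOLE_SECRET={shared_secret}",
    ])

print(build_nexpose_user_data("console.example.internal", "s3cret"))
```

Treat the shared secret like any other credential: generate it just before launch and avoid committing it to source control.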

#IoTSec and the Business Impact of Hacked Baby Monitors

By now, you've probably caught wind of Mark Stanislav's ten newly disclosed vulnerabilities last week, or seen our whitepaper on baby monitor security – if not, head on over to the IoTSec resources page. You may also have noticed that Rapid7 isn't really a Consumer…

By now, you've probably caught wind of Mark Stanislav's ten newly disclosed vulnerabilities last week, or seen our whitepaper on baby monitor security – if not, head on over to the IoTSec resources page. You may also have noticed that Rapid7 isn't really a Consumer Reports-style testing house for consumer gear. We're much more of an enterprise security services and products company, so what's the deal with the baby monitors? Why spend time and effort on this?

The Decline of Human Dominance

Well, this whole “Internet of Things” is in the midst of really taking off, which I'm sure doesn't come as news. According to Gartner, we're on track to see 25 billion-with-a-B of these Things operating in just five years, or something around three to four Things for every human on Earth. Pretty much every electronic appliance in your home is getting a network stack, an operating system kernel, and a cloud-backed service, and it's not like they have their own network full of routers and endpoints and frequencies to do all this on. They're using the home's WiFi network, hopping out to the Internet, and talking to you via your most convenient screen.

Pwned From Home

In the meantime, telecommuting increasingly blurs the lines between the “work” network and the “home” network. From my home WiFi, I check my work e-mail, hop on video conferences, commit code to GitHub (both public and private), and interact with Rapid7's assets directly or via a cloud service pretty much every day. I know I'm not alone in this. The imaginary line between the “internal” corporate network and the “external” network has been a convenient fiction for a while, and it's getting more and more porous as traversing that boundary makes more and more business sense.
After all, I'm crazy productive when I'm not in the office, thanks largely to my trusty 2FA, SSO, and VPN. So, we're looking at a situation where you have a network full of Things that haven't been IT-approved (as if that stopped anyone before) all chattering away, while we're trying to do sensitive work, like accessing and writing proprietary company data, on the very same network. Oh, and if the aftermarket testing we've seen (and performed) is to be believed, these devices haven't had a whole lot of security rigor applied. Compromising a network starts with knocking over the low-hanging fruit: that one device that hasn't seen a patch in forever, that doesn't keep logs, that has a silly password on an administrator-level account. Pretty much a device that has all of the classic misfeatures common to video baby monitors and every other early-market IoT device.

Let's Get Hacking

Independent research is critical in getting the point across that this IoT revolution is not just nifty and useful. It needs to be handled with care. Otherwise, the IoT space will represent a mountain of shells, pre-built vulnerable platforms, usable by bad guys to get footholds in every home and office network on Earth. If you're responsible for IT security, maybe it's time to take a survey of your user base and see if you can get a feel for how many IoT devices are one hop away from your critical assets. Perhaps you can start an education program on password management that goes beyond the local Active Directory and gets people to take all these passwords seriously. Heck, teach your users how to check and change defaults on their new gadgets, and how to document their changes for when things go south. In the meantime, check out our webinar tomorrow for the technical details of Mark's research on video baby monitors, and join us over on Reddit to “Ask Me Anything” about IoT security and what we can do to get ahead of these problems.

The real challenge behind asset inventory

As the IT landscape evolves, and as companies diversify the assets they bring to their networks - including on premise, cloud and personal assets - one of the biggest challenges becomes maintaining an accurate picture of which assets are present on your network. Furthermore, while…

As the IT landscape evolves, and as companies diversify the assets they bring to their networks, including on-premise, cloud, and personal assets, one of the biggest challenges becomes maintaining an accurate picture of which assets are present on your network. Furthermore, while the accurate picture is the end goal, the real challenge becomes optimizing the means to obtain that picture and keep it current. The traditional discovery paradigm of continuous discovery sweeps of your whole network is, by itself, becoming obsolete. As companies grow, sweeping becomes a burden on the network. In fact, in a highly dynamic environment, traditional sweeping approaches pretty quickly become stale and irrelevant. Our customers are dealing with networks made up of thousands of connected assets. Lots of them are decommissioned, and many others are brought to life multiple times a day from different physical locations on their local or virtual networks. In a world where many assets are not 'owned' by the organization, or where unauthorized/unmanaged assets connect to the network (such as mobile devices or personal computers), understanding the risk those assets introduce is paramount to the success of a security program. Rapid7 believes this very process of keeping your inventory up to date should be automated and instantaneous. Our technology allows our customers to use non-sweeping techniques like monitoring DHCP, DNS, Infoblox, and other relevant servers/applications. We also enable monitoring through technology partners such as vSphere or AWS for virtual infrastructure, and mobile device inventory with ActiveSync. In addition, Rapid7's research team, through its Sonar project technology (a topic that deserves its own blog), is able to scan the internet and understand our customers' external presence.
All of these automated techniques provide great visibility and complement the traditional approaches, such that our customers' experience with our products revolves around taking action and reducing risk as opposed to configuring the tool.

Why should you care? It really comes down to good hygiene and good security practices. It is unacceptable not to know about the presence of a machine that is exfiltrating data off of your network, or about rogue assets listening on your network. And beyond being unacceptable, it can take you out of business. Brand damage and legal and compliance risks are great concerns that are not mitigated by an accurate inventory alone; however, without knowing those assets exist in your network in a timely manner, it is impossible to assess the risk they bring and take action.

The SANS Institute rates this topic as the top security control (https://www.sans.org/critical-security-controls/control/1). They bring up key questions that companies should be asking their security teams: How long does it take to detect new assets on the network? How long does it take your current scanner to detect unauthorized assets? How long does it take to isolate/remove unauthorized assets from the network? What details (location, department) can the scanner identify on unauthorized devices? And plenty more.

Let Rapid7 technology worry about inventory. Once you've got asset inventory covered, you can move on to remediation, risk analysis, and other much more fun security topics with peace of mind that if it's in your network, you will detect it in a timely manner.
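The non-sweeping idea above can be sketched in a few lines: watch DHCP lease events and flag any MAC address not already in inventory, instead of sweeping the whole address space. The event shape and in-memory inventory here are hypothetical; a real deployment would consume your DHCP server's actual log stream:

```python
class AssetInventory:
    """Tracks known assets and flags new ones as DHCP leases appear."""

    def __init__(self, known_macs=()):
        # Normalize MACs so case differences don't create duplicates.
        self.known = set(m.lower() for m in known_macs)

    def observe_lease(self, mac, ip, hostname=""):
        """Process one DHCP lease event; return True if the asset is new."""
        mac = mac.lower()
        if mac in self.known:
            return False
        self.known.add(mac)
        print(f"NEW ASSET: {mac} leased {ip} ({hostname or 'unknown host'})")
        return True

inv = AssetInventory(known_macs=["AA:BB:CC:00:11:22"])
inv.observe_lease("aa:bb:cc:00:11:22", "10.0.0.5")          # already known
inv.observe_lease("de:ad:be:ef:00:01", "10.0.0.99", "nas")  # fires an alert
```

Because the alert fires the moment a lease is granted, a short-lived VM or personal device is seen even if it disappears before the next scheduled sweep.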

Join us at Camp Rapid7: Free Security Learnings All Summer Long

This summer, Rapid7 is hosting a ton of free, educational security content at the Rapid7 Security Summer Camp. Camp Rapid7 is a place where security professionals of all ages (Girls AND Boys Allowed!) can gain knowledge and skill in incident detection and response, cloud security,…

This summer, Rapid7 is hosting a ton of free, educational security content at the Rapid7 Security Summer Camp. Camp Rapid7 is a place where security professionals of all ages (Girls AND Boys Allowed!) can gain knowledge and skill in incident detection and response, cloud security, phishing, threat exposure management, and more. A few of the exciting activities for visitors at Camp Rapid7:

Take a load off at the Webcast Fire Pit and listen to the Counselors (Security Experts!) share “Campfire Horror Stories: 5 Most Common Findings in Pen Tests”, “Detecting the Bear in Camp: How to Find your True Vulnerabilities”, “Storming the Breach: Uncovering Attacker Tracks”, and more in live broadcasts throughout the summer!
Scale Security Maturity Mountain for resources on how to ensure your security program is strong and measuring up.
Hit the beach and learn how to monitor for Phishing out on the water, and how to discover even more threats from the Incident Detection Lifeguard Hut.
Gaze into the skies to up your Cloud Security knowledge.
Stop by the CISO's Cabin to learn how to transform your organization's security program to be relevant, actionable, and sustainable.

Start Exploring the Rapid7 Security Summer Camp Today!

Top 3 Takeaways from the "Getting One Step Ahead of the Attacker: How to Turn the Tables" Webcast

For too long, attackers have been one step (or leaps) ahead of security teams. They study existing security solutions in the market and identify gaps they can use to their advantage. They use attack methods that are low cost and high return like stolen credentials…

For too long, attackers have been one step (or several leaps) ahead of security teams. They study existing security solutions in the market and identify gaps they can use to their advantage. They use attack methods that are low cost and high return, like stolen credentials and phishing, which work more often than not. They bank on security teams being too overwhelmed by security alerts to sift through the noise and detect their presence. In this week's webcast, Matt Hathaway explored what security professionals need to do to get ahead of attackers, whether by increasing the cost of attacks, catching attackers in their favorite hiding spots, or knowing how to recognize the tools and techniques all attackers use. Read on for the top 3 takeaways from “Getting One Step Ahead of the Attacker: How to Turn the Tables”:

1) Attackers Have Gotten Creative – Defenders have progressed malware detection to the point where even newer and more innovative malware can be detected and blocked with a high success rate, which is great. However, success in this area pushes attackers to adopt more stealthy and creative tactics, often involving social engineering and user impersonation. Attackers study their targets and will use spear phishing to get a foothold on an organization's network through its users. Once in, they can move from system to system by continuing to impersonate user activity. Attackers also understand things like how the average network is laid out, gaps they may be able to take advantage of, and where people generally have monitoring in place. Attackers don't even necessarily have to be sophisticated to be successful; sometimes persistence is enough.

2) Anomalous Activity is the Answer – Alliteration aside, it really is crucial for security professionals to be able to recognize what kind of user activity on their network is normal, and what is not. How many systems should each individual access, and how many do they actually access?
What data is typically transmitted internally and externally by different groups in your organization? Have a baseline, a simple measurement of what constitutes normal access for the average user. The ability to access and review all the data for an individual, account, or system is also important for when something abnormal occurs and you need more context to determine whether the alert is valid. If you aren't monitoring for anomalous user behavior, it becomes harder and harder to detect an attack early enough to prevent data loss.

3) Don't Neglect Endpoints or the Cloud – The majority of user activity happens on endpoints and in the cloud, and often this information isn't getting logged in a centralized place. The cloud provides a lot of convenience and productivity, but making things easier for users introduces more opportunities for attackers. If you don't know what cloud services your company is using or what people are doing in them, attackers have a way to get data out of an organization without even touching the network. You must analyze behavior across cloud services and your endpoints so you don't miss any suspicious changes. Failure to monitor user behavior on endpoints and in the cloud creates major blind spots for security professionals. Sometimes an indication of attack will tend toward the obvious, for example a vulnerability being exploited or a port scan. However, a great deal of attacker behavior will be much more nuanced and stealthy. For a more in-depth discussion of how to spot attacker behavior and increase the cost of attacks to reduce risk, view the on-demand webcast now.
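The baselining idea in takeaway #2 can be sketched in a few lines: model each user's typical daily system-access count and flag days that fall far outside it. The z-score threshold and sample data are illustrative assumptions, not a description of how any particular product scores anomalies:

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's access count if it sits more than z_threshold
    standard deviations above this user's historical mean."""
    if len(history) < 2:
        return False  # not enough history to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # perfectly flat history: any change stands out
    return (today - mu) / sigma > z_threshold

# A user who normally touches 3-5 systems a day suddenly touches 40.
history = [3, 4, 5, 4, 3, 4, 5, 4]
print(is_anomalous(history, 40))  # expected: True
print(is_anomalous(history, 5))   # expected: False
```

The per-user baseline is the key design choice: 40 systems in a day is routine for an administrator but a red flag for someone in accounting, so a single global threshold would drown one group in alerts while missing the other.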

Securing the Shadow IT: How to Enable Secure Cloud Services for Your Business

You may fear that cloud services jeopardize your organization's security. Yet, your business relies on cloud services to increase its productivity. Introducing a policy to forbid these cloud services may not be a viable option. The better option is to get visibility into your shadow…

You may fear that cloud services jeopardize your organization's security. Yet your business relies on cloud services to increase its productivity, so introducing a policy to forbid these cloud services may not be a viable option. The better option is to get visibility into your shadow IT and to enable your business to use it securely, increasing productivity and keeping up with the market.

Step one: Find out which cloud services your organization is using

First, you'll want to figure out what is actually in use in your organization. Most IT departments we talk to underestimate how many cloud services are being used by a factor of 10. That's shocking. The easiest way to detect which services are commonly in use is by leveraging Rapid7 UserInsight, a solution for detecting and investigating security incidents from the endpoint to the cloud. For this step, UserInsight analyzes your web proxy, DNS, and firewall logs to outline exactly what services are in use and which users are subscribing to them. This is much easier than sifting through raw log files and identifying which cloud service may be behind a certain entry.

Step two: Have a conversation with employees using these services

Knowing who uses which services enables you to identify the users and have a conversation with them about why they use the service and what data is shared with it. UserInsight makes it easy to correlate web proxy, DNS, and firewall activity to a user because it keeps track of which user had which IP address on the corporate LAN, WiFi, and VPN. All of this information is just one click away. Based on this information, you can:

Move the users to a comparable but more secure service (e.g. from Dropbox to Box.com),
Talk with users about why a certain service is not suitable for use on the corporate network (e.g. eDonkey), and
Enable higher security on existing services by consolidating accounts under corporate ownership and enabling stronger monitoring.

Step three: Detect compromised accounts through geolocation of cloud and on-premise accounts

Compromised credentials are leveraged in three out of four breaches, yet many organizations have no way to detect how credentials are being used. UserInsight can detect credential compromise in on-premise systems and in the cloud. One way to do this is through geolocation. If a user's mobile device accesses email in New York and then a cloud service is accessed from Germany within a time span of 20 minutes, this indicates a security incident that should be investigated. UserInsight integrates with dozens of cloud services, including Salesforce.com, Box.com, and Google Apps, to geolocate authentications even if they happen outside of the corporate network. The solution correlates not only cloud-to-cloud authentications but also cloud-to-on-premise authentications, giving you much faster and higher-quality detection of compromised credentials. With Amazon Web Services (AWS), UserInsight can even detect advanced changes, such as changed passwords, changes to groups, and removed user policies. Read more about UserInsight's ability to detect compromises of AWS accounts.

Step four: Investigate potential exfiltration to cloud services

If attackers compromise your corporate network, they often use cloud storage services to exfiltrate information, even if the company is not using a particular service at all. When investigating an incident that involves a compromised user, you can review that user's transmission volume to figure out if and how much data was exfiltrated this way.
UserInsight makes this exceedingly easy, breaking volume down by user and enabling you to see the volume on a timeline.If you would like to learn more about how UserInsight can help you get more visibility into your organization's cloud service usage, enabling productive conversations and better cloud security, sign up for a free, guided UserInsight demo on the Rapid7 website.
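The geolocation heuristic in step three is often called "impossible travel": two authentications whose implied ground speed exceeds anything a human could manage. A minimal sketch, assuming you already have per-login coordinates and timestamps; the 900 km/h cutoff (roughly airliner speed) is an illustrative assumption, not a product setting:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Each login is (lat, lon, epoch_seconds). Return True if the pair
    implies travel faster than max_speed_kmh."""
    dist = haversine_km(login_a[0], login_a[1], login_b[0], login_b[1])
    hours = abs(login_b[2] - login_a[2]) / 3600
    if hours == 0:
        return dist > 0  # simultaneous logins from different places
    return dist / hours > max_speed_kmh

# Email checked in New York, then a cloud login from Frankfurt 20 minutes later.
ny = (40.71, -74.01, 0)
frankfurt = (50.11, 8.68, 20 * 60)
print(impossible_travel(ny, frankfurt))  # expected: True
```

In practice you would also allow for GeoIP error and corporate VPN egress points, which can make a legitimate login appear to jump continents.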

Federal Friday - 11.7.14 - Up in the Clouds...

Happy Friday, Federal friends! I hope everyone had a festive Halloween! According to the commercials I've been seeing on starting on 11/1 I guess we're skipping Thanksgiving this year and jumping right into the Holiday Season... So the time has finally come, Fed is…

Happy Friday, Federal friends! I hope everyone had a festive Halloween! According to the commercials I've been seeing starting on 11/1, I guess we're skipping Thanksgiving this year and jumping right into the Holiday Season... So the time has finally come: Fed is starting to embrace the cloud (slowly). Within the last week we've seen NIST push out a road map for Cloud Infrastructure & Computing, as well as an article highlighting tips for evaluating the cloud prior to any migration. This is an interesting change of thought, as it seems they have ramped up their efforts while realizing that migration to the cloud is inevitable. As we have seen on the commercial side of the world, moving to the cloud has some very real advantages in terms of employee productivity. Not to be overlooked, though, are the fears of what happens to your data when it's up there, especially given the well-hyped Hollywood leaks. That being said, industry has taken a focus on making sure the cloud is secure, some of which stems from additional regulations and compliance standards. Change is scary, but change is also necessary. If you're already planning, or just beginning to think about, moving into the cloud, there are now some federally sanctioned pieces of information for you to digest. Additionally, here are the 8 steps highlighted by GCN:

Define and document
Evaluate the cloud architecture
Perform due diligence
Negotiate contractual arrangements
Define continuous monitoring
Define measures to ensure privacy
Ensure that the cloud solution provides necessary record-keeping functions
Ensure compliance with FISMA standards

The Dude abides, even in the cloud(s)...

Featured Research

National Exposure Index 2017

The National Exposure Index is an exploration of data derived from Project Sonar, Rapid7's security research project that gains insights into global exposure to common vulnerabilities through internet-wide surveys.

Learn More

Toolkit

Make Your SIEM Project a Success with Rapid7

In this toolkit, get access to Gartner's report “Overcoming Common Causes for SIEM Solution Deployment Failures,” which details why organizations are struggling to unify their data and find answers from it. Also get the Rapid7 companion guide with helpful recommendations on approaching your SIEM needs.

Download Now

Podcast

Security Nation

Security Nation is a podcast dedicated to covering all things infosec – from what's making headlines to practical tips for organizations looking to improve their own security programs. Host Kyle Flaherty has been knee-deep in the security sector for nearly two decades. At Rapid7 he leads a solutions-focused team with the mission of helping security professionals do their jobs.

Listen Now