Rapid7 Blog

AWS  

AWS power-up: Tag import, asset cleanup, AssumeRole, ad-hoc scan

AWS instances present many challenges to security practitioners, who must manage the spikes and dips of resources in infrastructures that deal in very short-lived assets. Better and more accurate syncing of when instances are spun up or down, altered, or terminated directly impacts the quality…

AWS instances present many challenges to security practitioners, who must manage the spikes and dips of resources in infrastructures that deal in very short-lived assets. Better and more accurate syncing of when instances are spun up or down, altered, or terminated directly impacts the quality of security data.

A New Discovery Connection

Today we're excited to announce better integration between the Security Console and Amazon Web Services with the new Amazon Web Services Asset Sync discovery connection in InsightVM and Nexpose. This new connection is the result of customer feedback, and we would like to thank everyone who submitted ideas through our idea portal. It has some notable and exciting improvements over our existing AWS discovery connection that we can't wait for you to take advantage of.

Automatic Syncing with the Security Console as AWS Assets Are Spun Up and Spun Down

As assets are created and decommissioned in AWS, the new Amazon Web Services Asset Sync discovery connection will update your Security Console, so users no longer have to worry about their Security Console data being stale or inaccurate. That means no more chasing down assets in AWS for remediation only to find that the instances no longer exist, and no more carving out time to clean up decommissioned AWS assets from the Security Console.

Import AWS Tags and Filtering by AWS Tags

One feature that we've gotten a lot of requests for is importing tags from AWS. With the Amazon Web Services Asset Sync discovery connection, you can now synchronize AWS tags and even use them to filter which assets get imported. You can also filter tags themselves so you only see tags that are important to you. Once the tags are synced, they can be used just like any other tag within Nexpose—that includes using them to filter assets, create dynamic asset groups, and even create automated actions. Remove a tag in AWS? Nexpose will detect the change and automatically remove it as well.
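Conceptually, the tag-based import filtering described above amounts to evaluating a predicate over each instance's tag set. Here is a minimal illustrative sketch in Python; the instance IDs, tag names, and `matches_filter` helper are invented for the example and are not product code:

```python
# Illustrative sketch: decide which discovered AWS instances to import,
# mirroring the described key/value tag-filtering behavior.

def matches_filter(instance_tags, required_tags):
    """Return True if every required key:value pair appears in the instance tags."""
    return all(instance_tags.get(k) == v for k, v in required_tags.items())

instances = [
    {"id": "i-0a1", "tags": {"Class": "Database", "Type": "Production"}},
    {"id": "i-0b2", "tags": {"Class": "Web", "Type": "Staging"}},
]
required = {"Type": "Production"}

to_import = [i["id"] for i in instances if matches_filter(i["tags"], required)]
print(to_import)  # ['i-0a1']
```

The same predicate idea extends naturally to the tag-name filtering mentioned above: instead of matching key and value, you would keep only the keys on an allow-list.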
Use AssumeRole to Fine-Tune Adding to Sites

Users can now leverage AWS AssumeRole to decide which of their assets across all of their AWS accounts to include in a single site, without having to configure multiple AWS discovery connections in their Security Console. Coupled with tag-based filtering, this makes managing your AWS assets much more straightforward. AssumeRole is now also available to Security Consoles outside of the AWS environment.

Ad-Hoc Scans with the Pre-Authorized Engine

Another feature users have requested is more flexibility in selectively scanning sites that contain AWS assets. As part of the Amazon Web Services Asset Sync discovery connection, users can now select which assets within a site they wish to scan with the AWS pre-authorized engine.

Use the Security Console Proxy

Proxy support is also available for the Amazon Web Services Asset Sync discovery connection. If users already have a proxy server configured and enabled via their Security Console settings, they do not have to change their firewall settings to take advantage of this new discovery connection. Simply check the "Connect to AWS via proxy" box during configuration and the connection will use the configured proxy.

Existing AWS Discovery Connections

The previous AWS discovery connection will still be available; however, we recommend users transition to the new, more powerful and flexible Amazon Web Services Asset Sync discovery connection for managing their AWS assets.

Next Steps

To take advantage of this new capability, you will need version 6.4.55 of the Security Console for Nexpose and InsightVM. Not already using InsightVM? Get a free trial here.
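For readers curious about the mechanics: multi-account support of this kind rests on AWS STS AssumeRole, where one set of credentials assumes a per-account IAM role identified by its ARN. The sketch below shows that pattern in Python; the account IDs and role name are invented placeholders (not product defaults), and the boto3 call is commented out because it requires real credentials:

```python
# Sketch of the STS AssumeRole pattern: a single discovery connection can
# reach many AWS accounts by assuming a role in each one. The account IDs
# and role name below are placeholders for illustration only.

def role_arn(account_id, role_name):
    """Build the IAM role ARN that an AssumeRole call targets."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

accounts = ["111111111111", "222222222222"]
arns = [role_arn(acct, "NexposeDiscoveryRole") for acct in accounts]
print(arns[0])  # arn:aws:iam::111111111111:role/NexposeDiscoveryRole

# With real credentials, each role would be assumed roughly like this:
# import boto3
# sts = boto3.client("sts")
# creds = sts.assume_role(RoleArn=arns[0],
#                         RoleSessionName="discovery")["Credentials"]
```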

Announcing Microsoft Azure Asset Discovery in InsightVM

Almost every security or IT practitioner is familiar with the ascent and continued dominance of Amazon Web Services (AWS). But you only need to peel back a layer or two to find Microsoft Azure growing its own market share and establishing its position as the…

Almost every security or IT practitioner is familiar with the ascent and continued dominance of Amazon Web Services (AWS). But you only need to peel back a layer or two to find Microsoft Azure growing its own market share and establishing its position as the most-used, most-likely-to-renew public cloud provider. Azure is a force to be reckoned with. Many organizations benefit from this friendly competition and not only adopt Azure but increasingly use both Azure and AWS. In this context, security teams are often caught on the swinging end of the rope: a small shake at the top of the rope triggers big swings at the bottom. A credit card is all that is needed to spin up new VMs, but as security teams know, the effort to secure the resulting infrastructure is not trivial.

Built for modern infrastructure

One way you can keep pace is by using a Rapid7 Scan Engine from the Azure Marketplace. You can make use of a pre-configured Rapid7 Scan Engine within your Azure infrastructure to gain visibility into your VMs from within Azure itself. Another way is to use the Rapid7 Insight Agent on your VM images within Azure. With Agents, you get visibility into your VMs as they spin up. This sounds great in a blog post, but since assets in Microsoft Azure are virtual, they come and go without much fanfare. Remember the bottom-of-the-rope metaphor? You're there now. Security needs visibility to identify vulnerabilities in infrastructure and get on the path to remediation, but this is complicated by a few questions:

Do you know when a VM is spun up? How can you assess risk if the VM appears outside your scan window?
Do you know when a VM is decommissioned? Are you reporting on VMs that no longer exist?
Do you know what a VM is used for? Is your reporting simply a collection of VMs, or do those VMs mean something to your stakeholders?

You might struggle to answer these questions if you employ tools that weren't designed with the behavior of modern infrastructure in mind.
Automatically discover and manage assets in Azure

InsightVM and Nexpose, our vulnerability management solutions, offer a new discovery connection that communicates directly with Microsoft Azure. If you know our existing discovery connection to AWS, you'll find this familiar, but we've added new powers to fit the behavior of modern infrastructure:

Automated discovery: Detect when assets in Azure are spun up and trigger visibility when you need it using Adaptive Security.
Automated cleanup: When VMs are destroyed in Azure, automatically remove them from InsightVM/Nexpose. Keep your inventory clean and your license consumption cleaner.
Automated tag synchronization: Synchronize Azure tags with InsightVM/Nexpose to give meaning to the assets discovered in Azure. Eliminate manual efforts to keep asset tags consistent.

Getting started

First, you'll need to configure Azure to allow InsightVM/Nexpose to communicate with it directly. Follow the step-by-step guide in the Azure Resource Manager docs. Specifically, you will need the following pieces of information to set up your connection: Application ID, Application Secret Key, and Tenant ID. Once you have this information, navigate to Administration > Connections > Create. Select Microsoft Azure from the dropdown menu. Enter a Connection name, your Tenant ID, Application ID, and Application Secret Key (a.k.a. Authentication Key). Next, select a Site to contain the assets discovered from Azure. You can control which assets are imported with Azure tags. Azure uses a key:value format for tags. If you want to enter multiple tags, use a comma as a delimiter, e.g., Class:Database,Type:Production. Check Import tags to import all tags from Azure. If you don't care to import all tags in Azure, you can specify exactly which ones to import. The tags on the VM in Azure will be imported and associated automatically with assets as they are discovered.
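Assuming the comma-delimited key:value convention for the tag filter string (e.g. Class:Database,Type:Production), parsing it is straightforward. A small illustrative sketch, not product code:

```python
# Illustrative parser for a comma-delimited key:value tag filter string,
# as in "Class:Database,Type:Production".

def parse_tag_filter(raw):
    """Split a filter string into a {key: value} dict."""
    tags = {}
    for pair in raw.split(","):
        key, _, value = pair.partition(":")
        tags[key.strip()] = value.strip()
    return tags

print(parse_tag_filter("Class:Database,Type:Production"))
# {'Class': 'Database', 'Type': 'Production'}
```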
When there are changes to tag assignment in Azure, InsightVM/Nexpose will automatically synchronize tag assignments. Finally, as part of the synchronization, when VMs are destroyed within Azure the corresponding asset in InsightVM/Nexpose is deleted automatically, ensuring your view remains as fresh and current as your modern infrastructure.

Great success! Now what?

If you've made it this far, you have your Azure assets synchronized with InsightVM/Nexpose, and you might even have a handful of tags imported. Here are a few ideas to consider when looking to augment your kit:

Create an Azure Liveboard: use Azure tags as filtering criteria to create a tailored dashboard.
Scan the site, or schedule a scan of a subset of the site.
Create Dynamic Asset Groups using tags to subdivide and organize assets.
Create an automated action to trigger a scan on assets that haven't been assessed.

All of our innovations are built side-by-side with our customers through the Rapid7 Voice program. Please contact your Rapid7 CSM or sales representative if you're interested in helping us make our products better. Not a customer of ours? Try a free 30-day trial of InsightVM today.
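The synchronization behavior described in this post (tags follow Azure, and destroyed VMs disappear from the console) can be pictured as a reconciliation pass over two inventories. A hedged Python sketch with invented VM IDs and tag names, purely to illustrate the idea:

```python
# Reconciliation sketch: compare the console's asset view against Azure.
# Assets whose VMs no longer exist are removed; tags are taken from Azure.

def sync(local_assets, azure_vms):
    """Both arguments map vm_id -> {tag_key: tag_value}.

    Returns (removed_ids, updated) where `updated` holds the VMs whose
    tag sets differ from the local view and should be rewritten.
    """
    removed = set(local_assets) - set(azure_vms)
    updated = {vm: tags for vm, tags in azure_vms.items()
               if local_assets.get(vm) != tags}
    return removed, updated

local = {"vm-1": {"env": "prod"}, "vm-2": {"env": "dev"}}   # console view
azure = {"vm-1": {"env": "prod", "owner": "dba"}}           # vm-2 was destroyed

removed, updated = sync(local, azure)
print(removed)  # {'vm-2'}
print(updated)  # {'vm-1': {'env': 'prod', 'owner': 'dba'}}
```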

DevOps: Vagrant with AWS EC2 & Digital Ocean

The Benefits of Vagrant Plugins Following on from my recent DevOps blog posts, The DevOps Tools We Use & How We Use Them and Vagrant with Chef-Server, we will take another step forward and look into provisioning our servers in the cloud. There are many…

The Benefits of Vagrant Plugins

Following on from my recent DevOps blog posts, The DevOps Tools We Use & How We Use Them and Vagrant with Chef-Server, we will take another step forward and look into provisioning our servers in the cloud. There are many cloud providers out there, most of which provide some sort of API. Dealing with the different APIs and scripts can become cumbersome and confusing when your main focus is a fault-tolerant, scalable system. This is where Vagrant and many of its plugins shine. Vagrant has a wide range of plugins, from handling Chef and Puppet to provisioning servers on many different cloud providers. We're going to focus on three plugins in particular: vagrant-aws, vagrant-digitalocean, and vagrant-omnibus. The AWS and Digital Ocean plugins allow us to utilize both our Chef-server and the public infrastructure provided by Amazon and Digital Ocean. The omnibus plugin is used to install a specified version of Chef on your servers.

Installation of Vagrant Plugins

To install the Vagrant plugins, run the following commands:

vagrant plugin install vagrant-aws
vagrant plugin install vagrant-digitalocean
vagrant plugin install vagrant-omnibus

Running vagrant plugin list should give you the following output:

vagrant-aws (0.4.1)
vagrant-digitalocean (0.5.3)
vagrant-omnibus (1.3.1)

More information on the plugins can be found here:

https://github.com/mitchellh/vagrant-aws
https://github.com/smdahlen/vagrant-digitalocean
https://github.com/schisamo/vagrant-omnibus

Amazon AWS

The first cloud provider we will look at is Amazon AWS. If you don't already have an account, I suggest you sign up for one here and have a quick read of their EC2 GetStarted docs. Once signed up, you will need to generate an account access key and secret key. To do this, follow the instructions below:

Go to the IAM console.
From the navigation menu, click Users.
Select your IAM user name.
Click User Actions, and then click Manage Access Keys.
Click Create Access Key.
Your keys will look something like this:

Access key ID example: ABCDEF0123456789ABCD
Secret access key example: abcdef0123456/789ABCD/EF0123456789abcdef

Click Download Credentials, and store the keys in a secure location.

Vagrant AWS with Chef-Server

Now we have everything we need to get started. Again we will create a Vagrantfile similar to what was done in my last blog post, DevOps: Vagrant with Chef-Server; however, this time we will omit a few things like config.vm.box and config.vm.box_url. The reason for this is that we are going to point our Vagrantfile at an Amazon AMI instead.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_plugin 'vagrant-aws'
Vagrant.require_plugin 'vagrant-omnibus'

Vagrant.configure("2") do |config|
  config.omnibus.chef_version = :latest
  config.vm.synced_folder '.', '/vagrant', :disabled => true
  config.vm.box = "dummy"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

  # Provider
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "ABCDEF0123456789ABCD"
    aws.secret_access_key = "abcdef0123456/789ABCD/EF0123456789abcdef"
    aws.keypair_name = "awskey"
    aws.ami = "ami-9de416ea" # Ubuntu 12.04 LTS
    aws.region = "eu-west-1"
    aws.instance_type = "t1.micro"
    aws.security_groups = ["default"]
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "path/to/your/awskey.pem"
    aws.tags = { 'Name' => 'Java' }
  end

  # Provisioning
  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = 'https://api.opscode.com/organizations/logentries'
    chef.validation_key_path = '../../.chef/logentries-validator.pem'
    chef.validation_client_name = 'logentries-validator'
    chef.log_level = 'info'
    chef.add_recipe 'java_wrapper'
  end
end

You should replace the example values above (keys, key pair, AMI, region, and Chef settings) with your own.
The only difference this time when running Vagrant is that you need to pass a provider argument, so your command should look like this:

vagrant up --provider=aws

Once your Chef run has completed, you will have a new instance in Amazon running your specified version of Java. Your output should look similar to this:

root@ip-172-31-28-193:~# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

Digital Ocean

Another cloud provider which has been gaining a lot of attention lately is Digital Ocean. If you don't already have a Digital Ocean account, you can sign up here. Also have a look at their getting started guide if you're new to Digital Ocean. The main difference between this Vagrantfile and the AWS Vagrantfile is the provider block.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_plugin 'vagrant-digitalocean'
Vagrant.require_plugin 'vagrant-omnibus'

Vagrant.configure('2') do |config|
  config.vm.provider :digital_ocean do |provider, override|
    provider.client_id = 'abcdef0123456789ABCDEF'
    provider.api_key = 'abcdef0123456789abcdef0123456789'
    provider.image = 'Ubuntu 12.04.3 x64'
    provider.region = 'Amsterdam 1'
    provider.size = '512MB'
    provider.ssh_key_name = 'KeyName'
    override.ssh.private_key_path = '/path/to/your/key'
    override.vm.box = 'digital_ocean'
    override.vm.box_url = "https://github.com/smdahlen/vagrant-digitalocean/raw/master/box/digital_ocean.box"
    provider.ca_path = "/usr/local/opt/curl-ca-bundle/share/ca-bundle.crt"
  end

  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = 'https://api.opscode.com/organizations/logentries'
    chef.validation_key_path = '../../.chef/logentries-validator.pem'
    chef.validation_client_name = 'logentries-validator'
    chef.log_level = 'info'
    chef.node_name = 'do-java'
    chef.add_recipe 'java_wrapper'
  end
end

You should replace the example values above (client ID, API key, SSH key, and Chef settings) with your own.
This time when we run vagrant up, we pass a provider parameter of digital_ocean:

vagrant up --provider=digital_ocean

Once the Vagrant run has completed, SSHing into your server and running java -version should give you the following output:

root@do-java:~# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

If you do run into issues, it might be that your cookbooks are out of date. Try running the following:

librarian-chef update
knife cookbook upload -a

Now with the above scripts at your disposal, you can use Chef-server with Vagrant and as many different cloud providers as you wish to keep your systems up to date, in sync, and running to keep your customers happy! Ready to bring Vagrant to your production and development? Check out our free log management tool and get started today.

Weekly Metasploit Wrapup

Silence is golden Taking screenshots of compromised systems can give you a lot of information that might otherwise not be readily available. Screenshots can also add a bit of extra spice to what might be an otherwise dry report. For better or worse, showing people…

Silence is golden

Taking screenshots of compromised systems can give you a lot of information that might otherwise not be readily available. Screenshots can also add a bit of extra spice to what might be an otherwise dry report. For better or worse, showing people that you have a shell on their system often doesn't have much impact. Showing people screenshots of their desktop can evoke a visceral reaction that can't be ignored. Plus, it's always hilarious seeing Microsoft Outlook open to the phishing email that got you a shell. On OS X, this can be accomplished with the module post/osx/capture/screenshot. Prior to this week's update, doing so would trigger that annoying "snapshot" sound, alerting your victim to their unfortunate circumstances. After a small change to that module, the sound is now disabled so you can continue hacking on your merry way, saving the big reveal for some future time when letting them know of your presence is acceptable.

Check your sums before you wreck your sums

Sometimes you just want to know if a particular file is the same as what you expect or what you've seen before. That's exactly what checksums are good at. Now you can run several kinds of checksums from a meterpreter prompt with the new checksum command. Its first argument is the hash type, e.g. "sha1" or "md5", and the rest are remote file names.

Metadata is best data, everyone knows this

As more and more infrastructure moves to the cloud, tools for dealing with the various cloud providers become more useful. If you have a session on an AWS EC2 instance, the new post/multi/gather/aws_ec2_instance_metadata module can grab EC2 metadata, which "can include things like SSH public keys, IPs, networks, user names, MACs, custom user data and numerous other things that could be useful in EC2 post-exploitation scenarios." Of particular interest in that list is custom user data.
People put all kinds of ridiculous things in places like that, and I would guess there is basically a 100% probability that the EC2 custom user data field has been used to store usernames and passwords.

Magical ELFs

For a while now, msfvenom has been able to produce ELF library (.so) files with the elf-so format option. Formerly, these only worked with the normal linking system, i.e., when an executable loads the library from /usr/lib or wherever, but due to a couple of otherwise unimportant header fields, they didn't work with LD_PRELOAD. For those who are unfamiliar with LD_PRELOAD, it's a little bit of magic that allows the linker to load up a library implicitly rather than as a result of the binary saying it needs that library. This mechanism is often used for debugging, so you can stub out functions or make them behave differently when you're trying to track down a tricky bug. It's also super useful for hijacking functions. This use case provides lots of fun shenanigans you can do to create a userspace rootkit, but for our purposes, it's often enough simply to run a payload, so a command like this:

LD_PRELOAD=./mettle.so /bin/true

will result in a complete mettle session running inside a /bin/true process.

New Modules

Exploit modules (1 new)

Windows Capcom.sys Kernel Execution Exploit (x64 only) by OJ Reeves and TheWack0lian

Auxiliary and post modules (3 new)

ColoradoFTP Server 1.3 Build 8 Directory Traversal Information Disclosure by RvLaboratory and h00die
MYSQL Directory Write Test by AverageSecurityGuy
Gather AWS EC2 Instance Metadata by Jon Hart

Get it

As always, you can update to the latest Metasploit Framework with a simple msfupdate, and the full diff since the last blog post is available on GitHub: 4.12.28...4.12.30. To install fresh, check out the open-source-only Nightly Installers, or the binary installers, which also include the commercial editions.
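As an aside, the behavior of the new checksum command described earlier can be approximated locally with Python's hashlib, which accepts the same hash-type names ("sha1", "md5", and so on). The file name and contents below are throwaway examples for illustration:

```python
# Approximate the described checksum command locally: first argument is the
# hash type (e.g. "sha1" or "md5"), the rest are file names.
import hashlib

def checksum(hash_type, *paths):
    """Return {path: hexdigest} for each file, hashed in streaming chunks."""
    results = {}
    for path in paths:
        h = hashlib.new(hash_type)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        results[path] = h.hexdigest()
    return results

with open("sample.bin", "wb") as f:  # throwaway file for demonstration
    f.write(b"hello")
print(checksum("sha1", "sample.bin")["sample.bin"])
# aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```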

Nexpose Receives AWS Certification

Rapid7's Nexpose just became the first Threat Exposure Management solution to complete AWS' new rigorous pre-authorized scanning certification process!Normally, a customer must request permission from AWS support to perform vulnerability scans. This request must be made for each vulnerability scan engine or penetration testing…

Rapid7's Nexpose just became the first Threat Exposure Management solution to complete AWS' rigorous new pre-authorized scanning certification process! Normally, a customer must request permission from AWS support to perform vulnerability scans. This request must be made for each vulnerability scan engine or penetration testing tool and renewed every 90 days. The new pre-authorized Nexpose scan engine streamlines the process: when a pre-authorized scan engine is launched from the AWS Marketplace, permission is instantly granted. This AWS certification effort is a proof point of our continued dedication to securing organizations' data and reducing their risk, and to ensuring our solutions address real customer needs and market trends. Cloud is increasingly an essential part of today's modern business networks and an area in which our customers invest. In October 2015, IDC reported that spend on public cloud IT infrastructure was on track to increase by 29.6% year over year, totaling $20.5 billion (1). The new AWS certification underscores our commitment to ease of use and provides customers with assets in AWS the same level of security and experience as an on-premise deployment. Organizations can easily gain visibility into their entire attack surface, regardless of where their assets sit. The new Nexpose certification means that customers can simply use our pre-authorized AMI to scan their AWS assets without any of the authorization or permissions required for non-authorized solutions.

Learn more:

How to use and set up: Nexpose Scan Engine on the AWS Marketplace
Pre-authorized AMI: Nexpose Scan Engine (Pre-authorized) on AWS Marketplace

(1) IDC's Worldwide Quarterly Cloud IT Infrastructure Tracker, October 2015.

Nexpose Scan Engine on the AWS Marketplace

Update September 2017: For even more enhanced capabilities, check out the AWS Web Asset Sync Discovery Connection. Rapid7 is excited to announce that you can now find a Nexpose Scan Engine AMI on the Amazon Web Services Marketplace making it simple to deploy a pre-authorized…

Update September 2017: For even more enhanced capabilities, check out the Amazon Web Services Asset Sync discovery connection.

Rapid7 is excited to announce that you can now find a Nexpose Scan Engine AMI on the Amazon Web Services Marketplace, making it simple to deploy a pre-authorized Nexpose Scan Engine from the AWS Marketplace to scan your AWS assets!

What is an AMI?

An Amazon Machine Image (AMI) allows you to launch a virtual server in the cloud. This means you can deploy Nexpose Scan Engines via the Amazon Marketplace without having to go through the process of configuring and installing them yourself.

What are the benefits?

The Marketplace includes a specially configured Nexpose Scan Engine that is pre-authorized for scanning AWS assets. This gives Rapid7 customers the ability to scan AWS assets immediately, or on a recurring schedule, without having to contact Amazon in advance for permission, a process that can take a number of days. Using a Nexpose Scan Engine deployed within the AWS network also allows you to scan private IP addresses and collect information which may not be available via public IP addresses (such as internal databases). Additionally, scanning private IPs eliminates the need to pay for elastic IPs.

How do I deploy a pre-authorized Scan Engine?

Current Nexpose customers can deploy the pre-authorized Nexpose Scan Engine as a remote scan engine for scanning AWS assets only. When creating your AWS discovery connection, simply check the box denoting that your scan engine is in the AWS network. You'll need a set of IAM credentials with permission to list assets in your AWS account. A minimal IAM policy to allow this looks like:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "NexposeScanEngine",
    "Effect": "Allow",
    "Action": [
      "ec2:DescribeInstances",
      "ec2:DescribeImages",
      "ec2:DescribeAddresses"
    ],
    "Resource": ["*"]
  }]
}

The pre-authorized scan engine must use the "engine-to-console" communication direction.
This means the Scan Engine will initiate communication with the Nexpose Console. Preparing your Nexpose Console to pair with a pre-authorized Scan Engine is simple:

Ensure the pre-authorized Scan Engine can communicate with your Nexpose Console on port 40815. You may need to open a firewall port to allow this.
Generate a temporary shared secret on your console. This is used to authorize the Scan Engine. A shared secret can be generated from the Administration -> Scan Options -> Engines -> manage screen. Scroll to the bottom and use the Generate button. Keep this page open; you'll need the secret when launching your Scan Engine.

Now you are ready to deploy your pre-authorized Nexpose Scan Engine. Sign into your AWS console and navigate to the Nexpose Scan Engine (Pre-authorized) AWS Marketplace listing. You must use EC2 user data to tell your engine how to pair with your console. Follow these steps to launch the engine:

Click Continue on the AWS Marketplace listing.
Accept the terms using the Accept Software Terms button. It can take up to 10 minutes for Amazon to process your request. You'll receive an email from Amazon when you can launch the AMI.
After you receive the email, refresh the Marketplace page. You should see several blue "Launch with EC2 Console" buttons.
Click the Launch with EC2 Console button in your desired AWS region.
Proceed with the normal process of launching an EC2 instance. When you get to the Instance Details screen, expand the Advanced Details section. Provide the following EC2 user data, replacing the bracketed sections with information about your Nexpose Console:

NEXPOSE_CONSOLE_HOST=<hostname or ip of your console>
NEXPOSE_CONSOLE_PORT=40815
NEXPOSE_CONSOLE_SECRET=<shared secret generated earlier>

Finish launching the EC2 instance. Once the instance boots, it can take 10-15 minutes to pair with the console.
Verify the engine pairs with the console via the engine listing in the console (Administration -> Scan Options -> Engines -> manage). With this one-time configuration set, you can create a schedule to scan your AWS assets.
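The pairing configuration above boils down to three key=value user-data lines, which a small helper can assemble. This is an illustrative sketch only; the console hostname and shared secret are placeholders for your own values:

```python
# Build the EC2 user data that tells a pre-authorized engine how to pair
# with the console. The host and secret arguments are placeholder values.

def engine_user_data(console_host, shared_secret, console_port=40815):
    """Return the three NEXPOSE_CONSOLE_* lines as a single user-data string."""
    return "\n".join([
        f"NEXPOSE_CONSOLE_HOST={console_host}",
        f"NEXPOSE_CONSOLE_PORT={console_port}",
        f"NEXPOSE_CONSOLE_SECRET={shared_secret}",
    ])

print(engine_user_data("console.example.com", "temp-shared-secret"))
```

Generating the block programmatically is handy if you launch engines from templates or scripts, since it keeps the port and variable names consistent.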

The real challenge behind asset inventory

As the IT landscape evolves, and as companies diversify the assets they bring to their networks - including on premise, cloud and personal assets - one of the biggest challenges becomes maintaining an accurate picture of which assets are present on your network. Furthermore, while…

As the IT landscape evolves, and as companies diversify the assets they bring to their networks, including on-premise, cloud, and personal assets, one of the biggest challenges becomes maintaining an accurate picture of which assets are present on your network. Furthermore, while the accurate picture is the end goal, the real challenge becomes optimizing the means to obtain that picture and keep it current. The traditional discovery paradigm of continuous discovery sweeps of your whole network is, by itself, becoming obsolete. As companies grow, sweeping becomes a burden on the network. In fact, in a highly dynamic environment, traditional sweeping approaches pretty quickly become stale and irrelevant.

Our customers are dealing with networks made up of thousands of connected assets. Many of them are decommissioned, and many others brought to life, multiple times a day from different physical locations on their local or virtual networks. In a world where many assets are not 'owned' by the organization, or where unauthorized/unmanaged assets such as mobile devices or personal computers connect to the network, understanding the risk those assets introduce is paramount to the success of a security program.

Rapid7 believes this very process of keeping your inventory up to date should be automated and instantaneous. Our technology allows our customers to use non-sweeping techniques like monitoring DHCP, DNS, Infoblox, and other relevant servers/applications. We also enable monitoring through technology partners such as vSphere or AWS for virtual infrastructure, and mobile device inventory with ActiveSync. In addition, Rapid7's research team, through its Sonar project (a topic that deserves its own blog post), is able to scan the internet and understand our customers' external presence.
All of these automated techniques provide great visibility and complement the traditional approaches, so that our customers' experience with our products revolves around taking action and reducing risk, as opposed to configuring the tool.

Why should you care? It really comes down to good hygiene and good security practices. It is unacceptable not to know about the presence of a machine that is exfiltrating data off of your network, or of rogue assets listening on your network. And beyond being unacceptable, it can take you out of business. Brand damage and legal and compliance risks are great concerns that are not mitigated by an accurate inventory alone; however, without knowing those assets exist in your network in a timely manner, it is impossible to assess the risk they bring and take action.

The SANS Institute rates this topic as the top security control: https://www.sans.org/critical-security-controls/control/1. They bring up key questions that companies should be asking their security teams: How long does it take to detect new assets on their networks? How long does it take their current scanner to detect unauthorized assets? How long does it take to isolate/remove unauthorized assets from the network? What details (location, department) can the scanner identify on unauthorized devices? And plenty more.

Let Rapid7 technology worry about inventory. Once you've got asset inventory covered, you can move on to remediation, risk analysis, and other much more fun security topics, with peace of mind that if it's in your network, you will detect it in a timely manner.

Securing the Shadow IT: How to Enable Secure Cloud Services for Your Business

You may fear that cloud services jeopardize your organization's security. Yet, your business relies on cloud services to increase its productivity. Introducing a policy to forbid these cloud services may not be a viable option. The better option is to get visibility into your shadow…

You may fear that cloud services jeopardize your organization's security. Yet your business relies on cloud services to increase its productivity. Introducing a policy to forbid these cloud services may not be a viable option. The better option is to get visibility into your shadow IT and to enable your business to use it securely to increase productivity and keep up with the market.

Step one: Find out which cloud services your organization is using

First, you'll want to figure out what is actually in use in your organization. Most IT departments we talk to underestimate how many cloud services are being used by a factor of 10. That's shocking. The easiest way to detect what services are commonly in use is by leveraging Rapid7 UserInsight, a solution for detecting and investigating security incidents from the endpoint to the cloud. For this step, UserInsight analyzes your web proxy, DNS, and firewall logs to outline exactly what services are in use and which users are subscribing to them. This is much easier than sifting through raw log files and identifying which cloud service may be behind a certain entry.

Step two: Have a conversation with employees using these services

Knowing who uses which services enables you to identify the users and have a conversation with them about why they use the service and what data is shared with it. UserInsight makes it easy to correlate web proxy, DNS, and firewall activity to a user because it keeps track of which user had which IP address on the corporate LAN, WiFi, and VPN. All of this information is just one click away. Based on this information, you can:

Move the users to a comparable but more secure service (e.g. from Dropbox to Box.com),
Talk with users about why a certain service is not suitable for use on the corporate network (e.g.
eDonkey), andEnable higher security on existing services by consolidating accounts under corporate ownership and enabling stronger monitoringStep three: Detect compromised accounts through geolocation of cloud and on-premise accountsCompromised credentials are leveraged in three out of four breaches, yet many organizations have no way to detect how credentials are being used. UserInsight can detect credential compromise in on-premise systems and in the cloud. One way to do this is through geolocation. If a user's mobile device accesses email in New York and then a cloud service is accessed from Germany within a time span of 20 minutes, this indicates a security incident that should be investigated.UserInsight integrates with dozens of cloud services, including Salesforce.com, Box.com, and Google Apps to geolocate authentications even if they happen outside of the corporate network. The solution correlates not only cloud-to-cloud authentications but also cloud-to-on-premise authentications, giving you much faster and higher quality detection of compromised credentials. With Amazon Web Services (AWS), UserInsight can even detect advanced changes, such as changed passwords, changes to groups, and removed user policies. Read more about UserInsight's ability to detect compromises of AWS accounts.Step four: Investigate potential exfiltration to cloud servicesIf attackers compromise your corporate network, they often use cloud storage services to exfiltrate information, even if the company is not even using a particular service. When investigating an incident that involves a certain compromised user, you can review that user's transmission volume to figure out if and how much data was exfiltrated this way. 
UserInsight makes this exceedingly easy, breaking volume down by user and enabling you to see the volume on a timeline.If you would like to learn more about how UserInsight can help you get more visibility into your organization's cloud service usage, enabling productive conversations and better cloud security, sign up for a free, guided UserInsight demo on the Rapid7 website.
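The per-user volume breakdown described in step four boils down to aggregating outbound bytes by user and flagging outliers. The sketch below illustrates the idea with hypothetical proxy-log records and an illustrative 500 MB review threshold; the field layout and threshold are assumptions for this example, not UserInsight internals.

```python
from collections import defaultdict

# Hypothetical proxy-log records: (timestamp, user, destination, bytes_uploaded).
RECORDS = [
    ("2014-08-20T09:15:00", "maria", "dropbox.com", 120_000_000),
    ("2014-08-20T09:40:00", "maria", "dropbox.com", 480_000_000),
    ("2014-08-20T10:05:00", "bob",   "box.com",       2_000_000),
]

def upload_volume_by_user(records):
    """Sum outbound bytes per user so unusually large transfers stand out."""
    totals = defaultdict(int)
    for _, user, _, nbytes in records:
        totals[user] += nbytes
    return dict(totals)

def flag_heavy_uploaders(totals, threshold_bytes=500_000_000):
    """Return users whose total upload volume exceeds the review threshold."""
    return sorted(user for user, nbytes in totals.items() if nbytes > threshold_bytes)

totals = upload_volume_by_user(RECORDS)
print(flag_heavy_uploaders(totals))  # maria's 600 MB total exceeds the threshold
```

In practice you would bucket the same totals by time window as well, which is what makes the timeline view useful for pinpointing when the exfiltration happened.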

Detecting Compromised Amazon Web Services (AWS) Accounts


As you move more of your critical assets to Amazon Web Services (AWS), you'll need to ensure that only authorized users have access. Three out of four breaches use compromised credentials, yet many companies struggle to detect their use. UserInsight enables organizations to detect compromised credentials, from the endpoint to the cloud. Through its AWS integration, Rapid7 UserInsight monitors all administrator access to Amazon Web Services, so you can detect compromised credentials before they turn into a data breach. Specifically, UserInsight helps you detect these security incidents:

Geolocating AWS authentications with other user authentications to detect compromised credentials

UserInsight tracks from where in the world your AWS administrators are logging in, even when they are outside the corporate network. You will be alerted when a user logs in from two distant locations in a short period of time, indicating a compromised account. This works even when only one of the authentications is to AWS. For example, it will tell you if Maria logs into Amazon Web Services from Beijing within only 20 minutes of logging onto the VPN from New York.

Alerting on AWS access by users whose corporate accounts have been disabled

Most companies have great processes for deprovisioning Windows accounts when a user leaves the organization, but cloud accounts are often overlooked. UserInsight alerts you if a user whose LDAP account has been disabled still logs into Amazon Web Services, even if the AWS account is accessed from outside your corporate network.

Visibility into users with administrative privileges, whether on-premise or in the cloud

Keeping track of which employees have administrative privileges can be a challenge. UserInsight keeps a running list of every user who has administrative privileges. If users log into AWS, they are automatically added to the administrators' list, giving you full visibility.

Full logging of all AWS administrative activity

UserInsight monitors all administrative changes to your AWS account, including adding or removing a user from a group, creating or changing passwords, modifying or removing a user policy, and deleting access keys. You can correlate this activity on a graph and zoom into periods that show suspicious activity.

Detecting which employees use AWS accounts not provisioned by IT

Keeping track of shadow IT is tough. UserInsight gives you instant visibility into which users use which web services, including AWS accounts. This enables you to quickly and easily identify non-sanctioned accounts, helping you to consolidate AWS activities. This not only helps your security posture but also enables you to get volume pricing instead of paying list prices for smaller pockets across the organization.

Amazon Web Services is only one of more than a hundred cloud applications for which UserInsight detects compromises. If you'd like to hear more about how Rapid7 UserInsight can detect incidents from the endpoint to the cloud, visit us at Amazon re:Invent in Las Vegas, Booth #637 in Sands Expo Hall C, or request a free guided UserInsight demo on the Rapid7 website. Not ready? See how Rapid7 products and services help you detect attacks leveraging compromised credentials here.
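The geolocation alert described above is often called "impossible travel" detection: if two authentications imply a travel speed no airplane could achieve, one of them is probably an attacker. Here is a minimal sketch of that check using the haversine great-circle distance; the 900 km/h cutoff and the coordinates are illustrative assumptions, not documented UserInsight thresholds.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def impossible_travel(auth_a, auth_b, max_speed_kmh=900):
    """Flag two authentications whose implied travel speed exceeds an airliner's.

    Each auth is (iso_timestamp, latitude, longitude). The 900 km/h cutoff is
    an illustrative assumption.
    """
    (t1, la1, lo1), (t2, la2, lo2) = sorted([auth_a, auth_b])  # order by timestamp
    hours = (datetime.fromisoformat(t2) - datetime.fromisoformat(t1)).total_seconds() / 3600
    distance = haversine_km(la1, lo1, la2, lo2)
    if hours == 0:
        return distance > 0  # simultaneous logins from different places
    return distance / hours > max_speed_kmh

# Maria: VPN login in New York, then an AWS login from Beijing 20 minutes later.
vpn_ny = ("2014-08-20T09:00:00", 40.71, -74.01)
aws_beijing = ("2014-08-20T09:20:00", 39.90, 116.41)
print(impossible_travel(vpn_ny, aws_beijing))  # True: ~11,000 km in 20 minutes
```

A production detector would also have to account for GeoIP inaccuracy, VPN egress points, and mobile carrier NAT, which is why correlating authentications to a known user identity matters so much.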

Federal Friday - 8.22.14 - A Sensitive Cloud and Some Additional Strategy


Happy Friday, Federal Friends! Do you hear that? That sound you're hearing is the collective high-five every adult with children just gave each other in celebration of "Back to School." For those of you whose summah is coming to a close, I hope it has been a great couple of months. For those of you that don't have to worry about that, I'll see ya at the empty beach in September.

I read a great article this week about another take on cyber strategy. Piggybacking on my last post, this article harkened back to antiquity and turned to the seas for some answers. The article on ISN used the theories of maritime strategist Sir Julian Corbett as an outline. Below are the six points the author highlights. I highly suggest reading the article for the full description of each point.

- Cyber is tied to national power
- Cyber operations are interdependent with other operations
- Cyber Lines of Communications (CLOCs)
- Offensive strategy
- Defensive strategy
- Dispersal and concentration

Another little ditty that caught my eye this week came from GCN. This article hones in on the word of the decade: cloud. DISA just approved Amazon Web Services as the first cloud vendor approved under the DOD's Cloud Security Model. This is a major step forward for the DOD and the advancement of government infrastructure as a whole. Should AWS be successful at "security impact levels" 3-5, it could lead to wider adoption throughout the Fed. Additionally, as AWS commercial customers have seen, this should lead to additional efficiencies and lower costs for the DOD. A win-win for a sector that is continually facing the brunt of manpower deficiencies and budgetary restrictions... remember the Sequester? Anyone else remember Tweeter's endzone dance?

How To Run Penetration Tests From The Amazon Cloud - Without Getting Into Trouble


Metasploit Pro is available as an Amazon Machine Image (AMI), so it can easily be run in the Amazon cloud to conduct external penetration tests. This is especially useful since several team members can use the same instance of Metasploit Pro in the cloud through Metasploit Pro's web-based user interface, even if they are working on different projects at the same time. Before you start a penetration test, there are a few things to note so you don't violate Amazon's policies:

- Amazon requires customers to obtain authorization for penetration testing (or vulnerability assessments) both from and to their AWS resources. Amazon offers a form (linked from this page - requires login) to streamline this process. AWS Security will then add your source and destination addresses to a whitelist for the duration of your penetration test.
- As always, you need legal permission from the owners of the assets you are conducting your security tests on.
- AWS Security will revoke your whitelist privileges if they receive any complaints or reports about DoS attacks.
- Amazon currently doesn't permit testing m1.small or t1.micro instance types, to prevent performance impacts on resources shared with others.

If you'd like to find out more, read Amazon's full policies on penetration testing and vulnerability management. Have you conducted penetration tests on or from the Amazon cloud? Please comment on this blog to share your experience! To try out Metasploit Pro, get the free trial!
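Since the instance-type restriction is easy to overlook when building a target list, it can be worth filtering scan targets mechanically before requesting authorization. This sketch shows the idea with hypothetical instance data; the IDs are made up, and the disallowed-type set reflects the policy as described above, which you should verify against Amazon's current rules.

```python
# Instance types Amazon's pen-testing policy excluded at the time of writing.
DISALLOWED_TYPES = {"m1.small", "t1.micro"}

def eligible_targets(instances):
    """Return instance IDs whose types the policy permits for penetration testing.

    `instances` is a list of (instance_id, instance_type) pairs, e.g. as
    collected from your own AWS inventory.
    """
    return [iid for iid, itype in instances if itype not in DISALLOWED_TYPES]

instances = [
    ("i-0a1b2c3d", "m3.large"),
    ("i-0e4f5a6b", "t1.micro"),   # excluded by policy
    ("i-0c7d8e9f", "m1.small"),   # excluded by policy
]
print(eligible_targets(instances))  # ['i-0a1b2c3d']
```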
