Rapid7 Blog

DevOps  

Introducing InsightOps: A New Approach to IT Monitoring and Troubleshooting

Today we are announcing the general availability of a brand new solution: Rapid7 InsightOps. This latest addition to the Insight platform continues our mission to transform data into answers, giving you the confidence and control to act quickly. InsightOps is Rapid7's first IT-specific solution, enabling…

Today we are announcing the general availability of a brand new solution: Rapid7 InsightOps. This latest addition to the Insight platform continues our mission to transform data into answers, giving you the confidence and control to act quickly. InsightOps is Rapid7's first IT-specific solution, enabling users to centralize data from infrastructure, assets, and applications so they can monitor and troubleshoot operational issues.

Getting in with the IT crowd

Every day, IT and security teams work hand in hand to keep their organizations secure, optimized, and operational. Yet today's IT environment is more complex than ever. Infrastructure is hosted across physical servers, virtual machines, Docker containers, and cloud services. The corporate network is accessed by internal and remote employees, from a mix of known and unknown devices, all running applications that are both internally hosted and cloud-based.

This complexity creates enormous amounts of data, dispersed across the modern IT environment. Managing this data is critical, but for most resource-constrained IT and security teams it's simply too complex or too expensive to monitor it all. And unmonitored IT data creates risk.

That's where Rapid7 comes in. Today, our customers leverage the Rapid7 Insight platform to collect data from across their entire IT environment, identifying security vulnerabilities with Rapid7 InsightVM and catching attackers in the act with Rapid7 InsightIDR. InsightOps builds on this, enabling them to manage and optimize IT operations across their technology landscape.

Introducing Rapid7 InsightOps

We built InsightOps to be easy to set up and scale. It requires no infrastructure to run and no configuration of indexers to search, and you can collect data in any format from anywhere in your environment. With your data centralized in one place, it's easier to monitor for known issues or anomalous trends. Monitoring with InsightOps helps you proactively address issues before they become widespread.

Ultimately, InsightOps was built to turn IT data into answers. With features like Visual Search and Endpoint Interrogator, it's easier to get answers from your data without ever typing a search query. And log data is just the beginning. Sometimes you need answers directly from your IT assets, like what software is running on an employee workstation or which servers are over 75% disk utilization. InsightOps combines log management with IT asset visibility and interrogation, enabling you to trace issues all the way from discovery to resolution.

Ready to transform your unmonitored IT data into answers? Start your free 30-day trial of InsightOps today.

DevOps: Vagrant with AWS EC2 & Digital Ocean

The Benefits of Vagrant Plugins Following on from my recent DevOps blog posts, The DevOps Tools We Use & How We Use Them and Vagrant with Chef-Server, we will take another step forward and look into provisioning our servers in the cloud. There are many…

The Benefits of Vagrant Plugins

Following on from my recent DevOps blog posts, The DevOps Tools We Use & How We Use Them and Vagrant with Chef-Server, we will take another step forward and look into provisioning our servers in the cloud. There are many cloud providers out there, most of which provide some sort of API. Dealing with the different APIs and scripts can become cumbersome and confusing when your main focus is a fault-tolerant, scalable system. This is where Vagrant and many of its plugins shine.

Vagrant has a wide range of plugins, from handling Chef and Puppet to provisioning servers on many different cloud providers. We're going to focus on three plugins in particular: vagrant-aws, vagrant-digitalocean, and vagrant-omnibus. The AWS and Digital Ocean plugins allow us to utilize both our Chef-server and the public infrastructure provided by Amazon and Digital Ocean. The omnibus plugin is used for installing a specified version of Chef on your servers.

Installation of Vagrant Plugins

To install the Vagrant plugins, run the following commands:

vagrant plugin install vagrant-aws
vagrant plugin install vagrant-digitalocean
vagrant plugin install vagrant-omnibus

Running vagrant plugin list should give you the following output:

vagrant-aws (0.4.1)
vagrant-digitalocean (0.5.3)
vagrant-omnibus (1.3.1)

More information on the plugins can be found here:

https://github.com/mitchellh/vagrant-aws
https://github.com/smdahlen/vagrant-digitalocean
https://github.com/schisamo/vagrant-omnibus

Amazon AWS

The first cloud provider we will look at is Amazon AWS. If you don't already have an account, I suggest you sign up for one here and have a quick read of their EC2_GetStarted docs. Once signed up, you will need to generate an account access key and secret key. To do this, follow the instructions below:

1. Go to the IAM console.
2. From the navigation menu, click Users.
3. Select your IAM user name.
4. Click User Actions, and then click Manage Access Keys.
5. Click Create Access Key.

Your keys will look something like this:

Access key ID example: ABCDEF0123456789ABCD
Secret access key example: abcdef0123456/789ABCD/EF0123456789abcdef

Click Download Credentials, and store the keys in a secure location.

Vagrant AWS with Chef-Server

Now we have everything to get started. Again we will create a Vagrantfile similar to the one in my last blog post, DevOps: Vagrant with Chef-Server; however, this time we will omit a few things like config.vm.box and config.vm.box_url. The reason for this is that we are going to point our Vagrantfile at an Amazon AMI instead.
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_plugin 'vagrant-aws'
Vagrant.require_plugin 'vagrant-omnibus'

Vagrant.configure("2") do |config|
  config.omnibus.chef_version = :latest
  config.vm.synced_folder '.', '/vagrant', :disabled => true
  config.vm.box = "dummy"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

  # Provider
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "ABCDEF0123456789ABCD"
    aws.secret_access_key = "abcdef0123456/789ABCD/EF0123456789abcdef"
    aws.keypair_name = "awskey"
    aws.ami = "ami-9de416ea" # Ubuntu 12.04 LTS
    aws.region = "eu-west-1"
    aws.instance_type = "t1.micro"
    aws.security_groups = ["default"]
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "path/to/your/awskey.pem"
    aws.tags = { 'Name' => 'Java' }
  end

  # Provisioning
  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = 'https://api.opscode.com/organizations/logentries'
    chef.validation_key_path = '../../.chef/logentries-validator.pem'
    chef.validation_client_name = 'logentries-validator'
    chef.log_level = 'info'
    chef.add_recipe 'java_wrapper'
  end
end

You should replace the access keys, key pair details, and Chef settings above with your own. The only difference this time when running Vagrant is that you need to pass a provider argument, so your command should look like this:

vagrant up --provider=aws

Once your Chef run has completed you will have a new instance in Amazon running your specified version of Java. Your output should look similar to this:

root@ip-172-31-28-193:~# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

Digital Ocean

Another cloud provider which has been gaining a lot of attention lately is Digital Ocean. If you don't already have a Digital Ocean account, you can sign up here. Also have a look at their getting started guide if you're new to Digital Ocean. The main difference between this Vagrantfile and the AWS one is the provider block:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_plugin 'vagrant-digitalocean'
Vagrant.require_plugin 'vagrant-omnibus'

Vagrant.configure('2') do |config|
  config.vm.provider :digital_ocean do |provider, override|
    provider.client_id = 'abcdef0123456789ABCDEF'
    provider.api_key = 'abcdef0123456789abcdef0123456789'
    provider.image = 'Ubuntu 12.04.3 x64'
    provider.region = 'Amsterdam 1'
    provider.size = '512MB'
    provider.ssh_key_name = 'KeyName'
    override.ssh.private_key_path = '/path/to/your/key'
    override.vm.box = 'digital_ocean'
    override.vm.box_url = "https://github.com/smdahlen/vagrant-digitalocean/raw/master/box/digital_ocean.box"
    provider.ca_path = "/usr/local/opt/curl-ca-bundle/share/ca-bundle.crt"
  end

  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = 'https://api.opscode.com/organizations/logentries'
    chef.validation_key_path = '../../.chef/logentries-validator.pem'
    chef.validation_client_name = 'logentries-validator'
    chef.log_level = 'info'
    chef.node_name = 'do-java'
    chef.add_recipe 'java_wrapper'
  end
end

You should replace the credentials, image, and key settings above with your own. This time when we run vagrant up, we pass a provider parameter of digital_ocean:

vagrant up --provider=digital_ocean

Once the Vagrant run has completed, SSHing into your server and running java -version should give you the following output.
root@do-java:~# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

If you do run into issues, it might be that your cookbooks are out of date. Try running the following:

librarian-chef update
knife cookbook upload -a

Now, with the above scripts at your disposal, you can use Chef-server with Vagrant and as many different cloud providers as you wish, keeping your systems up to date, in sync, and running, and your customers happy!

Ready to bring Vagrant to your production and development? Check out our free log management tool and get started today.
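One refinement worth considering before you commit Vagrantfiles like these to source control: avoid hardcoding your AWS credentials. Below is a minimal sketch of the provider block reading them from the environment instead; it assumes you export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your shell (those variable names are a common convention, not something the plugin requires):

# Read credentials from the environment instead of hardcoding them.
# ENV.fetch raises a clear error at "vagrant up" time if a variable
# is missing, rather than failing later against the AWS API.
config.vm.provider :aws do |aws, override|
  aws.access_key_id     = ENV.fetch('AWS_ACCESS_KEY_ID')
  aws.secret_access_key = ENV.fetch('AWS_SECRET_ACCESS_KEY')
  # ... remaining aws.* and override.ssh.* settings as in the example above ...
end

This keeps secrets out of version control and lets each team member (or CI job) supply their own keys.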

The Ransomware Chronicles: A DevOps Survival Guide

NOTE: Tom Sellers, Jon Hart, Derek Abdine and (really) the entire Rapid7 Labs team made this post possible. On the internet, no one may know if you're of the canine persuasion, but with a little time and just a few resources they can easily determine…

NOTE: Tom Sellers, Jon Hart, Derek Abdine and (really) the entire Rapid7 Labs team made this post possible.

On the internet, no one may know if you're of the canine persuasion, but with a little time and just a few resources they can easily determine whether you're running an open "devops-ish" server or not. For this post, we're loosely defining devops-ish as:

MongoDB
CouchDB
Elasticsearch

but we have a much broader definition and more data coming later this year. We use the term "devops" because these technologies tend to be used by individuals or shops that are emulating the development and deployment practices found in the DevOps communities (https://en.wikipedia.org/wiki/DevOps). Why are we focusing on devops-ish servers? I'm glad you asked!

The Rise of Ransomware

If you follow IT news, you're likely aware that attackers who are focused on ransomware for revenue generation have taken to the internet, searching for easy marks to prey upon. In this case, the would-be victims are those running production database servers directly connected to the internet with no authentication. Here's a smattering of news articles on the subject:

MongoDB mauled! http://www.zdnet.com/article/mongodb-ransacked-now-27000-databases-hit-in-mass-ransom-attacks/
Elasticsearch exposed! http://www.pcworld.com/article/3157417/security/after-mongodb-ransomware-groups-hit-exposed-elasticsearch-clusters.html
CouchDB crushed! http://www.pcworld.com/article/3159527/security/attackers-start-wiping-data-from-couchdb-and-hadoop-databases.html

The core reason attackers are targeting devops-ish technologies is that most of these servers ship with default configurations that have tended to be wide open (i.e. they listen on all IP addresses and have no authentication) to facilitate easy experimentation and exploration. That configuration means you can give a new technology a test run on your local workstation to see if you like the features or API, but it also means that, if you're not careful, you'll be exposing real data to the world if you deploy the same way on the internet.

Attackers have been ramping up their scans for these devops-ish services. We've seen this activity in our network of honeypots (Project Heisenberg), and we'll be showing probes for more services, including CouchDB, in an upcoming post/report.

When attackers find targets, they often take advantage of these open configurations by encrypting the contents of the databases and leaving little "love notes" in the form of table names or index names with instructions on where to deposit bitcoins to get the keys back to your data. In other cases, the contents of the databases are dumped and kept by the attacker but wiped from the target, with a ransom then demanded for the return of the kidnapped data. In still other cases, the data is wiped from the target and not kept by the attackers at all, leaving anyone who gives in to these demands with a double whammy: paying the ransom and getting no data in return. Not all exposed and/or ransomed services contain real data, but attackers have automated the process of finding and encrypting target systems, so it doesn't matter to them if they corrupt test databases that would just get deleted anyway; it costs them no extra time or money. And, because the captive systems are still wide open, there have been cases where multiple attacker groups have encrypted the same systems, so at least they fight amongst themselves as well as attack you.
Herding Servers on the Wide-Open Range Internet

Using Project Sonar (http://sonar.labs.rapid7.com) we surveyed the internet for these three devops databases. NOTE: we have a much larger ongoing study that includes a myriad of devops-ish and "big data" technologies, but we're focusing on these three servers for this post given the timeliness of their respective attacks.

We try to be good Netizens, so we have more rules in place when it comes to scanning than others do. For example, if you ask us not to scan your internet subnet, we won't. We will also never perform scans requiring credentials/authentication. Finally, we're one of the more prolific telemetry gatherers, which means many subnets choose to block us. I mention this first since many readers will be apt to compare our numbers with the results from their own scans or from other telemetry resources. Scanning the internet is a messy bit of engineering, science, and digital alchemy, so there will be differences between various researchers. We found:

~56,000 MongoDB servers
~18,000 Elasticsearch servers
~4,500 CouchDB servers

Of those, 50% of MongoDB servers were captive, 58% of Elasticsearch servers were captive, and 10% of CouchDB servers were captive. A large percentage of each of these devops-ish databases are in "the cloud," and several of the providers do publish secure deployment guides, like this one for MongoDB from Digital Ocean: https://www.digitalocean.com/community/tutorials/how-to-securely-configure-a-production-mongodb-server. However, others have no such guides, or have broken links to such guides, and most do not offer base images that are secure by default when it comes to these services.

Exposed and Unaware

If you do run one of these databases on the internet, it would be wise to check your configuration to ensure that you are not exposing it to the internet, or at the very least that you have authentication enabled and rudimentary network security groups configured to limit access. Attackers are continuing to scan for open systems and will continue to encrypt and hold systems for ransom. There's virtually no risk in it for them, and it's extremely easy money, since the reconnaissance for and subsequent attacking of exposed instances likely often happens from behind anonymization services or from unwitting third-party nodes compromised previously.

Leaving the configuration open can cause other issues beyond exposing the functionality of the service(s) in question. Over 100 of the CouchDB servers are exposing some form of PII (going solely by table/db name), and much larger percentages of the open MongoDB and Elasticsearch databases seem to have some interesting data available as well. Yes, we can see your table/database names. If we can, so can anyone who makes a connection attempt to your service. We (and attackers) can also see configuration information, meaning we know just how out of date your servers, like MongoDB, are. So, while you're checking how secure your access configurations are, it may also be a good time to ensure that you are up to date on the latest security patches (the story is similarly sad for CouchDB and Elasticsearch).

What Can You Do?

Use automation (most of you are deploying in the cloud), and within that automation use secure configurations.
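As a concrete illustration of what "secure configuration" means here, the following is a minimal sketch of a mongod.conf in the YAML format MongoDB 2.6+ uses; it binds the service away from public interfaces and requires authentication. The 10.0.0.5 address is just a placeholder for your own private IP:

# Minimal hardened mongod.conf sketch (YAML format, MongoDB 2.6+).
# bindIp limits which interfaces mongod listens on; localhost plus a
# private address keeps the service off the public internet entirely.
net:
  bindIp: 127.0.0.1,10.0.0.5   # placeholder private IP; never 0.0.0.0
  port: 27017
security:
  authorization: enabled        # require clients to authenticate

After restarting with a configuration like this (and creating an admin user), an unauthenticated connection from outside can no longer list your databases, which is exactly the reconnaissance step these ransom campaigns rely on. CouchDB and Elasticsearch expose equivalent bind-address and authentication settings; the security guides linked below cover them.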
Each of the three technologies mentioned has a security guide:

CouchDB: http://docs.couchdb.org/en/2.0.0/intro/security.html
Elasticsearch: https://www.elastic.co/blog/found-elasticsearch-security
MongoDB: https://docs.mongodb.com/manual/security/

It's also wise to configure your development and testing environments the same way you do production (hey, you're the one who wanted to play with devops-ian technologies, so why not go full monty?). You should also configure your monitoring services and vulnerability management program to identify and alert if your internet-facing systems are exposing an insecure configuration. Even the best shops make deployment mistakes on occasion.

If you are a victim of a captive server, there is little you can do to recover outside of restoring from backups. If you don't have backups, it's up to you to decide just how valuable your data is/was before you consider paying a ransom. If you are a business, also consider reporting the issue to the proper authorities in your locale as part of your incident response process.

What's Next?

We're adding more devops-ish and data-science-ish technologies to our Sonar scans and Heisenberg honeypots and putting together a larger report to help provide context on the state of the exposure of these services, and to try to give you some advance notice as to when attackers are preying on new server types. If there are database or server technologies you'd like us to include in our more comprehensive study, drop a note in the comments or to research@rapid7.com.

Burning sky header image by photophilde, used CC-BY-SA

Honing Your Application Security Chops on DevSecOps

Integrating Application Security with Rapid Delivery Any development shop worth its salt has been honing their chops on DevOps tools and technologies lately, either sharpening an already practiced skill set or brushing up on new tips, tricks, and best practices. In this blog, we'll examine…

Integrating Application Security with Rapid Delivery

Any development shop worth its salt has been honing its chops on DevOps tools and technologies lately, either sharpening an already practiced skill set or brushing up on new tips, tricks, and best practices. In this blog, we'll examine how the rise of DevOps and DevSecOps has helped to speed application development while simultaneously enabling teams to embed application security earlier in the software development lifecycle, in automated ways that don't delay development timeframes or require major time investments from developers and QA teams.

What is DevOps?

DevOps is a set of methodologies (people, process, and tools) that enable teams to ship better code, faster. DevOps enables cross-team collaboration designed to support the automation of software delivery and decrease the cost of deployment. The DevOps movement has established a culture of collaboration and an agile relationship that unites the Development, Quality Engineering, and Operations teams with a set of processes fostering high levels of communication and collaboration. Collaboration between these three groups is critical because of the inherent conflict between development organizations being pressured to ship new features faster and operations groups being encouraged to slow things down to be sure that performance and security are up to snuff.

DevSecOps and Application Security

Getting new code out to production faster is a great goal that often drives new business; however, in today's world that goal needs to be balanced with addressing security. DevSecOps is really an extension of the DevOps concept. According to DevSecOps.org, it builds on the mindset that "everyone is responsible for security" with the goal of safely distributing security decisions at speed and scale to those who hold the highest level of context, without sacrificing the safety required.

Web application attacks continue to be the most common breach pattern, confirming what we have known for some time: web applications are a preferred vector for malicious actors, and they are difficult to protect and secure. According to the 2016 Verizon Data Breach Report, 40% of the breaches analyzed for the 2016 DBIR were web app attacks. Today's web and mobile applications pose risks to organizational security that must be addressed. There are several well-known classes of vulnerabilities that can be present in applications; SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, and Remote Code Execution are some of the most common.

Why are Applications a Primary Target?

Applications have become a primary target for attackers for the following reasons:

1. They are open for business and easily accessible. Companies rely on firewalls and network segmentation to protect critical assets, but applications must be exposed to the internet in order to be used by customers. They are therefore easy to reach compared to other critical infrastructure, and malicious attackers can masquerade as legitimate, desired traffic.

2. They hold the keys to the data kingdom. Web applications frequently communicate with databases, file shares, and other critical information stores. Because they sit so close, a compromised application makes it easier to reach this data, which can often be some of the most valuable: credit card numbers, PII, SSNs, and proprietary information can be just a few steps away from the application.

3. Penetrating applications is relatively easy.
There are tools available to attackers that allow them to point and shoot at a web application to discover exploitable vulnerabilities.

Embed Application Security Early in the SDLC: A Strategic Approach

So, we know that securing applications is critical. We also know that most application vulnerabilities are found in the source code. It stands to reason, then, that application vulnerabilities are really just application defects and should be treated as such. Dynamic Application Security Testing (DAST) is one of the primary methods for scanning web applications in their running state to find vulnerabilities, which are usually security defects that require remediation in the source code. These DAST scans help developers identify real, exploitable risks and improve security.

Typically, speed and punctiliousness don't go hand in hand, so why would you go about mixing two things that might be thought of as having a natural polarity? There are several reasons that implementing a web application scan early in the SDLC as part of DevOps can be beneficial, and there are ways to do it so that it doesn't take additional time from developers or testers: it can be baked in as part of your SDLC and your Continuous Integration process (a sketch of such a CI step appears at the end of this section).

When dynamic application security testing first became popular, security experts often conducted the tests at the end of the software development lifecycle. That only served to frustrate developers, increase costs, and delay timelines. We have known for some time now that the best solution is to drive application security testing early into the lifecycle, along with secure coding training. Microsoft was one of the early pioneers of this with the introduction of its Security Development Lifecycle (SDL), one of the first well-known programs to explicitly state that security must be baked into the software development lifecycle early and at every stage of development, not bolted on at the end.

The benefits of embedding application security earlier into the SDLC are well understood. If you treat security vulnerabilities like any other software defect, you save money and time by finding them earlier, while developers and testers are still working on the release.

Reduced risk exposure: The faster you find and fix vulnerabilities in your web applications, the less risk you carry. If you can find a vulnerability before it hits production, you've prevented a potential disaster, and the faster you remove vulnerabilities from production, the less exposure you face.

Reduced remediation effort: If a vulnerability is found earlier in the SDLC, it's going to be easier and less expensive to fix, for several reasons. The code is fresh; the developer is familiar with it and can jump in and fix it without having to dig up old skeletons in the code. There is less context switching (context switching is bad) when we find security defects during the development process. Additionally, if a vulnerability is found early, it is much more likely that there won't be other code relying on it, so it can be changed more safely. Finally, new code is less likely to be burdened with tech debt and is therefore easier to fix.

Reduced schedule delays: Security experts are well aware that development teams don't want to be slowed down. By embedding application security earlier in the SDLC, we can avoid the time delays that come with testing during later stages.

These factors should help explain why incorporating application security into a DevOps mentality makes sense.
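To make the "baked into CI" idea concrete, here is a minimal sketch of a post-deploy CI step, written as a shell script that a Jenkins or Hudson job could run against a staging deploy. The dast-scan CLI, its flags, and the staging URL are hypothetical placeholders; substitute your scanner's actual command line (for AppSpider, consult the vendor's CI integration documentation):

#!/bin/sh
# Hypothetical CI step: run a DAST scan against staging after deploy.
# "dast-scan" is a placeholder CLI, not a real tool; swap in your
# scanner's command and flags.
set -e

STAGING_URL="https://staging.example.com"   # placeholder target

# Kick off a "point and shoot" scan and save a machine-readable report.
dast-scan --target "$STAGING_URL" --report scan-results.json

# Fail the build if the (hypothetical) report contains high-severity
# findings, so vulnerable code never reaches production.
if grep -q '"severity": "high"' scan-results.json; then
    echo "High-severity vulnerabilities found; failing build." >&2
    exit 1
fi

Wiring a step like this into the nightly or per-commit build is what turns "scan before release" from a policy into an automatic gate.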
So how can a security-focused IT staff member help the developers get excited about this?

Adopting a DevSecOps Mindset for Application Security: Best Practices

Build a Partnership. Partnership and collaboration are what DevOps is all about. Sit down with your development team and explain that you aren't trying to slow them down at all; you simply want to help them secure the awesome stuff they are building. Help them learn by explaining the risk. The ubiquitous "ALERT(XSS)" doesn't do a good enough job of pointing out the significance of a cross-site scripting vulnerability, so talk your developers through the real-world impact and risks.

Conduct Secure Code Training. Schedule some "Lunch-n-Learn" or similar sessions to explain how these vulnerabilities can emerge in code. Discuss parameterization and data sanitization so developers are familiar with these topics (a short example follows at the end of this section). The more aware of secure coding practices the developers are, the less likely they are to introduce vulnerabilities into the application's code base.

Know the Applications. It helps when the security expert understands the code base. Try to work with your developers to learn the code base so you can help highlight serious vulnerabilities and clearly capture risk levels.

Security Test Early, Fail Fast. Failure isn't typically a good word, but failing fast and early is an agile development mindset that is applicable to application security. If you test early and often, you can find and fix vulnerabilities faster and more easily. The earlier new code is tested for security vulnerabilities, the easier it is to fix.

Security Test Frequently. Test your code when new changes are introduced so that critical risks don't make it past staging. Fixing issues is easier when they are fresh. Scan new code in staging before it hits production to reduce risk and speed remediation of issues.

Integrate Security with Existing Tools. Find a solution that embeds dynamic security testing early in your software development lifecycle by integrating with your existing tools. Seamlessly integrating security into the development lifecycle will make it easier to adopt. Here are some of the most effective ways of integrating security testing into the SDLC:

Continuous Integration: Many organizations achieve early SDLC security testing by integrating their DAST solutions into their Continuous Integration solutions (Hudson, Jenkins, etc.) to ensure security testing is conducted easily and automatically before the application goes into production. This requires an application security scanner that works well in "point and shoot" mode and includes an open API for running scans. Ask your vendor how their scanner would fit into your CI environment.

Issue Tracking: Another effective strategy for building application security early into the SDLC is ensuring your application security solution automatically sends security defects to the issue tracking solution, like Jira, that is used by your development and QA teams.

Test Automation: Many QA teams are having success leveraging their pre-built automated functional tests to help drive security testing and make security tests even more effective. This can be done through browser automation solutions like Selenium.

Rapid7's AppSpider is built with this in mind and includes a broad range of integrations to suit your team's needs. Learn more about how AppSpider helps drive application security earlier into the SDLC in this video.
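As training material for the parameterization discussion above, a small before-and-after example helps. This sketch uses Ruby's sqlite3 gem (an assumption for illustration; any database driver with bound parameters works the same way), and the table and column names are made up:

require 'sqlite3'

# Training example: the same lookup written unsafely and safely.
db = SQLite3::Database.new('app.db')
email = "alice@example.com"   # imagine this value arrived from a web form

# VULNERABLE: interpolating user input straight into the SQL string
# means input like "' OR '1'='1" rewrites the query itself.
# rows = db.execute("SELECT * FROM users WHERE email = '#{email}'")

# SAFE: a bound parameter is always treated as data, never as SQL.
rows = db.execute('SELECT * FROM users WHERE email = ?', [email])
puts rows.inspect

Walking developers through why the first version breaks and the second doesn't is far more memorable than another "ALERT(XSS)"-style proof of concept.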
AppSpider is a DAST solution designed to help application security folks test applications both as part of DevOps and as part of a scheduled scanning program. Thanks for reading and have a great day.

Top 3 Takeaways from the "Skills Training: How to Modernize your Application Security Software" Webcast

In a recent webcast, Dan Kuykendall, Senior Director of Application Security Products at Rapid7, gave his perspective on how security professionals should respond to applications, attacks, and attackers that are changing faster than security technology. What should you expect for your application security solutions and…

In a recent webcast, Dan Kuykendall, Senior Director of Application Security Products at Rapid7, gave his perspective on how security professionals should respond to applications, attacks, and attackers that are changing faster than security technology. What should you expect from your application security solutions, and what are some of the strategies you can use to effectively update your program? Read on for the top takeaways from the webcast "Skills Training: How to Modernize your Application Security Software":

Expect more from automation: It's important to leverage as much automation as possible. Make sure the tools you are using cover the newer and more difficult technologies like AJAX, JSON, and shopping carts.

REST support is essential: You must start understanding RESTful interfaces, and JSON in particular. The world is moving in this direction on web and mobile and leaving defensive tools in the dust. Most web app firewalls don't know how to deal with JSON, and it takes them a long time to parse and validate content there, which derails them (see the request comparison at the end of this post).

Adopt a DevOps mindset: Partner with your development teams to understand how you can integrate security testing into their continuous integration and testing processes. To be successful and truly support change and growth in application security programs, security must plug into what development teams are already doing and become part of their existing process. Bridge the gap between security and DevOps by running tests during nightly builds. Perform checks, report vulnerability findings into the existing bug system, and there will be more acceptance and progress from both sides.

For an in-depth look at how to modernize your application security software, view the on-demand webinar now.
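To make the REST/JSON takeaway concrete, compare the same hypothetical login request in the classic form-encoded style that older scanners and web app firewalls grew up on, and in the JSON style that modern web and mobile apps actually send. A security tool that cannot parse the second body cannot find or block an injection hiding inside it:

POST /api/login HTTP/1.1
Content-Type: application/x-www-form-urlencoded

username=alice&password=secret123

POST /api/login HTTP/1.1
Content-Type: application/json

{"username": "alice", "password": "secret123"}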

Securing DevOps: Monitoring Development Access to Production Environments

A big factor in securing a DevOps environment is that engineers should not have access to the production environment. This is especially true if the production environment contains sensitive data, such as payment card data, protected health information, or personally identifiable information, because compromised engineering credentials…

A big factor in securing a DevOps environment is that engineers should not have access to the production environment. This is especially true if the production environment contains sensitive data, such as payment card data, protected health information, or personally identifiable information, because compromised engineering credentials could expose sensitive data and lead to a breach. While this requirement is a security best practice and has found its way into many compliance regulations, it can be hard to enforce this strict division of church and state when you are running a high-velocity operation with many releases per day and frequent changes to code and systems.

Set up alerts for zone policy violations

One way to help manage this risk is to set up zone policies and monitor whether they are being violated. For example, you define a certain zone as the production zone and then create a network policy stating that the engineering team is not authorized to access this part of the network. Implementing this may be challenging in some environments, but it's actually very easy in UserInsight, Rapid7's user behavior analytics and incident response solution.

How to monitor for zone policy violations in UserInsight

Setting up a network zone policy in UserInsight is very easy. From the UserInsight dashboard, choose Settings in the top menu and then select Network Zones in the left menu. Click the Add Zone button and define the zone you'd like to monitor. Next, click on Network Policies in the left menu. Enter the name of the Active Directory group for your developers and define that they cannot access the production environment zone.

If anyone violates this rule, you'll be alerted on the UserInsight dashboard and on the Incidents page, and you'll receive a notification by email. In this example, we see that user vgonzales violated this policy 100 times. Simply click on the incident alert or on the user's name to dig in deeper and get more context around this user.

If you'd like to get a personal, guided demo of UserInsight or set up a proof of concept in your environment, please provide your details on the demo request page.

Featured Research

National Exposure Index 2017

The National Exposure Index is an exploration of data derived from Project Sonar, Rapid7's security research project that gains insights into global exposure to common vulnerabilities through internet-wide surveys.

Learn More

Toolkit

Make Your SIEM Project a Success with Rapid7

In this toolkit, get access to Gartner's report “Overcoming Common Causes for SIEM Solution Deployment Failures,” which details why organizations are struggling to unify their data and find answers from it. Also get the Rapid7 companion guide with helpful recommendations on approaching your SIEM needs.

Download Now

Podcast

Security Nation

Security Nation is a podcast dedicated to covering all things infosec – from what's making headlines to practical tips for organizations looking to improve their own security programs. Host Kyle Flaherty has been knee-deep in the security sector for nearly two decades. At Rapid7 he leads a solutions-focused team with the mission of helping security professionals do their jobs.

Listen Now