Rapid7 Blog

The Legal Perspective of a Data Breach

The following is a guest post by Christopher Hart, an attorney at Foley Hoag and a member of Foley Hoag's cybersecurity incident response team. This is not meant to constitute legal advice; instead, Chris offers helpful guidance for building an incident preparation and breach response framework in your own organization.

A data breach is a business crisis that requires both a quick and a careful response. From my perspective as a lawyer, I want to provide the best advice and assistance I possibly can to help minimize the costs (and stress) that arise from a security incident. When I get a call from someone saying that they think they've had a breach, the first thing I'm often asked is, "What do I do?" My response is often something like, "Investigate." The point is that normally, before the legal questions can be answered and the legal response can be crafted, as full a scope of the incident as possible first needs to be understood.

I typically think of data breaches as having three parts: planning, managing, and responding.

Planning is about policy-making and incident preparation. Sometimes, the calls that I get when there is a data breach involve conversations I'm having for the first time—that is, the client has not yet thought ahead of time about what would happen in a breach situation, and how she might need to respond. But sometimes, they come from clients with whom I have already worked to develop an incident response plan. In order to effectively plan for a breach, think about the following questions: What do you need to do to minimize the possibility of a breach? What would you need to do if and when a breach occurs? Developing a response plan allows you to identify members of a crisis management team—your forensic consultant, your legal counsel, your public relations expert—and create a system to take stock of your data management. I can't emphasize enough how important this stage is. Often, clients still think of data breaches as technical, IT issues. But the trend I am seeing now, and the advice I often give, is to think of data security as a risk management issue. That means not confining the question of data security to the tech staff, but having key players throughout the organization weigh in, from the boardroom on down. Thinking about data security as a form of managing risk is a powerful way of preparing for and mitigating against the worst case scenario.

Managing involves investigating the breach, patching it and restoring system security, notifying affected individuals, notifying law enforcement authorities as necessary and appropriate, and taking whatever other steps might be necessary to protect anyone affected. A good plan will lead to better management. When people call me (or anyone at my firm's Cybersecurity Incident Response Team, a group of lawyers at Foley Hoag who specialize in data breach response) about data breaches, they are often calling me about how to manage this step. But this is only one part of a much broader and deeper picture of data breach response.

Responding can involve investigation and litigation. If you've acted reasonably and used best practices to minimize the possibility of a breach; and if you've quickly and reasonably complied with your legal obligations; and if you've done all you can to protect consumers, then not only have you minimized the damage from a breach—which is good for your company and for the individuals affected by a breach—but you've also minimized your risks in possible litigation.
In any event, this category involves responding to inquiries and investigation demands from state and federal authorities, responding to complaints from individuals and third parties, and generally engaging in litigation until the disputes have been resolved. This can be a frustratingly time-consuming and expensive process.

This should give you a good overall picture of how I, or any lawyer, thinks about data security incidents. I hope it helps give you a framework for thinking about data security in your own organizations.

Need assistance? Check out Rapid7's incident response services if you need help developing or implementing an incident response plan at your organization.

Keeping it simple at UNITED

The following post is a guest blog by Bo Weaver, Senior Penetration Tester at CompliancePoint. If you're attending UNITED, you can catch Bo's talk at 11:45 AM on Thursday, September 14 in the Phish, Pwn, and Pivot track.

Hi! I'm Bo. I'll be speaking at Rapid7's UNITED Summit in Boston this week, and Rapid7's community manager asked me to write a little blog about my talk. I marvel at how on the net we make up new words for a common digital thing—even spell check says "blog" isn't a word! I know what a "bog" is, and I know in our line of work a "blob" is a large chunk of data in a database table. Living in the mountains makes finding bogs kinda hard, but the chunk of word data below is swampy enough to qualify.

I've worked in the security field for over twenty years. Long before the Internet I worked in private security, mostly undercover on corporate and industrial espionage. This was back in the day when you actually had to physically steal stuff. I also did a lot of work in executive protection. My Internet career started even before that when I was in the Navy: I studied as an Electronic Technician while in school, and we all worked on a little R&D project called ARPANET. While working on this I never thought that it would turn into what it's become! In the 90s I did a lot of work with BBSes and then dial-up ISPs in the Southeast—mostly securing these networks. Since then I've had about every network security job there is. I've learned a lot over the years, and I'll be sharing some of that knowledge at UNITED.

My passion has always been hacking. For roughly the last 5 years I have been working for CompliancePoint, an Atlanta-based security consulting company, as a senior penetration tester and security researcher. The thing I love most about my job here is that we test everything from Mom and Pop companies running an online business to major corporate and government networks. We get to see it all.

My talk at UNITED is about reducing complexity and how even big problems can have relatively simple solutions. Sometimes organizations think they need to throw millions at a problem when some time, some knowledge, and a little expense can fix even major issues. I learned about KISS in engineering school and have never forgotten it: "Keep It Simple, Stupid". Doesn't matter if you're building a toaster or a world network. Kiss it!

See the full UNITED agenda here.

What is BDD Testing: Practical Examples of Behavior Driven Development Testing

The Need for Behavior Driven Development (BDD) Testing Tools

It should come as no surprise to learn that testing is at the heart of our engineers' daily activities. Testing is intrinsic to our development process, both in practical terms and in our thinking. Our engineers work with complex systems that are made up of complex components. Individual components may have many external dependencies. When testing, the scope of what is to be tested is important – it can be system wide, focused on a particular feature, or down deep into the methods and classes of the code. To be able to focus our testing, we want to be able to mimic or 'mock' the behavior of external dependencies using a BDD testing tool. The purpose of this post is to walk through a couple of simple code examples and explain the need for Behavior Driven Development (BDD) testing.

BDD, Acceptance Tests, and Automation

At Rapid7 we apply the BDD methodology, which is an extension of Test Driven Development (TDD). BDD and TDD both advocate that tests should be written first, which for BDD means acceptance tests (ATs), followed by unit tests driven by the ATs. For now, let's say that at the outset of any task, the BDD focus is on capturing the required behavior in user stories, and from these acceptance tests (ATs) are written. Over time a large number of ATs are generated, so not only is the methodology important but also the supporting tools to automate and manage our work. For some background on this, our colleague Vincent Riou has described the automated testing, continuous integration and code-quality control tools that we use.

BDD Testing Example: Ubiquitous Language and AT Scenarios

To borrow from Vincent's post, "The idea with acceptance testing is to write tests (or behavioral specifications) that describe the behavior of your software in a language which is not code but is more precise than standard English." Doing this allows people who are not software engineers, but have knowledge of the requirements, such as Product Management or Marketing, to write the scenarios that make up our ATs. This is a powerful thing when it comes to capturing the required behavior. People in the BDD community sometimes refer to this as a 'Ubiquitous Language'. Again borrowing from what Vincent states, "Additionally, those tests can be run using a parser which will allow you to easily match your language to functions in the programming language of your choice."

Here is an example AT scenario – in this case following a template and language constructs used by the Cucumber / Gherkin parser:

Given the customer has logged into their current account
And the balance is shown to be 100 euros
When the customer transfers 75 euros to their savings account
Then the new current account balance should be 25 euros

Example step definition in Python

The following is an example of mapping a step definition to a Python function. In this case, the final step Then is shown. This is where an 'assert' is used to verify whether the AT will pass or fail, depending on the final account balance.

@Then('^the new current account balance should be (\d+) euros$')
def the_new_current_account_balance_should_be(self, expected_bal):
    expected_bal = int(expected_bal)
    assert expected_bal >= 0, "Balance cannot be negative"
    new_bal = get_balance(account_id)
    assert int(new_bal) == expected_bal, "Expected to get %d euros. Instead got %d euros" % (expected_bal, int(new_bal))

Since we are writing our tests before the actual implementation of the behavior, the AT will fail – so it's important that the error message thrown by the 'assert' is meaningful. Remember also that an AT may fail at a future date if some behavior of the 'system under test' (SUT) is modified, intentionally or not – this is part of the value of having a body of automated ATs.

Mocking Behavior of External Dependencies

The components and sub-systems that we work with have many external dependencies that can be complex. When running an AT against a particular component, it may be necessary to mock the external dependencies of that component. This is different from using a mocking framework as described below for unit testing. Instead, this is about trying to mimic the behavior of a second black box so we can test the behavior of the first black box. In our work we encounter this all the time, especially where a SUT has a dependency on the behavior of an external server. One approach, for example, is to build a simple mock server in Python using the Bottle module, which gives us a basic server to build on. We mock the behavior that is required to meet the needs of the SUT. Note that this is not building a duplicate of an existing component – we are trying to mimic the behavior as seen by the SUT to complete our testing.
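To make that idea concrete, here is a minimal sketch of the kind of mock server we might stand up with the Bottle module. The route, port, and canned JSON payload are hypothetical, chosen only to illustrate mimicking the behavior the SUT expects from an external server.

# Minimal mock server sketch using the Bottle module.
# The /api/courses/<course_id> route, port 8085, and the canned payload are
# illustrative assumptions; the point is simply to return whatever the SUT needs.
from bottle import Bottle, response

app = Bottle()

@app.route('/api/courses/<course_id>')
def course_details(course_id):
    # Return just enough canned JSON to satisfy the system under test.
    response.content_type = 'application/json'
    return '{"course_id": "%s", "students": ["alice", "bob"]}' % course_id

if __name__ == '__main__':
    # The SUT is configured to point at this host/port instead of the real server.
    app.run(host='localhost', port=8085)

The design choice here is deliberate: we only implement the behavior the SUT actually exercises, nothing more.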
BDD Testing Example: Unit Testing

After completing the acceptance tests come the unit tests. These are more closely coupled with the code of the final implementation, although at this stage we still do not start our implementation until the required unit tests are in place. This approach of acceptance tests and unit tests is equally applicable to GUIs.

Unit Testing Example: Mocking with some JSON

The following example is a combination of using the JUnit framework with the Mockito library to create mock objects. In this example we want to show, in a simple way, a technique to mock a response that contains data in JSON format from a GET request on some external server. The test data, in JSON format, can be an actual sample captured in a live production scenario. We are also going to use a Google library to help with handling the JSON file.

In this simple example we are testing a method 'getCountOfStudents', found in a data access class, that is used by our imaginary application to get the number of students on a course using that course ID. The actual details for that course are held on some database externally – for the purposes of testing we don't care about this database. What we are interested in, however, is that the method 'getCountOfStudents' will have a dependency on another piece of code – it will call 'jsonGetCourseDetails', which is found in an object called 'HttpClient'. As the name implies, this object is responsible for handling HTTP traffic to some external server, and it is from this server our application gets course data. For our test to work we therefore need to mimic the response from the server – which returns the data in JSON format – which means we want to mock the response of 'jsonGetCourseDetails'.

The following code snippets come from a JUnit test class that is testing the various methods found in the class that defines our data access object. Note the required imports for the Mockito and Google libraries are added.

import static org.mockito.Matchers.any;
import static org.mockito.Matchers.eq;
import static org.mockito.Mockito.*;
import com.google.common.io.Resources;
import com.google.common.base.Charsets;

Prior to running the test, a mock object of the HttpClient is created using the test class 'setup()' method, and tidied up afterwards with 'teardown()'.

private HttpClient httpClient;

@Before
public void setup() {
    httpClient = mock(HttpClient.class);
    ...
}

@After
public void teardown() {
    reset(httpClient);
    ...
}

For the test method itself, we use Mockito's when, so that when 'jsonGetCourseDetails' on the mock 'httpClient' object is called with the 'course_id', it returns a mock response. We create the mock response using some test data, in JSON, that we have in a file 'course_details.json'.

@Test
public void testGetCountOfStudentsWithCourseID() throws IOException {
    String course_id = "CS101";
    when(httpClient.jsonGetCourseDetails(eq(course_id)))
        .thenReturn(getMockResponse("./tests/course/course_details.json"));
    Integer count = dao.getCountOfStudents(course_id);
    Assert.assertEquals(10, (int) count);
}

To create the mock response there is a utility method we have written that uses the Google library's 'Resources' class. For this example the method simply returns the mock response as the String from 'Resources.toString'.

private String getMockResponse(String jsonResource) {
    String mockResponse = null;
    try {
        mockResponse = Resources.toString(Resources.getResource(jsonResource), Charsets.UTF_8);
    } catch (IOException ex) {
        ex.printStackTrace();
    }
    return mockResponse;
}

At this stage we have a unit test with a mock object and we can use data in JSON format. Of course, also at this stage the test method will fail. Why? Because the implementation of 'dao.getCountOfStudents(course_id)' has not yet been done! We are writing our tests first, mocking the external dependencies (behavior) our code is reliant on. When writing the code for the implementation, we will know we are finished when all the tests are passing.

Ready to Implement BDD Testing? Try our free log management tool today.

5 Ways to Use Log Data to Analyze System Performance

Analyzing System Performance Using Log Data

Recently we examined some of the most common behaviors that our community of 25,000 users looked for in their logs, with a particular focus on web server logs. In fact, our research identified the top 15 web server tags and alerts created by our customers – you can read more about these in our community insights section (https://logentries.com/doc/community-insights/) – and you can also easily create tags or alerts based on the patterns to identify these behaviors in your systems' log data.

This week we are focusing on system performance analysis using log data. Again we looked across our community of over 25,000 users and identified five ways in which people use log data to analyze system performance. As always, customer data was anonymized to protect privacy. Over the course of the next week, we will be diving into each of these areas in more detail and will feature customers' first-hand accounts of how they are using log data to help identify and resolve issues in their systems and analyze overall system performance.

Our research looked at more than 200,000 log analysis patterns from across our community to identify important events in log data. With a particular focus on system performance and IT operations related issues, we identified the following five areas as trending and common across our user base.

5 Key Log Analysis Insights

1. Slow Response Times

Response times are one of the most common and useful system performance measures available from your log data. They give you an immediate understanding of how long a request is taking to be returned. For example, web server log data can give you insight into how long a request takes to return a response to a client device. This can include time taken for the different components behind your web server (application servers, DBs) to process the request, so it can offer an immediate view of how well your application is performing. Recording response times from the client device/browser can give you an even more complete picture, since this also captures page load time in the app/browser as well as network latency.

A good rule of thumb when measuring response times is to follow the three response time limits as outlined by Jakob Nielsen in his publication on 'Usability Engineering' back in 1993 (still relevant today): 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, and 10 seconds is about the limit for keeping the user's attention focused on the dialogue.

Slow response time patterns almost always follow the pattern below:

response_time>X

In this context, response_time is the field value representing the server or client's response time, and 'X' is a threshold. If this threshold is exceeded, you want the event to be highlighted or to receive a notification so that you and your team are aware that somebody is having a poor user experience.
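To illustrate the response_time>X idea, here is a minimal sketch that flags slow requests while scanning log lines. The response_time=<milliseconds> field name and the 1000 ms threshold are assumptions for illustration; adjust them to match your own log format and service-level targets.

# Minimal sketch: flag log lines whose response_time exceeds a threshold.
# The "response_time=<ms>" field name and the 1000 ms threshold are illustrative.
import re

THRESHOLD_MS = 1000
PATTERN = re.compile(r'response_time=(\d+)')

def flag_slow_requests(lines, threshold_ms=THRESHOLD_MS):
    for line in lines:
        match = PATTERN.search(line)
        if match and int(match.group(1)) > threshold_ms:
            yield line

if __name__ == '__main__':
    sample = [
        'GET /home status=200 response_time=87',
        'GET /report status=200 response_time=4312',
    ]
    for slow in flag_slow_requests(sample):
        print('SLOW:', slow)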
2. Memory Issues and Garbage Collection

Out-of-memory errors can be pretty catastrophic, as they often result in the application crashing due to lack of resources. Thus, you want to know about these when they occur; creating tags and generating notifications via alerts when these events occur is always recommended. Your garbage collection behavior can be a leading indicator of out-of-memory issues. Tracking this, and getting notified if heap used vs. free heap space is over a particular threshold or if garbage collection is taking a long time, can be particularly useful and may point you toward memory leaks. Identifying a memory leak before an out-of-memory exception occurs can be the difference between a major system outage and a simple server restart until the issue is patched.

Furthermore, slow or long garbage collection can also be a reason for users experiencing slow application behavior. During garbage collection, your system can slow down; in some situations it blocks until garbage collection is complete (e.g. with 'stop the world' garbage collection).

Below are some examples of common patterns used to identify the memory-related issues outlined above:

Out of memory
exceeds memory limit
memory leak detected
java.lang.OutOfMemoryError
System.OutOfMemoryException
memwatch:leak: Ended heapDiff
GC AND stats

3. Deadlocks and Threading Issues

Deadlocks can occur in many shapes and sizes and can have a range of negative effects – from simply slowing your system down to bringing it to a complete halt. In short, a deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does. For example, we say that a set of processes or threads is deadlocked when each thread is waiting for an event that only another process in the set can cause. Not surprisingly, deadlocks feature as one of our top five system performance related issues that our users write patterns to detect in their systems.

Most deadlock patterns simply contain the keyword 'deadlock', but some of the common patterns follow this structure:

deadlock
Deadlock found when trying to get lock
Unexpected error while processing request: deadlock;

4. High Resource Usage (CPU/Disk/Network)

In many cases, a slowdown in system performance may not be a result of any major software flaw, but instead is a simple case of the load on your system increasing without increased resources available to deal with it. Tracking resource usage can allow you to see when you require additional capacity so that you can kick off more server instances (for example).

Example patterns used when analyzing resource usage:

metric=/CPUUtilization/ AND minimum>X
cpu>X
disk>X
disk is at or near capacity
not enough space on the disk
java.io.IOException: No space left on device
insufficient bandwidth

5. Database Issues and Slow Queries

Knowing when a query failed can be useful, since it allows you to identify situations when a request may have returned without the relevant data and thus helps you identify when users are not getting the data they need. There can be more subtle issues, however, such as when a user is getting the correct results but the results are taking a long time to return. While technically the system may be fine (and bug-free), a slow user experience hurts your top line. Tracking slow queries allows you to track how your DB queries are performing. Setting acceptable thresholds for query time and reporting on anything that exceeds these thresholds can help you quickly identify when your users' experience is being affected.

Example patterns:

SqlException
SQL Timeout
Long query
Slow query
WARNING: Query took longer than X
Query_time > X
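Pulling a few of the patterns above together, here is a rough sketch of how such keyword patterns could be turned into simple tags while scanning log lines. The tag names and regular expressions are illustrative choices of mine, not a built-in feature of any particular tool.

# Rough sketch: tag log lines using keyword patterns like those listed above.
# Tag names and regular expressions are illustrative; real tooling would let you
# manage these centrally and alert on matches.
import re

TAG_PATTERNS = {
    'memory': re.compile(r'OutOfMemoryError|OutOfMemoryException|memory leak detected', re.I),
    'deadlock': re.compile(r'deadlock', re.I),
    'disk': re.compile(r'No space left on device|disk is at or near capacity', re.I),
    'slow-query': re.compile(r'Slow query|Query took longer than', re.I),
}

def tag_line(line):
    return [tag for tag, pattern in TAG_PATTERNS.items() if pattern.search(line)]

if __name__ == '__main__':
    sample_lines = [
        'ERROR java.lang.OutOfMemoryError: Java heap space',
        'WARN Deadlock found when trying to get lock; try restarting transaction',
        'INFO request served in 120ms',
    ]
    for line in sample_lines:
        tags = tag_line(line)
        if tags:
            print(tags, line)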
Continued Log Data Analysis

As always, let us know if you think we have left out any important issues that you like to track in your log data. To start tracking your own system performance, sign up for our free log management tool and use the patterns listed above to automatically create tags and alerts relevant for your system. Ready to start getting insights from your applications? Try InsightOps today.

DevOps: Vagrant with AWS EC2 & Digital Ocean

The Benefits of Vagrant Plugins

Following on from my recent DevOps blog posts, The DevOps Tools We Use & How We Use Them and Vagrant with Chef-Server, we will take another step forward and look into provisioning our servers in the cloud. There are many cloud providers out there, most of which provide some sort of API. Dealing with the different APIs and scripts can become cumbersome and confusing when your main focus is a fault tolerant, scalable system. This is where Vagrant and many of its plugins shine.

Vagrant has a wide range of plugins, from handling Chef and Puppet to provisioning servers on many different cloud providers. We're going to focus on three plugins in particular: vagrant-aws, vagrant-digitalocean and vagrant-omnibus. The AWS and Digital Ocean plugins allow us to utilize both our Chef-server and the public infrastructure provided by both Amazon and Digital Ocean. The omnibus plugin is used for installing a specified version of Chef on your servers.

Installation of Vagrant Plugins

To install the Vagrant plugins, run the following commands:

vagrant plugin install vagrant-aws
vagrant plugin install vagrant-digitalocean
vagrant plugin install vagrant-omnibus

Running vagrant plugin list should give you the following output:

vagrant-aws (0.4.1)
vagrant-digitalocean (0.5.3)
vagrant-omnibus (1.3.1)

More information on the plugins can be found here:

https://github.com/mitchellh/vagrant-aws
https://github.com/smdahlen/vagrant-digitalocean
https://github.com/schisamo/vagrant-omnibus

Amazon AWS

The first cloud provider we will look at is Amazon AWS. If you don't already have an account I suggest you sign up for one here, and have a quick read of their EC2_GetStarted docs. Once signed up you will need to generate an account access key and secret key; to do this follow the instructions below:

1. Go to the IAM console.
2. From the navigation menu, click Users.
3. Select your IAM user name.
4. Click User Actions, and then click Manage Access Keys.
5. Click Create Access Key.

Your keys will look something like this:

Access key ID example: ABCDEF0123456789ABCD
Secret access key example: abcdef0123456/789ABCD/EF0123456789abcdef

6. Click Download Credentials, and store the keys in a secure location.

Vagrant AWS with Chef-Server

Now we have everything to get started. Again we will create a Vagrantfile similar to what was done in my last blog post, DevOps: Vagrant with Chef-Server; however, this time we will omit a few things like config.vm.box and config.vm.box_url. The reason for this is that we are going to point our Vagrantfile to use an Amazon AMI instead.
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_plugin 'vagrant-aws'
Vagrant.require_plugin 'vagrant-omnibus'

Vagrant.configure("2") do |config|
  config.omnibus.chef_version = :latest
  config.vm.synced_folder '.', '/vagrant', :disabled => true
  config.vm.box = "dummy"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

  # Provider
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "ABCDEF0123456789ABCD"
    aws.secret_access_key = "abcdef0123456/789ABCD/EF0123456789abcdef"
    aws.keypair_name = "awskey"
    aws.ami = "ami-9de416ea" # Ubuntu 12.04 LTS
    aws.region = "eu-west-1"
    aws.instance_type = "t1.micro"
    aws.security_groups = ["default"]
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "path/to/your/awskey.pem"
    aws.tags = { 'Name' => 'Java' }
  end

  # Provisioning
  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = 'https://api.opscode.com/organizations/logentries'
    chef.validation_key_path = '../../.chef/logentries-validator.pem'
    chef.validation_client_name = 'logentries-validator'
    chef.log_level = 'info'
    chef.add_recipe 'java_wrapper'
  end
end

You should replace the example credentials, key pair, key path and similar options above with your own settings. The only difference this time in running Vagrant is that you need to pass a provider argument, so your command should look like this:

vagrant up --provider=aws

Once your Chef run has completed you will have a new instance in Amazon running your specified version of Java. Your output should look similar to this:

root@ip-172-31-28-193:~# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

Digital Ocean

Another cloud provider which has been gaining a lot of attention lately is Digital Ocean. If you don't already have a Digital Ocean account you can sign up here. Also have a look at their getting started guide if you're new to Digital Ocean. The main difference between this Vagrantfile and the AWS Vagrantfile is that the provider block is different.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_plugin 'vagrant-digitalocean'
Vagrant.require_plugin 'vagrant-omnibus'

Vagrant.configure('2') do |config|
  config.vm.provider :digital_ocean do |provider, override|
    provider.client_id = 'abcdef0123456789ABCDEF'
    provider.api_key = 'abcdef0123456789abcdef0123456789'
    provider.image = 'Ubuntu 12.04.3 x64'
    provider.region = 'Amsterdam 1'
    provider.size = '512MB'
    provider.ssh_key_name = 'KeyName'
    override.ssh.private_key_path = '/path/to/your/key'
    override.vm.box = 'digital_ocean'
    override.vm.box_url = "https://github.com/smdahlen/vagrant-digitalocean/raw/master/box/digital_ocean.box"
    provider.ca_path = "/usr/local/opt/curl-ca-bundle/share/ca-bundle.crt"
  end

  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = 'https://api.opscode.com/organizations/logentries'
    chef.validation_key_path = '../../.chef/logentries-validator.pem'
    chef.validation_client_name = 'logentries-validator'
    chef.log_level = 'info'
    chef.node_name = 'do-java'
    chef.add_recipe 'java_wrapper'
  end
end

You should replace the example credentials, key name, and key path above with your own settings. This time when we run vagrant up we will pass a provider parameter of digital_ocean:

vagrant up --provider=digital_ocean

Once the Vagrant run has completed, SSHing into your server and running java -version should give you the following output.
root@do-java:~# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

If you do run into issues, it might be that your cookbooks are out of date; try running the following:

librarian-chef update
knife cookbook upload -a

Now with the above scripts at your disposal you can use Chef-server with Vagrant and as many different cloud providers as you wish to keep your systems up to date, in sync, and running to keep your customers happy!

Ready to bring Vagrant to your production and development? Check out our free log management tool and get started today.

How to Combine D3 with AngularJS

The Benefits and Challenges of D3 Angular Combination

Today we'll be focusing on how to combine D3 with the AngularJS framework. As we all know, Angular and D3 frameworks are very popular, and once they work together they can be very powerful and helpful when creating dashboards. But they can also be challenging and confusing, especially when you are new to these frameworks. The right way to incorporate D3 with Angular is to use custom directives. Directives in Angular are essentially functions that are executed on a DOM element. Let's go through a simple example together.

D3 Bar Chart Example

Our first step is to create a factory which will enable us to inject the D3 module into our custom directive.

angular.module('d3', [])
  .factory('d3Service', [function(){
    var d3;
    // insert d3 code here
    return d3;
  }]);

After creating the factory, we can now inject the module into our main app.

angular.module('app', ['d3']);

For the purpose of this example, our directive will display a simple bar chart. We can call the directive from an HTML tag.

<div blog-bar-chart bar-height="20" bar-padding="5"></div>

We inject our d3 module into our custom directive.

angular.module('myApp.directives', ['d3'])
  .directive('blogBarChart', ['d3Service', function(d3Service) {
    return {
      restrict: 'EA',
      // directive code
    };
  }]);

In order to use the D3 library, we need to include our service. You can either copy and paste the entire D3 library or just reference a CDN.

scriptTag.src = 'http://d3js.org/d3.v3.min.js';

In the service, we will use the notion of a promise. We need to wait for the promise to resolve by using the "then" method of our d3Service. For simplicity, we will use a link function, which registers listeners and sets up binding.

angular.module('myApp.directives', ['d3'])
  .directive('blogBarChart', ['d3Service', function(d3Service) {
    return {
      restrict: 'EA',
      scope: {},
      link: function(scope, element, attrs) {
        d3Service.d3().then(function(d3) {
          // d3 is the raw d3 object
        });
      }
    };
  }]);

We are now ready to write actual D3 code inside our AngularJS directive. The bar chart can also be made dynamic for the developer. We can pass attributes such as the width of the bar chart to the directive and then use them with the D3 code. In our example we will set up an event listener to track changes in the size of the window.

window.onresize = function() {
  scope.$apply();
};

After doing that we can focus on the D3 code for the bar chart.

var svg = d3.select(element[0])
  .append('svg')
  .style('width', '100%');

One important thing to remember about the previous statement is that if an SVG element already existed, it would need to be emptied first. Here is how you would remove all the preexisting SVG elements.

svg.selectAll('*').remove();

D3 and Angular: Easy Implementation

This example only shows a very small subset of the D3 code that could go into our directive. Also, we only worked with hardcoded data, whereas in a real-world scenario we would use data from a back-end service. Thanks to AngularJS, we are able to easily integrate that data into our custom D3 directive by using bi-directional binding. All in all, combining D3 with AngularJS now looks straightforward and relatively easy. Let us know how it goes for you!

Combine D3 and Angular Today

Ready to see what combining D3 and Angular can do for you? Check out our free log management tool.

What is Syslog?

This post has been written by Dr. Miao Wang, a Post-Doctoral Researcher at the Performance Engineering Lab at University College Dublin. This post is the first in a multi-part series of posts on the many options for collecting and forwarding log data from different platforms and the pros and cons of each. In this first post we will focus on Syslog, and will provide background on the Syslog protocol.

What is Syslog?

Syslog has been around for a number of decades and provides a protocol used for transporting event messages between computer systems and software applications. The Syslog protocol utilizes a layered architecture, which allows the use of any number of transport protocols for transmission of Syslog messages. It also provides a message format that allows vendor-specific extensions to be provided in a structured way. Syslog is now standardized by the IETF in RFC 5424 (since 2009), but has been around since the 1980s and for many years served as the de facto standard for logging without any authoritative published specification.

Syslog has gained significant popularity and wide support on major operating system platforms and software frameworks and is officially supported on almost all versions of Linux, Unix, and MacOS platforms. On Microsoft Windows, Syslog can also be supported through a variety of open source and commercial third-party libraries.

Syslog best practices often promote storing log messages on a centralized server that can provide a correlated view of all the log data generated by different system components. Otherwise, analyzing each log file separately and then manually linking each related log message is extremely time-consuming. As a result, forwarding local log messages to a remote log analytics server/service via Syslog has been commonly adopted as a standard industrial logging solution.

How does Syslog work?

The Syslog standard defines three different layers, namely the Syslog content, the Syslog application and the Syslog transport. The content refers to the information contained in a Syslog event message. The application layer is essentially what generates, interprets, routes, and stores the message, while the Syslog transport layer transmits the message via the network.

According to the Syslog specification, there is no acknowledgement for message delivery, and although some transports may provide status information, the Syslog protocol is described as a pure simplex protocol. Sample deployment scenarios in the spec show arrangements where messages are said to be created by an 'originator' and forwarded on to a 'collector' (generally a logging server or service used for centralized storage of log data). Note that 'relays' can also be used between the originator and the collector and can do some processing on the data before it is sent on (e.g. filtering out events, combining sources of event data). Applications can be configured to send messages to multiple destinations, and individual Syslog components may be running in the same host machine.

The Syslog Format

Sharing log data between different applications requires a standard definition of and format for the log message, such that both parties can interpret and understand each other's information. To provide this, RFC 5424 defines the Syslog message format and rules for each data element within each message. A Syslog message has the following format: a header, followed by structured-data (SD), followed by a message.

The header of the Syslog message contains "priority", "version", "timestamp", "hostname", "application", "process id", and "message id". It is followed by structured-data, which contains data blocks in the "key=value" format enclosed in square brackets "[]", e.g. [SDID@0 utilization="high" os="linux"] [SDPriority@0 class="medium"]. In a typical example message, the SD is simply represented as "-", which is a null value (nilvalue as specified by RFC 5424). After the SD value, BOM represents the UTF-8 byte order mark, and a message such as "su root failed on /dev/pts/7" shows the detailed log message, which should be encoded in UTF-8. (For more details of the data elements of the Syslog protocol, please refer to: http://tools.ietf.org/html/rfc5424)
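To make this concrete, here is a minimal sketch of emitting a log message to a Syslog collector from Python's standard library. The collector address (localhost, UDP port 514), the facility, and the example message are illustrative assumptions; point them at your own originator/collector setup.

# Minimal sketch: send a log message to a Syslog collector using Python's
# standard library. The collector address and the message are illustrative;
# a Syslog daemon must be listening on the given host/port for delivery.
import logging
import logging.handlers

logger = logging.getLogger('syslog-demo')
logger.setLevel(logging.INFO)

# UDP is the classic Syslog transport; SysLogHandler also supports TCP sockets.
handler = logging.handlers.SysLogHandler(
    address=('localhost', 514),
    facility=logging.handlers.SysLogHandler.LOG_USER,
)
logger.addHandler(handler)

logger.warning('su root failed on /dev/pts/7')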
Why Syslog?

The complexity of modern applications and systems is ever increasing, and to understand the behavior of complex systems, administrators, developers, and ops teams often need to collect and monitor all relevant information produced by their applications. Such information often needs to be analyzed and correlated to determine how their systems are behaving. Consequently, administrators can apply data analytic techniques to either diagnose root causes once problems occur or gain an insight into current system behavior based on statistical analysis. Frequently, logs have been applied as a primary and reliable data source to fulfill such a mission, for lots of reasons, some of which I've listed here:

Logs can provide transient information for administrators to roll back the system to a proper status after a failure. E.g. when a banking system fails, all transactions lost from main memory can be recorded in the logs.

Logs can contain a rich diversity of substantial information produced by individual applications to allow administrators, developers, and ops teams to understand system behavior from many aspects, such as current system statistics, trend predictions, and troubleshooting.

Logs are written externally by the underlying application to hard disks and external services, so reading these log files has no direct performance impact on the monitored system. Therefore, in a production environment administrators can safely monitor running applications via their logs without worrying about impacting performance.

However, a key aspect of log analysis is to understand the format of the arriving log data, especially in a heterogeneous environment where different applications may be developed using different log formats and network protocols to send the log data. Unless this is well defined, it is quite difficult to interpret log messages sent by an unknown application. To solve this issue Syslog defines a logging standard for different systems and applications to follow in order to easily exchange log information. Based on this logging protocol, Syslog helps applications effectively interpret each log attribute and understand the meaning of the log message.

Ready to put Syslog into action? Try our free log management tool today.

What are Javascript Source Maps?

It's generally a good practice to minify and combine your assets (Javascript & CSS) when deploying to production. This process reduces the size of your assets and dramatically improves your website's load time. Source maps create a map from these compressed asset files back to the source files. This source map allows you to debug and view the source code of your compressed assets, as if you were actually working with the original CSS and Javascript source code.

Take a look at jQuery minified & combined code that was generated from the original source code. The code is practically unreadable and would be difficult to debug. But, as we all know, no matter how thoroughly you test, sometimes bugs will fall through the cracks. This is why it's useful to debug Javascript code in production, and that's when source maps come in handy.

How do you use Javascript source maps?

With InsightOps we use UglifyJS for minification and Javascript source map generation. UglifyJS is a NodeJS library written in Javascript. To install UglifyJS with npm:

npm install uglify-js -g

Minify the files and generate source maps:

uglifyjs file1.js file2.js -o output.js --source-map output.map.js

The command above tells UglifyJS to:

Take file1.js and file2.js as input
Compress the input files and output them to output.js
Generate the source map for the compressed file and output it to output.map.js

Marrying source maps and Django Compressor

Django Compressor is a great Django plugin to mark assets for minification right inside your templates:

{% load compress %}
{% compress js %}
<script src="/static/js/one.js" type="text/javascript" charset="utf-8"></script>
<script type="text/javascript" charset="utf-8">obj.value = "value";</script>
{% endcompress %}

Behind the scenes you can develop logic to combine and minify the files with any algorithm or third-party tools of your choosing.

Browser support

Source maps are a new addition to the developer toolbox. Although the source maps spec lives in Google docs (no kidding), they're supported by all major browsers: Chrome, Safari, Firefox, and IE11. By default, source maps are disabled so your users will not incur any unnecessary bandwidth overheads. To enable source maps in Google Chrome, go to Developer Tools, click the little cog icon, and then make sure that "Enable Javascript source maps" is checked. That's it. Now each compressed asset file contains a link pointing to its source map, and we've just told Chrome not to ignore them.

See Javascript source maps in action

If you'd like to see Javascript source maps in action, check out our free log management tool and take a look at our source code. The files highlighted in green are compressed Javascript files; the folders highlighted in blue are generated from source maps and contain the original source code that's mapped onto the compressed files. We can set breakpoints on mapped code, inspect variables, step through, and do pretty much anything that we can with original code. Pretty cool, huh?

Heroku Dynos Explained

What are Heroku Dynos?

If you've ever hosted an application on Heroku, the popular platform as a service, you're likely at least aware of the existence of "Dynos". But what exactly are Heroku Dynos and why are they important? As explained in Heroku's docs, Dynos are simply lightweight Linux containers dedicated to running your application processes. At the most basic level, an app newly deployed to Heroku will be supported by one Dyno for running web processes. You then have the option of adding additional Dynos and specifying Dyno processes in your Procfile.

Dynos actually come in three different flavors:

Web Dynos: for handling web processes
Worker Dynos: for handling any type of process you declare (like background jobs)
One-off Dynos: for handling one-off tasks, such as database migrations

One of the great things about Heroku Dynos is how easy they are to scale up and out. Through Heroku's admin portal or via the command line, you can easily add more Dynos or larger Dynos. Adding additional Dynos can help speed up your application's response time by handling more concurrent requests, whereas adding larger Dynos can provide additional RAM for your application.

Using Heroku Dynos to get the insights you need

Great. I get it. Dynos make it easy to run my application with less hassle. For this reason, I should have to think very little about Heroku Dynos, right? Wrong! As Dynos are individual containers and identified uniquely in your Heroku logs, they can provide some great insight into where issues may be stemming from when things go wrong.

When sending Heroku logs to InsightOps, a Dyno's unique ID will automatically be logged in key-value pair format along with information about the process it handled. For example, web Dynos are identified alongside the HTTP requests they handle. An easier way to view this data in InsightOps is to use the Table View, where the Heroku Dyno is easily identified alongside other pertinent data. Since I've enabled Heroku's beta log-runtime-metrics from Heroku Labs, I can also see data related to CPU and memory per Dyno, which is particularly useful for identifying issues like too much swap memory being used (perhaps indicating a need to scale up my Dynos).

Since Dynos are uniquely identified in key-value pairs, you can also use them to visualize data, for example how much swap memory each Dyno is using over a given period of time, or how much each Dyno is being used compared to others. Finally, checking the Heroku Dyno related to errors in your logs could hint at Dyno-related issues. In one example, Dyno web.2 was related to both of the errors we saw, which happened to be backend connection timeouts. While Heroku Dynos are allowed to fail and automatically restart on a different server, this finding could warrant you manually restarting your Dynos to alleviate the issue.

Start logging with Heroku Dynos today

Ready to start logging from your Heroku app today? Check out our free log management tool.
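As a closing aside, here is a rough sketch of pulling the Dyno name and runtime metrics out of key-value log lines like those described above. The sample line and field names are illustrative assumptions loosely modeled on Heroku's runtime metrics output, not its exact format.

# Rough sketch: extract key=value pairs (dyno, memory/swap metrics, etc.) from
# Heroku-style log lines. The sample line and field names are illustrative.
import re

PAIR = re.compile(r'(\w[\w#.]*)=("[^"]*"|\S+)')

def parse_kv(line):
    # Return a dict of key=value pairs, stripping any surrounding quotes.
    return {key: value.strip('"') for key, value in PAIR.findall(line)}

if __name__ == '__main__':
    sample = ('source=web.2 dyno=heroku.12345.a1b2c3 '
              'sample#memory_total=512.00MB sample#memory_swap=43.50MB')
    fields = parse_kv(sample)
    print(fields.get('source'), fields.get('sample#memory_swap'))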

Active vs. Passive Server Monitoring

Server monitoring is a requirement, not a choice. It is used for your entire software stack: web-based enterprise suites, custom applications, e-commerce sites, local area networks, etc. Unmonitored servers are lost opportunities for optimization, difficult to maintain, more unpredictable, and more prone to failure. While it is very likely that your team has a log management and analysis initiative, it's important to determine whether you are only responding to what the logs tell you about the past, or planning ahead based on the valuable log data you are monitoring and analyzing.

There are two basic approaches to server monitoring: passive monitoring and active monitoring. They are as much a state of mind as a process. And there are significant differences in the kinds of value each type of server monitoring provides; each type has its own advantages and disadvantages.

What is Passive Server Monitoring?

Passive server monitoring looks at real-world historical performance by monitoring actual log-ins, site hits, clicks, requests for data, and other server transactions. When it comes to addressing issues in the system, the team will review historical log data, and from there they analyze the logs to troubleshoot and pinpoint issues. This was previously done with a manual pull of logs. While this helps developers identify where issues are, using a powerful modern log analysis service to simply automate an existing process is a waste.

Passive monitoring only shows how your server handles existing conditions, but it may not give you much insight into how your server will deal with future ones. For example, one component of the system, say a database server, may be likely to become overloaded once a certain rate of change in load is reached. This is not going to be clear from server log data that has already been recorded, unless your team is willing to stare at a graph in real time, 24/7... which has been nearly the case in some NOC operations I have witnessed.

What is Active Server Monitoring?

The most effective way to get past these limits is by using active server monitoring. Active monitoring is the approach that leverages smart recognition algorithms to take current log data and use it to predict future states. This is done by some complex statistics (way over my head) that compare real-time conditions to previous conditions, or past issues. For example, it leverages anomaly detection, steady state analysis, and trending capabilities to predict that a workload is about to hit its maximum capacity, or that a sudden decrease in external network-received packets is a sign of public web degradation.

Besides finding out what is possibly going to happen, active server monitoring also helps to avoid the time spent on log deep dives. Issues will sometimes still pass you by, and you will still need to take a deeper look, but because information is pushed to you, some of the work is already done, and you can avoid the log hunt.

Oh, and active monitoring can help the product and dev teams from an architectural standpoint. If, for example, a key page is being accessed infrequently, or if a specific link to that page is rarely used, it may indicate a problem with the design of the referring page, or with one of the links leading to that page. A close look at the logs can also tell you whether certain pages are being accessed more often than expected, which can be a sign that the information on those pages should be displayed or linked more prominently.
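As a toy illustration of that predictive idea (not the actual statistics any monitoring product uses), here is a minimal sketch that projects recent load samples forward and warns before a capacity threshold is expected to be crossed. The sample values, threshold, and look-ahead window are assumptions for illustration.

# Toy sketch of "active" monitoring: fit a simple trend to recent load samples
# and warn if the projected value will cross a capacity threshold.
# The samples, threshold, and look-ahead window are illustrative assumptions.

def projected_value(samples, steps_ahead):
    """Project future load using the average change between recent samples."""
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    avg_delta = sum(deltas) / len(deltas)
    return samples[-1] + avg_delta * steps_ahead

if __name__ == '__main__':
    cpu_samples = [42.0, 48.5, 55.0, 61.5, 68.0]  # e.g. percent CPU, one sample per minute
    capacity_threshold = 90.0
    forecast = projected_value(cpu_samples, steps_ahead=5)
    if forecast >= capacity_threshold:
        print('Warning: CPU projected to hit %.1f%% within 5 minutes' % forecast)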
Any Form of Server Monitoring is Better Than None

Log analysis tools are the heart of both server monitoring approaches. Log analysis can indicate unusual activity which might slip past an already overloaded team. Another serious case is security. A series of attempted page hits that produce "page not found" or "access denied" errors, for example, could just be coming from a bad external link, or they could be signs of an attacker probing your site. HTTP requests that are pegging a server process could be a sign that a denial of service attack has begun.

It is hard to make the shift from passive to active monitoring. Why? Not because you and your team are not interested in thinking ahead, but because many operations are entrenched in existing processes that are also reactive. And sometimes teams are just unaware that their tool can provide this type of functionality, until one day it does it automatically for you and you have a pleasant surprise.

Active server monitoring can mean the difference between preventing problems before they get a chance to happen and rushing to catch up with trouble after it happens. It is also the difference between running a modern version of an old process and moving forward to a modern software delivery pipeline.

Ready to Make the Shift from Passive to Active Monitoring? Sign up for our free log management tool today.

Happy Holidays from Rapid7

As 2016 comes to a close, we wanted to pause and reflect on what a great year it's been connecting with our customers, partners and the community. We at Rapid7 wanted to reach out and say thank you and best wishes for the holidays and have a happy New Year. Please enjoy this special video sharing the tale of "The Hacker Who Stole Christmas," narrated by Bob Rudis.

Thanks and Giving: #Rapid7GivesBack

This time of the year is often seen as a time for giving thanks. At Rapid7, we are continually thankful for our community – the customers, partners, employees, experts and open-source contributors – who engage with us every day. Our community also includes the places where we live and work and, since one way to show thanks is by giving back, we decided that everyone in the company would take a day in October to show our support and love for our communities.

#Rapid7GivesBack Day was on October 20, 2016, and every single Rapid7 office across the globe closed so our amazing employees (our Moose) could participate in service projects within their communities. These projects ranged from fall cleanups to painting to donation drives to charity fundraisers to supporting open source communities to volunteering at animal shelters and providing meals. We do amazing things when we partner together, and this allowed our team to share that energy and give back to our communities across the globe. Giving back is our way of saying thank you to our communities.

Here's some of the ways we thanked our communities on #Rapid7GivesBack Day and what our Moose had to say about the experience:

Boston and Cambridge Headquarters

Our Boston and Cambridge Moose partnered with TUGG to find several different volunteer opportunities across the city. Here are some of the organizations volunteers supported on #Rapid7GivesBack Day:

Ethos – a private, not-for-profit organization that promotes the independence, dignity, and well-being of the elderly and disabled.
The Gavin Foundation – an organization that offers specialized adolescent residential, community, educational and diversion programs to respond to the needs of youth affected by drug and alcohol abuse, and their families.
Josiah Quincy Elementary School – a Boston public school based in Chinatown serving over 800 kids K-5. Nearly 80 percent of its students are low income and over half are English Language Learners.
United South End Settlements – USES works to build a strong community by improving the education, health, safety, and economic security of low-income individuals and families in and around Boston's historic South End/Lower Roxbury.
Mass Audubon Society in Mattapan – the mission of Mass Audubon is to protect the nature of Massachusetts for people and wildlife.

"A BIG thx to @rapid7 & @BuildingImpact 4 volunteering in the auditorium makeover #TechGivesBack2016 #Rapid7GivesBack @BostonSchools #bpspln pic.twitter.com/rI3ofdDzxL" — BINcA (@BINcA_BPS) October 20, 2016

"Fun day with #TechGivesBack! Thank you to our fantastic volunteers! #rapid7givesback #Gratitude pic.twitter.com/fLUO5C61kx" — Gavin Foundation (@gavinfoundation) October 20, 2016

Alexandria Office

The Alexandria office included an amalgam of Moose from the office and those from around the D.C. area. This group helped support New Hope Housing and the Gartlan House, which provides permanent supportive housing for chronically homeless adult men. The team assisted in yard cleanup at the house and felt the experience was a great way to volunteer in the area.

"Thanks again to the group from @rapid7 who accomplished so much at Gartlan House on Thursday! #Rapid7GivesBack pic.twitter.com/mMmAQe7u7y" — New Hope Housing (@NewHopeHousing) October 23, 2016

Austin Office

Half of the office went to the Austin Animal Center, where volunteers helped walk dogs, play with cats and kittens, and make treats for all the animals. The other half of the office helped deliver nutritious meals and human connection through Meals on Wheels to community members. Austin Moose found the experiences great for getting out of the office to give back and have suggested volunteering much more often throughout the year!

"#rapid7givesback #animalcenter pic.twitter.com/OFHT9rZcT9" — leonardo varela (@leonardovarela) October 20, 2016

Belfast Office

Belfast Moose split into several groups to give back to several local charities including Action Cancer, Cancer Focus, NI Hospice, Simon Community and Assisi Animal Sanctuary. The group gave back by static cycling for charity, organizing donated items, painting and helping with animals; the team kept busy and supported several different organizations in one day.

"#Rapid7 cycling in #forestside shopping centre, to raise money for #actioncancer #Rapid7GivesBack pic.twitter.com/9uMFKRLXt2" — Simon McFerran (@nomismcf) October 20, 2016

Dublin Office

The Dublin office helped support CoderDojo, an open source, volunteer-led community oriented around running free non-profit coding clubs for young people. With impeccable timing, #Rapid7GivesBack Day fell during Europe Code Week 2016, and the Dublin team was able to partner with CoderDojo to help further develop the community platform and content, including pushing forward some core projects.

"hackathon with @logentries @rapid7 team great to see these organisations support the open source @CoderDojo community pic.twitter.com/HLQ0bZPP8w" — Pete☯ (@peter0shea) October 20, 2016

"Delighted to be working with the awesome team at @logentries @rapid7 on our community platform & content #Hacktober https://t.co/pDh988chi1 pic.twitter.com/hasE0V03vb" — ☯CoderDojo☯ (@CoderDojo) October 20, 2016

Los Angeles Office

The Los Angeles office volunteered at the LA Food Bank, which provides food for children, seniors, families and individuals in need. Together, the team sorted 25,173 pounds of food – the equivalent of 20,893 meals. That prep work helped the Food Bank staff get ready to deliver meals the following week. In sharing their experience, the team noted that it was a very humbling experience with regard to understanding how many don't have food and how difficult it is to prepare donations to make sure those who need the support can get it. It was a long day of physical labor but incredibly rewarding knowing that the work would help someone in need later.

Reading Office

The Reading office volunteered at Whitley Park Primary School and helped rebuild a play area for kids. The team helped repaint fences and picnic tables, plant gardens, rake leaves and clean up the space. After a physical day of work, the team agreed it was a great use of time and a good experience to do something charitable for the community. The school shared its appreciation and gratitude and invited Rapid7 Moose back any time to help out.

"Reading ream rebuilding a school play area #rapid7givesback pic.twitter.com/3AsZq1R0Qb" — Sam Humphries (@safesecs) October 20, 2016

Singapore Office

Moose in the Singapore office ran a donation drive within the company to give pre-loved belongings to the Salvation Army. Items were collected over the course of two weeks and were delivered on #Rapid7GivesBack Day.
The team appreciated being able to re-purpose items that would help someone else.Toronto OfficeToronto Moose supported Free Geek Toronto by collecting donated electronics to either dispose of E-Waste properly without damaging the environment or refurbished to get youth into tech. Part of Free Geek Toronto's mission is to promote social and economic justice, focusing on marginalized populations in the Greater Toronto area by reducing the environmental impact of e-waste through reuse and recycling and increasing access to computing and communications technologies. The team helped collect donations by reaching out to friends, family and other businesses in the area. The team appreciated the opportunity to support this local organization that helps give more youth access to technology and open source software.#SocEnt #EWasteDrive #WasteWarriors @rapid7 set up an E Waste Drive at King West Centre in  last week.Thank you all for your donations pic.twitter.com/yvKEdxC4y8— Free Geek Toronto (@FreeGeekToronto) October 27, 2016Remote MooseOur Moose without an official Rapid7 home base participated as well. Projects included community clean up, running charity races or volunteering with local organizations. The effects of #Rapid7GivesBack Day were felt anywhere our Moose are located.Sasha and I made it!  #firefighter5k #techgivesback @rapid7 pic.twitter.com/RqXEvPG9eW— Spencer Seale (@sseale68) October 20, 2016We may have celebrated a little early in the year with #Rapid7GivesBack Day, but we give thanks every day for the partnerships we have – both for the individuals and the places that make our community.For more photos from #Rapid7GivesBack Day, visit our Facebook album.

Using Kernel.load to speed up exploit dev

Originally Posted by egypt: When modifying Metasploit library code, you generally need to restart msfconsole to see the changes take effect. Although we've made some improvements in startup time, it's still not great, and waiting for the whole framework to load for a one-line change can…

Originally Posted by egypt:

When modifying Metasploit library code, you generally need to restart msfconsole to see the changes take effect. Although we've made some improvements in startup time, it's still not great, and waiting for the whole framework to load for a one-line change can be frustrating. Fortunately, Ruby has a simple way to reload a file: Kernel.load. Here's a simple example of how to use it:

    ##
    # $Id$
    ##

    load "./lib/rex/proto/my_new_protocol.rb"
    load "./lib/msf/core/exploit/my_new_protocol.rb"

    require 'msf/core'

    class Metasploit3 < Msf::Exploit::Remote
      include Msf::Exploit::Remote::MyNewProtocol

      def initialize(info={})
        super(update_info(info,
          'Name'        => "My New Protocol Exploit",
          'Description' => %q{ Exploits something in My New Protocol },
          # ...
        ))
      end

      def exploit
        MyNewProtocol.frobnicate(datastore["RHOST"])
      end
    end

If my_new_protocol.rb defines any constants, Ruby will warn that they are being redefined. Generally this is harmless and you can ignore the warnings. This simple technique can greatly decrease development time and works equally well when writing your own lib or modifying an existing one. When you're done with the exploit, simply replace the load lines with appropriate requires and send us a patch!
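To see why the load lines above pick up edits without a restart, here is a minimal standalone sketch (not from the original post; the my_new_protocol.rb file and its frobnicate method are hypothetical stand-ins): require evaluates a file only once per process, while Kernel.load re-evaluates it on every call.

    # Write a tiny stand-in library so the sketch is self-contained.
    File.write("my_new_protocol.rb", <<~'RUBY')
      module MyNewProtocol
        def self.frobnicate(host)
          puts "frobnicating #{host}"
        end
      end
    RUBY

    require "./my_new_protocol"   # => true; the file is read and evaluated
    require "./my_new_protocol"   # => false; already cached, NOT re-read

    # After editing the file, Kernel.load re-evaluates it in place:
    load "./my_new_protocol.rb"   # => true; runs the current file contents
    load "./my_new_protocol.rb"   # => true; and again, picking up any changes

Run it in an irb session and the second require returns false while each load re-executes the file body, which is exactly the behavior the exploit skeleton above relies on during development.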

Metasploit-ation for the Nation

In a couple of weeks, our very own @Mubix (AKA Rob Fuller to those who don't live their life with an @ sign permanently attached to their name!) will be offering Metasploit-ation for the Nation.  Unlike that phrase – which I just made up –…

In a couple of weeks, our very own @Mubix (AKA Rob Fuller to those who don't live their life with an @ sign permanently attached to their name!) will be offering Metasploit-ation for the Nation. Unlike that phrase – which I just made up – Mubix will actually be talking sense as he walks penetration testers through the delightful world of Metasploit Pro in a 4-hour in-depth training session.

Mubix took some time to answer a few questions below to give you a flavor of the training. If you have any additional questions on this, please post them in the comments section below.

[Jen] What's this all about then?
[Mubix] This is going to be a 4-hour practical deep dive in which I'll be showing people the essentials of penetration testing, as well as advanced techniques and uses of Metasploit. The plan is for attendees to walk away with a deeper understanding of penetration testing and how Metasploit can help make their organization more successful with an efficient, effective and unparalleled penetration testing strategy.

[Jen] More specifically, what will the course cover?
[Mubix] The course will cover the following topics:
o    Reconnaissance
o    Network Vulnerability Scanning
o    Maintaining Access & Privilege Escalation
o    Advanced Techniques
o    Pass the Hash Pivot Attacks
There'll also be an opportunity for people to ask any questions they have about penetration testing and Metasploit.

[Jen] Who should attend the Metasploit Pro training?
[Mubix] The training is for anyone interested in penetration testing, from novices to pros. I'm aiming to have something in there for everyone, whether you're hoping to pick up the basics or looking for some more advanced tricks and tips.

[Jen] What are the main details everyone needs to know?
[Mubix] Here you go:
What: 4-Hour Online Metasploit Pro Training with Mubix (includes course materials)
When: May 26th, 11am-3pm Eastern
How Much: $1,000 per person
How to Register: Please contact your Rapid7 Sales Representative or call 617.247.1717.

[Jen] Finally, can you tell us a bit more about yourself and why you're the perfect person to introduce people to Metasploit Pro?!
[Mubix] I spent time doing Systems Administration, Incident Response, Security Infrastructure Design, and Penetration Testing in the DoD and Department of State. I've learned most of what I know on my own, from friends, and by just googling it. I have fun breaking into places, but my passion comes from the constant challenges I'm faced with and the high-demand problem solving that comes with penetration testing. Ultimately it's about getting organizations better prepared for attacks, which may or may not come, but it's better to run faster than the other guy.

To register for the training, please contact your Rapid7 Sales Representative or call 1 (617) 247 1717.

Rapid7: First ASV having its employees qualified as per the new PCI requirements

Rapid7 is one of the 152 worldwide vendors approved by PCIco (the compliance body) to perform PCI scans of merchants' and service providers' external infrastructures. To be considered an ASV (Approved Scanning Vendor), a company must pass an annual test consisting of a scan of a…

Rapid7 is one of the 152 worldwide vendors approved by PCIco (the compliance body) to perform PCI scans of merchants' and service providers' external infrastructures.

To be considered an ASV (Approved Scanning Vendor), a company must pass an annual test consisting of a scan of a specific vulnerable infrastructure (lab) controlled by independent laboratories on behalf of PCIco.

As of mid-April 2011, in addition to the annual testing above, PCIco requires that two of an ASV's employees be qualified as QAEs (Qualified ASV Employees). This new certification consists of an online training of 7 modules (237 slides) covering everything one could ever know about PCI. Candidates have 14 days to take the course and the associated test (60 questions).

As usual, Rapid7 took the initiative and immediately registered two candidates. Today we are proud to be the first of the 152 ASVs out there to have completed this process. Having our employees qualified is the best way to serve our customers.

Didier Godart
