Rapid7 Blog

InsightOps  

3 Core Responsibilities for the Modern IT Operations Manager


In the good old days, IT operations managers were responsible for maintaining the infrastructure, meeting service level agreements, sticking to budget, and keeping employees happy. Life was not easy, but at least it was familiar. You knew your hardware, your software, your employees. You determined service levels based on what you could actually see and touch. You told people what to do and they did it. While IT was perceived to be an expensive cost center, it wasn't an issue as long as the phones worked and the systems ran according to expectation.

That was then, and this is now. Today IT operations managers can still see a lot, but they can touch very little. While some companies still own a significant amount of hardware, a diminishing number are making new hardware purchases. They're moving toward Infrastructure as a Service (IaaS) solutions. IaaS makes hardware, storage, network, software, and all the work required to support such things opaque to the consumer, very much in the same way that the power generators at Niagara Falls are opaque to those using electricity. All the consumer sees are the power outlets in the wall and the monthly bill sent by the power company. By the same token, IT operations managers only see control panels and dashboards. The days of going over to the data center to inspect capital assets are gone. Now, procurement is about getting the best hourly rate for a given service. It's a different way of doing business.

Not only is the nature of hardware in the enterprise changing; software is too. Given the proliferation of continuous integration and continuous delivery techniques used to deploy a greater number of smaller, container-centric applications and services, today's IT operations require less human interaction to make software available to end users. Most of the work is done by scripts and, in some cases, scripts made by other scripts. Applications are becoming more fine-grained. The monolithic application is giving way to applications composed of collections of microservices deployed as ephemeral containers, configured and controlled via orchestration. Increasing the number of fine-grained deployment units and the rate at which they change decreases the likelihood that one person will know every detail of an application.

As applications and services become smaller, the technical authority and responsibility associated with them becomes more decentralized. Whereas in the past a "department" owned an application, in today's world of containers and microservices, small, autonomous teams are the responsible entities. The role of the IT operations manager is no longer that of the expert gatekeeper, but rather that of the knowledgeable facilitator. So then the question becomes, what are the essential responsibilities of the manager in modern IT operations? There are 3 essential responsibilities:

1. To set realistic budgetary expectations
2. To ensure and maintain IT operations transparency
3. To mitigate the impact of automation on the workforce

Allow me to elaborate.

Set realistic budgetary expectations

One of the benefits of implementing cloud-based IaaS and container-based architectures deployed by way of automation is considerable cost savings. According to David Bray, CIO at the Federal Communications Commission: "Back in late 2013, we were spending 85 percent of our budget just to maintain 10-year old legacy systems. However, after a dramatic shift to public cloud and commercial service provider, we saw our maintenance spend drop down to less than 50 percent of our budget."

That's right, 50% savings. This is no small amount. It becomes the stuff of legends. It also becomes the standard by which expectations will be set if left unchecked. It's only natural for stakeholders to think, "We had 50% savings this year. Getting another 50% savings next year is a fair expectation." Yes, automation will reduce costs dramatically at first, but over time the savings will flatten out. Big efficiencies and cost savings are typically achieved early on; it's a common pattern when moving to cloud-based infrastructures. Yet many in upper management are accustomed to making an up-front investment in an initiative with the goal of long-term savings. Thus, those in upper management who are new to cloud-based transformations might treat the significant savings achieved initially as an indicator of good things to come, a sign that greater savings are on the way. Such an expectation is inferred and unrealistic.

Thus, the wise IT operations manager will do well to keep budgetary expectations fact-based. The wise IT operations manager will ensure that the immediate and projected costs and benefits of his or her organization's activities are clearly stated and available to appropriate parties easily, on demand. In other words, transparency is everything. So then, how is transparency achieved? Read on.

Ensure and maintain IT operations transparency

One of the benefits of IaaS is the rise in reporting technology. Gone are the days when it took a day's worth of labor at the end of the month to create and distribute budgetary information through the management chain. In fact, IaaS providers such as AWS, Microsoft Azure and Google Compute Engine make the latest usage and billing information available to consumers at the click of a button. Today stakeholders can be in the know all the time, and should be.

As a result, stakeholders throughout the enterprise want to be in charge of their information infrastructure. To quote Laurence Chertoff, independent consultant to nonprofits and former Director of IT at Npower Inc.: "If the end user cannot modify or administer the technology quickly, I don't want it." It doesn't matter whether a technology is out on AWS or in a server room on the 3rd floor, people want access to their technology. Thus, instead of being a powerbroker of information dissemination, the role of the modern IT operations manager is to make sure that people can get at the information they need, when they need it, in a manner that is easy, appropriate and secure. In addition to providing the transparency that allows users to do as much for themselves as they can and want to do, items such as incident tracking, operational costs, service issues, and project workflow data need to be apparent and accurate. At the end of the day, the modern IT operations manager is a facilitator, not a roadblock, to information transparency. Making systems flow with as little effort as possible is one of the essential responsibilities of modern IT operations management.

Mitigate the impact of automation on the workforce

Automation has an impact. It always has and always will, ever since the days when Dutch windmills ground wheat into flour faster than a human ever could. The good news is that automation technology made it possible for many more people to have more bread at a cheaper price. The bad news is that people who manually ground wheat into flour had to find other types of work to do in order to survive. The same holds true in IT operations. For better or worse, increasing the use of automation in IT operations means that the way people work has changed and will continue to do so.

Steve Mays, CTO of Trizic, has an interesting take on the matter: "Where we used to depend a LOT on the 'art' of super talented people who had a widely varied background from operations to development to help us build and manage systems and software, we now have automation in the cloud. In the past, [engineers] hand crafted the product to be artisanal and got a lot of satisfaction from a job finely crafted. [Today] we look at IT operations more like an assembly line. My mission is to turn out 100K units vs. beautifully [releasing] only 10. From a product output and quality perspective, this is a good thing. However, the systems are no longer your 'friends' that you have a 'close personal relationship to'."

IT operations will become more automated, not less. The use of human labor will fall into two categories. One category is work that requires creativity and imagination, for example system design and incident troubleshooting. The other category is work that is predictable and repetitive, for example requirements gathering and capacity planning. Eventually all predictable and repetitive work will be automated. It's the nature of the dynamic: if machine intelligence can observe a pattern for a long enough period of time, eventually the machine intelligence can emulate the pattern; think of speech recognition. Thus, the modern IT operations manager will be subject to a pressure that is new to IT operations: how to plan for the continuing obsolescence of a portion of the workforce.

To date, IT operations managers have had little reason to worry about employee obsolescence. The conventional thinking is that IT operations employees have always been in demand; if a job is lost in one company, a new job can be found in another. But as comprehensive automation becomes more pervasive, the notion that there will always be another job might be a faulty assumption.

Of course, the IT operations manager can simply not care. The problem is you can't fake sincerity for a long period of time. As more automation sets in and employees face increasing pressure to exert more creativity or work with new, more difficult technologies in order to stay viable, many will turn to management for help. If that help appears as a lot of talk with no action, eventually employees will figure it out. The result is a workplace in which morale is low and personal investment is nonexistent. In such an environment the quality of service degrades and eventually costs increase. Not caring is a solution with a very shallow horizon and poor ROI.

The alternative is to take a new approach. Modern IT operations management is about taking the time to understand the impact of automation upon the workforce and having the foresight to take realistic steps to address the impacts early on. At the least, the impact of a given automation event needs to be investigated and articulated in a transparent manner, particularly with regard to the human impact. Then, realistic strategies to address the expected impact need to be devised. If such strategies require accepting the substitution of human labor with automation, even when the substitution comes at the cost of an employee's job, the enlightened IT operations manager will promote such discussions and foster ways to find solutions for the problems at hand.

In the old days, the IT operations manager would simply pass the issue of automation's impact on the IT operations workforce on to HR. However, given the decentralization of both authority and expertise brought about by automation itself, HR is no longer the expert in these matters. HR simply does not know enough. The IT operations manager is now the authority and expert, even in matters of personnel. Mitigating the impact of automation on human employment will be a growing concern of managers in IT operations. The issue is new and daunting. Yet for some managers it will be an exciting opportunity. Those who can succeed in finding ways to mitigate the impact of automation on those they manage will stand at the forefront of the profession.

Putting It All Together

In the world of IT operations, the old days of managing by edict and command are over. Steve Mays from Trizic sums it up well: "Top down is 100% impossible now. My 30+ year background in tech isn't that useful anymore. I used to guide folks with my experience. Now I guide them to exercise restraint, build good enough features but with high levels of testing to ensure fewer issues. I let them tell me all about the new frameworks/libraries/tech that they think we should use."

Today, given the continuous demands that enterprises make upon IT operations, IT operations managers need to provide their stakeholders with more complex services, at faster rates of implementation and higher levels of reliability. Automation technologies combined with greater transparency into IT operations are key elements for meeting these demands. Also, given the growing trend toward decentralization, IT managers can no longer be expected to be know-it-alls. Real knowledge and authority resides in the small teams providing the solutions needed. The job of the modern IT operations manager is to be a knowledgeable facilitator, one who establishes and maintains an appropriately open and transparent environment in which realistic performance and budgetary expectations can be set. The modern IT operations manager sets the guidelines within which good work can happen.

Also, given the growing prevalence of advanced automation technologies on the IT landscape and the impact that such automation will have on human employment, the modern IT operations manager will, at the least, articulate the human ramifications of the automation initiatives at hand and foster an environment in which reasonable discussion about the impact of automation can take place.

The modern IT operations manager understands the dynamics of technical authority within the decentralized enterprise and the importance of providing timely, accurate information to support such authority. He or she knows how to both inspire and guide others to find the tools and techniques necessary to allow the enterprise to compete successfully in the marketplace for the benefit of all it touches: customers, employees, and shareholders. The modern IT operations manager understands that the real power of his or her position is to empower others to act reasonably, affordably, and safely. This is no longer power expressed by edict. Rather, it is power that comes from influence. Power expressed as edict has the shelf life of a manager's tenure. Power that comes from influence lives on well after the manager has left the company. I'll leave it to the reader to decide which type of manager he or she wishes to be.

For information on Rapid7's IT operations solutions, click here.

What is BDD Testing: Practical Examples of Behavior Driven Development Testing


The Need for Behavior Driven Development (BDD) Testing Tools

It should come as no surprise to learn that testing is at the heart of our engineers' daily activities. Testing is intrinsic to our development process, both in practical terms and in our thinking. Our engineers work with complex systems that are made up of complex components. Individual components may have many external dependencies. When testing, the scope of what is to be tested is important; it can be system wide, focused on a particular feature, or down deep in the methods and classes of the code. To be able to focus our testing, we want to be able to mimic or 'mock' the behavior of external dependencies using a BDD testing tool. The purpose of this post is to walk through a couple of simple code examples and provide an overview of, and explain the need for, Behavior Driven Development (BDD) testing.

BDD, Acceptance Tests, and Automation

At Rapid7 we apply the BDD methodology, which is an extension of Test Driven Development (TDD). BDD and TDD both advocate writing tests first; for BDD this means acceptance tests (ATs), followed by unit tests driven by the ATs. For now, let's say that at the outset of any task, the BDD focus is on capturing the required behavior in user stories, and from these acceptance tests (ATs) are written. Over time a large number of ATs are generated, so not only is the methodology important but also the supporting tools to automate and manage our work. For some background on this, another colleague, Vincent Riou, has described the automated testing, continuous integration and code-quality control tools that we use.

BDD Testing Example: Ubiquitous Language and AT Scenarios

To borrow from Vincent's post, "The idea with acceptance testing is to write tests (or behavioral specifications) that describe the behavior of your software in a language which is not code but is more precise than standard English." Doing this allows people who are not software engineers, but have knowledge of the requirements, such as Product Management or Marketing, to write the scenarios that make up our ATs. This is a powerful thing when it comes to capturing the required behavior. People in the BDD community sometimes refer to this as a 'Ubiquitous Language'. Again borrowing from what Vincent states, "Additionally, those tests can be run using a parser which will allow you to easily match your language to functions in the programming language of your choice."

Here is an example AT scenario, in this case following a template and language constructs used by the Cucumber / Gherkin parser:

    Given the customer has logged into their current account
    And the balance is shown to be 100 euros
    When the customer transfers 75 euros to their savings account
    Then the new current account balance should be "25" euros

Example step definition in Python

The following is an example of mapping a step definition to a Python function. In this case, the final Then step is shown. This is where an assert is used to verify whether the AT will pass or fail, depending on the final account balance.

    @Then('^the new current account balance should be "([^"]*)" euros$')
    def the_new_current_account_balance_should_be(self, expected_bal):
        expected_bal = int(expected_bal)
        assert expected_bal >= 0, "Balance cannot be negative"
        new_bal = get_balance(account_id)
        assert int(new_bal) == expected_bal, "Expected to get %d euros. Instead got %d euros" % (expected_bal, new_bal)

Since we are writing our tests before the actual implementation of the behavior, the AT will fail, so it's important that the error message thrown by the assert is meaningful. Remember also that an AT may fail at a future date if some behavior of the 'system under test' (SUT) is modified, intentionally or not; this is part of the value of having a body of automated ATs.

Mocking Behavior of External Dependencies

The components and sub-systems that we work with have many external dependencies that can be complex. When running an AT against a particular component, it may be necessary to mock the external dependencies of that component. This is different from using a mocking framework as described below for unit testing. Instead, this is about trying to mimic the behavior of a second black box so we can test the behavior of the first black box. In our work we encounter this all the time, especially where a SUT has a dependency on the behavior of an external server. One approach, for example, is to build a simple mock server in Python using the Bottle module, which gives us a basic server to build on. We mock the behavior that is required to meet the needs of the SUT. Note that this is not building a duplicate of an existing component; we are trying to mimic the behavior as seen by the SUT to complete our testing.
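To make that concrete, here is a minimal sketch of such a mock server using Bottle. The route, port and JSON payload are assumptions made up for this example, not details from the original system; the point is simply to return a canned response that looks, to the SUT, like the real external server.

    # Minimal mock-server sketch using the Bottle module.
    # The route, port and payload below are illustrative assumptions;
    # mock only the behavior your SUT actually depends on.
    from bottle import Bottle, response

    app = Bottle()

    @app.route('/api/course/<course_id>')
    def course_details(course_id):
        # Return a canned JSON body that mimics the real server's response.
        response.content_type = 'application/json'
        return '{"course_id": "%s", "students": 10}' % course_id

    if __name__ == '__main__':
        # Point the SUT's configuration at this host and port while the AT runs.
        app.run(host='localhost', port=8081)

During a test run, the SUT would be configured to talk to localhost:8081 instead of the real server, so the acceptance test exercises only the behavior of the component under test.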
BDD Testing Example: Unit Testing

After completing the acceptance tests come the unit tests. These are more closely coupled with the code of the final implementation, although at this stage we still do not start our implementation until the required unit tests are in place. This approach of acceptance tests and unit tests is also applicable to GUIs.

Unit Testing Example: Mocking with some JSON

The following example combines the JUnit framework with the Mockito library to create mock objects. In this example we want to show, in a simple way, a technique for mocking a response that contains data in JSON format from a GET request on some external server. The test data, in JSON format, can be an actual sample captured in a live production scenario. We are also going to use a Google library to help with handling the JSON file.

In this simple example we are testing a method 'getCountOfStudents', found in a data access class, that is used by our imaginary application to get the number of students on a course using that course ID. The actual details for that course are held on some database externally; for the purposes of testing we don't care about this database. What we are interested in, however, is that the method 'getCountOfStudents' has a dependency on another piece of code: it calls 'jsonGetCourseDetails', which is found in an object called 'HttpClient'. As the name implies, this object is responsible for handling HTTP traffic to some external server, and it is from this server that our application gets course data. For our test to work we therefore need to mimic the response from the server, which returns the data in JSON format, which means we want to mock the response of 'jsonGetCourseDetails'.

The following code snippets come from a JUnit test class that is testing the various methods found in the class that defines our data access object. Note the required imports for the Mockito and Google libraries are added.

    import static org.mockito.Matchers.any;
    import static org.mockito.Matchers.eq;
    import static org.mockito.Mockito.*;
    import com.google.common.io.Resources;
    import com.google.common.base.Charsets;

Prior to running the test, a mock object of the HttpClient is created using the test class 'setup()' method, and tidied up afterwards with 'teardown()'.

    private HttpClient httpClient;

    @Before
    public void setup() {
        httpClient = mock(HttpClient.class);
        ...
    }

    @After
    public void teardown() {
        reset(httpClient);
        ...
    }

For the test method itself, we use Mockito's when, so that when 'jsonGetCourseDetails' on the mock 'httpClient' object is called with the 'course_id', it returns a mock response. We create the mock response using some test data, in JSON, that we have in a file 'course_details.json'.

    @Test
    public void testGetCountOfStudentsWithCourseID() throws IOException {
        String course_id = "CS101";
        when(httpClient.jsonGetCourseDetails(eq(course_id)))
            .thenReturn(getMockResponse("./tests/course/course_details.json"));
        Integer count = dao.getCountOfStudents(course_id);
        Assert.assertEquals(10, count.intValue());
    }

To create the mock response there is a utility method we have written that uses the Google library's 'Resources' class. For this example the method simply returns a mock response as the String from 'Resources.toString'.

    private String getMockResponse(String jsonResource) {
        String mockResponse = null;
        try {
            mockResponse = Resources.toString(Resources.getResource(jsonResource), Charsets.UTF_8);
        } catch (IOException ex) {
            ex.printStackTrace();
        }
        return mockResponse;
    }

At this stage we have a unit test with a mock object and we can use data in JSON format. Of course, also at this stage the test method will fail. Why? Because the implementation of 'dao.getCountOfStudents(course_id)' has not yet been done! We are writing our tests first, mocking the external dependencies (behavior) our code is reliant on. When writing the code for the implementation, we will know we are finished when all the tests are passing.

Ready to Implement BDD Testing? Try our free log management tool today.

5 Ways to Use Log Data to Analyze System Performance


Analyzing System Performance Using Log Data

Recently we examined some of the most common behaviors that our community of 25,000 users looked for in their logs, with a particular focus on web server logs. In fact, our research identified the top 15 web server tags and alerts created by our customers (you can read more about these in our https://logentries.com/doc/community-insights/ section), and you can also easily create tags or alerts based on the patterns to identify these behaviors in your systems' log data.

This week we are focusing on system performance analysis using log data. Again we looked across our community of over 25,000 users and identified five ways in which people use log data to analyze system performance. As always, customer data was anonymized to protect privacy. Over the course of the next week, we will be diving into each of these areas in more detail and will feature customers' first-hand accounts of how they are using log data to help identify and resolve issues in their systems and analyze overall system performance. Our research looked at more than 200,000 log analysis patterns from across our community to identify important events in log data. With a particular focus on system performance and IT operations related issues, we identified the following five areas as trending and common across our user base.

5 Key Log Analysis Insights

1. Slow Response Times

Response times are one of the most common and useful system performance measures available from your log data. They give you an immediate understanding of how long a request is taking to be returned. For example, web server log data can give you insight into how long a request takes to return a response to a client device. This can include time taken for the different components behind your web server (application servers, DBs) to process the request, so it can offer an immediate view of how well your application is performing. Recording response times from the client device/browser can give you an even more complete picture, since this also captures page load time in the app/browser as well as network latency.

A good rule of thumb when measuring response times is to follow the three response time limits outlined by Jakob Nielsen in his publication 'Usability Engineering' back in 1993 (still relevant today): 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, and 10 seconds is about the limit for keeping the user's attention focused on the dialogue.

Slow response time patterns almost always follow the pattern below:

    response_time>X

In this context, response_time is the field value representing the server or client's response, and 'X' is a threshold. If this threshold is exceeded, you want the event to be highlighted or to receive a notification so that you and your team are aware that somebody is having a poor user experience.

2. Memory Issues and Garbage Collection

Out-of-memory errors can be pretty catastrophic, as they often result in the application crashing due to lack of resources. Thus, you want to know about these when they occur; creating tags and generating notifications via alerts when these events occur is always recommended. Your garbage collection behavior can be a leading indicator of out-of-memory issues. Tracking this, and getting notified if heap used vs. free heap space is over a particular threshold or if garbage collection is taking a long time, can be particularly useful and may point you toward memory leaks. Identifying a memory leak before an out-of-memory exception occurs can be the difference between a major system outage and a simple server restart until the issue is patched. Furthermore, slow or long garbage collection can also be a reason for a user experiencing slow application behavior. During garbage collection, your system can slow down; in some situations it blocks until garbage collection is complete (e.g. with 'stop the world' garbage collection).

Below are some examples of common patterns used to identify the memory related issues outlined above:

    Out of memory
    exceeds memory limit
    memory leak detected
    java.lang.OutOfMemoryError
    System.OutOfMemoryException
    memwatch:leak: Ended heapDiff
    GC AND stats

3. Deadlocks and Threading Issues

Deadlocks can occur in many shapes and sizes and can have a range of negative effects, from simply slowing your system down to bringing it to a complete halt. In short, a deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does. For example, we say that a set of processes or threads is deadlocked when each thread is waiting for an event that only another process in the set can cause. Not surprisingly, deadlocks feature as one of our top 5 system performance related issues that our users write patterns to detect in their systems.

Most deadlock patterns simply contain the keyword 'deadlock', but some of the common patterns follow this structure:

    deadlock
    Deadlock found when trying to get lock
    Unexpected error while processing request: deadlock;

4. High Resource Usage (CPU/Disk/Network)

In many cases, a slowdown in system performance may not be the result of any major software flaw, but instead a simple case of the load on your system increasing without increased resources available to deal with it. Tracking resource usage can allow you to see when you require additional capacity so that you can kick off more server instances (for example).

Example patterns used when analyzing resource usage:

    metric=/CPUUtilization/ AND minimum>X
    cpu>X
    disk>X
    disk is at or near capacity
    not enough space on the disk
    java.io.IOException: No space left on device
    insufficient bandwidth

5. Database Issues and Slow Queries

Knowing when a query failed can be useful, since it allows you to identify situations when a request may have returned without the relevant data, and thus helps you identify when users are not getting the data they need. There can be more subtle issues, however, such as when a user is getting the correct results but the results are taking a long time to return. While technically the system may be fine (and bug-free), a slow user experience hurts your top line. Tracking slow queries allows you to track how your DB queries are performing. Setting acceptable thresholds for query time and reporting on anything that exceeds these thresholds can help you quickly identify when your user's experience is being affected.

Example patterns:

    SqlException
    SQL Timeout
    Long query
    Slow query
    WARNING: Query took longer than X
    Query_time > X
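Putting a few of the patterns above together, here is a small illustrative sketch in plain Python (not a product's query or alerting syntax; the response_time field name, thresholds and sample lines are assumptions for the example) showing how such patterns can be turned into simple tags on incoming log lines.

    # Illustrative only: tag log lines that match a few of the patterns above.
    # The response_time field name, thresholds and sample lines are assumptions,
    # not a specific product's query or alerting syntax.
    import re

    SLOW_RESPONSE = re.compile(r'response_time=(?P<seconds>\d+(?:\.\d+)?)')
    MEMORY_ISSUE = re.compile(
        r'java\.lang\.OutOfMemoryError|System\.OutOfMemoryException|memory leak detected',
        re.IGNORECASE)
    DEADLOCK = re.compile(r'deadlock', re.IGNORECASE)

    def tag_line(line, slow_threshold=1.0):
        tags = []
        match = SLOW_RESPONSE.search(line)
        if match and float(match.group('seconds')) > slow_threshold:
            tags.append('slow_response')
        if MEMORY_ISSUE.search(line):
            tags.append('memory_issue')
        if DEADLOCK.search(line):
            tags.append('deadlock')
        return tags

    if __name__ == '__main__':
        sample_lines = [
            'GET /checkout status=200 response_time=3.2',
            'java.lang.OutOfMemoryError: Java heap space',
            'Deadlock found when trying to get lock',
        ]
        for line in sample_lines:
            print(tag_line(line), line)

A real pipeline would stream lines from your log source and raise notifications rather than print, but the core idea is the same: a named pattern plus a threshold becomes a tag or an alert.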
Continued Log Data Analysis

As always, let us know if you think we have left out any important issues that you like to track in your log data. To start tracking your own system performance, sign up for our free log management tool and include the patterns listed above to automatically create tags and alerts relevant for your system. Ready to start getting insights from your applications? Try InsightOps today.

How to Combine D3 with AngularJS


The Benefits and Challenges of D3 Angular Combination

Today we'll be focusing on how to combine D3 with the AngularJS framework. As we all know, the Angular and D3 frameworks are very popular, and when they work together they can be very powerful and helpful for creating dashboards. But they can also be challenging and confusing, especially when you are new to these frameworks. The right way to incorporate D3 with Angular is to use custom directives. Directives in Angular are essentially functions that are executed on a DOM element. Let's go through a simple example together.

D3 Bar Chart Example

Our first step is to create a factory which will enable us to inject the D3 module into our custom directive.

    angular.module('d3', [])
      .factory('d3Service', [function(){
        var d3;
        // insert d3 code here
        return d3;
      }]);

After creating the factory, we can now inject the module into our main app.

    angular.module('app', ['d3']);

For the purpose of this example, our directive will display a simple bar chart. We can call the directive from an HTML tag.

    <div blog-bar-chart bar-height="20" bar-padding="5"></div>

We inject our d3 module into our custom directive.

    angular.module('myApp.directives', ['d3'])
      .directive('blogBarChart', ['d3Service', function(d3Service) {
        return {
          restrict: 'EA',
          // directive code
        };
      }]);

In order to use the D3 library, we need to include it in our service. You can either copy and paste the entire D3 library or just reference a CDN.

    scriptTag.src = 'http://d3js.org/d3.v3.min.js';

In the service, we will use the notion of a promise. We need to wait for the promise to resolve by using the "then" method of our d3Service. For simplicity, we will use a link function, which registers listeners and sets up binding.

    angular.module('myApp.directives', ['d3'])
      .directive('blogBarChart', ['d3Service', function(d3Service) {
        return {
          restrict: 'EA',
          scope: {},
          link: function(scope, element, attrs) {
            d3Service.d3().then(function(d3) {
              // d3 is the raw d3 object
            });
          }
        };
      }]);

We are now ready to write actual D3 code inside our AngularJS directive. The bar chart can also be made dynamic for the developer. We can pass attributes such as the width of the bar chart to the directive and then use them with the D3 code. In our example we will set up an event listener to track changes in the size of the window.

    window.onresize = function() {
      scope.$apply();
    };

After doing that we can focus on the D3 code for the bar chart.

    var svg = d3.select(element[0])
      .append('svg')
      .style('width', '100%');

One important thing to remember before the previous statement is that if the SVG already existed, it would need to be emptied. Here is how you would remove all the preexisting SVG elements.

    svg.selectAll('*').remove();

D3 and Angular: Easy Implementation

This example only shows a very small subset of the D3 code that could go into our directive. We also only worked with hardcoded data, whereas in a real-world scenario we would use data from a back-end service. Thanks to AngularJS, we are able to easily integrate that data into our custom D3 directive by using bi-directional binding. All in all, combining D3 with AngularJS now looks straightforward and relatively easy. Let us know how it goes for you!

Combine D3 and Angular Today

Ready to see what combining D3 and Angular can do for you? Check out our free log management tool.

What are Javascript Source Maps?


It's generally a good practice to minify and combine your assets (Javascript & CSS) when deploying to production. This process reduces the size of your assets and dramatically improves your website's load time. Source maps create a map from these compressed asset files back to the source files. This source map allows you to debug and view the source code of your compressed assets, as if you were actually working with the original CSS and Javascript source code. Take a look at jQuery minified & combined code that was generated from the original source code. The code is practically unreadable and would be difficult to debug. But, as we all know, no matter how thoroughly you test, sometimes bugs will fall through the cracks. This is why it's useful to debug Javascript code in production, and that's when source maps come in handy.

How do you use Javascript source maps?

With InsightOps we use UglifyJS for minification and Javascript source map generation. UglifyJS is a NodeJS library written in Javascript.

To install UglifyJS with NPM:

    npm install uglify-js -g

Minify the files and generate source maps:

    uglifyjs file1.js file2.js -o output.js --source-map output.map.js

The command above tells UglifyJS to:

1. Take file1.js and file2.js as input
2. Compress the input files and output them to output.js
3. Generate the source map for the compressed file and output it to output.map.js

Marrying source maps and Django Compressor

Django Compressor is a great Django plugin to mark assets for minification right inside your templates:

    {% load compress %}
    {% compress js %}
    <script src="/static/js/one.js" type="text/javascript" charset="utf-8"></script>
    <script type="text/javascript" charset="utf-8">obj.value = "value";</script>
    {% endcompress %}

Behind the scenes you can develop logic to combine and minify the files with any algorithm or third party tools of your choosing; a short sketch of one way to script this step appears at the end of this post.

Browser support

Source maps are a new addition to the developer toolbox. Although the source maps spec lives in Google docs (no kidding), they're supported by all major browsers: Chrome, Safari, Firefox, and IE11. By default, source maps are disabled so your users will not incur any unnecessary bandwidth overheads. To enable source maps in Google Chrome, go to Developer Tools, click the little cog icon, and then make sure that "Enable Javascript source maps" is checked. That's it. Now each compressed asset file contains a link pointing to its source map, and we've just told Chrome not to ignore them.

See Javascript source maps in action

If you'd like to see Javascript source maps in action, check out our free log management tool and take a look at our source code. The files highlighted in green are compressed Javascript files; the folders highlighted in blue are generated from source maps and contain the original source code that's mapped onto the compressed files. We can set breakpoints on mapped code, inspect variables, step through, and do pretty much anything that we can with original code. Pretty cool, huh?
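As a rough illustration of that "behind the scenes" step (not the actual InsightOps build pipeline), the sketch below simply shells out to the uglifyjs command shown earlier; the file names and output paths are placeholders.

    # Illustrative build-script sketch: run UglifyJS over a list of files to
    # produce a minified bundle plus its source map. File names and output
    # paths are placeholders, not the actual InsightOps build configuration.
    import subprocess

    def minify_with_source_map(sources, out_js='output.js', out_map='output.map.js'):
        cmd = ['uglifyjs'] + list(sources) + ['-o', out_js, '--source-map', out_map]
        # check_call raises CalledProcessError if UglifyJS exits non-zero.
        subprocess.check_call(cmd)
        return out_js, out_map

    if __name__ == '__main__':
        minify_with_source_map(['file1.js', 'file2.js'])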

12 Days of HaXmas: The Gift of Endpoint Visibility and Log Analytics


Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the "gifts" we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

Machine-generated log data is probably the simplest and one of the most used data sources for everyday use cases such as troubleshooting, monitoring, security investigations ... the list goes on. Since log data records exactly what happens in your software over time, it is extremely useful for understanding what caused an outage or security vulnerability. With technologies like InsightOps, it can also be used to monitor systems in real time by looking at live log data, which can contain anything from resource usage information, to error rates, to user activity. So in short, when used for the right job, log data is extremely powerful... until it's NOT!

When is it not useful to look at logs? When your logs don't contain the data you need. How many times during an investigation have your logs contained enough information to point you in the right direction, but then fallen short of giving you the complete picture? Unfortunately, it is quite common to run out of road when looking at log data; if only you had recorded 'user logins', or some other piece of data that was important with hindsight, you could figure out what user installed some malware and your investigation would be complete. Log data, by its very nature, provides an incomplete view of your system, and while log and machine data is invaluable for troubleshooting, investigations and monitoring, it is generally at its most powerful when used in conjunction with other data sources. If you think about it, knowing exactly what to log up front to give you 100% code or system coverage is like trying to predict the future. Thus when problems arise or investigations are underway, you may not have the complete picture you need to identify the true root cause.

So our gift to you this HaXmas is the ability to generate log data on the fly through our new endpoint technology, InsightOps, which enables you to fill in any missing information during troubleshooting or investigations. InsightOps is pioneering the ability to generate log data on the fly by allowing end users to ask questions of their environment and returning answers in the form of logs. Essentially, it will allow you to create synthetic logs which can be combined with your traditional log data, giving you the complete picture! It also gives you all this information in one place (so no need to combine a bunch of different IT monitoring tools to get all the information you need). You will be able to ask anything from 'what processes are running on every endpoint in my environment' to 'what is the memory consumption' of a given process or machine. In fact, our vision is to allow users to ask any question that might be relevant for their environment, such that you will never be left in the dark and never again have to say 'if only I had logged that.'

Interested in trying InsightOps for yourself? Sign up here: https://www.rapid7.com/products/insightops/ Happy HaXmas!

Announcing InsightOps - Pioneering Endpoint Visibility and Log Analytics


Our mission at Rapid7 is to solve complex security and IT challenges with simple, innovative solutions. Late last year Logentries joined the Rapid7 family to help drive this mission. The Logentries technology itself had been designed to reveal the power of log data to the world and had built a community of 50,000 users on the foundations of our real-time, easy to use yet powerful log management and analytics engine.

Today we are excited to announce InsightOps, the next generation of Logentries. InsightOps builds on the fundamental premise that in a world where systems are increasingly distributed, cloud-based and made up of connected/smart devices, log and machine data is inherently valuable for understanding what is going on, be that from a performance perspective, when troubleshooting customer issues or when investigating security threats. However, InsightOps also builds on a second fundamental premise, which is that log data is very often an incomplete view of your system, and while log and machine data is invaluable for troubleshooting, investigations and monitoring, it is generally at its most powerful when used in conjunction with other data sources. If you think about it, knowing exactly what to log up front to give you 100% code or system coverage is like trying to predict the future. Thus when problems arise or investigations are underway, you may not have the complete picture you need to identify the true root cause.

To solve this problem InsightOps allows users to ask questions of specific endpoints in your environment. The endpoints return answers to these questions, in seconds, in the form of log events such that they can be correlated with your existing log data. I think of it as being able to generate 'synthetic logs' on the fly: logs designed to answer your questions as you investigate or need vital missing information. How often have you said during troubleshooting or an investigation, "I wish I had logged that..."? Now you can ask questions in real time to fill in the missing details, e.g. "who was the last person to have logged into this machine?"

InsightOps combines both log data and endpoint information such that users can get a more complete understanding of their infrastructure and applications through a single solution. InsightOps will now deliver this IT data in one place and thus avoids the need for IT professionals to jump between several disparate tools in order to get a more complete picture of their systems. By the way, this is the top pain point IT professionals have reported across lots and lots of conversations that we have had, and that we continue to have, with our large community of users.

To say I am excited about this is an understatement. I've been building and researching log analytics solutions for more than 10 years, and I truly believe the power provided by combining logs and endpoints will be a serious game changer for anybody who utilizes log data as part of their day-to-day responsibilities, be that for asset management, infrastructure monitoring, maintaining compliance or simply achieving greater visibility, awareness and control over your IT environment.

InsightOps will also be providing some awesome new capabilities beyond our new endpoint technology, including:

Visual Search: Visual search is an exciting new way of searching and analyzing trends in your log data by interacting with auto-generated graphs. InsightOps will automatically identify key trends in your logs and will visualize these when in visual search mode. You can interact with these to filter your logs, allowing you to search and look for trends in your log data without having to write a single search query.

New Dashboards and Reporting: We have enhanced our dashboard technology, making it easier to configure dashboards as well as providing a new, slicker look and feel. Dashboards can also be exported to our report manager, where you can store and schedule reports, which can be used to provide a view of important trends, e.g. reporting to management or for compliance reporting purposes.

Data Enrichment: Providing additional context and structuring log data can be invaluable for easier analysis and ultimately to drive more value from your log and machine data. InsightOps enhances your logs by enriching them in 2 ways: (1) by combining endpoint data with your traditional logs to provide additional context, and (2) by normalizing your logs into a common JSON structure so that it is easier for users to work with, run queries against, build dashboards, etc. (A short illustrative sketch of this kind of normalization appears below.)

As always, check it out and let us know what you think; we are super excited to lead the way into the next generation of log analytics technologies. You can apply for access to the InsightOps beta program here: https://www.rapid7.com/products/insightops/beta-request
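To illustrate the normalization idea mentioned under Data Enrichment above, here is a short, hypothetical sketch; the input layout and the JSON field names are assumptions for the example, not the InsightOps schema.

    # Hypothetical sketch of normalizing a plain-text log line into JSON.
    # The input layout and output field names are assumptions for illustration,
    # not the actual InsightOps schema.
    import json
    import re

    LINE = re.compile(r'(?P<timestamp>\S+ \S+) (?P<level>[A-Z]+) (?P<message>.*)')

    def normalize(raw_line, source='webserver'):
        match = LINE.match(raw_line)
        if not match:
            # Fall back to wrapping the raw line so nothing is dropped.
            return json.dumps({'source': source, 'message': raw_line})
        doc = match.groupdict()
        doc['source'] = source
        return json.dumps(doc)

    if __name__ == '__main__':
        print(normalize('2017-01-05 10:01:02 ERROR disk is at or near capacity'))

Once every line shares a common structure like this, queries, dashboards and enrichment with endpoint data all become much easier to build.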

Featured Research

National Exposure Index 2017

The National Exposure Index is an exploration of data derived from Project Sonar, Rapid7's security research project that gains insights into global exposure to common vulnerabilities through internet-wide surveys.

Learn More

Toolkit

Make Your SIEM Project a Success with Rapid7

In this toolkit, get access to Gartner's report “Overcoming Common Causes for SIEM Solution Deployment Failures,” which details why organizations are struggling to unify their data and find answers from it. Also get the Rapid7 companion guide with helpful recommendations on approaching your SIEM needs.

Download Now

Podcast

Security Nation

Security Nation is a podcast dedicated to covering all things infosec – from what's making headlines to practical tips for organizations looking to improve their own security programs. Host Kyle Flaherty has been knee-deep in the security sector for nearly two decades. At Rapid7 he leads a solutions-focused team with the mission of helping security professionals do their jobs.

Listen Now