Rapid7 Blog

Modern Applications Require Modern Dynamic Application Security Testing (DAST) Solutions

Is your Dynamic Application Security Testing (DAST) solution leaving you exposed? We all know the story of the Emperor's New Clothes. A dapper Emperor is convinced by a tailor that he has the most incredible set of clothes, visible only to the wise. The Emperor purchases them but cannot see them, because it is all a ruse: there are no clothes. Unwilling to admit that he doesn't see them, he wanders out in public in front of all of his subjects, proclaiming the clothes' beauty, until a child screams out that the Emperor is naked.

Evolving Applications

If there is one thing we know for sure in application security, it's that applications continue to evolve. This evolution continues at such a rapid pace that both security teams and vendors have trouble keeping up. Over the last several years, there have been a few major shifts in how applications are built. For several years now, we have been security testing multi-page, AJAX-driven applications powered by APIs, and now we're seeing more and more Single Page Applications (SPAs). This is happening across all industries and at organizations of all sizes. Take Gmail, for example: compare one of the original versions of Gmail with a recent version. Today's Gmail is a classic example of a modern application. As security professionals, we have built our programs around automated solutions like DAST, but how are DAST solutions keeping up with these changes?

DAST Solutions - The Widening Coverage Gap

Unfortunately, most application security scanners have failed to keep up with these relatively recent evolutions. Web scanners were originally architected in the days of classic web applications, when applications were static and relatively simple HTML pages. While scanners have never covered and will never cover an entire web application, they should cover as much as possible. Unfortunately, the coverage gap has widened in recent years, forcing security teams to conduct even more manual testing, particularly of APIs and Single Page Applications. But with over-burdened and under-resourced application security teams, testing by hand just doesn't cut it.

Closing the Gap with Rapid7 AppSpider

Of course, we don't think manual testing is an acceptable solution. Application security teams and application scanners can and should close this coverage gap with automation to improve both the efficiency (reduce manual effort) and effectiveness (find more vulnerabilities) of security efforts. If this is something that interests you, you have come to the right place! Keeping up with application technology is one of our specialties. The application security research team at Rapid7 has been committed to maximum coverage since AppSpider was created. Our customers rely on us to keep up with the latest application technologies and attack techniques so that they can leverage the power of automation to deliver a more effective application security program. If your solution isn't effectively addressing your applications and you are looking for a way to test APIs, dynamic clients and SPAs more automatically, download a free trial of AppSpider today! To learn more, visit www.rapid7.com.

For more information on how to reduce your application security exposure, check out these resources:

- 7 Questions to Ask your DAST Security Vendor
- Whiteboard Wednesday: Keeping up with Application Complexity
- Blog: AppSpider's Got Swagger

AppSpider's Got Swagger: The first end-to-end security testing for REST APIs

We are thrilled to announce a major new innovation in application security testing. AppSpider is the first Dynamic Application Security Testing (DAST) solution capable of testing Swagger-enabled APIs. Swagger is one of the most popular frameworks for building APIs, and the ability to test Swagger-enabled APIs is not only a huge time savings for application security testing experts, but also enables Rapid7 customers to more rapidly reduce risk.

Why does this matter?

Modern applications make liberal use of APIs. APIs are powering mobile apps like Twitter and Facebook, and they're providing rich client experiences like Gmail. They are also powering the Internet of Things (IoT) – APIs are what connect the billions of IoT devices to the cloud, where the data they collect is processed, crunched and made useful. APIs have enabled the complex web of applications that exists today in almost every corporate and government environment and, at the same time, have quickly become one of the most difficult challenges for security teams, because most DAST solutions are blind to them. These modern problems, like API security testing, require modern solutions. AppSpider is a modern DAST solution designed for today's connected world. DAST solutions must be relevant for today's environment.

Remaining relevant in today's inter-connected world

In today's connected world, security professionals are challenged with securing exploding digital ecosystems that touch every facet of their business, from customers and shareholders to employees and partners. These digital ecosystems have become a complex tapestry of old and new web applications, web services and APIs that are highly connected to each other. Adding to the complexity, the Internet of Things (IoT) is now driving tremendous innovation by connecting our physical world to our digital one. This inter-connected network of applications is constantly accessing, sharing and updating critical sensitive data. Your company's data is one of your most precious assets, and we know that securing that data is what keeps you up at night. It keeps us up at night too.

We look at today's application ecosystems as having three pillars:

- Web applications and web services
- Internet of Things (IoT)
- Connected applications (connected by RESTful APIs)

We at Rapid7 are dedicated to bringing you solutions that are relevant to today's ecosystem. We are committed to delivering solutions that can help you be effective at securing your organization's data, even in the highly connected and complex world we are in.

AppSpider: Modern DAST for a Connected World

Understanding how AppSpider addresses today's connected, modern technologies requires a little history about DAST. Legacy DAST solutions communicate with applications through the web front-end in order to identify potential security vulnerabilities in the web application and architectural weaknesses. Most DAST solutions first perform a "crawl" of the client interface to understand the application, and then they conduct an "attack" or "audit" to find the vulnerabilities. But with newer applications that have rich clients and RESTful APIs, less and less of the application can effectively be crawled. Applications are no longer just HTML and JavaScript, which are more easily crawled.

In dynamic application security testing, we're looking for a high application coverage rate, but many security teams have found that their coverage has actually eroded in recent years as their applications have been modernized and their legacy DAST solution has not kept pace. Today's applications – think about Amazon and Google – have rich clients with mini-applications nested inside of them and APIs on the back end checking and updating other data. These applications cannot be crawled using the legacy crawl-and-attack DAST approach. There are many faceless parts of the application, deep below the surface, that have to be analyzed by a scanner in a different way. Traditional crawling only works for the first pillar described above, web applications. A modern DAST solution must be relevant for all three pillars. Let us be the first to say it: legacy DAST is dead. It's time for modern DAST solutions. AppSpider has moved beyond the crawl-and-attack framework and is able to analyze these modern applications, even the portions that it can't crawl. It's capable of understanding IoT and interconnected applications because it can now analyze and test a Swagger-enabled REST API.

APIs: A Source of Pain for Application Security Experts

Unfortunately, APIs carry the exact same security risks that we have been fighting in web applications for years. APIs enable traffic to pass through normal corporate defenses like network firewalls and, just like web applications, they are vulnerable to SQL Injection, XSS and many of the attacks we're used to, because they access sensitive corporate data and pass it back and forth to all kinds of applications. Today's APIs have newer architectures and names like RESTful interfaces, microservices or just "APIs," and they have enabled developers to rapidly deliver highly scalable solutions that are easy to modify and extend. As great as APIs are for developers and end users, they have created some very serious challenges for security experts, and all too often APIs go completely untested, leaving vulnerabilities undiscovered and resulting in security risk. Until now, most teams haven't had the ability to security test APIs, because doing so has required manual testing. We spoke to one customer who currently has about eight APIs. Each API takes about two hours to test manually, and they want to test each one every time there is a new build, but many security teams aren't staffed for that level of manual testing. And, to make matters worse, security experts often don't know the functionality of APIs because they aren't documented in such a way that security teams can easily get up to speed. What you end up with is already-strapped-for-time security experts faced with a manual testing effort for functionality they first need to learn about.

Swagger-enabled APIs

Enter Swagger (and, of course, AppSpider) to save the day! Swagger, an open source solution, is one of the most popular API frameworks. It defines a standard interface to REST APIs that is agnostic to the programming language. A Swagger-enabled API enables both humans and computers to discover and understand the capabilities of the service. Because APIs are being delivered so quickly, many APIs and microservices haven't been well documented (or are documented only in a Word doc that sits with the development team). With Swagger, teams have increased the documentation for their APIs and improved their interoperability.
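To illustrate what "machine readable" buys you, here is a minimal Ruby sketch (ours, not AppSpider's code) that walks a Swagger 2.0 definition and lists the requests an automated scanner could derive from it. The file name swagger.json is a hypothetical local copy of an API definition; the field layout follows the public Swagger 2.0 specification, and error handling is omitted.

#!/usr/bin/env ruby
# Minimal sketch: enumerate testable endpoints from a Swagger 2.0 (JSON) file.
# Not AppSpider's implementation -- just a demonstration of why a
# machine-readable API definition makes automated discovery possible.
require 'json'

spec = JSON.parse(File.read('swagger.json')) # hypothetical local copy

base = "#{(spec['schemes'] || ['http']).first}://#{spec['host']}#{spec['basePath']}"

spec['paths'].each do |path, operations|
  operations.each do |verb, details|
    next unless details.is_a?(Hash) # skip path-level keys that aren't operations
    # Each parameter is a candidate injection point for the attack phase.
    params = (details['parameters'] || []).map { |p| "#{p['name']} (#{p['in']})" }
    puts "#{verb.upcase} #{base}#{path}"
    puts "  injectable parameters: #{params.join(', ')}" unless params.empty?
  end
end

Nothing here requires a crawler: the definition alone yields every endpoint, verb, and parameter.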
It's this machine-readable documentation that enables other solutions, like modern DAST solutions, to discover and analyze Swagger-enabled APIs.

How AppSpider Tests Swagger-Enabled APIs

AppSpider has two major innovations that enable it to fully test Swagger APIs. The first is AppSpider's Universal Translator, and the second is the ability to analyze Swagger files. Let's first look at AppSpider's Universal Translator. The Universal Translator was built to enable AppSpider to analyze the parts of the application that can't be crawled, like APIs. The Universal Translator analyzes traffic captured with a proxy like Burp or Paros. Now, AppSpider's Universal Translator is also able to analyze a Swagger file, eliminating the need to capture proxy traffic when testing a Swagger-enabled RESTful API. The Universal Translator then normalizes the traffic and attacks the application. In short, the Universal Translator consumes data from three sources – a traditional crawl, recorded HTTP traffic and, now, Swagger files – normalizes that data into a standard format, and then completes the attack phase of the application security test. We like to call the Universal Translator "future-proof" because it's designed to be adaptable to this rapidly changing digital environment – we can easily extend it as new technologies become available, like Swagger, which in turn enables further innovation.

What can you do to improve API security testing on your team?

There are many things you can do to begin testing APIs more effectively:

- Learn about them: Regardless of whether you test an API manually or automatically, it's important to understand its functionality. Security testers should invest the time to learn the API's functionality and then plan an appropriate test. Go to your developers and ask them how they are documenting APIs and how you can learn about them.
- Does your team have Swagger? Find out if your developers are using Swagger. If they aren't, encourage them to check it out. You can add Swagger files to existing APIs to make them machine readable and enable automated testing with AppSpider.
- Does your DAST have Swagger? Consider using a DAST solution that further automates testing of APIs. AppSpider is able to test Swagger APIs from end to end, and it also automates much of the testing process for other APIs.
- Test APIs with AppSpider: If your team is interested in further automating API testing, download AppSpider to evaluate whether it's a fit for your team and whether it can help you address more of your attack surface automatically.

Learn More:

- Press release: Rapid7 Automates API Security Testing to Reduce Risk in Web Applications
- Release Notes: AppSpider Pro 6.8 - Swagger Utility
- Whitepaper: The Demands of a Modern Web Scanner: Closing the Coverage Gap
- Whitepaper: AppSecurity Buyers Guide, 15 DAST Solution Requirements

Top 3 Takeaways from the "Skills Training: How to Modernize your Application Security Software" Webcast

In a recent webcast, Dan Kuykendall, Senior Director of Application Security Products at Rapid7, gave his perspective on how security professionals should respond to applications, attacks, and attackers that are changing faster than security technology. What should you expect from your application security solutions, and what are some of the strategies you can use to effectively update your program? Read on for the top takeaways from the webcast "Skills Training: How to Modernize your Application Security Software":

1. Expect more from automation – It's important to leverage as much automation as possible. Make sure the tools you are using cover the newer and more difficult technologies like AJAX, JSON and complex workflows such as shopping carts.

2. REST support is essential – You must start understanding RESTful interfaces, and JSON in particular. The world is moving in this direction on web and mobile and leaving defensive tools in the dust. Most web app firewalls don't know how to deal with JSON, and it takes them a long time to parse and validate content there, derailing them.

3. Adopt a DevOps mindset – Partner with your development teams to understand how you can integrate security testing into their continuous integration and testing processes. To be successful and truly support change and growth in application security programs, security must plug into what development teams are already doing and become part of their existing process. Bridge the gap between security and DevOps by running tests during nightly builds. Perform checks, report vulnerability findings into the existing bug system, and there will be more acceptance and progress from both sides.

For an in-depth look at how to modernize your application security software, view the on-demand webinar now.

Mobile App & API Security - Application Security's "Where's Waldo"

[A version of this blog was originally posted on February 1, 2013]

As I have discussed in previous posts and at conferences like OWASP AppSecUSA, while the number of attacks continues to increase, the attack techniques aren't new at all. They are actually the same old attacks, like SQL Injection, showing up in new places: APIs, mobile application services and AJAX applications. Because these newer technologies have exploded in popularity and become more mainstream, we keep seeing the same old vulnerabilities popping up in new places. I always say it's like Where's Waldo – we simply need to understand the new landscape and start looking for Waldo again. Over the last several years, there has been a major evolution in how applications are built, with new underlying technologies, application architectures and data formats. But have application scanners evolved with them? These new technologies have grown at such a fast rate that we haven't been able to keep up at either end. On one end, developers aren't able to build these new applications securely because they are up against deadlines from the business while delivering on new technologies. On the other end, web application scanners were architected in the golden days of web application security, when almost all web applications were static and relatively simple HTML pages. While scanners have never covered and will never cover 100% of a web application, our belief is that they can and should cover as much as possible. Unfortunately, most application security scanners haven't kept pace with the changing applications. Security professionals and application scanning vendors should be actively working to close this coverage gap to improve both the efficiency (reduce manual effort) and effectiveness (find more vulnerabilities) of security efforts. The AppSpider team at Rapid7 continues to be committed to closing this gap. Our customers tell us that AppSpider automatically tests their AJAX applications and APIs much more thoroughly than other options. We believe AppSpider is the only scanner that truly begins to address these newer technologies and formats like AMF, JSON and REST. But feel free to check it out for yourself. We welcome input and feedback.

In my blogs, I'll detail the technologies used in modern applications and demonstrate why they create challenges for modern web scanners. In addition, I'll give you pointers on how you can determine if your application security scanners are effectively scanning and attacking these newer technologies. We will discuss the following kinds of applications and technologies:

1. RIA & HTML5 AJAX applications: JSON (jQuery), REST, GWT (Google Web Toolkit), Flash Remoting (AMF), HTML5 applications (addressed in a subsequent paper)
2. Mobile back-ends powered by JSON, REST and other custom formats
3. Web services: JSON, REST, XML-RPC, SOAP (addressed in a subsequent paper)
4. Challenging application workflows: sequences (shopping cart and other strict processes) and XSRF/CSRF tokens

If you would like to read the full whitepaper on this topic, you can download it here.

[Note: This blog has been transferred from Dan Kuykendall's blog, manvswebapp.com, as part of Rapid7's acquisition of NT OBJECTives. For more information on the acquisition, click here.]
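To make the "same old attacks, new places" point concrete, here is a small Ruby sketch that sends a classic SQL Injection probe inside a JSON body – exactly the kind of request a scanner must be able to construct before it can test a JSON API at all. The endpoint, parameter names and payload are hypothetical; point anything like this only at an application you are authorized to test.

#!/usr/bin/env ruby
# Hypothetical example: a classic SQLi probe delivered in a JSON body.
# Scanners built only for HTML form parameters never exercise this code path.
require 'net/http'
require 'json'
require 'uri'

uri = URI('http://test-app.example.com/api/login') # hypothetical test target

probe = { 'username' => "admin' OR '1'='1", 'password' => 'x' }

http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
request.body = probe.to_json

response = http.request(request)

# A database error or an unexpected 200 here is a signal worth investigating.
puts "#{response.code} #{response.message}"
puts response.body

Same old Waldo, new hiding spot: the payload is twenty years old, only the transport has changed.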

UserInsight Integrates with Microsoft's New Office 365 API to Detect Intruders

If you are at the RSA Conference this week, you may have seen Microsoft's keynote announcing the new Office 365 Activity Feed API this morning. In case you missed it, Microsoft summarized the announcement in today's blog post. The new Management Activity API is a RESTful API that provides an unprecedented level of visibility into all user and admin transactions within Office 365. Rapid7 got early access to this technology through the Microsoft Technology Adoption Program and is one of the first companies to integrate with Microsoft's new Office 365 Management Activity API. As a result, Rapid7 UserInsight already fully integrates with Microsoft Office 365, enabling incident response professionals to detect and investigate incidents from endpoint to cloud and providing security and transparency for cloud services such as Office 365.

Unlike monitoring solutions that look exclusively at network data for malicious traffic, UserInsight monitors endpoints, networks, cloud services, and mobile devices, setting traps for intruders, detecting attacks automatically and enabling fast investigation to mitigate the risks posed by compromised accounts. Integration with the new Microsoft API allows Rapid7 to automatically collect data from Office 365, SharePoint, Azure Active Directory, and OneDrive and add it to its comprehensive view of network and user behavior, giving organizations the ability to detect attacks across network, cloud, and mobile environments.

Lateral Movement Extends Beyond the Perimeter and Into the Cloud

Research shows that the use of stolen credentials is still the most common threat action. What's most concerning is that intrusions often go undetected for more than six months as attackers move laterally across company systems, collecting more and more credentials to gain persistence. A common misconception is that lateral movement ends at the perimeter. However, with modern enterprise systems extending to cloud services, defenders need to think broader and include cloud services. Once attackers have compromised credentials, they no longer have to be connected to the corporate network to access documents or email services. Organizations need to understand user behavior across multiple environments in order to discover and investigate security incidents quickly. The new Microsoft API is a big step forward in arming security professionals with the tools they need to protect their environments and detect malicious behavior as ecosystems expand to the cloud. It enables UserInsight customers to run analytics across their entire ecosystem, within the perimeter and in the cloud.

How UserInsight Leverages Microsoft's New Office 365 API

UserInsight builds a baseline understanding of a user's behavior in order to identify changes that would indicate suspicious activity and help security professionals detect an attack. Because UserInsight uniquely collects, correlates and analyzes data across all users and assets, including cloud applications, it can identify suspicious behavior other solutions can't. Examples of potential threats detected within Office 365 include:

- Advanced attacks: UserInsight automatically correlates user activity across network, cloud and mobile environments. It can detect advanced attacks such as lateral movement from the endpoint to the cloud, including Office 365.
- Privileged user monitoring: Privileged users are often the ultimate target for intruders. UserInsight monitors Office 365 administrator accounts and alerts the security team to suspicious activity.
- Geographically impossible access: The key to protecting the environment is the ability to unify the network, mobile, and cloud environments. For example, a customer would receive an alert if an employee's cell phone synchronizes email via Office 365 from Brazil within an hour of the same user connecting to the corporate VPN from Paris – clearly one of the connections cannot be legitimate.
- Account use after termination: UserInsight detects when a suspended or terminated employee accesses their Office 365 account, helping to stop theft of intellectual property and other business-critical information.
- Access to Office 365 from an anonymization service: UserInsight correlates a constantly updated list of proxy sites and TOR nodes with an organization's Office 365 activity, detecting attackers who are trying to mask their identity and location.

Once suspicious behavior is detected, security teams and incident responders can investigate the users and assets involved in the context of activity from the endpoint to the cloud, now including Microsoft Office 365 activity, and determine the magnitude and impact of the attack. With UserInsight's visual investigation capabilities, customers can combine asset and user data on a timeline to rapidly investigate and contain the incident.

Learn more about UserInsight's new capability to detect intruders in Office 365

The integration is available immediately to Microsoft Office 365 customers who have signed up for the API preview. Rapid7 will showcase the solution this week at the RSA Conference in San Francisco. Visit Rapid7's booth, located at North Expo #N3335, to learn more or request a personal demo online.
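For the technically curious, the "geographically impossible access" check described above reduces to a simple speed test between consecutive logins. Here is a rough Ruby sketch of the idea – our illustration, not UserInsight's actual detection logic – using the Paris/Brazil example from the bullet list, with made-up coordinates and timestamps:

#!/usr/bin/env ruby
# Rough sketch of a "geographically impossible access" check.
# Illustration only -- not UserInsight's implementation.

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_SPEED_KMH = 1000.0 # roughly airliner speed

# Great-circle distance between two lat/lon points (haversine formula).
def distance_km(lat1, lon1, lat2, lon2)
  to_rad = Math::PI / 180
  dlat = (lat2 - lat1) * to_rad
  dlon = (lon2 - lon1) * to_rad
  a = Math.sin(dlat / 2)**2 +
      Math.cos(lat1 * to_rad) * Math.cos(lat2 * to_rad) * Math.sin(dlon / 2)**2
  2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a))
end

def impossible_travel?(event_a, event_b)
  hours = (event_b[:time] - event_a[:time]).abs / 3600.0
  km = distance_km(event_a[:lat], event_a[:lon], event_b[:lat], event_b[:lon])
  hours.zero? ? km > 0 : (km / hours) > MAX_PLAUSIBLE_SPEED_KMH
end

# The Paris VPN / Brazil Office 365 example, one hour apart.
vpn  = { lat: 48.86,  lon: 2.35,   time: Time.utc(2015, 4, 21, 9, 0) }  # Paris
o365 = { lat: -23.55, lon: -46.63, time: Time.utc(2015, 4, 21, 10, 0) } # Sao Paulo

puts impossible_travel?(vpn, o365) ? 'ALERT: geographically impossible access' : 'ok'

A real product correlates many more signals, but the core arithmetic is this simple: distance divided by elapsed time, compared against a plausible travel speed.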

7 Ways to Improve the Accuracy of your Application Security Tests

For more than 10 years, application security testing has been a common practice for identifying and remediating vulnerabilities in web applications. While it's difficult to figure out the best web security software for your organization, there are seven key techniques that not only increase the accuracy of testing in most applications, but also enable teams to focus expert resources on testing the necessary areas by hand. IT security experts who conduct application security testing, or who are trying to figure out the best application security solution, should consider these techniques important and aim to use a solution that leverages as many of them as possible.

Application Security Scanning Requirements

1. Coverage of Modern Web Technologies

Application coverage is the first step toward accuracy. Application security testing software can't test what it can't find or doesn't understand. Most scanners were built to scan HTML, and they do so very effectively. Unfortunately, very few modern applications are built solely in HTML. Today's applications have gone way beyond brochure-ware to include rich clients, mobile APIs, and web services that make use of new application technologies. These applications are powered by JavaScript and AJAX on the client side and often have interfaces built in JSON, REST and SOAP, with CSRF protection thrown in for good measure. The best web security software solutions are capable of interpreting and attacking these modern technologies. Find an internal or vendor-neutral test application with vulnerabilities that involve these technologies to confirm coverage.

2. Future-Proof

Application security software should have the ability to easily understand and adapt to new technologies as they become popular. The reality is that we will continue to see an increase in application complexity and the emergence of new technologies. Most scanners can understand and attack the classic web application of the past, but a modern scanner needs to be architected so that new technologies can be bolted on like drill bits on a drill. Ask your vendor how their architecture provides the flexibility to handle new technologies.

3. Sophisticated Attack Techniques

All web security software must find a balance between comprehensiveness and performance. In order to improve performance, some web security software solutions randomly limit the set of attacks they send based on proprietary choices. The best scanners instead intelligently profile the application to determine which attacks are useful and dynamically adjust the attacks for each input. This latter approach increases not only the efficiency of the scan, but also its ability to find valid vulnerabilities. Be sure you understand how your application security software selects its attacks and how configurable the attacks are to fit your needs.

4. Recursive False Positive Checking

False positives are the bane of automated scanning and a time suck for security teams. Web applications often behave in mysterious ways, and smart scanners must check and recheck findings to avoid false positives. Your vendor should be willing to stand by the findings and constantly improve based on your feedback.

5. Relevant Data Input

During an automated scan there are usually two phases: crawl and attack. During the crawl phase, it is imperative that the scanner provide valid data for each input field, as expected by the application. For example, when a form asks for a shipping address, some scanners enter random values into each input instead of the expected values. Certain fields, such as the ZIP code, would then be invalid, and the application would reject the request because of the invalid ZIP code. In this case, the scan is effectively halted, resulting in a less comprehensive scan and the potential for missed vulnerabilities. Ask application security software vendors what kind of data they use in their attack phase, whether they use both expected and unexpected datasets, and whether they attack one input at a time.

6. Check Every Parameter on Every Page

The point of automation is to handle repetitive tasks against every input, but this can lead to slower scan times. To save time, some web application security solutions only check the first several parameters on each page. Each parameter could pass through different filters, so such a scanner could be arbitrarily missing vulnerabilities. This time savings is not worth it! Make sure the solution you choose checks every parameter on every page. (A minimal sketch of this idea follows at the end of this post.)

7. Custom Mobile Applications, the New Frontier

Custom mobile applications are the new frontier for security teams. They provide native mobile interfaces, but then communicate with web services or APIs (JSON, REST/XML, AMF, etc.) that have the same range of potential vulnerabilities (SQLi, authentication and session management weaknesses) that web applications do. The best web security software is capable of testing these back-end interfaces or APIs, because that's where the real weaknesses are likely to be found.

For more information about what to look for in an application security scanner, check out our Web Application Security Solutions Buyers' Guide.
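Techniques 5 and 6 combine naturally: keep every field valid except the one under attack, and iterate over all of them. Here is a minimal Ruby sketch of that loop – a conceptual illustration with a hypothetical page, form fields, and payloads, not any vendor's implementation:

#!/usr/bin/env ruby
# Conceptual sketch of techniques 5 and 6: supply valid, expected data for
# every field, then attack one parameter at a time -- for every parameter.
require 'net/http'
require 'uri'

uri = URI('http://test-app.example.com/checkout') # hypothetical page under test

# Valid baseline values the application expects (technique 5).
baseline = {
  'name'    => 'Pat Example',
  'street'  => '100 Main St',
  'zip'     => '02114',
  'country' => 'US'
}

payloads = ["'", '"><script>alert(1)</script>', '../../etc/passwd']

baseline.each_key do |param|               # every parameter (technique 6)
  payloads.each do |payload|
    form = baseline.merge(param => payload) # only one field is invalid at a time
    response = Net::HTTP.post_form(uri, form)
    puts "#{param} <- #{payload.inspect}: HTTP #{response.code}"
    # A real scanner would analyze the response body for error signatures here.
  end
end

Because the other fields stay valid, the request survives input validation and the payload actually reaches the code that handles the parameter under test.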

Modernize Your Application Security Scanning in Four Easy Steps

You've built modern mobile and rich internet applications (RIAs) that are sure to improve your business' next major revenue stream. Conscious of security, you've ensured that the native application authenticates to the server, and you've run the app through a web application security scanner to identify weaknesses in the code. Those vulnerabilities have been remediated, and now you're ready to go live. Not so fast. Despite your best intentions, chances are good your mobile and rich internet applications are going live with dangerous security flaws. Most traditional web application security scanners don't provide the necessary protection when you're dealing with modern application architectures, data formats and other underlying technologies. However, you can still build state-of-the-art rich internet applications with reliable and safe web application security by following these simple steps.

Step 1: Understand the application technologies and their security requirements

Classic HTML applications are no challenge for web application security scanners, because that's what they were originally built to test. However, rich internet applications (RIAs) based on newer technologies like AJAX, HTML5 and Silverlight are a different story – be sure your security scanner supports these new formats. Due to the heavy use of JavaScript or the complete lack of HTML, these new application formats and technologies make it nearly impossible for scanners to crawl the application. Plus, mobile applications further complicate matters because they often use web services or back-end interfaces which cannot be crawled at all. To make matters worse, attackers are finding new ways to exploit the back-end interfaces associated with mobile applications. Web application session management techniques fail to deliver the protection developers expect, and these old and insecure techniques do not stop attackers from tampering with the application, committing fraud or performing man-in-the-middle attacks. That's why it's important to understand the technologies used in your applications, so you can find an appropriate web application security scanner and/or supplement your scanning efforts accordingly. Below is a list of the technologies that may require a more in-depth security solution:

- AJAX applications: JSON (jQuery), REST, GWT (Google Web Toolkit)
- Flash remoting: Action Message Format (AMF)
- HTML5
- Back ends of mobile apps powered by JSON, REST and other custom formats
- Web services: JSON, REST, XML-RPC, SOAP
- Complex application workflows: sequences (shopping cart and other strict processes) and XSRF/CSRF tokens

Step 2: Look under the hood

There are two key qualities you should require of a web application security scanner that you plan to use for modern applications like mobile apps and rich internet applications. The first is the ability to import proxy logs. The second is an understanding of mobile application traffic, which enables the scanner to create attacks to test for security flaws. Vendors are often quick to advertise their scanners' ability to be fed data from a proxy, but if the scanner is not familiar with JSON and REST, it will not be able to create attack variations – even when fed recorded traffic. Like web application security scanners, traditional authentication methods fail to deliver the protection they once promised. While historically used to protect server-side mobile applications from SQL injection and cross-site scripting attacks, today's authentication methods simply aren't sophisticated enough to provide adequate web application security for new RIAs and mobile apps. For example, attackers can exploit weak passwords when a scheme only authenticates the user and not the application. This can be avoided by using a client-side certificate to identify the application, but this isn't feasible for all apps – especially customer-facing mobile apps.

Step 3: Determine whether your web application security scanner is capable

You can – and should – ask your web application security scanner provider what technologies the tool is able to scan. But don't leave it at that – verify that what they say is true. For instance, you can test the security scanning coverage of an AJAX application by analyzing the request/response traffic. To do so, simply enable the scanner's detailed logging feature, run the scanner through a proxy like Paros, Burp or WebScarab, and save the logs for manual review. JSON also poses a unique challenge to web application security scanners: they must be able to decipher the new format and insert attacks to test the security of web application interfaces. A review of the detailed request/response logs will indicate whether the scanner is fully capable of protecting rich internet applications like yours. However, not all scanners provide detailed logging. If this is the case, you will need to set up a proxy to capture traffic during the scan. Begin by scanning only a page that uses JSON, then check whether the scanner's requests include the JSON traffic and requests.

Step 4: Bolster testing efforts with the latest best practices

Attackers are increasingly targeting back-end servers. And while new mobile APIs like JSON create new ways to engage customers in rich internet applications, they also create new ways for attackers to reach back-end servers. The only way to discover and remediate API security flaws, authentication weaknesses, protocol-level bugs and load-processing bugs is with several rounds of testing. Also, understand that you cannot rely on SQL or basic authentication to protect the back end. Develop server-based applications to anticipate attacks by continually verifying the integrity of the application and runtime environment. Finally, when developing mobile apps and RIAs, keep the following tips in mind:

- Data provided by the client should never be trusted.
- A device's mobile equipment identifier should never be used to authenticate a mobile application; instead, use multiple techniques to verify that requests are from the intended user.
- Since session tokens for mobile apps rarely expire, attackers can use them for a very long time.
- Credentials should not be stored in the application's data store, local to the device.
- When requiring SSL, a valid certificate should be necessary.

Guaranteeing reliable web application security for mobile and rich internet applications can be tricky business. However, completing the proper research, choosing the right security scanner, and performing an ample amount of testing will help detect vulnerabilities and ward off new attacks, allowing your application to be successful in the marketplace.
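As a practical aid for the verification in Step 3, here is a short Ruby sketch that inspects a captured proxy log for the requests the scanner actually sent, counting JSON requests and naive attack variations. The log format (one raw HTTP request per blank-line-separated block) and the file name are assumptions; adapt it to whatever your proxy exports.

#!/usr/bin/env ruby
# Rough check of scanner coverage (Step 3): did the scanner actually send
# JSON requests, and did it mutate them? Assumes a text export of proxied
# requests separated by blank lines -- adjust to your proxy's log format.

requests = File.read('proxy_capture.txt').split(/\n\s*\n/)

json_requests = requests.select { |r| r =~ /Content-Type:\s*application\/json/i }

# Very naive attack-variation heuristic: classic probe characters in the body.
attack_like = json_requests.select { |r| r =~ /('|<script|\.\.\/)/ }

puts "total requests captured:  #{requests.size}"
puts "JSON requests:            #{json_requests.size}"
puts "JSON requests w/ probes:  #{attack_like.size}"
puts 'WARNING: scanner may be blind to JSON endpoints' if json_requests.empty?

If the scanner claims JSON support but the JSON request count stays near zero, or none of the JSON bodies contain attack variations, you have your answer.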

Nexpose API: SiteSaveRequest and IP Addresses vs Host Names

With the release of Nexpose 5.11.1 we made some changes under the hood that improved scan performance and scan integration performance. As a result of those changes, the rules applied to using SiteSaveRequest in API 1.1 became stricter, which may have caused issues for some users. In the past, passing IP addresses as host elements "worked" for the most part, though there were certainly side effects observable in the Web interface after the fact. Since these issues were not always apparent, nothing appeared wrong with this approach. If you are affected by these issues, check the end of the post for workarounds. Note: This is not something that changed in the Nexpose API or the Nexpose Ruby gem. While reviewing the 1.1 API documentation and XML DTD (Document Type Definition), I realized why this might be confusing, so we're in the process of updating the documentation to be clearer on this. I'll walk you through where things can go wrong. Below is a snippet of the relevant section of the Site XML DTD, where scan targets are defined:

<!ELEMENT Hosts ((range|host)+)>
<!-- IPv4 address range of the form 10.0.0.1 -->
<!ELEMENT range EMPTY>
<!ATTLIST range from CDATA #REQUIRED>
<!ATTLIST range to CDATA #IMPLIED>
<!-- named host (usually DNS or Netbios name) -->
<!ELEMENT host (#PCDATA)>

This is what it looks like using a correct example:

<hosts>
  <range from="192.168.1.50" to=""/>
  <range from="172.16.0.1" to="172.16.0.255"/>
  <host>example.com</host>
  <host>server01.internal.example.com</host>
</hosts>

Note that on line 2 we have a range element with a from attribute containing an IP address, but a to attribute that is empty. This is how a single IP address is correctly defined as a scan target in a site configuration. On line 3 we see a full IP address range, as expected. On lines 4 and 5 we see two examples of host names. Since the DTD may not obviously indicate how to save a single IP address, you might think it's correct to do this instead:

<hosts>
  <host>192.168.1.50</host>
  <host>172.16.0.25</host>
</hosts>

This does work: the site is saved and these IP addresses can be scanned. Unfortunately, it causes some issues with the way assets are treated before and after scans, which have become more apparent with the 5.11.1 release. This includes difficulty launching ad hoc scans on individual assets from the UI and reporting on assets that were scanned since the 5.11.1 release. For those of you who are affected by these issues, there are two ways to address it:

1. Change the way your API calls are made to use the range element for IP addresses, and re-save any affected site configurations. If you use the Nexpose Ruby gem, use the add_asset method instead of add_host. The add_asset method will detect whether the input is an IP address or a host name and assign it appropriately.
2. Modify your site configuration in the Web interface and save. You may need to add a new IP address or host name to the site, save, and then remove it and save again for this to work. Note that if you did not update your API calls, this problem will occur again.

After the next scan, ad hoc scans and reporting should behave correctly again for these assets.
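For gem users, the fix in workaround 1 is a one-line change. Here is a minimal sketch using the same Connection/Site.load/save pattern as the gem's other examples; the console address, credentials, and site ID are placeholders:

#!/usr/bin/env ruby
# Sketch of workaround 1 with the nexpose-client gem: use add_asset, which
# classifies input as an IP address or host name, instead of add_host.
require 'nexpose'
include Nexpose

nsc = Connection.new('your-console-address', 'nxadmin', 'openSesame')
nsc.login
at_exit { nsc.logout }

site = Site.load(nsc, 42) # hypothetical site ID

site.add_asset('192.168.1.50') # stored as a range element (single IP)
site.add_asset('example.com')  # stored as a host element

site.save(nsc) # re-save so the corrected configuration takes effect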

Site Consolidation with the Nexpose Gem

The introduction of the scan export/import feature opens up the ability to merge sites, at least through the Ruby gem. Imagine a scenario where you had split up your assets into several sites, but now you realize it would be easier to manage them if you just merged them into one. Maybe you have duplicate assets across sites and that wasn't your intent. The script below allows you to merge multiple sites into one. It replays the scans from each site into the new one (in just a fraction of the time it originally took to run the scans). I'll let the comments in the script mostly speak for themselves, but a quick run-through is this:

1. Designate the sites to merge into a new site.
2. Copy the asset configuration from those sites into the new one.
3. Collect the scans to merge (the script only pulls in scans from the last 90 days).
4. Import those scans into the new site in chronological order.

#!/usr/bin/env ruby
require 'nexpose'
include Nexpose

nsc = Connection.new('your-console-address', 'nxadmin', 'openSesame')
nsc.login
at_exit { nsc.logout }

# You can merge any number of sites into one.
to_merge = [38, 47]

unified_site = Site.new('Unity!')

# Grab assets from each merge site and add them.
to_merge.each do |site_id|
  # Merge based on configured assets.
  site = Site.load(nsc, site_id)
  unified_site.assets.concat(site.assets)

  # To merge based on actually scanned assets:
  # nsc.assets(site_id).each do |asset|
  #   unified_site.add_asset(asset.address)
  # end
end

# Will still need to configure credentials, schedules, etc.
unified_site.save(nsc)

# Collect the scan history from each site, limited to the last 90 days.
since = (DateTime.now - 90).to_time
scans = []
to_merge.each do |site_id|
  recent_scans = nsc.site_scan_history(site_id).select { |s| s.end_time > since }
  scans.concat(recent_scans)
end

# Order them chronologically.
ordered = scans.sort_by { |s| s.end_time }.map { |s| s.scan_id }

zip = 'scan.zip'
ordered.each do |scan_id|
  nsc.export_scan(scan_id, zip)
  nsc.import_scan(unified_site.id, zip)

  # Poll until the scan is complete before attempting to import the next one.
  history = nil
  loop do
    sleep 15
    history = nsc.site_scan_history(unified_site.id)
    break unless history.nil?
  end
  last_scan = history.max_by { |s| s.start_time }.scan_id
  while nsc.scan_status(last_scan) == 'running'
    sleep 10
  end

  File.delete(zip)
  puts "Done importing scan #{scan_id}."
end

Just a few caveats: With scan import, you pull in all data from the scan as it originally ran. If you deleted assets, for example, they will return in the new site. Also, the looping mechanism of the script is a bit clunky. If you have a lot of data to import, you may want to export all the scans first and change the way you wait on scans to finish, in order to be more fault tolerant. And scan import is all or none; there is no way to split up assets that were in a scan.
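If you want to make the waiting logic more fault tolerant, one option is to factor it into a helper with a timeout, so a stuck import cannot hang the whole merge. A rough sketch using the same gem calls as the script above; the timeout and polling interval are arbitrary and should be tuned to your environment:

# Rough sketch: poll a site's latest scan with a timeout instead of
# looping forever. Uses the same gem calls as the script above.
def wait_for_latest_scan(nsc, site_id, timeout: 3600, interval: 15)
  deadline = Time.now + timeout
  loop do
    raise "Timed out waiting on site #{site_id}" if Time.now > deadline
    sleep interval
    history = nsc.site_scan_history(site_id)
    next if history.nil? || history.empty?
    last_scan = history.max_by { |s| s.start_time }.scan_id
    return last_scan unless nsc.scan_status(last_scan) == 'running'
  end
end

# Usage inside the import loop:
#   nsc.import_scan(unified_site.id, zip)
#   wait_for_latest_scan(nsc, unified_site.id)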

Scan Export/Import Using the nexpose-client Gem

The latest release (5.10.13) introduces a new feature into Nexpose: scan exporting and importing. We're looking to address a need in air-gapped environments, where customers can have multiple consoles to deal with network partitioning. This approach is not without its warts. For example, if you have deleted assets from a site, this process will bring them back to life. This post is going to walk through a pair of Ruby scripts using the nexpose-client gem. The first script will export the site configuration and scans from one site; then, after the data is transferred to another console, the second script will create a new site and import the data. One restriction on scan importing is that you can only import scans newer than the most recent scan, so in order to pull in multiple scans, order must be preserved. While a scan is importing, the Nexpose web interface will indicate an "Importing" status. You should note that the scan importing call is asynchronous. This means that your script will not block waiting for the scan to run, but will return as soon as the scan has been initialized. Because of this, you may wish to wait for each scan to complete before proceeding to the next. Another side note on importing scans: the nexpose-client gem currently uses the rest-client gem to import the scan. We didn't want to force a new dependency on gem users, so scripts need to explicitly require the rest-client gem in order to use the feature. Version 0.8.5 of the nexpose-client gem no longer has this dependency.

The Export Script

This script will take as input the site ID to export. It will also assume that it can write all the files to the local directory.

#!/usr/bin/env ruby
require 'nexpose'
include Nexpose

nsc = Connection.new('air-gapped-console', 'nxadmin', 'superSecretPassword')
nsc.login
at_exit { nsc.logout }

# Allow the user to pass in the site ID to the script.
site_id = ARGV[0].to_i

# Write the site configuration to a file.
site = Site.load(nsc, site_id)
File.write('site.xml', site.to_xml)

# Grab scans and sort by scan end time.
scans = nsc.site_scan_history(site_id).sort_by { |s| s.end_time }.map { |s| s.scan_id }

# Scan IDs are not guaranteed to be in order, so use a proxy number to order them.
i = 0
scans.each do |scan_id|
  nsc.export_scan(scan_id, "scan-#{i}.zip")
  i += 1
end

The Import Script

This script assumes that you are running in the directory that you filled with data in the previous script (after transporting the lot of it to a new machine).

#!/usr/bin/env ruby
require 'nexpose'
include Nexpose

nsc = Connection.new('reporting-console', 'nxadmin', 'someOtherSecretPassword')
nsc.login
at_exit { nsc.logout }

site_xml = File.read('site.xml')
xml = REXML::Document.new(site_xml)
site = Site.parse(xml)
site.id = -1

# Set to use the local scan engine.
site.engine = nsc.engines.find { |e| e.name == 'Local scan engine' }.id
site_id = site.save(nsc)

# Import scans by numerical ordering.
scans = Dir.glob('scan-*.zip').map { |s| s.gsub(/scan-/, '').gsub(/\.zip/, '').to_i }.sort
scans.each do |scan|
  zip = "scan-#{scan}.zip"
  puts "Importing #{zip}"
  nsc.import_scan(site.id, zip)

  # Poll until the scan is complete before attempting to import the next one.
  last_scan = nsc.site_scan_history(site.id).max_by { |s| s.start_time }.scan_id
  while nsc.scan_status(last_scan) == 'running'
    sleep 10
  end
  puts "Integration of #{zip} complete"
end

Working with reports and exports via the RPC API

The Metasploit RPC API provides a straightforward, programmatic way to accomplish basic tasks with your Metasploit Pro instance. Two of the key capabilities are export generation, to back up your data, and report generation, to summarize and share your findings. The RPC API docs are currently undergoing a major overhaul and are a bit out of date for report and export generation. This post will provide all the examples and configuration options you need to get running. Setting up a client to make the API calls is simple:

# This class is defined under pro/api-example
require_relative 'metasploit_rpc_client'

client = MetasploitRPCClient.new(host: host, token: api_token, ssl: false, port: 50505)

Note that there are example scripts shipped with Metasploit Pro that show these examples and more. They can be found inside the install directory (on *nix systems, /opt/metasploit) under apps/pro/api-example. They are simple wrappers that allow you to pass in required arguments, so they're good for getting a feel for things. In addition to the API calling code, however you implement it, you need to have the Metasploit Pro instance running, and you need to generate an API key. This can be done from Administration -> Global Settings.

Reports

Listing existing reports

report_list displays all reports that have been generated in the workspace you specify:

report_list = client.call('pro.report_list', workspace_name)
puts "Existing Reports: #{report_list}"

Sample output:

Existing Reports: {7=>[{"id"=>6, "report_id"=>7, "file_path"=>"/Users/shuckins/rapid7/pro/reports/artifacts/CredentialMetaModule-20140912105153.pdf", "created_at"=>1410537159, "updated_at"=>1410537159, "accessed_at"=>nil, "workspace_id"=>2, "created_by"=>"shuckins", "report_type"=>"mm_auth", "file_size"=>34409}],

The keys of the hash are the report IDs, needed for download as shown below. The value array contains all the artifacts that were generated. An artifact is simply a particular file in a particular format. For example, when you generate an Audit report and select the file formats PDF, HTML, and Doc, this results in a single report with three child artifacts.

Getting information on available report types

type_list = client.call('pro.list_report_types')
puts "Allowed Report types: #{type_list}"

Sample output (snipped; the full output includes every report type):

Allowed Report types: {"activity"=>{"required_data"=>"tasks", "file_formats"=>"pdf, html, rtf", "options"=>"include_task_logs", "sections"=>"cover, project_summary, task_details", "report_directory"=>"/Users/shuckins/rapid7/pro/reports/activity/", "parent_template_file"=>"/Users/shuckins/rapid7/pro/reports/activity/main.jrxml"},

Downloading a report (all child artifacts)

report_id = 1 # Get this from the report_list call
report = client.call('pro.report_download', report_id)
report['report_artifacts'].each_with_index do |a, i|
  tmp_path = "/tmp/report_test_#{i}_#{Time.now.to_i}#{File.extname(a['file_path'])}"
  File.open(tmp_path, 'w') { |c| c.write a['data'] }
  puts "Wrote report artifact #{report_id} to #{tmp_path}"
end

This will download every artifact related to this report generation (one to four files, depending on format selection).

Downloading particular report artifacts

If you only want a particular artifact file under a report, you can download it using the artifact ID provided by the report_list call.

report_artifact_id = 1
artifact = client.call('pro.report_artifact_download', report_artifact_id)
tmp_path = "/tmp/report_#{report_artifact_id}#{File.extname(artifact['file_path'])}"
File.open(tmp_path, 'w') { |c| c.write artifact['data'] }
puts "Wrote report artifact #{report_artifact_id} to #{tmp_path}"

Generating a report

There are a number of options available for this call, detailed below. This basic version generates a single PDF artifact of the Audit report:

report_hash = {
  workspace: workspace_name,
  name: "SuperTest_#{Time.now.to_i}",
  report_type: :audit,
  created_by: 'whoareyou',
  file_formats: [:pdf]
}
report_creation = client.call('pro.start_report', report_hash)
puts "Created report: #{report_creation}"

There's currently no API call that provides report (or export) generation status. The time required depends entirely on your data size and complexity. One place to check for status is the reports.log file.

Configuration options

These are placed in the hash passed to the start_report call.

Required:

- name: String, the name for the report shown in the web UI and in the file path; used in forming the filenames of the artifacts generated
- report_type: String, must be one of those listed by list_report_types, e.g.: activity, audit, credentials, collected_evidence, compromised_hosts, custom, fisma, mm_auth, mm_pnd, mm_segment, pci, services, social_engineering, webapp_assessment
- report_template: String, full file path to a custom Jasper jrxml template; can be set if the type is custom. If not a custom report, do not use this.
- workspace_name: String, name of the workspace to which the report will be scoped
- created_by: String, username to which the report should be attributed
- file_formats: Array, the file format(s) of the artifacts to be generated. Must specify at least one. Available types vary slightly per report; 'pdf' is present for all. See list_report_types for formats per type.

Optional:

- email_recipients: String, addresses to which the report artifact(s) should be emailed. Addresses can be separated with commas, semicolons, newlines, or spaces.
- mask_credentials: Boolean, whether credentials shown in report artifacts should be scrubbed (replaced with MASKED)
- included_addresses: String, space-separated addresses to include in the report. Can include wildcards, ranges, CIDR.
- excluded_addresses: String, space-separated addresses to exclude from the report. Can include wildcards, ranges, CIDR. If included and excluded are both specified, they are both expanded and the address set used is included - excluded.
- logo_path: String, full path to an image file to use on the cover page of report artifacts. If not specified, the Rapid7 logo is used. Must be of type gif, png, jpg, or jpeg.
- options: sub-hash of additional configuration options:
  - include_sessions: Boolean, whether information on sessions should be included in the report, if applicable
  - include_charts: Boolean, whether graphs should be included in the report, if applicable
  - include_page_code: Boolean, whether the HTML code of pages in SE campaigns should be included in the report, versus just an image preview of the rendered page
- se_campaign_id: Integer, the ID of the SE campaign the report should cover. Only applied to the SE report.
- sections: Array, specific sections of the report to include. If this is specified, only the listed sections will be included; if not, all sections will be included. For section names, see list_report_types.
- usernames_reported: String, comma-separated list of users to be included as active in the report. This is usually shown in the Executive Summary section.

Exports

Export coverage is nearly identical to reports.

Listing existing exports

export_list = client.call('pro.export_list', workspace_name)
puts "Existing Exports: #{export_list}"

Generating an export

export_types = ['zip_workspace', 'xml', 'replay_scripts', 'pwdump']

# NOTE: If you are not on the latest update of 4.10 (4.10.0-2014092401), this requires
# workspace_id with an integer value of the ID. If you've updated to this point, you
# can use workspace with a string value of the name, as below.
export_config = { created_by: 'whoareyou', export_type: export_types[1], workspace: 'ThePlace' }
export_creation = client.call('pro.start_export', export_config)
puts "Created export: #{export_creation}"

Downloading a generated export

export_id = 1
export = client.call('pro.export_download', export_id)
tmp_path = "/tmp/export_test_#{export_id}#{File.extname(export['file_path'])}"
File.open(tmp_path, 'w') { |c| c.write export['data'] }
puts "Wrote export #{export_id} to #{tmp_path}"

Configuration options

Required:

- created_by: String, username to which the export should be attributed
- export_type: String, must be one of: zip_workspace, xml, replay_scripts, pwdump
- workspace: String, name of the workspace to which the export will be scoped

Optional:

- name: String, the name for the export shown in the web UI and in the file path; unacceptable characters are changed to underscores or removed
- mask_credentials: Boolean, whether credentials shown in XML and other files should be scrubbed (replaced with MASKED)
- included_addresses: String, space-separated addresses to include in the export. Can include wildcards, ranges, CIDR.
- excluded_addresses: String, space-separated addresses to exclude from the export. Can include wildcards, ranges, CIDR. If included and excluded are both specified, they are both expanded and the address set used is included - excluded.
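Putting the report options together, here is what a fuller pro.start_report call might look like. Every key below comes from the option lists above; the values (workspace name, addresses, email recipient) are placeholders for illustration.

# A fuller report request assembled from the documented options above.
# All values are hypothetical placeholders.
report_hash = {
  workspace: 'ThePlace',
  name: "QuarterlyAudit_#{Time.now.to_i}",
  report_type: :audit,
  created_by: 'whoareyou',
  file_formats: [:pdf, :html],
  email_recipients: 'secteam@example.com', # hypothetical address
  mask_credentials: true,
  included_addresses: '10.0.0.0/24 10.0.1.5',
  excluded_addresses: '10.0.0.254',
  options: {
    include_sessions: true,
    include_charts: true
  }
}

report_creation = client.call('pro.start_report', report_hash)
puts "Created report: #{report_creation}"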

Nexpose Gem Version 0.8.0 Released

With the release of Nexpose 5.9.16, we are also releasing a new version of the gem: 0.8.0. We bumped the version from 0.7 to mark several changes. First, there are two methods that would not work against the new release without…

With the release of Nexpose 5.9.16, we are also releasing a new version of the gem: 0.8.0. We bumped the version from 0.7 to mark several changes. First, there are two methods that would not work against the new release without some code changes to the gem. These cover searching for vulnerabilities and running ad hoc HTML reports.

But most significant is the addition of the nokogiri (鋸) gem (https://rubygems.org/gems/nokogiri) as a dependency. We decided to pull in this dependency because of a parsing issue around the ad hoc HTML reports. If this dependency proves to be a problem, then please let us know. But our testing has shown that nokogiri is handled smoothly in the environments we tested. If the transition proves to be smooth, we may begin to use nokogiri more extensively, especially for parsing large, incoming data sets.

For additional details, see the release notes: Changes to the Nexpose Gem in Version 0.8.0
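If you want to exercise the ad hoc HTML report path that motivated the nokogiri dependency, a minimal sketch with the gem looks like the following. The template ID 'audit-report', the site ID, and the credentials are placeholder assumptions; the AdhocReportConfig and add_filter calls are the same ones used in the SQL Query Export walkthrough elsewhere on this blog:

require 'nexpose'
include Nexpose

nsc = Connection.new('localhost', 'nxadmin', 'nxadmin')
nsc.login

# Generate an ad hoc HTML report scoped to a single site.
# The template ID and site ID below are placeholder assumptions.
report_config = AdhocReportConfig.new('audit-report', 'html')
report_config.add_filter('site', 1)
html = report_config.generate(nsc)
File.write('/tmp/adhoc_audit.html', html)

nsc.logout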

Weekly Metasploit Update: Talking PJL With Printers

Abusing Printers with PJL: This week's release features a half dozen new modules that seek out printers that talk the Printer Job Language (PJL) for use and abuse. Huge thanks to our newest full-time Metasploit troublemaker, William Vu. As a penetration tester, you probably…

Abusing Printers with PJL

This week's release features a half dozen new modules that seek out printers that talk the Printer Job Language (PJL) for use and abuse. Huge thanks to our newest full-time Metasploit troublemaker, William Vu.

As a penetration tester, you probably already know that office printers represent tasty targets. Like most hardware with embedded systems, they rarely, if ever, get patches. They don't often have very serious security controls. They're usually in network segments that are full of end-user desktops, but sometimes they just pop up wherever someone felt the need to have a printer, so they're often uncontrolled and unaccounted for by IT administrators.

Finally, and most importantly, printers are often unintentional repositories of sensitive data. The printer_download_file module, in particular, can snag all sorts of proof-of-insecurity artifacts, like copies of outbound faxes, signature samples, confidential contract language... all sorts of stuff. A payroll printer is (hopefully!) not going to be PJL-aware, but the community printer/fax that all the sales guys use for quotes and fielding POs? Better start scanning your office floor.

Of course, techniques for abusing the total lack of authentication around PJL have been around for a million years. I don't know any university lab that hasn't had the LCD display changed to something funny. The story here is that these PJL modules (and associated Rex protocols) mean that pentesters and IT security admins alike can more thoroughly, systematically, and routinely audit their sites for printer-based risk exposure. Hopefully, the publication of these modules will raise that visibility bar to a point where folks take this kind of thing seriously and stop relegating the risk to "party trick" levels.

Metasploit API Docs Online

If you've been watching the development news around Metasploit for the last year or so, you will no doubt have read that we are aggressively pursuing reasonable in-line documentation around core Metasploit functionality. As a quick update to that, you will be pleased to see that https://dev.metasploit.com/api/ is no longer a pack of outdated lies. What you see there is exactly the same as if you were on a recent clone of the Metasploit code repository and had typed "rake yard" to locally generate the docs.

Hopefully, the increased visibility gained by dumping these autogenerated docs out to the Internet will save new Metasploit exploit devs the trouble of re-implementing common Metasploit conventions over and over again. For example, just browsing the Wordpress class definition reveals what methods our friend Christian @_FireFart_ Mehlmauer has already written for your Wordpress exploitation needs. Super useful.

Browsing through the internal Metasploit docs will almost certainly lead to some "Ah-ha!" moments, when you notice a pre-defined method you've never seen used before. I forget tons of things about what makes Metasploit go, and I know I'm not alone.
On top of that, YARD-generated docs are just so darn pleasant to read and navigate through.

New Modules

Aside from the PJL modules, we've got a new exploit for HP Data Protector Backup Client, thanks to Juan Vazquez's tireless pursuit of teasing exploit code out of ZDI disclosures.

Exploit modules

HP Data Protector Backup Client Service Directory Traversal by juan vazquez and Brian Gorenc exploits ZDI-14-003

Auxiliary and post modules

Printer File Download Scanner by wvu, sinn3r, MC, Myo Soe, and Matteo Cantoni
Printer Environment Variables Scanner by wvu, sinn3r, MC, Myo Soe, and Matteo Cantoni
Printer Directory Listing Scanner by wvu, sinn3r, MC, Myo Soe, and Matteo Cantoni
Printer Volume Listing Scanner by wvu, sinn3r, MC, Myo Soe, and Matteo Cantoni
Printer Ready Message Scanner by wvu, sinn3r, MC, Myo Soe, and Matteo Cantoni
Printer Version Information Scanner by wvu, sinn3r, MC, Myo Soe, and Matteo Cantoni

If you're new to Metasploit, you can get started by downloading Metasploit for Linux or Windows. If you're already tracking the bleeding-edge of Metasploit development, then these modules are but an msfupdate command away. For readers who prefer the packaged updates for Metasploit Community and Metasploit Pro, you'll be able to install the new hotness today when you check for updates through the Software Updates menu under Administration.

For additional details on what's changed and what's current, please see Brandont's most excellent release notes.
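If you want to take the new printer scanners for a spin, a first pass in msfconsole might look like the sketch below. The target range and thread count are placeholders; RHOSTS and THREADS are the standard options shared by Metasploit scanner modules:

msf > use auxiliary/scanner/printer/printer_version_info
msf auxiliary(printer_version_info) > set RHOSTS 192.168.1.0/24
msf auxiliary(printer_version_info) > set THREADS 16
msf auxiliary(printer_version_info) > run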

SQL Export Report using the API

This morning we published the release of the new SQL Query Export report. Simultaneously, we released version 0.6.0 of the Nexpose gem to support this new report format in all the reporting API calls (you must update to this latest version to run the…

This morning we published the release of the new SQL Query Export report. Simultaneously, we released version 0.6.0 of the Nexpose gem to support this new report format in all the reporting API calls (you must update to this latest version to run the report). When the SQL Query Export is paired with ad hoc report generation, you can write simple yet powerful custom scripts using the API. Let's walk through an example.

Example

The following example uses the Ruby gem to invoke the API with a query that returns the metadata for all unauthenticated assets:

require 'nexpose'
require 'csv'
include Nexpose

query = "
SELECT da.ip_address, da.host_name, da.mac_address, dos.description AS operating_system
FROM dim_asset da
JOIN dim_operating_system dos USING (operating_system_id)
WHERE da.asset_id IN (
  SELECT DISTINCT asset_id
  FROM dim_asset_operating_system
  WHERE certainty < 1
)
ORDER BY da.ip_address"

@nsc = Connection.new('localhost', 'nxadmin', 'nxadmin')
@nsc.login

report_config = Nexpose::AdhocReportConfig.new(nil, 'sql')
report_config.add_filter('version', '1.1.0')
report_config.add_filter('query', query)

report_output = report_config.generate(@nsc)
csv_output = CSV.parse(report_output.chomp, { :headers => :first_row })
puts csv_output

@nsc.logout

This example creates a query to find all the assets with an operating system that does not have a high confidence level (indicative of improper credential configuration). Note: This example is built into the product within the inline help.

After building the SQL query, an ad hoc report configuration is constructed with two new filters. The SQL Query Export report takes two new filter types (in addition to the existing scope filters). The new filters are "version" and "query". The version identifies the version of the Reporting Data Model being queried against. For now, only version "1.1.0" is supported. This option matches what is visible in the user interface for configuring this type of report. The second filter is a "query" filter. This value is the query that you want to run. Both "version" and "query" are required filters for this report type ("sql").

The query can itself inherently filter the data in the output using WHERE clauses, but the report will also honor any scope filters that are applied via "site", "scan", "device", "group", and vulnerability filter types. Remember, the larger the scope and report output, the longer the report will take to generate and download, so use your scoping filters wisely for large reports.

After the ad hoc report is run successfully, the output can be parsed using a CSV library and further processed. The example above simply echoes the output to standard out, but we are sure you can find more creative ways to use the output data.

Errors

If the SQL query you specify is invalid, an error message will be returned helping point out what the problem is in the syntax. For example, if we misspelled "DISTINCT" as "DISTINT", then the following would be returned by this script:

NexposeAPI: Action failed: The query filter supplied is invalid: (Nexpose::APIError)
ERROR: column "distint" does not exist
  Column: 216
  State: undefined_column

Next Steps

For more examples you can try, refer to the Nexpose Reporting side street. Show us how you plan to leverage this capability and don't hesitate to ask questions or start discussions.
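As one example of that further processing, here is a sketch that iterates the parsed rows and writes a trimmed CSV of just the IP address and operating system columns. The column names come from the SELECT list in the query above; the output path is a placeholder:

# Iterate the parsed rows and write a trimmed CSV of IP address and OS.
# Column names match the query's SELECT list; the output path is a placeholder.
CSV.open('/tmp/unauthenticated_assets.csv', 'w') do |out|
  out << ['ip_address', 'operating_system']
  csv_output.each do |row|
    out << [row['ip_address'], row['operating_system']]
  end
end
puts "Wrote #{csv_output.size} rows to /tmp/unauthenticated_assets.csv"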

Kvasir: Penetration Data Management for Metasploit and Nexpose

Data management is half the battle for penetration testing, especially when you're auditing large networks. As a penetration tester with Cisco's Advanced Services, I've created a new open source tool called Kvasir that integrates with Metasploit Pro, Nexpose, and a bunch of other tools I…

Data management is half the battle for penetration testing, especially when you're auditing large networks. As a penetration tester with Cisco's Advanced Services, I've created a new open source tool called Kvasir that integrates with Metasploit Pro, Nexpose, and a bunch of other tools I use regularly to aggregate and manage the data I need. In this blog post, I'd like to give you a quick intro to what Kvasir does - and to invite you to use it with Metasploit Pro.

Cisco's Advanced Services has been performing penetration tests for our customers since the acquisition of the Wheel Group in 1998. We call them Security Posture Assessments, or SPA for short, and I've been pen testing for just about as long. During our typical assessments we may analyze anywhere between 2,000 and 10,000 hosts for vulnerabilities and perform various exploitation methods such as account enumeration and password attempts, buffer/stack overflows, administrative bypasses, and others. We then have to collect and document our results within the one or two weeks we are on-site and prepare a report.

How can anyone keep track of all this data, let alone work together as a team? Are you sure you really found the holy grail of customer data and adequately documented it? What if you're writing the report but you weren't the one who did the exploit?

The answer is to build a data management application that works for you. The first iterations the SPA team created were a mixture of shell, awk, sed, tcl, perl, expect, python, and whatever else engineers felt comfortable programming in. If you remember the Cisco Secure Scanner product (aka NetSonar), then our early tools were this with extra goodies.

Welcome to the 21st Century

As time moved on, our tools became unfriendly to larger data sets, inter-team interaction suffered, and supporting new data types was difficult. The number of issues detected by vulnerability scanners started to increase, and while we have always been able to support very large environments, the edges were starting to bulge.

We don't believe this scenario is unique to us. We also don't believe current publicly available solutions really help. Most teams we've talked with have used a variant of issue tracking software (TRAC, Redmine) or just let Metasploit Pro handle everything.

We think this isn't good enough, which is why we are releasing our tool, Kvasir, as open source for you to analyze, integrate, update, or ignore. We like the tool a lot and we think it fills a missing key part of penetration testing. It's not perfect, but it's grown up a lot and will improve.

What's Kvasir?

Kvasir is a web-based application whose goal is to assist “at-a-glance” penetration testing. Disparate information sources such as vulnerability scanners, exploitation frameworks, and other tools are homogenized into a unified database structure. This allows security testers to accurately view the data and make good decisions on the next attack steps.

Multiple testers can work together on the same data, allowing them to share important collected information.
There's nothing worse than seeing an account name pass by and finding out your co-worker cracked it two days ago but didn't find anything “important”, so it was never fully documented.

Supported Data Sources

At the current release, Kvasir directly supports the following tools:

Rapid7 Nexpose Vulnerability Scanner
Rapid7 Metasploit Pro (limited support for Express/Framework data)
Nmap Security Scanner
ShodanHQ
THC-Hydra
Foofus Medusa
John the Ripper
…and more!

Nexpose and Metasploit Pro Integration

Since the SPA team generally uses Rapid7's Nexpose and Metasploit Pro, Kvasir integrates with these tools via API. We purposefully did not incorporate some features but may have future plans for others.

The importation of Nexpose site reports is fully automated. Just pick a site and let Kvasir generate the XML report, then download and parse it! After parsing, the scan file can be imported into a Metasploit Pro instance.

For Metasploit Pro results you must first generate an XML report, but after that is done Kvasir will download and parse it automatically. Kvasir also supports the db_creds output and will automatically import pwdump files and screenshots through the Metasploit Pro API.

Metasploit Pro's automatic Bruteforce and Exploit features can be called directly from Kvasir. Just select your list of target IP addresses and go!

From Vulnerability to Exploit

So you have a host with a list of vulnerabilities, but what is exploitable? Metasploit Pro, as well as other exploit frameworks and databases, is mapped to vulnerability and CVE entries, granting the user an immediate view of potential exploitation methods.

Screenshots!

The initial screen of Kvasir shows two bar graphs detailing the distribution of vulnerabilities based on severity level count and host/severity count, as well as additional statistical data. A tag cloud based on high-level severities (level 8 and above) is included, which may help pinpoint the highest risk vulnerabilities; this is based solely on vulnerability count.

Kvasir's Host Listing page displays details such as services, vulnerability counts, operating systems, and assigned groups and engineers.

Kvasir supports importing exploit data from Nexpose (Exploit Database and Metasploit) and other tools. Links from vulnerabilities and CVE assignments to exploits are made so you can get an immediate glance at what hosts/services have exploitable vulnerabilities.

The host detail page provides an immediate overview of valuable information such as services, vulnerability mapping, user accounts, and notes, all shared between testing engineers. And of course, as you collect user accounts and passwords, it's nice to be able to correlate them to hosts, services, hashes and hash types, and sources.

Where can I get more info?

For more information, see my post on Kvasir on the Cisco blog. You can also get the Kvasir source code on GitHub. Fork, Install, Review, Contribute!
