Rapid7 Blog

Leo Varela  

EternalBlue: Metasploit Module for MS17-010

This week's release of Metasploit includes a scanner and exploit module for the EternalBlue vulnerability, which made headlines a couple of weeks ago when the hacking group the Shadow Brokers disclosed a trove of alleged NSA exploits. Included among them, EternalBlue exploits MS17-010, a Windows SMB vulnerability. This week, EternalBlue has been big news again due to attackers using it to devastating effect in a highly widespread ransomware attack, WannaCry. Unless you've been vacationing on a remote island, you probably already know about this; however, if you have somehow managed to miss it, check out Rapid7's resources on it, including guidance on how to scan for MS17-010 with Rapid7 InsightVM or Rapid7 Nexpose.

The Metasploit module - developed by contributors zerosum0x0 and JennaMagius - is designed specifically to enable security professionals to test their organization's vulnerability and susceptibility to attack via EternalBlue. It does not include ransomware like WannaCry does, and it won't be worming its merry way around the internet.

Metasploit is built on the premise that security professionals need to have the same tools that attackers do in order to understand what they're up against and how best to defend themselves. The community believes in this, and we have always supported it. This philosophy drove the amazing Metasploit contributor community to take on the challenge of reverse engineering and recreating the EternalBlue exploit as quickly and reliably as possible, so they could arm defenders with the information they need. We want to say a big thanks to JennaMagius and zerosum0x0 for their work on this.

From a vulnerability management perspective, there are a lot of things that security practitioners can do to understand their exposure; with Metasploit, however, you can go beyond theoretical risk and show the impact of compromise. Access to systems is more concrete evidence of the problem. Metasploit effectively allows security practitioners to test their own systems and dispel the hype and speculation of headlines with facts.

From a penetration testing perspective, research shows that over two thirds of engagements had exploitable vulnerabilities leading to compromise. Metasploit modules such as EternalBlue enable security practitioners to communicate the real impact of not patching to the business.

UPDATE – May 19, 2017: Security researcher Krypt3ia wrote a blog post highlighting a possible connection between the process that zerosum0x0 and JennaMagius went through in reversing the EternalBlue exploit and the WannaCry attack. Zerosum0x0 and JennaMagius both work as security researchers at RiskSense, a provider of proactive cyber risk management solutions. In response to Krypt3ia's blog, RiskSense provided this clarification of the situation:

The module was developed to enable security professionals to test their organization's vulnerability and susceptibility to attack via EternalBlue. As part of their research, the researchers created a recording of the network traffic that occurs when the Fuzzbunch EternalBlue exploit is run. The purpose of this recording was to help educate other security professionals and to get feedback as they worked through the process. This kind of approach is fairly common in both the security researcher and open source contributor communities, where transparent collaboration enables individuals to pool their expertise and achieve greater results.

It's possible that data from this analysis was copied and rewritten by individuals with malicious intent; we cannot confirm whether or not this is the case. Unfortunately, this is a risk that is taken whenever technical information and techniques are shared publicly. Nonetheless, we believe the educational and collaborative benefits generally outweigh the risk. To our knowledge, no code from the Metasploit module was ever used in the WannaCry attacks, and once Krypt3ia's blog pointed out the possibility that some of the information may have been used by the attackers, we removed the recording from the GitHub repository to ensure no other bad actors would be able to do likewise to create variants of the malware.

Here's a summary of the context and the technical details:

On April 27th, JennaMagius created a recording of the network traffic that occurs when the Fuzzbunch EternalBlue exploit is run. That recording was subsequently posted at https://github.com/rapid7/metasploit-framework/issues/8269#issuecomment-297862571. The recording included an IP that was used as a lab target of the original exploits.

Recording the traffic and playing it back works against freshly booted boxes because the Tree Connect AndX response will assign TreeID 2048 on the first few connections, after which it will move on to other tree IDs. The same is true for the user login request. The replay would then fail because the rest of the replay is using "2048" for the tree and user IDs, and the server has no idea what the client is talking about.

On April 30th, JennaMagius published a script that slightly enhanced that replay by substituting in the server-provided TreeIDs and UserIDs. This code was subsequently posted at https://github.com/RiskSense-Ops/MS17-010/commit/9ddfe7e79256a9d386f0b488c38f5048a2dfd083

Zerosum0x0's research supplemented these findings by outlining that __USERID__PLACEHOLDER__ and __TREEID__PLACEHOLDER__ strings were also present in the malware.

Replaying ANY recording of EternalBlue will produce the same result, so the attackers may have chosen to use that particular recording to throw investigators off track. It is important to note that, to our knowledge, no code from the Metasploit module was ever used in the WannaCry attacks.

To be successful, the attackers independently implemented sending the network traffic in C; constructed additional code to interact with DoublePulsar (which is a significantly harder undertaking than just replaying the recorded traffic); implemented the rest of their malware (maybe before or after); and then released it on the world.

Again, Rapid7 wants to reiterate how much we appreciate community participants such as zerosum0x0 and JennaMagius, who contribute their time and expertise to better arm organizations to defend themselves against cyberattackers.
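For readers curious about what the "substituting in the server-provided TreeIDs and UserIDs" step described above looks like in practice, here is a minimal, hypothetical sketch in Java. The class name, helper methods, and replay loop are illustrative assumptions, not code from the researchers' script or from the Metasploit module; the sketch only shows why a raw replay with hard-coded IDs fails and how patching in the live IDs addresses it.

// Hypothetical sketch: patch the server-assigned TreeID/UserID into a recorded
// SMB1 request before replaying it. Offsets assume the buffer starts at the
// SMB header (i.e. after the 4-byte NetBIOS session header).
import java.util.List;

public class ReplayFixupSketch
{
   private static final int TID_OFFSET = 24; // TreeID field in the SMB1 header
   private static final int UID_OFFSET = 28; // UserID field in the SMB1 header

   /** Overwrite the recorded TreeID/UserID (e.g. the hard-coded 2048) with live values. */
   static void patch(byte[] smbPacket, int liveTreeId, int liveUserId)
   {
      writeLE16(smbPacket, TID_OFFSET, liveTreeId);
      writeLE16(smbPacket, UID_OFFSET, liveUserId);
   }

   // SMB1 header fields are little-endian 16-bit values.
   static void writeLE16(byte[] buf, int offset, int value)
   {
      buf[offset] = (byte) (value & 0xFF);
      buf[offset + 1] = (byte) ((value >> 8) & 0xFF);
   }

   /** Replay each recorded request with the server-assigned IDs substituted in. */
   static void replay(List<byte[]> recordedRequests, int liveTreeId, int liveUserId)
   {
      for (byte[] packet : recordedRequests)
      {
         patch(packet, liveTreeId, liveUserId);
         // send(packet) over the established TCP/445 connection (omitted in this sketch)
      }
   }
}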

Announcement: End-of-life Metasploit 32-bit versions

UPDATE: With the release of version 4.15 on July 19, 2017, commercial Metasploit 32-bit platforms (Metasploit Pro, Metasploit Express, and Metasploit Community) no longer receive product or content updates. These platforms are now obsolete and are no longer supported.

Rapid7 announced that Metasploit Pro 32-bit versions for both Windows and Linux operating systems would reach end of life on July 5th, 2017. This announcement applies to all editions: Metasploit Pro, Metasploit Express, and Metasploit Community. After this date, Metasploit 32-bit platforms will not receive product or content updates. Metasploit Framework will continue to provide installers and updates for its 32-bit versions.

Milestones:

End-of-life announcement date: The date the end of life was announced to the general public. July 5th, 2016.
Last date of available installers: The last date Rapid7 will generate 32-bit installers. After this date, Rapid7 will continue to provide updates until the last date of support. July 5th, 2016.
Last date of support: The last date to receive service and support for the product. After this date, all support services for the product are unavailable, and the product becomes obsolete. July 5th, 2017.

Product Migrations

Customers are encouraged to migrate to the Metasploit 64-bit versions of the product; installation files can be found in the following link. When upgrading, there may be changes to system requirements, including memory; please view the system requirements to see if your current system meets the minimum requirements. To migrate to a newer platform, create a platform-independent backup and restore it on the new system. Please see this page to learn how to determine whether you need to migrate, how to do a backup/restore, and about other related topics.

More Information

For Metasploit Pro and Express customers, contact support@rapid7.com or your account manager for assistance. For Metasploit Community customers, submit your inquiries to the community discussion forum. For more information about the Rapid7 End-of-Life Policy, go to: http://www.rapid7.com/docs/end-of-life-policy.pdf

Further Control of Dynamic Connections with Adaptive Security

As we have reached out to customers for feedback on Adaptive Security use cases (see Adaptive Security Overview for details on this feature), we have found that many customers would like to control the outcome of the "New Asset Discovered" trigger. They want to be able to do more than just kick off a scan: some have restrictions on when they can scan, and others don't scan everything that comes out of DHCP (or another dynamic source of assets); for some networks they do spot checking and don't want to scan everything.

The video below illustrates the usage of Adaptive Security's "New Asset Discovered" trigger and how to pick the actions taken when new assets are added to your environment. The video shows that you can respond to the trigger in multiple ways:

- Add the assets to a site and scan them.
- Add the assets to a site without scanning right away.
- Add assets that meet a certain rule (e.g. IP range 10.1.0.0 - 10.1.255.255) to a site and scan them, while assets that meet another rule (e.g. IP range 10.2.0.0 - 10.2.255.255) are added to the site but not immediately scanned (a conceptual sketch of this decision logic appears at the end of this post).

The video also shows how a dynamic site based on a DHCP connection differs from a static site with automated actions for newly discovered assets. Furthermore, it explains that you have full control of your scan windows: a triggered "New Asset Discovered" action does not mean you have to scan the asset right away. Also, blackouts, both site-level and global, are ALWAYS respected by the Adaptive Security feature; if a trigger that starts a scan fires during a blackout, the scan will be held/queued until the blackout is completed and then kicked off.

I hope you enjoy the video and can put these concepts into practice to further automate the vulnerability management program at your organization.
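To make the range-based example above a bit more concrete, here is a purely conceptual sketch, in Java, of the decision an automated action encodes. The class, enum, and IP ranges are hypothetical illustrations; in practice these rules are configured in the Nexpose console, not written as code.

// Conceptual illustration only: Adaptive Security rules are configured in the
// Nexpose console UI; this just spells out the kind of decision they encode.
public class NewAssetTriggerSketch
{
   enum Action { ADD_AND_SCAN, ADD_ONLY, IGNORE }

   /** Decide what to do with a newly discovered asset, based on its IP address. */
   static Action onNewAssetDiscovered(String ip)
   {
      if (inRange(ip, "10.1.0.0", "10.1.255.255"))
      {
         return Action.ADD_AND_SCAN; // add to the site and scan right away
      }
      if (inRange(ip, "10.2.0.0", "10.2.255.255"))
      {
         return Action.ADD_ONLY;     // add to the site, scan in a later window
      }
      return Action.IGNORE;          // spot-check networks: no automatic action
   }

   static boolean inRange(String ip, String low, String high)
   {
      long value = toLong(ip);
      return value >= toLong(low) && value <= toLong(high);
   }

   // Convert a dotted-quad IPv4 address to a comparable long.
   static long toLong(String ip)
   {
      long result = 0;
      for (String octet : ip.split("\\."))
      {
         result = (result << 8) | Integer.parseInt(octet);
      }
      return result;
   }
}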

Adaptive Security Overview

In Nexpose 6, we are introducing Adaptive Security, a smarter way to automate actions taken in response to security incidents as they occur in your environment. The ultimate goal is to give security teams back the time spent configuring tools to respond to a threat, and to automate the tedious and repetitive tasks needed to understand changes in the asset inventory and the threat landscape.

With Adaptive Security, you can create workflows called automated actions that respond to new and existing assets coming online, to assets that are missed during scan windows, and, more importantly, that let you instantly understand the surface area of a critical threat that is adding risk to the environment. Imagine a world where you know exactly which assets are affected by a recently published zero-day vulnerability. A world where your team has answers to questions like "How is the new celebrity zero-day vulnerability affecting our environment?" or "What risk does an unauthorized asset add to our security program?" as soon as the vulnerability is found or the device comes online. Today, with Adaptive Security, you do not need to imagine that world anymore. It is a reality: security teams now have the ability to work smarter and faster, take action in an automated way, and focus on strategies to address risk as opposed to finding it.

One of the more powerful aspects of this new feature is that it is highly configurable. Security teams can eliminate the noise generated by continuous monitoring alone and create filters and rules to intelligently react to threats and asset discovery in a way that makes sense and meets the particular needs of each environment managed by their security team. Not all findings or threats are born the same, and they should be treated and addressed in the context they live in.

Adaptive Security brings a set of triggers that kick off automated actions. Different actions are available depending on the selected trigger, allowing users to easily customize the response to a change in the environment or the threat landscape, with customization such as filtering the scope of the action or the area of the environment that needs to be addressed. The possibilities that this feature opens up for efficiency and productivity are enormous and will make using Nexpose even more enjoyable and useful than ever before.

We look forward to hearing from you; new triggers and actions will be added and existing ones refined based on your feedback. Please check out our introductory video, Meet Your Newest Asset: Adaptive Security, and also our video on how to use the "New Asset Discovered" trigger: Further Control of Dynamic Connections with Adaptive Security.

The real challenge behind asset inventory

As the IT landscape evolves, and as companies diversify the assets they bring to their networks - including on-premise, cloud, and personal assets - one of the biggest challenges becomes maintaining an accurate picture of which assets are present on your network. Furthermore, while the accurate picture is the end goal, the real challenge becomes optimizing the means to obtain that picture and keep it current. The traditional discovery paradigm of continuous discovery sweeps of your whole network is, by itself, becoming obsolete. As companies grow, sweeping becomes a burden on the network. In fact, in a highly dynamic environment, traditional sweeping approaches pretty quickly become stale and irrelevant.

Our customers are dealing with networks made up of thousands of connected assets. Lots of them are decommissioned, and many others are brought to life multiple times a day from different physical locations on their local or virtual networks. In a world where many assets are not 'owned' by the organization, or where unauthorized/unmanaged assets connect to the network (such as mobile devices or personal computers), understanding the risk those assets introduce is paramount to the success of a security program.

Rapid7 believes this very process of keeping your inventory up to date should be automated and instantaneous. Our technology allows our customers to use non-sweeping techniques such as monitoring DHCP, DNS, Infoblox, and other relevant servers/applications. We also enable monitoring through technology partners such as vSphere or AWS for virtual infrastructure, and mobile device inventory with ActiveSync. In addition, Rapid7's research team, through its Sonar project (a topic that deserves its own blog), is able to scan the internet and understand our customers' external presence. All of these automated techniques provide great visibility and complement the traditional approaches, such that our customers' experience with our products revolves around taking action and reducing risk as opposed to configuring the tool.

Why should you care? It really comes down to good hygiene and good security practices. It is unacceptable not to know about the presence of a machine that is exfiltrating data off of your network, or about rogue assets listening on your network. And beyond being unacceptable, it can take you out of business. Brand damage and legal and compliance risks are great concerns that are not mitigated by an accurate inventory alone; however, without knowing in a timely manner that those assets exist in your network, it is impossible to assess the risk they bring and take action.

The SANS Institute rates this topic as the top security control: https://www.sans.org/critical-security-controls/control/1. They bring up key questions that companies should be asking their security teams: How long does it take to detect new assets on their networks? How long does it take their current scanner to detect unauthorized assets? How long does it take to isolate/remove unauthorized assets from the network? What details (location, department) can the scanner identify on unauthorized devices? And plenty more.

Let Rapid7 technology worry about inventory. Once you've got asset inventory covered, you can move on to remediation, risk analysis, and other much more fun security topics with peace of mind that if it's in your network, you will detect it in a timely manner.

The Operational Report

There are several kinds of reports available in ControlsInsight. One that I want to bring to your attention is the operational report, a report that provides details to be consumed by your IT department.

The operational report was born to bridge the gap between identifying the security controls needed in your organization and implementing them: the WHY and HOW associated with the WHAT. It is one of the most important parts of the product because it is meant to drive action. It gives you the latest information on why you should deploy a particular control and is geared towards company-wide deployment, not per individual asset. This is why the report includes step-by-step instructions for using group policies or large-scale deployment tools.

Understanding the format of the report

The report is very straightforward. It includes an overview of the control, supported operating systems, step-by-step deployment guidance, a list of assets that would benefit from deploying the control, and a list of references to validate the information being presented.

The Overview: High-level information on the security control at hand - brief, concise, and to the point.
Supported Operating Systems: A list of the operating systems that support the implementation of the step-by-step guidance.
Step-by-Step Guidance: A detailed list of instructions for how to implement the control in your environment.
List of Assets: A list of the assets that would benefit from deploying the particular control.
References: This section allows customers to see references to articles relevant to the control. It also allows customers to cross-reference the control with recommendations from the SANS Top 20 Critical Security Controls, the Australian DSD Top Mitigation Strategies, and the NIST SP 800-53 Security & Privacy Controls.

So, in a nutshell, this report gives you a single place to understand what to do, why to do it, how to deploy it, which assets lack the control today in your context, relevant reference material, and how other organizations rank the importance of the control.

APIs, the fastest and easiest way to get Nexpose integrated in your environment.

The Nexpose team has created some really cool integration points for Nexpose that you can use with your events and tools. To make it even simpler, we have created a couple of blog posts that walk you through some integration scenarios, guide you, and give you a head start on creating your very own integration!

All of these examples are simple enough to follow, and complex enough to be used as part of your unique environment's integration. Let us know if you found them helpful, how you modified them, and what other examples you would like to see.

Enjoy!
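As a quick taste of what these integrations look like, here is a minimal sketch of calling the Nexpose XML API directly from Java. The console hostname and credentials are placeholders, and the example assumes the legacy v1.1 endpoint (/api/1.1/xml) with a LoginRequest; check the API documentation for the versions and requests available in your deployment.

// Minimal sketch: authenticate against the Nexpose XML API (v1.1 LoginRequest).
// The host and credentials below are placeholders. A default console uses a
// self-signed certificate, so a real integration must configure TLS trust
// appropriately and keep credentials out of source code.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NexposeLoginSketch
{
   public static void main(String[] args) throws Exception
   {
      String endpoint = "https://nexpose.example.com:3780/api/1.1/xml";
      String body = "<LoginRequest user-id=\"apiuser\" password=\"changeme\"/>";

      HttpClient client = HttpClient.newHttpClient();
      HttpRequest request = HttpRequest.newBuilder()
         .uri(URI.create(endpoint))
         .header("Content-Type", "text/xml")
         .POST(HttpRequest.BodyPublishers.ofString(body))
         .build();

      // A successful login returns a LoginResponse with a session-id attribute,
      // which must be sent with every subsequent API request.
      HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
      System.out.println(response.body());
   }
}

From there, the session ID can be passed to other requests (site creation, scan launching, report generation) in the same way, which is exactly what the Java API client described later on this page automates for you.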

Security Configuration assessment capabilities that meet your needs with Nexpose 5.4

A great-looking new feature has been added to our configuration assessment component in Nexpose 5.4: the ability to customize policies to meet your unique contextual needs, i.e. needs that are specific to your environment. You are now able to copy a built-in policy and edit its configuration, including the policy check values used to test your assets for compliance. This flexibility allows for custom, accurate, and relevant configuration assessment.

Configuration assessment is important for assessing risk in deployments where heterogeneous configurations are present. It allows you to identify the assets that present a risk to the network by being misconfigured. Another advantage of configuration assessment is that it allows you to identify the most and least compliant rules for each policy. This means that you will be able to identify not only areas where you are doing well, but also potential areas where your policies may not make sense.

The goal is then to assess configuration compliance against policies that make sense for your particular needs. Here is one good example of when this can come in handy: let's say your company policy for the account lockout threshold is more restrictive than the FDCC Windows policy's (less than or equal to five). Your company has decided that three failed attempts can occur before an account is locked out. You can now easily copy the FDCC policy, find the Account Lockout Threshold rule, and tweak it to check for three instead of five.

This is where the policy editor and the new features shipped with Nexpose 5.4 come into play. Several operations have been enabled on built-in policies, and others can be performed on copies of the built-in policies.

Operations on built-in policies include:
- Viewing the policy structure and check values with the policy viewer
- Copying the policy

Operations on copied (custom) policies include:
- Viewing the policy structure and check values with the policy viewer
- Copying the policy
- Editing the policy
- Deleting the policy

All of these options are available on the Policies tab. When you are in the policy viewer, you can browse through the policy structure to find the groupings and rules you are interested in, or use the "Find" mechanism to get to them. In viewer mode, you will only be able to see how a policy is configured. On the left-hand side you will see the policy structure in a tree format; on the right-hand side you will see the details for the node selected on the left.

When you are in the policy editor, you can not only browse through the policy structure, but also modify the summary details for the policy (such as its name and description), as well as its groups, rules, and check values. Notice the "Save" and "Cancel" buttons available to save or cancel your modifications to the policy.

Once you have configured your policy to address your particular needs, you are ready to start checking for compliance against the policies that you care about.

Stay tuned, there's more coming.

Configuration assessment and policy management in Nexpose 5.2

We love our policy dashboards. They are new, hot, intuitive, robust, and really useful. In our latest release of Nexpose, version 5.2, we've made two major enhancements to our configuration assessment capabilities:

- A policy overview dashboard: to understand the current compliance status of configurations, delivering a summary of the policy itself.
- A policy rule dashboard: to provide further details for a particular rule and the current compliance status for that rule.

What makes these dashboards so useful?

The policy overview dashboard page

This dashboard gives users the ability to understand their overall policy compliance stance within their organization. Users can drill down into any of the pre-configured policies included within Nexpose (currently related to USGCB and FDCC, more to follow) and visually see their compliance status for that particular policy. Those visual indicators, as well as some tables, tell users everything they need to know about a particular policy, such as:

The policy's compliance percentage. A pie chart shows, as of the last policy scan, what percentage of the assets tested with the policy passed and failed.

The five most and least compliant assets. The graph shows trending information for the assets that have been scanned within the last 90 days. It shows the compliance percentage over time and allows security professionals to see important facts, such as sudden drops in the compliance status of specific policies, or improvements that allow users to focus their compliance efforts on specific assets. Users can use trending to see whether a particular event in their organization caused a drop in organizational compliance.

An overview of the policy. Useful information associated with a particular policy, such as the name, category, benchmark, benchmark ID, version, and the rule and asset compliance percentages.

A listing of the rules that make up the policy. Security professionals can view all the rules that make up the policy and get a per-rule breakdown of asset compliance. In this table, users can create global rule overrides that will impact current and future scan results. In the rule listing they can sort by asset compliance and understand which rules are the most and least compliant in their organization. Some of the key benefits include:
- Checking which rules are the most and least compliant, giving insight into which rules to focus compliance efforts on first.
- Making decisions on whether a rule makes sense in an organization's specific environment. For example, let's look at the Windows 7 policy in the context of USGCB: if users perform all of their troubleshooting operations through remote desktop, chances are that most of their assets will fail the "Allow users to connect remotely using Remote Desktop Services" rule. In this case, security professionals may want to globally override the rule results because the rule does not make sense in their organization.

A listing of the assets that were tested for compliance with the policy. The assets and their test results are listed in this table, making it really easy to understand and focus compliance efforts on specific assets. The tested assets table also lets users see which assets were tested for compliance, sort by result, and focus on the assets that failed. Security professionals can sort on many criteria, such as operating system, asset IP, asset name, and the date of the last scan, among others. Users also have the ability to export all of the asset data to CSV format.

The policy rule dashboard page

Like the policy dashboard, the policy rule dashboard helps organizations better understand their compliance status with regard to a particular rule within a policy. Users can drill down into one of the rules that make up a policy and visually see their organization's compliance status for that particular rule on each asset that has been tested with the policy. The policy rule dashboard lets users know everything they need about a particular policy rule, such as:

An overview of the rule. Useful information associated with a particular policy rule, such as the name, category, benchmark ID, version, and the name of the policy that the rule belongs to.

A listing of the assets that were tested for compliance with the policy rule. This table allows security professionals to see only those assets that the rule was run against, visualize the result, and drill down into an asset to get even more granular information about the test results (by clicking on the asset name or IP).

With these additional capabilities, organizations will find it much easier to track the most important activities happening in their environment as they relate to policy scanning. Stay tuned, there's more coming! We love our policy dashboards.

Java API client - How to augment it and share with the community

The prerequisite is that you get the client: clee-r7/nexpose_java_api · GitHub

This blog post will show you how to augment the Java API client and use it, in 4 easy steps. The Java API client uses XML templates to generate requests. Browse to the src/org/rapid7/nexpose/api folder within the API source code and you will see the templates for the currently supported API client requests, e.g. AssetGroupSaveRequest.xml. There are currently 2 versions of our APIs, v1.1 and v1.2; schemas for v1.2 are shipped with the product under \nsc\resources\api\v12\xsd, and schemas for v1.1 are attached to this blog post.

Step 1: Pick the request you want to add to the Java API client

Let's pick the request that we want added to the API client; say you are interested in pausing active scans. Grab the schema for the API request you want to add, in this case ScanPauseRequest:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:redefine schemaLocation="ScanRequestType.xsd"/>
  <xsd:element name="ScanPauseRequest" type="ScanRequestType"/>
</xsd:schema>

ScanRequestType looks like this:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:redefine schemaLocation="session-id_Type.xsd"/>
  <xsd:complexType name="ScanRequestType">
    <xsd:attribute name="sync-id" type="xsd:string" use="optional"/>
    <xsd:attribute name="session-id" type="session-id_Type"/>
    <xsd:attribute name="scan-id" type="xsd:positiveInteger"/>
  </xsd:complexType>
</xsd:schema>

Step 2: Create the template for the selected request

Come up with the XML required for the ScanPauseRequest, based on the XSD definition from the previous step:

<ScanPauseRequest session-id="1234567GFT67890" sync-id="my synch id" scan-id="12345"/>

Make it a template to be used by the Java API client:

<ScanPauseRequest session-id="${session-id}" sync-id="${sync-id}" scan-id="${scanId}"/>

Save the template file in the src/org/rapid7/nexpose/api folder with the name ScanPauseRequest.xml.

Step 3: Create the Java class to support the template

The Java class for the ScanPauseRequest is very simple; TemplateAPIRequest.java already takes care of the heavy lifting. All you have to do is extend TemplateAPIRequest.java and use a setter method:

package org.rapid7.nexpose.api;

import org.rapid7.nexpose.api.APISession.APISupportedVersion;

/**
 * Represents the ScanPauseRequest NeXpose API request.
 *
 * @author Leonardo Varela
 */
public class ScanPauseRequest extends TemplateAPIRequest
{
   /////////////////////////////////////////////////////////////////////////
   // Public methods
   /////////////////////////////////////////////////////////////////////////

   /**
    * Creates a ScanPauseRequest NeXpose API Request. Sets the first API
    * supported version to 1.0 and the last supported version to 1.1.
    *
    * NOTE: All parameters are strings or generators, since we want to be able
    * to test edge cases and simulate incorrect usage of the tool for robustness.
    *
    * @param sessionId the session to be used if different from the currently
    *    acquired one (you acquire one when you authenticate correctly with
    *    the login method in the {@link APISession} class). This is a
    *    String of 40 characters.
    * @param syncId the synchronization id used to identify the response
    *    associated with the request in asynchronous environments. It can be
    *    any string. This field is optional.
    * @param scanId the positive integer that represents the scan id of the
    *    scan to be paused.
    */
   public ScanPauseRequest(String sessionId, String syncId, String scanId)
   {
      super(sessionId, syncId);
      set("scanId", scanId);
      m_firstSupportedVersion = APISupportedVersion.V1_0;
      m_lastSupportedVersion = APISupportedVersion.V1_1;
   }
}

There you go, you are ready to give this to the community!

Step 4: Now what? How do I use it?

We will create a site, launch a scan, pause it using our newly created ScanPauseRequest, stop it, and delete the site.

APISession session = createAPISession(new URL("<Nexpose URL>"), APISupportedVersion.V1_2);

// Create your site with a single host to scan.
List<String> hosts = new ArrayList<String>();
hosts.add("127.0.0.1");

// Get the session ID from the session.
String sessionID = session.getSessionID();

// Now create a simple Site Save Request host generator. This is required for elements in the XML
// that can be repeated N times, in this case the <host> element on the SiteSaveRequest; please see
// SiteSaveRequest.xml for details.
SiteSaveRequestHostsGenerator hostsGenerator = new SiteSaveRequestHostsGenerator();
hostsGenerator.setHosts(hosts);

// Now create the SiteSaveRequest.
SiteSaveRequest siteSaveRequest = new SiteSaveRequest(
   sessionID,             // The session ID
   null,                  // The sync id
   "-1",                  // -1 to create the site
   "My API site",         // The name of the site
   "This site was created through The Java API client", // The description of the site
   "1.00",                // The risk factor
   hostsGenerator,        // The host generator
   null,                  // The IP ranges generator
   null,                  // The credentials generator
   null,                  // The alerts generator
   "Full audit",          // The name of the configuration
   "3",                   // The configuration version
   "-1",                  // -1 to denote a new configuration
   "full-audit",          // The configuration template id
   "2",                   // The ID of the engine to use
   "false",               // Whether scheduling is enabled or not
   "false",               // Whether the schedule is incremental
   "daily",               // The type of scheduling
   "0",                   // The interval of scheduling
   "20120310T061011000",  // The date to start the schedule
   "100",                 // The max duration of scheduled scans
   "20120310T061011000"); // The expiration date of the schedule

// Now we are going to save the site.
APIResponse response = session.executeAPIRequest(siteSaveRequest);

// We are going to grab the site ID for reference.
int siteID = response.grabInt("/SiteSaveResponse/@site-id");

// Let's start the scan now.
SiteScanRequest siteScanRequest = new SiteScanRequest(sessionID, null, Integer.toString(siteID));
response = session.executeAPIRequest(siteScanRequest);

// We are going to grab the scan-id for reference.
int scanID = response.grabInt("//Scan/@scan-id");

// Now let's pause the scan with our newly created Java API client request.
ScanPauseRequest scanPauseRequest = new ScanPauseRequest(sessionID, null, Integer.toString(scanID));
session.executeAPIRequest(scanPauseRequest);

// Now that the scan is paused, stop it to be able to delete the site.
ScanStopRequest scanStopRequest = new ScanStopRequest(sessionID, null, Integer.toString(scanID));
session.executeAPIRequest(scanStopRequest);

SiteDeleteRequest siteDeleteRequest = new SiteDeleteRequest(sessionID, Integer.toString(siteID), null);
session.executeAPIRequest(siteDeleteRequest);

We would love to see people contributing their own augmentations to the Java API client and sharing them with the community.
