Rapid7 Blog

AppSpider  

What's New in AppSpider Pro 7.0?


In the latest release of AppSpider Pro, version 7.0, you will find some great new features that improve the crawling, attack, and overall usability of the product. Below are a few of the key enhancements you will find in the release.

Chrome/WebKit Integration

With the introduction of the Chrome/WebKit browser, AppSpider Pro now supports both Chrome and Internet Explorer as default browsers. These integrated browsers facilitate AppSpider's crawling and attack functionality, so with the added support of the Chrome browser, AppSpider now has improved coverage for web applications that aren't fully compatible with Internet Explorer.

Validation Scan

Need a quick way to verify that a vulnerability has been remediated by your development team? AppSpider's new validation scan allows users to target a previously scanned application and rescan only selected vulnerabilities rather than re-running a complete scan. Save time and get immediate visibility into remediation status.

Improved UI Updates

Looking for real-time, at-a-glance information about your scans? AppSpider Pro's main UI screen has been updated to give you visibility into scan status, the number of vulnerabilities found, the number of links crawled, the authentication used, and the attack policy that was used. All of your scan information in one place makes it easier than ever to monitor scan progress.

Confidence Level for Findings

Based on the experience and research of Rapid7's engineering teams, a confidence level for findings is now available in HTML and JSON reports, giving users a visual indicator of how certain AppSpider is that a particular finding is valid.

New Attack Modules

The following attack modules have been added as part of this release:

ASP.NET Serialization: Checks for serialized binary objects. Serialized data can potentially be intercepted and read by malicious users; furthermore, in some cases controls may use serialized data for internal processing, so malicious code could end up being processed on the web server.

Cross-Site Scripting (XSS), DOM-Based, Reflected via Ajax Request: DOM-based XSS (called "type-0 XSS" in some texts) is an XSS attack wherein the attack payload is executed as a result of modifying the DOM "environment" in the victim's browser used by the original client-side script, so that the client-side code runs in an unexpected manner.

HTTP Query Session Check: Checks parameter values that can expose an application to various security risks.

HTTP User Agent Check: Determines whether user-agent sniffing is enabled.

Session Upgrade: Reports a risk factor for exposing or binding the user session between the anonymous and authenticated states.

For additional details on these features, please review the AppSpider Pro 7.0 User Guide here (PDF).
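To make the DOM-based XSS module's target concrete, here is a minimal, illustrative snippet (not taken from AppSpider) of the kind of client-side sink such a check looks for: untrusted data from the URL fragment written into the page without encoding. The element id and URL are hypothetical.

```typescript
// Illustrative only: a classic DOM-based XSS source/sink pair.
// A crafted fragment such as https://example.test/page#<img src=x onerror=alert(1)>
// executes script in the victim's browser without the payload ever reaching the server.
function renderGreeting(): void {
  const name = decodeURIComponent(window.location.hash.slice(1)); // untrusted source
  const banner = document.getElementById("banner");
  if (banner) {
    banner.innerHTML = `Hello, ${name}!`; // dangerous sink: untrusted data parsed as HTML
  }
}

// Safer variant: treat the input as text, not markup.
function renderGreetingSafely(): void {
  const name = decodeURIComponent(window.location.hash.slice(1));
  const banner = document.getElementById("banner");
  if (banner) {
    banner.textContent = `Hello, ${name}!`; // textContent does not parse HTML
  }
}
```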

Protecting Your Web Apps with AppSpider Defend Until They Can Be Patched


AppSpider scans can detect exploitable vulnerabilities in your applications, but once these vulnerabilities are detected, how long does it take your development teams to create code fixes for them? In some cases it can take days to weeks before a fix or patch that resolves the vulnerability can be deployed, and during this time someone could be actively exploiting the issue in your application. AppSpider Defend, which is now integrated into AppSpider Pro, helps protect your applications until fixes for the identified vulnerabilities are deployed.

Defend allows you to easily create custom defenses for Web Application Firewalls (WAFs), Intrusion Prevention Systems (IPS), and Intrusion Detection Systems (IDS) based on the results of vulnerability scans conducted with AppSpider. Using innovative automated rule generation, Defend helps security professionals patch web application vulnerabilities with custom rules in a matter of minutes, instead of the days or weeks it can take by hand. Without the need to hand-build a custom rule for a WAF or IPS, or to rush out a source code patch, Defend gives developers the time to identify the root cause of the problem and fix it in the code.

When you are ready to generate Defend rules, simply:

1. Click the Load Findings icon.
2. Select the vulnerability summary XML file from a completed AppSpider scan.
3. Choose which of the discovered vulnerabilities you would like to generate Defend rules for.
4. Select the WAF/IDS/IPS that you want to configure with Defend. The currently supported WAF/IDS/IPS products are: ModSecurity, SourceFire/Snort, Nitro/Snort, Imperva, Secui/Snort, Akamai, Barracuda, F5, and DenyAll.
5. Click the Export Rules icon to generate a Defend rules file, which can be uploaded into your WAF/IDS/IPS solution.

With these five easy steps you can generate a set of Defend rules that, along with your existing WAF/IDS/IPS solution, can help protect against the exploits discovered by AppSpider. Once you have loaded the Defend rule set into your WAF/IDS/IPS solution, you can verify that the protection is enabled by clicking the Defend Scan icon. This launches a Defend Quick scan that replays the attacks AppSpider used to discover the vulnerabilities and confirms that the attacks are no longer successful now that the Defend rules are deployed.

For more information on how the Defend functionality works, you can review the AppSpider Pro User Guide.
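Defend's output is rules for your existing WAF/IDS/IPS, so the exact syntax depends on the product you select above. Purely to illustrate the virtual-patching idea (this is not what Defend generates), here is a minimal sketch of the same concept expressed as Express middleware: it blocks a known attack pattern against one vulnerable parameter until the real code fix ships. The route, parameter name, and pattern are hypothetical.

```typescript
// Illustrative virtual patch, not a Defend-generated rule.
// Blocks a known SQL injection pattern against the "orderId" parameter of
// /billing/invoice until the underlying code fix is deployed.
import express from "express";

const app = express();

const SQLI_PATTERN = /('|--|;|\bunion\b|\bselect\b)/i; // crude signature for the known exploit

app.use("/billing/invoice", (req, res, next) => {
  const orderId = String(req.query.orderId ?? "");
  if (SQLI_PATTERN.test(orderId)) {
    // Reject the request outright and log it so the team can see exploit attempts.
    console.warn(`Blocked suspected SQLi against orderId: ${orderId}`);
    res.status(403).send("Request blocked by virtual patch");
    return;
  }
  next();
});

app.listen(8080);
```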

Multiple Vulnerabilities Affecting Four Rapid7 Products


Today, we'd like to announce eight vulnerabilities that affect four Rapid7 products, as described in the table below. While all of these issues are relatively low severity, we want to make sure that our customers have all the information they need to make informed security decisions regarding their networks. If you are a Rapid7 customer who has any questions about these issues, please don't hesitate to contact your customer success manager (CSM), our support team, or leave a comment below. For all of these vulnerabilities, the likelihood of exploitation is low, due to an array of mitigating circumstances, as explained below.

Rapid7 would like to thank Noah Beddome, Justin Lemay, Ben Lincoln (all of NCC Group); Justin Steven; and Callum Carney - the independent researchers who discovered and reported these vulnerabilities, and worked with us on pursuing fixes and mitigations.

Rapid7 ID | CVE | Product | Vulnerability | Status
NEX-49834 | CVE-2017-5230 | Nexpose | Hard-Coded Keystore Password | Fixed (6.4.50-2017-0809)
MS-2417 | CVE-2017-5228 | Metasploit | stdapi Dir.download() Directory Traversal | Fixed (4.13.0-2017020701)
MS-2417 | CVE-2017-5229 | Metasploit | extapi Clipboard.parse_dump() Directory Traversal | Fixed (4.13.0-2017020701)
MS-2417 | CVE-2017-5231 | Metasploit | stdapi CommandDispatcher.cmd_download() Globbing Directory Traversal | Fixed (4.13.0-2017020701)
PD-9462 | CVE-2017-5232 | Nexpose | DLL Preloading | Fixed (6.4.24)
PD-9462 | CVE-2017-5233 | AppSpider Pro | DLL Preloading | Fix in progress (6.14.053)
PD-9462 | CVE-2017-5234 | Insight Collector | DLL Preloading | Fixed (1.0.16)
PD-9462 | CVE-2017-5235 | Metasploit Pro | DLL Preloading | Fixed (4.13.0-2017022101)

CVE-2017-5230: Rapid7 Nexpose Static Java Keystore Passphrase

Cybersecurity firm NCC Group discovered a design issue in Rapid7's Nexpose vulnerability management solution, and has released an advisory with the relevant details here. This section briefly summarizes NCC Group's findings, explains the conditions that would need to be met in order to successfully exploit this issue, and offers mitigation advice for Nexpose users.

Conditions Required to Exploit

One feature of Nexpose, as with all other vulnerability management products, is the ability to configure a central repository of service account credentials so that a VM solution can login to networked assets and perform a comprehensive, authenticated scan for exposed, and patched, vulnerabilities. Of course, these credentials tend to be sensitive, since they tend to have broad reach across an organization's network, and care must be taken to store them safely. The issue identified by NCC Group revolves around our Java keystore for storing these credentials, which is encrypted with a static, vendor-provided password, "r@p1d7k3y5t0r3." If a malicious actor were to get a hold of this keystore, that person could use this password to decrypt and expose all stored scan credentials. While this is not obviously documented, this password is often known to Nexpose customers and Rapid7 support engineers, since it's used in some backup recovery scenarios.

This vulnerability is not likely to offer an attacker much of an advantage, however, since they would need to already have extraordinary control over your Nexpose installation in order to exercise it. This is because you need high-level privileges to be able to actually get hold of the keystore that contains the stored credentials.
So, in order to obtain and decrypt this file, an attacker would need to already have at least root/administrator privileges on the server running the Nexpose console, OR have a Nexpose console "Global Administrator" account, OR have access to a backup of a Nexpose console configuration. If the attacker already has root on the Nexpose console, the jig is up; customers are already advised to restrict access to Nexpose servers through normal operating system and network controls. This level of access would already represent a serious security incident, since the attacker would have complete control over the Nexpose services and could leverage any number of techniques to extend privileges to other network assets, such as conducting local man-in-the-middle network monitoring, local memory profiling, or other, more creative techniques to increase access. Similarly, Global Administrator access to the Nexpose console would, at minimum, allow an attacker to obtain a list of every vulnerable system in scope, alter or skip scheduled scans, and create new and malicious custom scan templates.

That leaves Nexpose console backups, which we believe represent the most likely attack vector. Sometimes, backups of critical configurations are stored in networked locations that aren't as secure as the backed-up system itself. We advise against this, for obvious reasons; if backups are not secured at least as well as the Nexpose server itself, it is straightforward to restore the backup to a machine under the attacker's control (where the attacker would have root/administrator) and proceed to leverage that local privilege as above.

Designing a Fix

While encrypting these credentials at rest is clearly important for safety's sake, eventually these credentials do have to be decrypted, and the key to that decryption has to be stored somewhere. After all, the whole point of a scheduled, authenticated scan is to automate logins. Storing that key offline, in an operator's head, means having to deal with a password prompt anytime a scan kicks off. This would be a significant change in how the product works, and would be a change for the worse.

Designing a workable fix to this exposure is challenging. The simple solution is to enable users to pick their own passwords for this keystore, or to generate one per installation. This would at least force attackers who have gained access to critical network infrastructure to do the work of either cracking the saved keystore, or doing the slightly more complicated work of stepping through the decryption process as it executes. Unfortunately, this approach would immediately render existing backups of the Nexpose console unusable -- a fact that tends to only become important at the least opportune time, after a disaster has taken out the hosting server. Given the privilege requirements of the attack, this trade-off, in our opinion, isn't worth the future disaster of unrestorable backups. While we do expect to implement a new strategy for encrypting stored credentials in a future release, care will need to be taken to ensure both that the customer experience with disaster recovery remains the same and that support costs aren't unreasonably impacted by this change.

Mitigations for CVE-2017-5230

As of August 2017, a fixed version has been released.
CVE-2017-5228, CVE-2017-5229, CVE-2017-5231: Metasploit Meterpreter Multiple Directory Traversal Issues

Metasploit Framework contributor and independent security researcher Justin Steven reported three issues in the way Metasploit Meterpreter handles certain directory structures on victim machines, which can ultimately lead to a directory traversal issue on the Meterpreter client. Justin reported his findings in an advisory, here.

Conditions Required to Exploit

In order to exploit this issue, we need to first be careful when discussing the "attacker" and "victim." In most cases, a user who is loading and launching Meterpreter on a remote computer is the "attacker," and that remote computer is the "victim." After all, few people actually want Meterpreter running on their machine, since it's normally delivered as a payload to an exploit. However, this vulnerability flips these roles around. If a computer acts as a honeypot and lures an attacker into loading and running Meterpreter on it, that honeypot machine has a unique opportunity to "hack back" at the original Metasploit user by exploiting these vulnerabilities. So, in order for an attack to be successful, the attacker in this case must entice a victim into establishing a Meterpreter session to a computer under the attacker's control. Usually, this will be the direct result of an exploit attempt from a Metasploit user.

Designing a Fix

Justin worked closely with the Metasploit Framework team to develop fixes for all three issues. The fixes themselves can be inspected in the open source Metasploit Framework repository, at Pull Requests 7930, 7931, and 7932; they ensure that data coming back from Meterpreter sessions is properly inspected, since that data can possibly be malicious. Huge thanks to Justin for his continued contributions to Metasploit!

Mitigations for CVE-2017-5228, CVE-2017-5229, CVE-2017-5231

In addition to updating Metasploit to a fixed version (4.13.0-2017020701 or later, per the table above), Metasploit users can help protect themselves from the consequences of interacting with a purposefully malicious host by using Meterpreter's "Paranoid Mode," which can significantly reduce the threat of this and other undiscovered issues involving malicious Meterpreter sessions.

CVE-2017-5232, CVE-2017-5233, CVE-2017-5234, CVE-2017-5235: DLL Preloading

Independent security researcher Callum Carney reported to Rapid7 that the Nexpose and AppSpider installers ship with a DLL preloading vulnerability, wherein an attacker could trick a user into running malicious code when installing Nexpose for the first time. Further investigation from Rapid7 Platform Delivery teams revealed that the installation applications for Metasploit Pro and the Insight Collector exhibit the same vulnerability.

Conditions Required to Exploit

DLL preloading vulnerabilities are well described by Microsoft, here, but in short, they occur when a program fails to specify an exact path to a system DLL; instead, the program may seek that DLL in a number of default system locations, as well as the current directory. In the case of an installation program, that current directory may be a general "Downloads" folder, which can contain binaries downloaded from all sorts of places. If an attacker can convince a victim to download a malicious DLL, store it in the same location as one of the Rapid7 installers identified above, and then install one of those applications, the victim can trigger the vulnerability.
In practice, DLL preloading vulnerabilities occur more often on shared workstations, where the attacker specifically poisons the Downloads directory with a malicious DLL and waits for the victim to download and install an application susceptible to this preloading attack. It is also sometimes possible to exercise a browser vulnerability to download (but not execute) an arbitrary file, and again, wait for the user to run an installer later. In all cases, the attacker must already have write permissions to a directory that contains the Rapid7 product installer. Usually, people only install Rapid7 products once each per machine, so the window of exploitation is also severely limited.

Designing a Fix

In the case of Metasploit Pro, Nexpose, and the Insight Collector, the product installers were updated to define exactly where system DLLs are located, and no longer rely on dynamic searching for missing DLLs. An updated installer for AppSpider Pro will be made available once testing is completed.

Mitigations for CVE-2017-5232, CVE-2017-5233, CVE-2017-5234, CVE-2017-5235

In all cases, users are advised to routinely clean out their "Downloads" folder, as this issue tends to crop up in installer packages in general. Of course, users should be aware of where they are downloading and running executable software, and Microsoft Windows executables support a robust, certificate-based signing procedure that can ensure that Windows binaries are, in fact, what they purport to be. Users who keep historical versions of installers for backup and downgradability purposes should be careful to only launch those installation applications from empty directories, or at least directories that do not contain unknown, unsigned, and possibly malicious DLLs.

Coordinated Disclosure Done Right

NCC Group, Justin Steven, and Callum Carney all approached Rapid7 with these issues privately, and have proven to be excellent and accommodating partners in reporting these vulnerabilities to us. As a publisher of vulnerability information ourselves, Rapid7 knows that this kind of work can at times be combative, unpleasant, and frustrating. Thankfully, that was not the case with these researchers, and we greatly appreciate their willingness to work with us and lend us their expertise. If you're a Rapid7 customer who has any questions about this advisory, please don't hesitate to contact your regular support channel, or leave a comment below.

Finalists in FIVE categories at the Network Computing Awards!


Ring Ring! You're in the Final!

It's always nice to get a phone call letting us know that we've been shortlisted for awards – but when it's five awards, we like those calls even more! Two of our products, and our company, have reached the final stages of the Network Computing Awards, and of course we'd love it if you took a moment to vote for us.

La La Land may have racked up the Oscar noms, but at the Network Computing Awards it's looking good for LE LE Land! OK, so we might not quite have the fourteen nominations that La La Land has, but our Logentries (lovingly shortened to LE) product is a finalist in three categories: Best Picture, Best Soundtrack, Best Original Screenplay (or rather: IT Optimisation Product of the Year, Software Product of the Year, and The Return on Investment Award). To reach this stage in these categories is huge, and we're very happy to be triple listed. If you've not yet experienced Logentries, I would highly recommend you take a look – it's a pretty amazing product. Imagine trying to put together a jigsaw puzzle without an image of the completed puzzle, with no idea of how many pieces are required, and, to add to your woes, with the pieces hidden all over the building. If you've ever had to trawl through multiple logs to try and work out what's causing a problem, with only symptoms to work from – say, a production server running slowly – you'll recognise the analogy. Logentries puts the answers hidden within your myriad of logs right at your fingertips. It's simple to use, lightning fast, and you can create some very cool visualisations from your data too. Click here to learn more about how Logentries can revolutionise how you see your ecosystem.

Look out! Here comes the AppSpider, Man!

Whilst my tenuously linked movie reference here is no stranger to Oscar nominations either, I'm obviously referring to our AppSpider product, which is a finalist in the Testing and Monitoring Product of the Year category. Web apps, and the plethora of technologies that power them, are growing at a crazy rate, presenting complicated security challenges for organisations. AppSpider crawls to the deepest, darkest corners of even the most modern and complex apps to effectively test for risk and get you the insight you need to remediate faster. It plays a key part in the SDLC, and allows DevOps to fix issues earlier in the cycle - resulting in a huge reduction in last-minute delays caused by vulnerabilities being found late in the day. You can read more about how DevOps teams using AppSpider can reduce stress and possibly live longer, happier lives* here. *Life lengthening not guaranteed, but your web app SDLC will be in a happier place for sure. Always read the label.

So many great movies, so little time… but which One should I Watch?

The Rapid7 movie, of course! Well, OK, we don't have a movie-length extravaganza of Rapid7 for you yet (cough, cough: Kyle Flaherty), but we do have some pretty cool YouTube videos you can watch, plus a highly acclaimed podcast you should listen to. We've also been listed as a finalist for The One to Watch Company award - hooray! We're pleased (read: overjoyed), humbled, and indeed chuffed (I had to get a Britishism in somewhere) to have received our finalist nominations, and we're very much looking forward to attending the event in London later this year.

If you could please take a minute to cast your votes for Logentries, AppSpider, and Rapid7, that would be most wonderful of you – voting is open until March 22nd. Click here to vote!

Bug, Not Alert: How Application Security Must Use Different Words

"Words matter" is something that comes out of my mouth nearly every day. At work, it matters how we communicate with each other, and the words we use might be the difference between collaboration and confrontation. The same happens in the security world, especially when we communicate with folks in IT or within the devops methodology. Last week this became highly apparent while sitting with folks attending OWASP's annual AppSec USA, where they discussed the difference between a fix and a fail.

The problem, in our world, often stems from the fact that security is oftentimes a scary concept, conjuring up thoughts of clowns lurking in the woods on the walk to school (my therapist told me to express my fears outwardly). Security means something is at risk and that if it doesn't get fixed immediately the world may come to a frantic halt. The truth, however, is that not all security threats are created equal, and in most cases the need to prioritize fixes can eliminate the panic. The challenge is actually how security threats or vulnerabilities are presented to those outside of security. Imagine what a "security vulnerability report" does to the devops folks working on the app your business uses to bill customers. For years we've focused on finding all the vulnerabilities, prioritizing them based on business and threat context, and then ultimately throwing them over the wall to IT or devops. But security has been learning how to more effectively create a remediation workflow. In some cases this means true management of the workflow, analytics that tell you if a vuln has been patched, or dashboards fed from live data so decisions are made at the point of impact. All that stuff is great, but what if I said you also need a reframing of what the term "security vulnerability" actually means?

Security Vuln or JIRA Ticket?

Back to my time with the OWASP crew in DC; and I'll be fully transparent that this idea came to me as I spoke with Rapid7 application security customers (check out AppSpider for more info). We talked a long time about the importance of collecting all the right application data for scanning and then prioritizing the vulns found. But the part of the conversation that really turned around my thinking was when we got to remediation. The functionality that these customers liked the most was the ability not to throw over a 2,300-page stale report (true story!) but instead to translate found vulnerabilities directly into the devops ticketing system. In this case it was a simple matter of taking what was found via application security testing and then placing that, with context, into JIRA. All of a sudden the devops team had a list of high-priority bug fixes, which they valued and would get to quickly, rather than a big security report that seemed to be more blame-game than helpful. Words matter in security, as does intent. It's an important thing to consider as you build out your security program and discover the points of contact with IT and devops.

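To make the idea of filing findings as tickets concrete, here is a minimal, hypothetical sketch (not AppSpider's actual JIRA integration) that pushes a single finding into JIRA as a plain bug via JIRA's REST issue endpoint. The project key, environment variables, and field values are placeholders.

```typescript
// Hypothetical example: file a DAST finding as a JIRA bug instead of emailing a report.
// Assumes Node 18+ (global fetch); JIRA_URL, JIRA_USER, JIRA_TOKEN, and the project key are placeholders.
interface Finding {
  name: string;
  url: string;
  parameter: string;
  severity: string;
  remediation: string;
}

async function fileFindingAsBug(finding: Finding): Promise<void> {
  const auth = Buffer.from(`${process.env.JIRA_USER}:${process.env.JIRA_TOKEN}`).toString("base64");
  const response = await fetch(`${process.env.JIRA_URL}/rest/api/2/issue`, {
    method: "POST",
    headers: {
      "Authorization": `Basic ${auth}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      fields: {
        project: { key: "WEBAPP" },     // placeholder project key
        issuetype: { name: "Bug" },     // framed as a bug, not a "security alert"
        summary: `${finding.severity}: ${finding.name} at ${finding.url}`,
        description:
          `Parameter: ${finding.parameter}\n` +
          `How to reproduce: replay the attack request from the scan report.\n` +
          `Suggested fix: ${finding.remediation}`,
      },
    }),
  });
  if (!response.ok) {
    throw new Error(`JIRA rejected the issue: ${response.status}`);
  }
}
```
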
UNITED 2016: Want to share your experience?


Key trends. Expert advice. The latest techniques and technology. UNITED 2016 is created from the ground up to provide the insight you need to drive your security program forward, faster. This year, we're also hoping you can provide us with the insight we need to make our products and services even better. That's why we're running two UX focus groups on November 1, 2016. We'd love to see you there—after all, your feedback is what keeps our solutions ever-evolving.

UX Focus Group: Help us make Nexpose Now even better

Stale results. False alerts. Windows of wait. We heard your issues with traditional scanning and released Nexpose Now to help you resolve them. Now that you've been using it for several months, we'd love to know: how's it going? Actually, we have way more questions than that, but they're all in the service of making sure Nexpose Now is meeting – or exceeding – your needs. And the only person who can tell us that is you! So please join us for this 1.5-hour focus group, where you – along with other Nexpose Now users – can share your list of loves and loathes. It's the perfect opportunity to speak with Rapid7, as well as your peers, about your Nexpose Now experience, so we can help make it even more exceptional.

UX Focus Group: Creating personalized and exceptional experiences

Here at Rapid7, we think we've done some pretty great stuff, but we also know we can do some things even better. Though, frankly, what we think doesn't really matter—as a Rapid7 customer, the only opinion we care about is yours. And we want it! Why? Well, as our favorite customer experience author John A. Goodman put it, "We can solve only the problems we know about." So join this 1.5-hour focus group and let us know: from the first time you heard about our solutions to the last time you used them, what's worked well and what could work better? Your participation will really help us to understand the experience from your perspective, and how we can further personalize and improve that experience moving forward.

Want in?

Saving a seat in our focus groups is easy. If you haven't yet registered for UNITED, you can register for a UX session while registering for the conference. If you have already registered for UNITED, just head back to the conference registration page, enter the email address you used to register – along with your confirmation number – and tack on the session that makes sense for you. Space is limited, so act soon! We are looking forward to seeing you!

Ger Joyce
Senior UX Researcher, Rapid7

Web Application Security Testing: Single Page Applications Built with JavaScript Frameworks


In recent years, more and more applications are being built on popular new JavaScript frameworks like ReactJS and AngularJS. As is often the case with new application technologies, these frameworks have created an innovation gap for most application security scanning solutions and an acute set of challenges for those of us who focus on web application security. It is imperative that our application security testing approaches keep pace with evolving technology. When we fail to keep up, portions of our applications go untested, leaving unknown risk.

Related resource: [VIDEO] Securing Single Page Applications Built on JavaScript Frameworks

So, let's look at some of the key things we need to think about when testing these modern web applications.

1. Dynamic clients. Today's complex web applications have highly dynamic clients, built on JavaScript platforms like AngularJS and ReactJS. Single Page Application (SPA) frameworks fundamentally change the browser communication that security experts have long understood. These frameworks use custom event names instead of the traditional browser events we understand ('on click,' 'on submit,' etc.). Evaluate whether your dynamic application security testing solution is capable of translating these custom events into the traditional browser events we understand. (A short sketch of this difference appears at the end of this post.)

Related resource: [WEBCAST] Best Practices for Reducing Risk with a Dynamic App Security Program

2. RESTful APIs (back end). Today's modern applications are powered by complex back-end APIs. Most organizations are currently testing RESTful APIs manually, or not testing them at all. Your dynamic application security testing solution should be able to automatically discover and test a RESTful API while crawling both AJAX applications and SPAs. Because APIs are proliferating so rapidly, they take a long time to test; your dynamic application security testing solution should free your expert pen testers to focus on the problems that can't be automated, like business logic testing.

Related resource: [Whitepaper] The Top Ten Business Logic Attack Vectors

3. Interconnected applications. As security experts, it's imperative that we understand today's interconnected world. We are seeing interconnected applications at work and at home. For example, the Yahoo homepage shows news from many sites and includes your Twitter feed, and Amazon offers up products from eBay. We are used to thinking about testing an individual application, but now we must go beyond that. Many applications expose open APIs so that other applications can connect to them, or consume the APIs of third-party applications. These applications are becoming increasingly interconnected and interdependent. Your DAST solution should help you address this interconnectivity by testing the APIs that power them.

Dynamic application security testing solutions are evolving rapidly. We encourage you to expect more from your solution. AppSpider enables you to keep up with the changing application landscape so that you can be confident your application has been effectively tested. AppSpider goes where no scanner has gone before - to the deep and dark crevices of your modern applications. By using AppSpider for Dynamic Application Security Testing (DAST), you can keep up with application evolutions, from the dynamic clients of Single Page Applications (SPAs) to the complex back-end APIs. Learn more about AppSpider and how it scans Single Page Applications that are built on JavaScript frameworks.
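As referenced in point 1, here is a minimal, hypothetical sketch of why custom events defeat scanners that only understand standard browser events: the "save" action below is wired to a framework-style custom event rather than a plain click handler, so a crawler that only simulates clicks never discovers the code path behind it. The element IDs, event name, and endpoint are made up for illustration.

```typescript
// Illustrative only: element IDs, the "app:save" event name, and /api/profile are hypothetical.

// Traditional wiring: a crawler that simulates clicks will find and exercise this.
document.getElementById("legacy-save")?.addEventListener("click", () => {
  void fetch("/api/profile", { method: "POST", body: new FormData() });
});

// SPA-style wiring: the real work happens in a listener for a custom event.
// A scanner that only knows 'click' and 'submit' never triggers this code path,
// so the POST below goes untested.
document.addEventListener("app:save", (event) => {
  const detail = (event as CustomEvent<{ profile: string }>).detail;
  void fetch("/api/profile", { method: "POST", body: JSON.stringify(detail) });
});

// Somewhere in the framework's own delegation layer:
document.getElementById("spa-save")?.addEventListener("pointerdown", () => {
  document.dispatchEvent(new CustomEvent("app:save", { detail: { profile: "draft" } }));
});
```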

AppSpider application security scanning solution deepens support for Single Page Applications - ReactJS


Today, Rapid7 is pleased to announce an AppSpider (application security scanning) update that includes enhanced support for JavaScript Single Page Applications (SPAs) built with ReactJS. This release is significant because SPAs are proliferating rapidly and increasingly creating challenges for security teams. Some of the key challenges with securing SPAs are:

Diverse frameworks - The diversity and number of JavaScript frameworks contribute to the complexity of achieving adequate scan coverage across all modern Single Page Applications.

Custom events - These frameworks implement non-standard or custom event binding mechanisms; in the case of ReactJS, the framework creates a so-called "Virtual DOM" which provides an internal representation of events outside of the real browser DOM. It is important to discover the type and location of every actionable component on the page. Tracking the event bindings on a real DOM is relatively straightforward by shimming EventTarget.prototype.addEventListener and determining the event type and the node it is bound to (a sketch of this appears at the end of this post). However, in cases where a framework manages its own event delegation (such as in the ReactJS Virtual DOM), it becomes more efficient to hook into the framework, effectively providing a query language into the framework for its events (instead of listening for them). According to the ReactJS documentation on event delegation: "React doesn't actually attach event handlers to the nodes themselves. When React starts up, it starts listening for all events at the top level using a single event listener. When a component is mounted or unmounted, the event handlers are simply added or removed from an internal mapping. When an event occurs, React knows how to dispatch it using this mapping."

AppSpider has now created a generalized, lightweight framework-hooking structure that can be used to effectively crawl, discover, and scan frameworks that do things 'their own way.' Look for an upcoming announcement on how you can incorporate and contribute your own custom framework scanning hooks with AppSpider.

What's New?

So what is AppSpider doing with ReactJS now? AppSpider leverages Facebook's open source developer tools (react-devtools), wrapped in a generalized framework hook, to crawl ReactJS applications exhaustively. Additionally, 'doin' it their own way' event binding systems (such as the ReactJS Virtual DOM) are now recognized and executed. Frameworks such as AngularJS, Backbone, jQuery, and Knockout are still supported right out of the box, without the need for tuning; only where needed are we adding specific support for frameworks with custom techniques.

Why is this important?

Web application security scanners struggle to understand these more complex types of systems that don't rely on fat clients and slow processes. Scanners were built around standard event names, relying on these ever-present fields to allow them to interact with the web application. Without these fields, a traditional scanner no longer has the building blocks necessary to correctly interact with the web applications it is scanning. Additionally, scanners have used the ever-present DOM structure to better understand the application and assist with crawling. This becomes difficult, if not impossible, for a traditional scanner when it has to deal with applications that process information on the server side instead of the client side. If this creates such an issue, why are people flocking to these frameworks?
There are several reasons:

- Two-way data bindings, which allow a better back and forth between client-side and server-side interactions
- Processing information on the server side, which increases performance and gives a better user experience
- Flexibility to break away from the fat client-side frameworks

These capabilities can make a dramatic difference to developers and end users, but they also introduce unique issues for security teams. Security teams and their tools are used to standard event names like OnClick or OnSubmit. These events drive how we interact with a web application, allowing our standard tools to crawl through the application and interact with it. By using these standard events we have been able to automate the manual tasks of making the application think we interacted with it. This becomes much more complicated when we introduce custom event names. How do you automate interacting with something that changes from application to application, or even worse, whenever you refresh the same application? AppSpider answers that question by connecting directly into the framework and having the framework tell you what those custom events are before it even begins to crawl and attack.

Security experts have also relied upon the DOM to know what was needed to test an application, and they monitor this interaction to understand potential weaknesses. Server-side processing complicates this, as the processing happens on the server, away from the eyes and tools of the security expert, with only the end results displayed. With AppSpider, you can now handle applications that utilize server-side processes because we are not dependent on what is shown to us; we already know what is there.

Currently, the only way for pen testers to conduct web application tests on applications using ReactJS and other modern frameworks is to manually attack them, working their way one-by-one through each option. This is a time-consuming and tedious task. Pen testers lack the tools to quickly and dynamically scan a web application built on these SPA frameworks, identify potential attack points, and narrow down where they would like to do further manual testing. AppSpider now allows them to quickly and efficiently scan these applications, saving them time and letting them focus their efforts where they will be most effective.

How can you tell if your scanner supports these custom event names? Answering this question can be difficult, as you have to know your application to truly understand what is being missed. You can typically see it quickly when you start to analyze the results of your scans: you will see areas of your application completely missed, or parameters that never show up in your logs as being tested. Know your weaknesses. Can your scanner effectively scan this basic ReactJS application and find all of the events? http://webscantest.com/react/ Test it and find out! Is your scanner able to see past the DOM? AppSpider can.
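As referenced above, here is a minimal, illustrative sketch of tracking event bindings on the real DOM by shimming EventTarget.prototype.addEventListener. It simply records the event type and target node for each binding; it is not AppSpider's actual instrumentation.

```typescript
// Illustrative shim, not AppSpider's actual instrumentation.
// Records every (event type, node) pair the page binds a listener to,
// so a crawler can later enumerate which elements are actionable.
type Binding = { type: string; target: EventTarget };
const recordedBindings: Binding[] = [];

const originalAddEventListener = EventTarget.prototype.addEventListener;

EventTarget.prototype.addEventListener = function (
  this: EventTarget,
  type: string,
  listener: EventListenerOrEventListenerObject | null,
  options?: boolean | AddEventListenerOptions
): void {
  // Record the binding before delegating to the real implementation.
  recordedBindings.push({ type, target: this });
  originalAddEventListener.call(this, type, listener, options);
};

// A crawler could later inspect recordedBindings to decide which custom
// events (e.g. "app:save") to dispatch against which nodes.
```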

Validate Web Application Security Vulnerabilities with AppSpider's New Chrome Plug-In


AppSpider's Interactive Reports Go Chrome

We are thrilled to announce a significant reporting enhancement to AppSpider, Rapid7's dynamic application security scanner. AppSpider now has a Chrome plug-in that enables users to open any report in Chrome and use the real-time vulnerability validation feature without the need for Java, and without having to zip up the report folder and send it off. This makes reporting and troubleshooting even easier!

Enabling Security-Developer Collaboration to Speed Remediation

AppSpider is a dynamic application security scanning solution that finds vulnerabilities from the outside in, just as a hacker would. Our customers tell us that AppSpider not only makes it easier to collaborate with developers, but also speeds remediation efforts. Unlike other application security scanning solutions, we don't just report security weaknesses for security teams to 'send' to developers. Our solution includes an interactive component that enables developers to quickly and easily review a vulnerability and replay the attack in real time. This lets them see how the vulnerability works right from their desktop, without having direct access to AppSpider itself - and without learning how to become a hacker.

Related Content: [VIDEO] Why it's important to drive application security earlier in the software development lifecycle.

Developers can then use AppSpider's interactive reports to confirm that their fixes have resolved the vulnerability and are actually protecting the application from the weaknesses found. Developers don't need to have AppSpider installed in their environment to leverage this functionality; with just the report and a connection to the application they are testing, they're good to go.

Related Content: [VIDEO] Watch AppSpider interactive reports in action.

AppSpider Interactive Reports - How it Works

Pretty cool, huh? Well, here's how and why it works. For those who work in application security, we know all too well that many, if not most, of the application security vulnerabilities we deal with exist in the source code of the custom applications we are responsible for - often in the form of unvalidated inputs. As security professionals, we aren't able to resolve these vulnerabilities (or defects) with a simple patch. We need to work with the developers to resolve security defects, implement coding best practices, and then re-release the new code into production. At Rapid7, we have understood this for a long time, and we have been helping security teams and development teams collaborate more effectively through AppSpider.

There are many reasons why effective DevSecOps collaboration is difficult. Developers aren't security professionals, and reporting security defects to them is easier said than done. We have the logistical issues of emailing around spreadsheets or PDFs, and then we have the communication issues related to us speaking security and them speaking developer. Not to mention the pain of having to go back and forth re-testing their "fixes" to see if they are still vulnerable or not, 'cause let's face it, most developers wouldn't know a SQL Injection from a Cross-Site Request Forgery (CSRF), let alone know how to actually attack their code to see if it's vulnerable to these attack types. This is an area in which we have always shined; however, until today, AppSpider required the security professional and the developer to use a Java applet to accomplish this within our reports. Now that Chrome and Firefox have disabled Java support, some teams weren't able to leverage this functionality.

Are you looking to upgrade your dynamic application security scanner? Check out AppSpider in action with this on-demand demo of our web application security solution here!

RESTful Web Services: Security Testing Made Easy (Finally)


AppSpider's got even more Swagger now! As you may remember, we first launched improved RESTful web services security testing last year. Since that time, you have been able to test REST APIs that have a Swagger definition file, automatically, without capturing proxy traffic. Now, we have expanded upon that functionality so that AppSpider can automatically discover Swagger definition files as part of the application crawl phase. You no longer have to import the Swagger definition file, delivering an even easier and more automatic approach to security testing RESTful web services (APIs, microservices, and web APIs). This is a huge timesaver and another evolution in AppSpider's long history of handling modern applications better than other application security scanning solutions.

Challenges with Security Testing RESTful APIs

When it comes to RESTful web services, most application scanning solutions have been stuck in the traditional web application dark ages. As APIs have proliferated, security teams have been forced to manually crawl each API call, relying on what little - if any - documentation is available and on their knowledge of the application. With a manual process like that, the best we can hope for is not to miss any path or verb (GET, PUT, POST, DELETE) within the API. In this scenario, you also have to figure out how to stay current with a manually documented API. The introduction of documentation formats such as Swagger, API Blueprint, and RAML helped, but testing was still a manual process fraught with errors.

RESTful Web Services: Security Testing Made Easy

Enter Rapid7. At the end of 2015, we released a revolutionary capability for testing your REST APIs with the introduction of Swagger-based DAST scanning. The ability for AppSpider to learn how to test an API by consuming a Swagger definition (.json) file changed the way DAST solutions handle API security testing, and allowed our customers, for the first time, to easily scan their APIs without a lot of manual work. Now, we are taking it up another notch by making REST API security testing even easier.

What's New?

This is no trivial task, as it's not just a matter of parsing data out of a file. When our engineers started this work, the first thing they thought about was how customers would use the feature. We quickly realized that, just like everything else in application security, when we start scanning new technologies in the web application ecosphere we encounter the same challenges we did when learning to effectively scan traditional web applications. So, here are three of the latest enhancements we have made to speed REST security testing. (A short sketch of what Swagger-driven enumeration looks like appears at the end of this post.)

Automated Discovery of Swagger Definitions - Instead of feeding your Swagger definition file into AppSpider, you can simply point AppSpider at the URL that contains your Swagger definition, and AppSpider will automatically ingest it and begin to take action.

Parameter Identification and Testing with Expected Results - Application security testing solutions always face the challenge of knowing what the parameters are and what data they expect. Web applications can have many different parameters, some of which may be unique to just one API. We knew that if this was going to be effective, we needed to account for these unique types of parameters. This led us to expand our capability so that you can give AppSpider guidance on what these parameters mean to your application. Your guidance allows AppSpider to improve the comprehensiveness of the testing, and AppSpider remembers your guidance and uses it in subsequent tests. Quick tip: regardless of which application security testing solution or experts you use, be sure that your scanner or testers are using expected results (a date for 'date', a name for 'last name', and a valid credit card number for 'ccn'). Without expected results, the test is largely ineffective.

Scan Restrictions - Just like any other area of a web application, APIs have sensitive portions that you may not want to scan; a good example is an HTTP verb like DELETE. Many teams have effectively documented ALL of their REST API. This is great, and is really where you should be, but we need to be able to avoid testing certain sections. We are already very good at letting you customize your web application scanning to make it the best it can be, and we have now extended this capability into the handling of APIs. You can leverage AppSpider's scan restrictions capability and exclude any parameter or HTTP verb you do not want to exercise.

By leveraging AppSpider's automated testing of RESTful web services, which includes both parameter training and scan restrictions, you have an unparalleled opportunity to test the security of your REST APIs quickly and frequently. We know you thought this was out of reach, but it's not! So keep this in mind next time you are having a discussion about how to efficiently scan and understand the security weaknesses in your APIs. If you're stuck in a manual process, it might be time to look at how to automate it using something like Swagger. (Note: Swagger has been renamed the OpenAPI Specification.) If you are already well automated, then we can give you an answer you've always wanted: we can automate your API scanning like never before.

You may also be interested in: AppSpider Release Notes, and the blog post "AppSpider's Got Swagger: The first end-to-end security testing for REST APIs."
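To make the Swagger-driven approach concrete, here is a minimal, hypothetical sketch (not AppSpider's implementation) that fetches a Swagger 2.0 definition from a URL, enumerates its paths and HTTP verbs, and skips DELETE as a stand-in for a scan restriction. The URL and definition layout are assumptions.

```typescript
// Hypothetical sketch, not AppSpider's implementation.
// Fetches a Swagger 2.0 definition and lists the operations a scanner could test,
// skipping DELETE to mimic a scan restriction. Requires Node 18+ (global fetch).

interface SwaggerDoc {
  basePath?: string;
  paths: Record<string, Record<string, unknown>>;
}

const EXCLUDED_VERBS = new Set(["delete"]); // scan restriction: never exercise DELETE

async function enumerateOperations(swaggerUrl: string): Promise<string[]> {
  const doc = (await (await fetch(swaggerUrl)).json()) as SwaggerDoc;
  const operations: string[] = [];
  for (const [path, verbs] of Object.entries(doc.paths)) {
    for (const verb of Object.keys(verbs)) {
      if (EXCLUDED_VERBS.has(verb.toLowerCase())) continue;
      operations.push(`${verb.toUpperCase()} ${doc.basePath ?? ""}${path}`);
    }
  }
  return operations;
}

// Example usage with a made-up URL where a definition might be published:
enumerateOperations("https://api.example.test/swagger.json")
  .then((ops) => ops.forEach((op) => console.log(op)))
  .catch((err) => console.error("Could not read the Swagger definition:", err));
```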

Lessons Learned in Web Application Security from the 2016 DBIR


We spent last week hearing from experts around the globe discussing what web application security insights we have gotten from Verizon's 2016 Data Breach Investigations Report. Thank you, Verizon, and all of your partners, for giving us a lot to think about! We also polled our robust Rapid7 Community, asking them what they have learned from the 2016 DBIR, and we wanted to share some of their comments as well.

Quick Insights from the Rapid7 Community

"I find that the Verizon Data Breach Investigation Report is a good indication of the current environment when it comes to the threat climate - I use it to prioritize what areas and scenarios I spend the most time focusing resources upon. For my environment, the continued shrinking of time between vulnerability disclosure and exploit is very important. For offices like mine with a small staff, identifying and applying patches in an ever more strategic manner is key. I think vendors who successfully market intelligent heterogeneous automated patching systems will start to see big gains in sales. And those that can tie it to scanning/compliance/reporting/attack suites are going to be even better positioned in the market." - Scott Meyer, Sr. Systems Engineer at United States Coast Guard

"The internet is evolving, and greater complexity creates greater risk by introducing new potential attack vectors. Attackers aren't always after data when targeting a web application. Frequently sites are re-purposed to host malware or as a platform for a phishing campaign. Website defacements are still prevalent, accounting for roughly half of the reported incidents." - Steven Maske, Sr. Security Engineer

"Train, train, and retrain your users. Use proper coding. Really, we still fall victim to SQLi? Two factor authentication is still king. Limit download to x to prevent complete data exfiltration." - Jack Voth, Sr. Director of Information Technology at Algenol Biotech

Lessons Learned from the 2016 Verizon Data Breach Report

1. Web application attacks are a primary vector.
• Start security testing your applications today.

2. No industry is immune, but some are more affected than others.
• Focus on the attack patterns that your industry is experiencing.
• Know your enemy's motivation.

3. Unvalidated inputs continue to plague our web applications.
• Validate your inputs (a brief sketch of input validation with a parameterized query appears at the end of this post).
• Train and retrain your developers.
• Keep in mind that software security issues are software defects.
• Conduct regular dynamic application security testing (DAST) assessments to find unvalidated inputs.

4. Web applications are evolving, and so should your application security program.
• Make sure your skills and tools are up to snuff with the latest dynamic and complex applications.
• Ask your vendors whether their tools handle dynamic clients, RESTful APIs, and Single Page Applications. Learn why this is important and what questions you need to ask vendors in this quick video.

5. Different industries have different enemies.
• Know who and what you are defending against. Grudge or money?

6. There are so many free and fabulous resources. Use them!
• Get involved with OWASP today!

How Rapid7 Can Help

Rapid7's AppSpider, a Dynamic Application Security Testing (DAST) solution, finds real-world vulnerabilities in your applications from the outside in, just as an attacker would. AppSpider goes beyond basic testing by enabling you to build a truly scalable web application security program. You can watch an on-demand demo of AppSpider here if you are interested in learning more.
Deeper application coverage

The AppSpider development team keeps up with evolving web application technologies so that you don't have to. From AJAX and REST APIs to Single Page Applications, we're committed to making sure that AppSpider assesses as much of your applications as possible, so that you can rely on AppSpider to find unvalidated inputs and a host of other vulnerabilities in your modern web applications. View our quick video to learn how to achieve deeper web application coverage with your web app scanner.

Breadth of web app attack types

From unvalidated inputs to information disclosure, with more than 50 different attack types we've got you covered. AppSpider goes way beyond the OWASP Top 10 attack types, including SQL Injection and Cross-Site Scripting (XSS) - we test for every attack pattern that can be tested by software. This leaves your team more time and budget to test the attack types that require human-like business logic testing.

Application security program scalability

AppSpider is designed to help you scale your application security testing program so that you can conduct regular testing across hundreds or thousands of applications throughout the software development lifecycle.

Dynamic Application Security Testing (DAST) earlier in the SDLC

AppSpider comes with a host of integrations that enable you to drive application security earlier into the SDLC through continuous integration (like Jenkins), issue tracking (like Jira), and browser integration testing (like Selenium). Our customers are successfully collaborating with their developers and building dynamic application security testing earlier into the SDLC.

You may also be interested in these blog posts that offer perspective on the 2016 Verizon DBIR:
Social Attacks in Web App Hacking - Investigating Findings of the DBIR
3 Web App Sec-ian Takeaways From the 2016 DBIR
2016 DBIR & Application Security: Let's Get Back to the Basics Folks
The 2016 Verizon Data Breach Investigations Report (DBIR) - A Web Application Security Perspective
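Since unvalidated inputs figure so prominently in the lessons above, here is a minimal, illustrative sketch (not tied to any specific application) of the two habits that eliminate most injection findings: validating input against an expected shape, and passing it to the database as a bound parameter rather than concatenating it into SQL. The table and column names are made up.

```typescript
// Illustrative only: table and column names are hypothetical.
// Demonstrates input validation plus a parameterized query (node-postgres).
import { Client } from "pg";

const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // simple allow-list style check

export async function findUserByEmail(client: Client, email: string) {
  // 1. Validate the input against the shape we expect before using it at all.
  if (!EMAIL_PATTERN.test(email)) {
    throw new Error("Rejected input: not a plausible email address");
  }
  // 2. Pass the value as a bound parameter; it is never concatenated into SQL,
  //    so a payload like "' OR 1=1 --" is treated as data, not as query text.
  const result = await client.query(
    "SELECT id, email FROM app_users WHERE email = $1",
    [email]
  );
  return result.rows;
}
```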

Social Attacks in Web App Hacking - Investigating Findings of the DBIR


This is a guest post from Shay Chen, an Information Security Researcher, Analyst, Tool Author, and Speaker - the guy behind the TECAPI, WAVSEP, and WAFEP benchmarks.

Are social attacks that much easier to use, or is it the technology gap of exploitation engines that makes social attacks more appealing?

While reading through the latest Verizon Data Breach Investigations Report, I naturally took note of the Web App Hacking section, and noticed the diversity of attacks presented under that category. One of the most notable elements was how prominent the use of stolen credentials and social vectors in general turned out to be, in comparison to "traditional" web attacks. Even SQL Injection (SQLi) - probably the most widely known (by humans) and supported (by tools) attack vector - is far behind, and numerous application-level attack vectors are not even represented in the charts.

Although it's obvious that in 2016 there are many additional attack vectors that can have a dire impact, attacks tied to the social element are still much more prominent, and the "traditional" web attacks being used all seem to be attacks supported out-of-the-box by the various scan engines out there. It might be interesting to investigate a theory around the subject: are attackers limited to the attacks supported by commonly available tools? Are they further limited by the engines not catching up with recent technology complexity?

With the recent advancements and changes in web technologies - single page applications, applications referencing multiple domains, exotic and complicated input vectors, scan barriers such as anti-CSRF mechanisms and CAPTCHA variations - even enterprise-scale scanners have a hard time scanning modern applications in a point-and-shoot scenario, and the typical single page application may require scan policy optimization to get it to work properly, let alone get the most out of the scan.

Running phishing campaigns still requires a level of investment and effort from the attacker, at least as much as the configuration and use of capable, automated exploitation tools. Attackers appear to be choosing the former, and that's a signal that presently there is a better ROI for these types of attacks. If the exploitation engines that attackers are using face the same challenges as vulnerability scanner vendors - catching up with technology - then perhaps the technology complexity facing automated exploitation engines is the real barrier that makes the social elements more appealing, and not only the availability of credentials and the success rate of social attacks.

How about testing it for yourself? If you have a modern single-page application in your organization (Angular, React, etc.), and some method of monitoring attacks (WAF, logs, etc.), note:

Which attacks are being executed against your apps?
Which pages/methods and parameters are getting attacked on a regular basis, and which are not?
Are the pages being exempted technologically complex to crawl, activate, or identify?

Maybe complexity isn't the enemy of security after all.

2016 DBIR & Application Security: Let's Get Back to the Basics Folks

This is a guest post from Tom Brennan, Owner of ProactiveRISK and serving on the Global Board of Directors for the OWASP Foundation. In reading this year's Verizon Data Breach Investigations Report, one thing came to mind: we need to get back to the basics.…

This is a guest post from Tom Brennan, Owner of ProactiveRISK and serving on the Global Board of Directors for the OWASP Foundation. In reading this year's Verizon Data Breach Investigations Report, one thing came to mind: we need to get back to the basics. Here are my takeaways from the DBIR.

1. Remain Vigilant

Recently, data relating to 1.5 million customers of Verizon Enterprise was available for sale. Some would say this is ironic, but what it means to me is that everyone is HUMAN. SEC_RITY requires "U" to be vigilant in all aspects of operations, from the creation and deployment of technology through its use. I was very happy to see the work of the Center for Internet Security (CIS) Top 20 Security Controls referenced. These are important proactive steps in operating ANY business, and I'm proud to be one of the collaborators on this important project.

CIS Top 20 Security Controls
1: Inventory of Authorized and Unauthorized Devices
2: Inventory of Authorized and Unauthorized Software
3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
4: Continuous Vulnerability Assessment and Remediation
5: Controlled Use of Administrative Privileges
6: Maintenance, Monitoring, and Analysis of Audit Logs
7: Email and Web Browser Protections
8: Malware Defenses
9: Limitation and Control of Network Ports, Protocols, and Services
10: Data Recovery Capability
11: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
12: Boundary Defense
13: Data Protection
14: Controlled Access Based on the Need to Know
15: Wireless Access Control
16: Account Monitoring and Control
17: Security Skills Assessment and Appropriate Training to Fill Gaps
18: Application Software Security
19: Incident Response and Management
20: Penetration Tests and Red Team Exercises

2. All software security issues are software quality issues.

Unfortunately, finding fault is what some humans do best; having adequate controls is what IT defense is actually about. The sections in the Verizon report that discussed attack vectors should remind everyone that not all software quality issues are security issues, but all software security issues are software quality issues. Currently, one of the greatest risks to software is third-party software components.

3. What type of Attacker are you Defending Against?

What has not changed since 1989, when I first used ATDT,,, to wardial by modem off an 8-bit, is that it's STILL people behind the keyboards. People on a wide ethical spectrum are still using keyboards to harm, steal, deface, intimidate, and wage cyber attacks/wars, and ALL that criminals need is means, motive, and opportunity. Every organization needs to be asking what TYPE of attacker they are defending against (threat modeling). For example: "My business relies on the internet for selling widgets; the adversary is an indiscriminate bot/worm, a random individual with skills, or a group of skilled and motivated attackers." This is where OWASP's Threat Risk Modeling workflow can really help when proactively paired with OWASP's Incident Response Guidelines. Modern and resilient businesses should conduct mock training exercises to educate and prepare the team. Business is about taking risks, and not all businesses survive. Some lack the number of customers they need to survive, others struggle to move enough product, and now, for many, the possibility that a business could be hacked and be unable to recover is a concern, whether you are a small business or sitting on the Board of Directors of a Fortune 50 organization. You can use insider threat examples, outsider risks, and third-party vendor risks; all are different, and decisions need to be made based on your tolerance threshold.

4. OWASP - Get Involved! It's free and it's helpful!

As the 2016 Verizon Data Breach Investigations Report shows, web applications remain a primary vector of successful breaches. I encourage everyone to get involved with the OWASP Foundation, where I spend a great deal of time. OWASP operates as a non-profit and is not affiliated with any technology company, which means it is in a unique position to provide impartial, practical information about AppSec to individuals, corporations, universities, government agencies and other organizations worldwide. Operating as a community of like-minded professionals, OWASP issues software tools and documentation on application security. All of its articles, methodologies and technologies are made available free of charge to the public. OWASP maintains roughly 250 local chapters in 110 countries and counts tens of thousands of individual members. The OWASP Foundation uses several licenses to distribute software, documentation, and other materials. I encourage everyone to review this OPEN resource and ADD to the knowledge tree.

I really enjoyed the 2016 Verizon DBIR for the data. The perspective in the report is based on a wide array of both customer engagements and data from nearly 70 partners. The average reader who uses a credit card at a hotel, casino, or retail store may feel uneasy about the risk of trusting others with their data. If your business is dealing with confidential data, you should be concerned and proactive about the risks you take. If you haven't already, take a look at the Defender's Perspective of this year's DBIR, written by Bob Rudis.

3 Web App Sec-ian Takeaways From the 2016 DBIR

This year's 2016 Verizon Data Breach Report was a great read. As I spend my days exploring web application security, the report provided a lot of great insight into the space that I often frequent. Lately, I have been researching out of band and second…

This year's 2016 Verizon Data Breach Report was a great read. As I spend my days exploring web application security, the report provided a lot of great insight into the space that I often frequent. Lately, I have been researching out-of-band and second-order vulnerabilities, as well as how Single Page Applications are affecting application security programs. The following three takeaways are my gut-reaction thoughts on the 2016 DBIR from a web app sec-ian perspective:

1. Assess Your Web Applications Today

Not tomorrow, not next week, today. I don't want to see talented geeks jump on board a hot startup and hear, "Oh, we don't have a security program." I look at this report and the huge increase in web application attacks wondering how ANYONE could still not be taking their web application security program seriously. Seriously? Let's get serious for a slim second. There has been a dramatic rise in web application attack patterns across all industry verticals, as covered in the research, though three industries (entertainment, finance, and information) have seen a larger jump. Web application attacks make up 50% or more of the total breaches, with a notable jump in the finance industry from 31% to 82% in 2016. However, it is suggested that this jump is due to sampling errors introduced by the overwhelming number of data points linked to Dridex.

2. Fun, Ideology, or Grudge drove most incidents. Money motivated most theft. Few spies were caught.

At first eye-numbing stare, it appears that the web application hacking motives of 2015 were all grudge-wielding, whistle-blowing people with no real secret-agent spying going on, though admittedly with a sizable criminal element. When this same data is filtered through 'confirmed data disclosure,' 95% of the resultant cases appear to be financially motivated, and it becomes much more apparent that data disclosure is all about the money.

3. "I value your input, I just don't trust it." (p. 30)

Unvalidated input continues to be one of the most fundamental software problems that lead to web application breaches. From the dawn of client/server software to the now-modern Single Page Application framework, we have been releasing applications with partially validated inputs, despite the fact that we have known about validating inputs for decades. Unfortunately, this fundamental cultural development flaw will likely not be leaving us anytime soon. Please, if you learn anything from the DBIR, make sure to validate input, folks! In terms of the top 10 threat varieties of 2015, SQL Injection (#7) and Remote File Inclusion (#9) are ever present and are a direct result of trusting input in an unsafe manner. The 'Recommended Controls' for Web App Attacks section in the DBIR states, "validate inputs, whether it is ensuring that the image upload functionality makes sure that it is actually an image and not a web shell, or that users can't pass commands to the database via the customer name field." This is not to say that output validation is not also highly important; rather, input is where most of the initial damage can occur, while output validation reduces the information an attacker can gather about the target. A short sketch of the two controls quoted above follows at the end of this post.

That's it for my take on the 2016 Verizon Data Breach Investigations Report. Be sure to check out the Defender's Perspective, written by Bob Rudis.
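As a companion to the 'Recommended Controls' quote above, here is a small, illustrative sketch of those two checks: verifying that an uploaded "image" really is an image by its magic bytes, and keeping the customer name out of the SQL text with a parameterized query. The table and column names are made up for the example.

```python
# Two of the controls quoted from the DBIR, sketched in Python.
import sqlite3

def is_real_image(path: str) -> bool:
    """Check the file's magic bytes instead of trusting its extension."""
    with open(path, "rb") as f:
        header = f.read(8)
    return (header.startswith(b"\x89PNG\r\n\x1a\n")        # PNG
            or header.startswith(b"\xff\xd8\xff")           # JPEG
            or header.startswith((b"GIF87a", b"GIF89a")))   # GIF

def save_customer(conn: sqlite3.Connection, customer_name: str) -> None:
    """Store the name as bound data, never as part of the SQL string."""
    # The driver binds the value separately from the SQL text, so a name like
    # "Robert'); DROP TABLE customers;--" is stored as data, not executed.
    conn.execute("INSERT INTO customers (name) VALUES (?)", (customer_name,))
    conn.commit()
```

Neither check replaces a full validation layer, but together they block the two concrete failure modes the report calls out: a web shell masquerading as an image, and commands riding into the database inside a form field.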

The 2016 Verizon Data Breach Investigations Report (DBIR) - A Web Application Security Perspective

The 2016 Verizon Data Breach Investigations Report (DBIR) is out and everyone is poring over the report to see what new insights we can take from last year's incidents and breaches. We have not only created this post to look at some primary application security…

The 2016 Verizon Data Breach Investigations Report (DBIR) is out and everyone is poring over the report to see what new insights we can take from last year's incidents and breaches. We have not only created this post to look at some primary application security takeaways, but we have also gathered guest posts from industry experts. Keep checking back this week to hear from people living at the front lines of web application security, as well as commentary from several of our customers who provided some quick takeaways that can help you and your team. Let's dive into four key takeaways from this year's DBIR, from an application security point of view.

1. Protect Your Web Applications

Web app attacks remain the most common breach pattern, underscoring what we already know - that web applications are a preferred vector for malicious attackers and that they are difficult to protect and secure. The report shows that 40% of the breaches analyzed for the 2016 DBIR were web app attacks.

2. Stop Auditing Like It's 1999

We've said this before and we'll say it again. Applications are evolving at a rapid pace and they are becoming more complex and more dynamic with each passing year. From web APIs to Single Page Applications, it's critical that your application security experts not only understand the technologies used in your applications, but also find tools that are able to handle these modern applications. As we pay our respects to the dearly beloved Prince, please stop testing like it's 1999. Update your application security testing techniques, sharpen your skills, and make sure your tools understand modern applications.

3. No Industry is Immune

No industry is exempt from web app attacks, but some are seeing more breaches than others. For the finance, entertainment, and information industries, web app attacks are the primary attack pattern in reported breaches. For the financial industry, web app attacks account for a whopping 82% of breaches. These industries, in particular, should be assessing and gearing up their web application security programs to ensure optimal investment and attention.

4. Validate Your Inputs

As an industry, we have been talking about unvalidated inputs forever. It feels like we are fighting an uphill battle. We strive to train our developers on secure coding, the importance of input validation, and how to prevent SQL Injection, XSS, buffer overflows, and other attacks that stem from unvalidated and unsanitized inputs. Unfortunately, too many application inputs continue to be vulnerable, and we are swimming against a steady stream of new applications written by developers who continue to repeat the same mistakes. A minimal validation sketch follows at the end of this post.

That's our take on the 2016 Verizon Data Breach Investigations Report. We would love to hear your thoughts in the comments! Please check back throughout the week to hear what some of our favorite web application security experts have to share about their key takeaways and reactions from this year's DBIR. For more perspective on this year's DBIR through an application security lens, check out the rest of the posts in this series:

3 Web App Sec-ian Takeaways From the 2016 DBIR
Social Attacks in Web App Hacking - Investigating Findings of the DBIR
2016 DBIR & Application Security: Let's Get Back to the Basics Folks

Be sure to check out The 2016 Data Breach Investigations Report Summary (DBIR) - The Defender's Perspective, by Bob Rudis (aka @hrbrmstr).
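To close, here is a minimal, illustrative sketch of the input validation discussed in takeaway 4, using an allow-list rather than trying to enumerate bad characters, with output escaping as a second layer. The field name and rules are assumptions for the example, not a complete validation framework.

```python
# Allow-list input validation plus output escaping, sketched in Python.
import html
import re

# Accept only characters we expect in a person's name (illustrative rule).
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z '\-]{0,99}$")

def validate_name(raw: str) -> str:
    """Return the value only if it matches the allow-list pattern."""
    if not NAME_RE.fullmatch(raw):
        raise ValueError("invalid customer name")
    return raw

def render_greeting(raw_name: str) -> str:
    # Escape on output as well, so the value stays inert in HTML even if the
    # validation rule drifts over time.
    return f"<p>Hello, {html.escape(validate_name(raw_name))}!</p>"
```

Validating at the boundary and escaping at the point of output are complementary controls; relying on either one alone is how the same mistakes keep repeating.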
