Rapid7 Blog

Application Security  

What's New in AppSpider Pro 7.0?

In the latest release of AppSpider Pro, version 7.0, you will find some great new features that improve the crawling, attack, and overall usability of the product. Below are a few of the key enhancements you will find in this release.

Chrome/WebKit Integration

With the introduction of the Chrome/WebKit browser, AppSpider Pro now supports both Chrome and Internet Explorer as default browsers. These integrated browsers facilitate AppSpider's crawling and attack functionality, so with the added support of the Chrome browser, AppSpider now has improved coverage for web applications that aren't fully compatible with Internet Explorer.

Validation Scan

Need a quick way to verify that a vulnerability has been remediated by your development team? AppSpider's new validation scan method allows users to target a scan against a previously scanned application and rescan only selected vulnerabilities rather than re-running a complete scan. Save time and get immediate visibility into remediation status.

Improved UI Updates

Looking for real-time, at-a-glance information about your scans? AppSpider Pro's main UI screen has been updated to give you visibility into scan status, the number of vulnerabilities found, the number of links crawled, the authentication used, and the attack policy that was used. All of your scan info in one place makes it easier than ever to monitor scan progress.

Confidence Level for Findings

Based on the experience and research of Rapid7's engineering teams, a confidence level for findings is now available in HTML and JSON reports to provide users with a visual indicator of how certain AppSpider is that a particular finding is valid.

New Attack Modules

The following attack modules have been added as part of this release:

ASP.NET Serialization: Checks for serialized binary objects. Serialized data can potentially be intercepted and read by malicious users. Furthermore, in some cases controls might use serialized data for internal processing, so malicious code may be processed on the web server.

Cross-site scripting (XSS), DOM-based, reflected via Ajax request: DOM-based XSS (sometimes called "type-0 XSS") is an XSS attack in which the attack payload executes as a result of modifying the DOM "environment" used by the original client-side script in the victim's browser, so that the client-side code runs in an unexpected manner. (A minimal illustration appears at the end of this post.)

HTTP Query Session Check: Checks parameter values that can expose an application to various security risks.

HTTP User-Agent Check: Checks whether user-agent sniffing is enabled.

Session Upgrade: Reports a risk factor for exposing or binding the user session between the anonymous and authenticated user states.

For additional details on these features, please review the AppSpider Pro 7.0 User Guide here (PDF).
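As a quick, hypothetical illustration of the DOM-based XSS pattern described in the module list above (this snippet is not from the release or the user guide), consider client-side code that copies attacker-controllable input, such as the URL fragment, into an HTML sink:

```typescript
// Hypothetical vulnerable pattern: the URL fragment flows straight into an HTML
// sink, so a crafted link like https://example.com/#<img src=x onerror=alert(1)>
// executes entirely in the browser; the payload never has to reach the server.
const message = decodeURIComponent(location.hash.slice(1));
document.getElementById("banner")!.innerHTML = message;   // DOM-based XSS sink

// Safer alternative: treat the value as text, not markup.
document.getElementById("banner")!.textContent = message;
```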

About User Enumeration

User enumeration is when a malicious actor can use brute force to either guess or confirm valid users in a system. User enumeration is often a web application vulnerability, though it can also be found in any system that requires user authentication. Two of the most common areas where user enumeration occurs are a site's login page and its 'Forgot Password' functionality. The malicious actor is looking for differences in the server's response based on the validity of submitted credentials.

The login form is a common location for this type of behavior. When the user enters an invalid username and password, the server returns a response saying that user 'rapid7' does not exist. A malicious actor would know that the problem is not with the password, but that this username does not exist in the system, as shown in Figure 1. On the other hand, if the user enters a valid username with an invalid password, and the server returns a different response indicating that the password is incorrect, the malicious actor can infer that the username is valid, as shown in Figure 2.

At this point, the malicious actor knows how the server will respond to 'known good' and 'known bad' input. The malicious actor can then perform a brute-force attack with common usernames, or may use census data of common last names, appending each letter of the alphabet to generate valid username lists. Once a list of validated usernames is created, the malicious actor can perform another round of brute-force testing, this time against the passwords, until access is finally gained.

An effective remediation is to have the server respond with a generic message that does not indicate which field is incorrect. When the response does not indicate whether the username or the password is incorrect, the malicious actor cannot infer whether usernames are valid. Figure 3 shows an example of a generic error response.

The application's Forgot Password page can also be vulnerable to this kind of attack. Normally, when a user forgets their password, they enter a username in the field and the system sends an email with instructions to reset their password. A vulnerable system will also reveal that the username does not exist, as shown in Figure 4. Again, the response from the server should be generic and simply tell the user that, if the username is valid, the system will send an instructional email to the address on record. Figure 5 shows an example of a message that a server could use in its response.

Sometimes, user enumeration is not as simple as a server responding with text on the screen. It can also be based on how long it takes a server to respond. A server may take one amount of time to respond for a valid username and a very different (usually longer) amount of time for an invalid username. For example, Outlook Web Access (OWA) often displays this type of behavior. Figure 6 shows this type of attack, using a Metasploit login module. In this example, the 'FAILED LOGIN' for the user 'RAPID7LAB\admin' took more than 30 seconds to respond and resulted in a redirect. However, the user 'RAPID7LAB\administrator' got the response 'FAILED LOGIN, BUT USERNAME IS VALID' in a fraction of a second. When the response includes 'BUT USERNAME IS VALID', the username does exist but the password was incorrect. Due to the explicit notification about the username, we know that the other response, 'FAILED LOGIN', is for a username that is not known to the system. How would you remediate this?
One way could be to have the application pad its responses with a random amount of time, throwing off the noticeable difference. This might require some additional coding in the application, or may not be possible in a proprietary application.

Alternatively, you could require two-factor authentication (2FA). While the application may still be vulnerable to user enumeration, the malicious actor would have more trouble reaching their end goal of obtaining valid sets of credentials. Even if a malicious actor can generate user lists and correctly guess credentials, the SMS token may become an unbeatable obstacle that forces the malicious actor to seek easier targets.

Another way to block user enumeration is with a Web Application Firewall (WAF). To perform user enumeration, the malicious actor needs to submit many different usernames; a legitimate user should never need to send hundreds or thousands of usernames. A good WAF will detect and block a single IP address making many of these requests. Some WAFs will drop these requests entirely; others will issue a negative response regardless of whether the request is valid.

We recommend testing any part of the web application where user accounts are checked by a server for validity and looking for different types of responses from the server. A different response can be as obvious as an error message or the amount of time a server takes to respond, or something subtler, like an extra line of code in a response or a different file being included. Adding 2FA or padding the response time can prevent these types of attacks, since any of the differences discussed above could tip off a malicious actor as to whether a username is valid.

Read about Rapid7's web application security testing solutions.
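To make the remediation concrete, here is a rough sketch (not from the original post) of a login handler that combines the generic error message with response-time padding. It uses Express for illustration; checkCredentials is a hypothetical stub, and the 500 ms floor and random jitter are arbitrary choices.

```typescript
import express from "express";
import { randomInt } from "crypto";

const app = express();
app.use(express.json());

// Hypothetical credential check against your user store (stubbed for the sketch).
async function checkCredentials(username: string, password: string): Promise<boolean> {
  return false; // replace with a real lookup
}

app.post("/login", async (req, res) => {
  const { username, password } = req.body ?? {};
  const start = Date.now();

  const valid = await checkCredentials(username, password);

  // Pad every response toward a common floor plus random jitter so timing
  // differences between valid and invalid usernames are harder to measure.
  const elapsed = Date.now() - start;
  const padMs = Math.max(0, 500 - elapsed) + randomInt(0, 250);
  await new Promise((resolve) => setTimeout(resolve, padMs));

  if (!valid) {
    // Same generic message whether the username or the password was wrong.
    return res.status(401).json({ error: "Invalid username or password." });
  }
  return res.json({ status: "ok" });
});

app.listen(3000);
```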

R7-2017-02: Hyundai Blue Link Potential Info Disclosure (FIXED)

Summary

Due to a reliance on cleartext communications and the use of a hard-coded decryption password, two outdated versions of the Hyundai Blue Link application software, 3.9.4 and 3.9.5, potentially expose sensitive information about registered users and their vehicles, including application usernames, passwords, and PINs, via a log transmission feature. This feature was introduced in version 3.9.4 on December 8, 2016, and removed by Hyundai on March 6, 2017 with the release of version 3.9.6.

Affected versions of the Hyundai Blue Link mobile application upload application logs to a static IP address over HTTP on port 8080. The log is encrypted using a symmetric key, "1986l12Ov09e", which is defined in the Blue Link application (specifically, C1951e.java) and cannot be modified by the user. Once decoded, the logs contain personal information, including the user's username, password, PIN, and historical GPS data about the vehicle's location. This information can be used to remotely locate, unlock, and start the associated vehicle.

This vulnerability was discovered by Will Hatzer and Arjun Kumar, and this advisory was prepared in accordance with Rapid7's disclosure policy.

Product Description

The Blue Link app is compatible with 2012 and newer Hyundai vehicles. Its functionality includes remote start, location services, unlocking and locking associated automobiles, and other features documented at the vendor's web site.

Credit

This vulnerability was discovered by independent researchers William Hatzer and Arjun Kumar.

Exploitation for R7-2017-02

The potential data exposure can be exploited one user at a time via passive listening on insecure WiFi, or by standard man-in-the-middle (MitM) attack methods used to trick a user into connecting to a WiFi network controlled by an attacker on the same network as the user. If this is achieved, an attacker would then watch for HTTP traffic directed at an HTTP site at 54.xx.yy.113:8080/LogManager/LogServlet, which includes the encrypted log file with a filename containing the user's email address.

It would be difficult to impossible to conduct this attack at scale, since an attacker would typically need to first subvert physically local networks, or gain a privileged position on the network path from the app user to the vendor's service instance.

Vendor Statement

Hyundai Motor America (HMA) was made aware of a vulnerability in the Hyundai Blue Link mobile application by researchers at Rapid7. Upon learning of this vulnerability, HMA launched an investigation to validate the research and took immediate steps to further secure the application. HMA is not aware of any customers being impacted by this potential vulnerability.

The privacy and security of our customers is of the utmost importance to HMA. HMA continuously seeks to improve its mobile application and system security. As a member of the Automotive Information Sharing and Analysis Center (Auto-ISAC), HMA values security information sharing and thanks Rapid7 for its report.

Remediation

On March 6, 2017, the vendor updated the Hyundai Blue Link app to version 3.9.6, which removes the LogManager log transmission feature. In addition, the TCP service at 54.xx.yy.113:8000 has been disabled. The mandatory update to version 3.9.6 is available in both the standard Android and Apple app stores.

Disclosure Timeline

Tue, Feb 02, 2017: Details disclosed to Rapid7 by the discoverer.
Sun, Feb 19, 2017: Details clarified with the discoverer by Rapid7.
Tue, Feb 21, 2017: Rapid7 attempted contact with the vendor.
Sun, Feb 26, 2017: Vendor updated to v3.9.5, changing LogManager IP and port.
Mon, Mar 02, 2017: Vendor provided a case number, Consumer Affairs Case #10023339.
Mon, Mar 06, 2017: Vendor responded, details discussed.
Mon, Mar 06, 2017: Version 3.9.6 released to the Google Play store.
Wed, Mar 08, 2017: Version 3.9.6 released to the Apple App Store.
Wed, Mar 08, 2017: Details disclosed to CERT/CC by Rapid7, VU#152264 assigned.
Wed, Apr 12, 2017: Details disclosed to ICS-CERT by Rapid7, ICS-VU-805812 assigned.
Fri, Apr 21, 2017: Details validated with ICS-CERT and HMA, CVE-2017-6052 and CVE-2017-6054 assigned.
Tue, Apr 25, 2017: Public disclosure of R7-2017-02 by Rapid7.
Tue, Apr 25, 2017: ICSA-17-115-03 published by ICS-CERT.
Fri, Apr 28, 2017: Redacted the now-disabled IP address for the LogManager IP address.

Apache Struts Vulnerability (CVE-2017-5638) Protection: Scanning with Nexpose

On March 9th, 2017 we highlighted the availability of a vulnerability check in Nexpose for CVE-2017-5638 (see the full blog post describing the Apache Struts vulnerability here). This check is performed against the root URI of any HTTP/S endpoints discovered during a scan.

On March 10th, 2017 we added an additional check that works in conjunction with Nexpose's web spider functionality. This check is performed against any URIs discovered with the suffix ".action" (the default configuration for Apache Struts apps).

It may be necessary to configure your scan template to direct Nexpose to specific paths on web servers if they cannot be discovered during the default spidering process. If your app's URI is not linked to from any of the discovered pages, you will need to configure these paths. Follow the steps below to configure your scan template.

Let's say you have two Apache Struts apps in the following locations:

Example App URL 1: http://example.com/org/apps/myapp.action
Example App URL 2: http://example.com/other/org/different.action

1. In Nexpose's web UI, select the scan template that you wish to use (Administration → Templates → manage).
2. Go to the Web Spidering section of the template (WEB SPIDERING → PATHS) and add all the paths you wish Nexpose to try accessing to the "Bootstrap paths" section. PLEASE NOTE: Each path must end with a trailing slash, and paths are comma separated (e.g. /org/apps/,/other/org/).
3. Once you have configured the paths, save the changes to the template.

Not a Nexpose customer and want to scan your network for the Apache Struts vulnerability? Download a free trial of Nexpose here.
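If you want a quick way to gather candidate bootstrap paths before editing the template, a small script along these lines (an illustrative sketch, not a Rapid7 tool) can pull ".action" links from a page and print their parent directories in the comma-separated, trailing-slash format the template expects:

```typescript
// Sketch: list parent directories of ".action" links on a page, formatted for
// Nexpose's "Bootstrap paths" field (comma separated, trailing slashes).
// Assumes Node 18+ for the built-in fetch API.
const baseUrl = process.argv[2] ?? "http://example.com/";

async function main(): Promise<void> {
  const html = await (await fetch(baseUrl)).text();
  const paths = new Set<string>();

  // Very rough href extraction; a real crawler would follow links recursively.
  for (const match of html.matchAll(/href=["']([^"']+\.action)["']/gi)) {
    const url = new URL(match[1], baseUrl);
    // Keep the directory portion, e.g. /org/apps/myapp.action -> /org/apps/
    paths.add(url.pathname.replace(/[^/]+$/, ""));
  }

  console.log([...paths].join(","));   // e.g. /org/apps/,/other/org/
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Run against the example site above, it would print something like /org/apps/,/other/org/, ready to paste into the Bootstrap paths field.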

Bug, Not Alert: How Application Security Must Use Different Words

"Words matter” is something that comes out of my mouth nearly each day. At work it matters how we communicate with each other and the words we use might be the difference between collaboration or confrontation. The same happens with the security world, especially…

"Words matter” is something that comes out of my mouth nearly each day. At work it matters how we communicate with each other and the words we use might be the difference between collaboration or confrontation. The same happens with the security world, especially when we communicate with folks in IT or within the devops methodology. Last week this became highly apparent sitting with folks attending OWASP's annual AppSec USA, where they discussed the difference between a fix or fail. The problem, in our world, often stems from the fact that security is oftentimes a scary concept, conjuring up thoughts of clowns lurking in the woods on the walk to school (my therapist told me to express my fears outwardly). Security means something is at risk and that if it doesn't get fixed immediately the world may come to a frantic halt. The truth, however, is that not all security threats are created equal and in most cases the need to prioritize fixes can eliminate the panic. The challenge is actually how security threats or vulnerabilities are presented to those outside of security. Imagine what a "security vulnerability report" does to the devops folks working the app your business uses to bill customers? For years we've focused on finding all the vulnerabilities, prioritizing them based on business and threat context, and then ultimately throwing them over the wall to IT or devops. But security has been learning how to more effectively create a remediation workflow. In some cases this means true management of the workflow, analytics that tells you if a vuln has been patched, or dashboards fed from live data so decisions are made at the point of impact. All that stuff is great, but what if I said you also must have a reframing of what the word "security vulnerability" actually means? Security Vuln or JIRA Ticket? Back to my time with the OWASP crew in DC; and I'll be fully transparent that this idea came to me as I spoke with Rapid7 application security customers (check out AppSpider for more info). We talked a long time about the importance of collecting all the right application data for scanning and then prioritizing the vulns found. But the part of the conversation that really turned around my thinking was when we got to remediation. The functionality that these customers liked the most was the ability to not throw over a 2,300-page stale report (true story!) but instead translate found vulnerabilities directly into the devops ticketing system. In this case it was a simple measure of taking what was found via application security testing and then placing that, with context, into JIRA. All of a sudden the devops team had a list of high-priority bug fixes, which they valued and would get to quickly, rather than a big security report that seemed to be more blame-game rather than helpful. Words matter in security, as does intent. It's an important thing to consider as you build our your security program and discover the points of contact with IT and devops.

AppSpider application security scanning solution deepens support for Single Page Applications - ReactJS

Today, Rapid7 is pleased to announce an AppSpider (application security scanning) update that includes enhanced support for JavaScript Single Page Applications (SPAs) built with ReactJS. This release is significant because SPAs are proliferating rapidly and increasingly creating challenges for security teams. Some of the key challenges with securing SPAs are:

Diverse frameworks - The diversity and number of JavaScript frameworks contribute to the complexity of achieving adequate scan coverage across all modern Single Page Applications.

Custom events - These frameworks implement non-standard or custom event binding mechanisms; in the case of ReactJS, the framework creates a so-called "Virtual DOM" which provides an internal representation of events outside of the real browser DOM.

It is important to discover the type and location of every actionable component on the page. Tracking the event bindings on a real DOM is relatively straightforward: shim EventTarget.prototype.addEventListener and determine the event type and which node it is bound to. For example:
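A minimal sketch of such a shim (not from the original post; recordedBindings is a hypothetical store the crawler would query later):

```typescript
// Record every event binding made through the standard DOM API so a crawler
// can later ask which nodes are actionable and for which event types.
const recordedBindings: Array<{ target: EventTarget; type: string }> = [];

const originalAddEventListener = EventTarget.prototype.addEventListener;

EventTarget.prototype.addEventListener = function (
  type: string,
  listener: EventListenerOrEventListenerObject | null,
  options?: boolean | AddEventListenerOptions
): void {
  // Note the event type and the node it is bound to, then delegate to the
  // original implementation so the page keeps working normally.
  recordedBindings.push({ target: this, type });
  originalAddEventListener.call(this, type, listener, options);
};
```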
However, in cases where a framework manages its own event delegation (such as in the ReactJS Virtual DOM), it becomes more efficient to hook into the framework, effectively providing a query language into the framework for its events (instead of listening for them). According to the ReactJS documentation on event delegation: "React doesn't actually attach event handlers to the nodes themselves. When React starts up, it starts listening for all events at the top level using a single event listener. When a component is mounted or unmounted, the event handlers are simply added or removed from an internal mapping. When an event occurs, React knows how to dispatch it using this mapping."

AppSpider has now created a generalized, lightweight framework hooking structure that can be used to effectively crawl, discover, and scan frameworks that do things 'their own way.' Look for an upcoming announcement on how you can incorporate and contribute your own custom framework scanning hooks with AppSpider.

What's New?

So what is AppSpider doing with ReactJS now? AppSpider is leveraging Facebook's open source developer tools (react-devtools), wrapped in a generalized framework hook, so that ReactJS applications are now crawled exhaustively by AppSpider. Additionally, 'do it their own way' event binding systems (such as the ReactJS Virtual DOM) are now taken into account and exercised. Frameworks such as AngularJS, Backbone, jQuery, Knockout, etc. are still supported right out of the box, without the need for tuning. Only where needed are we adding specific support for frameworks with custom techniques.

Why is this important?

Web application security scanners struggle to understand these more complex types of systems that don't rely on fat clients and slow processes. Scanners were built around standard event names, relying on these ever-present fields to allow them to interact with the web application. Without these fields, a traditional scanner no longer has the building blocks necessary to correctly interact with the web applications it is scanning. Additionally, scanners have used the ever-present DOM structure to better understand the application and assist with crawling. This becomes difficult, if not impossible, for a traditional scanner when it has to deal with applications that process information on the server side instead of the client side. If this creates such an issue, why are people flocking to these frameworks? There are several reasons:

Two-way data bindings, which allow a better back and forth between client-side and server-side interactions

Processing information on the server side, which increases performance and gives a better user experience

Flexibility to break away from the fat client-side frameworks

These capabilities can make a dramatic difference to developers and end users, but they also introduce unique issues for security teams. Security teams and their tools are used to the standard event names like OnClick or OnSubmit. These events drive how we interact with a web application, allowing our standard tools to crawl through the application and interact with it. By using these standard events, we have been able to automate the manual tasks of making the application think we interacted with it. This becomes much more complicated when we introduce custom event names. How do you automate interacting with something that changes from application to application, or, even worse, whenever you refresh the same application? AppSpider answers that question by allowing you to connect directly into the framework and have the framework tell you what those custom events are before it even begins to crawl and attack.

Security experts have relied upon the DOM to know what is needed to test an application, and they monitor this interaction to understand potential weaknesses. Server-side processing complicates this, as all processing is done on the server side, away from the eyes and tools of the security expert, displaying only the end results. With AppSpider, you can now handle applications that utilize server-side processes, because we are not dependent on what is shown to us; instead, we already know what is there.

Currently, the only way for pen testers to conduct web application tests on applications using ReactJS and other modern frameworks is to manually attack them, working their way one by one through each option. This is a time-consuming and tedious task. Pen testers lack the tools to quickly and dynamically scan a web application using these SPA frameworks to identify potential attack points and narrow down where they would like to do further manual testing. AppSpider now allows them to quickly and efficiently scan these applications, saving them time and allowing them to focus efforts where they will be the most effective.

How can you tell if your scanner supports these custom event names? Answering this question can be difficult, as you have to know your application to truly understand what is being missed. You can typically see it quickly when you start to analyze the results of your scans: you will see areas of your application completely missed, or parameters that don't show up in your logs as being tested. Know your weaknesses. Can your scanner effectively scan this basic ReactJS application and find all of the events? http://webscantest.com/react/ Test it and find out! Is your scanner able to see past the DOM? AppSpider can.

Honing Your Application Security Chops on DevSecOps

Integrating Application Security with Rapid Delivery

Any development shop worth its salt has been honing its chops on DevOps tools and technologies lately, either sharpening an already practiced skill set or brushing up on new tips, tricks, and best practices. In this blog, we'll examine how the rise of DevOps and DevSecOps has helped to speed application development while simultaneously enabling teams to embed application security earlier into the software development lifecycle, in automatic ways that don't delay development timeframes or require major time investments from developers and QA teams.

What is DevOps?

DevOps is a set of methodologies (people, process, and tools) that enable teams to ship better code, faster. DevOps enables cross-team collaboration designed to support the automation of software delivery and decrease the cost of deployment. The DevOps movement has established a culture of collaboration and an agile relationship that unites the Development, Quality Engineering, and Operations teams with a set of processes that fosters high levels of communication and collaboration. Collaboration between these three groups is critical because of the inherent conflict between development organizations being pressured to ship new features faster and operations groups being encouraged to slow things down to be sure that performance and security are up to snuff.

DevSecOps and Application Security

Getting new code out to production faster is a great goal that often drives new business; however, in today's world, that goal needs to be balanced with addressing security. DevSecOps is really an extension of the DevOps concept. According to DevSecOps.org, it "builds on the mindset that 'everyone is responsible for security' with the goal of safely distributing security decisions at speed and scale to those who hold the highest level of context without sacrificing the safety required."

Web application attacks continue to be the most common breach pattern, confirming what we have known for some time: that web applications are a preferred vector for malicious actors and that they are difficult to protect and secure. According to the 2016 Verizon Data Breach Report, 40% of the breaches analyzed for the 2016 DBIR were web app attacks. Today's web and mobile applications pose risks to organizational security that must be addressed. There are several well-known classes of vulnerabilities that can be present in applications; SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, and Remote Code Execution are some of the most common.

Why are Applications a Primary Target?

Applications have become a primary target for attackers for the following reasons:

1. They are open for business and easily accessible: Companies rely on firewalls and network segmentation to protect critical assets, but applications are exposed to the internet in order to be used by customers. Therefore, they are easy to reach when compared to other critical infrastructure, and attacker traffic is often masked as legitimate, desired traffic.

2. They hold the keys to the data kingdom: Web applications frequently communicate with databases, file shares, and other critical information. Because they sit so close to this data, a compromised application makes it easier to reach, and that data can often be some of the most valuable: credit card numbers, PII, SSNs, and proprietary information can be just a few steps away from the application.

3. Penetrating applications is relatively easy.
There are tools available to attackers that allow them to point and shoot at a web application to discover exploitable vulnerabilities.

Embed Application Security Early in the SDLC - A Strategic Approach

So, we know that securing applications is critical. We also know that most application vulnerabilities are found in the source code. It stands to reason, then, that application vulnerabilities are really just application defects and should be treated as such. Dynamic Application Security Testing (DAST) is one of the primary methods for scanning web applications in their running state to find vulnerabilities, which are usually security defects that require remediation in the source code. These DAST scans help developers identify real, exploitable risks and improve security.

Typically, speed and punctiliousness don't go hand in hand, so why would you go about mixing two things that might be thought of as having a natural polarity? There are several reasons that implementing a web application scan early in the SDLC as part of DevOps can be beneficial, and there are ways to do it so that it doesn't take additional time for developers or testers: it can be baked in as part of your SDLC and your Continuous Integration process.

When dynamic application security testing first became popular, security experts often conducted the tests at the end of the software development lifecycle. That only served to frustrate developers, increase costs, and delay timelines. We have known for some time now that the best solution is to drive application security testing early into the lifecycle, along with secure coding training. Microsoft was one of the early pioneers of this with the introduction of its Security Development Lifecycle (SDL), one of the first well-known programs to explicitly state that security must be baked into the software development lifecycle early and at every stage of development, not bolted on at the end.

The benefits of embedding application security earlier into the SDLC are well understood. If you treat security vulnerabilities like any other software defect, you save money and time by finding them earlier, when developers and testers are working on the release.

Reduced risk exposure - The faster you find and fix vulnerabilities in your web applications, the less exposure to risk you have. If you can find a vulnerability before it hits production, you've prevented a potential disaster, and the faster you remove vulnerabilities from production, the less exposure you face.

Reduced remediation effort - If a vulnerability is found earlier in the SDLC, then it's going to be easier and less expensive to fix, for several reasons. The code is fresh; the developer is familiar with it and can jump in and fix it without having to dig up old skeletons in the code. There is less context switching (context switching is bad) when we find security defects during the development process. Additionally, if a vulnerability is found early, then it is much more likely that there won't be other code relying on it, so it can be changed more safely. Finally, new code is less likely to be burdened with tech debt and is therefore easier to fix.

Reduced schedule delays - Security experts are well aware that development teams don't want to be slowed down. By embedding application security earlier in the SDLC, we can avoid the time delays that come with testing during later stages.

These factors should help explain why incorporating application security into a DevOps mentality makes sense.
So how can a security-focused IT staff member help the developers get excited about this?

Adopting a DevSecOps Mindset for Application Security - 8 Best Practices

Build a Partnership

Partnership and collaboration are what DevOps is all about. Sit down with your development team and explain that you aren't trying to slow them down at all. You simply want to help them secure the awesome stuff they are building. Help them learn by explaining the risk. The ubiquitous "ALERT(XSS)" doesn't do a good enough job of pointing out the significance of a cross-site scripting vulnerability. Talk your developers through the real-world impact and risks.

Conduct Secure Code Training

Schedule some "Lunch-n-Learns" or similar sessions to explain how these vulnerabilities can emerge in code. Discuss parameterization and data sanitization so developers are familiar with these topics. The more aware of secure coding practices the developers are, the less likely they are to introduce vulnerabilities into the application's code base.

Know the Applications

It helps when the security expert understands the code base. Try to work with your developers to learn the code base so you can help highlight serious vulnerabilities and clearly capture risk levels.

Security Test Early, Fail Fast

Failure isn't typically a good word, but failing fast and early is an agile development mindset that is applicable to application security. If you test early and often, you can find and fix vulnerabilities faster and more easily. The earlier new code is tested for security vulnerabilities, the easier it is to fix.

Security Test Frequently

Test your code when new changes are introduced so that critical risks don't make it past staging. Fixing issues is easier when they are fresh. Scan new code in staging before it hits production to reduce risk and speed remediation of issues.

Integrate Security with Existing Tools

Find opportunities to embed dynamic security testing early in your software development lifecycle by integrating with your existing tools. Seamlessly integrating security into the development lifecycle will make it easier to adopt. Here are some of the most effective ways of integrating security testing into the SDLC:

Continuous Integration - Many organizations achieve early SDLC security testing by integrating their DAST solutions into their Continuous Integration solutions (Hudson, Jenkins, etc.) to ensure security testing is conducted easily and automatically before the application goes into production. This requires an application security scanner that works well in "point and shoot" mode and includes open APIs for running scans. Ask your vendor how their scanner would fit into your CI environment. (A rough sketch of such a CI gate follows at the end of this section.)

Issue Tracking - Another effective strategy for building application security early into the SDLC is ensuring your application security solution automatically sends security defects to the issue tracking solution, like Jira, that is used by your development and QA teams.

Test Automation - Many QA teams are having success leveraging their pre-built automated functional tests to help drive security testing and make security tests even more effective. This can be done through browser automation solutions like Selenium.

Rapid7's AppSpider is built with this in mind and includes a broad range of integrations to suit your team's needs. Learn more about how AppSpider helps drive application security earlier into the SDLC in this video.
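To illustrate the Continuous Integration idea mentioned above, here is a rough sketch of a CI gate that starts a scan, waits for it to finish, and fails the build on high-severity findings. The endpoints and response shapes are placeholders, not AppSpider's (or any specific vendor's) actual API; consult your scanner's documentation for the real calls.

```typescript
// Sketch of a CI gate: kick off a DAST scan and fail the build on high-severity
// findings. Endpoints and response shapes below are placeholders.
const scannerUrl = process.env.SCANNER_URL ?? "https://scanner.example.com";
const apiKey = process.env.SCANNER_API_KEY ?? "";

async function api(path: string, init?: RequestInit): Promise<any> {
  const res = await fetch(`${scannerUrl}${path}`, {
    ...init,
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
  });
  if (!res.ok) throw new Error(`${path} -> ${res.status}`);
  return res.json();
}

async function main(): Promise<void> {
  // 1. Start a scan against the freshly deployed staging build.
  const { scanId } = await api("/api/scans", {
    method: "POST",
    body: JSON.stringify({ target: process.env.STAGING_URL }),
  });

  // 2. Poll until the scan finishes.
  let status = "running";
  while (status === "running") {
    await new Promise((resolve) => setTimeout(resolve, 30_000));
    ({ status } = await api(`/api/scans/${scanId}`));
  }

  // 3. Break the build if anything high severity was found.
  const findings: Array<{ severity: string }> = await api(`/api/scans/${scanId}/findings`);
  const high = findings.filter((f) => f.severity === "High");
  if (high.length > 0) {
    console.error(`${high.length} high-severity finding(s); failing the build.`);
    process.exit(1);
  }
  console.log("DAST gate passed.");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```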
AppSpider is a DAST solution designed to help application security folks test applications both as part of DevOps and as part of a scheduled scanning program. Thanks for reading and have a great day.

Validate Web Application Security Vulnerabilities with AppSpider's New Chrome Plug-In

AppSpider's Interactive Reports Go Chrome

We are thrilled to announce a significant reporting enhancement to AppSpider, Rapid7's dynamic application security scanner. AppSpider now has a Chrome plug-in that enables users to open any report in Chrome and use the real-time vulnerability validation feature without the need for Java, and without having to zip up the folder and send it off. This makes reporting and troubleshooting even easier!

Enabling Security - Developer Collaboration to Speed Remediation

AppSpider is a dynamic application security scanning solution that finds vulnerabilities from the outside in, just as a hacker would. Our customers tell us that AppSpider not only makes it easier to collaborate with developers, but also speeds remediation efforts. Unlike other application security scanning solutions, we don't just report security weaknesses for security teams to 'send' to developers. Our solution includes an interactive component that enables developers to quickly and easily review a vulnerability and replay the attack in real time. This enables them to see how the vulnerability works, all from their desktop, without having direct access to AppSpider itself - and without learning how to become a hacker.

Related Content [VIDEO]: Why it's important to drive application security earlier in the software development lifecycle.

Developers can then use AppSpider's interactive reports to confirm that their fixes have resolved the vulnerability and are actually protecting the application from the weaknesses found. Developers don't need to have AppSpider installed in their environment to leverage this functionality; with just the report and a connection to the application they are testing, they're good to go.

Related Content [VIDEO]: Watch AppSpider interactive reports in action.

AppSpider Interactive Reports - How it Works

Pretty cool, huh? Well, here's how and why it works... For those who work in application security, we know all too well that many, if not most, of the application security vulnerabilities we deal with exist in the source code of the custom applications that we are responsible for - often in the form of unvalidated inputs. As security professionals, we aren't able to resolve these vulnerabilities (or defects) with a simple patch. We need to work with the developers to resolve security defects, implement coding best practices, and then re-release the new code into production. At Rapid7, we have understood this for a long time, and we have been helping security teams and development teams collaborate more effectively through AppSpider.

There are many reasons why effective DevSecOps collaboration is difficult. Developers aren't security professionals, and reporting security defects to them is easier said than done. We have the logistical issues of emailing around spreadsheets or PDFs, and then we have the communication issues related to us speaking security and them speaking developer. Not to mention the pain of having to go back and forth re-testing their "fixes" to see if they are still vulnerable or not, 'cause let's face it, most developers wouldn't know a SQL Injection from a Cross Site Request Forgery (CSRF), let alone know how to actually attack their code to see if it's vulnerable to these attack types. This is an area in which we have always shined; however, until today, AppSpider required the security professional and the developer to make use of a Java applet to accomplish this within our reports.
Now that Chrome and Firefox have disabled Java support, some teams weren't able to leverage this awesome functionality. Are you looking to upgrade your dynamic application security scanner? Want to see AppSpider in action? Check out this on-demand demo of our web application security solution here!

RESTful Web Services: Security Testing Made Easy (Finally)

AppSpider's got even more Swagger now! As you may remember, we first launched improved RESTful web services security testing last year. Since that time, you have been able to test REST APIs that have a Swagger definition file automatically, without capturing proxy traffic. Now, we have expanded upon that functionality so that AppSpider can automatically discover Swagger definition files as part of the application crawl phase. You no longer have to import the Swagger definition file, delivering an even easier and more automatic approach for security testing RESTful web services (APIs, microservices, and web APIs). This is a huge timesaver and another evolution in AppSpider's long history of being better at handling modern applications than other application security scanning solutions.

Challenges with Security Testing RESTful APIs

When it comes to RESTful web services, most application scanning solutions have been stuck in the traditional web application dark ages. As APIs have proliferated, security teams have been forced to manually crawl each API call, relying on what little documentation (if any) is available and on knowledge of the application. With a manual process like that, the best we can hope for is not to miss any path or verb (GET, PUT, POST, DELETE) within the API. In this scenario, you also have to figure out how to stay current with a manually documented API. The introduction of documentation formats such as Swagger, API Blueprint, and RAML helped, but testing was still a manual process fraught with errors.

RESTful Web Services: Security Testing Made Easy

Enter Rapid7. At the end of 2015, we released a revolutionary capability for testing your REST APIs with the introduction of Swagger-based DAST scanning. This ability for AppSpider to learn how to test an API by consuming a Swagger definition (.json) file revolutionized the way DAST solutions handle API security testing. Doing so allowed our customers, for the first time, to easily scan their APIs without a lot of manual work. Now, we are taking it up another notch by making REST API security testing even easier.

What's New?

This is no trivial task, as it's not just parsing data out. When our engineers started this task, the first thing they thought about was how customers would use this feature. We quickly realized that, just like everything else in application security, when we start scanning new technologies in the web application ecosphere, we encounter the same challenges we did when learning to effectively scan traditional web applications. So, here are three of the latest enhancements we have made to speed REST security testing.

Automated Discovery of Swagger Definitions - Instead of feeding your Swagger definition file into AppSpider, you can simply point AppSpider to the URL that contains your Swagger definition, and AppSpider will automatically ingest it and begin to take action.

Parameter Identification and Testing with Expected Results - Application security testing solutions always have the challenge of knowing what the parameters are and what data they are expecting. Web applications can have many different parameters, some of which may be unique to just that API. We knew that if this was going to be effective, we needed to be able to account for these unique types of parameters. This led us to expand our capability so that you can give AppSpider guidance on what these parameters mean to your application. Your guidance allows AppSpider to improve the comprehensiveness of the testing.
AppSpider remembers your guidance and uses it in subsequent tests. Quick tip: regardless of which application security testing solution or experts you use, be sure that your scanner or testers are using expected results (a date for 'date', a name for 'last name', and a valid credit card number for 'ccn'). Without expected results, the test is largely ineffective.

Scan Restrictions - Just like any other area of a web application, APIs have sensitive portions that you may not want to scan; a good example is an HTTP verb like DELETE. Many teams have effectively documented ALL of their REST APIs. This is great and is really where you should be, but we need to be able to avoid testing certain sections. We are already very good at customizing your web application scanning to make it the best it can be. We have just extended this capability into the handling of APIs. Now you can leverage AppSpider's scan restrictions capability and exclude any parameter or HTTP verb you do not want to use.

By leveraging AppSpider's automated testing of RESTful web services, which includes both parameter training and scan restrictions, you really have an unparalleled opportunity to test the security of your REST APIs quickly and frequently. We know you thought this was out of reach, but it's not! So keep this in mind next time you are having a discussion on how to efficiently scan and understand the security weaknesses in your APIs. If you're stuck in a manual process, it might be time to take a look at how to automate these processes using something like Swagger. (Note: Swagger has been renamed the OpenAPI Specification.) If you are already well automated, then we can give you the answer you've always wanted: we can automate your API scanning like never before.

You may also be interested in:
AppSpider Release Notes
Blog: AppSpider's Got Swagger: The first end-to-end security testing for REST APIs
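To see what automated Swagger discovery gives a scanner to work with, here is a minimal sketch (assuming a Swagger 2.0 definition served at /swagger.json; the host is a placeholder). Skipping DELETE mirrors the scan-restriction idea above.

```typescript
// Sketch: fetch a Swagger 2.0 definition and list the operations a scanner
// would enumerate, skipping DELETE to mirror a scan restriction.
const base = process.argv[2] ?? "https://api.example.com";

interface SwaggerSpec {
  basePath?: string;
  paths?: Record<string, Record<string, unknown>>;
}

async function main(): Promise<void> {
  const spec = (await (await fetch(`${base}/swagger.json`)).json()) as SwaggerSpec;
  const excluded = new Set(["delete"]);                // scan restriction

  for (const [path, operations] of Object.entries(spec.paths ?? {})) {
    for (const verb of Object.keys(operations)) {
      if (excluded.has(verb.toLowerCase())) continue;  // skip restricted verbs
      console.log(`${verb.toUpperCase()} ${spec.basePath ?? ""}${path}`);
    }
  }
}

main().catch(console.error);
```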

Lessons Learned in Web Application Security from the 2016 DBIR

We spent last week hearing from experts around the globe discussing what web application security insights we have gotten from Verizon's 2016 Data Breach Investigations Report. Thank you, Verizon, and all of your partners, for giving us a lot to think about! We also polled our robust Rapid7 Community, asking them what they have learned from the 2016 DBIR. We wanted to share some of their comments as well.

Quick Insights from the Rapid7 Community

"I find that the Verizon Data Breach Investigation Report is a good indication of the current environment when it comes to the threat climate - I use it to prioritize what areas and scenarios I spend the most time focusing resources upon. For my environment, the continued shrinking of time between vulnerability disclosure and exploit is very important. For offices like mine with a small staff, identifying and applying patches in an ever more strategic manner is key. I think vendors who successfully market intelligent heterogeneous automated patching systems will start to see big gains in sales. And those that can tie it to scanning/compliance/reporting/attack suites are going to be even better positioned in the market." - Scott Meyer, Sr. Systems Engineer at United States Coast Guard

"The internet is evolving, and greater complexity creates greater risk by introducing new potential attack vectors. Attackers aren't always after data when targeting a web application. Frequently sites are re-purposed to host malware or as a platform for a phishing campaign. Website defacements are still prevalent, accounting for roughly half of the reported incidents." - Steven Maske, Sr. Security Engineer

"Train, train, and retrain your users. Use proper coding. Really, we still fall victim to SQLi? Two-factor authentication is still king. Limit download to x to prevent complete data exfiltration." - Jack Voth, Sr. Director of Information Technology at Algenol Biotech

Lessons Learned from the 2016 Verizon Data Breach Report

Each lesson learned from the DBIR is paired with strategies to implement:

1. Web application attacks are a primary vector.
• Start security testing your applications today.

2. No industry is immune, but some are more affected than others.
• Focus on the attack patterns that your industry is experiencing.
• Know your enemy's motivation.

3. Unvalidated inputs continue to plague our web applications.
• Validate your inputs.
• Train and retrain your developers.
• Keep in mind that software security issues are software defects.
• Conduct regular dynamic application security testing (DAST) assessments to find unvalidated inputs.

4. Web applications are evolving, and so should your application security program.
• Make sure your skills and tools are up to snuff with the latest dynamic and complex applications.
• Ask your vendors if their tools handle dynamic clients, RESTful APIs, and Single Page Applications. Learn why this is important and what questions you need to ask vendors in this quick video.

5. Different industries have different enemies.
• Know who and what you are defending against. Grudge or money?

6. There are so many free and fabulous resources. Use them!
• Get involved with OWASP today!

How Rapid7 Can Help

Rapid7's AppSpider, a Dynamic Application Security Testing (DAST) solution, finds real-world vulnerabilities in your applications from the outside in, just as an attacker would. AppSpider goes beyond basic testing by enabling you to build a truly scalable web application security program. You can watch an on-demand demo of AppSpider here if you are interested in learning more.
Deeper application coverage

The AppSpider development team keeps up with evolving web application technologies so that you don't have to. From AJAX and REST APIs to Single Page Applications, we're committed to making sure that AppSpider assesses as much of your applications as possible, so that you can rely on AppSpider to find unvalidated inputs and a host of other vulnerabilities in your modern web applications. View our quick video to learn how to achieve deeper web application coverage with your web app scanner.

Breadth of web app attack types

From unvalidated inputs to information disclosure, with more than 50 different attack types, we've got you covered. AppSpider goes way beyond the OWASP Top 10 attack types, including SQL Injection and Cross-Site Scripting (XSS); we test for every custom attack pattern that can be tested by software. This leaves your team more time and budget to test the attack types that require human-like business logic testing.

Application security program scalability

AppSpider is designed to help you scale your application security testing program so that you can conduct regular testing across hundreds or thousands of applications throughout the software development lifecycle.

Dynamic Application Security Testing (DAST) earlier in the SDLC

AppSpider comes with a host of integrations that enable you to drive application security earlier into the SDLC through Continuous Integration (like Jenkins), issue tracking (like Jira), and browser integration testing (like Selenium). Our customers are successfully collaborating with their developers and building dynamic application security testing earlier into the SDLC.

You may also be interested in these blog posts that offer perspective on the 2016 Verizon DBIR:
Social Attacks in Web App Hacking - Investigating Findings of the DBIR
3 Web App Sec-ian Takeaways From the 2016 DBIR
2016 DBIR & Application Security: Let's Get Back to the Basics Folks
The 2016 Verizon Data Breach Investigations Report (DBIR) - A Web Application Security Perspective

Social Attacks in Web App Hacking - Investigating Findings of the DBIR

This is a guest post from Shay Chen, an information security researcher, analyst, tool author, and speaker. The guy behind TECAPI, WAVSEP, and WAFEP benchmarks.

Are social attacks that much easier to use, or is it the technology gap of exploitation engines that makes social attacks more appealing?

While reading through the latest Verizon Data Breach Investigations Report, I naturally took note of the Web App Hacking section, and noticed the diversity of attacks presented under that category. One of the most notable elements was how prominent the use of stolen credentials and social vectors in general turned out to be, in comparison to "traditional" web attacks. Even SQL Injection (SQLi), probably the most widely known (by humans) and supported attack vector (by tools), is far behind, and numerous application-level attack vectors are not even represented in the charts.

Although it's obvious that in 2016 there are many additional attack vectors that can have a dire impact, attacks tied to the social element are still much more prominent, and the "traditional" web attacks being used all seem to be attacks supported out of the box by the various scan engines out there.

It might be interesting to investigate a theory around the subject: are attackers limited to attacks supported by commonly available tools? Are they further limited by the engines not catching up with recent technology complexity?

With the recent advancements and changes in web technologies (single page applications, applications referencing multiple domains, exotic and complicated input vectors, and scan barriers such as anti-CSRF mechanisms and CAPTCHA variations), even enterprise-scale scanners have a hard time scanning modern applications in a point-and-shoot scenario, and the typical single page application may require scan policy optimization to get it to work properly, let alone get the most out of the scan.

Running phishing campaigns still requires a level of investment and effort from the attacker, at least as much as the configuration and use of capable, automated exploitation tools. Attackers appear to be choosing the former, and that's a signal that presently there is a better ROI for these types of attacks.

If the exploitation engines that attackers are using face the same challenges as vulnerability scanner vendors (catching up with technology), then perhaps the technology complexity for automated exploitation engines is the real barrier that makes the social elements more appealing, and not only the availability of credentials and the success ratio of social attacks.

How about testing it for yourself? If you have a modern single-page application in your organization (Angular, React, etc.), and some method of monitoring attacks (WAF, logs, etc.), note:

Which attacks are being executed on your apps?
Which pages/methods and parameters are getting attacked on a regular basis, and which pages/methods are not?
Are the pages being exempted technologically complex to crawl, activate, or identify?

Maybe complexity isn't the enemy of security after all.
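If you want to start answering those questions from ordinary web server logs, a crude tally along these lines can be a starting point (the log format and attack patterns below are assumptions; a WAF or purpose-built tool will do far better):

```typescript
// Crude sketch: tally requests whose query strings match common attack patterns,
// grouped by path, from a combined-format access log.
import { readFileSync } from "fs";

const attackPatterns: Array<[string, RegExp]> = [
  ["sqli", /(union\s+select|'\s*or\s*'1'\s*=\s*'1|sleep\()/i],
  ["xss", /(<script|onerror\s*=|javascript:)/i],
  ["traversal", /(\.\.\/){2,}/],
];

const safeDecode = (s: string): string => {
  try {
    return decodeURIComponent(s);
  } catch {
    return s;
  }
};

const counts = new Map<string, number>();
const log = readFileSync(process.argv[2] ?? "access.log", "utf8");

for (const line of log.split("\n")) {
  // Combined log format: the request line sits inside quotes, e.g. "GET /a?b=c HTTP/1.1"
  const request = line.match(/"(?:GET|POST|PUT|DELETE) ([^ ]+) HTTP/);
  if (!request) continue;

  const [path, query = ""] = request[1].split("?");
  for (const [name, pattern] of attackPatterns) {
    if (pattern.test(safeDecode(query))) {
      const key = `${name}  ${path}`;
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
  }
}

// Most-attacked path/pattern combinations first.
for (const [key, count] of [...counts.entries()].sort((a, b) => b[1] - a[1])) {
  console.log(`${count}\t${key}`);
}
```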

2016 DBIR & Application Security: Let's Get Back to the Basics Folks

This is a guest post from Tom Brennan, owner of ProactiveRISK, who serves on the Global Board of Directors for the OWASP Foundation.

In reading this year's Verizon Data Breach Investigations Report, one thing came to mind: we need to get back to the basics. Here are my takeaways from the DBIR.

1. Remain Vigilant

Recently, data relating to 1.5 million customers of Verizon Enterprise was available for sale. Some would say this is ironic, but what it means to me is that everyone is HUMAN. SEC_RITY requires "U" to be vigilant in all aspects of its operations: the creation, deployment, and use of technology. I was very happy to see the work of the Center for Internet Security (CIS) Top 20 Security Controls referenced. These are important proactive steps in operating ANY business, and I'm proud to be one of the collaborators on this important project.

CIS Top 20 Security Controls

1: Inventory of Authorized and Unauthorized Devices
2: Inventory of Authorized and Unauthorized Software
3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
4: Continuous Vulnerability Assessment and Remediation
5: Controlled Use of Administrative Privileges
6: Maintenance, Monitoring, and Analysis of Audit Logs
7: Email and Web Browser Protections
8: Malware Defenses
9: Limitation and Control of Network Ports, Protocols, and Services
10: Data Recovery Capability
11: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
12: Boundary Defense
13: Data Protection
14: Controlled Access Based on the Need to Know
15: Wireless Access Control
16: Account Monitoring and Control
17: Security Skills Assessment and Appropriate Training to Fill Gaps
18: Application Software Security
19: Incident Response and Management
20: Penetration Tests and Red Team Exercises

2. All software security issues are software quality issues.

Unfortunately, finding fault is what some humans do best; having adequate controls is what IT defense is actually about. The sections in the Verizon report that discussed attack vectors should remind everyone that not all software quality issues are security issues, but all software security issues are software quality issues. Currently, one of the greatest risks to software is third-party software components.

3. What type of Attacker are you Defending Against?

What has not changed since 1989, when I first used ATDT,,, to wardial by modem off an 8-bit for the first time, is that it's STILL people behind the keyboards. People on a wide ethical spectrum are still using keyboards to harm, steal, deface, intimidate, and wage cyber attacks/wars, and ALL criminals need is means, motive, and opportunity. Every organization needs to be asking what TYPE of attacker they are defending against (threat modeling). For example: "My business relies on the internet for selling widgets; the adversary is an indiscriminate bot/worm, or a random individual with skills, or a group of skilled and motivated attackers." This is where OWASP's Threat Risk Modeling workflow can really help when proactively defined with OWASP's Incident Response Guidelines. Modern and resilient businesses should conduct mock training exercises to educate and prepare the team. Business is about taking risks, and not all survive.
Some lack the number of customers they need to survive, others struggle to move enough product, and now, for many, the possibility that the business could be hacked and be unable to recover is a real concern, whether you run a small business or sit on the Board of Directors of a Fortune 50 organization. You can use insider threat examples, outsider attacks, and third-party vendor risks; each is different, and decisions need to be made based on your tolerance threshold.

4. OWASP - Get Involved! It's free and it's helpful!

As the 2016 Verizon Data Breach Investigations Report shows, web applications remain a primary vector of successful breaches. I encourage everyone to get involved with the OWASP Foundation, where I spend a great deal of time. OWASP operates as a non-profit and is not affiliated with any technology company, which means it is in a unique position to provide impartial, practical information about AppSec to individuals, corporations, universities, government agencies, and other organizations worldwide. Operating as a community of like-minded professionals, OWASP issues software tools and documentation on application security. All of its articles, methodologies, and technologies are made available free of charge to the public. OWASP maintains roughly 250 local chapters in 110 countries and counts tens of thousands of individual members. The OWASP Foundation uses several licenses to distribute software, documentation, and other materials. I encourage everyone to review this OPEN resource and ADD to the knowledge tree.

I really enjoyed the 2016 Verizon DBIR for the data. Verizon's perspective in this report is based on a wide array of both customer engagements and data from nearly 70 partners. The average reader who uses a credit card at a hotel, casino, or retail store may feel uneasy about the risk of trusting others with their data. If your business is dealing with confidential data, you should be concerned and proactive about the risks you take. If you haven't already, take a look at the Defender's Perspective of this year's DBIR, written by Bob Rudis.

3 Web App Sec-ian Takeaways From the 2016 DBIR

This year's 2016 Verizon Data Breach Report was a great read. As I spend my days exploring web application security, the report provided a lot of great insight into the space that I often frequent. Lately, I have been researching out of band and second…

This year's 2016 Verizon Data Breach Report was a great read. As I spend my days exploring web application security, the report provided a lot of great insight into the space I often frequent. Lately, I have been researching out-of-band and second-order vulnerabilities, as well as how Single Page Applications are affecting application security programs. The following three takeaways are my gut-reaction thoughts on the 2016 DBIR from a web app sec-ian perspective:

1. Assess Your Web Applications Today

Not tomorrow, not next week: today. I don't want to see talented geeks jump on board a hot startup and hear, "Oh, we don't have a security program." I look at this report and the huge increase in web application attacks and wonder how ANYONE could still not be taking their web application security program seriously. Seriously? Let's get serious for a slim second. There has been a dramatic rise in web application attack patterns across all industry verticals, as covered in the research, though three industries (entertainment, finance, and information) have seen a larger jump. Web application attacks make up 50% or more of total breaches, with a notable jump in the finance industry from 31% to 82% in 2016. However, it is suggested that this jump is due to sampling errors introduced by the overwhelming number of data points linked to Dridex.

2. Fun, Ideology, or Grudge drove most incidents. Money motivated most theft. Few spies were caught.

At first eye-numbing stare, it appears that the web application hacking motives of 2015 all came from grudge-wielding, whistle-blowing people, with no real secret-agent spying going on, though admittedly with a sizable criminal element. When this same data is filtered through ‘confirmed data disclosure,' 95% of the resulting cases appear to be financially motivated, and it becomes much more apparent that data disclosure is all about the money.

3. “I value your input, I just don't trust it.” (p. 30)

Unvalidated input continues to be one of the most fundamental software problems leading to web application breaches. From the dawn of client/server software to today's Single Page Application frameworks, we have been releasing applications with partially validated inputs, despite the fact that we have known about input validation for decades. Unfortunately, this fundamental cultural development flaw will likely not be leaving us anytime soon. Please, if you learn anything from the DBIR, make sure to validate input, folks! Among the top 10 threat varieties of 2015, SQL Injection (#7) and Remote File Inclusion (#9) are ever present and are a direct result of trusting input in an unsafe manner. The ‘Recommended Controls' section for Web App Attacks in the DBIR states, "validate inputs, whether it is ensuring that the image upload functionality makes sure that it is actually an image and not a web shell, or that users can't pass commands to the database via the customer name field." This is not to say that output validation is not also important; rather, input is where most of the initial damage can occur, while output validation reduces the information an attacker can gather about the target. That's it for my take on the 2016 Verizon Data Breach Investigations Report. Be sure to check out the Defender's Perspective, written by Bob Rudis.
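To make the DBIR's image-upload example above concrete, here is a minimal, hypothetical Python sketch, standard library only. It is not Rapid7, DBIR, or AppSpider code, and the extension allowlist and magic-byte table are illustrative assumptions; it simply shows the idea of rejecting a web shell that has been renamed with an image extension by checking the file header as well as the extension.

import os

# Extensions this hypothetical upload handler is willing to accept.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif"}

# Leading magic bytes for the image formats we accept.
MAGIC_PREFIXES = (
    b"\x89PNG\r\n\x1a\n",  # PNG
    b"\xff\xd8\xff",       # JPEG
    b"GIF87a",             # GIF (1987 variant)
    b"GIF89a",             # GIF (1989 variant)
)

def looks_like_image(filename: str, data: bytes) -> bool:
    """Accept the upload only if the extension is allowlisted AND the
    file header matches a known image format."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False
    return any(data.startswith(prefix) for prefix in MAGIC_PREFIXES)

# A PHP web shell renamed to avatar.png fails the header check...
print(looks_like_image("avatar.png", b"<?php system($_GET['cmd']); ?>"))    # False
# ...while bytes that actually start with a PNG header pass.
print(looks_like_image("avatar.png", b"\x89PNG\r\n\x1a\n" + b"\x00" * 16))  # True

Checking content rather than trusting the filename is the point of the DBIR recommendation: the server decides what the input is, instead of letting the uploader tell it.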

The 2016 Verizon Data Breach Investigations Report (DBIR) - A Web Application Security Perspective

The 2016 Verizon Data Breach Investigations Report (DBIR) is out and everyone is poring over the report to see what new insights we can take from last year's incidents and breaches. We have not only created this post to look at some primary application security…

The 2016 Verizon Data Breach Investigations Report (DBIR) is out, and everyone is poring over the report to see what new insights we can take from last year's incidents and breaches. We have not only created this post to look at some primary application security takeaways, but we have also gathered guest posts from industry experts. Keep checking back this week to hear from people living at the front lines of web application security, as well as commentary from several of our customers who provided some quick takeaways that can help you and your team. Let's dive into four key takeaways from this year's DBIR, from an application security point of view.

1. Protect Your Web Applications

Web app attacks remain the most common breach pattern, underscoring what we already know: web applications are a preferred vector for malicious attackers, and they are difficult to protect and secure. The figure below shows that 40% of the breaches analyzed for the 2016 DBIR were web app attacks.

2. Stop Auditing Like It's 1999

We've said this before and we'll say it again: applications are evolving at a rapid pace, and they are becoming more complex and more dynamic with each passing year. From web APIs to Single Page Applications, it's critical that your application security experts not only understand the technologies used in your applications, but also find tools that are able to handle these modern applications. As we pay our respects to the dearly beloved Prince, please, stop testing like it's 1999. Update your application security testing techniques, sharpen your skills, and make sure your tools understand modern applications.

3. No Industry is Immune

No industry is exempt from web app attacks, but some are seeing more breaches than others. For the finance, entertainment, and information industries, web app attacks are the primary attack pattern in reported breaches. For the financial industry, web app attacks account for a whopping 82% of breaches. These industries, in particular, should be assessing and gearing up their web application security programs to ensure optimal investment and attention.

4. Validate Your Inputs

As an industry, we have been talking about unvalidated inputs forever. It feels like we are fighting an uphill battle. We strive to train our developers on secure coding, the importance of input validation, and how to prevent SQL Injection, XSS, buffer overflows, and other attacks that stem from unvalidated and unsanitized inputs. Unfortunately, too many application inputs continue to be vulnerable, and we are swimming against a steady stream of new applications written by developers who continue to repeat the same mistakes (a minimal sketch of one such fix, a parameterized database query, follows at the end of this post).

That's our take on the 2016 Verizon Data Breach Investigations Report. We would love to hear your thoughts in the comments! Please check back throughout the week to hear what some of our favorite web application security experts have to share about their key takeaways and reactions from this year's DBIR. For more perspective on this year's DBIR through an application security lens, check out the rest of the blogs in this series:

3 Web App Sec-ian Takeaways From the 2016 DBIR
Social Attacks in Web App Hacking - Investigating Findings of the DBIR
2016 DBIR & Application Security: Let's Get Back to the Basics Folks

Be sure to check out The 2016 Data Breach Investigations Report Summary (DBIR) - The Defender's Perspective, by Bob Rudis (aka @hrbrmstr).
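As promised in takeaway 4, here is a minimal, hypothetical Python sketch using only the standard library's sqlite3 module. The table and values are made up, and this is not code from the DBIR or any Rapid7 product; it contrasts an injectable, string-concatenated query with a parameterized one that treats the same input strictly as data.

import sqlite3

# A throwaway in-memory database with a couple of sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO customers VALUES ('alice', 100), ('bob', 200)")

malicious_name = "x' OR '1'='1"

# Unsafe: the attacker-controlled value becomes part of the SQL text,
# so the injected OR clause returns every row in the table.
unsafe = conn.execute(
    "SELECT * FROM customers WHERE name = '" + malicious_name + "'"
).fetchall()
print(unsafe)  # [('alice', 100), ('bob', 200)]

# Safe: the driver binds the value as data, never as SQL, so nothing matches.
safe = conn.execute(
    "SELECT * FROM customers WHERE name = ?", (malicious_name,)
).fetchall()
print(safe)    # []

The same pattern, binding parameters instead of building SQL strings, applies to whatever database driver or ORM your applications actually use.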

Modern Applications Require Modern Dynamic Application Security Testing (DAST) Solutions

Is your Dynamic Application Security Testing (DAST) solution leaving you exposed? We all know the story of the Emperor's New Clothes. A dapper Emperor is convinced by a tailor that he has the most incredible set of clothes that are only visible to the wise.…

Is your Dynamic Application Security Testing (DAST) solution leaving you exposed? We all know the story of the Emperor's New Clothes. A dapper Emperor is convinced by a tailor that he has the most incredible set of clothes, visible only to the wise. The Emperor purchases them but cannot see them, because it is all a ruse: there are no clothes. Unwilling to admit that he doesn't see the clothes, he wanders out in public in front of all of his subjects, proclaiming the clothes' beauty, until a child screams out that the Emperor is naked.

Evolving Applications

If there is one thing we know for sure in application security, it's that applications continue to evolve. This evolution continues at such a rapid pace that both security teams and vendors have trouble keeping up. Over the last several years, there have been a few major shifts in how applications are built. For several years now, we have been security testing multi-page, AJAX-driven applications powered by APIs, and now we're seeing more and more Single Page Applications (SPAs). This is happening across all industries and at organizations of all sizes. Take Gmail, for example: in the image below, you can see one of the original versions of Gmail compared to a more recent version. Today's Gmail is a classic example of a modern application. So, as security professionals, we have built our programs around automated solutions like DAST, but how are DAST solutions keeping up with these changes?

DAST Solutions - The Widening Coverage Gap

Unfortunately, most application security scanners have failed to keep up with these relatively recent evolutions. Web scanners were originally architected in the days of classic web applications, when applications were static and relatively simple HTML pages. While scanners have never covered, and will never cover, an entire web application, they should cover as much as possible. Unfortunately, the coverage gap has widened in recent years, forcing security teams to conduct even more manual testing, particularly of APIs and Single Page Applications (a toy illustration of this gap follows at the end of this post). But with over-burdened and under-resourced application security teams, testing by hand just doesn't cut it.

Closing the Gap with Rapid7 AppSpider

Of course, we don't think manual testing is an acceptable solution. Application security teams and application scanners can and should close this coverage gap with automation to improve both the efficiency (reduce manual effort) and effectiveness (find more vulnerabilities) of security efforts. If this is something that interests you, you have come to the right place! Keeping up with application technology is one of our specialties. The application security research team at Rapid7 has been committed to maximum coverage since AppSpider was created. Our customers rely on us to keep up with the latest application technologies and attack techniques so that they can leverage the power of automation to deliver a more effective application security program. If your solution isn't effectively addressing your applications and you are looking for a way to test APIs, dynamic clients, and SPAs more automatically, download a Free Trial of AppSpider today! To learn more, visit www.rapid7.com. For more information on how to reduce your application security exposure, check out these resources:

7 Questions to Ask your DAST Security Vendor
Whiteboard Wednesday: Keeping up with Application Complexity
Blog: AppSpider's Got Swagger
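To illustrate the coverage gap mentioned above, here is a toy, standard-library-only Python sketch. It is not AppSpider internals, and the /api/... paths and file names are invented for the example; it simply shows why a crawler that only parses the HTML a server returns discovers almost none of a Single Page Application, whose routes and endpoints exist only after the JavaScript runs.

from html.parser import HTMLParser

# What the server actually returns for a typical SPA: an empty mount point
# plus a script bundle. The real "pages" are rendered client-side from
# JSON API calls (the /api/... paths below are made up for this sketch).
SPA_HTML = """
<html>
  <body>
    <div id="app"></div>
    <script src="/static/bundle.js"></script>
    <!-- bundle.js fetches /api/accounts and /api/transfers and renders
         routes like #/accounts and #/transfers entirely in the browser -->
  </body>
</html>
"""

class LinkCollector(HTMLParser):
    """Collects href/src attributes the way a classic HTML-only crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src"):
                self.links.append(value)

collector = LinkCollector()
collector.feed(SPA_HTML)
print(collector.links)  # ['/static/bundle.js'] and nothing else

# A scanner that cannot execute the JavaScript (or read an API description
# such as a Swagger/OpenAPI document) never learns that /api/accounts or
# /api/transfers exist, which is exactly the gap a modern DAST tool must close.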
