Wow, this past week has been a pretty long year for Zoom.
As the COVID-19 global pandemic abruptly moved the whole knowledge-working world to work-from-home, virtual meetings have rapidly become de rigueur for pretty much everyone I know. As a result, Zoom's stock price hit an all-time high in mid-March (despite the stock market being depressed overall), trading on massive volume and suddenly sporting a P/E ratio well north of 1,500x. Thanks to its insanely easy-to-use, cross-platform video conferencing service, Zoom was a successful B2B company well before the pandemic. Today, it's a household name beyond airport billboards, as seemingly everyone has been transformed into consumers and fans while we're all steering clear of other humans.
Well, almost everyone. Turns out, all this recent success has painted a huge target on Zoom's back, and people are falling all over themselves every time a new security issue is discovered in the platform, no matter how minor. Because of this, I wanted to take a little bit of your time—and mine—to discuss the Zoom-related scuttlebutt that has surfaced over the past few days. But, if you don't have time to get into it, the TL;DR summary is best summed up as:
Yes, Zoom has some security issues. It's complex software. All complex software has bugs. Some of those bugs are security-relevant. The engineers, marketers, and leadership at Zoom are neither dumb nor evil. You can judge Zoom on its response to security issues, more so than on the security issues themselves, within reason.
Before we jump into the issues, a bit of full disclosure: Rapid7 is a happy Zoom customer, and has been for a while now. Personally, I kind of love the software and service, and prefer it over every other video conferencing solution I've had my hands on, pretty much ever. I'm hopeful this doesn't make me a craven apologist, but do let me know if I haven't been able to adequately manage my own biases.
Okay. Here we go, in as close to chronological order of mainstream reporting as I can manage:
Zoom leaking data to Facebook
The gossip: Zoom leaks personal information to Facebook, even if you're not a Facebook user.
The reporting: Joe Cox, writing for Vice Motherboard, reported on March 26, 2020, that Zoom iOS App Sends Data to Facebook Even if You Don’t Have a Facebook Account. The article accurately illustrates what went down with Zoom's usage of the Facebook software development kit (SDK) on Apple platforms that run iOS (namely, iPhones and iPads), and it appears that Joe is the original researcher who first reported the issue to Zoom.
What's happened since: Zoom has pulled the Facebook SDK from its iOS app for Apple platforms by removing the "Login with Facebook" feature.
The updated Motherboard article does indeed confirm that Zoom acted within days by removing the leaky feature. (Zoom's blog post claims it was under two days, but who's counting?) It does not appear that Zoom was transmitting this information on purpose (let alone, selling it), and the personal information being leaked was basic diagnostics about the phones and tablets—things like screen size and storage space, along with some coarse location data. Importantly, it wasn't things like usernames, passwords, phone numbers, or nuggets of information from conversations. Motherboard also reports that Facebook was, in turn, setting a cookie-like unique ID for users, presumably for advertising.
To me, this reads like a success story, for the most part. Researcher Joe Cox found an information leak vulnerability, reported it, and it got fixed inside of a week. Technically, Motherboard dropped 0day on Zoom by publishing prior to the fix, but I don't know how that conversation went and what kinds of disclosure timelines were agreed upon. In the end, this illustrates one of the dirty secrets of rapid application development: Not all developers are fully aware of the implications of using someone else's SDK. It sounds like all Zoom devs wanted was the ability to use Facebook's SSO, but what they got was that, plus the (usual for Facebook app devs) tracking and sharing back to Facebook.
Oh, and now there's a civil lawsuit. Because there's always a lawsuit.
Zoom and end-to-end encryption
The gossip: Zoom video conferences and calls are vulnerable to eavesdropping.
The reporting: Micah Lee and Yael Grauer, writing for The Intercept, reported on March 31, 2020, that Zoom Meetings aren't End-to-End Encrypted, Despite Misleading Marketing. The article explores and explains what end-to-end (E2E) encryption is and why it's important, and points out some of the claims that Zoom makes on its website about using end-to-end encryption for video conferencing.
What's happened since: It's been two days since The Intercept published their (presumably independently discovered) findings, and when they asked Zoom for comment, they got a canned "we take your security very seriously"-style response.
Let me explain ... no, there is too much. Let me sum up: Zoom does not actually enforce E2E encryption in video conferences, despite the company's claims. As far as the court of cryptography law is concerned, this is pretty bad. Like, inexcusably bad. But in fact, for most people, it’s probably okay, though there are obviously exceptions where people genuinely do need actual E2E.
What Zoom is providing is a kind of qualified encryption across the call (which is why they call it end-to-end), in which Zoom itself retains the ability to decrypt. As Yael and Micah explain, at length and with corroboration, real E2E is very different from this approach as far as trust and security are concerned. With E2E, nobody outside the parties to the video conference can reveal the contents of the call. With transport-layer security (TLS), that power rests with any of the parties, with Zoom Video Communications, Inc. itself, and with anyone who has compromised, or successfully subpoenaed, Zoom. You see the problem here?
To be clear, though, this does not mean your communications are wide open to just anyone. Your ISP is still in the dark, as are people who are on your shared local network, upstream networks, next-door networks—basically, anyone other than people sitting on the endpoints of the clients or wherever Zoom HQ terminates its connections. Assuming all the certificate exchange bits are happening correctly, you have just as much security with a Zoom call as you do with any other HTTPS interaction, which is still pretty good, and certainly good enough for most people in most use cases. After all, people pass secrets over normal old phone calls all the time—and sometimes those calls get wiretapped.
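The trust difference can be sketched in a few lines of toy code. This is an illustrative XOR "cipher" with made-up variable names, not real cryptography and not Zoom's actual design: in the server-mediated model, the service mints the meeting key and can therefore always decrypt; with true E2E, only the endpoints ever hold the key and the server merely relays ciphertext it cannot read.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stream "cipher" for illustration only; not real crypto.
    return bytes(d ^ k for d, k in zip(data, key))

# Server-mediated ("encrypted in transit"): the server generates the
# meeting key and distributes it to clients, so the server (or anyone
# who compromises or subpoenas it) can also decrypt the traffic.
server_key = secrets.token_bytes(32)
wire_blob = xor(b'secret plans', server_key)   # Alice encrypts...
server_view = xor(wire_blob, server_key)       # ...and the server can read it

# End-to-end: the key is agreed upon between the endpoints themselves
# (in practice via a key exchange); the server only relays a blob it
# has no key for.
e2e_key = secrets.token_bytes(32)              # known to Alice and Bob only
relayed = xor(b'secret plans', e2e_key)        # all the server ever sees
bob_view = xor(relayed, e2e_key)               # Bob decrypts at his end

print(server_view, bob_view)                   # b'secret plans' b'secret plans'
```

The point of the sketch: in the first half, the decryption key lives on the server; in the second, the server holding `relayed` alone learns nothing.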
In my opinion, the more egregious element of this is the appearance of false advertising. It's possible that somewhere along the way, there was a miscommunication between Zoom engineering and Zoom marketing (hey, it happens). And it should be highlighted that Zoom's text chat functionality is, in fact, E2E-encrypted. It's possible that's what Zoom is referencing on their website and in their security whitepaper. However, they have now acknowledged in their April 1 blog, The Facts Around Zoom and Encryption for Meetings/Webinars, that when they use the language "end-to-end encryption," they are not using it in the technical way that cryptographers do. In doing so, they seem to have set themselves up for exactly this kind of fallout. The bottom line is that there is no real, reasonable, or even charitable reading of the interface elements that indicates the entire videoconference is E2E-encrypted.
While Zoom figures out how to deal with this, if you are looking for the kind of privacy from eavesdropping that E2E provides, Zoom ain't it, and certainly won't be for at least several months, since engineering true E2E is difficult. Incidentally, I don't think anything else comparable enforces E2E, either. FaceTime from Apple supports E2E, but doesn't scale like Zoom, and of course, isn't cross-platform. In the end, each organization will have to perform its own risk assessment and decide what level of risk it's willing to accept based on what we know.
This cryptoscandal is going to be simmering for a while.
Zoom and UNC paths
The gossip: Zoom can expose your Windows passwords to other users.
The reporting: Lawrence Abrams, writing for Bleeping Computer, reported on March 31, 2020, that Zoom Lets Attackers Steal Windows Credentials via UNC Links. The article seems to have its genesis in a tweet from @_g0dmode from March 23, which states that Zoom chats turn UNC paths, like
\\example.com\, into clickable links on Windows clients. If someone were to click on that link, their Windows username and NTLM credential hash (a crackable version of a password) might be sent across the internet to the site provided by the attacker. A related trick is to point to localhost (the client's own computer) and execute programs that are already there, after some "Are you sure?"-style warnings from Windows. Lawrence attempted to report this issue to Zoom, presumably sometime between March 23 and March 30, but had not gotten a response.
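The client-side fix is conceptually simple: stop auto-linking UNC paths in chat. Here's a minimal sketch of that idea in Python (the function name and bracketed rendering are hypothetical; Zoom's actual fix lives in its own client code):

```python
import re

# Matches Windows UNC paths such as \\example.com\share, which a chat
# client might otherwise turn into clickable links.
UNC_RE = re.compile(r'\\\\[\w.-]+(?:\\[\w.$-]+)*\\?')

def neutralize_unc(text: str) -> str:
    """Render UNC paths as inert text instead of clickable links."""
    # A function replacement sidesteps backslash-escape headaches in
    # the substitution string.
    return UNC_RE.sub(lambda m: '[unc-path: ' + m.group(0) + ']', text)

print(neutralize_unc(r'Free money at \\evil.example.com\loot !'))
# → Free money at [unc-path: \\evil.example.com\loot] !
```

Anything the regex catches is displayed but never hyperlinked, so a click can't trigger an outbound SMB authentication attempt.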
What's happened since: On April 1, Zoom announced a fix for the UNC path rendering issue. Even though addressing this bug was the right thing to do, I'm unconvinced it’s actually a security vulnerability in the way we normally think about security vulnerabilities.
You cannot force someone to reveal their username and password hash with Zoom, as far as we know today. Instead, the attacker needs to get the user to click on the link. Now, that might not be particularly difficult with a standard "Click here for free money" kind of lure; in fact, exactly this happens all the time in email phishing campaigns, where leaking NTLM password hashes is a favorite mechanism that criminals (and pen testers!) use to collect and crack credentials. But the attack does require the bad guy to be on the Zoom call first, and so far, we haven't called this link-rendering behavior a vulnerability in literally any other product, so this doesn't seem to me like a Zoom problem.
Zoom is primarily for talking to people who are remote, and ideally, those people shouldn't be using SMB across the internet in the first place. In related news, today is a fine day for residential ISPs to start blocking SMB traffic entirely from home, by default, much in the same way residential ISPs have been blocking SMTP on port 25 for years.
Zoom and the macOS system interface
The gossip: Zoom impersonates system prompts to trick users into installing it.
The reporting: Felix Seele, a malware analyst, reported on VMRay's blog on April 1, 2020, Good Apps Behaving Badly: Dissecting Zoom’s macOS installer workaround. The blog goes into technical detail about how Zoom abuses macOS's handling of preinstall scripts in flat pkg files, expanding on his March 30 tweet about the same. Now, "abuse" is a strong word, but in the end, preinstall scripts are supposed to merely check whether your MacBook is cool enough to handle whatever's in the package's goody bag; they're not intended to do the actual installation heavy lifting. But that's exactly what Zoom's package does.
In addition to this, in some cases, Zoom needs extra privileges to do its thing, in which case it pops up a system-generated (but application-controlled) password prompt. Normally, this prompt would say something boring and normal like, "Zoom needs your password to update the existing application," but in this case, the dialog box is retitled as, "System need your privilege to change." This sounds pretty shady, like something you'd expect from malware rather than a legitimate application.
What's happened since: Felix's tweet took off, clocking nearly 2,000 likes as of this writing, which is pretty amazing for someone who (now) has fewer than 1,500 followers. One of those likes came from Zoom's CEO, Eric Yuan, who replied thanking Felix for the report. As near as I can tell, this tweet was the first report Zoom had of this concern, and I haven't (yet) seen any commitment or action from Zoom to change any behavior here.
What we're left with is this: Does malicious-seeming behavior categorically define Zoom as malicious? I don't think so, but I can absolutely see the reason for concern. For example, I can imagine a scenario where an attacker would give someone a Zoom meeting link, specifically in order to get them to install the client, in order to turn around and exploit some vulnerability in that client. This installation behavior gives that imaginary attacker a convoluted, but possible, attack precursor. Now, I don't think Zoom is using these techniques to aggressively and maliciously install itself on unsuspecting victims—it's using them to get the client installed on machines owned by people who want it in as few clicks as possible. In other words, it's a clever UX hack. A shady, weird-smelling UX hack, but clever.
Also, a tiny, almost invisible part of me wishes that I thought of it first.
Zoom and local privilege escalation
The gossip: Local attackers can use Zoom to install malware.
The reporting: Zack Whittaker, writing for TechCrunch on April 1, 2020, reported that Ex-NSA hacker drops new zero-day doom for Zoom. I know reporters don't often get final say on their headlines, but holy smokes, that title. Stripping away the hyperbole, Zack covers the findings of Patrick Wardle, a macOS security expert, first published on Patrick's blog, Objective-See. In short, these are two local privilege escalation exploits that take advantage of some software architecture decisions made by Zoom. The first subverts the aforementioned shady malware preinstaller technique to launch whatever the attacker wants as root (the best, most powerful kind of user access), while the second uses Zoom's lax local library validation to subvert library functions in order to gain mic and webcam permissions without asking.
TechCrunch reached out to Zoom sometime between March 30 and April 1, though it looks like Patrick didn't attempt to privately disclose this to Zoom before publishing.
What's happened since: Within three or four dozen hours of becoming aware of the bugs, Zoom released an update that addresses both of Patrick's issues, so check for updates to fetch the fixes, or do what you usually do and get them automagically.
Now that they've reportedly been addressed, let's talk a little about this "zero-day doom." In order to exploit them, you, the attacker, need to be able to write files on the local MacBook as a user. In other words, the attacker is calling from inside the house, and usually the attacker is you.
While this might sound far-fetched or masochistic, it's pretty normal for malware that you accidentally installed and ran as yourself. I'll let you in on a little secret about computer security: "If I can touch your computer, it's my computer." I don't remember (and weirdly can't find) the source of that quote, but it's from 1999 or so. At any rate, local privilege escalation is slowly but surely getting harder on full computers (and harder still on mobile devices), but ultimately, such attacks are nearly impossible to defend against. This is why we have local passwords and worry a ton about how executable content gets delivered to the end user. But if you're in a position to install and run arbitrary software, you're probably not that far off from root- or SYSTEM-level access when you put your back into it.
Does that make these bugs any less buggy? Absolutely not. These are for-real security issues, discovered and published by a for-real security expert with real-world experience, and I'm happy that some equal-and-opposite effort from Zoom engineering took care of them. Also, uh, don't run malware?
Zoom, China, and more encryption snafus
The gossip: China can eavesdrop on Zoom calls, also Zoom crypto sucks.
The reporting: Micah Lee, writing for The Intercept, reported on April 3, 2020, that Zoom's Encryption is "Not Suited for Secrets" and has Surprising Links to China, Researchers Discover. The piece covers an in-depth technical review of Zoom's encryption capabilities, written by Bill Marczak and John Scott-Railton from the respected Citizen Lab of the University of Toronto, available here. The research is essentially a deep dive on exactly how Zoom encryption works, and, well, mistakes were made. The biggest technical finding was that Zoom is using a form of modern encryption called "Advanced Encryption Standard (AES) in Electronic Codebook (ECB) mode," which is basically the worst possible mode for something like a video call, because ECB preserves patterns in the underlying plaintext. There was also a reiteration that Zoom does not support or enforce E2E encryption.
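To see why ECB is so poorly suited to repetitive data like video frames, here's a toy demonstration. A hashed stand-in plays the role of the block cipher (it isn't AES and isn't invertible), but the property that causes the leak is the same: in ECB, identical plaintext blocks always encrypt to identical ciphertext blocks, while a counter-based mode like CTR does not have that problem.

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: deterministic, key-dependent,
    # fixed-size output. Illustration only.
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key, plaintext):
    # ECB: each 16-byte block is encrypted independently, so identical
    # plaintext blocks always yield identical ciphertext blocks.
    return [toy_block_encrypt(key, plaintext[i:i + 16])
            for i in range(0, len(plaintext), 16)]

def ctr_encrypt(key, plaintext):
    # CTR-style: encrypt a per-block counter to make a keystream, so
    # repeated plaintext no longer produces repeated ciphertext.
    out = []
    for n, i in enumerate(range(0, len(plaintext), 16)):
        keystream = toy_block_encrypt(key, n.to_bytes(16, 'big'))
        out.append(bytes(a ^ b for a, b in zip(plaintext[i:i + 16], keystream)))
    return out

key = b'sixteen byte key'
frame = b'AAAAAAAAAAAAAAAA' * 4      # highly repetitive "video" data

ecb = ecb_encrypt(key, frame)
ctr = ctr_encrypt(key, frame)
print(len(set(ecb)))                 # 1 -- the repetition shows right through ECB
print(len(set(ctr)))                 # 4 -- CTR hides it
```

This pattern-preservation is the basis of the famous "ECB penguin" image, where an ECB-encrypted bitmap remains visibly recognizable.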
The headline of The Intercept piece and subsequent coverage of the issues focuses mainly on the threat of Chinese government agencies having the ability to compel Zoom to provide key material and thus decrypt Zoom conversations. Turns out, a handful of the key servers used to establish Zoom call security are located in China, and Zoom employs some 700 research and development staff in China.
What's happened since: Curiously, conversations today fail to mention that the vast majority of key servers are located in the U.S., which means that all those keys are subject to FISA warrants and subpoenas from the FBI and other U.S. agencies. The fact that a company is subject to laws in jurisdictions where it operates, and must comply with lawful government surveillance orders, is not new or unique. Plus, previous research already established that Zoom's encryption strategy is not actually end-to-end encryption, so this is unsurprising as well. Zoom is not Apple.
But back to the bad, naughty cryptography. The most significant claims are that the keys are generated and distributed in a weird way, video conferences are wrapped in what appears to be a home-grown encryption scheme, the keys are much shorter than Zoom had advertised (128-bit, not 256-bit), and the ECB mode of AES is unsuitable for video content. While the researchers did not demonstrate cracking the encryption or otherwise pulling useful information from an encrypted data stream, I suspect this capability is in the toolbox of the world's most advanced intelligence organizations.
With that in mind, there is a crucial sentence in Citizen Lab's report:
"For those using Zoom to keep in touch with friends, hold social events, or organize courses or lectures that they might otherwise hold in a public or semi-public venue, our findings should not necessarily be concerning."
In other words, Zoom is not suitable for conversations involving Top Secret clearance topics, highly sensitive intellectual property concerns, dangerous whistleblowing against a repressive government, or planning and executing criminal conspiracies. If these are your remote work activities, Zoom is not right for you.
If you are comfortable with normal telephone calls, public meetings, or casual video calls on pretty much any other platform, Zoom is almost certainly Good Enough. You are just not that interesting to the Chinese, the Americans, or anyone else.
To be absolutely clear: We should, and must, demand better encryption and transparency from Zoom. Over the next 89 days, I expect that significant work will be done here. There are some relatively easy technical wins in the crypto implementation of which Zoom is now acutely aware, so expect another round of Zoom updates shortly.
Incidentally, the Citizen Lab report also mentioned a vulnerability in the Waiting Room feature, but did not disclose details to The Intercept or anyone else. Instead, they're (quite reasonably) working with Zoom directly to fix this issue. Hooray.
CVD, you and me
You will notice that there is a common theme among all these vulnerability reports: The time from private reporting to public disclosure is measured in days and hours, and none of these timelines are over a week. This is not normal in vulnerability disclosure, and I'm worried about establishing this norm. Virtually everyone who's been around the vulnerability disclosure block agrees that some private lead time on vulnerability disclosure is valuable, from CERT/CC to ZDI to Project Zero. It's not about coddling oversensitive vendors—it's about reducing harm to end users.
I get it that Zoom is a particularly tempting target. They’re a $35-billion-plus company that may be perceived as having a not-stellar record when it comes to receiving and patching vulnerabilities—check out the blog I wrote on a Zoom vulnerability disclosure debacle in July 2019. In that situation, they had the chance to work with a researcher willing to deal with things privately, and unfortunately, Zoom blew it. But this was one disclosure gone sour, and so far, Zoom has responded to the recent public disclosures pretty promptly. It’s possible that they have only done so because the disclosures were made public, but I think Zoom probably deserves another chance to handle vulnerability disclosures sensibly and without causing panic among users.
I don't know how many vulnerability reports Zoom fields in a given month, and how many were reported in the Before Times and may be quietly moving along whatever normal triage workflow Zoom has. Since I'm a customer, I'll give them a call and ask. I'm also going to ask them if there's anything I can do to help, since I'm kind of a coordinated vulnerability disclosure (CVD) zealot. I think we generate the most good by privately reporting vulnerabilities, getting fixes out, and coordinating public disclosure at a time when we're doing the least harm. I don't like secret bugs, but I also don't like surprise public pantsing. Your bug, your choice, and journalists are going to journal, but wow, it would be cooler if we could just hold up a sec so we’re not potentially making the situation worse for end users.
To that end, if you have an interesting security finding in Zoom, or anything else, feel free to drop me a line at email@example.com and we can see about getting this taken care of in a sane way using Rapid7's normal security disclosure practices. We know Zoom can act quickly in the face of evidence and public opinion, but I'm not convinced we've given them a chance lately to act responsibly when we're being just a touch more kind about these reports.
Finally, this blog post might turn out longer over time. Since Zoom is everyone's favorite vuln-hunting punching bag, I expect to add more sections as new problems and confusions come to light, regardless of severity. For future updates, just scroll down to the end of this post for a changelog. In the meantime, we will continue to monitor the situation, from home, probably using Zoom. Nothing yet has convinced me there are better alternatives for my usual day-to-day.
Just before this blog post was published, Zoom announced a 90-day feature freeze in order to concentrate on tackling all and sundry security issues. That's an excellent sign that Zoom HQ is, in fact, taking this all seriously.
- April 2, 2020: First post on Facebook, E2E, UNC, password prompts, and local privesc.
- April 2, 2020: Zoom released version 4.6.19273.0402 for macOS. This update probably fixes the pkg preinstall script issue described by Felix.
- April 3, 2020: Update regarding AES ECB and China, as reported above.
- April 15, 2020: Lorenzo Franceschi-Bicchierai reports at Vice Motherboard that hackers are trying to sell RCE bugs for half a mil. "Trying" is the keyword here. There are much better black/gray market bugs for sale at that price point.
- April 15, 2020: EJ Dickson reports at Rolling Stone that a Zoom spokesperson stated that "we use a mix of tools, including machine learning, to proactively identify accounts that may be in violation" of Zoom's AUP that prohibits indecent content. One, this seems technically challenging—I don't think Zoom is running Not Hotdog over video feeds to detect sex parties. But, more importantly, why would you imply this, unnamed Zoom spokesperson? Statements like this are a gutshot to your privacy claims. Also, maybe update your AUP? It's a little out of date.
- April 22, 2020: It occurs to me that I never actually addressed Zoombombing. Sorry about that; it didn't strike me as a vulnerability, but I can see that many people do take it seriously. Very quickly: In the Before Times, Zoom had short, phone-number-length meeting IDs that an attacker could guess pretty easily, then join and cause disruptions. Now, Zoom strongly encourages passwords on your meetings, which makes it way harder, but not impossible, to just stumble upon random meetings. This is basically the same security model that regular old telephone conference calls use. In the end, if someone joins your call and you don't recognize them, the host can use Zoom's meeting controls to kick them out, and if you use passwords and waiting rooms, your risk of strangers dropping in falls to nearly zero.
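For a sense of why short numeric IDs invite war-dialing, here's the back-of-envelope arithmetic. The figures below are illustrative assumptions, not Zoom's real numbers:

```python
# Assumptions (illustrative only): 9-digit meeting IDs, 10 million
# concurrently active meetings, and an attacker guessing 100 IDs/second.
id_space = 10 ** 9
active_meetings = 10_000_000
guess_rate = 100                                 # guesses per second

hit_probability = active_meetings / id_space     # chance any one guess lands
guesses_per_hour = guess_rate * 3600
expected_hits_per_hour = guesses_per_hour * active_meetings // id_space

print(hit_probability)         # 0.01 -- a 1-in-100 chance per guess
print(expected_hits_per_hour)  # 3600 -- thousands of live meetings per hour
```

Under these assumptions, blind guessing finds live meetings constantly; requiring a password multiplies the effective search space enormously, which is why it's an effective mitigation.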
- June 4, 2020: This week, it was reported in a Bloomberg piece that Eric Yuan, CEO of Zoom, stated "Free users for sure we don’t want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose." This led to a lot of misunderstanding about what the "that" is. Yuan is referring here to end-to-end encryption, which I've covered above, but some took him to be claiming that free users would not enjoy any encryption at all. That's not true, and for most people, in most use cases, regular encryption is fine (if you're comfortable banking on the internet or talking on the phone, regular encryption is good enough). Now, there is legitimate confusion around what this means in terms of Zoom's cooperation with law enforcement for free users, especially given the explicit mention of law enforcement on an earnings call. It's also unclear how this distinction between the E2E haves and have-nots will be implemented. That said, Zoom has opened up their encryption whitepaper for comment, so if you're interested in this sort of thing, that's a good place to get involved, since Zoom's technical leadership (as well as legal and policy leadership) seem to be paying attention there.