Last night's disclosure by Google that malicious websites successfully put spyware on potentially thousands of iPhones is not just a one-day news story. It changes the game for iOS security.
For the past 12 years, iOS has been the gold standard in operating-system security, a standard that the developers of Android, macOS and Windows could only aspire to. You could use the fingers of one hand to count the instances of iOS malware found in the wild that worked on non-jailbroken iPhones — ever.
You could rely on iOS to keep you safe, unless you were a high-profile dissident in a repressive but fairly wealthy country. You could remain unconcerned that Apple does not permit, and did not need to have, antivirus software.
Today, the number of known working in-the-wild iOS exploits pretty much doubled. Suddenly, iOS doesn't seem so secure. If a relatively sloppy group of nation-state hackers could indiscriminately compromise iPhones for more than two years, including phones running the latest versions of iOS, how many other iOS-hacking campaigns are still out there?
"For this one campaign that we've seen, there are almost certainly others that are yet to be seen," the Google Project Zero blog post revealing the malware campaign said.
"This is rather terrifying," wrote Malwarebytes security researcher Thomas Reed on Twitter. "iPhone infections are scarier, because there's absolutely no way to identify whether your phone is infected without expert assistance... and even then, maybe not!"
Bluntly: How safe is your iPhone now? It seems a lot less safe than it did yesterday.
So what happened?
To catch you up, Google Project Zero researcher Ian Beer posted a series of long blog posts last night at about 8 p.m. Eastern time, or midnight GMT, detailing how Google's Threat Analysis Group had earlier this year "discovered a small collection of hacked websites" that were being used in "indiscriminate watering-hole attacks" against iPhone users.
Reed, in a Malwarebytes blog post later on Friday, summed up what the spyware implanted in the iPhones could steal -- "all keychains, photos, SMS messages, email messages, contacts, notes, and recordings" and "the unencrypted chat transcripts from a number of major end-to-end encrypted messaging clients, including Messages, WhatsApp, and Telegram."
Project Zero took a look at the websites and the spyware, figured out what was going on, and told Apple. Apple fixed the underlying flaws that made the attacks possible within a week, with iOS 12.1.4 on Feb. 7. (Other flaws used in the attacks had been patched already, but some iPhones were still vulnerable to them.)
Problem solved? In the short term, yes. But the fact that this went on for so long without anyone noticing, least of all Apple, is what's really concerning.
A market-changing shift?
Working iOS exploits were until now thought to be so rare and expensive that even well-funded nation-state attackers could use them only sparingly and only against the most high-level targets.
Beer makes a cryptic reference to "the million-dollar dissident" in his introductory post. He's referring to a human-rights activist in the United Arab Emirates who in 2016 was targeted by someone trying to get him to click on a booby-trapped website that used a previously unknown iOS exploit to "jailbreak" the visitor's iPhone so that spyware could easily be installed.
Such "one-click" iOS exploits, which require nothing from the target beyond a single tap on a link and leave no indication that the device has been compromised, have sold privately for up to a million dollars. But their shelf life is short: once discovered, they're quickly patched, as happened in the case of the UAE human-rights activist. Apple patched against the exploit three weeks after the activist found and reported it.
Yet the websites found by the Google researchers used 14 different iOS vulnerabilities, chained together in different ways to create no fewer than five one-click iOS exploits. And the compromised websites attacked not the iPhones of one or a few targeted individuals specifically lured there, but the iPhones of anyone who visited.
Beer estimated that these sites "receive thousands of visitors per week." His use of the present tense hints that the sites are still up and running.
He also noted that the implementation of the exploits was shoddy. The attackers made no effort to encrypt the data their spyware sent back, or to disguise the servers receiving it. Anyone with a copy of Wireshark could have "sniffed" the unencrypted data going out over a Wi-Fi network.
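That's because cleartext exfiltration stands out sharply against normal encrypted traffic. As a rough illustration of what a Wireshark user would key on (this is not the implant's actual protocol; the entropy threshold and HTTP framing here are assumptions for the sketch), a toy payload classifier might separate readable, low-entropy uploads from TLS-like ciphertext:

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: ciphertext approaches 8.0, readable text is far lower."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_plaintext_exfil(payload: bytes, threshold: float = 6.0) -> bool:
    """Flag payloads that are both low-entropy (i.e., readable on the wire)
    and framed as bare HTTP -- traffic an analyst could simply read."""
    low_entropy = shannon_entropy(payload) < threshold
    http_like = payload.startswith((b"POST ", b"GET "))
    return low_entropy and http_like
```

A stolen chat transcript sent as a cleartext HTTP POST would trip both checks; properly encrypted traffic would fail the entropy test alone, which is why skipping encryption made this implant so conspicuous.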
"While the exploits are very complex, the implant is amateur-hour-level stuff," commented malware researcher Jake Williams in a blog post today. "This highly suggests that the exploits and implant were not only developed by different teams, but teams with dramatically different skill levels."
This could be an attacker group that doesn't care if it loses millions of dollars in working iOS exploits -- or one that has reason to believe that working iOS exploits are much less rare than we'd thought.
Should Apple have caught this instead of Google?
The expense of deploying all these zero-days so publicly might have been worth it to the attackers, Beer pointed out, despite the risk of discovery.
"I shan't get into a discussion of whether these exploits cost $1 million, $2 million, or $20 million," he wrote. "All of those price tags seem low for the capability to target and monitor the private activities of entire populations in real time."
But that leaves out the issue of how long it took for the exploits to be discovered. Marcus Hutchins, the man who famously stopped the WannaCry ransomware outbreak and ended up serving jail time as an indirect result, thinks Apple may have dropped the ball.
"Maybe I'm missing something, but it feels like Apple should have found this themselves," Hutchins wrote on Twitter. "Bug bounties are cool and all, but good telemetry" -- the ability to see what your own software is doing on a network -- "is significantly more important."
In a conversation with Tom's Guide, Malwarebytes' Thomas Reed countered that Apple might not have been able to.
"I'm not sure that Apple could have spotted this, primarily because the controls on iOS are so limiting that it makes visibility into an infection on the device almost non-existent," Reed told us. "Of course, there may be telemetry sent back to Apple that I'm unaware of that could have tipped Apple off... but I would think not, given Apple's stance on privacy."
That lack of visibility is part of the problem, Reed added. In contrast to Android, iOS is pretty much a black box. Security researchers have had a hard time analyzing it, and iOS users have no idea what the filesystem on their devices looks like, or even how much RAM their devices come with.
"The fact that this wasn't spotted for two years is quite telling, and I think tells an interesting story," he added. "Apple doesn't allow scanning iOS devices in any way, but if that had been possible, it's likely this wouldn't have lasted for two years."
Alex Stamos, formerly head of security at Yahoo and Facebook and now a professor at Stanford, also blamed Apple's lack of transparency and near-total control of the iOS ecosystem -- two things that until now might have been seen as necessary to preserve high security standards.
"Many things to learn from this incident, but one is the safety cost of anti-competitive iOS App Store policies," Stamos tweeted. "Chrome/Brave/Firefox are required to use the default WebKit/JS [to run on iOS, making them merely skinned versions of Safari]. If Apple isn't going to put in the work necessary to protect users then they should let others do so."
"It's darkly ironic that Apple is the company that is demonstrating the end point of late-90's fears about Microsoft," Stamos added.
He listed three things that Microsoft was accused of 20 years ago, and which are arguably true of Apple today: "rent-seeking via platform control" such as Apple's 30% cut of iOS app revenue, "content moderation on behalf of autocracies" -- Apple has cooperated with the Chinese government on censorship -- and "risk of software monoculture," the results of which we can see with yesterday's disclosure.
So how do we fix this?
The upshot is that iOS now clearly has a security problem. I didn't expect to ever say that, but the rock of iOS security had already been chipped away at a bit -- a different set of Google Project Zero researchers exposed many flaws in iMessage earlier this summer.
We asked Reed if Apple might want to consider permitting third-party antivirus software on iOS devices, as Android has.
"I don't actually think antivirus software running on iOS is the answer," he replied. "Not only do I not think Apple would ever approve that, it would also deliver potentially dangerous capabilities into the hands of iOS developers.
"What I think would be better would be some Apple-sanctioned means for accessing the filesystem on an iOS device," Reed said, specifying that even that should be possible only under tightly controlled conditions.
In the long run, the knowledge that iOS is not that secure may be a good thing. Apple seems to know it too -- earlier this month, it said it would give approved researchers access to special iPhones that would be easier to hack into, and it raised the "bug bounty" on iOS flaws that independent researchers discover to a maximum of $1.5 million.
Last night's revelations put Apple's transparency-boosting decisions in a new light. Perhaps Apple realizes that it now needs the hackers on its side.