When the U.S. government couldn’t force Apple to give it access to the iPhone used by the shooter in the San Bernardino massacre, it reportedly paid $1 million for a secret software vulnerability that gave it full access to the phone. These undiscovered software bugs — so-called “zero day” vulnerabilities — are highly coveted by intelligence agencies, which consider them essential tools in the war on terror.
This month’s leak of top secret CIA documents, together with a recent leak of NSA hacking tools, shows that the U.S. government is an avid user of these undiscovered software bugs, and the agencies stockpile them as part of an expanding global cyber-arms race. What they don’t do is disclose those vulnerabilities to the companies that make the products they want to penetrate, such as Apple in the San Bernardino case.
“It would mean unilaterally disarming themselves in cyberspace,” security expert Robert Graham told VICE News. “The biggest use for zero days is for hacking phones — iPhones and Android — because that is what terrorists have as their primary computing platforms. Taking those zero days away from [the intelligence community] would probably have a big impact on what they do.”
But many believe this practice undermines the security of everyone. The very idea that the government, charged with protecting its citizens, is hoarding cyberweapons that undermine citizens’ digital security makes some observers bristle. If the government keeps these vulnerabilities a secret, what’s to stop criminals from also buying them on the black market and targeting them at innocent people?
“The longer you haven’t reported it, the higher the likelihood it will eventually leak,” Jeremiah Grossman, a prolific web security researcher, said last August when a group known as The Shadow Brokers revealed the NSA’s hacking tools.
A new report from the RAND Corporation, however, suggests that most undiscovered software bugs stay secret. The report puts the chance of someone else independently discovering the same vulnerability in a given year at roughly 5 percent, meaning the risk associated with failing to disclose them would be limited.
The suggestion that the U.S. government should disclose its zero-day vulnerabilities makes no sense to Graham. “The argument has no logic to it,” he says, “because if you demand they disclose all vulnerabilities they acquired, then they will just stop acquiring vulnerabilities.”
The RAND report offers a first real glimpse into the world of zero days, after researchers there gained access to a database of more than 200 zero days owned by a company that sells them to governments and other customers on the so-called gray market.
The research reveals that up to 25 percent of zero-day vulnerabilities persist for over a decade, with the average life expectancy of such flaws estimated at 6.9 years — the time from the vulnerability being found to when the vendor discovers it and issues a patch, or when a software upgrade inadvertently fixes the mistake. “I am trying to bring data and science to the discussion,” lead researcher Lillian Ablon told VICE News.
By their very nature, zero-day vulnerabilities are a mystery. No one knows how many of them are out there, which pieces of software they target, how often they are being used, or against whom.
So what do we really know about zero days?
A zero-day vulnerability is a bug in software code that could allow an attacker access to the affected system and that has not been disclosed to the vendor. The name refers to the fact that the vendor has had zero days to fix the flaw, which means no patch or fix is available.
Zero-day vulnerabilities can affect any software, and with estimates suggesting anywhere from 3 to 20 bugs per 1,000 lines of code, there is a lot of potential for problems, especially when you consider that Apple's iOS, for example, is thought to consist of over 8 million lines of code and the U.S. Army's Future Combat Systems over 60 million.
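To put those estimates in perspective, a back-of-the-envelope calculation using the defect-density range and line counts quoted above (both are rough public estimates, not audited figures) looks like this:

```python
# Rough bug-count estimates from the defect-density range quoted above:
# 3 to 20 bugs per 1,000 lines of code.
def bug_estimate(lines_of_code, low=3, high=20):
    """Return the (low, high) estimated bug count for a codebase."""
    per_thousand = lines_of_code / 1000
    return int(per_thousand * low), int(per_thousand * high)

ios = bug_estimate(8_000_000)    # Apple's iOS, roughly 8M lines
fcs = bug_estimate(60_000_000)   # Army Future Combat Systems, roughly 60M lines

print(ios)  # (24000, 160000)
print(fcs)  # (180000, 1200000)
```

Even at the low end of the range, that is tens of thousands of bugs in a single operating system, only a tiny fraction of which need to be security-relevant for the zero-day market to have a steady supply.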
Most of these bugs will be relatively harmless. A vulnerability is a special kind of bug that creates a security weakness in the software’s design, implementation, or operation.
Finding a vulnerability is just the beginning. In order to take advantage of the flaw, you need to weaponize it by creating an exploit — something to infect, disrupt, or take control of a computer. Not all vulnerabilities can be exploited, but when an exploit works, the ultimate goal is remote code execution, whereby the compromised system runs an attacker's code without the user's knowledge.
Who creates them?
Zero-day vulnerabilities are discovered by hackers, researchers, governments, and companies that specialize in developing cyberweapons to sell to intelligence agencies and law enforcement.
Many companies run public bug bounty programs where they challenge researchers to find flaws in their systems — including zero-day vulnerabilities — paying them a fee for disclosing them. Even the U.S. Army and the Pentagon are now taking this approach to hardening their systems.
But a lot of people just want to keep zero-day vulnerabilities secret. Companies like Hacking Team and FinFisher, which create spying software for governments and intelligence agencies, covet vulnerabilities that have yet to be disclosed. While both companies do discover their own vulnerabilities, they also rely on a network of independent researchers to do the grunt work and find the flaws in the lines of code.
According to Graham, these are highly skilled engineers who reverse-engineer software to find vulnerabilities that can be exploited. They are not run-of-the-mill cybercriminals. “Criminals are quite the opposite of skilled hackers,” Graham said.
Who buys them?
Zero-day vulnerabilities don’t come cheap. According to the RAND report, most exploits in the gray or government market sell for $50,000 to $100,000, and some can go for as much as $300,000. The FBI is reported to have spent $1 million for a zero day that allowed the agency to hack into the iPhone used by the San Bernardino shooter.
The high price makes the market for zero days pretty limited. Essentially only governments and intelligence agencies can afford to pay.
Thanks to the Shadow Brokers leak, which detailed the NSA’s hacking arsenal, and the recent WikiLeaks dump of CIA hacking tools, we know that the U.S. government is in the business of acquiring zero days. The government doesn’t want to get its hands dirty, so rather than going directly to the engineers who find these vulnerabilities, it buys from small, dedicated companies it trusts. “The U.S. government overwhelmingly prefers to buy vulnerabilities from the U.S. people; they don’t like going outside their borders,” Graham said.
The government typically buys well-developed, robust exploits that have been thoroughly tested and that easily integrate with the other hacking tools they use.
For everyone else, there are much easier and cheaper ways of hacking targets. “Phishing attacks and exploiting the human element are a heck of a lot easier. Getting someone to click on a link and making it look convincing is a lot easier than trying to find a zero-day vulnerability in a product,” Ablon said.