Zero-days—vulnerabilities that are unknown to the vendor of the product they affect, but that hackers may use to break into systems—are a contentious subject. Activists and many technologists argue that keeping these vulnerabilities secret among a small group of people, such as the government hackers who use them, puts the public's cybersecurity at risk.
If a government agency uses a zero-day exploit, what's to stop another group, such as a criminal enterprise, from finding out about the attack too? Shouldn't the government promptly disclose these vulnerabilities so they can be fixed?
But one crucial thing has been missing from the zero-day debate: data. Now, a new study from the RAND Corporation aims to change that. Drawing on details of over 200 zero-day vulnerabilities, it tries to shed light on questions such as how long zero-days remain undetected and what percentage of them are discovered by more than one party.
"This report provides really valuable analysis and, crucially, cold hard data into a debate that often runs on bare assertions and anecdotes, rather than statistical evidence and analysis," Matt Tait, the founder of Capital Alpha Security and a former information security specialist for GCHQ, told Motherboard in an email.
Researchers Lillian Ablon and Andy Bogart write in their report, titled "Zero Days, Thousands of Nights": "The results of this research provide findings from real-world zero-day vulnerability and exploit data that could augment conventional proxy examples and expert opinion, complement current efforts to create a framework for deciding whether to disclose or retain a cache of zero-day vulnerabilities and exploits, and inform ongoing policy debates regarding stockpiling and vulnerability disclosure."
According to the report, the dataset spans some 14 years, from 2002 to 2016, and over half of the vulnerabilities included are still unknown to the public.
The researchers don't say who provided the data—only that it came from a vulnerability research group, dubbed BUSBY to protect its anonymity. Some BUSBY researchers have worked for nation-states, the report authors write. The data comprised dozens of exploits for Microsoft and Linux platforms, and also included attacks against Mozilla, Google, and Adobe products.
The first major finding is that the average life expectancy of a zero-day exploit and its underlying vulnerability is fairly long: 6.9 years, or 2,521 days.
A quarter of the vulnerabilities do not survive past a year and a half, the report continues, while another quarter live for over nine and a half years. Nor did the researchers find any particular characteristic of a vulnerability—such as the affected operating system or source code type—that clearly predicts a short or long life.
Logan Brown, CEO of Exodus Intelligence, a research firm that sells zero-days for both defensive and offensive purposes, told Motherboard in an email that this lifespan is somewhat different from his own experience.
"From what I have seen, the average shelf-life is closer to 1 year, with a minimum being 1 month, and maximum being 3 years," Brown said. Exodus recently provided a Firefox zero-day to a law enforcement customer. Brown also said that the security industry has grown exponentially year by year, and patching, disclosures, and zero-days have become much more prevalent in around the last five years.
The other major finding, and one that is likely to trigger serious debate, is that for a given stockpile of zero-day vulnerabilities, after a year only around 5.7 percent have been discovered by an outside entity.
So, does this low discovery rate mean there is no evidence for the idea that governments put everyone at serious risk when they don't swiftly disclose vulnerabilities to the affected vendors?
"That's 100% correct as an interpretation," David Aitel, a former NSA security researcher and now founder of cybersecurity company Immunity, told Motherboard in an email. (When looking at the full 14-year period that these vulnerabilities covered, the researchers found a 40 percent overlap.)
Vulnerability disclosure has hit headlines in several cases recently. In February 2015, the FBI used a "non-publicly known vulnerability" to hack over 8,000 computers across the world; Mozilla, the maker of the Firefox browser, wanted the FBI to disclose the vulnerability to it (the FBI did not). In March of last year, the FBI paid unnamed researchers to break into an iPhone belonging to one of the San Bernardino terrorists, and seemingly did not tell Apple which security issue the attack used.
Under the Vulnerabilities Equities Process (VEP), a White House process implemented in 2014, government agencies are supposed to disclose vulnerabilities to affected vendors so the issues can be fixed. That is, if a group of agency representatives decides the attacks are no longer needed for law enforcement or intelligence purposes.
The NSA has previously claimed that it discloses 91 percent of the vulnerabilities it discovers, although that figure seemingly referred to its own disclosure process rather than the VEP.
"In this line of thought, the best decision may be to stockpile only if one is confident that no one else will find the zero-day; disclose otherwise," the report reads.