If the FBI learns of a security vulnerability in a piece of software that many people use, is it obligated to report that vulnerability so it can be fixed before hackers take advantage of it?
In the case of the recently unlocked iPhone belonging to a San Bernardino shooting suspect, the FBI says no—but only because it paid an outside contractor to hack into the phone, and therefore doesn't actually know what the vulnerability is.
This distinction is important, because there is a policy that directs the government to disclose information about non-public security vulnerabilities to the companies that could fix them in order to protect their customers.
Should the FBI, or other government agencies, tell Apple when it finds a security hole? What if the subject of the investigation was a smart home alarm system, instead of an iPhone? What if the vulnerability is in the infrastructure behind a city's electrical grid?
The White House disclosure process, a largely mysterious procedure called the Vulnerabilities Equities Process, was set up to deal with these situations. It hasn't had much attention, but now the San Bernardino iPhone—and the FBI's apparent inability to disclose the security vulnerability it exploited—has put it into the spotlight.
What is the Vulnerabilities Equities Process?
The VEP, written in 2010 but not properly implemented until 2014, is how the government decides whether, and when, to disclose vulnerabilities to vendors so they can be fixed, or to hold onto them for their own purposes.
Vulnerabilities that are not known to the manufacturer of the affected hardware or software are commonly called zero-days. Governments use zero-days in sophisticated cyberattacks or investigations, surveillance companies discover and sell them, and sometimes criminals take advantage of zero-days too.
Who does it apply to?
The VEP "applies, as written, to all vulnerabilities that are not publicly known, and newly discovered by the US government or its contractors," Andrew Crocker, staff attorney at the Electronic Frontier Foundation, who successfully obtained documents related to the VEP through a Freedom of Information lawsuit last year, told Motherboard in a phone call.
The agency or the component of the government that finds the vulnerability, or buys it, is responsible for introducing it into the process, Crocker said. But, theoretically, agencies must submit any zero-days to the VEP.
What is the procedure?
That agency notifies the coordinator of the VEP, known as the Executive Secretariat. That used to be the NSA, but the job has seemingly been handed over to the National Security Council, Crocker added.
Information about the submission is then filtered down to other agencies that may have an interest or stake—what the government calls an 'equity'—in the vulnerability, for offensive or defensive purposes.
"If it's a vulnerability in a web browser like Firefox, it seems like anyone who uses Firefox or has people using Firefox have an equity in it," Crocker said.
On the flip side, exceptions exist for law enforcement or intelligence use, meaning that those involved in the VEP might decide to not inform the vendor so the vulnerability can be exploited.
If the decision is made to withhold details from vendors, that decision is supposed to be revisited multiple times a year, Crocker added.
How often does the government follow this procedure?
Because of the intense secrecy around the VEP, "No one really knows if it's followed in any cases," Crocker said. But on Tuesday, Reuters reported that the FBI gave Apple its first vulnerability tip earlier this month.
National Security Council cybersecurity coordinator Michael Daniel said in an interview with WIRED that the policy's "strong bias is going to be that we will disclose vulnerabilities to vendors."
The NSA has claimed it discloses 91 percent of the vulnerabilities it finds, although that figure seemingly referred to the agency's own disclosure process rather than the VEP, Crocker added.
"I don't [think] anyone outside of the government knows exactly how they decide who has these equities and how much that changes from situation to situation," he said.
What happens if the government doesn't disclose a vulnerability?
When the US government chooses to not disclose a vulnerability, that naturally has a bearing on more than just the work of other agencies.
Vulnerabilities can affect software used by hundreds of millions of people every day, from device operating systems to web browsers and beyond. When an agency keeps a vulnerability quiet, it is deciding to leave that vulnerability unpatched—leaving not just criminals or investigative targets susceptible to attack, but everyone else too.