When law enforcement and intelligence agencies in Canada discover flaws in computer software—say, a bug that could help hackers steal messages from a smartphone, or spy on unsuspecting victims via internet-connected webcams—do they disclose those holes to the software's creator so they can be plugged? Or do they keep such flaws secret for their own use in future investigations, with the hope that no one else will find and use them maliciously first?
These types of weak spots, if left unpatched, can pose a very real security risk to users. But unlike its counterparts in the US, the Canadian government has never gone on the record about how it handles the disclosure of newly discovered software bugs.
Often referred to as zero-day vulnerabilities, such flaws are valuable to spies and police because their existence is not widely known, not even to the companies that make the affected software. They can thus be used to gain access to computer networks, smartphones, or other electronic devices again and again, until the vulnerability is discovered, or disclosed and patched.
Critics have argued that by keeping knowledge of such bugs secret, not only from consumers but from the software's creators, the government is compromising the privacy and security of users so that it can build an arsenal of secret bugs for use in future digital attacks. The longer zero-day bugs remain unpatched, the more likely it is that criminals or other governments will discover and exploit them. Some zero-day exploits, such as the ones used in the Stuxnet attack on a uranium enrichment facility in Iran, can go undetected for years.
In the United States, there is a policy called the Vulnerabilities Equities Process, or VEP. First introduced in 2010, it determines how and when the US government discloses information about flaws it discovers—or purchases—to the industry at large.
The VEP is supposed to weigh this trade-off between national security and user security by evaluating the implications of whether a bug is disclosed. In Canada, however, there is no publicly available documentation that suggests whether or not a similar process exists.
"I'm not aware of one," said Imran Ahmad, a lawyer at Cassels Brock in Toronto who works with clients on issues related to cybersecurity, privacy and data breaches. "To my knowledge, there's no formal process by which law enforcement regularly communicates with software manufacturers to flag vulnerabilities that they've come across in their own testing, to the extent that there's testing going on."
In an email, the RCMP would not answer questions about its policy for disclosing software vulnerabilities, nor about whether police purchase, discover, or use software exploits as part of their investigations. "We generally do not comment on specific investigative methods, tools and techniques outside of court," wrote Sgt. Julie Gagnon.
Ryan Foreman, spokesperson for the Canadian government's cyberspy agency Communications Security Establishment (CSE), wrote in an email that CSE shares "cyber threat information" with government stakeholders that "may originate from CSE's own analysis," but did not specifically address software exploits, nor whether a policy comparable to the VEP exists.
The VEP isn't perfect. In 2014, The New York Times reported that the NSA typically reports bugs to software companies, unless those bugs can be used for "a clear national security or law enforcement need." Then, last November, the NSA wrote on its website that it had released "more than 91 percent of vulnerabilities discovered in products," but did not specify when the disclosures were made, nor how long those vulnerabilities had been exploited before being disclosed, if they were exploited at all.
Recent court battles, such as the fight between Apple and the FBI for access to a locked iPhone, and the FBI's mass-hacking of a child predator ring operating on the dark web, demonstrate the extent to which US police have relied on such bugs in their investigations. But as Electronic Frontier Foundation staff attorney Andrew Crocker recently told Motherboard's Joseph Cox, because the VEP is a closely held secret, "no one really knows if it's followed in any cases."
Apple, for example, told Reuters earlier this week that April 14 was the first time the FBI had ever disclosed flaws in Apple software to the company. And though an annual report on the VEP's implementation is required, none of those reports has ever been made public.