Is the White House Declaring A 'War on Hackers'?

President Obama is pushing a series of new cybersecurity initiatives that could weaken, and even criminalize, independent research into data vulnerabilities.

Reacting to fear and unease surrounding a string of security breaches at Sony Pictures and other major US corporations—not to mention the compromised social media accounts at US Central Command on Monday—the White House has spent much of this week pushing a raft of new cybersecurity regulations and initiatives. Officials say the new proposals—which include legislation that would incentivize private companies to voluntarily share data with one another and with the government—would provide much-needed protection against attacks. However, cybersecurity experts argue the proposals are at best a soft response to the country's core data security issues, and could potentially weaken or even criminalize academic and private sector research and communication on software vulnerabilities.


Let's start with the good. As cybersecurity technologist Davi Ottenheimer notes, the president's plan to streamline the tangle of consumer data breach notification deadlines is a promising new measure. A 30-day timeline for notifying consumers that their data has been breached may strike a balance between consumer transparency and the needs of law enforcement investigators. The administration has also invested in better information sharing that it argues will remove a critical stumbling block that has previously hindered security cooperation between the private sector and national security agencies. No longer will companies fear liability for sharing relevant information with the Department of Homeland Security.

However, information security reporter and analyst Brian Krebs points out a sad but obvious gap in Obama's plan: it does not address the most significant underlying problem. That is, what incentive do private sector entities have to embrace stronger security practices? This is especially pertinent in light of the disturbing prevalence of breaches caused by companies failing to adhere to even basic security precautions. The Sony hack in particular, whatever its origin, can be largely explained by the movie studio's lack of appreciation for security engineers' concerns and recommendations. Such a cavalier attitude is neither novel nor difficult to understand—data security is expensive, and consumers ultimately pay for functionality, not protection.


Moreover, it's not clear that greater government and private sector cooperation is the most relevant communication problem facing the US. Experts have observed that a larger and more pervasive problem is that corporations are not very good at communicating with developers and consumers about bugs and privacy concerns. They also have a lot of trouble collaborating among themselves on such issues, which further hurts customers.

Other aspects of Obama's proposals could very well hurt the country's cybersecurity. Technologists have long loathed the Computer Fraud and Abuse Act (CFAA), seeing it as both dangerously vague and unnecessarily punitive. The current CFAA prohibits three kinds of unauthorized computer access: subverting a technological access barrier (breaking into someone's computer), violating a term of service or an employee contract, and murky "norms-based" actions that violate some generally accepted social practice.

Courts have routinely stretched the CFAA, a prominent example of which can be found in the Aaron Swartz case. Swartz used a pseudonym to bulk-download academic journal articles from JSTOR, taking numerous steps to regain access when he was blocked for doing so. He was slapped with 13 felony counts, some of which suggested he was "recklessly damaging… a protected computer." However, MIT, the academic institution in question, and JSTOR did not have any interest in prosecuting Swartz. The CFAA, on the other hand, provided justification for the feds to intervene, with tragic results.


The CFAA has also impacted information security research and reporting. So-called "full disclosure" of online security vulnerabilities—the widespread practice of publicly outing problems in order to force vendors to fix them—can constitute a criminal offense under the CFAA. Indeed, hacker Andrew "Weev" Auernheimer infamously got into hot water for discovering a vulnerability in AT&T's website that exposed the email addresses of iPad owners.

Far from addressing the flaws in the law, it's possible that Obama's new security proposals will actually make the problem much worse. In fact, ErrataSec researcher Robert Graham blogged that the new regulations amount to an effective declaration of a "war on hackers." He points out that if the new regulations go through, it would become a felony to intentionally access "unauthorized information" even if it had been posted to a public website. In other words, clicking on links to some of the information leaked during the recent CENTCOM hack could be a crime. Additionally, Graham explains that the new regulations make it a crime to exchange private information—which could be interpreted as merely posting a link.

Unfortunately, that's just the beginning. Obama has called for the Racketeer Influenced and Corrupt Organizations (RICO) Act to be applied to cybercrimes. Originally designed as a blunt instrument to attack gangs and mobsters, RICO has been applied liberally by law enforcement, which has used it, for example, to dismantle youth crews on the basis of social media posts. If applied to cybercrime, Graham observes, hanging around a chatroom with someone the government thinks is a malicious hacker might make you an accomplice.

So while Obama's focus on information-sharing and consumer support is welcome, Sony-like breaches will keep occurring as long as companies neglect investment in security. And the proposals could set a dangerous precedent for information-sharing, limiting information to corporations and the government—neither of which has been very good at figuring out its cyber problems—while silencing and perhaps criminalizing the efforts of security researchers looking to uncover vulnerabilities in the software and systems the country depends on.

Adam Elkus is a PhD student in computational social science at George Mason University and a columnist at War on the Rocks. He has published articles on defense, international security, and technology at CTOVision, The Atlantic, the West Point Combating Terrorism Center's Sentinel, and Foreign Policy. Follow him on Twitter.