Keren Elazari is a former hacker and security analyst, known online as @k3r3n3. She is a senior researcher at the Tel Aviv University Interdisciplinary Cyber Research Center.
It's 2016 and hacking is all around us. What was once a pastime for smart, curious nerds has become big business for the underworld and a lucrative tool in the arsenal of nations. Everyone is hacking everyone, testing each other's defenses, constantly looking for weaknesses, for loopholes.
Yet contrary to popular belief, not all hackers are bad—and in fact, some are working hard to fix fundamental security problems, while challenging the implicit, blind trust we often place in flawed technology. In a sense, hackers are the immune system for our connected society, forcing us to fix things, or demand something better. Can these hackers actually be the heroes of this fast-changing world? I think so.
In my 20 years in the hacker world, I've seen the many faces our kind wears. Not all of them are menacing grimaces (and it's far too hot in Tel Aviv to wear hoodies, trust me). At times, I have been overwhelmed by the sense of solidarity and support that security researchers and independent hackers around the world show each other.
Hackers are the immune system for our connected society
You won't hear about these efforts in the news—or about the many charities that hackers support; the special blood and bone marrow donor drives at DEFCON; or the hundreds of grassroots community events organized by hacker volunteers, such as Security BSides, where I'll be speaking next week. This is because mainstream media tends to overestimate individual hackers' capabilities yet underestimate our ethics. But that tide is turning: Yes, the odds of being involved in a massive data breach, having your personal photos leaked, or just getting hit by ransomware are definitely on the rise, but it's important to understand how much friendly hackers are doing to make our lives safer.
In 2014 I spoke on the TED stage about my idea that hackers are playing the role of a digital immune system. Today, the efforts and advances made by hacker groups and individual researchers prove that this idea was not as hopelessly idealistic or far-fetched as you might think. As the hacker community matures, we are also showing the world the more responsible, intricate and nuanced aspects of our culture, as reflected in the news and in the rise to fame of shows like "Mr. Robot." It finally seems like more and more people are proud to call themselves hackers.
So how do these hackers make a difference?
One area which is close to my heart is the newfound focus on medical device security, especially where weaknesses have the dramatic potential to harm human lives. Bleeding-edge researchers like diabetic hacker Jay Radcliffe, who hacked his own insulin pump, or security professor turned pacemaker patient Marie Moe have demonstrated the many unseen dangers, design flaws, and security weaknesses in devices people trust their lives with. High-profile demos like the live hacking of an insulin pump conducted in 2012 by the late Barnaby Jack prompted much criticism (and some TV plotlines), but more importantly, catalyzed a change in the approach to cybersecurity in the medical device industry. It also helped draw public attention, and government scrutiny. Fact is, only after Barnaby's demo did the US government watchdog, the Government Accountability Office, officially recommend that "FDA develop and implement a plan expanding its focus on information security risks."
These individual efforts are supported by the grassroots movement "I Am the Cavalry." Since 2013 this group has been actively encouraging responsible, ethical and meaningful security research work to help identify and prevent life-threatening security vulnerabilities in areas such as medical devices and automotive technologies.
The powers that be, like governments and traditionally conservative corporations, are now finally realizing the potential that lies in collaborating with and learning from hackers, and the critical need for such dialogue. This change of approach is helped along by calls from the security community, as signaled by the creation of the Five-Star cyber safety framework for the automotive industry and the Hippocratic Oath for medical devices, both voluntary attestations to the commitment of two key industries to join forces with hackers now to create a safer tomorrow.
If you consider how many of us already rely on technology to save our lives, and how much of an impact embedded medical technologies will have in the future, getting hackers to help build safer medical devices and safer transportation systems could become the single most important achievement of the security community in this decade, rather than the sensational hacking the media usually highlights.
Hackers can even help the government: the Pentagon, a bastion of old-school power structures, has opened its arms to embrace hundreds of friendly hackers who competed to participate in the first ever "Hack The Pentagon" bug bounty program. The success of this pilot program is a testament to the adaptive spirit needed at this moment in time.
Such efforts further my belief that it's time to think about the ecosystem. It's not us against them, whether you're a government, a hacker, or a corporation. We're all in this together, because securing our future is not just about the connected devices. Cybersecurity is about securing our way of life, not our passwords or our webcams. It's about the choices we ALL make every day: the guy who clicks on the malvertising ad, the lady who doesn't update her software, the agencies that hoard software vulnerabilities so they can use them, the companies that produce faulty code and take their sweet time fixing it, the legislators and regulators who criminalize legitimate security research, and the consumers who couldn't care less. We're all part of one big, vulnerable ecosystem.
That's why I find hope in the fact that more security researchers worldwide are actively seeking ways to push security forward: "bug bounty" software vulnerability reward programs like Google's show growing participation from regions such as Latin America, sub-Saharan Africa and southeast Asia. Hackers who report to these programs are getting publicly acknowledged and legitimately rewarded for security research work, often for the first time in their lives, and are now starting down the path to becoming part of that much-needed immune system.
More and more people are proud to call themselves hackers.
Another project that aims to give researchers the power to defend the future is 0patch, developed by a team of European penetration testers from Slovenia who grew frustrated with how easy software vendors' slowness to patch bugs has made their security assessment work over the past decade. The routine goes something like this: find a known, recently reported vulnerability in a popular piece of software; develop an exploit for it and use it within two months; deploy, and bam, you're good to go. The testers proved they could use the same tools available to attackers to easily compromise their targets.
This holds for most security testing work, since most of us are too lazy to update our software or install patches. The problem is made much worse by the fact that most software vendors hold a monopoly over both the patches themselves and the decision on when to update the vulnerable software they develop, a process that is a time-sink for them, so bugs remain well known, in the wild, and unpatched, sometimes for years on end.
That's why 0patch developed a new (currently in beta) technology that actually allows security researchers who find bugs in software to create and suggest a "micro patch" for the vulnerability they discovered, alongside the vulnerability disclosure, as they are reporting it. The potential impact is not only significantly shorter times between a weakness being discovered and being fixed, but also better personal security. The "zero patch" agent lets you download micro-patches that automatically fix vulnerable software already running on your machine—without waiting for the software vendor to create or deploy those patches.
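To make the micro-patching idea concrete, here is a toy sketch in Python. 0patch itself applies tiny fixes to compiled code in memory, so this is only an analogy, not its actual mechanism; the vulnerable `render_page` function and the wrapper are hypothetical examples, not a real API.

```python
import html

def render_page(username):
    # "Vendor" code with a classic cross-site scripting (XSS) bug:
    # it echoes user input into HTML without escaping it.
    return "<h1>Hello, %s!</h1>" % username

def micropatch(original):
    # The "micro patch": a minimal wrapper that sanitizes the one
    # dangerous input, leaving the rest of the vendor code untouched.
    def patched(username):
        return original(html.escape(username))
    return patched

# Apply the fix in place, the way a micro-patching agent hooks a
# vulnerable routine without waiting for a full vendor release.
render_page = micropatch(render_page)

print(render_page("<script>alert(1)</script>"))
```

The point of the sketch is the shape of the fix: a researcher who understands the bug can often express the remedy as a few lines wrapped around the vulnerable code path, which is far smaller and faster to ship than a full software update.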
This project is relatively new, but I believe it has great potential to create a new "crowd patching" community, or maybe even patch bounty programs that incorporate trusted third-party reviewers to get more software bugs fixed, faster.
Of course, many of the most critical aspects of our digital life have nothing to do with software: they are in hardware, webcams, home sensors, all kinds of embedded devices and other connected technologies. Shodan was designed to expose such connected devices, everywhere: it's a search engine that helps people identify millions of connected, vulnerable devices worldwide—and it even has a Chrome plugin! Another fantastic effort is called Build It Securely, a community project that is connecting security researchers with the huge, commercial and largely insecure Internet of Things ecosystem.
If you accept my argument that the real bad guys in the digital landscape are not cyber criminals, or even government and military agencies, but the flawed technology we rely on, and specifically the many bugs and vulnerabilities that are now part of our world, then you might think there's a very dark future ahead. For as long as humans write code and create technology, we'll have bugs. And even if we allow AIs to create technology for us, we won't know it's safe until we test it ourselves.
Wouldn't it be naive to expect software vendors, technology giants or even regulators to figure out and prevent all of these issues? One thing we hackers have always been very good at is outsmarting machines. A world in which technology is constantly created and updated is a world full of new and worrying dangers. Many in the global hacker community see themselves as part of the solution, not the problem. Hackers can help.
The Hacks We Can't See is Motherboard's theme week dedicated to the future of security and the hacks no one's talking about. Follow along here.