1. Support Public Media, Off and Online
Advertisers are unbelievably empowered online, and this power is in desperate need of a public, non-commercial counterbalance. As things stand, advertisers control the purse strings of practically all of the most visited websites (Wikipedia, the only nonprofit that significant numbers of people use regularly, is the exception that proves the rule). Advertisers used to need publishers or television stations or billboards to reach an audience, but now they can follow individuals across the web and through their apps, tracking us to gather data about our proclivities and preferences and targeting us with personalized appeals. As the aphorism rightly states, "If you are not paying for the product, you are the product."

As internet users, we often complain about the consequences of commercialism without acknowledging the root cause. We lament surveillance and the death of privacy without looking at the incentives driving corporate data collection. We talk about being addicted to or distracted by social media, but not about the fact that advertiser-driven sites and services need us to restlessly "engage" to boost the metrics that determine how much money is made. (Where Chomsky spoke about manufacturing consent, we now live in an era of manufactured compulsion, because our obsessive clicks, whether we "like" or hate whatever we're looking at, directly translate into cash.)

The only solution to this madness is public media, which, contrary to right-wing fearmongering, doesn't mean state-controlled media.
2. Don't Forget Our Values
That optimism was too often accompanied by a fundamental disdain for law and politics, which were seen as messy obstacles rather than guarantors of shared values. The masters of the surveillance economy managed to exempt America's tech sector from the standard consumer protections provided by other industrialized democracies by arguing successfully that it was "too early" for government regulation: It would stifle innovation. In the same breath, they told us that it was also "too late" for regulation: It would break the internet. Meanwhile, the high priests of the security state not only concealed their radical expansion of government surveillance from the public but also affirmatively lied about it, while ensuring that legal challenges were shut down on secrecy grounds. In both sectors, little thought was given to the possibility that we might one day hand the keys to these systems to malevolent actors with contempt for democratic norms.

One person who did try to warn us was Edward Snowden, who in 2013 presciently anticipated our current moment: "A new leader will be elected, they'll find the switch, [they'll] say that 'because of the crisis, because of the dangers we face in the world, some new and unpredicted threat, we need more authority, we need more power.'" Snowden worried that a leader with authoritarian tendencies, perhaps emboldened by public reaction to terrorist attacks, would turn the government's colossal surveillance systems inward, against its own minorities or dissidents, a nightmare scenario that seems a lot less far-fetched today.

How many of today's entrepreneurs are imagining how their innovative products might be used against their own customers?
3. Rethink Our Governing Ideals So We Can Reconnect with Natural Systems
4. Start with Tech Companies
Technology companies, and in particular social media companies, have to wake up to the fact that their platforms are being used for illegal activities with, at times, devastating consequences. These companies need to take more seriously their responsibility for the actions carried out on their platforms. And these multibillion-dollar companies need to harness their power to more aggressively pursue solutions.

The current wave of troubling online behavior is not new. In the early 2000s, in response to the growing and disturbing abuse of young children, the Department of Justice convened executives from the top technology companies to ask why they were not eliminating child predators from their platforms. These companies claimed that a range of technological, privacy, legal, and economic challenges prevented them from acting.
The broader point is that technology companies, with the exception of Microsoft, were less than forthcoming in their assessment of their ability to remove this horrific content and of the impact that removal might have on their business. Despite this lesson, we continue to hear the same excuses when technology companies are asked to rein in the abuses on their platforms. Most notably, in the fight against online extremism, technology companies have been largely indifferent to the fact that their platforms are being used to recruit, radicalize, and glorify extremist violence (although major technology companies recently announced a proposal to act, they have not yet done so). This is particularly frustrating given that technology similar to PhotoDNA could go a long way toward finding and removing extremism-related content.

Technology companies should be motivated to act not just for the social good but also for their own good.
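The idea behind PhotoDNA-style detection is, at its core, a fingerprint lookup: known prohibited images are reduced to hashes supplied by a clearinghouse, and each upload is checked against that list. PhotoDNA itself is a proprietary perceptual hash; the sketch below substitutes a plain SHA-256 exact-match set, which illustrates the lookup but, unlike a real perceptual hash, would miss even trivially altered copies.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash such as PhotoDNA (proprietary);
    # a cryptographic hash only matches byte-identical copies.
    return hashlib.sha256(image_bytes).hexdigest()

# Fingerprints of known prohibited content (hypothetical example data).
blocklist = {fingerprint(b"known-bad-image-bytes")}

def should_block(upload: bytes) -> bool:
    # Compare the upload's fingerprint against the shared blocklist.
    return fingerprint(upload) in blocklist

print(should_block(b"known-bad-image-bytes"))  # True
print(should_block(b"harmless-photo"))         # False
```

The hard engineering problem, which this sketch elides, is making the fingerprint robust to resizing, cropping, and re-encoding; that robustness is what PhotoDNA provides for imagery and what analogous tooling would need to provide for extremist content.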
5. Educate and Be Inclusive in the Cognitive Era
6. Utilize Autonomous Vehicles to Rethink Mobility
7. Take Power Back
Although some abuses of technology can be frightening, the pros often outweigh the cons, and with some conscientious thought, we can make better decisions about how and what technology we integrate into our lives.

A simple backup plan can make all the difference. If you have data that is important to you, be it emails, pictures, or documents, create your own backups. If your data is ever deleted from the cloud by mistake or by malice, you can be confident you have a local copy. To guard against a local disaster such as fire or theft, using an inexpensive cloud-based backup service is also sensible.

Does your toaster need to be on the internet via your personal network at home? Does your car really need to send you email, and if it does, should it use your personal email account?
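The backup plan described above can be as simple as a script run on a schedule. A minimal sketch, with throwaway directories standing in for your documents folder and an external drive (the paths are hypothetical):

```python
import shutil
import tempfile
from datetime import date
from pathlib import Path

def backup(source: str, dest_root: str) -> Path:
    """Copy a folder of important files into a dated backup directory."""
    dest = Path(dest_root) / f"backup-{date.today().isoformat()}"
    shutil.copytree(source, dest, dirs_exist_ok=True)
    return dest

# Demo with temporary directories; in practice, point `source` at your
# documents and `dest_root` at an external drive or second disk.
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    (Path(src) / "photo.jpg").write_bytes(b"irreplaceable")
    copy = backup(src, dst)
    print((copy / "photo.jpg").read_bytes() == b"irreplaceable")  # True
```

Paired with a cloud backup service for off-site copies, even a script this small covers both accidental deletion and local disaster.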
8. Hold Technology Accountable to the Public
Now to the more general question: How can we make technology work better for us? The answer, as above, depends on who we are.

The truth is, technology is already working great for those who own it. Facebook might be distorting our sense of reality and creating an uninformed, hyper-partisan citizenry with its automated feeds and trending topics, but it's also making record profits. So anyone among the tiny group of Facebook co-founders, or the small group of Facebook stockholders, might say technology is working great. But if we represent the public at large and are concerned with an informed citizenry, not so much.

Let's make technology work better for us by making the public an explicit stakeholder whose benefit or harm is taken into account.
We've spent too long welcoming technology with open arms and no backup plan. We especially deify big data and machine-learning algorithms because we believe in, and are afraid of, mathematics. We expect the algorithms and automated systems to be inherently fair and unbiased because, well, because they're mathematically sophisticated and opaque. As a result of this blind faith, we seem more afraid of Facebook hiring biased human editors than of Facebook's greedy algorithm destroying democracy.

It's time to hold technology accountable to the public. The obstacles to doing this are real. The complexity, the opacity, and the lack of an appeals system team up to make automation feel inevitable. Facebook and Google don't have customer-support numbers to call, just as Michigan didn't have a mechanism to appeal a MIDAS fraud decision. Moreover, the harms are often diffuse, externalities that are nearly impossible to quantify precisely, whereas the profits are concentrated and very easily counted. Even so, we have enough examples of deep harm to know that secret, powerful, and unaccountable technological systems are damaging us.
But there's good news. These are design decisions, not inevitabilities. MIDAS, for example, could be reprogrammed to weigh a false accusation as being just as damaging as an undetected fraud, making it far more sensitive to that particular wrong, even at a slight loss of income from fines. Social media could be designed to promote civil discourse instead of trolling, and could expose us to alternative views instead of confining us to echo chambers, even if that meant our spending slightly less time shopping online.

We could stop getting better at facial recognition, online-tailored advertising, automated romantic partnering, and all other kinds of creepy predictive analytics for the next ten years and simply focus on what kind of moral standards we want our AI to subscribe to and promote, and we'd be better off as a society.
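The reweighting described for MIDAS is a standard cost-sensitive design choice: the threshold at which a system flags someone for fraud follows directly from the relative costs assigned to a false accusation and an undetected fraud. A minimal sketch with invented numbers:

```python
def decision_threshold(cost_false_accusation: float, cost_missed_fraud: float) -> float:
    """Minimum probability of fraud at which flagging is cheaper than not flagging.

    Flag when  p * cost_missed_fraud > (1 - p) * cost_false_accusation,
    i.e. when  p > cost_false_accusation / (cost_false_accusation + cost_missed_fraud).
    """
    return cost_false_accusation / (cost_false_accusation + cost_missed_fraud)

# Treating false accusations as nearly costless invites mass flagging:
print(decision_threshold(1, 99))   # 0.01 -- flag on the flimsiest evidence
# Weighting both wrongs equally demands real evidence before accusing:
print(decision_threshold(50, 50))  # 0.5
```

Nothing about the math forces the first choice; which row of costs a system encodes is exactly the kind of decision the public should have a say in.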
9. Empower Health-Data Stewards
I've spent much of my research career thinking through ways to bypass the disconnected health-data landscape and still produce insights on disease at a global scale. Today, these technologies leverage the exhaust of our very public and increasingly digital lives to construct detailed characterizations of individual and population health. Combined with our clinical data, the power to influence health becomes very personal, and consumers of healthcare are demanding progress.

For the father of a sick child in France, the need for shared health data was the most personal of journeys. Early last year, this father took to Twitter desperately searching for someone to diagnose his ailing child, his tweet listing a number of genetic markers thought responsible for the disease. A physician at Boston Children's Hospital noticed the post by happenstance, and within hours, a team of genetics experts and physicians was engaged. Within weeks, the patient's whole exome was sent to Boston for interpretation and, ultimately, a diagnosis.

On its face, many will blame technical complexity for the slow movement toward integrated personal-health data sets, but interoperable standards are increasingly commonplace. In reality, the likely culprit lies in reimbursement models that lack incentives for reducing cost and increasing the value delivered. It took a realization of new efficiencies and pressure to reduce risk, enabled by technology, for the banking industry to open frictionless data-sharing networks mediated by industry data stewards. In healthcare, the economic benefits of open sharing will need to pressure integration, and integration will create a new economy of health-data stewards trusted by the health consumer.

The solution to open sharing? We, healthcare consumers, must empower trusted data stewards to aggregate all aspects of our very personal health data.
Achieving this level of trust will take work on all fronts: creating legislation that protects consumers, expanding reimbursement models that favor value, and building trusting relationships among healthcare providers and the companies entrusted with medical data. Only as these economic and social pressures grow will we see a future where data stewards enable patients to own and port their health information freely.

The best part of achieving a future of frictionless, open health-data sharing? Personalized medicine becomes a reality for everyone. Data sets previously thought unrelated to health become the early-warning indicators that prevent disease. Our environments begin acting intelligently to promote wellness, and detecting disease between doctors' visits becomes the work of advanced, always-learning algorithms operating in the background.

This future is already being realized in isolated systems constructed to test the power of integrated health data. When our team from Boston Children's Hospital and Merck began testing the relationship between insomnia and Twitter use, patterns emerged linking Twitter usage to sleep, and, surprisingly, the trends observed were somewhat counterintuitive.
In diabetes management, researchers are already integrating passive data from wearable devices to understand the relationship between activity data and insulin dosing. And companies with remote patient-monitoring solutions are already integrating the first wave of consumer-device data sets so clinicians can begin seeing patients' progress between visits in a more definitive way.

The aggregation of all aspects of personal health will enable a future where advanced machine learning and intelligence can be applied at the patient level. These experiments in data sharing are just the beginning for what is already an evolved machine-learning capacity begging for integrated data at scale.

This next frontier in understanding and bettering human health is within sight, but a refocusing on the role of data stewards is needed to get us there.