Frank White is the author of The Overview Effect: Space Exploration and Human Evolution. He is working on a book about artificial intelligence.
Recently, two tech heavyweights stepped into the social media ring and threw a couple of haymakers at one another. The topic: artificial intelligence (AI) and whether it is a boon to humanity or an existential threat.
Elon Musk, founder and CEO of SpaceX and Tesla, has been warning of the dangers posed by AI for some time and called for its regulation at a conference of state governors in July. In the past, he has likened AI to "summoning the demon," and he founded an organization called OpenAI to mitigate the risks posed by artificial intelligence.
Facebook founder Mark Zuckerberg took a moment while sitting in his backyard and roasting some meat to broadcast a Facebook Live message expressing his support for artificial intelligence, suggesting that those urging caution were "irresponsible."
Musk then tweeted that Zuckerberg's understanding of AI was "limited."
The Musk/Zuckerberg tiff points to something far more important than a disagreement between two young billionaires. There are two distinct perspectives on AI emerging, represented by Musk and Zuckerberg, but the discussion is by no means limited to them.
This debate has been brewing under the surface for some time, but has not received the attention it deserves. AI is making rapid strides and its advent raises a number of significant public policy questions, such as whether developments in this field should be evaluated in regard to their impact on society, and perhaps regulated. It will doubtless have a tremendous impact on the workplace, for example. Let's examine the underlying issues and how we might address them.
Perhaps the easiest way to sort out this debate is to consider, broadly, the positive and negative scenarios for AI in terms of its impact on humankind.
The negative scenario, which has been personified by Musk, goes something like this: What we have today is specialized AI, which can accomplish specific tasks as well as, if not better than, humans. This is not a matter of concern in and of itself. However, some believe it will likely lead to artificial general intelligence (AGI) that is not only equal to human intelligence but also able to master any discipline, from picking stocks to diagnosing diseases. This is uncharted territory, but the concern is that AGI will almost inevitably lead to Superintelligence, a system that will outperform humans in every domain and perhaps even have its own goals, over which we will have no control.
At that point, known as the Singularity, we will no longer be the most intelligent species on the planet and no longer masters of our own fate.
In the scariest visions of the post-Singularity future, the hypothesized Superintelligence may decide that humans are a threat and wipe us out. More hopeful, but still disturbing, views, such as that of Apple co-founder Steve Wozniak, suggest that we humans will eventually become the "pets" of robots.
The positive scenario, recently associated with Zuckerberg, goes in a different direction: It emphasizes more strongly that specialized AI is already benefiting humanity, and we can expect more of the same. For example, AIs are being applied to diagnosing diseases and they are often doing a better job than human doctors. Why, ask the optimists, do we care who does the work, if it benefits patients? Then we have mainstream applications of AI assistants like Siri and Alexa, which (or who?) are helping people manage their lives and learn more about the world just by asking.
Optimistic observers believe that AGI will be difficult to achieve—it won't happen overnight—and that we can build in plenty of safeguards before it emerges. Others suggest that AGI, and anything beyond it, is a myth.
If we can achieve AGI, the optimistic view is that we will build on previous successes and deploy technologies like driverless cars, which will save thousands of human lives every year. As for the Singularity and Superintelligence, advocates of the positive scenario see these developments as more an article of faith than a scientific reality. And again, we have plenty of time to prepare for these eventualities.
The AI pessimists and optimists may seem locked into their own worldviews, with little apparent overlap between their projected futures. This leaves us with tweetstorms and Facebook Live jabs rather than a collaborative effort to manage a powerful technology.
However, there is one topic on which both sides tend to agree: AI is already having, and will continue to have, tremendous impact on jobs.
Speaking recently at a Harvard Business School event, Andrew Ng, the cofounder of Coursera and former chief scientist at Baidu, said that based on his experience as an "AI insider," he did not "see a clear path for AI to surpass human-level intelligence."
On the other hand, he asserted that job displacement was a huge problem, and "the one that I wish we could focus on, rather than be distracted by these science fiction-ish, dystopian elements."
Ng seems to confirm the optimistic view that Superintelligence is unlikely, and the thrust of his comments therefore centers on the future of work and whether we are adequately prepared. Looking at just one sector of the economy, transport, it isn't hard to see that he has a point. If driverless cars and trucks do become the norm, thousands if not millions of people who drive for a living will be out of work. What will they do?
As the Musk/Zuckerberg argument unfolds, let's hope it sheds light on a significant challenge that has gone largely unnoticed for far too long. Forging a public policy response represents an opportunity for the optimists and pessimists to collaborate rather than merely debate.