How AI Can Help Us Understand Pain and Suffering

Pixar technical director Alonso Martinez sees AI as a tool for empathy.

This article originally appeared on Motherboard

Pretty much the only successful aspect of the societal response to the opioid crisis in the United States is that we're actually talking about it. That wasn't a given. But the fact that we're talking about it, that we've acknowledged the problem, doesn't mean we're very good at talking about it. We aren't. That's because we rarely talk about pain. Mostly, the opioid conversation imagines a predatory pharmaceutical industry and a population that's been tricked into thinking its pain is not actually very important.

But pain is a problem. And we don't talk about it. Part of that is simply that we don't know how to talk about pain. And a large part is that pain is hard to understand in an empathetic way: it is not easily observed in others. In medicine, it's a self-reported symptom, and the main tool for this self-reporting is a simple scale from one to 10, often accompanied by cartoon faces in various states of anguish.

Pain and suffering was an unlikely topic at this week's Re-Work AI Assistant Summit in San Francisco, and Alonso Martinez was an unlikely speaker. Martinez is a technical director at Pixar Animation Studios and a "not research-grade" roboticist on the side, but his presentation wasn't really technical at all. Instead, he presented a thesis of sorts about suffering. (Note that the opioid angle is my own, but it seems like an appropriate entry point into a conversation about pain in 2018.) Martinez wants a new and better definition of suffering, one that's less reliant on human-to-human subjectivity and is instead enabled by artificial intelligence.

Martinez pointed out that we have an extremely accurate technological approach to pain quantification in the form of MRI scanning, but that it is expensive and highly invasive. We can also imagine it becoming unnecessary in an era in which machines can parse vast troves of data to build predictive models of once unquantifiable things, such as suffering reflected in the microexpressions of a face.

Martinez wasn't terribly concerned with the implementation of this new understanding of suffering. Rather, it just seems like something we should be able to do, or that we will be able to do. And this ability might mean some other interesting things for how we view the world through artificial intelligence. "Through data, we will show insights into ourselves," he said.

Martinez finds hope in the AI pursuits of style transfer and Go playing, both abilities in which he sees computers exhibiting real creativity. Real creativity is itself a form of learning, or of learning through discovery. If computers can do this, they can begin to understand even more difficult human experiences, such as our emotions.

This was a 15-minute presentation, but it could probably have been unpacked over the course of days. The field needs more people like Martinez, who are interested in using AI not to take us away from ephemeral human experiences, but to bring us closer to them.