Recently, a Boston-based organization called the Future of Life Institute composed an open letter warning of the "potential pitfalls" of artificial intelligence. Their concern? That, given the vast and growing power of artificially intelligent systems, one day, a rational AI might decide humanity should no longer run shit. The Institute put it slightly more tactfully, asking that technologists engage in "expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial," cautioning that "our AI systems must do what we want them to do."
The letter comes across as science fiction, and if not for the names underwriting it, it very well might have been dismissed as such. However, when people like Stephen Hawking and Elon Musk—along with various Oxford scholars, researchers at Harvard, and employees of Google's DeepMind project—sign your open letter warning that the computers might take over, people tend to pay attention.
The possibility that this might happen is alarming, certainly: nobody likes being told what to do, especially when the entity that's telling you what to do is your mechanical overlord. But if AI systems retain human qualities—if they have feelings and the ability to express those feelings—would it really be so terrible if they replaced us?
This is, in many ways, a hopelessly human question. It anthropomorphizes systems that are vastly different from us. Still, I can't help but be curious about the potential for AIs to develop interiority, if only because it makes the threat they pose less horrifying. Can artificially intelligent systems ever "feel" as humans do? And will they demonstrate that those feelings exist, the way we do? Will AIs ever be capable of, or interested in, creating art?
The capability question is easier to answer, at least when it comes to producing art. Artificially intelligent systems and even robots have been used in the creative arts for decades, and many have even made their "own" art.
In 1973, the artist Harold Cohen programmed a machine to paint on canvas and christened it AARON. Over the years, he built up its artistic capability, programming instructions and data into its system. By giving it information about the proportions of the human body, he granted it the ability to paint fairly impressive pictures of humans. Cohen would argue that, like an art teacher instructing a toddler, he taught AARON the rules of art, but that the system then went on to express its own agency with what it had been taught.
Contemporary artists have followed in Cohen's footsteps. Ben Grosser, an artist and professor at the University of Illinois, has his own painting system. Grosser spoke explicitly of the machine's agency to VICE.
"We always talk about artificial intelligence in systems that are in service to us," he said. "What might these machines want to do for themselves? What might a machine paint for its own purposes? What are its aesthetic desires?"
Fine art isn't the only medium in which artificially intelligent systems have produced novel and interesting work. In the realm of AI-created music, a group of researchers collaborated on a project in which they used multiple machines to create a "Deep Belief Net" that could then produce improvisational jazz licks. Professors at Canada's Brock University have used AI to help choreograph dance steps.
The systems that scientists and artists are using in these productions are a form of artificial intelligence. They conform to the definition in the warning letter, which focuses on machines' ability to make rational decisions. But many artists, engineers, philosophers, and scientists would say that Cohen's and Grosser's machines, and those like them, are not truly artificially intelligent. They have no agency, autonomy, or self-awareness. Their interest in art was imposed on them by their human creators.
The latter argument seems to be the opinion of Golan Levin, a professor at Carnegie Mellon University who specializes in generative art, in which the artist builds an autonomous system that then either generates the work itself or assists the artist in making it. It's a category to which many would relegate artists like Cohen and Grosser. Levin related a phrase, one he says is commonly known, about the slipperiness of the very term "artificial intelligence":
"Once artificial intelligence exists, it's just engineering," he told me. "It's no longer intelligent."
We imagine that machines with certain capabilities will be artificially intelligent. Once they exist, though, we can't help but continue seeing them as machines. Cohen, Grosser, and various others have created machines that perform tasks we associate with expression. But it's very difficult to argue that these systems are making art of their own accord, and that's essentially true of all the art-making AIs you can find today.
Neil Jacobstein, the co-chair of the Artificial Intelligence and Robotics Track at Singularity University, acknowledges the distinction between contemporary art-making AIs and the potential for autonomous systems with the ability to express a real interiority. The two types fit into preexisting categories that he recognizes as "weak AI" and "strong AI."
"The weak version is to have machines solve problems that only humans could solve previously, in narrow paths or domains," he told VICE. "Strong AI or general intelligence tends to refer to AI that emulates the broad, deep, and subtle intelligence that we associate with human intelligence or beyond. In general, we haven't built systems that exhibit artificial general intelligence yet, but we're working on it."
It's unclear how strong AI might interact with art in the future, or even if it would. In principle, it's possible to have an AI that values everything we value and has similar aesthetic preferences, but it's far more likely that it will have vastly different aesthetic preferences, if it has any whatsoever. As Nick Bostrom, a philosopher at Oxford and one of the world's foremost experts on the dangers presented by AI, puts it in his new book Superintelligence: Paths, Dangers, Strategies: "Human psychology corresponds to a tiny spot in the space of possible minds."
That doesn't stop people like the accomplished visual effects artist Kevin Mack, who has been dreaming of creating strong artificial intelligence for decades. An adherent of the computational theory of mind, which suggests that human brains themselves are essentially biological computers, he believes that strong AI would have emotional intelligence and thus would be likely to create art.
"Feelings and emotion are an integral part of the thinking process," Mack told VICE. "Emotions are sort of like thermostats. They're threshold gates that, when some parameter reaches a certain level, kick in extra incentive, excite certain neurons, and inhibit other neurons."
Mack believes that strong AI is an inevitable development as technology continues to improve. He's particularly interested in neural networks, a technique in which algorithms are designed to mimic the pattern recognition of the human brain.
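Mack's thermostat analogy maps loosely onto the artificial neurons that neural networks are built from. Here is a minimal sketch, not drawn from any of Mack's own systems: inputs are weighted and summed, and the unit "fires" only once a threshold is crossed, with positive weights playing the exciting role and negative weights the inhibiting one.

```python
def neuron(inputs, weights, threshold):
    """A toy threshold-gate neuron: weigh the inputs, sum them,
    and fire (return 1) only if the total crosses the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two excitatory inputs (positive weights) and one inhibitory input
# (negative weight). With the inhibitor active, the gate stays quiet;
# without it, the gate fires.
print(neuron([1, 1, 1], [0.5, 0.5, -0.8], 0.6))  # 0
print(neuron([1, 1, 0], [0.5, 0.5, -0.8], 0.6))  # 1
```

Real neural networks chain thousands of such units together and learn the weights from data, but the excite-or-inhibit logic at each gate is no more mysterious than this.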
"It would be much easier to teach a neural network to make art of its own design, of its own taste and agency than to get it to learn to speak a human language or to understand human behavior," he said.
But even if Mack sees his dreams become a reality with the development of more sophisticated machine intelligence, there will always be those who doubt that computers can have any agency or interiority. One of the most famous opponents of the computational theory of mind is John Searle, a philosophy professor at Berkeley, whose "Chinese Room" argument challenges the very concept of strong AI: it more or less states that even when a computer carries out extremely complicated tasks, it cannot be understood to be intelligent without an innate understanding of those tasks.
The question of whether AIs will ever truly make art comes down to these knotty philosophical puzzles over interiority and agency—ones that are likely to remain unsolved even as the superintelligences that we have been warned of begin to emerge. In a way then, the question of whether they will make art is equivalent to asking whether they will have agency.
But at the same time, the art they do make may serve as an expression of interiority, just as it does for humans. For this reason, scientists like Jacobstein think that it is useless to get bogged down in the semantics of who created what. He prefers that we focus on the actual capabilities of artificial intelligence, both now and in the future.
"For those people who are actually interested in AI and interested in art, the real question is what will the two be able to do in concert," he said. "The answer is 'a hell of a lot.' And some of it is quite beautiful and extraordinary."
Follow Jonah Bromwich on Twitter.