How the Internet's Collective Human Intelligence Could Outsmart AI

An interview with French philosopher Pierre Lévy on why Elon Musk is wrong about AI.
May 26, 2015, 3:38pm
Screengrab: YouTube

What if computers could take the words we type on the internet and convert them into a language that describes what they actually mean? Analyzing data pulled from social media would then reveal insight into our real motives and feelings, rather than mere statistics.

Pierre Lévy, a French philosopher who's been writing about cyberspace since the 1990s and who is the Canada research chair in collective intelligence at the University of Ottawa, is working on software that can do just this. He's done the math and annotated the entire French dictionary with a language—or, as he calls it, a hyper-language, since it describes words that already form a language of their own—that he calls IEML, or the Information Economy MetaLanguage. All that's left is to do the actual coding to turn it into an automatic system.

"[Collective intelligence] is the opposite of artificial intelligence"

IEML works by describing every word in a given language with symbols that can be arranged to indicate meaning along several axes: empty, virtual, actual, things, beings, and signs. An algorithm capable of recognizing and computing these symbols builds a network of semantic meaning within an IEML text, and computes its relations to other texts. Software that converts natural language into this code, and lets computers communicate with each other using it, would totally alter the nature of online communication and what we can learn from analyzing it.
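To make the idea concrete, here is a loose, hypothetical sketch of computing over meaning codes rather than raw strings. The lexicon, the mapping of words to primitives, and the overlap measure below are all invented for illustration; this is not Lévy's actual IEML algebra, only a toy of the general approach of encoding words into a small symbol set and computing relations over the symbols.

```python
# Six labels loosely echoing Lévy's axes (empty, virtual, actual,
# signs, beings, things) -- used here only as illustrative primitives.
PRIMITIVES = {"E": "empty", "U": "virtual", "A": "actual",
              "S": "signs", "B": "beings", "T": "things"}

# Hypothetical dictionary: natural-language words -> sets of primitives.
LEXICON = {
    "idea":    {"S", "U"},
    "concept": {"S", "U", "A"},
    "person":  {"B", "A"},
    "tool":    {"T", "A"},
}

def encode(text):
    """Translate a text into the union of primitives of its known words."""
    symbols = set()
    for word in text.lower().split():
        symbols |= LEXICON.get(word, set())
    return symbols

def semantic_overlap(a, b):
    """Jaccard similarity over primitive symbols: a crude stand-in for
    computed semantic relations between two texts."""
    sa, sb = encode(a), encode(b)
    if not (sa | sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)

print(semantic_overlap("idea", "concept"))  # shares two primitives
print(semantic_overlap("idea", "tool"))     # no shared primitives
```

The point of the sketch is that once texts are expressed in a common code, "relatedness" becomes a computation over symbols instead of a statistical guess over word frequencies.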

Everyone from the NSA to online advertisers would love such a capability, no doubt, because it would allow agencies to scour the web for meaning itself. But what are the implications of computers communicating with each other on the level of real meaning, in terms of artificial intelligence?


I caught up with Lévy at the IX Symposium for immersive experiences after he gave a talk on IEML to find out.

Motherboard: Can you tell me a little bit about what you mean by "collective intelligence" on the internet?
Pierre Lévy: [Fake sobs] Okay, collective intelligence is deeply engrained in animals; the collective intelligence of bees and ants is well-known. You also have collective intelligence in social mammals. They signal each other about danger and where food is. But in the case of humans, our collective intelligence is much more powerful because we have language, we use technology, and we have complex social institutions. It's a higher level of collective intelligence, and it's fundamentally based on symbolic manipulation.

In human history, there is an increase in collective intelligence based on media that increase our ability to manipulate symbols. Currently, we are at the state of algorithmic symbol manipulation, but it's just the beginning of this new era. My work is devoted to the improvement of collective intelligence by exploiting this new medium.

There are a lot of misunderstandings about collective intelligence. The first is that we have to create it; we don't have to, it already exists. The second idea that should be avoided is that collective intelligence is groupthink. It's just the opposite of groupthink. It is the integration of variety and singularities in the philosophical sense, not the Kurzweilian. It's not uniform, it's thinking together.

"Just because a lot of very well-known people say something, it does not mean that we should repeat it"

Finally, the most common misinterpretation is to say, "Ha! Collective intelligence? But there are so many stupid things on the internet." Once, there was a philosopher, if you can imagine, who said to me [high, mocking voice], "More like collective stupidity!" Alors. I said, collective intelligence is not the opposite of stupidity. It is the opposite of artificial intelligence!

People say, "We are going to make computers very smart," and they get all the funding. But what about making humans smarter? Collective intelligence is a research project about making people smarter with computers, and not making computers smarter than people. That's the real definition of collective intelligence.


As you've noticed, one question means one long answer, so choose your questions very well [laughs].

What would you say to the people who are worried that artificial intelligence will one day wipe us out?
I disagree with them. Just because a lot of very well-known people say something, it does not mean that we should repeat it. I strongly disagree. Computers, or intelligent software, or artificially intelligent programs, will never take power. Never. If there is no one to maintain them, they will disintegrate. Technically, it is impossible.

Also, it is a subtle way to refuse responsibility. The machines are built by people, the software is programmed and designed by people, and so on. They are just the media of our will, our intentions, and so on. They have no will and no intention; they merely extend our intentions and our minds. They have no responsibility whatsoever.

What about when human knowledge is encoded into a machine?
Okay, you have studied philosophy?

Yes, a little bit.
Okay, so Plato's Phaedrus is a very important dialogue. He said the exact same thing that you are about to say, but not with machines—with writing. He said, "What? You are going to put all the information we have into a library?" So, you have a library—say, the Library of Alexandria—and the totality of human knowledge is encoded here, and so we don't need professors anymore. Everyone is becoming unemployed. It's terrible! [laughs]. You know, there were more than 3,000 people in Paris in the 18th century whose job it was to carry water to people. Then, they invented plumbing, and ah! They lost their jobs! It's terrible!

"It should be encouraged, not feared"

So, no. I don't think that we can look at algorithms in such negative ways. I, myself, have worked in what was, and still is, called artificial intelligence. I had to design expert systems. So, I was interviewing a team of experts, trying to understand their expertise. Then, I transformed, with a creative act of transformation, their knowledge into something more formally organized so an algorithm could use it; a hierarchy of rules. And then, what happened is not at all that their knowledge was disqualified. Just the opposite. They became the masters of this expert system. They maintained it once they had built it with my help. It was very useful to distribute their practical knowledge throughout the whole organization.

It's just like printing or writing; it can help to disseminate knowledge. Putting knowledge into software form is a very good way to disseminate knowledge. It should be encouraged, not feared.



How can IEML make humans smarter using computers as a tool?
You have this immense database that is the world wide web. It has one addressing system at the physical level—you can reach every piece of information by the address of the server, and so on—but there is no universal system of categorization. What makes a library useful is its categorization system; without it, how do you find the right documents?

We have a lot of different classifications, we have ontologies, and we have many different ways to classify information, in many different natural languages. Practically, it becomes difficult to find information when you don't know what you are looking for. When you know what you are looking for, Google is perfect. When you try to navigate through knowledge, it becomes harder.

One good way to mitigate the problem is to use Wikipedia, but it is organized in the same way as the Encyclopedia Britannica of the 19th century—the same divisions and classifications. The transdisciplinary aspects are not really there, and you still have the problems of natural languages and so on. My idea is to use a universal categorization system that would be very supple, like natural language; you can say anything you want to describe a document. You are not obliged by any kind of rule to describe it this way or another. But all the descriptions are made in the same language—a language you are not obliged to learn, because you can interact with it through natural language. And this language has a fantastic property because it is an algorithmic code: for every phrase, it is able to display its internal semantic network, and it computes the semantic relationships between a text and all the other texts it is related to.

"What we will be able to do is have a kind of map, where the nodes are ideas and the connections will be computations"

And, if you categorize data with this system—all the data organized by their semantic relationships—and use it for your data curation, using it to describe what you are doing online, ideas will emerge. The ideas that these people are creating together, and the organization of ideas that they are creating together: an ecosystem of ideas emerging from their communication. That's why I am speaking about reflexive collective intelligence: collective intelligence already exists, especially with the internet, but we still don't have any idea of what we are doing together, really.

You know this famous image of the internet, with big networks with connections like neurons, all the colours, and so on? You place nodes by their geographical position, and connections by the quantity of traffic between the nodes—you have no idea of the meaning. What we will be able to do is have a kind of map, where the nodes will be ideas and the connections will be computations and such. It makes sense.
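A minimal sketch of the contrast Lévy draws: a map whose edges are computed from shared meaning rather than traffic volume. The idea labels, their feature sets, and the overlap measure below are all hypothetical, chosen only to show edges emerging from semantic computation.

```python
from itertools import combinations

# Hypothetical ideas, each described by a set of semantic features.
ideas = {
    "collective intelligence": {"social", "cognition", "network"},
    "artificial intelligence": {"machine", "cognition", "algorithm"},
    "world wide web":          {"network", "document", "address"},
}

def overlap(a, b):
    """Jaccard similarity between two feature sets."""
    return len(a & b) / len(a | b)

# Edges are computations over meaning, not counts of traffic:
# two ideas are linked only if they share at least one feature.
edges = {
    (x, y): overlap(ideas[x], ideas[y])
    for x, y in combinations(ideas, 2)
    if ideas[x] & ideas[y]
}

for (x, y), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{x} <-> {y}: {weight:.2f}")
```

Here "collective intelligence" links to both other ideas (through shared "cognition" and "network" features), while "artificial intelligence" and "world wide web" get no edge at all—the graph's shape comes from semantics, not geography or bandwidth.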


Many people confuse computation with quantity. That's not true. Computation can be about quality, even mathematics can be about quality. If we are able to mathematize structural relations, we may be able to compute semantic relations. But, we need the right code.

Image: Flickr/brewbooks

I just have one more question. You're theorizing a way of organizing information that would appear to fundamentally alter how information on the internet is collected and indexed. That would allow whoever operates this system to gain unprecedented insight, at the level of meaning, into data on the internet. Do you think there's a political dimension to this?
Yes, of course. Currently, the power of analyzing big data is in the hands of these powerful entities, like multinationals—I don't know, Google, Facebook…

And so on. Okay? Big governments and big business. They are mainly the people who use these big data algorithms, and it gives them less insight than they pretend, in fact. I know what is behind it: statistics, basically. Statistics can give you information, but not so much, really.

At the political level, my idea is to empower people; exactly the kind of idea from the activists in Silicon Valley in the 1970s who wanted to give computers to everybody. And they did it! Everybody has computing power. The power to analyze and make sense of these huge quantities of data should be in the hands of everybody. This is one of the main ideas.

How to do it? There are basically two main tools. First, the language itself, and two, the software that implements the language, which should be open and free. It would be published under the GPL license version three [a prominent free software license]. And, as Richard Stallman told me, by making every move that we do with IEML transparent.

Not everybody will be able to contribute to the dictionary of IEML, because you need specialized knowledge—linguistic skills, mathematical skills, and so on—but everything that happens at this level will be completely transparent, nothing hidden. And the creation of new tags and so on will be completely open to everybody. Of course, and even more, the application of these semantic tags to data is, by definition, completely free and open.

I do not know what I can do, more than this. You cannot force people to be free, but you can give them all the tools. That is the maximum that I can do.