Man As Machine: Checking In With Kevin Warwick

At the forefront of the cyborg world, from both a technical and ethical standpoint, is Kevin Warwick.

One day, probably soon, we’ll all be cyborgs. Honestly, we’re already pretty close. We can’t live without our computers, our phones are glued to our heads, and prostheses have evolved from peg legs into robot territory. Is it so hard to fathom that all the tech in our lives will get faster, smaller, and installed in our brains?

At the forefront of the cyborg world, from both a technical and an ethical standpoint, is Kevin Warwick. A professor at the University of Reading, Warwick carried out his first cyborg experiment a full 13 years ago. Called Cyborg 1.0, the study used Warwick himself as the test subject: an RFID chip implanted in his forearm allowed systems in his labs and offices to recognize and track him, automatically opening doors and the like. That work evolved into Cyborg 2.0 in 2002, when an electrode array implanted in his arm transmitted his nerve signals to robot arms. Since then he’s done things like send neural instructions over the Internet from New York to control his robot hand in Reading, and connect neurally with his wife to send messages in Morse code from brain to brain. The list of awards and honors he’s received is beyond impressive.

It’s been nearly a year since Motherboard made a documentary on Warwick, so we gave him a call to see how things are going. He was kind enough to chat away with us about how he’s making biological brains that control robots, Michael Crichton, his appearance in 2009’s Transcendent Man, Terminator, and the future of man and machine. Talking to him, you get the sense that it’s not a matter of if humans evolve into cyborgs. The real question is whether we survive it.

Want to chat with the cyborg yourself? Head over to Reddit, where he’ll be fielding some of your burning questions.

Motherboard: I was really happy when you mentioned Terminal Man by Crichton, because to me that’s exactly what your research seems based on. In the novel, there are two facets: first, the man connected to computers to regulate his emotions, but then there are also computers being made with biological brains. Now, you’re actually doing that, right?

Oh yeah, tick the box on that one. At least the basics of it, yes. It’s amazing how Crichton was inspirational. I don’t know that he gets, or got, anywhere near as much credit as he should have gotten for that.

It’s a simple brain. It’s two-dimensional at the moment. It’s one hundred thousand neurons, so it’s not the hundred billion of the human brain. But to be perfectly honest, [to advance] it’s more just a technical build-up. There’s nothing stopping us going to three dimensions. In fact there is a team we’re working with on that now. [The three-dimensional brain] takes the numbers up to thirty million. We have human neurons, which we are about to use. We haven’t actually put the human neurons in yet; we want to make sure that when we have them ready to go, we can do it in three dimensions.

To me it’s really just a technological issue of how big the technology is. The technology we’ve got will allow us to put thirty million human neurons into a robot body. We just need that technology to be made a bit bigger so instead it can be one billion. As it happens, I don’t see any reason why we can’t build biological brains that are much larger.

I think we’ve got all sorts of issues with it. Even with one hundred thousand neurons, I think the brain gets bored. We’ve got hardly any sensory input to it. If you can imagine only having an ultrasonic input giving you a signal every two minutes or so, and that’s the only sensory input you have, you’d be bored stiff in a matter of seconds or a few minutes. I think we need to increase the sensory input and give the brain more abilities. We have to think what it’d be like if you were the brain and bored stiff. It’s not just building the brain, but it’s giving it some input and output as well.

So is this an actually living brain you’ve connected to robots? Do you have to feed it?

Typically we’ll have about 25 brains on the go at one time and they have to share the body. We can only really [have one at a time]. It’s the interface. We do have four or five of the robots, but practically there’s only one robot connected to one brain at any one time. It’s a body-sharing exercise. The brains develop differently depending on how much time they have in the body. It’s a Bluetooth connection; I do have a student who’s working on a mini-incubator to put on top of the robot so we could have the brain and the robot together, but there are mechanical issues, like vibration. The human brain is temperature controlled, and there’s vibration damping from the liquid around the brain. There’s a lot of mechanical design in there as well and we need to improve on that.

So at the moment, there are 100,000 neurons connected to the robot through Bluetooth. It is living, it’s in an incubator at 37 degrees centigrade. We have to feed it every couple of days with minerals and nutrients. It actually excretes; we have to clear away waste products and whatnot. It is living material, living neural material, connected up to a technological robot body.

Do you program the brain, or does it respond on its own to stimuli that you give? Do you have any control of it at all?

Well, um, limited control. After about one week we see how it’s developed. We can stimulate different electrodes. It’s the same kind of biphasic stimulating pulse that we used for my nervous system, the same type anyway. So we stimulate different electrodes and we see how the brain has developed as to where it is giving a response, if you like, a considered response, on the other electrodes. We use that inherent [trait] that’s developed over the first week or so to connect it to the robot body. When the robot gets close to an object, a pulse is provided into the brain, the brain then thinks about it, and the robot will change direction when the brain puts a pulse out on another of the electrodes. We don’t program it, but we do use the way it’s developed.
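To make that loop concrete, here’s a minimal sketch of the kind of closed-loop control Warwick is describing: an ultrasonic reading triggers a stimulation pulse into the culture, and activity recorded on an output electrode steers the robot. The interface names, electrode numbers, and thresholds are illustrative assumptions, not the lab’s actual software.

```python
# Hypothetical sketch of the sensor -> stimulation -> response -> motor loop
# described above. The `culture` and `robot` objects stand in for the lab's
# multi-electrode-array and robot interfaces; electrode numbers and thresholds
# are illustrative, not actual experimental parameters.

import time

OBSTACLE_CM = 30       # stimulate the culture when an object is this close
SPIKE_THRESHOLD = 5    # evoked spikes that count as a "change direction" signal


def brain_in_the_loop(culture, robot):
    """Run the closed loop: nothing is explicitly programmed; the culture's
    evoked responses decide when the robot turns."""
    while True:
        distance_cm = robot.read_ultrasonic()
        if distance_cm < OBSTACLE_CM:
            # Biphasic pulse on the input electrode when an obstacle is near
            culture.stimulate(electrode=12, biphasic=True)

        # Count spikes on the electrode that developed a response during
        # the culture's first week of growth
        spikes = culture.count_spikes(electrode=47, window_ms=100)

        if spikes > SPIKE_THRESHOLD:
            robot.turn(degrees=45)        # the culture's "decision" to avoid the object
        else:
            robot.drive_forward(speed=0.2)

        time.sleep(0.1)                   # ~10 Hz update over the Bluetooth link
```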

What we’re looking at is how that decision making improves over time. We can actually look under the microscope and see the neural pathways getting stronger, and we can see the performance of the robot improving in the way it moves around. Eventually it doesn’t bump into walls: two weeks after we’ve connected it up it’s bumping into the walls quite often, but after two months, for the robots where the brain has learned well, it’s moving around and never bumps into the wall at all. It’s absolutely perfect. It’s a bit boring when it does that.

We don’t sort of program it, we let it develop itself neurally. It’s exciting to see that, how the neural pathways strengthen themselves.

So if I could just sum this up, you have a learning, living brain that you’ve created that connects to a robot body through Bluetooth. That’s what you just told me.

That’s right. Exactly that.

That is absolutely out of this world.

(Laughs) Well it’s not, that’s what we’re doing. It’s in the world.

I think the question, if you’ve got humans interfacing with robots and robots with biological brains, is where the nexus lies in the future. You were in the documentary Transcendent Man, which was dedicated to discussing that topic. What do you think about the competing views in that doc, which ranged from Ray Kurzweil saying that with continuing tech advances we’ll end up living forever, to people thinking cyborgs will bring on the apocalypse?

I quite like Kurzweil; he stirs things up a bit. He’s a nice guy, but he’s a guru, so I put a challenge out to him to have a try himself: have an implant or two, which I don’t think he’s had yet, but hopefully he will do. I think that the experience, practically, is important. The fact that we do actually build robots with biological brains, the fact that I have personally had implants and experienced what it actually means. I do find it interesting.

I know that people get criticized, Ray gets criticized, Hugo de Garis gets criticized, I get criticized, by people who don’t really know too much. I think the folk that were brought together by Barry Ptolemy, the producer, in Transcendent Man were people who are involved with the technology. They’re not just futurists. They’re involved, they talk to other people that are involved, so I think most people had a practical concept of what could and what is likely to happen.

Having said that, you get different conclusions. In terms of the longevity, how long people are going to live, I think my issue with it… I can see that we already are living longer, and we’re going to live longer still because of the technology. But at the same time, neurally, we’re seeing all sorts of problems: a lot more people with Parkinson’s disease, a lot more people with dementia. Hence, in a lot of my research, I work specifically on technology to alleviate a lot of the problems of Parkinson’s disease. The work with the neurons in the robot is partly to try to figure out how to retain memories or how to overcome the problems of dementia. With our little bottle brain we can start messing around and try to find results.

Hopefully I’m contributing technically to the debate or discussion on living longer. I have a view on that, but I hope that we’re going to not only live longer but live longer and healthier. I don’t just want to see us living to the age of 2,000 but getting dementia at the age of 50, where you live for 1,950 years with dementia. That would be absolutely awful, I think. I’m concentrating more on the healthier than the longer in that respect.

In terms of the apocalypse, I think for anybody who researches machine intelligence — artificial intelligence — either you start with the philosophy that the human brain is the best there is and all we can try to do with a machine brain is to copy it. Marvin Minsky, for example, I know that is his philosophy, and a lot of the old-school artificial intelligence people [have that philosophy]. If you have that, you can drink your cocoa at night. You can relax and say that there’s not going to be an apocalypse in this sense, and that’s fine.

However, if you say machines can be intelligent, that machines have a different [intelligence] to that which humans have, and potentially it could outperform human intelligence — and I’m one of those that say that — then the apocalypse scenario is a realistic one. You can’t rule it out as long as intelligent machines have the power, have the capability, ultimately to make their own decisions. Just to quote science fiction scenarios, you’re into the Terminator scenario.

I think there is a drinking cocoa, relaxing option if you assume that machines can’t have that [intelligence], but I think most researchers now, the vast majority in artificial intelligence, would not be of the cocoa drinking fraternity. They would be looking at machines having the potential to be much more intelligent than humans or having that sort of powerful capability. And in that case, the natural conclusion, although some of them would not like to admit it because it’s not politically OK to admit it, is that the apocalypse scenario could happen.

I think the view that machines can and will be smarter than humans is more common these days, but I also think that a lot of people who aren’t involved with research in the field think that the only possible outcome is the Terminator scenario: computers get smarter and smarter until they decide they don’t need humans any more. With your work blurring the line between human and robot, do you think that at the Singularity, the point when computers are actually smarter than humans, that fear is a moot point because we’ll already be part computer anyway?

I think if you look at the apocalypse scenario, I’m one that would certainly not want it to happen, and hence if there are AI researchers that are just going ahead with putting more intelligence in [machines], I think they’re the ones being irresponsible. Looking at combining with [machines], upgrading humans, I think is a very, very sensible and practical option. Hence, that’s where the main focus of research should be.

I know Stephen Hawking, who is a very eminent, well-respected scientist around the world, has said exactly the same thing: that the threat of the apocalypse — I don’t know if he used these exact words — but the apocalypse threat is a real one, particularly with Moore’s Law. The fact that computing power is doubling every year and so on, it’s a realistic threat to humans, and therefore we should really focus our efforts on combining humans and technology in a mental sense, a neurological sense.

I think Ray says similar things from time to time as well. So a lot of the really eminent people in the field are pointing to that as a realistic option. In that case, it might be a moot point where we don’t even get into the scenario where machines are deciding to go their own way. The Terminator scenario as portrayed in the film doesn’t appear because we’re upgrading as humans and going forth as cyborgs, as part human, part technological entity. Probably [we] won’t look anything like Arnold Schwarzenegger, but more that we’ve got a regular human body of some form [and] maybe our brains are networked in.

We know what path we’re heading down, with computers getting ever faster and smarter. Ethically, how do you control what is basically an intelligence arms race? What do you think is our best path?

That’s a good question, and in a sense I will just chicken out from answering. I think there will be commercial interests that will come on board quite heavily. People talk about the digital divide in society. I don’t think we’ve got a digital divide at the moment. It really stretches society, [but] it’s still one society, from the poorest person who’s starving in Africa to the richest person who’s living in the lap of luxury in Manhattan. I think it stretches society, but with this, with cyborg technology, I do think it can create a divide. The people who have their brains linked into a machine brain network, like an AI system, will be so different intellectually, so superior if you like, to the humans that haven’t got such a link that it does create a divide. And ethically, there are different issues.

I think ethically, if you’re fighting for the rights of humans, then the cyborg technology is not a good thing. But if you’re looking at the rights of cyborgs, then of course the technology is a good thing. And of course you get into a sort of arms race as to who’s got the better loaded brain and who’s got the better network, so you’ve got those technological issues. But that’s a cyborg problem.

There will always be people who refuse to adopt a trend. Do you think that in this case those people will become so marginalized that they can’t even compete any more?

Well, Hugo de Garis describes these people as Terrans. The Terrans, in a sort of Luddite fashion, would fight against the ones that want to upgrade and go forward. I think Hugo might be right, that there could be Terrans, but I think realistically they would not have the technology… I can’t see them competing for five minutes without the technology. There may well be Terrans, but I don’t think that they’ll last very long. I think that they would be very quickly burned out or left behind, just like that. Yeah, there will probably be a few people who grumble about it.

However, if we look at, for example, how cell phones have taken off. In the first instance there were some people carrying around cell phones that were the size of a shoe box next to their head and they were all a bit geekish, but the technology moved forward, the technology was made smaller, and now there is so much social pressure to have a cell phone almost fixed to your head wherever you are. You have to have it switched on all the time. If you haven’t got a cell phone then you are almost an outcast. So, it could well be that if the technology takes off, even the whole cyborg thing with implants and what not, there will be so much social pressure in the Western world that you have to go for it. You will be seen to be some strange sort of person, a Terran as Hugo calls them, if you don’t go for it.

So from your standpoint, at some point we’ll basically all be cyborgs, right?

Well, a large number of people will be, yes. People that maybe think now ‘Oh, I will never do something like that’ will get caught up in the tide and will have to go along with it whether they like it or not. I think [with] a lot of technological things you have that. If you went back about 15 years, maybe not even that far, if we’d been talking about blasting lasers into your eyes you’d say ‘What? No, I’m never going to do that!’ and now you’ve got people saying ‘Yeah, I’m going to have laser eye surgery!’ as if it’s something really cool. They’re special because they’re having it done. It’s the same people that 10 years ago wouldn’t even have considered it. I think people shift in their opinions on this, and even now there are people happy to have implants, but I think that number will increase dramatically when they see how much more powerful they can make their intellectual capabilities.

@derektmead

Image: Chris Beaumont