
Jacked-In Soldiers and Military Neuroethics: An Interview with Bioethicist Jonathan Moreno

When our brains pretty much define who we are, what happens when you start adding tech in there? And what happens when you take it away?

Think of how you treat your computer or smartphone now: It lets you access and store more information, more quickly, than the brain alone could at any point in human history. Now imagine that you could do all that without lugging around a laptop or worrying about your battery dying. It would all be in your head, all the time. Sure, it kinda sounds creepy, but it also sounds amazing, right?

While we’re not all the way there yet, we’re headed in that direction. The neuroscience field is brimming with new research into how the brain works and how to control it. And we’re getting close to brain upgrades: breakthroughs in mobility assistance for quadriplegics are coming all the time.

Who’s driving a lot of neuro research? The military. Much of it is health related, like figuring out how to make prosthetics work more seamlessly and helping diagnose brain injuries. But the military’s involvement highlights the basic ethical quandary of neurological development: When our brains pretty much define who we are, what happens when you start adding tech in there? And what happens when you take it away?

Jonathan Moreno is quite possibly the top bioethicist in the country, and, along with Michael Tennison, he recently penned a fascinating essay on the role and ethics of using neuroscience for national security. He also recently updated his book Mind Wars, a seminal look at the military’s work with the brain. I had the chance to chat with him about brain implants, drones, and what will happen when military tech hits the civilian world.

Let’s just jump right in: In terms of the military, what’s the next step in neurological advancement?

I don’t think that there’s a single next step; I wouldn’t put it that way. I would say that there are different things happening on different fronts. Some of them are already happening; others are farther out.

I mean, what’s already happening is the way the internet is changing the combat situation – just in terms of information that can be sent up to the cloud and pulled down on an iPhone app. You might not call it neuroscience, but it affects the way people work together in groups in a very intense situation. That kind of stuff is already out there, and it’s being used in counterintelligence situations.

Jonathan Moreno.

Near term, there’s a lot of research into interrogations, right? What are the ethical boundaries of interrogating someone?

Coming out of this big debate about torture: in my view, torture is a very unprofessional and misleading way of doing interrogation. Interrogation is done every day. It’s done by police every day. People have completely lost track of the larger context of interrogation.

Interrogation is mainly about gaining information through a rapport with the subject. It’s not about torture. That’s a lot of bullshit. Interrogation is basically police work and psychology.

So we are now trying to learn more about what makes people tick with respect to disclosing information about whatever group they’re associated with. That’s more the soft side of psychology.

There’s an interesting speculation about whether you can manipulate certain naturally occurring brain hormones that are involved in trust and social relationships. The question is, if you can boost the production of those things, whether or not that would be acceptable.

Wow.

It’s interesting, but still very much in the experimental stage. There’s also research into different measures for keeping people awake and alert. It used to be caffeine and nicotine, and then speed; now we’re in a new era, with drugs like Provigil actually being used in clinical studies. There’s a huge potential market for what I’ll call “anti-sleep medication.” I think there’s a lot of money in that area, and you’ll see some development there.

What brain tech is coming farther in the future?

Somewhat further out, there’s external neurostimulation: transcranial direct current stimulation, or transcranial magnetic stimulation. There’s some evidence that if you put people under a magnet – a cheap, portable magnet, actually – you can change the polarity of ions, help them learn faster, and maybe wake them up. You might see that in the next 10 years in vehicles, maybe in 20 years in helmets. There’s a lot of interest in that area.

Much further down the road, in terms of devices, the question is whether you can make an artificial brain organ, like the hippocampus, which is associated with learning and memory. Maybe then you could, as one science fiction author put it, “jack in”: have a permanent port on your skull, pop something in, and download information that way. I think that’s very interesting, but it’s very speculative – and it’s brain surgery. We’re not doing that anytime soon!

What about the idea of brain-powered drones, or neural implants used for controlling interfaces? That’s obviously far away, but is it something you’re discussing in terms of ethics? Or are there closer things you’re more focused on?

I think the brain-machine interface is something we’re going to see a lot of work on. The interesting question here is whether a direct connection between a device and your brain – rather than going through nerve impulses by pressing a button or pulling a joystick – actually creates practical advantages, or whether it doesn’t.

Pushing a little farther forward is the idea that, if we can deconstruct the way the brain works and understand its creativity and spontaneity, we can write computer programs with that kind of creativity. Will we be able to actually recreate that creativity with algorithms, so that devices like drones can work totally autonomously, without human control?

How would that be advantageous?

One of the things the human brain does that computers can’t is understand context. The question then is how we mimic the brain’s outputs in a way that shows us how it produces them. What’s inside the black box? How does it work?

So if we learn enough about the brain to write computer programs that actually mimic the way the brain works, maybe that can include the creativity of the human brain. And then you could have an autonomous war fighter (maybe in the air, maybe on the ground) that would have some creativity, but would also have some advantage over human war fighters because it wouldn’t be carried away by its emotions. You could program it to have the rules of engagement in its system.

It would not go out and, as in the case of Robert Bales, massacre people because of psychological problems. It wouldn’t feel anger or remorse over losing a buddy and seek revenge; it wouldn’t engage in rape and pillage.

There are some real ethical advantages to that. But that’s way, way out in the future I think. There is a real debate right now, in the military establishment, at high levels at the war colleges, over whether autonomous lethal engagement is even acceptable from the point of view of the military ethos. It’s definitely a conversation that people are having.

I feel like that’s long been the holy grail for any military: someone who is creative and smart but emotionally unattached. Where do these two things – more creative robots, and humans more integrated with machines – intersect? Do you think the military will bring on the singularity?

Well, where you’ve got me hesitating is the meaning of “singularity” – I don’t really know what that is. I mean, there are 18 different definitions. Some people would say we’re already at the singularity. Try to imagine running the world without the internet.

Some people mean the more Steven Spielberg kind of singularity, where you have a self-conscious, self-aware creature that is not biological but is also human. That has obviously been great fodder for science fiction for a long time.

I’m not sure exactly what the singularity means, but I don’t think it’s going to happen in one dimension. This is multidimensional. We’d like to reduce it to Robin Williams playing the android, but that’s not the way it’s going to look. It’s going to be all sorts of different devices with all sorts of different properties, and some of them are going to be mixtures of biological and cybernetic.

There already are, with pacemakers and prosthetic devices. There are going to be very strong arms; there are going to be exoskeletons. This is going to happen in all sorts of shapes and sizes. That, in a way, makes it more interesting, and a little harder to get your arms around when you write about the ethical, legal, and social implications of the singularity.

Definitely. But it’s a big leap to go from a drone piloted by a human to something completely autonomous. What is your view of the ethics of fighting war with autonomous robots?

I’m not even sure that taking the human being completely out of the loop, tempting as it is, is advantageous. Americans have a reputation in the military world for liking gadgets, for pushing gadgets as far as possible. But I’ve also heard officers say that relying on electronics for everything is a really bad mistake, because there are huge advantages to having a very diverse array of people in the military – lots of different personalities – since different personalities can be important in different situations.

You want somebody with a great sense of humor, and you want the over-the-hill macho guy, because you never know when those qualities are going to be needed, when they’re going to be called upon in a given situation. Once an operation starts, all hell can break loose – nobody knows what the variables are going to be.

You know, in Star Wars the robot army always loses because they’re all the same. That doesn’t help you in a conflict. You need a wide variety of kinds of creatures, and that’s going to be as true of autonomous creatures as it is of human ones. On top of that, you’re going to want both autonomous and human creatures.

So how do you think these types of developments can trickle down to the civilian world?

Oh, well, they do constantly. Just take the case of the internet… or, over in the field of psychochemicals, my favorite example is LSD, which started out as an object of concern for the CIA in the ’50s and the army in the early ’60s (and not only our army, but other armies too) and then turned into the iconic drug of the ’60s counterculture.

The thing about neuroscience is that it is inherently dual use. Just about everything we’ve been talking about has a potential civilian or medical application. That’s why it’s hard to decide what society ought to do about these advances. The same medication you’d use for a soldier, you’d use for a truck driver to keep him awake – which is already being done.

The same remote neuroimaging device you could put in a special operations person, to see how they’re managing stress in the mountains of Afghanistan when they’re all alone, you’d put in a person with Parkinson’s disease to see how they’re doing between doctor visits. That’s what’s so interesting to me about this area of research: it’s all dual use, and it trickles back and forth between civilian and military purposes. Of course, that’s also why regulation is somewhat difficult.

Compare the civilian crossover of a strictly military technology, like a more advanced computer, to that of a neurological advancement. From your view, is there an ethical difference between those types of trickle-down?

Well, one ethical difference is that, for a brain implant, experiments have to be carried out on actual people. People tend to forget about this. There’s always a first user. Therefore, there are very strict rules about how you do such an experiment.

It’s one thing to assemble a computer and to start playing games with it, it’s another to put something into somebody’s head and then start playing games with them. So that’s one really big difference.

And then of course there’s a general philosophical point: the brain is a very important organ. We do think that it is the basis of human personality, that it is where the self is located, so whenever you mess around with the brain, you’re getting into some pretty heavy territory ethically. You could be changing someone’s personality in a way that is not reversible.

And then, of course, we haven’t talked about the privacy issues. There’s the problem of when somebody leaves the service and they’ve been accustomed to a certain device that expands their consciousness. What happens when they have to give it up?

A lot of veterans, people who’ve done three or four tours – when they have to turn in their rifle, the first thing they do is go out and buy a gun. The whole ethical question of what you can do to a soldier or a fighter, and then what you can take away from them, is a lot different from messing with a computer.

That reminds me of something I was thinking about earlier: What happens when, say, a drone-human interface is inside your head? Like a rifle, at some point, the military would have to take it away. I mean, can you turn that off? Is that something that could become part of your brain function automatically?

We don’t know. I mean, the honest answer is, we just don’t know. It’s too soon, and it’s going to take years to sort that out.

The guys who dropped the bomb on Hiroshima – that moment changed their lives. The same can happen to drone pilots. It’s a pretty intense experience, so in some way it will, but we don’t know exactly what that means.

I think that’s a good segue into what was a big part of your essay: you talk about researchers whose goal is to secure funding, do good work, and publish in big journals. But when that work is being done for the military, do the researchers have a responsibility to take a step back and look at what they’re doing?

Right. Well, there’s the example of the bird flu research in the biology world. They were apparently able to make a more virulent bird flu. Can you publish that or not? Will scientists now start to ask questions about their own work? How far does their responsibility extend?

The truth is that most of the neuroscience that is supported by DARPA is basic science. We’re probably learning more about the brain itself than about how to integrate neuroscience with the military.

You know, those applications may come down the road or they may not come at all. That’s just the nature of science.

My favorite example of this problem is that Albert Einstein had to be convinced that his special theory of relativity could be relevant to making a bomb. That wasn’t something he perceived until a group of his colleagues approached him in the late ’30s and asked him to sign a letter to FDR warning him about the potential. Even someone as smart as Einstein didn’t necessarily see all the implications of his basic theory.

I think the only thing we can really do is approach it with as much care as we can – but we don’t stop. We probably have to keep going; I think we do. I think you will see more conversations about international conventions on the use of some of these developments.

The big international treaties right now – the Geneva Conventions, the Declaration of Helsinki on human research, the Chemical Weapons Convention, the Biological and Toxin Weapons Convention, the Universal Declaration of Human Rights – give us a body of international law, but it doesn’t necessarily fit the stuff you and I are talking about.

So the last question I have for you: what’s going to happen to us in the future, in terms of the brain? Do you think we’ll all be jacking in and drinking specialized energy cocktails, or is there a point where people say enough is enough?

I don’t think that people will ever say that everything goes. We are going to put up roadblocks at some point. That doesn’t mean that people will never try to jump over them, because they always do. But there will definitely be new social conventions about what is acceptable and what isn’t.

It might be that we create different institutions where you can do some things and not others. I mean, you can’t be a nudist everywhere, but you can be a nudist at a nudist park, right?

You’re not allowed to take steroids, but, on the other hand, maybe there will be special athletic events where you’ve accepted the risks and you’ll be able to take steroids and jump higher and faster, and so forth.

I think it’s going to be a combination of conventions about what you can and can’t do, along with other institutions that serve as safety valves, where we let people do things that aren’t normally acceptable. That’s what we’ve done in the past, and I think that’s how we’re going to handle this, in a very general way, in the future.

Follow Derek Mead on Twitter: @derektmead.