Think of how you treat your computer or smartphone now: It lets you access and store more information, more quickly, than the brain alone could at any point in human history. Now imagine that you could do all that without lugging around a laptop or worrying about your battery dying. It would all be in your head, all the time. Sure, it kinda sounds creepy, but it also sounds amazing, right?
While we’re not all the way there yet, we’re headed in that direction. The neuroscience field is brimming with new research into how the brain works and how to control it. And we’re getting close to brain upgrades: breakthroughs in mobility assistance for quadriplegics are coming all the time.
Who’s driving a lot of neuro research? The military. Much of it is health related, like figuring out how to make prosthetics work more seamlessly and helping diagnose brain injuries. But the military’s involvement highlights the basic ethical quandary of neurological development: When our brains pretty much define who we are, what happens when you start adding tech in there? And what happens when you take it away?
Jonathan Moreno is quite possibly the top bioethicist in the country, and, along with Michael Tennison, he recently penned a fascinating essay on the role and ethics of using neuroscience for national security. He also recently updated his book Mind Wars, a seminal look into the military’s work with the brain. I had the chance to chat with him about brain implants, drones, and what will happen when military tech hits the civilian world.
VICE: Let’s just jump right in: In terms of the military, what’s the next step in neurological advancement?
I don’t think that there’s a single next step. I wouldn’t put it that way. I would say that there are different things happening on different fronts; some of them are already happening, and others are farther off.
I mean what’s already happening is the way that the internet is changing the combat situation – just in terms of information that can be sent into the cloud and pulled down on an iPhone app. You might not call it neuroscience, but it affects the way people work together in groups in a very intense situation. That kind of stuff is already out there, and is being used in counter-intelligence situations.
Near term, there’s a lot of research into interrogations, right? What are the ethical boundaries of interrogating someone?
Coming out of this big debate about torture: in my view, torture is a very unprofessional and misleading way of doing interrogation. Interrogation is done every day. It’s done by police every day. People have completely lost track of the larger context of interrogation.
Interrogation is mainly about gaining information through a rapport with the subject. It’s not about torture. That’s a lot of bullshit. Interrogation is basically police work and psychology.
So we are now trying to learn more about what makes people tick with respect to disclosing information and sharing information about whatever group that they are associated with. That’s more of the soft side of psychology.
There’s an interesting speculation about whether you can manipulate certain naturally occurring brain hormones that are involved in trust and social relationships. The question is, if you can boost the production of those things, whether or not that would be acceptable.
It’s interesting, but still very much in the experimental stage. There’s also research into different measures for keeping people awake and alert. It used to be caffeine and nicotine, then speed, and now we’re in a new era of clinical experiments with drugs like Provigil. There’s a huge potential market for what I’ll call “anti-sleep medication.” There’s a lot of money in that area, and I think you’ll see some development there.
What brain tech is coming farther in the future?
Somewhat further out, there’s external neurostimulation: direct current stimulation, or transcranial magnetic stimulation. There’s some evidence that if you put people under a magnet (a cheap, portable magnet, actually) you can change the polarity of ions, help them learn faster, and maybe wake them up. You might see that in the next 10 years in vehicles; maybe in 20 years in helmets. There’s a lot of interest in that area.
Much further down the line, in terms of devices, the question is whether you can make an artificial brain organ, like the hippocampus, which is associated with learning and memory. Maybe then you could, as one science fiction author put it, “jack in,” and have a permanent port on your skull. You pop something in and download information that way. I think that’s very interesting, but it’s very speculative, and it’s brain surgery. We’re not doing that anytime soon!
What about the idea of brain-powered drones, or neural implants used for controlling interfaces? That’s obviously far away, but is this something you’re discussing in terms of ethics? Or are there closer things you’re more focused on?
I think brain-machine interfaces are something that we are going to see a lot of work on. The interesting question here is whether the direct connection of a device to your brain (rather than working through muscle impulses, pressing a button or pulling a joystick) creates practical advances. Or is it not advantageous?
Pushing ahead a little farther forward is the idea that, if we can deconstruct the way the brain works and understand its creativity and spontaneity, can we write computer programs that have that kind of creativity? Will we be able to actually recreate that creativity with algorithms so that devices like drones can work totally autonomously without human control?
How would that be advantageous?
One of the things that the human brain does that computers can’t is understand context. The question then is: how do we mimic the outputs of the brain, and in doing so see how it produces those outputs? What’s inside the black box? How does it work?
So if we learn enough about the brain to write computer programs that actually mimic the way the brain works, maybe that can include the creativity of the human brain. And then you could have an autonomous war fighter (maybe in the air, maybe on the ground) that would have some creativity, but would also have some advantage over human war fighters because it wouldn’t be carried away by its emotions. You could program it to have the rules of engagement in its system.
It would not go out and, like in the case of Robert Bales, massacre some people because of psychological problems. It wouldn’t feel anger or remorse over losing a buddy and seek revenge, it wouldn’t engage in rape and pillage.
There are some real ethical advantages to that. But that’s way, way out in the future I think. There is a real debate right now, in the military establishment, at high levels at the war colleges, over whether autonomous lethal engagement is even acceptable from the point of view of the military ethos. It’s definitely a conversation that people are having.
Read the rest over at Motherboard.