Ever wanted to download a copy of your own brain? Say you went through a serious car crash, for example—wouldn't it be nice to take out your damaged brain and replace it with a replica you'd downloaded and stashed away prior to the accident? Or perhaps over time you could even build a collection of brains, each storing different memories, thoughts, and dreams that would equate, in a sense, to different versions of you? Something like that might come in handy when you're trying to throw off various neuroses, like a fear of asking out hot people or an anxiety about bungee ropes, or a reluctance to believe that scientists could one day pull something like this off.
There are people trying to make this a reality. Last month, a Japanese supercomputer managed to simulate one second of human brain activity; last summer, some German scientists unveiled a remarkably high-res 3D digital model of the human brain; and last April, the Obama administration announced the BRAIN Initiative, a research endeavour projected to cost hundreds of millions of dollars and take over a decade to complete. Its humble goal? To map every single one of the tens of billions of neurons in the human brain, creating a "connectome"—a comprehensive diagram of the brain's neural connections.
Theoretically, a complete connectome of an individual's brain would constitute a copy of the pathways between every memory, thought, and experience that person had ever had. The implications of this kind of precise knowledge of a brain are far-reaching, but at this point still largely speculative.
Current procedures for brain imaging on a micro level tend to be incredibly time-consuming, costly, and require the destruction (via slicing and/or dyeing) of the brain being studied. But with the freakish, robotic march of progress, the technology required is being built and improved upon, and some futurists suggest that humans will be able to download and store copies of their brains within the next two decades. Naturally, labs the world over want to get there first, but I couldn't find many that are already trying to sell the tech to you.
One I did find is Brain Backups, a neuroinformatics startup based in Cambridge, Massachusetts, and headed up by 32-year-old Russell Hanson, that aims to map human brains without destroying them. While other research groups are being formed and funded through government grants, Brain Backups hopes to crowdsource a great deal of its research costs by offering the future storage of all your neurons and synapses. I gave Russell a call to find out his thoughts on the matter.
VICE: Can you explain—in the simplest possible terms—what your company does or proposes to do?
Russell Hanson: Our team is developing the tools to image the brain non-destructively and non-invasively. The earlier methods in this field required slicing the brain very thin and imaging it on an electron microscope, which is both extremely slow and extremely expensive. We wanted to do this faster so researchers can learn how the brain changes over time, without destroying the brain every time they want to make a measurement.
OK, and how are you going to research this?
Obviously we’re talking about animal experiments here. We're a small company with a big goal. We have some very talented engineers, scientists, and designers from MIT, Harvard, the Danish Technical University, UCLA, biotech, and pharma, and also the synthetic biology community in Boston. Our goal is to do this cheaply and non-destructively, so that anyone can image their brain, like they can map their genome affordably using a [personal genomics testing] service like 23andMe. I got into this a number of years ago when I asked how much space is needed to store the contents of the human brain in a class at MIT. It’s only become more interesting since then.
How much space is needed?
It depends a lot on how detailed the information you want to store is. The range is somewhere between 1,000 terabytes to 10,000 terabytes. With compression, this can be much smaller—this is an estimate of the uncompressed size.
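That uncompressed range is consistent with a simple back-of-envelope calculation. The figures below are illustrative assumptions for the sake of the arithmetic (a commonly cited neuron count, a rough average synapse count, and a guessed per-synapse record size), not numbers from Brain Backups:

```python
# Back-of-envelope estimate of uncompressed connectome storage.
# All three inputs are illustrative assumptions, not measured values.

NEURONS = 86e9             # commonly cited human neuron count
SYNAPSES_PER_NEURON = 7e3  # rough average; published estimates vary widely
BYTES_PER_SYNAPSE = 2      # assumed: enough for a connection plus a weight

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
terabytes = total_bytes / 1e12

print(f"{terabytes:,.0f} TB")  # lands near the low end of the quoted range
```

Bumping the per-synapse record up toward the tens of bytes (timing data, chemical state, metadata) pushes the total toward the 10,000-terabyte end of Hanson's range.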
Does the technology you want to use even exist yet?
The actual technology does exist, but it is cumbersomely slow and prohibitively expensive. Our equipment is quite real—we're not working with hypothetical equipment. It's incremental; we can do a certain set of things now, and we want to do a certain set of additional things tomorrow. And it's just getting easier, just like building anything. Ford didn't start out with its 2013 models; it started out with the 1908 Model T—the first car affordable to the middle class. And before that there were prototypes—19 of them, in fact, before they got to the Model T. The whole goal when I started this at MIT was to make the personal brain map affordable on a middle-class income.
A PET scan of a normal brain.
At the moment, how much would it cost to back up your brain, and what exactly would that get a prospective buyer?
Please understand this is the current “research and development” price, not the price of the product, which will be much lower. The current estimate is in the range of $1.5 million to $3 million for a destructive, knife-edge scanning, optical microscope imaging of a human brain. It would give, essentially, a complete brain map, but of course would destroy the brain in the process. This would provide the set of images that can be used to do a whole brain circuit reconstruction.
There are other methods that use nanoparticles, synthetic biology, X-rays, or MRI that can reduce this cost significantly, and that do not require destroying the brain during imaging. The price for high-throughput genome sequencing has come down to $3,000 to $4,000 recently, and there are methods that are in development to use this inexpensive method to get high resolution brain connectivity information. Getting this cost down significantly, making the data more useful and easily understood, and building the interface and platform are the foci of our work.
Currently, you have to have a non-living brain for imaging, right? How far are you from being able to map a brain without destroying it?
It’s all about the resolution. Currently we can map the brain’s activity using fMRI non-destructively. Newer special purpose MRI machines with higher power and animal MRI machines have greater resolution than older medical MRI machines. Determining exactly what is needed for different types of brain maps apart from “everything” is an open research topic. What is the minimal amount of information needed to accurately characterize or model a brain, and in what way? Adapting these methods from animal experiments to safe methods that can be used with human subjects is where much of the new Obama BRAIN initiative and many research labs are heading.
So once a brain has been imaged, can you effectively play back that information, like a tape?
A single snapshot is a static image, so you can’t play something back that doesn’t have a time series associated with it. Conceivably, you could "rewind" just as you can peer back in time into your memories. The way different people access different pieces of their memories is hierarchical and everything is built upon prior experience, so you would have to build a special kind of "relative knowledge engine" that needs to construct the mechanism of accessing the memories for each person individually. Research has shown that the brain is very poor at telling wall-clock time, and is affected by all sorts of things, like whether we caused an event or not. So no—you can’t really "play back" the information in the kind of frame-by-frame or second-by-second manner we’re used to with audio or visual recordings.
The connectome, from my understanding, is simply the documentation of connections, but provides no information about what is being passed between neurons at these points. If you can't play back or otherwise access the information in your brain, what's the use to the average person of having a map of their brain's pathways?
The goal of the work is to build the infrastructure to make this data usable and interesting. It is pretty clear that having the brain map is a necessary first component to "playing back" or "running" a meaningful dynamical simulation of a brain, whether it's a mouse, fly, or human. We decided to tackle this engineering challenge first before the other one—that's being worked on by other very capable groups. In its simplest form, this research will surely inform treatments for devastating diseases like Alzheimer's, Parkinson's, autism, depression, and others—research that the governmental funding agencies have a long history of supporting.
A 16th century diagram of how to prepare the skull for brain surgery. This is the kind of thing Brain Backups would like to avoid.
Who do you think might be interested in "backing up" their brain, and what might be the benefits of having a copy?
"Backing up" the brain is really just a short way of saying "getting the relevant information on cellular structure, neuronal connectivity, etc, at a very high resolution and recording all that information to a computer or hard drive." There are lots of compelling reasons why getting this brain backup is useful. I think one of the most compelling ones is that it’s like an insurance policy, a backup of something you value. You could get in a car accident tomorrow morning and really wish you could just rewind. The medical benefits of having this detailed personal information are also huge: a doctor could know exactly which treatment you should receive for depression or Alzheimer’s or epilepsy without having to guess or rely on crude measurements.
What do you make of the suggestion that the brain can't possibly be uploaded or stored in its entirety because its important features are the result of unpredictable, nonlinear interactions among billions of cells? Are the brain and the human experience it processes too random to be computerized?
This is essentially a computability problem. All of the information in the brain is a finite set of finite-precision numbers. It is well known that any finite set of finite-precision numbers is computable. From a chemical or biochemical perspective, having enough data about the biochemical interactions—i.e., that these proteins, genes, RNA, etc., are used in this neuron and in this way—is all the data that is needed to determine the neuron's function. Gathering the appropriate dynamical and time series data with the appropriate metadata, and gathering the chemical and biochemical data without destroying what is being imaged, is a technology problem, not an intrinsically intractable system. There are already many neuron modeling computer programs that can model experimental neuronal firing data very accurately.
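The neuron models Hanson alludes to range from detailed biophysical simulators down to very simple abstractions. As a minimal illustrative sketch—not any tool Brain Backups uses—here is a leaky integrate-and-fire neuron, one of the simplest models that still reproduces threshold firing; all parameters are round textbook-style values, not fitted to experimental data:

```python
# Minimal leaky integrate-and-fire neuron (illustrative sketch only).
# Real research simulators such as NEURON or NEST are far more detailed;
# the parameters below are round illustrative values, not experimental fits.

V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0  # membrane potentials (mV)
TAU = 10.0  # membrane time constant (ms)
R = 10.0    # membrane resistance (megaohms)
DT = 0.1    # Euler integration step (ms)

def simulate(current_nA, duration_ms=100.0):
    """Integrate dV/dt = (-(V - V_rest) + R*I) / tau; return spike times in ms."""
    v = V_REST
    spikes = []
    for step in range(int(duration_ms / DT)):
        dv = (-(v - V_REST) + R * current_nA) / TAU
        v += dv * DT
        if v >= V_THRESH:          # threshold crossed: record a spike, reset
            spikes.append(step * DT)
            v = V_RESET
    return spikes

# A 2 nA input can depolarize past threshold and fires repeatedly;
# a 1 nA input saturates below threshold and stays silent.
print(len(simulate(2.0)), len(simulate(1.0)))
```

Fitting such a model to recorded spike trains—adjusting the time constant, threshold, and reset until simulated firing matches the data—is the basic shape of the "very accurate" modeling Hanson mentions.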
What are the implications of having someone's brain content downloaded somewhere, in terms of identity theft or large-scale life tampering?
I think it is very unlikely. For example, anyone can steal your DNA by just getting a sample of your saliva. I can’t think of anyone who thinks twice about spitting because they fear someone is going to come along and harvest their DNA, which is all the information needed to make them. These days, people are uploading all kinds of information about themselves, including their genome, because they realize this data is important and can benefit society. Some people are uploading their genomic information in the hopes that, because it is available, someone will use it to fix the ailments that affect them personally, or that affect their families. This is happening at hospitals in controlled environments, but also on the open internet. Right now, online genomic identity theft is a purely hypothetical problem. It is too expensive, and the skills required are very specialized.
Regarding protecting the data, encryption is the industry standard. But if someone steals your data, decrypts it, and uses it to impersonate you—and the data they're using to impersonate you is everything you know—the problem becomes a little trickier.
Yeah, I can see that you might run into a few problems there. Finally, can you map the brain successfully without mapping the consciousness? A lot of the criticisms of brain backup research seem to rest on the idea that machines can’t possibly process phenomenal human experience.
Most of the work on this tends to be philosophical. It is a classic philosophy vs. science debate. I am not much of a philosopher. In my view, and in the view of many others, consciousness arises from biological, chemical, and physical interactions. This isn’t to say that there aren’t many interesting philosophical issues of mapping the consciousness; there are. Deciphering the neural codes that are used to communicate with the nervous system has shown that they are indeed very much like machine codes.
Follow Monica on Twitter: @monicaheisey