Memory Editing Technology Will Give Us Perfect Recall and Let Us Alter Memories at Will

This story appeared in the March issue of VICE magazine

“There was a piano there, and someone playing. I could hear the song,” the patient, S.B., said as his neurosurgeon touched an electrode to the surface of his exposed brain. To treat epilepsy, Wilder Penfield, an early-20th-century neurosurgeon, would remove sections of brain tissue while his patients—fully conscious but locally anaesthetized—told him what they experienced as he administered small shocks to different areas of their brains. When he stimulated one section, they saw shapes, colors, textures; another, and they felt sensations in various parts of the body.

But when he shocked one particular area of the cerebral cortex, patients relived vivid memories. With another jolt to the same area, S.B. recalled more of the piano memory: Someone was singing the Louis Prima tune “Oh Marie.” As Penfield moved the electrode over, S.B. found himself strolling through some neighborhood of his past: “I see the 7 Up bottling company, Harrison Bakery.” S.B. wasn’t alone—other patients also recalled moments of their lives in intense detail. Nothing striking, nothing they’d planned to memorize: the sound of traffic, a man walking a dog down the street, an overheard phone call. They were more vivid and specific than normal memories, more like a reliving than a recollection. Penfield was convinced he’d found the physical site of memory, where memories were locked in place by tissue. “There is recorded in the nerve cells of the human brain a complete record of the stream of consciousness. All those things of which the man was aware at any moment of time are stored there,” Penfield said in a 1958 Bell Labs film, Gateways to the Mind. “It is as though the electrode touched a wire recorder or a strip of film.”

Imagine being able to scroll through your memories like your Instagram feed, to perfectly recall everything you’ve ever learned, to immediately access every section of your life history. You would be efficient, insightful, luminous. Would you be human?

Penfield’s idea, that a perfect transcript of each person’s whole life is recorded in the brain, waiting to be awakened with a gentle electric current, has not proved true. But the idea that stored memories exist as physical changes within the brain has—and recent research is cracking open an array of possibilities for the editing and improvement of human memory. Even as our basic understanding of how memory is encoded, stored, and retrieved remains extremely limited, two separate teams of scientists have made breakthroughs in the field of memory study, successfully implanting false memories, changing the emotions attached to memories of trauma, and restoring the ability to form long-term memories in the damaged brains of mice and other animals. One has already reached the human-experimentation phase. And though these new developments are years away from going to market, they point to a future where humanity will have control over memory—conquering dementia and PTSD, perhaps even improving on healthy memory function.

Interest in the field is already widespread. The research arm of the Department of Defense, DARPA, has invested $80 million toward developing a wireless memory prosthetic to help people who suffer memory loss as a result of traumatic brain injury (TBI), a condition increasingly common among military personnel. And a new startup company, Kernel, has hired a leading scientist to help develop a prosthetic memory device for commercial use, envisioning a day in which this kind of tech will be widely available, part of a future in which silicon memory chips are offered not just as medical treatments but as on-demand cognitive enhancements.

As these technologies develop, they bring plenty of technical and ethical questions with them: How will these devices work, and who should have access to them? Can a person have an edited memory and a “real” self? What happens when human recollections are mediated by machines? To find the answers, I set out to talk to two of the men guiding these breakthroughs: Steve Ramirez, a neuroscientist at Harvard, who has successfully implanted false memories in mice, and Bryan Johnson, the tech baron who owns Kernel. As I spoke to them, I found the perspectives from the laboratory and the tech startup diverged on many of these points, raising another, more disquieting question: As human memory changes from an intractable mystery to something that can be engineered, who will get to decide how it works?

***

Steve Ramirez, whose team has successfully implanted false memories in mice. Photos by Andrew White

In the first year of his neuroscience PhD program at MIT, Ramirez went through a breakup. As he glumly ate ice cream and listened to Taylor Swift, he found himself thinking about how happy memories of a former loved one become upsetting overnight. He knew that the feeling of sadness—the emotional component of the memory—and the information about the person—the strict contents of the memory—were from different parts of the brain. And so he wondered—what if he could separate them?

“It’s not like I came up with these experiments based on this experience,” Ramirez told me when I visited him at his new office at Harvard, just down the Charles River from his graduate school digs. But it was a formative experience in thinking about the different components that compose a memory, and about how the emotional tone of memories can change over time. “Imagine a memory as a sketch in a coloring book,” Ramirez said, “and emotions are like the colors that color in that particular memory. And they’re almost inextricably linked.”

For Ramirez and his late research partner, Xu Liu, the first step toward working with these different elements of memory would be finding the physical locations of the memories themselves. “This idea has been around in the field for a long time,” Ramirez continued, “the idea that memory leaves an imprint, a physical change—sometimes they call it a trace.” But Ramirez and Liu were the first to pinpoint the “trace” and activate a memory from within the brains of mice. The process they were trying to replicate happens naturally all the time—some stimulus triggers a cascade of memories and associations. “If you go outside and walk past a bakery, you might smell a cupcake that reminds you of your 18th birthday,” Ramirez told me. “We wanted to do that from within the brain.”

Instead of cupcakes, Ramirez and Liu used lasers (see infographic at bottom of this page). They began with a mouse, using a genetically engineered virus to “trick” the brain cells associated with memory formation into being sensitive to light at select moments. Then they gave the mouse a mild foot shock, so that it would encode a memory of the shock. Later, they fired a laser into the hippocampus, the cashew-shaped brain area that’s central to memory encoding, theorizing that the light would activate only the set of light-sensitive cells associated with the foot-shock memory and trigger a recollection.

It worked. When Ramirez fired the lasers into the mouse’s hippocampus, the animal exhibited classic fear behavior, just as it would if recalling or reliving a memory of the foot shock.

A year later, Ramirez and Liu started working on what they called Project Inception—attempting to implant a false memory in mice. To do this, they placed a mouse in a box and administered a foot shock. At the same time, they laser-activated a neutral memory of a box that mouse had been in earlier, but was not in at that moment. The next day, the mouse was afraid of the neutral-memory box—it had never actually been shocked there, but it had a false memory that it had.
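
The logic of these experiments can be caricatured in a few lines of code. The sketch below is purely illustrative: it models memories as simple labels rather than cells, and every class and method name in it is invented for the example rather than drawn from the lab’s actual tools.

```python
# A toy model of the tag-and-reactivate experiments described above.
# Purely illustrative: in the real work, a virus makes the cells that are
# active during encoding responsive to light, and a laser later re-drives them.
# All class and method names here are invented for the sketch.

class Mouse:
    def __init__(self):
        self.tagged_memory = None    # the memory whose cells were made light sensitive
        self.feared_contexts = set()

    def encode(self, context, tag=False):
        """The mouse explores a context and forms a memory of it; optionally tag it."""
        if tag:
            self.tagged_memory = context

    def shock_while(self, active_memory):
        """A foot shock attaches fear to whatever memory is active at that moment."""
        self.feared_contexts.add(active_memory)

    def laser_on(self):
        """The laser reactivates only the tagged memory, wherever the mouse happens to be."""
        return self.tagged_memory

    def freezes_in(self, context):
        return context in self.feared_contexts


# The first experiment: tag the shock memory, then reactivate it with light alone.
m1 = Mouse()
m1.encode("shock box", tag=True)
m1.shock_while("shock box")
print(m1.freezes_in(m1.laser_on()))   # True: light alone elicits fear behavior

# Project Inception: reactivate the neutral memory of box A while shocking the mouse in box B.
m2 = Mouse()
m2.encode("box A", tag=True)          # the neutral memory of box A is tagged
m2.encode("box B")                    # the mouse is later placed in box B
m2.shock_while(m2.laser_on())         # shock delivered while the box A memory is lit up
print(m2.freezes_in("box A"))         # True: fear of a box it was never shocked in
```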

The laser Ramirez and his team used to activate memories in mice can be seen projected on the wall underneath Michael Jackson.

It was Christmas Eve of 2012 when Ramirez first saw the mice exhibit this response. “My parents were outside waiting to go to Christmas dinner,” he recalls. “There were a few people in the lab, of course—science never rests—but I remember being in the room by myself and being like, This is the best Christmas present ever. This is amazing.”

This implantation was just the first of the ways Ramirez’s lab is attempting to alter memories in mice. He recently tweeted about some preliminary findings that, though they haven’t yet been peer-reviewed, point to the ability to change the fear associated with traumatic memories.

In mice that carry a memory of a foot shock, the fear attached to that memory can be dialed up or down depending on where in the hippocampus the laser is aimed when the memory is reactivated. When they activated the fear memory by lasering one part of the hippocampus, the mice became more upset. But they were surprised to find that activating the same memory with a different laser placement, again and again, made the memory less frightening. “We found this aversive memory in the top part of the hippocampus, and then we repeatedly reactivated it over and over. Then when we put this animal in the environment that it should have been afraid of, it wasn’t afraid of it anymore.”
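
The result follows a simple arithmetic of repeated, safe reactivation: each pass knocks the fear response down a little further. The toy calculation below shows that shape of curve; the decay rate and threshold are invented numbers, not measurements from the lab.

```python
# A cartoon of repeated reactivation wearing down a fear memory, in the spirit
# of exposure therapy. The decay rate and threshold are assumptions for the
# sketch, not values reported by the researchers.

fear = 1.0               # initial strength of the aversive memory
DECAY = 0.7              # assumed fraction of fear remaining after each safe reactivation
FREEZE_THRESHOLD = 0.2   # assumed level below which the mouse no longer freezes

for session in range(1, 8):
    fear *= DECAY        # reactivate the memory with the laser; no shock is delivered
    print(f"session {session}: fear = {fear:.2f}, still freezes: {fear > FREEZE_THRESHOLD}")
```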

As we sat in his office, Ramirez compared the breakthrough to his breakup. After he broke up with his girlfriend in his favorite cafe, he said, the place came to hold a painful memory for him, even though he had loved it for its peanut butter-honey-banana sandwiches. But he visited it again and again, and with time, the pain associated with the cafe faded. Reactivating the mouse’s fear memory in this way, he said, was similar to his repeated visits to the cafe. It’s not that far off from the logic of exposure therapy, in which patients encounter the objects of their phobias in safe circumstances until the fear wears off—except those outcomes are achieved with time and behavior instead of lasers.

In the immediate future, Ramirez plans to continue his animal experiments and begin applying these memory-manipulation techniques as treatments for psychiatric disorders—working first with animal analogs, then progressing to humans (at which point, he said, the treatment wouldn’t necessarily involve lasers). In PTSD, for example, he said, “We can turn down the emotional negative oomph associated with a traumatic experience.” Ultimately, he wants to change how we think about memory. “Can we see memory not just as this cognitive phenomenon, but as a potential antidepressant or anxiolytic? Or can we see memory manipulation as a therapy for things like PTSD?” he asked. Of course, the transition from animal brains to humans is significant. “If we are the Lamborghinis, then animal brains are the tricycles,” Ramirez said, “but there’s still some common ground there in terms of how the wheels spin and how you steer and so forth.” Ramirez is confident that “to the extent that we can do this in animals, it’s actually tractable.”

He understood how this kind of research could seem sinister. But while it might hypothetically be used to create fearful memories in settings like torture or conversion therapy, he said that “we can do the same thing to activate positive memories and updating the contents of a neutral memory with positive stimuli. It can work in both directions.” He continued, “This always begs the question, what if it gets misused? The example I like is water: It’s the most nourishing thing we can think of, without it we die. And yet it can be used for waterboarding. Everything can be used for good and bad.”

***

Ramirez’s apparatus for performing brain surgery to implant optic fibers into mice

You don’t have to have bright blue eyes and an unsettling stare to lead the tech company that wants to revolutionize the human brain, but it probably helps. Such is the countenance of Bryan Johnson, the founder and CEO of Kernel, a newly established startup that bills itself as “a human intelligence (HI) company.”

Kernel’s headquarters in the techie Los Angeles neighborhood known as Silicon Beach looks like a standard-issue internet firm. White guys in jeans and hoodies and sneakers or sock feet pad through the open-plan office to various standing and sitting desks. Bright LA sun pours through windows and skylights over the front lounge area, where the coffee table has a high-end speakerphone and a novelty sculpture of a skull, and rolling whiteboards offer cryptic science-cum-business thoughts like “Capitalism → Moore’s Law” in dry-erase-marker scrawl. But Kernel isn’t a run-of-the-mill tech company. Its humble goal is to develop products that blend human intelligence and artificial intelligence, enabling humanity to enhance its cognitive function and ultimately to direct its own evolution as a species.

Johnson, who made his fortune by founding and later selling an online payment company to PayPal for $800 million, has invested $100 million of his own money in Kernel. He aims to raise a total of a billion dollars and bring four human-intelligence products to market in the next ten or fifteen years. He’s also invested $100 million in OS Fund, a venture capital fund designed to “rewrite the operating systems of life,” investing in biotech companies that tinker with things like genetics and longevity—the fundamentals of biological systems. Though Johnson doesn’t call himself a transhumanist, like Peter Thiel and Ray Kurzweil do, his fundamental goal is very close—enabling human intelligence to coevolve and keep up with machine intelligence.

One of the central areas of Kernel’s work—along with motor function, learning, and some other areas Johnson said he was not yet ready to talk about—is memory. Kernel’s chief science officer, Theodore Berger, a professor of biomedical engineering and neuroscience at USC, is working on a memory prosthesis that could help patients who have trouble forming long-term memories. When Johnson and Berger first met, Johnson said, “We lost track of time.” The two had a shared vision. “Berger sees the same thing that I do,” Johnson explained, “the potential programmability of neural code—working with our neural code to achieve certain outcomes.” (There are overlaps in the language that’s used to discuss brains and computers—”circuitry” and “wiring” are common in both spheres—but describing neural activity explicitly as code is unusual. It’s an indication of the firm’s Silicon Valley–centered mind-set.)

People who have lost the ability to create new long-term memories—whether because of dementia, stroke, aging, or epilepsy—typically have damage to or malfunction of the hippocampus, which converts short-term memories into long-term ones and sends them out to other parts of the brain to be stored (it’s the same cashew-shaped structure that Ramirez shot with lasers in his research). Representations of memories in the brain exist as what Berger calls space-time codes, not unlike Morse code (a similarity Berger illustrated with a series of rhythmic beeps during our call from a conference room perched upstairs at the Kernel loft).

“I look at the velocity of the development of artificial intelligence, and I look at the velocity of the development of human intelligence, and I don’t like the difference,” Bryan Johnson said.

That code, Berger said, looks one way when it comes into the hippocampus from the sensory systems (like hearing, touch, and sight), and it looks a different way when it flows out of the hippocampus for long-term storage. With this in mind, Berger created mathematical models that mimic this transformation, even without understanding why the transformation happens. “It’s like trying to identify the rules for translating Russian into Chinese, when you don’t know Russian or Chinese,” Berger has said.
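
At a cartoon level, the strategy can be sketched in code: record the codes going into the hippocampus and the codes coming out, then fit a function between them without modeling the biology at all. The sketch below is only a stand-in under that framing; Berger’s actual models are far more sophisticated, and everything here, from the simulated spike counts to the plain least-squares fit, is an invented simplification.

```python
# Illustrative only: learn an input-to-output mapping between hippocampal
# "codes" from paired examples, with no model of why the transformation
# happens. This stand-in uses fake data and a simple linear fit; real
# prosthetics use nonlinear multi-input, multi-output models fit to
# recorded spike trains.

import numpy as np

rng = np.random.default_rng(0)

n_trials, n_bins = 200, 50   # pretend recordings: trials x time bins of spike counts
inputs = rng.poisson(lam=2.0, size=(n_trials, n_bins)).astype(float)

# Pretend the hippocampus applies some unknown transformation to produce outputs.
hidden_transform = 0.1 * rng.normal(size=(n_bins, n_bins))
outputs = inputs @ hidden_transform + rng.normal(scale=0.5, size=(n_trials, n_bins))

# "Translating Russian into Chinese without knowing either": fit the map from data alone.
learned_map, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)

# A prosthetic would then take an incoming code and emit the predicted outgoing code.
new_input = rng.poisson(lam=2.0, size=(1, n_bins)).astype(float)
predicted_output = new_input @ learned_map
print(predicted_output.shape)   # (1, 50): the code the device would write downstream
```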

In animal experiments, Berger has been able to re-create the processing of these memory codes in rats and monkeys using an implant that runs his algorithm, acting as a kind of prosthetic hippocampus (see infographic at bottom of this page). To test the device, Berger implanted it into rats and monkeys whose hippocampuses had been disabled. The rats were trained to pull a series of levers to receive a reward; the monkeys performed more complicated memory tasks using a computer screen. Though neither set of animals could naturally form long-term memories, the rats, when later placed in front of the same set of levers, pulled them in the correct sequence—as if they’d recorded the memory naturally. The monkeys performed similarly well, relying on memories that had been processed by the device.

In presentations in recent years, Berger, who finished his PhD at Harvard in 1976, has often said that he can hardly believe how successfully the research has progressed, from the rat experiments to the monkey experiments. The complexity increases with each move up the species ladder. “They’re larger, they’re more complex, as you might expect—so the modeling became harder,” he told me. Wired has since reported that human experiments are currently under way.

“They told me I was nuts a long time ago,” Berger told MIT Technology Review in 2013. Johnson told me that the kind of skepticism Berger encountered is widespread among people in the field, who have seen how long it takes for our understanding of the brain to grow—”it’s appropriately cautious,” he said. But his view is different. As someone brand-new to neural science, coming to the field as an entrepreneur rather than from the lab, Johnson said that he has “a level of optimism that others don’t have. I’m under no illusion [about the difficulty]—I just think it’s doable and that we should do it.”

On his blog, in a post about his first trip to Burning Man, Johnson wrote that he worried he was “too conservative and buttoned-up” to enjoy the desert freak-out festival the way others did. And while he may indeed be more clean-cut than the average Burner, this conservatism in personal style does not extend to his business ideas, which are nothing short of radical. He wants to enhance human intelligence to ensure that we won’t be left in the dust by the machines we’ve made. “I look at the velocity of the development of artificial intelligence, and I look at the velocity of the development of human intelligence, and I don’t like the difference.” He’s not an AI alarmist, he said; he’s not worried that the machines are coming to get us. But he believes that enhancing human intelligence ought to be a global priority. Instead of using our outdated brains to create new tools, he wants to update the brain itself.

The microscope Ramirez and his team use when surgically implanting optic fiber into the brains of mice

Johnson dreamed up his current career path at the age of 21, after returning from a two-year mission trip to Ecuador. “I came back to the US with this burning desire to improve the lives of others,” he said, arriving at the field of human intelligence because he believes it to be “the most precious and powerful resource in existence.” “If I survey the world around me, and I include in the calculation the scarcity of time and resources, what is the most audacious goal I can imagine to pursue?” he asked. “That’s my orientation.” Kernel is a direct product of these two prime directives: Work on something bold and do something to improve human intelligence.

Both Johnson and Ramirez spoke about who should get memory-enhancing or editing technology, but they didn’t see eye-to-eye. “If this ever becomes a thing,” said Ramirez, “ideally we’ll keep it in the realm of medicine, in the context of disorders of the brain. If you’re a good psychiatrist, you don’t give Prozac to the entire population of Massachusetts—you give Prozac to the people who are actually riddled with depression.” The same logic ought to hold, he believes, for any memory-editing technologies that his research might lead to. While they might be appropriate for those suffering from PTSD or certain psychological disorders, “you don’t give it to [some guy] who can’t get over a breakup.”

Johnson arrived at a different conclusion. Although he knows the tech will necessarily start out as therapeutic remedies for people with cognitive deficits, he hopes it will eventually grow beyond that. Far beyond. “My objective with Kernel is to provide this to billions of people,” he says. Ultimately, he hopes that devices like the memory prosthetic that Berger is developing will be available for anyone who would like to be mentally enhanced. Though his goal is a moonshot—the idea of bringing such a device to market even in ten years seems optimistic at best—his demeanor is anything but moony. Johnson expresses his plans and ideas with rigorously analytical precision. “There are already low-resolution forms of cognitive enhancement,” he points out. “If somebody puts their child into private school over a poorly funded school system, that’s a form of cognitive enhancement. A private tutor is a form of cognitive enhancement.” To Johnson, improving one’s mind by use of technology rather than education is a difference of degree and not of type.

And he believes others will come around to this point of view. “If you contemplate a scenario where I’m enhanced and you are not,” he said, “or my child is enhanced and yours is not—it’s an intolerable state.” The idea of people flocking to add machinery to their brains sounds farfetched—until you think about how eagerly the non-diagnosed masses scramble for Adderall to bump their productivity, Xanax to soothe their anxiety, crosswords and Sudoku and any number of cellphone apps to ward off senility’s mental fog.

Johnson’s stepfather has symptoms of Alzheimer’s, and seeing his decline—as Johnson puts it, “watching him lose his humanhood”—has motivated his work with Kernel. Whatever just-finished-Westworld unease one might feel about the possibility of technology-mediated memory, it’s hard to argue against the development of such technologies for the treatment of devastating diseases.

More than ten years ago, when the idea of memory enhancement was an even further-off dream, faintly twinkling as a possibility in some fruit flies that had been altered to have photographic memories, philosopher and author Michael Sandel wrote “The Case Against Perfection” in the Atlantic. In the ethics of enhanced memory, there was, he pointed out, “the worry about access,” the class differences that could arise as a result of such extreme cognitive advantages. But it was something more fundamental that really bothered him about the idea: “Is the scenario troubling because the unenhanced poor would be denied the benefits of bioengineering, or because the enhanced affluent would somehow be dehumanized?” he asked. Imagine being able to scroll through your memories like your Instagram feed, Black Mirror-style, to perfectly recall everything you’ve ever learned, to immediately access every section of your life history rather than stumbling through a soupy fog of half-remembered faces punctuated by the sharp clarity of important moments. You would be efficient, insightful, luminous. Would you be human?

***

Bryan Johnson, the founder and CEO of Kernel, a startup that’s working to produce an implant to improve human memory and other brain functions. Photo by Sergiy Barchuk

In February 1975, around 140 scientists, as well as philosophers, journalists, and lawyers, gathered at a conference center at Asilomar State Beach in California. They were there to produce a set of guidelines for a new technology—experiments in recombinant DNA. The conference was organized by Paul Berg, a molecular biologist who had voluntarily put his own work on hold after colleagues became concerned that the E. coli he planned to splice with DNA from a cancer-causing virus might escape the lab and seed an outbreak of cancer.

In 1975, the general population was not familiar with the concept of gene splicing—the term “genetic engineering” had been introduced for the first time in the 1950s, not in a scientific paper but in a sci-fi novel. Berg’s experiments and other contemporaneous attempts to manipulate DNA were very much frontier science, much in the way that Ramirez and Berger’s memory research is today. It was clear to practitioners that they were on the edge of something that was going to revolutionize their field. What was less clear was how they could explore it without dangerous risks to “workers in laboratories, to the public at-large, and to the animal and plant species sharing our ecosystems.” So they got together to try to figure it out. Talks were impassioned. Berg later wrote that “heated discussions carried on during the breaks, at meal times, over drinks, and well into the small hours.” Those conversations yielded a nuanced set of guidelines prescribing different levels of caution and containment for different types of genetic experimentation, and just as important, launched a public conversation that enabled regulations and social norms about genetic manipulation to develop alongside the technologies themselves.

The Asilomar Conference and the ensuing debate over genetics relied on the precautionary principle—the idea that when introducing a product or technology that puts the health of humans or the environment at stake, the burden of proof falls on advocates of the new advancement to prove that it’s safe. It’s a long-winded version of “first do no harm”—an ethic prioritizing safety over speed. It’s the philosophy of physicians and environmentalists, not of venture capitalists.

It’s likely that as memory-enhancement technology gradually develops, we’ll become acclimated to it, as we did with incremental developments in genetics; 20 years ago, headlines compared Dolly the sheep to Frankenstein’s monster; today, we calmly accept mail-order DNA ancestry kits and nuanced discussions of epigenetics. But a certain setting is necessary for these advancements to march forward in a way that’s safe and fair. In 2008, Berg wrote an essay in Nature, recalling the Asilomar Conference and the way it was able to set the stage for decades of safe and productive research in genetics. He wondered whether another similar meeting of the minds would help solve new issues around genetic engineering. Surprisingly, he concluded it would not. Not because of differences in the technology itself, but because of the settings in which the scientists themselves were working. At Asilomar, in the 1970s, most of the scientists were coming from publicly funded institutions. They could, he said, “voice opinions without having to look over their shoulder.” He was concerned that as science became increasingly privatized, economic self-interest would get in the way of frank discussions about the risks and benefits associated with different areas of research.

Unprompted, both Ramirez and Johnson brought up the parallel between memory technology and genetic engineering. “The Human Genome Project took years to be sequenced, but by then, there was enough legislation that the whole world didn’t turn into Gattaca,” Ramirez says. “Ditto with this, we’re having this conversation decades before these things are possible, so that the world is already ready.”

Johnson, too, finds a parallel with genetics, but lands at a different conclusion—that perhaps the US took too conservative a position on that technology. “When we realized we could modify genetic codes and potentially create designer babies, we as a society had a big discussion—is this something that we want to do? We in the United States said, It’s not really in line with our values. Meanwhile, China said, Interesting…”

He mentioned this just a week after news had broken that scientists at Sichuan University had used CRISPR gene-editing technology to treat cancer in a human patient, injecting the patient with edited white blood cells. In December 2015, an international coalition of scientists had called for a voluntary moratorium on using CRISPR to make heritable genetic changes, the kind that could be passed on to patients’ children, until the risks were better understood. The Sichuan trial edited only the patient’s own immune cells, but it signaled how quickly Chinese labs were willing to move the technology into human patients. It’s true, of course, that if there’s a treatment for cancer in another country that Americans were too priggish to pursue, that situation would be—to borrow Johnson’s wording—intolerable. At the same time, it’s chilling to hear someone who is actively steering biological research say that the takeaway message from the decades-long debate over genetic engineering is that the US was too cautious. And as complex as the genome is, the brain is more so—its 86 billion neurons forking and signaling in ways we are only beginning to understand. When manipulating a giant system that directs everything from pupil dilation to intellect, treading cautiously seems the only option.

The laser source for Ramirez’s memory experiments

Johnson believes that human intelligence will emerge as “one of if not the largest markets ever. We’re dealing with our own capacity to learn, and memories, and our own evolution and communication with each other—it’s going to be a very big market. We can build successful projects and huge profits there.” The counterpoint to Johnson’s optimism is that, given the brain’s complexity and the early stage of the research, the kind of super memory Johnson describes will be difficult to achieve, to say the least. On top of that, we can’t say for certain what long-term effects memory-enhancement technologies will have on the brain, which casts the idea of this technology as a guaranteed, unmitigated good into doubt. As these technologies are developed, it will be crucial to have full and frank conversations about both the benefits and the risks—conversations that, as Berg pointed out, are only possible when scientists can talk openly about their work and its repercussions without endangering their funding.

As advances like the silicon memory chip and the laser editing of memories slide out of sci-fi and into reality, society will have to decide how to manage them. What’s needed is a modern-day Asilomar Conference, with scientists, clinicians, entrepreneurs, and ethicists together weighing the risks and benefits of these new technologies. But in today’s corporatized research environment, it’s deeply unlikely to happen.

Neurologist Julie Robillard, who wrote about the rise of memory manipulation in the American Medical Association’s Journal of Ethics in December, told me via email that it’s important for researchers and ethicists to work closely together at the start of the research process, and that the perceived tension between ethics and scientific progress—the idea that ethical considerations stand in the way of research—is a myth. Just as the technology has great potential benefits, she said, it also comes with potential risks—both to individuals, as the long-term effects of memory manipulation aren’t known, and to society. She raised questions like, “How can you report a crime if it is erased from your memory?” And, “Will prisoners be coerced to undergo a potentially risky memory-manipulation procedure if it decreases chances of recidivism?” She said that memory manipulation—and all new biotechnologies, for that matter—”must take place in an interdisciplinary environment.”

Kernel currently has 20 employees—computer scientists, neuroscientists, engineers. When I asked Johnson if there were any ethicists on the team, his answer pointed toward possibilities in the future: “Not yet.”