
The sheets of electrodes that Edward Chang removes from his patient’s brain help him pinpoint the source of her epilepsy and provide valuable data for his brain-computer research. Here, on the table in an operating room at UCSF, is his surgical headlamp.  Photo: Eric Millette 


Thanks to cutting-edge brain research at UCSF and UC Berkeley, paralyzed patients who are unable to speak may soon be able to translate their silent thoughts into spoken words.

1.  A patient thinks the word “cat,” producing particular brain-wave patterns in the speech and motor areas of the brain.
2.  An array of microsensors picks up signals from those areas, including the ones likely to contain “cat,” and translates them into digital information.
3.  A processor compresses the data and wirelessly transmits it through the skull and scalp.
4.  A decoder receives the brain signals and uses signal-processing techniques such as spectral analysis and pattern recognition to interpret the digital data and translate it to “cat.”
5.  A speech synthesizer converts the digital signal into sound: “cat!” (A rough code sketch of these steps follows below.)

Illustration:  Leandro Castelao
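For readers who think in code, here is one way to picture steps 2 through 5. Everything in this sketch is invented for illustration, from the 64-electrode array to the nearest-template classifier; it is a minimal stand-in under those assumptions, not the CNEP system.

    # A hypothetical Python sketch of the decoding chain diagrammed above.
    # The electrode count, the 70-150 Hz "high gamma" band, and the
    # nearest-template classifier are illustrative assumptions only.
    import numpy as np
    from scipy.signal import welch

    def extract_features(ecog, fs=1000):
        # Step 4, spectral analysis: average power per electrode in the
        # high-gamma band, a range widely used in ECoG speech research.
        freqs, psd = welch(ecog, fs=fs, nperseg=fs // 4, axis=-1)
        band = (freqs >= 70) & (freqs <= 150)
        return psd[:, band].mean(axis=-1)  # one feature per electrode

    def decode_word(features, templates):
        # Step 4, pattern recognition: choose the stored word pattern
        # closest to what the array just recorded.
        return min(templates, key=lambda w: np.linalg.norm(features - templates[w]))

    # Toy run: 64 electrodes, one second of signal, two candidate words.
    rng = np.random.default_rng(0)
    ecog = rng.standard_normal((64, 1000))     # step 2: raw array signals
    features = extract_features(ecog)          # steps 3 and 4: compress, analyze
    templates = {"cat": features + 0.01, "hat": rng.standard_normal(64)}
    print(decode_word(features, templates))    # step 5 would then speak: cat!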

The “jet pack” on this beetle contains wires that connect to the insect’s brain and allow the researcher to direct its movement by remote control.

Photo: Hirotaka Sato

Thinking Makes It Go

It’s the stuff of science fiction: a marriage of brain and computer that allows the disabled to walk, the mute to speak, and all of us to control our reality with our thoughts alone.  A Wi-Fi implant in the brain? If anyone’s going to deliver, it’s the visionary scientists at the Center for Neural Engineering and Prostheses, the Bay Area’s bold new research hub.

   With the precision of a violin maker, Dr. Edward Chang pulls a row of staples from the mostly shaved scalp of Annette Graves, a 30-year-old epilepsy patient. She is lying, anesthetized, on a table in Operating Room 9 on the fourth floor of the University of California, San Francisco, Medical Center. Her head peeks out from a mound of blue sheets, and the remainder of her blond hair, entangled with wires, drapes over the table. Chang peels back Graves’ scalp and eases off an iPod-size piece of skull, revealing six transparent plastic sheets, dotted with electrodes, that lie across the top of her gently pulsing brain.
     Ten days previously, the 36-year-old surgeon, a leader in cutting-edge brain research, had installed the electrodes on the surface of Graves’ cortex, with the goal of recording the electrical firings of her brain cells and pinpointing the part of her brain that’s causing seizures. Three hours later, the operation has come a long way. Chang carefully slices and vacuums tissue in Graves’ brain, working just a fraction of an inch from her brain stem. Were he to slip, or cut a little too deep, Graves could be paralyzed—or worse. Chang’s hands are steady, though, as he pries Graves’ right hippocampus free and snips it from her brain. (Everyone has a pair of hippocampi. When one is removed, the other can usually do the jobs—memory processing and spatial navigation—of both.) Chang may perform brain surgery several times a week, he says, but he never loses his sense of the privilege of working inside a place so sacred.
     This time it is particularly inspiring, since Graves has bravely volunteered to be a guinea pig in Chang’s groundbreaking research program, conducted with his colleagues at UCSF and UC Berkeley’s Center for Neural Engineering and Prostheses (CNEP). The center, which was founded two years ago, largely over discussions at Saul’s Restaurant and Delicatessen in Berkeley, now unites 19 of the Bay Area’s top neuroscientists, computer scientists, and electrical engineers. Chang, who grew up in Palo Alto and is one of CNEP’s two codirectors, combines the talents of all three disciplines. “Eddie’s a triple threat,” says Mitchel Berger, chairman of neurological surgery at UCSF. “He’s a gifted surgeon accomplished way beyond his years—but what makes him so unusual is his innovative thought processes as a researcher; he’s always thinking way ahead of the curve.”
     One thing he’s thinking about is how to marry brains and computers to help disabled people function better. If he and his colleagues succeed, spinal cord–injury patients will be able to control prosthetic limbs, operate computers, and speak through synthesizers using their thoughts alone. (The electrodes in Graves’ brain have contributed to the study of the speech part of that equation.) Millions more, those who have suffered brain traumas or been left paralyzed by strokes, could be granted new ways to move and communicate.
     Later generations of brain implants could help nondisabled people, too, making films like The Matrix and Avatar seem less like fiction and more like everyday life. People could communicate “telepathically,” controlling avatars, drones, and other wireless devices with their thoughts. Someday, say the field’s most futuristic thinkers, we may even be able to upload video games and other entertainment directly into our brains through Wi-Fi implants. Or connect to roving avatars or nanobots and directly experience the inner lives of others.
     It’s no surprise that all this is happening here, home of the silicon revolution and a frontier in neuroscience. In the past few years the two revolutions in human knowledge have been growing closer, and are now colliding: Neuroscientists study computers and employ them for nearly every aspect of their research, and computer scientists look to the human brain as the ultimate information processor.
     Given the Bay Area’s brainpower, says Michel Maharbiz, an assistant professor in the Department of Electrical Engineering and Computer Science at UC Berkeley and a researcher at CNEP, “the center almost created itself. It’s got people thinking and working at such a high level. The magic is here.”
“Brain-computer interface” refers to anything that directly connects the brain’s information flow, via computer, to an external device. The concept dates back to the 1920s and the invention of the electroencephalograph (EEG), which first recorded and printed out the subtle electrical fluctuations of subjects’ gray matter. (See “The Marriage of Brain and Machine: Past, Present, Future,” page 74.) The science proceeded gradually for 60 years, leading to the development, largely by Michael Merzenich of UCSF, of the multichannel cochlear implant, which today is the most widely used brain-computer interface (BCI). The implant taps into healthy nerves in the inner ear and converts sound into electrical impulses that provide deaf and partially deaf people with a new sense of hearing.
     Today, thanks to huge innovations in computer science, brain mapping, and engineering, brain-computer interfaces can read, in astonishing detail, the signals emitted by the billions of cells in our brains that shape our every thought and action. Using complex algorithms, brain-computer interfaces can tell if someone is trying to move a finger or say the word “cat.” They then convert those interpretations into a digital language that can make a robotic hand move or a peripheral device, such as a word processor, write “cat.” And finally, they send feedback to the brain so it can adapt to and become more efficient in controlling its new mechanical appendage.
     Chang himself is focused on mapping the language centers of the brain and developing a “speech prosthetic” that would restore speech to patients made mute by paralysis or disease. By interpreting the neuronal signals associated with language and speech, such a device would help patients channel their thoughts through a computer into a synthesized voice. Chang also hopes to solve fundamental mysteries about the brain: how it focuses on particular aspects of the world and disregards others; how language emerges from the brain and contributes to a coherent, conscious mind.
     His big advantage over his academic colleagues is that he is an “inside man”: As a brain surgeon, Chang is already under the hoods of his epilepsy patients, who tend to require extensive surgeries. So—with their permission, of course—he can further his BCI research while he’s in there for a therapeutic purpose. (People like Graves have long been drivers of progress in neuroscience for this reason.) On the morning before Graves’ surgery, Chang offers a description of one of his experiments, which he’ll be using to study her.
     He explains that other top research groups working on brain-computer interfaces (such as those at Stanford and Harvard) often rely on electrodes planted deep in patients’ brains. The proximity of the electrodes to key neurons results in much more precise readings of specific areas than are possible with space-helmet-like EEGs, which measure brain waves from outside the skull. But deep electrodes are invasive and require more damaging surgery to install, and their electrical signal degrades over a period of months. In addition, brains don’t like to have foreign materials stuck in them, so they eventually insulate the electrodes with scar tissue.
     Again, this is where Chang’s surgical résumé has proved golden. By working daily with instruments and devices inside the human brain, he has developed a strong intuition about what kinds of things are safe to put in there. If deep electrodes are too invasive and if EEGs aren’t sensitive enough, then new electrocorticography (ECoG) arrays that Chang and his group are developing seem to be just right.
     Unlike deeply implanted electrodes, ECoG grids sit right under the skull and on top of the cortex, between the two layers of protective tissue known as the dura mater and the pia mater; they don’t penetrate the brain itself. And ECoG arrays listen in on tens of thousands of neurons in columns beneath each sensor. It’s like crowdsourced brain reading, focused not on single, fallible, exhaustible neurons but on the wisdom of the masses. Most important, the ECoG grids designed by Chang promise to deliver higher-resolution pictures of what’s going on beneath them.
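The statistical intuition behind that metaphor is easy to demonstrate. In the toy simulation below, where the neuron count and noise levels are arbitrary assumptions rather than measurements, averaging thousands of noisy cells under a single sensor recovers a signal that no individual neuron carries cleanly.

    # Toy demonstration of the "wisdom of the masses" point: pooling many
    # noisy neurons gives a far steadier readout than any single cell.
    import numpy as np

    rng = np.random.default_rng(1)
    true_signal = np.sin(np.linspace(0, 4 * np.pi, 1000))
    neurons = true_signal + rng.standard_normal((5000, 1000))  # 5,000 noisy cells

    single_error = np.abs(neurons[0] - true_signal).mean()
    pooled_error = np.abs(neurons.mean(axis=0) - true_signal).mean()
    print(f"single-neuron error: {single_error:.3f}")  # roughly 0.8
    print(f"pooled-sensor error: {pooled_error:.3f}")  # roughly 0.01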
     A few hours before Graves’ surgery, the ECoG grids are doing their thing inside her brain. An attractive woman with blue eyes and a Cleopatra nose, Graves is sitting up in her hospital bed. From her bandaged head sprouts a dreadlock of rainbow wires connecting an ECoG grid in her brain to a computer next to her bed. A video camera is now trained on her face, which is marked with blue lipstick and red dots on her chin and nose to help the camera track her lips and jaw, whose coordinated movements are crucial to the complex orchestration of speech.
     Graves is remarkably calm about her upcoming surgery and sanguine about being a guinea pig in Chang’s research. “I might as well give something back to science while I’m at it,” she says.
     Chang and UCSF’s Nima Mesgarani are studying “the cocktail party effect,” the human ability to zero in on one voice while ignoring many others, a key to understanding how brains receive and process language. Graves is instructed to switch her attention back and forth between two recorded voices talking at the same time, one male, the other female, while Chang watches a monitor that shows electrical waves charting the activity in her brain.
     Chang explains that Graves’ neural activity reveals “not just what her ears heard, but what her mind heard. We call it ‘mind reading.’ It’s astonishing how completely the brain filters irrelevant information and just focuses on what it wants to hear.” Pinpointing the neural activity associated with processing language and speech, Chang says, is key to channeling that activity through a computer and a voice synthesizer and allowing a patient to speak, even one with a condition as severe as “locked-in syndrome,” a state of full paralysis in which a person can move only her eyes (it was dramatized in the book and film The Diving Bell and the Butterfly).
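One crude way to picture that analysis, with every number and name below invented for illustration: reconstruct a loudness envelope from the listener’s cortical activity, then ask which talker’s acoustic envelope it tracks. This is a stand-in sketch under those assumptions, not the UCSF team’s actual method.

    # A toy stand-in for the cocktail-party experiment: the attended talker
    # is the one whose acoustic envelope correlates best with an envelope
    # reconstructed from neural activity. All data here are simulated.
    import numpy as np

    def attended_talker(neural_env, env_male, env_female):
        r_m = np.corrcoef(neural_env, env_male)[0, 1]
        r_f = np.corrcoef(neural_env, env_female)[0, 1]
        return "male talker" if r_m > r_f else "female talker"

    rng = np.random.default_rng(2)
    env_male, env_female = rng.random(500), rng.random(500)   # two voices
    neural_env = env_male + 0.3 * rng.standard_normal(500)    # brain tracks him
    print(attended_talker(neural_env, env_male, env_female))  # -> male talker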
     In their quest to develop the most advanced brain-computer interfaces the world has yet seen, the scientists at CNEP have embarked on a series of experiments to map and mine as many of the brain’s powers as they can. “It’s really about how much information we can get out of the brain,” Chang says. “That’s what my lab is based on.”
     Currently, Maharbiz is trying to devise ECoG grids that take up less space under the skull and interfere as little as possible with the natural workings of the neurons they connect with. He is most famous, though, for another project, in which he implanted electrodes into the brains of living beetles, allowing him to direct the course of the beetles’ flight by remote control. (The project was funded by the U.S. Department of Defense; flying insects could make nifty little spies.)
     Maharbiz’s magic is in the “plumbing,” as he calls it, of getting electrodes and neurons to work together as seamlessly as possible. In one recent experiment, he inserted a porous electrode into a pupating larval beetle’s eye tissue. The tissue grew around and through the electrode until it “was like a part of the eye itself,” he says. Although related human experiments are down the road, the objective is the same: a brain-electrode link that can provide a faster and more reliable information flow.
     Maharbiz works with Jan Rabaey, who is also a professor of electrical engineering and computer science at UC Berkeley. Rabaey is developing a wireless ECoG chip that can be planted in the brain and transmit information directly from each electrode, or network of electrodes, unlike the current ECoG arrays, which transmit through a bunch of wires like the ones protruding from Graves’ head. Since no wires would pass through their skulls, patients would be less prone to brain infections, which can be deadly.
     A paralyzed patient operating a prosthetic arm with wireless implants would imagine grabbing a cup in front of her. Her brain’s electrical impulse would be picked up by the micro–ECoG grids, which would send radio waves through her skull to a receiver, probably worn on her head. The receiver would interpret the signals and direct the arm to grab the cup. Like your electric toothbrush stand, the receiver would also recharge the tiny batteries running the electrodes.
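Put in code, the receiver would run a loop along these lines. Every function name and the 50-times-a-second update rate are placeholders, a sketch of the architecture rather than any real device interface.

    # A schematic receiver-side control loop for the scenario above.
    import time

    def read_radio():
        return {"grip_power": 0.9}       # pretend micro-ECoG packet

    def decode_intent(packet):
        return "grab" if packet["grip_power"] > 0.5 else "hold"

    def move_arm(intent):
        print("arm command:", intent)    # would drive the prosthetic motors

    def control_loop(cycles=3, hz=50):
        for _ in range(cycles):
            move_arm(decode_intent(read_radio()))
            time.sleep(1.0 / hz)         # a fresh command every 20 milliseconds

    control_loop()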
     But a couple of key technical issues need to be resolved before brain-computer interfaces blast off, notably the algorithms and computers that interpret the meaning of the brain’s electric signals. Enter Jose Carmena, an associate professor of electrical engineering, cognitive science, and neuroscience at UC Berkeley and codirector, with Chang, of CNEP. Carmena is known for landmark studies showing that when monkeys learn to use a prosthetic device, the device over time becomes integrated into their motor memory, just as if it were a part of their own body. The experiments demonstrated the brain’s plasticity, its ability to adapt to new conditions, which, according to Chang, “is going to be the secret to unlocking its ability to adapt to the machines we will attach to it.”
     This revolutionary idea was developed by UCSF’s Merzenich, who showed that the brain’s basic MO is to try to make sense of whatever information it is receiving from the world. That’s as true with prosthetics as it is with our eyes and ears. Plug a robotic device into the brain, and if the device sends clear and consistent signals headward, the brain will figure out how to run it. Carmena is now working on getting prosthetic devices to adapt more efficiently to the information they receive from the brain, so people can operate their robotic arms or speech synthesizers as if they were natural parts of themselves. The current prosthetics are so hit-and-miss that many people actually prefer to use the old pirate hooks, which are at least simple to manipulate.
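One way to picture that two-way adaptation, with made-up dimensions and learning rate: let a simple linear decoder nudge its weights, trial by trial, toward whatever mapping shrinks the gap between what the user intended and what the device did. This is a generic adaptive-filter sketch, not Carmena’s experimental setup.

    # Toy decoder adaptation: a linear decoder learns the mapping from
    # simulated neural activity to intended 2-D movement. All numbers
    # are arbitrary.
    import numpy as np

    rng = np.random.default_rng(3)
    true_map = rng.standard_normal((2, 16))   # the "brain": neurons -> intent
    W = np.zeros((2, 16))                     # decoder weights, learned online

    for trial in range(2000):
        neural = rng.standard_normal(16)
        intended = true_map @ neural          # what the user meant to do
        decoded = W @ neural                  # what the prosthetic actually did
        W += 0.01 * np.outer(intended - decoded, neural)  # shrink the error

    print("remaining mismatch:", float(np.linalg.norm(W - true_map)))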
     One of the center’s most forward-looking projects is a design that would be more powerful than Chang’s sheets of electrodes that sit on the cortex. “We call it ‘brain dust,’ ” Maharbiz says, and it consists of wireless nanosensors, each the size of a dust mote, that would be distributed throughout the entire brain like artificial brain cells and wirelessly communicate with computers in the outside world. “It’s still a vision for the future, but we think it could work,” he says.
Rabaey says he foresees a day when “our environments and our bodies are dotted with a swarm of sensors, including ‘brain dust,’ measuring all kinds of information.” When that day arrives, he adds, “people would seamlessly interact with content, the environment, and one another through a collection of interconnected sensors and actuators. While we now experience the world through five senses, there is no reason we could not have many more.”
Sonar, anyone? Telescopic vision? Immersive empathy? Telepathic orgy?
    The success of such mind-bending technology could well mark the beginning of the age of cyborgs long envisioned by science fiction. Techno-geeks, rejoice! But the flip side of that, the idea that someone, or something, could manipulate human brains, evokes the dark elements of films like Surrogates and The Matrix, in which central authorities—or renegade gangsters—hack civilians’ brains and gain control of their thoughts and behavior.
     “The idea of augmenting humans by adding technology, which makes perfect sense to me, also creates a lot of scare factor,” admits Rabaey. “There’s always that boundary between the comfortable component of what we as humans are used to and what we are afraid of.”
     Rabaey, though, isn’t worried. For one thing, “the center’s clinical research is driving the development of its technology,” he says. “No one who doesn’t need one is going to say, ‘Gee, I’m going to get an implant.’ ” He thinks that even when ECoG-level sensors become simple enough—and the possibilities for other kinds of applications become a lot broader—the opportunities for human progress will continue to outweigh the dangers.
     Still, the path to cyber-Eden (or cyber-hell) will have to navigate around critics, who question some of the center’s basic principles. Robert Burton, former chief of the Division of Neurology at UCSF Medical Center at Mount Zion and the author of the forthcoming A Skeptic’s Guide to the Mind: What Neuroscience Cannot Tell Us About Ourselves, insists that a brain-computer interface, contrary to Chang’s predictions, will never read minds.
     “Impossible,” Burton says. “No matter what word emerges from a brain scan or machine, you can’t interpret it unless you know what the patient means. Take ‘hammer.’ Does that mean the patient wants a tool from a toolbox? That he once beat his friend in tennis? The vast majority of what we express is colored by a combination of memories that we can’t or don’t readily recall.”
     Burton is concerned that neuroscientists could be giving families of the disabled false hopes about their loved ones’ future ability to communicate. “It’s remarkable that a brain-computer interface can identify the neurons or group of neurons that cause a disabled patient to raise his middle finger,” he says. “But that doesn’t mean a neuroscientist can ever read that patient’s mind. Maybe the patient is raising his middle finger as a reflex. Or maybe he’s giving his doctor the finger.”
     Chang understands the criticisms: Reading another person’s intentions will always be a challenge. At the same time, he says, nobody knows the limits of mind-reading machines. One thing he does know is that brain-computer interfaces are entering a steep part of the learning curve. This summer, for the first time, Chang and UCSF’s Karunesh Ganguly will implant electrodes in the brain of a quadriplegic patient in San Francisco as part of a clinical trial of their new ECoG-based brain-computer interface, which will include tests of prosthetic limbs and a mind-reading voice synthesizer.
     While Chang remains open-eyed about the future, looking at the big picture and ready to entertain any question about brain-computer interfaces, he insists that his own focus is always on his patients.
     “What drives me is my interest in how the brain works, but also how that can be practically applied to human health,” Chang says. “We’re looking for ways to allow extremely disabled people to regain some functional capacity in the world through prosthetics. We are doing this for them. We have to remember that.”

Gordy Slack is an Oakland-based science writer currently working on a book about epilepsy. His most recent book, about evolution, is The Battle Over the Meaning of Everything.

 

Additional stories:

Hollywood got there first

The Marriage of Brain and Machine: Past, Present, Future