Now Playing

Sensory Overload

TV host, TED Talk visionary, HBO advisor, and entrepreneur David Eagleman is the closest thing neuroscience has to a rock star. But have his big ideas, pumped up with Silicon Valley cash, made him a brain-oil salesman?

SLIDESHOW

David Eagleman, the chief science officer of NeoSensory, has invented the Versatile Extra-Sensory Transducer, which translates sounds into vibrations.

Volunteer Greg Oxley tries on the VEST for Al Jazeera.

Eagleman on his six-hour PBS series, The Brain with David Eagleman.

Eagleman demonstrates his sensory-input device, the VEST, during his March 2015 TED Talk.

David Eagleman’s neurons are going off like kernels in a popcorn machine. Sitting at a curved conference table on the second floor of the sleek offices of NeoSensory, his Silicon Valley startup, the 46-year-old neuroscientist can barely contain his enthusiasm as he explains how a vest-like, vibrating device he invented will endow users with an entirely new perception—a literal sixth sense. People who acquire this new sense, Eagleman says, will be feeling something never experienced before by a human being. “If you were experiencing that, you wouldn’t even have a shared word that you could use to talk about it with me,” he says, his face lighting up in wonderment. “It would be as if you were the only sighted person in the world of the blind.”

Creating a new perception would be one of the greatest scientific achievements in human history—a fact Eagleman is well aware of. “If I’m going to win a Nobel Prize in anything, it’ll be this piece right here,” he says. “We have no idea what the limits of this are. Can you have a sixth sense? Yeah. Can you have a seventh? Probably. Can you have an eighth? I mean, who knows?”

David Eagleman is good at thinking big, and he’s even better at communicating those big thoughts. The scientist has built a thriving second career as a mass-market popularizer, able to convey the complexities of the brain to the public. As the creator and host of a six-hour PBS series, The Brain with David Eagleman, he combines the warm charisma of Neil deGrasse Tyson with the manic energy of Ross Geller on Friends. Nor does Eagleman confine himself to the world of facts and footnotes: He wrote a bestselling collection of short stories, Sum, in which he imagines a universe where God is the size of a microbe, and an afterlife in which humans choose how they’d like to be reincarnated.

After a meandering early career that included serving in the Israeli army, pursuing work as a stand-up comic and screenwriter, and getting a BA in literature from Rice University, Eagleman earned a PhD in neuroscience from Houston’s Baylor College of Medicine in 1998. He spent 13 years as an assistant professor of neuroscience at Baylor, where he also ran his own lab. He has published more than 100 articles in scientific journals like Nature, Science, and the Journal of Cognitive Neuroscience. He’s probably the only Guggenheim Fellow and recipient of the Society for Neuroscience’s Science Educator Award who has appeared on The Colbert Report, where his use of a Pink Floyd lyric led Colbert to ask him, “Are you high?” On the cover of a 2011 issue of Italy’s Style magazine, Eagleman looks broodingly handsome and perfectly at ease. (The prior month, the cover featured Brad Pitt; the month before that, Mick Jagger.) In short, he’s probably the closest thing the neuroscience community has to a rock star.

He’s also one of the few neuroscientists who is a Silicon Valley entrepreneur. Eagleman’s move from the cautious footpath of academia onto the 120-mile-an-hour startup freeway was jump-started by a March 2015 TED Talk he gave in Vancouver, in which he painted a mind-blowing picture of a future in which humans would be unconstrained by their biology, and extolled the virtues of a new device called the VEST—or Versatile Extra-Sensory Transducer—which he claimed pointed the way to humans developing entirely new senses. Just after he stepped off the stage, Eagleman was approached by several VCs eager to invest in the idea. Led by San Francisco’s True Ventures, investors ponied up $4.2 million to fund a new company called NeoSensory, of which Eagleman is a cofounder and the chief science officer.

The next year, he accepted a teaching post as an adjunct professor at Stanford. For Eagleman, descending the academic prestige ladder was easily outweighed by the chance to become a start-up entrepreneur. And Silicon Valley’s unconstrained, go-getter ethos was a perfect fit for him. “Since about 2013, I’ve been wanting to come out here,” he says. “Everybody out here is up to something. It’s a totally different level in terms of people aspiring to do big things.”

They’re also willing to fund big—and unproven—things that mainstream scientific foundations are not. In 2014, Eagleman submitted a 25-page grant application to the National Institutes of Health and the National Science Foundation, describing the VEST and how it works. Both of them rejected his proposal. “One of them said—and I’m not making this up, I can show you the rejection letter— ‘This is not incremental enough,’” he says. “I always thought of incremental as being a bad word. In other words, it was too far-out. It wasn’t a baby step. So they rejected it.”

Eagleman faced no such skepticism from the venture capitalists who signed him up at TED. He describes discovering the world of entrepreneurship as unearthing the other half of a spectrum, one that he hadn’t noticed before. “You put your money where your mouth is and pour your effort into doing a company,” he says.

The VEST is the futuristic invention that catapulted Eagleman out of the incremental universe and into the quantum-leap one. A wearable device that looks like a high-tech tank top, it uses 32 pulsating motors to translate sound into vibrations on an individual’s torso. Its immediate, practical application is as an aid to the hearing-impaired and deaf, who can use it to identify different sounds. But Eagleman has far loftier ambitions for it. He believes it will be able to do nothing less than rewire the human brain.

 

Eagleman’s NeoSensory headquarters in Palo Alto are peak Silicon Valley, almost to the point of parody: skateboards and mountain bikes strewn about the floor, an acoustic guitar propped against a railing, a hammock stretched across the entryway. Animated renderings of the company’s prototypes—from the VEST, studded with glowing orbs, to a similar if less ambitious device, a wristband called the Buzz—are plastered on the wall. When Eagleman greets me on the second floor, he’s working on his laptop. “So, tell me what we’re doing today?” he asks, flashing a broad smile.

We start with a quick tour of the NeoSensory office, strolling past 3-D printers and a sewing machine before Eagleman stops abruptly. “Actually, there’s something really cool that we’re doing right now,” he says, “which is making a little miniature VEST for a two-year-old girl who was born deaf and blind.”

The prototype VEST looks like a teeny, double-sided maroon apron with translucent motors on the front and back. “National Geographic is coming out to film it,” he tells me, “so we’re super hopeful that it works like gangbusters.” Just over a week after my visit on September 1, Eagleman will announce on his Facebook page that he’s “thrilled to host Clarisa”—the two-year-old girl—“here in Palo Alto later this month” and claim that with the help of the patterns of activity in the device’s motors, “she’ll finally be able to tap into the auditory world around her.” On a Facebook page in late September, her parents will post a photo of a smiling Clarisa wearing the custom-size vest, three of the motors across her torso emitting a pale-blue glow. It’s a three-month trial, and “so far, [she] enjoys the tactile experience,” they’ll write.

Just like its adult counterpart, the miniature prototype works by translating incoming sound waves into vibrations. First, an app downloaded onto a smartphone or tablet uses the phone’s microphone to pick up sounds from the surrounding environment and relays them to the VEST via Bluetooth. Then the vest translates the audio into a set of distinct frequency bands, with each band setting off one of the 32 small, oscillating motors spread across the back and chest. The idea, Eagleman explains, is that with time and practice, the brain learns to unconsciously interpret those patterns of vibration.
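To make that pipeline concrete, here is a minimal sketch in Python of how a sound-to-vibration mapping might work. It is an illustration only: the sample rate, frame size, band-splitting, and intensity mapping are assumptions, not NeoSensory's actual signal chain.

```python
import numpy as np

N_MOTORS = 32          # motors spread across the chest and back
SAMPLE_RATE = 16_000   # assumed microphone sample rate, in Hz
FRAME_SIZE = 1_024     # audio samples per processed frame

def frame_to_motor_levels(frame: np.ndarray) -> np.ndarray:
    """Map one audio frame to per-motor vibration intensities in [0, 1]."""
    # Window the frame and take its magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Split the spectrum into one frequency band per motor.
    bands = np.array_split(spectrum, N_MOTORS)
    energy = np.array([band.mean() for band in bands])
    # Compress the dynamic range so quiet sounds still register.
    levels = np.log1p(energy)
    peak = levels.max()
    return levels / peak if peak > 0 else levels

# A 440 Hz tone should light up mostly the low-frequency motors.
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
print(frame_to_motor_levels(np.sin(2 * np.pi * 440 * t)).round(2))
```

The one design choice worth noting is the logarithmic compression, which keeps quiet sounds perceptible without letting loud ones saturate the motors.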

“We have these games where [your] phone presents the VEST with two words, and you have to select which word it was,” he says. “When you select the word, you get feedback on whether you’re right or wrong. At first you have no idea what you’re doing, but it’s just like these foreign language learning programs, where you’re making guesses and getting feedback. People’s performance improves over time, drastically.” When I ask if Eagleman has worn the VEST enough to get a sense of that process, he deflects the question. “It’s harder for a hearing person to learn the language of the VEST than a deaf person,” he says. “The reason I wear it all the time is to test things like battery life [and] safety.”
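The training game Eagleman describes is a simple two-alternative forced choice. Below is a hedged sketch of what one round might look like; the two-word vocabulary and the pattern-playback stub are hypothetical stand-ins for the real app.

```python
import random

WORDS = ("dog", "ship")  # hypothetical two-word vocabulary

def play_on_vest(word: str) -> None:
    # Placeholder: a real implementation would synthesize audio for
    # the word and drive the vest's motors with the resulting pattern.
    print("[the VEST plays the pattern for a hidden word]")

def training_round() -> bool:
    """One round: feel a hidden word's pattern, guess, get feedback."""
    target = random.choice(WORDS)
    play_on_vest(target)
    guess = input(f"Which word did you feel? {WORDS[0]} or {WORDS[1]}: ").strip()
    correct = guess == target
    print("Right!" if correct else f"No, it was '{target}'.")
    return correct

# Accuracy climbing above the 50 percent chance level over many
# rounds is the signal that the brain is learning the code.
```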

More than 100 test subjects have tried the VEST. According to Eagleman, they’ve improved each day, going from being able to interpret sounds that require no learning curve (a knock on the door) to understanding more complex noises (a bird chirping). Right now, that improvement remains to be demonstrated—Eagleman is still working on several peer-reviewed studies—but nonetheless, he plans to bring the product to market by 2019 at under $1,000. As a “noninvasive” product, it will not require approval by the FDA.

Last spring, a demonstration of the VEST’s capabilities generated some publicity, though it did little to establish the product’s scientific merit. Eagleman went to L.A. to film a BuzzFeed video of people, both deaf and hearing, wearing the VEST to help settle a rap showdown—whether Drake’s “More Life” or Kendrick Lamar’s “Damn” felt better. (For what it’s worth, Kendrick won.)

Eagleman is just as enthusiastic about NeoSensory’s newest gadget, the Buzz. Like the VEST, it works by capturing incoming sound and converting it to patterns of vibration. While Eagleman believes the VEST is capable of enabling deaf individuals to learn “the entirety of language,” the wristband has a more modest, if still ambitious, purpose: to give deaf people awareness of their environment. It alerts the wearer to anything from the siren of an oncoming fire truck to the wail of a crying baby. “What we discovered is that people who are deaf really dig that,” he explains. “But there are also all of these other things we didn’t think of. Like, they really like listening to music with it.”

Eagleman reaches for his laptop to show me a video he’s “pretty jazzed about.” As inspirational music plays in the background, a young man uses sign language while subtitles flash across the bottom of the screen, a blocky prototype visible on his wrist. “Hi, my name is Phillip Smith,” he begins. “I want to talk to you about the Buzz.” Smith goes on to say that the wristband helps him feel a new level of connection to the world and the people around him. “I got teary-eyed the first time I used the Buzz,” he says. “I put it on and I was like, Wow! I could feel everything on my own.” Later, when I speak with Smith on the phone through a video relay service, he reaffirms those sentiments. “I really had a connection with everything,” he tells me. “It made me feel like I wasn’t lonely. Sometimes, when I try to read lips, I’m not sure if somebody is mad or not. But with this, I can understand their emotion. I can [feel] how loud they’re speaking, which indicates anger.”

When the video finishes, I ask Eagleman how it felt to watch Smith’s reaction. “It was like, this is what you hope for when you pour years of your life into a project,” he answers. “When Phillip said, ‘I cried,’ I felt like, boy, that is the greatest thing ever.”

 

Eagleman’s devices are technically sophisticated, but there’s nothing new about the ideas behind them. Sensory substitution technologies have been around for almost 200 years. The underlying concept is simple enough: feeding information meant for one sense through another sensory channel. Braille is an excellent example of this—information that is normally conveyed visually is instead relayed through tactile sensation. “Historically, [sensory substitution] was a main strategy of people who were trying to help individuals who were either blind or deaf to recover from their fundamental deficits,” says Michael Merzenich, neuroscience professor emeritus at UCSF. “People have attempted for a long time to encode speech by pattern stimulation.”

In the late 1960s, neuroscientist Paul Bach-y-Rita created a device that enabled people to substitute tactile sensation for vision. People who had been blind since birth were seated in a modified dental chair. A computer turned filmed images of objects into pixels, which were then converted to vibrating pins in the back of the chair. (Think of them as tactile pixels that the blind users would feel with their skin.) Remarkably, subjects were able to differentiate between faces and shadows. They could even identify a coffee mug.
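The "tactile pixels" idea is easy to express in code. Here is a minimal sketch, assuming a 20-by-20 pin grid; the grid size and mapping are illustrative, not an exact model of Bach-y-Rita's hardware. Each cell of a downsampled camera image becomes the intensity of one vibrating pin.

```python
import numpy as np

PIN_ROWS, PIN_COLS = 20, 20  # assumed pin-grid dimensions

def image_to_pins(image: np.ndarray) -> np.ndarray:
    """Downsample a grayscale image to per-pin vibration intensities."""
    rows = np.array_split(np.arange(image.shape[0]), PIN_ROWS)
    cols = np.array_split(np.arange(image.shape[1]), PIN_COLS)
    pins = np.array([[image[np.ix_(r, c)].mean() for c in cols] for r in rows])
    return pins / 255.0  # normalize brightness to [0, 1]

# A bright square on a dark background becomes a patch of strongly
# vibrating pins that a seated user feels through their back.
img = np.zeros((200, 200))
img[60:140, 60:140] = 255
print(image_to_pins(img).round(1))
```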

Bach-y-Rita’s revolutionary experiment provided substantial evidence for the idea that the brain can rewire its own circuitry to replace one sense with another. Neuroscientists agree that the brain plays a crucial function in creating the reality we perceive. “The sense that you have of the world around you is not really directly inherited from your eyes or your ears,” says Dan Feldman, a professor of neurobiology at UC Berkeley. “[It’s] an internal construction based on the activity patterns that your brain receives from those sensory organs.” They also accept neuroplasticity—the notion that the brain changes over time. And, to a degree, they support the idea of sensory substitution, as demonstrated in Bach-y-Rita’s work.

Several devices that take advantage of sensory substitution have been invented over the past few decades, including a device called the vOICe (“Oh, I See!”), which turns visual information into sound, and the BrainPort, which uses a video camera mounted on a pair of sunglasses and an electro-grid that sits on the user’s tongue, generating bubble-like patterns of sensation. Robert Beckman, the CEO of Wicab, the company that Bach-y-Rita founded in 1998 to develop the BrainPort, is careful to acknowledge the limitations of the device. “It’s appropriate for displaying, and the user interpreting, simple information” like staying inside a crosswalk, he says.

Eagleman, however, makes much more sweeping, even revolutionary, claims for what his devices can achieve. He takes the possibilities of brain plasticity and sensory substitution much further than his peers, describing different sensory organs (eyes, ears, skin) as “plug-and-play” devices that can be switched out for one another— and whose interface with the brain will result in entirely new perceptions. “The idea is that the brain is a general-purpose computational device,” Eagleman says. It’s a phrase that he repeats so often during our conversations that it almost becomes a mantra. “Whatever data you feed in, it incorporates that into its model of the world.” Eagleman is convinced that if he feeds in data—whether stock market information or an astronaut’s report on the functioning of a space station—to subjects wearing his VEST, their brains will learn to create a new perception. It won’t be like vision, hearing, smell, or taste, or even much like touch. It will be an entirely new experience.

Eagleman mentions several examples of peripheral devices in the animal kingdom: Mice use whiskers; the star-nosed mole uses its unique appendage, equipped with 25,000 sensory receptors, to feel seismic waves in the environment; cows align themselves with Earth’s magnetic field. “All of this made me think, God, it’s like the brain doesn’t particularly care what your sensors are,” he says. “It just adapts to how it can get the most out of its interaction with the world.”

 

Eagleman’s colleagues praise him as a gifted communicator who’s been able to lucidly present developments in neuroscience to the public. Jeffrey Yau, an assistant professor of neuroscience at Baylor, got to know him when they were colleagues. “He could take science and distill it down to packets of information that he could convey to any audience. That was a skill that, as someone just starting my lab, I wanted to develop and learn.”

But some of his peers are less impressed by Eagleman’s visionary ideas and wildly ambitious claims. What plays well on a PBS show, in a TED Talk, or in a VC suite does not mesmerize researchers who want to see evidence. Merzenich, the UCSF neuroscientist who is also a leading expert on neuroplasticity, scoffs at Eagleman’s grandiose claims. “He’s exaggerating the effectiveness with which you can achieve sensory substitution with his device,” he says. “That’s not to say it wouldn’t be useful. Lipreading is helpful to somebody interpreting ongoing sound. It’ll probably be helpful on about that level.” Merzenich references a point in Eagleman’s TED Talk when Eagleman talks about “artificial hearing,” more commonly known as a cochlear implant. “He implies that he can make a device that’s delivering input through the skin that works better,” Merzenich says. “And he’s full of shit about that.”

For Merzenich, the idea that interpreting vibrations on the torso could allow people to “hear” in the same way as a cochlear implant—a tool that essentially replaces the function of a damaged inner ear—is preposterous. “For a majority of people, this is a miracle, right?” he says, referring to the surgically implanted device. “And you’re not going to get that miracle, on that same level, from pattern stimulation to the skin. The machinery of the brain is just not organized in the same way to accomplish the same kind of result.” Touch simply does not provide sufficient high-quality information for the brain to create an analogue of hearing, Merzenich argues. “Input in the somatosensory system”—where our senses of touch, pressure, and pain come from—“is relatively low speed, and it’s poorly resolved in time and space. From the point of view of delivering input through the skin, it’s just not going to be represented in the same fidelity. The direct translation is just limited.”

In other words, if our ears have sophisticated enough receptors to process sensory information at the level of James Joyce’s Ulysses, the receptors on our skin—especially on our torso—are so crude that they can only process The Cat in the Hat. In fact, the skin may only be able to process individual letters. Merzenich says that Eagleman “completely misrepresents” the differences between the nature of input delivered to the eyes, the ears, and the skin. Why? “It might be because he’s trying to keep the story understandable. It might be because he doesn’t understand it.” There’s also a third possibility, one raised by Eagleman’s dual status as a researcher and an entrepreneur: It might be because he’s trying to sell it.

Feldman, the UC Berkeley neurobiologist, is skeptical that a vibrating vest can achieve anything significantly different from the low-tech tools we already have. “Think about a blind person walking down the street using a cane,” he says. “The vibration patterns coming down the end of the cane are a clue about whether there’s a curb or a signpost coming. From the vibrations, people can build up a mental image of the world around them.” Far from creating a new sense with his VEST, Eagleman is simply coding information into an existing sensory system, the way a cane does. “He’s making a tool,” Feldman says. “You can describe it in a fancy way, I guess, but fundamentally it’s really no different from any type of tool that we use to see something that isn’t readily seen.”

Merzenich also doesn’t buy Eagleman’s claim that sensory substitution can result in the brain generating new senses. “From the Eagleman point of view, he talks as if he’s taking input from any source, and the machinery of the brain is going to create some powerful and profound representation,” Merzenich says. “That’s a real big simplification.” Feldman makes the same point. “The brain is capable of processing information from multiple sensory streams and multiple organs. That doesn’t mean that the brain perceives it and treats it the same way.” Referring to the VEST, he says, “I think it’s unlikely that you’re going to develop a new sense for stock market information, for example, that feels different from any other sense. I’m sure you’ll be able to intuitively recognize certain types of vibration patterns as meaning certain things. That doesn’t mean that it’s going to feel like the stock market moved. It’s going to feel like patterns on your torso.”

Feldman also raises a foundational question about Eagleman’s claims. One of the bedrock tenets of science is testability: A hypothesis must be capable of being tested. But if we did develop new senses, how could that ever be proved? “Let’s imagine that someone did have a fundamentally different sense,” he says. “How would they describe that to us in a way that we could convincingly know? You’re going to have to change the topic of your article from neuroscience to philosophy to answer that one.”

Merzenich says that he doesn’t like to discourage anyone and hates to dump cold water on Eagleman’s claims. “You have to like his enthusiasm and his dreamy imagination,” he says. “But there’s a touch of unreality to it all.”

 

Over lunch at a Japanese restaurant that Eagleman says was a favorite of Steve Jobs’s, he excitedly describes how he’s been working as the scientific adviser for HBO’s sci-fi series Westworld. Viewers can expect the VEST to make an appearance in the show’s second season. “I’m calling it VESTworld now,” Eagleman laughs. The conversation moves to video games, where Eagleman envisions the VEST allowing players to intuitively feel the world of, say, a fighting game, processing each blow from an enemy.

But when I ask why his peers in the neuroscience community doubt that it’s really possible to add new senses, Eagleman’s expression abruptly becomes stern and his features set. “I don’t even think we have enough people thinking about this question,” he answers. “In my mind, I have to confess that I wouldn’t care at all if there’s a consensus or not. I feel like my job is to prove this one way or the other. Maybe I’m wrong, maybe I’m right. But I think I’m right.”

A moment later, however, Eagleman is back in full evangelical mode. We’re not just on the cusp of cultivating new senses, he proclaims. We’re on the precipice of a veritable new era of scientific discovery. “If I were phrasing it at my most blue-sky optimistic,” he says, “this is the very beginning of a new era of people exploring not other lands the way they did 500 years ago, but other parts of the data spectrum so that we can discover completely new things out there that we didn’t even know about. I feel like we’re at the absolute foot of the mountain here. Which is to say, we’re just about to expand what it means to be a human right now.”

For the moment, NeoSensory is sprinting to the finish line to get its products on the market. Eagleman is pushing to get the Buzz in stores by late 2018, a year ahead of the VEST. He says his company is primarily doing technical tweaks on things like battery life and manufacturing various parts of the circuit board. But it’s also conducting psychophysical studies that ask participants questions like “What is the softest vibration you can feel?” and “Can you tell which single motor is on?” in an effort to establish how well users can recognize and discriminate between different sounds.

Eagleman says that while the deaf community is the first one his devices will serve, their appeal will be far broader. “The Buzz can be used with our open API to feed in any data stream at all. In this way, it’s a platform for anyone who wants to try any new sense they can imagine,” he says. “We’ve thought of a lot, but the wider world out there will think of many more. I think we’re going to see a proliferation of new senses over the next two years.” From a commercial point of view, whether Eagleman’s devices actually create new senses might be irrelevant: All that will matter is that people think they might create them—or simply be curious enough to try them.
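What plugging an arbitrary stream into such an API might look like is easy to imagine. The sketch below is purely hypothetical: the Wristband class, its vibrate method, and the four-motor layout are illustrative stand-ins rather than NeoSensory's actual API, and the "market feed" is just a random walk.

```python
import random
import time

class Wristband:
    """Stand-in for a Bluetooth-connected four-motor wristband."""
    def vibrate(self, levels):
        # A real device would receive these intensities over Bluetooth;
        # here we simply print them.
        print("motors:", [round(v, 2) for v in levels])

def price_delta() -> float:
    """Fake data source: a random walk standing in for a market feed."""
    return random.gauss(0, 1)

band = Wristband()
for _ in range(5):
    delta = price_delta()
    strength = min(abs(delta), 3) / 3  # clamp and scale to [0, 1]
    # Encode sign spatially: gains buzz one pair of motors, losses the other.
    levels = [strength, strength, 0, 0] if delta > 0 else [0, 0, strength, strength]
    band.vibrate(levels)
    time.sleep(0.1)
```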

As for his peers’ view that his belief in humans developing new senses is an unproven, and highly questionable, hypothesis, Eagleman remains unfazed. “Whether or not the scientific community agrees in 2017 doesn’t matter,” he says. “The single thing that matters is what we can show in 2018.”

 

Originally published in the January issue of San Francisco
