Last summer, Ford Motor Co. made a bold announcement: it intends to have a self-driving car on the road by 2021. But unlike other companies pursuing that goal, Ford envisions a vehicle that won’t have a steering wheel, brake pedal, accelerator, or—most significantly—a human in the driver’s seat, relying instead on a fully autonomous system.
If Ford’s ambitious plan comes to fruition, it will owe its success in part to a Cornell neuroscientist, Sheila Nirenberg, whose research in visual perception could make driverless cars operate more efficiently. Her work has led to advances in other arenas, too, including the invention of a bionic eye that could potentially restore sight to the blind.
A professor of physiology and biophysics at the Medical College, Nirenberg has had her work featured in PBS and BBC documentaries, and she’s a two-time TED speaker whose 2011 talk has been viewed online more than 380,000 times. In 2013, the MacArthur Foundation gave her one of its prestigious “genius” grants. She says that the award—which carries an unrestricted $625,000 cash prize—has added great credibility to the projects she’s trying to advance. “Being a woman in a very technical field, I’m an underdog,” she says. “So those things matter.”
Nirenberg, a New York native, holds a PhD in neuroscience from Harvard; she was an assistant professor at UCLA before being recruited by Weill Cornell Medicine in 2005. A few years later, she succeeded in cracking the retina’s neural code—essentially, the language that the retina uses to tell the brain what you’re looking at. When a sighted person looks at an object, reflected light brings the image to photoreceptor cells in the eye. That information is processed through neural circuits and passed to the retina’s ganglion cells, which convert it into a kind of Morse code transmitted through electrical impulses to the brain. But in people with degenerative eye diseases, photoreceptors die—so visual images never make it into the retina. By deciphering the neural code, Nirenberg figured out how to bypass those damaged cells entirely.
This breakthrough sparked Nirenberg’s idea for an artificial retina, a prosthetic that could help the 25 million people worldwide who suffer from blindness due to macular degeneration and other diseases. Current prosthetics offer limited vision—one type can see bright light against a dark background, for instance, but not a detailed picture—and they require that electrodes be surgically implanted in the eye. With Nirenberg’s device, no operation would be necessary. Instead, a patient would go to a doctor’s office to be injected with a special light-sensitive protein that lets the eye interact with visor-like glasses containing a camera and “encoder” chip. The glasses take in images and translate them into the neural code before sending that code into the eye via patterned pulses of light—mimicking what the retina does naturally. Nirenberg says that trying to make a device without the code would be sort of like speaking English to a person who only knows French. “You can say it louder and louder, but if it’s not in French, the person still won’t be able to understand you,” she explains. “With the neural code in hand, we can speak the right language and can send meaningful signals to the brain.”
So far, she says, testing in blind mice has shown great promise in the prosthetic’s ability to bring back sight. Nirenberg is currently trying to raise enough money from investors to go through FDA approval and launch clinical trials in humans. Bringing a medical product to market is a lengthy and costly process, but Nirenberg—who founded her own company, Bionic Sight, to further the device—felt compelled to continue. “It’s so complicated,” she says. “If I handed it off to someone else, it might never happen. Lots of things get shelved.”
Meanwhile, her retina work prompted another venture—developing a technology that gives “sight” to robots. In 2015, she set up a second company, Nirenberg Neuroscience, which built a computer platform based on the retinal code so machines can make sense of their surroundings, much like humans do. “The code of the eyes is an incredibly important part of what makes the brain smart,” she says. “So the idea was, could it simplify the world in a way so a robot’s brain can understand it? And that ended up solving a whole bunch of classic artificial intelligence problems.” For self-driving vehicles, that means Nirenberg’s software allows them to navigate obstacles, recognize faces, and perform other key tasks. Last year, Nirenberg signed an exclusive licensing deal with Ford, with the automaker noting that the partnership would help bring “humanlike intelligence” to its autonomous cars, which would first be available commercially for ride-sharing and ride-hailing services. As Ken Washington, Ford’s vice president of research and advanced engineering, told the Detroit News: “We think this could be game-changing and provide a capability that will really be pretty important for our vehicles.”
And Nirenberg hopes that her findings will inspire innovations in other fields. She believes the same framework used to decrypt the retinal code could be used to create improved cochlear implants to treat deafness, as well as better prosthetics to aid those who are paralyzed or have had a stroke. Given her current workload, Nirenberg says she doesn’t have much time to expand on these notions, but she’d be thrilled if her research inspires others to do so. “If we can understand the code, the language of the brain,” she told the audience at her first TED talk, “things become possible that didn’t seem obviously possible before.”