New artwork created by artificial intelligence does weird things to the primate brain.
When shown to macaques, AI-generated images purposefully caused nerve cells in the monkeys’ brains to fire more than pictures of real-world objects did. The AI could also design patterns that activated specific neurons while suppressing others, researchers report in the May 3 Science.
This unprecedented control over neural activity using images may lead to new kinds of neuroscience experiments or treatments for mental disorders. The AI’s ability to play the primate brain like a fiddle also offers insight into how closely AIs can emulate brain function.
The AI responsible for the new mind-bending images is an artificial neural network — a computer model composed of virtual neurons — modeled after the ventral stream, a neural pathway in the brain involved in vision (SN Online: 8/12/09). The AI learned to “see” by studying a library of about 1.3 million labeled images. Researchers then instructed the AI to design pictures that would affect specific ventral stream neurons in the brain.
Viewing any image triggers some kind of neural activity in a brain. But neuroscientist Kohitij Kar of MIT and colleagues wanted to see whether the AI’s deliberately designed images could induce specific neural responses of the team’s choosing. The researchers showed these images to three macaques fitted with neuron-monitoring microelectrodes.
In one experiment, the AI aimed to create patterns that would activate neurons at a specific site in the ventral stream as much as possible, regardless of how it affected other neurons. In 40 of the 59 neural sites tested, AI-made pictures caused neurons to fire more than any image of a real-world object, such as a bear, a car or a face. The AI’s images generally caused neurons to fire 39 percent more than their maximum response to real-world images. Even when the monkeys were shown patterns previously designed by researchers specifically to trigger ventral stream neurons, the AI designs made these neurons fire at higher rates.

In another test, the AI crafted patterns meant to make the neurons at one target site go wild, while minimizing the activity of others. For 25 of 33 sites, AI-created images isolated neural activity to the target site significantly better than real-world images. Although this manipulation is not yet perfect, future AIs with more sophisticated designs and more training data may wield finer control, says study coauthor Pouya Bashivan, a computational neuroscientist at MIT.
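The second objective — drive one site while quieting the rest — can be conveyed with a minimal toy sketch. To be clear, this is not the study’s model or code: a frozen random linear map stands in for the trained network’s image-to-response mapping, and the function names and penalty form are illustrative assumptions. Gradient ascent on the “pixels” then boosts the target site’s predicted response while penalizing all other sites.

```python
import numpy as np

# Toy stand-in for the study's setup (hypothetical, NOT the authors' code):
# a frozen random linear map plays the role of the trained network's
# image -> neural-site response model.
rng = np.random.default_rng(0)
n_pixels, n_sites = 64, 8
W = rng.normal(size=(n_sites, n_pixels))  # frozen stand-in "model"

def responses(img):
    """Predicted firing of each neural site for a flattened image."""
    return W @ img

def objective(img, target, penalty=1.0):
    """High when the target site fires strongly and all others stay quiet."""
    r = responses(img)
    off_target = np.delete(r, target)
    return r[target] - penalty * np.sum(off_target ** 2)

def gradient(img, target, penalty=1.0):
    """Analytic gradient of the objective for this linear toy model."""
    r = responses(img)
    g = W[target].copy()
    for i in range(n_sites):
        if i != target:
            g -= 2.0 * penalty * r[i] * W[i]
    return g

# Gradient ascent on the image, with "pixels" clipped to a valid range.
img, target = np.zeros(n_pixels), 3
for _ in range(1000):
    img = np.clip(img + 0.004 * gradient(img, target), -1.0, 1.0)

r = responses(img)
print(r[target], np.abs(np.delete(r, target)).max())
```

In the actual experiments the optimization ran against a deep network trained on roughly 1.3 million labeled images, and the synthesized pictures were then shown to the monkeys; this sketch only conveys the shape of the optimization idea.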
“This is magnificent technical progress,” says Arash Afraz, a neuroscientist at the National Institute of Mental Health in Bethesda, Md., not involved in the study.
In neuroscience experiments, researchers “may want to induce a specific pattern of activity in the brain” to learn what different neurons are responsible for, Afraz says. “The direct way of doing that is to roll your sleeves up, open up the skull and stick something in there,” like electrodes. “Now, we have a new tool in our toolbox” to noninvasively tinker with neurons in ways that weren’t possible before.
AI-rendered images that orchestrate neural activity may also lead to new treatments for mental health problems like post-traumatic stress disorder, anxiety or “anything that would have to do with mood,” Bashivan says. Similar to the way that people use light therapy boxes to assuage seasonal affective disorder (SN: 4/23/05, p. 261) or look at peaceful nature scenes to calm down (SN: 11/10/18, p. 16), people may someday be soothed by gazing upon images that an AI tailor-made to boost mood.
These experiments not only demonstrate a new technique to manipulate neurons, but also provide new insight into the nature of AI. Artificial neural networks are neuroscientists’ best computer models of the ventral stream. The virtual neurons in these computer programs are arranged in an architecture similar to that of their biological counterparts, and these AIs are great at recognizing objects in photographs. But there’s been some debate about how truly brainlike these AIs are, in terms of how they process and understand visual inputs, says Ed Connor, a neuroscientist at Johns Hopkins University not involved in the work.
The fact that monkey neurons responded to AI-created images just as the AI intended suggests that this computer program does indeed understand visual information in a way that’s similar to the primate brain, Connor says. “This nails it in a way that will convince skeptics, including myself.”
If artificial neural networks actually “see” in a way that closely mimics the brain, studying these AI programs may help scientists better understand human vision. Researchers of the future might forgo monkeys and mice, and probe neural goings-on inside AIs instead (SN: 6/9/18, p. 14).
Experimenting on such virtual neurons could offer “a way of letting you do any dream experiment you would like on a system that’s completely accessible in a way that the brain isn’t,” Connor says.