Physicists’ theories, biology’s brains work best when they’re models of efficiency

When scientists talk about computer models, they don’t mean little toy facsimiles of a PC or Mac. A computer model is a digital representation of some piece of reality. It’s a translation of matter and motion into math, so a computer can calculate how a process will unfold under various circumstances.

Of course, scientists devised models even before computers; it was just harder to do the math. They did their own calculations, using models to guide their expectations of what a natural system would do or how an experiment would turn out.

When you think about it, that’s how all life gets along in the world. Living things make a model of their environment when deciding what to do to survive. That model allows an organism to predict the probable outcomes of different actions. Whether it’s a bacterium “deciding” to swim toward food or away from poison, or a human choosing to slam on the brakes or run a red light, models of how the world works guide life’s activities. Understanding those models might just explain how laws governing biology are intimately related to the laws of physics.

Models are like maps for navigating the world successfully. And what makes a good map also makes a good model: enough information to get you where you want to go, but not so much that the map is too unwieldy to interpret (or fold up). You want a model to predict the outcome of your actions without excess baggage — in this case, irrelevant information.

“In other words,” write computer scientist Susanne Still and colleagues in a recent Physical Review Letters paper, “the model should contain as little dispensable non-predictive information as possible.”
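To make that concrete, here is a toy sketch in code (my own illustration, not an example from the paper): an environment whose next value depends only on the sign of its current value. A model that memorizes the exact value stores dispensable, non-predictive detail; a single bit (is the signal positive or negative?) predicts just as well.

```python
# Toy illustration (not from the paper): the environment's next value depends
# only on the sign of the current value. Storing the full value retains
# non-predictive detail; storing the sign alone predicts just as well.
import random

def next_signal(x: float) -> float:
    """Environment dynamics: drift toward +1 or -1 depending on sign(x)."""
    drift = 1.0 if x >= 0 else -1.0
    return drift + random.gauss(0.0, 0.1)

def minimal_model(x: float) -> bool:
    """All the predictive content of x is its sign: one bit of memory."""
    return x >= 0

x = 0.5
for _ in range(3):
    predicted_drift = 1.0 if minimal_model(x) else -1.0
    x = next_signal(x)
    print(f"predicted drift {predicted_drift:+.1f}, actual next value {x:+.2f}")
```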

To build their models of reality, brains and other living things require energy to process signals from the external world. Basically, Still and collaborators say, the model requires energy to store information about those external signals. Some of that information helps make useful predictions, but some represents “useless nostalgia.” Comparing the useful with the useless memory provides a measure of the efficiency of the model (greater amounts of useless nostalgia mean less efficiency). And — here comes the interesting point — the amount of useless information (properly measured) turns out to be the same as the amount of unnecessary energy used to process the signals.
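Schematically, that bookkeeping can be written down. In the sketch below (the symbols are illustrative, not the paper's verbatim notation), the model's state s_t carries mutual information about the current signal x_t; the part that also bears on the next signal x_{t+1} is predictive, and the leftover, the nostalgia, equals the average work dissipated, measured in units of k_B T:

```latex
% Schematic notation (illustrative, not lifted verbatim from the paper):
% s_t = the model's internal state, x_t = the external signal at time t.
\begin{align*}
  I_{\mathrm{mem}}(t)  &= I[s_t ; x_t]
      && \text{memory: what the state stores about the current signal} \\
  I_{\mathrm{pred}}(t) &= I[s_t ; x_{t+1}]
      && \text{predictive power: what it says about the next signal} \\
  \underbrace{I_{\mathrm{mem}}(t) - I_{\mathrm{pred}}(t)}_{\text{useless nostalgia}}
      &= \beta \,\langle W_{\mathrm{diss}} \rangle,
      && \beta = 1/(k_B T)
\end{align*}
```

On this accounting, a model that remembers nothing beyond what helps prediction dissipates no avoidable work.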

“In summary, the unwarranted retention of past information is fundamentally equivalent to energetic inefficiency,” write Still, of the University of Hawaii at Manoa, and collaborators at Lawrence Berkeley National Laboratory and the University of California, Berkeley.

Evolutionary biologists have often commented on how important energy efficiency is to living things. Whether you’re talking about animals foraging for food or single brain cells converting chemical inputs into electrical outputs, using energy efficiently is important for survival.

In fact, Still and colleagues note, some researchers believe maximum energy efficiency in nerve cell operation may be “a unifying principle governing neuronal biophysics.” Since nerve cells are the brain’s key processors of information, the importance of energy efficiency for their jobs suggests a deep energy-information connection.

In other words, the brain’s job of processing information to predict a decision’s outcome depends on efficient models — models that jettison useless nostalgia but remember enough so that decisions aid survival. An “efficient” model in the memory sense is also an efficient model in the energy sense. This link of predictive power and energy efficiency suggests a deep principle connecting life and physics, an information-energy equivalence that underlies both the physics of computing and the biology of brains.

This information-energy connection seems to operate on all scales, from organisms to cells to the molecular complexes that perform construction and service jobs in the cell’s interior, Still and colleagues declare. These processes, driven by energetic input that allows for the creation of structural order (rather than the random mess of systems at equilibrium), make life itself possible.

“We have provided a connection between nonequilibrium thermodynamics and learning theory, by making precise how two important aspects of life are fundamentally related: making a predictive model of the environment and using available energy efficiently,” Still and collaborators comment.

Other work has shown information-energy connections in physics. Erasing information (say, from a computer memory), for example, always dissipates a minimum amount of energy, a rule known as Landauer’s principle. But the new work refines that principle and develops the connection more broadly.
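For scale, a quick back-of-envelope check (my sketch, not code from the paper): at temperature T, erasing one bit costs at least k_B T ln 2 of energy as heat.

```python
# Back-of-envelope check of Landauer's limit: erasing one bit of
# information costs at least k_B * T * ln(2) of energy as heat.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, 2019 SI)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy (joules) to erase one bit at the given temperature."""
    return K_B * temperature_kelvin * math.log(2)

if __name__ == "__main__":
    # At room temperature (~300 K) the limit is tiny -- about 3e-21 joules --
    # but real computers dissipate many orders of magnitude more per bit.
    print(f"{landauer_limit(300):.3e} J per bit at 300 K")
```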

So for living things, maximizing predictive power is good, but only to the extent that available energy can be used efficiently. It’s no coincidence that the superior information-processing power of the human brain evolved as more efficient use of energy became possible through innovations like cooking. It was those superior brains, of course, that made possible the making of scientific models and of science itself.

SN Prime | November 12, 2012 | Vol. 2, No. 43

Tom Siegfried is a contributing correspondent. He was editor in chief of Science News from 2007 to 2012 and managing editor from 2014 to 2017.
