Self as Symbol

The loopy nature of consciousness trips up scientists studying themselves

This essay is part of Demystifying the Mind, a special report on the new science of consciousness. The next installments will appear in the February 25 and March 10 issues of Science News.

When Francis Crick decided to embark on a scientific research career, he chose his specialty by applying the “gossip test.” He’d noticed that he liked to gossip about two especially hot topics in the 1940s — the molecular basis for heredity and the mysteries of the brain. He decided to tackle biology’s molecules first. By 1953, with collaborator James Watson (and aided by data from competitor Rosalind Franklin), Crick had identified the structure of the DNA molecule, establishing the foundation for modern genetics.

A quarter century later, he decided it was time to try the path not taken and turn his attention to the brain — in particular, the enigma of consciousness.

At first, Crick believed the mysteries of consciousness would be solved with a striking insight, similar to the way the DNA double helix structure explained heredity’s mechanisms. But after a while he realized that consciousness posed a much tougher problem. Understanding DNA was easier because it appeared earlier in life’s history; the double helix template for genetic replication marked the beginning of evolution as we know it. Consciousness, on the other hand, represented evolution’s pinnacle, the outcome of eons of ever-growing complexity in biochemical information processing.

“The simplicity of the double helix … probably goes back to near the origin of life when things had to be simple,” Crick said in a 1998 interview. “It isn’t clear there will be a similar thing in the brain.”

M.C. Escher’s “Drawing Hands”

In fact, it has become clear that deciphering consciousness will be far more difficult than describing the dynamics of DNA. Crick himself spent more than two decades attempting to unravel the consciousness riddle, working on it doggedly until his death in 2004. His collaborator, neuroscientist Christof Koch of Caltech, continues that work today, as do dozens of other scientists pursuing a similar agenda — to identify the biological processes that constitute consciousness and to explain how and why those processes produce the subjective sense of persistent identity, the self-awareness and unity of experience, and the “awareness of self-awareness” that scientists and philosophers have long wondered about, debated and sometimes even claimed to explain.

So far, no one has succeeded to anyone else’s satisfaction. Yes, there have been advances: Understanding how the brain processes information. Locating, within various parts of the brain, the neural activity that accompanies certain conscious perceptions. Appreciating the fine distinctions between awareness, attention and subjective impressions. Yet with all this progress, the consciousness problem remains popular on lists of problems that might never be solved.

Perhaps that’s because the consciousness problem is inherently similar to another famous problem that actually has been proved unsolvable: finding a self-consistent set of axioms for deducing all of mathematics. As the Austrian logician Kurt Gödel proved eight decades ago, no such axiomatic system is possible; any consistent system rich enough to express arithmetic contains true statements that cannot be proved within the system.

Gödel’s proof emerged from deep insights into the self-referential nature of mathematical statements. He showed how a system referring to itself creates paradoxes that cannot be logically resolved — and so certain questions cannot in principle be answered. Consciousness, in a way, is in the same logical boat. At its core, consciousness is self-referential awareness, the self’s sense of its own existence. It is consciousness itself that is trying to explain consciousness.

Self-reference, feedback loops, paradoxes and Gödel’s proof all play central roles in the view of consciousness articulated by Douglas Hofstadter in his 2007 book I Am a Strange Loop. Hofstadter is (among other things) a computer scientist, and he views consciousness through lenses unfamiliar to most neuroscientists. In his eyes, it’s not so bizarre to compare math and numbers to the mind and consciousness. Math is, after all, deeply concerned with logic and reason — the stuff of thought. Mathematical paradoxes, Hofstadter points out, open up “profound questions concerning the nature of reasoning — and thus concerning the elusive nature of thinking — and thus concerning the mysterious nature of the human mind itself.”

Enter the loop

In particular, Hofstadter seizes on Gödel’s insight that a mathematical formula — a statement about a number — can itself be represented by a number. So you can take the number describing a formula and insert that number into the formula, which then becomes a statement about itself. Such a self-referential capability introduces a certain “loopiness” into mathematics, Hofstadter notes, something like the famous Escher print of a right hand drawing a left hand, which in turn is drawing the right hand. This “strange loopiness” in math suggested to Hofstadter that something similar is going on in human thought.
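The trick can be made concrete with a toy sketch. The Python snippet below is not Gödel’s actual numbering scheme; it simply reads a formula’s text as one big integer and then substitutes that integer back into the formula, which is all the self-reference requires. The formula itself is a made-up placeholder.

```python
# A toy stand-in for Goedel numbering: a formula's text becomes one big integer,
# and that integer can then be plugged back into the formula it encodes.
# (Not Goedel's actual scheme -- just an illustration of the idea.)

def encode(formula: str) -> int:
    """Read the formula's bytes as a single integer."""
    return int.from_bytes(formula.encode("utf-8"), "big")

def decode(number: int) -> str:
    """Recover the formula's text from its integer."""
    return number.to_bytes((number.bit_length() + 7) // 8, "big").decode("utf-8")

template = "the number N is even"        # a hypothetical statement about some number N
code = encode(template)                  # the statement itself, viewed as a number
about_itself = template.replace("N", str(code))

print(code)          # a large integer that *is* the formula
print(about_itself)  # the formula, now making a claim about its own code number
print(decode(code))  # round trip: the number decodes back to the original text
```

The point is only that a statement about numbers can itself be handled as a number, which is the loop Hofstadter builds on.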

So when he titled his book “I Am a Strange Loop,” Hofstadter didn’t mean that he was personally loopy, but that the concept of an individual — a persistent identity, an “I,” that accompanies what people refer to as consciousness — is a loop of a certain sort. It’s a feedback loop, like the circuit that turns a whisper into an ear-piercing screech when a microphone is held too close to the loudspeaker emitting its amplified sound.
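For readers who want to see that runaway in numbers, here is a minimal sketch of the audio feedback loop, with a made-up loop gain; any gain above 1.0 turns the whisper into a screech within a couple dozen round trips.

```python
# Minimal model of acoustic feedback: the speaker's output is picked up by the
# microphone, amplified, and emitted again. The gain value here is hypothetical.

signal = 0.001        # a whisper, in arbitrary amplitude units
loop_gain = 1.5       # amplification per round trip; > 1.0 means runaway feedback

for trip in range(1, 21):
    signal *= loop_gain
    print(f"round trip {trip:2d}: amplitude {signal:.3f}")

# After 20 trips the 0.001 whisper has grown past 3.3 -- the ear-piercing screech.
```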

But consciousness is more than just an ordinary feedback loop. It’s a strange loop, which Hofstadter describes as a loop capable of perceiving patterns in its environment and assigning common symbolic meanings to sufficiently similar patterns. An acoustic feedback loop generates no symbols, just noise. A human brain, though, can assign symbols to patterns. While patterns of dots on a TV screen are just dots to a mosquito, to a person, the same dots evoke symbols, such as football players, talk show hosts or NCIS agents. Floods of raw sensory data trigger perceptions that fall into categories designated by “symbols that stand for abstract regularities in the world,” Hofstadter asserts. Human brains create vast repertoires of these symbols, conferring the “power to represent phenomena of unlimited complexity and thus to twist back and to engulf themselves via a strange loop.”

Consciousness itself occurs when a system with such ability creates a higher-level symbol, a symbol for the ability to create symbols. That symbol is the self. The I. Consciousness. “You and I are mirages that perceive themselves,” Hofstadter writes.

This self-generated symbol of the self operates only on the level of symbols. It has no access to the workings of nerve cells and neurotransmitters, the microscopic electrochemical machinery of neurobiological life. The symbols that consciousness contemplates don’t look much like the real thing, just as a map of Texas conveys nothing of the grass and dirt and asphalt and bricks that cover the physical territory.

And just as a map of Texas remains remarkably stable over many decades — it doesn’t change with each new pothole in a Dallas street — human self-identity remains stable over a lifetime, despite constant changes on the micro level of proteins and cells. As an individual grows, matures and changes in many minute ways, the conscious self’s identity remains intact, just as Texas remains Texas even as new skyscrapers rise in the cities, farms grow different crops and the Red River sometimes shifts the boundary with Oklahoma a bit.

If consciousness were merely a map, a convenient shortcut symbol for a complex mess of neurobiological signaling, perhaps it wouldn’t be so hard to figure out. But its mysteries multiply because the symbol is generated by the thing doing the symbolizing. It’s like Gödel’s numbers that refer to formulas that represent truths about numbers; this self-reference creates unanswerable questions, unsolvable problems.

A typical example of such a Gödelian paradox is the following sentence: This sentence is not true.

Is that sentence true? Obviously not, because it says it isn’t true. But wait — then it is true. Except that it can’t be. Self-referential sentences seem to have it both ways — or neither way.
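One way to feel that “neither way” is to try to settle the sentence mechanically. The toy loop below assumes a truth value, checks what the sentence then demands, and updates; the evaluation rule is an assumption chosen for illustration, not a formal semantics, and the procedure never converges.

```python
# Trying to assign the self-referential sentence a stable truth value.
# Rule (assumed for illustration): the sentence is true exactly when its claim
# -- "this sentence is not true" -- actually holds.

assumed = True
for step in range(6):
    claim_holds = not assumed            # the claim holds only if the sentence is not true
    print(f"step {step}: assumed {assumed} -> sentence should be {claim_holds}")
    assumed = claim_holds                # adopt the new value and try again

# The value flips forever: True, False, True, False, ... -- no consistent answer.
```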

And so perceptual systems able to symbolize themselves — self-referential minds — can’t be explained just by understanding the parts that compose them. Simply describing how electric charges travel along nerve cells, how small molecules jump from one cell to another, how such signaling sends messages from one part of the brain to another — none of that explains consciousness any more than knowing the English alphabet letter by letter (and even the rules of grammar) will tell you the meaning of Shakespeare’s poetry.

Hofstadter does not contend, of course, that all the biochemistry and cellular communication is irrelevant. It provides the machinery for perceiving and symbolizing that makes the strange loop of consciousness possible. It’s just that consciousness does not itself deal with molecules and cells; it copes with thoughts and emotions, hopes and fears, ideas and desires. Just as numbers can represent the complexities of all of mathematics (including numbers), a brain can represent the complexities of experience (including the brain itself). Gödel’s proof showed that math is “incomplete”; it contains truths that can’t be proven. And consciousness is a truth of a sort that can’t be comprehended within a system of molecules and cells alone.

That doesn’t mean that consciousness can never be understood — Gödel’s work did not undermine human understanding of mathematics, it enriched it. And so the realization that consciousness is self-referential could also usher in a deeper understanding of what the word means — what it symbolizes.

Information handler

Viewed as a symbol, consciousness is very much like many of the other grand ideas of science. An atom is not so much a thing as an idea, a symbol for matter’s ultimate constituents, and the modern physical understanding of atoms bears virtually no resemblance to the original conception in the minds of the ancient Greeks who named them. Even the DNA-based gene that Crick helped decipher turned out to be much more elusive than the “unit of heredity” imagined by Gregor Mendel in the 19th century. The word gene, coined later to describe such units, long remained just a symbol; early 20th century experiments allowed geneticists to deduce a lot about genes, but nobody really had a clue what a gene was.

“In a sense people were just as vague about what genes were in the 1920s as they are now about consciousness,” Crick said in 1998. “It was exactly the same. The more professional people in the field, which was biochemistry at that time, thought that it was a problem that was too early to tackle.”

It turned out that a gene’s physical implementation mattered less than the information storage and processing the gene carried out. DNA is in essence a map, containing codes that allow one set of molecules to be transcribed into others necessary for life. It’s a lot easier to make a million copies of a map of Texas than to make a million Texases; DNA’s genetic mapping power is the secret that made the proliferation of life on Earth possible. Similarly, consciousness is deeply involved in representing information (with symbols) and putting that information together to make sense of the world. It’s the brain’s information processing powers that allow the mind to symbolize itself.
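To make the map metaphor concrete in information terms, here is a tiny lookup built from a handful of entries in the standard genetic code; the real table has 64 codons, and the short sequence in the example is invented.

```python
# A small fragment of the standard genetic code: three-letter DNA codons (coding
# strand) mapped to the amino acids, or stop signal, they specify. The map, not
# the molecule, is what gets copied and reused.

CODON_TABLE = {
    "ATG": "Met",   # also the start signal
    "TTT": "Phe",
    "AAA": "Lys",
    "GAA": "Glu",
    "TGG": "Trp",
    "TAA": "STOP",
}

def translate(dna):
    """Walk the sequence three letters at a time, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTAAAGAATGGTAA"))   # ['Met', 'Phe', 'Lys', 'Glu', 'Trp']
```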

Koch believes that focusing on information could sharpen science’s understanding of consciousness. A brain’s ability to find patterns in influxes of sensory data, to send signals back and forth to integrate all that data into a coherent picture of reality and to trigger appropriate responses all seem to be processes that could be quantified and perhaps even explained with the math that describes how information works.
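As a small, concrete taste of that kind of math, the sketch below computes the mutual information between two toy units from an assumed joint distribution. It is a stand-in illustration of quantifying interactions with information theory, not any specific measure Koch has proposed.

```python
# Mutual information between two toy binary units, from an assumed joint
# distribution p(a, b). A positive value means the units' states share information.

import math

joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}   # hypothetical numbers

def marginal(joint, index):
    """Sum the joint distribution over the other unit."""
    out = {}
    for states, p in joint.items():
        out[states[index]] = out.get(states[index], 0.0) + p
    return out

p_a = marginal(joint, 0)
p_b = marginal(joint, 1)

mutual_info = sum(
    p * math.log2(p / (p_a[a] * p_b[b]))
    for (a, b), p in joint.items()
    if p > 0
)

print(f"mutual information: {mutual_info:.3f} bits")   # about 0.278 bits here
```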

“Ultimately I think the key thing that matters is information,” Koch says. “You have these causal interactions and they can be quantified using information theory. Somehow out of that consciousness has to arise.” An inevitable consequence of this point of view is that consciousness doesn’t care what kind of information processors are doing all its jobs — whether nerve cells or transistors.

“It’s not the stuff out of which your brain is made,” Koch says. “It’s what that stuff represents that’s conscious, and that tells us that lots of other systems could be conscious too.”

Perhaps, in the end, it will be the ability to create unmistakable features of consciousness in some stuff other than a biological brain that will signal success in the quest for an explanation. But it’s doubtful that experimentally exposing consciousness as not exclusively human will displace humankind’s belief in its own primacy. People will probably always believe that it can only be the strange loop of human consciousness that makes the world go ’round.

“We … draw conceptual boundaries around entities that we easily perceive, and in so doing we carve out what seems to us to be reality,” Hofstadter wrote. “The ‘I’ we create for each of us is a quintessential example of such a perceived or invented reality, and it does such a good job of explaining our behavior that it becomes the hub around which the rest of the world seems to rotate.”

Read Laura Sanders’s feature on consciousness, “Emblems of Awareness.”

Tom Siegfried is a contributing correspondent. He was editor in chief of Science News from 2007 to 2012 and managing editor from 2014 to 2017.
