Ah, dualism. It sounds so remote and academic. The kind
of thing gowned philosophers might debate while strolling
the quad. Sure, dualism is a good deal more rarefied than “bracketology”
– the study, according to ESPN, of who is going to win the
NCAA basketball tournament.
Bracketology is huge to people who bet on March Madness.
The FBI claims that more than $2 billion changes hands during
the course of the tournament. That’s a ton of money, but, believe me, a lot more rides on the outcome of the dualism
debate.
Broadly speaking, dualism is the proposition that the
universe is composed of both material and immaterial stuff.
Traditionally, the immaterial stuff makes up the mind or
soul – in contrast to the material brain or body. We all
(so far as I know) possess an unshakable feeling of having
a mind, and many of us sense a soul within the skin. On
the material side, however, I don’t know about you, but
I’m generally unaware of having anything inside my head
– except perhaps on a morning after I’ve drunk too much.
That’s when I know I have a brain, and it’s the size of
a walnut.
Nevertheless, outside theological circles dualism is
far from fashionable. From physics to neurology, science
has gnawed away its foundations. This might seem like an
arcane quibble, but it should be of grave concern to anyone
hoping to see science and religion reconciled.
Here’s why: If physical stuff is the only kind of stuff
that’s real, what’s left for religion? Apart from a relatively
few people who worship nature itself, no religion I am aware
of could be content with this worldview. After all, nobody
believes that God is made of molecules, right? That would
make God subject to the laws of physics.
However, just to stipulate that God is transcendent doesn’t
resolve matters. The very arguments against mind/brain dualism
form potent objections to any claim that God acts in the
world. Take the First Law of Thermodynamics, better known
as conservation of energy. Monists (those who argue that
physical stuff is the only stuff there is) point to this
law to argue that if there were a nonphysical mind it would
be unable to influence the brain without injecting some
energy into the world. (Brains, being made of matter, can
only react to energy.)
However, to add energy from the Great Beyond would violate
the conservation law. We don’t have to be physicists to
imagine how readily evident that would be. No matter how
small, the extra energy streaming from mind to brain would
pile up and eventually be measurable. No such excess of
energy has ever been found.
To the contrary, a huge body of evidence demonstrates
that the conservation law holds here, there, and everywhere.
Therefore, crows the monist, to claim that there is a mind is as pointless as insisting that there is an invisible dragon
in Carl Sagan’s garage.
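To make the monist’s bookkeeping explicit, here is a minimal sketch in symbols of my own choosing (not notation from the text): in a closed physical system the total energy is constant, so any power trickling in from a nonphysical mind, however small, would accumulate into a measurable surplus over time.

```latex
% Monist bookkeeping (my notation): for a closed physical system,
% total energy is conserved, whereas an immaterial mind injecting
% power P_mind(t) > 0 into the brain would leave, after time T, an
% unexplained surplus that grows with T and would eventually be measurable.
\frac{dE_{\mathrm{total}}}{dt} = 0
\qquad\text{versus}\qquad
\Delta E(T) = \int_{0}^{T} P_{\mathrm{mind}}(t)\,dt \;>\; 0
```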
The idea of “an immaterial spirit that occupies individual
brains,” says V. S. Ramachandran, director of the U.C.S.D.
Center for Brain and Cognition, “is complete nonsense.”1
What goes for minds and souls goes for God, unless we
want to fall back on magic. It’s not for me to tell God
how to get things done, of course, but to go that route
is to give up, perhaps prematurely, on a rational reconciliation
of claims. Can it be true that the laws of physics hold,
and that God acts in the world?
I think so. And I think evolutionary biology (with a
dash of physics) can help us see how.
In fairness, though, I must confess up front that my
version of dualism allows only a narrow channel of interaction.
I am not about to argue for the reality of the Noachian
flood or the Immaculate Conception. Miracles, if any there
be, are by definition beyond rational explanation.
The Case Against Dualism
The monist position is, in sum, that the environment
provokes nerves to fire and their signals reach the brain,
causing a certain sequence of neurons to fire, with consequences.
And that’s it. The whole chain of cause and effect is sealed
within the laws of physics, which do not allow for interference
by nonphysical forces. The mind, subjective self, and free
will are therefore epiphenomena, illusory byproducts of
the system with no causal influence over it.
In light of your experience, this may sound ridiculous.
After all, you know that you have free will. You
can hold your breath or stand on your head, if you choose.
However, your experience also tells you that the Earth is
standing still at the center of a revolving Universe. Centuries
of careful observation and deduction force any reasonable
person to conclude that the Earth spins on its axis at roughly
a thousand miles an hour (at the equator). From this and
many other instances, we know that experience can be unreliable.
Common sense alone is not a sufficient answer to the
very powerful arguments in favor of monism. Before we defend
dualism, we have to be sure we understand why it is so hard
to defend. Here are some of the key arguments against it.
- Interactionism: in principle, there is no conceivable
way for a ‘ghost’ to have a physical effect without
violating the law of conservation of energy, which rules
out an immaterial ‘mind’ influencing physical outcomes.
- Incoherency: proponents of dualism are unable to
describe what they are claiming in a clear, consistent
way. How can the reality of something be judged if it
cannot be described?
- Extravagance: the principle of Occam’s Razor calls
on us to leave out anything not critical to an explanation
of phenomena. It is sufficient, for example, to explain
a pot of boiling water by invoking the fire that heats
it rather than an angry water god. Similarly,
if physical principles can explain the interactions
of the brain and world, enough said!
These arguments carry great weight, because without the
mindset they represent we would not enjoy the tremendous
advantages science has given us in the last 500 years. The
great advantage of reducing explanations to physical principles
is that they transcend all cultures and join in a coherent
body of knowledge.2
Sadly, while nearly everyone appreciates at least some of
the fruits of science, comparatively few people appreciate
how it depends on a willingness to set aside the transcendental
temptation in describing the natural world.
In my past writings I have often criticized those who
attempt to have it both ways: to entertain a worldview that
accepts science and hold onto their favorite supernatural
proposition. I’ve battled the claims of the Intelligent
Design movement, and I’ve dissed the idea, espoused by the
biologist Kenneth Miller among others, that you can embrace
science and miracles. Having taken this stand,
I must be scrupulous to avoid giving in to the transcendental
temptation, not only for my sake (who wants to be a hypocrite?)
but also because I believe it is crucial for humanity to
relinquish magical thinking. And yet, I repeat, my task
here is to uphold both the laws of physics and the proposition
that God acts in the world.
So, from this standpoint how can we defend dualism? It
seems to me sufficient to rely on three arguments: that
nature points the way, that a ‘modeling’ model can meet
the monist objections listed above, and that scientific
evidence demands that we admit immaterial concepts into
our world of “real things.”
What Good Is a Brain?
Evolution provides a great springboard for diving into
these deep philosophical waters. It prompts us to ask what
a brain is good for and what it costs an organism to maintain
it. You might think that the utility of brains is self-evident.
They allow organisms to extract information from their environment
and react to it. But you don’t actually need a brain to
do that. Even the tiny Rhodospirillum rubrum, a
purple bacterium with no more brains than a doorknob, can
sense light and corkscrew its way toward it.
There’s nothing mystical about such automatic responses.
We build machines with such features all the time. When
an elevator “senses” that someone has pressed button number
7, off it goes to the seventh floor. Elevators, however,
cannot consider the advantages of going to the tenth floor
instead – at least not yet. (Fans of the late Douglas Adams’
Hitchhiker’s Guide to the Galaxy series know what
horrors await when elevators are installed with cyberpersonalities.)
Perhaps the extra value that a brain like ours provides
lies in its ability to manipulate the information it collects
from the environment and model alternative futures. It’s
an ability we evidently share with a number of animals.
Consider a cat.
If you’ve ever watched a cat for a while, you’ve probably
seen something very much like this: the cat is pussyfooting
along when she spots something interesting up on a countertop.
Maybe it’s a bug crawling along. She hunches, focuses, and
then rocks back and forth. She might rise out of her crouch
for a moment then settle back down. At that stage, you just
cannot tell whether the cat will spring or not. What’s going
on? Surely, she is modeling consequences of her proposed
leap and weighing the risks.
Cats are risk takers, so most likely she will attempt the
leap. Perhaps she’ll make it, or perhaps she’ll flail and
tumble back down to the ground. Either way, she’ll learn
from the experience. That’s largely what a brain is for
– modeling the future based on experience and real-time
sensory data, and then making a decision. But if a cat’s
brain is enough to do the job, why do we come equipped with
a prefrontal cortex and all the latest and greatest in consciousness-generating
machinery?
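Before tackling that question, here is a toy sketch (my own illustration, with made-up numbers, not a claim about feline neuroscience) of what “modeling the future and weighing the risks” amounts to in the simplest decision-theoretic terms:

```python
# A toy sketch: imagine a few alternative futures, score each by its
# expected payoff, and act on the best one.

# Hypothetical outcome estimates "learned" from past leaps:
# (probability of success, payoff if it works, cost if it fails).
options = {
    "leap onto the counter": (0.7, 10.0, -4.0),  # made-up numbers
    "stay on the floor": (1.0, 0.0, 0.0),
}

def expected_value(p_success: float, payoff: float, cost: float) -> float:
    """Weigh the imagined futures: success and failure, each with its weight."""
    return p_success * payoff + (1.0 - p_success) * cost

best_action = max(options, key=lambda action: expected_value(*options[action]))
print(best_action)  # with these numbers the modeled leap wins, so the cat jumps
```

The point is only that imagining alternatives and comparing their likely outcomes is a concrete computation, however it happens to be implemented in neurons.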
The Cost of Consciousness
Keeping up a brain that can do everything from regulating
heartbeats to wondering about consciousness is a costly
undertaking. It soaks up about 20 percent of the body’s
oxygen supply – ten times its share, for an organ that makes up roughly 2 percent of body weight. Moreover,
the human brain is so big that it makes birth exceptionally
difficult. Women’s hips are as broad as they can be and
still allow bipedal locomotion. The tragic consequence is
that, without modern medicine, an appalling number of mothers
and babies die in childbirth.
That creates negative selection pressure, which must
mean that there is an even greater opposing selection pressure
in favor of big brains. What could explain that pressure?
One possibility, espoused by evolutionary biologists such
as Geoffrey Miller, is that the human brain’s extravagant
abilities are the result of sexual selection.
The argument is that, just as peacocks have evolved huge,
unwieldy tails at the behest of picky peahens, so human
courtship has resulted in the evolution of brains capable
of generating music, art, morality, and above all elaborate
symbolic language. It is a claim with considerable merit,
but there are some obvious differences between the peacock
tail and the human brain.
For one thing, the peacock’s tail is a dimorphic trait,
meaning that, like beards in humans, it appears only in
the male. Differences between male and female brains in
humans are minuscule, and they vanish altogether when set against
the difference between human and earlier hominid brains.
This is a pretty strong clue that human brains are more
than mere ornaments.
In fact, human evolution traded away a lot to get such
powerful brains. For starters, we gave up strong jaws. Imagine
what a sacrifice that was at a time when our ancestors were
hunter-gatherers. Very little of the available food was
easy to eat. There was no mac and cheese, no ice cream,
no jello. Hominids had to tear the raw flesh off bones or
grind their way through tough grasses and roots.
Around 2 million years ago, a mutation in a single gene
in one of our ancestors blocked production of a protein
called MYH16, which is critical to the formation of strong
jaw muscles, such as those found in gorillas. That kind
of muscle requires a massive bone on which to anchor itself.
Until the moment of mutation, every one of our ancestors
had a thick skull with a big ridge of bone on its crown.
The ridge served as an anchor for the jaw muscles. Once
the mutation made its way into the gene pool, individuals
who carried it had no need of the ridge. They must have
had a hard time keeping to the traditional hominid diet,
but perhaps they found new food sources that served even
better (termites, anyone?).
In any event, we know they survived, and we
know they kicked off a rapid increase in brain size over
the next several hundred thousand years. Liberated from
the need for a crowning ridge, our ancestors evolved from
boneheads to brainiacs. The liberated brain must have proved
adaptive: all the brawnier hominids petered out and we’re
still here. That would be unlikely if our big brains were
merely for showing off.
The peacock’s tail is also more than mere ornament. It
comes under a category biologists call the handicap principle.
The huge, useless tail is a burden that only a truly fit
peacock can bear up under. Of course, the extra capacities
of human brains may well serve in courtship – ask any Elizabethan
poet you happen to meet – but they evidently serve many
other ends as well. That would be typical of evolution,
Nature’s ultimate multifunction designer. Look at the mouth:
It’s not just for eating and drinking anymore! It also works
for breathing, speaking, kissing and whistling, to name
a few uses. Similarly, our brains are definitely multipurpose
organs.
So, it is difficult to see how natural selection could
favor the kind of brains we carry around if costly features
such as consciousness, conceptualization, and free will
were merely illusory extras. In that case, even a cat brain
might be too much. Ants seem to fare pretty well on about
1/40,000 the amount of brains we carry around.
Still, it’s important not to make a straw man of the
monist position. There are certainly many who believe that
mental features play a role but maintain that they can all
be explained as brain states within a deterministic framework.
There are others, a minority led by the mathematician Roger
Penrose, who believe that the brain exploits quantum indeterminacy
to free itself from the harness of classical physics.
The Joke’s On Us
We’ve summed up the monist’s case. What supports the
dualist? There is the evidence of consciousness and its
powers. That validating them might upset the current
scientific paradigm is no reason for rejection. After all,
quantum mechanics sank the determinism of classical physics;
in the end, the evidence wins.
So let’s look at consciousness through an evolutionary
lens. As anyone who has ever experienced one will tell you,
a conscious experience is radically different from mere
knowledge. Let me demonstrate:
The teacher, noticing that little Billy doesn't seem
to be paying attention in history class, suddenly asks,
"Billy, can you tell us what happened when Hannibal crossed
the Alps with his elephants?" Coming out of a daydream,
Billy ponders a moment and then says, "Oh, I know! He got
a mountain range that never forgets!"
My computer has now stored this joke, but I’m pretty
sure it doesn’t get it. Even if I programmed in heaps of
data about mountains, elephants, and idioms, it still wouldn’t
be amused. Hopefully, you were, and even if you weren’t
you can remember what it feels like to be amused.
It is an experience that defies reduction to brain states,
even though it remains dependent on them. It is, moreover,
a gift from evolution, one of the human universals.
Amusement and other qualia, as philosophers call them,
are subjective experiences that have as yet no satisfactory
scientific explanation. At best we have speculations about
recursive brain functions that give rise to self-awareness.
However, our brains are aware of more than just the physical
environment and self. They are jam-packed with concepts
that embody abstractions, fictions, and contradictions corresponding
to no physical object.
Take SpongeBob. Although he purportedly resides in Bikini
Bottom on the floor of the Pacific Ocean, Mr. S. is clearly
a fictional character whose relation to any living creature,
marine or mammalian, is tangential at best. In SpongeBob
SquarePants, then, we have ready proof that human brains
can model nonphysical objects.
The monist may object that fictional characters are mere
epi-food for thought. I disagree, and will now offer an
evolutionary fable to make my case.
You and I have a great grandfather in common. Most likely
he lived in Africa some hundreds of thousands of years ago.
Let’s call him Olujimi,3
or Jimi for short. Picture Jimi sitting by a campfire in
the grasslands when a lioness bursts into view. Like any one
of us, I am sure, he would run for his life. To do this
would require nothing more than his brain recognizing the
smells, sounds, and above all images reaching it as “lion!”
and going into flight response.
Later, however, Jimi remembers the lion and imagines
different outcomes. He models a wholly different response
– what if he had picked up a burning branch from the fire
and waved it in the lion’s face? He imagines the various
possibilities and decides the most likely is that the lion
will retreat.
Next time a lion shows up, Jimi puts his plan into action
and it pays off big time. Not only does he survive, but all
the women in the area think, “Wow! Here is a brave and clever
man who can protect me and my offspring from lions! I think
I’ll have his babies!”
Jimi becomes the happiest man on the veldt … at least
until other guys catch onto his trick. Then, it’s back to
the drawing board of the imagination to try to come up with
some new competitive advantage.
Now, I hope you will agree that this is a highly plausible
fable. Something very much like it surely happened at some
point, and the use of conscious modeling of alternate futures
has never stopped. In contrast to evolution, it is the chief
engine of human innovation. Consciously driven innovation
could not possibly occur, however, if there were no causal
link between a concept and the world.
Supermodels
How could this take place without violating the laws
of physics? I suggest it happens through the physical process
of modeling. The perception of a real lion presents no problem:
physical stimuli – mainly sound and light waves – lead to
certain sequences of neuronal firing, which causes Jimi
to jump up and run.
Similarly, the remembered lion is physically accounted
for by the firings of neurons assigned to memory. The internal
dynamics of a brain simulation may not be understood all
that well, but our ignorance does not imply magic. We can
assume that in modeling one thing leads to another, ending
in a changed memory, which can later be used to drive a
particular set of actions. Notice, however, that there must
be some top-down causation going on, as coalitions of neurons
embodying some idea (“wave burning stick”) shape the behavior
of the next coalition of neurons (“laugh triumphantly
at fleeing lion”).
This all may seem right, but we must guard against accepting
intuitions uncritically. The monist may yet argue that consciousness
and will are epiphenomena, and that the real action happens
out of sight. Is there any evidence to support the claim
that concepts can have consequences? Once again evolutionary
biology comes to our aid.
The genetic code is the best example we have of the power
of information. Some – myself included – have wondered if
the DNA-RNA-protein sequence truly involves information
in the deep sense. It might be argued that it is simply
a dynamic process, sort of like a row of dominoes in which
each one tips the next.
However, a series of experiments has blown any such notion
away. It turns out that genes can embody high level abstractions
such as “do what it takes to form an eye.” Pluck out the
Eyes absent gene from a mouse and insert it into
the genome of a fruitfly whose eyeless gene is
missing, and you get a fruitfly with eyes.4
Not mouse eyes, mind you, but fruitfly eyes, which are built
along totally different lines. A mouse eye, like yours or
mine, has a single lens which focuses light on the retina.
A fruitfly has a compound eye, made up of thousands of lenses
in tubes, like a group of tightly packed telescopes. About
the only thing the eyes have in common is that they are
for seeing.
What does this tell us? Information, organized into concepts,
is demonstrably out there in the world, and without
violating the laws of physics it can guide processes as
they unfold. As in the genes, so in the mind.
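As a rough analogy for the experiment just described (my own illustration, not a model of developmental genetics), the inserted gene behaves like a high-level instruction that delegates to whatever eye-building machinery the host species already has, so the outcome is host-specific even though the instruction is the same:

```python
# Hypothetical stand-ins for each species' downstream developmental pathways.
HOST_MACHINERY = {
    "mouse": "camera-type eye with a single lens",
    "fruit fly": "compound eye of thousands of lensed tubes",
}

def make_an_eye(host_species: str) -> str:
    """The abstract instruction: trigger the host's own eye-building pathway."""
    return HOST_MACHINERY[host_species]

# The "same" instruction, expressed in two different genomes:
print(make_an_eye("mouse"))      # camera-type eye with a single lens
print(make_an_eye("fruit fly"))  # compound eye of thousands of lensed tubes
```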
Discovering the Immaterial
Now comes the crucial step. If the environment that humans
occupy contains not only material realities but also immaterial
ones, can we then infer that human consciousness exists
to discover and exploit these objects? Again, evolution
suggests that this is so.
I hear Occam’s Razor stropping, so before I get my throat
cut let me hasten to add that I’m not referring to ghosts
or Spongebob but to objects whose power in the world is
far more evident: numbers.
I suggest that we discover numbers and their relations
by modeling them in our brains – or in clay tablets, notebooks,
and computers.5
Moreover, I call on the increasing body of evidence that
our brains are evolved not only for arbitrary human language
but also for the universal language of math.6
The transcendental quality of math can be framed in cultural
terms, but it cannot be denied. A prime is a prime, whatever
symbols denote it.
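A trivial illustration, in code rather than the author’s words: decode the symbols however you like, and the primality of the underlying number does not change.

```python
# The same quantity, denoted in different symbol systems, passes the same
# primality test once the symbols are decoded.

def is_prime(n: int) -> bool:
    """Trial division; fine for small numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

decimal_13 = int("13")        # Hindu-Arabic numerals
binary_13 = int("1101", 2)    # binary notation
roman_13 = 10 + 1 + 1 + 1     # XIII, decoded by hand for this sketch

assert decimal_13 == binary_13 == roman_13
print(all(is_prime(n) for n in (decimal_13, binary_13, roman_13)))  # True
```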
In his marvelous book A Mathematical Mystery Tour,
A.K. Dewdney makes an extended case for a strong form of
mathematical realism. He argues (through a series of fictional
interlocutors) that numbers are not only real but that they
occupy their own realm, to which he gives the delightful
name holos. The holos, in turn, gives rise to the
cosmos, which, he rather convincingly argues, is dependent
on equations, some of which humans have discovered. Equations,
one of his characters argues, “provide all the information
you could want about a hydrogen atom. There is nothing else,
in effect. You could even say that not even the energy is
real. Only the information about its behavior is real.”7
This is similar to the physicist John Wheeler’s claim that
information is fundamental, summed up in the quip, “it from
bit.”
I don’t know for sure whether this is true, but all of
physics for the past one hundred years points in that direction.
Yes, I know, if you run headlong into a brick wall you quickly
discover its unyielding physicality. Nevertheless, in thinking
of it as solid we misapprehend. In 1995, ninety years
after Einstein helped launch the quantum era, Eric Cornell and
his team at the University of Colorado demonstrated the
evanescence of atoms by creating the world’s first Bose-Einstein
condensate (BEC). Rather than yield all of their information,
a group of supercooled rubidium atoms disappeared into a
blob of indeterminacy. It was – and still is – an astonishing
confirmation of Nature’s insistence that you cannot know
all about the position and momentum of a particle.
Consider: temperature is purely relational. The atom
is only hot if it is belly bumping with other nearby atoms.
How does it know when it has cooled to the point that it
is time to merge into a BEC? Apparently, when its information
slip begins to show. What Nature seeks to veil is not the
structure of the atom but the relational information about
it and its neighbors. Apparently, we can know all about
the dancer but only so much about the dance.
Bringing Home the Bacon
So, relying on the inductive methods of science, as recommended
by Sir Francis Bacon, we have compiled evidence for the
following: that immaterial concepts inhere in the world,
that they influence physical processes, that such concepts
exist in the mind and work through the brain, and that the
interplay of information and physics may be the most fundamental
level of reality we can apprehend through science.
Many mysteries remain, but we have at least established a
modest dualism on a foundation stone of evidence. Information
is immaterial yet real and powerful.
Where does God come into it? Here I must throw off my
own veil. I am an agnostic who accepts that God is real
to this extent at least: it is overwhelmingly evident that
God is a powerful concept working in the world through the
minds of believers. The monist may not like it, but to deny
God in this sense is to deny the power of concepts altogether,
and that, I think, we have shown to be unreasonable.
As to the deeper reality of God, that is a question of
faith. For me, God is a mental construct in the minds of
others. For you, perhaps, God is infinite, loving, and immanent.
So be it. The discovery that there is more to the world
than mere physics allows us to be both scientifically informed
and at peace with our worldviews. How wonderful is that?
Endnotes
1. Quoted in Cornelia Dean, “Science of the Soul? ‘I Think, Therefore I Am’ Is Losing Force,” New York Times, June 26, 2007.
2. The sciences form a nearly seamless stack of explanations. Pictured as a column, physics forms the pedestal, chemistry the trunk, and biology the capital. Other sciences, each with its own interesting questions, are located alongside. However, the column has three major gaps, two at the top and one at the bottom. At the top is the problem of consciousness (encompassing mind, will, and other issues that we are considering here). Between biology and chemistry, the problem of life’s origin remains open. At the base is the question of how to fit general relativity together with quantum mechanics, a problem that has led to the conjecture of supersymmetry.
3. A Yoruba name meaning “Gift of God.”
4. These genes get their nicknames from the function that goes missing when they are excised. Each is crucial to eye formation in its species. See, for example, Nancy M. Bonini, Quang T. Bui, Gladys L. Gray-Board, and John M. Warrick, “The Drosophila eyes absent gene directs ectopic eye formation in a pathway conserved between flies and vertebrates,” Development (1997) 124, 4819-4826.
5. The debate about whether numbers are invented or discovered is an old one, and I have made my own stand on this elsewhere, but I must repeat one point: if numbers and the rest of mathematics are inventions of the human brain, it follows that they are wholly dependent on us. It would therefore be no more true to say that the circumference of a Euclidean circle is 2πr than to say that the name of the third planet is Earth. That equivalence seems unlikely to me, but the key point is that if this is so, then logic, which is inseparable from math, is likewise merely conventional. I therefore argue that it is rational to accept (provisionally) that numbers are real in order to preserve the foundation of rationality itself. Otherwise, all knowledge becomes the plaything of rhetoric and power.
6. For an interesting journalistic report on number-brain research, see Jim Holt, “Numbers Guy: Are our brains wired for math?” The New Yorker, March 3, 2008. www.newyorker.com/reporting/2008/03/03/080303fa_fact_holt?
7. A.K. Dewdney, A Mathematical Mystery Tour. New York: John Wiley & Sons, 1999, p. 146.