I opened up "The End of Morality," Discover magazine's latest article on ethics and the brain (July/August 2011),1 and I wondered, "Will this be any different from the others?" Articles on this topic seem to follow a consistent pattern: (1) Researchers can pinpoint physical events taking place in people's brains when they make ethical decisions. (2) Thus, science is discovering, finally, what ethics is all about: it's chemistry and electricity doing their chemical and electrical thing inside your skull. And that's it.
It's an approach many thinkers call reductionist. Reductionism in this context means that crucial aspects of human experience, like consciousness, love, ethical decisions, the ability to make choices (free will), and so on are best understood as biological processes, which in turn are best understood in terms of chemistry and physics. It's a matter of bringing down—reducing—these things to the lowest level of physical explanation.
According to reductionism, in fact, the only real thing going on is what happens at the level of chemistry and physics. Everything else—whether it's an ethical decision, a seemingly free choice, the love we feel for that special someone, whatever we think makes us human—is just a by-product. Some say everything but chemistry and physics is an illusion. So society is an illusion? Love is just a neuron in heat?
Brain in the News
Science journalists must love this reductionist story; they keep telling it over and over again. Four years ago, for example, the Washington Post reported on brain research and ethics, telling us that when volunteers thought about donating money to charity, their brain scans revealed that
the generosity activated a primitive part of the brain that usually lights up in response to food or sex. Altruism, the experiment suggested, was not a superior moral faculty that suppresses basic selfish urges but rather was basic to the brain, hard-wired and pleasurable.2
The article went on to wonder about the "troubling questions" this gave rise to:
Reducing morality and immorality to brain chemistry—rather than free will—might diminish the importance of personal responsibility. Even more important, some wonder whether the very idea of morality is somehow degraded if it turns out to be just another evolutionary tool that nature uses to help species survive and propagate.
Good questions. In 2008, the New York Times reported on an imaging study showing that "a mother's impulse to love and protect her child appears to be hard-wired into her brain."3 Dr. John H. Krystal, the lead researcher, told the paper, "This type of knowledge provides the beginnings of a scientific understanding of human maternal behavior."
Unfortunately, "it's not known whether fathers have similar brain responses to a child's smile or tears." Scientists didn't put any dads in the scanners, you see, so whatever we might think we know about human paternal behavior, it probably hasn't quite attained the status of "the beginnings of a scientific understanding."
In 2007, Scientific American reported on "Your Brain in Love," including this wisdom:
Researchers have revealed the fonts of desire by comparing functional MRI studies of people who indicated they were experiencing passionate love, maternal love or unconditional love. Together, the regions release neurotransmitters and other chemicals in the brain and blood that prompt greater euphoric sensations such as attraction and pleasure.4
Apparently no poet, artist, novelist, or philosopher ever knew where the fonts of desire were to be found. How could they? They didn't have functional MRI (fMRI) machines.
There is a predictable reductionist sameness about these articles. You or I could almost write them before we read them: Brain researchers study human experience y, and discover that human experience y is nothing but some region x lighting up in our brains. Admittedly, I'm taking a rather reductionist approach of my own: I've reduced all this research and journalism to a neat little formula.
It seems unfair, I'll admit. Brain science is certainly one of the most challenging of all scientific disciplines. The object under study is, after all, the most complex structure we know of anywhere in the universe. I thank God for how neuroscience saved my nephew's life when he was three, and my sister's life just a few years ago. I have great respect for the knowledge and skill that go into this research.
I'm just suspicious of the logic some neuroscientists and secularists employ when it comes to explaining complex human experiences like ethical decision-making, altruism, and love. Too often their reasoning goes like this (seriously):
1. Ethics and morality reside in the brain.
2. We look in the brain, and we find regions lighting up under stimuli.
3. Therefore, ethics and morality, at their core, are probably nothing but parts of the brain responding to stimuli.
That pattern was already in the back of my mind (I don't know how many centimeters back) when I began to read "The End of Morality," and I wondered if it, too, would match the formula. I was not, shall we say, disappointed: once again, it was the usual tale of brain scans and reductionism.
Take the Trolley
But maybe this is brushing the Discover article aside too quickly, for it did have all kinds of fascinating insights to offer. One of the tough ethical dilemmas it covered was "the trolley problem." The trolley problem is a thought experiment that's been making the rounds of psychology labs and philosophical discussions (with and without beer) for several decades now. In one classic formulation, a trolley is careening out of control down a track onto which some "mad philosopher" (as the problem's originator, Philippa Foot, imagined it) has tied five people. You're standing near the track, and you can save the five by flipping a switch that diverts the trolley to another track—but there's one person tied to that other track who will die as the direct result of your action.
So what do you do? Let's find out if science can help. Brain studies have now demonstrated that the trolley problem pits utilitarian calculations (five is greater than one; therefore, I will kill the one to save the five) against gut emotion (I can't stand to be the one who sends the trolley over to kill that one guy!).
It's numbers versus feelings. We know this scientifically, thanks to neuroscience. In people who take the more utilitarian approach, brain regions involved in calculation light up. Those who follow more of a gut approach activate an emotional center more strongly.
Now, isn't that helpful? We have expensive (but scientific!) fMRIs to tell us that an emotional approach is more emotional than one that runs the numbers. It's a great example of insights we could never have gained any other way—except, maybe, by sitting back a minute and thinking about it. (Should I apply for a grant to do that? I promise you my charge for sitting and thinking would be less than whatever an fMRI scan costs.)
I'm being unkind again, I know. I admit to overlooking some really good stuff coming out of neuroethics. For example, neuroethics has taught us that real-life decisions seem to be conditioned by factors other than common sense. That's important. Joshua Greene, a Harvard scientist who studies morality by using "behavioral experiments, functional neuroimaging (fMRI), transcranial magnetic stimulation (TMS), and genotyping," presents some of the fruit of this deep science on his faculty website:
As everyone knows, we humans are beset by a number of serious social problems: war, terrorism, the destruction of the environment, etc. Many people think that the cure for these ills is a heaping helping of common sense morality: "If only people everywhere would do what they know, deep down, is right, we'd all get along."
I believe that the opposite is true, that the aforementioned problems are a product of well-intentioned people abiding by their respective common senses and that the only long-run solution to these problems is for people to develop a healthy distrust of moral common sense. This is largely because our social instincts were not designed for the modern world. Nor, for that matter, were they designed to promote peace and happiness in the world for which they were designed, the world of our hunter-gatherer ancestors.5
Designed?! I thought that was a naughty word in biology. (If you don't believe me, look up for yourself W. J. Bock's 2009 article, "Design—an inappropriate concept in evolutionary theory.")
Regardless, I'm grateful we have this research to inform us that sometimes we get confused, and, in spite of our best efforts, we don't always make the right decisions. Up until now, I could never figure that out! I'm not sure I could have done it even with grant money. Thank you for that, Dr. Greene. But you seem to be a bit confused about the evolutionary part; you used a word—design—you shouldn't have.
Jordan Grafman and Jorge Moll, also featured in Discover's "The End of Morality," are a little more careful:
Brain scans showed that donating money activated primitive areas like the ventral tegmentum, part of the brain's reward circuit that lights up in response to food, sex, and other pleasurable activities necessary to our survival. Moll concluded that humans are hardwired with the neural architecture for such pro-social sentiments as generosity, guilt, and compassion. While the dollar amounts were modest, those who donated more . . . showed a small but significant bump of activity in the brain's septal region, an area strongly associated with social affiliation and attachment.
"This region is very rich in oxytocin receptors," Moll says. "I think these instincts evolved from nonhuman primates' capacity to form social bonds and from mother-offspring attachment capabilities."
There's none of that "design" language here. There's no language of virtue, either. We are not designed to love. Rather, compassion, guilt, and generosity are "pro-social sentiments," "instincts" relating to "primitive areas" in the brain that we must attribute to evolution. I suppose that's how you speak if your primitive-region instincts lead you to adopt pro-social sentiments toward evolutionary biologists.
Discover tells us that the same Joshua Greene mentioned above has proposed a viable hypothesis to explain a certain human paradox, summed up in Josef Stalin's trenchant observation that "the death of one man is a tragedy; the death of millions is a statistic." We tend to focus on individual drama far more than, say, the many thousands who may be suffering from famine right now in Africa. Greene's neuroimaging studies have revealed which region of the brain lights up when we think of tragedy in large numbers. It's an arithmetical part of the brain rather than an ethical part. Greene thinks our ethical functions stumble when we're confronted with high numbers because, at that point, we former hunter-gatherers switch to using "valuation mechanisms designed to think about things like nuts!" (He does like that word "designed," doesn't he? I wonder just how pro-social his primitive regions really are toward biologists.)
The New Society
I'm not sure what good Greene's discovery does us, though. Maybe we could wear transcranial magnetic stimulation devices to re-direct our neural systems for more nuanced ethical/arithmetical processing. Or maybe his insights could lead us to reflect on how imbalanced you and I can be in our responses to tragedy, and prod us to think of ways to correct that. But then again, maybe some of us could have worked that out even before Greene discovered that our ventral striata were involved in the process (and our approach wouldn't have required nearly so much in grant funding).
Seriously, genuinely helpful insights flowing out of this kind of research are hard to find. Why, then, does it even matter? Even Kristin Ohlson, who wrote "The End of Morality," seems to have been asking herself this question. If the answer were obvious, she probably wouldn't have felt the need to explain it to us. But she did:
This is why this research matters. It helps us become conscious of our brain's moral machinery. When the sirens of our emotions are sounding in unproductive ways, we can crank up the reasoning parts of our brain to make sound decisions. Often, Greene observes, we have made progress as individuals and as a society when we have managed to override our automatic settings, even if we did not realize that was what we were doing.
There you have it. Nowhere does "The End of Morality" suggest that we should cease being moral. It speaks instead of Spock-like "new ways to approach . . . moral questions, allowing logic to triumph over deep-rooted instinct." Progress for society will come out of "moral machinery," which can be "unproductive" unless we "crank up" other "parts" and "override our automatic settings." So, what part of the brain lights up when we "manage to override" these "settings"? How do we flip those switches? I'll have to think about that.
But first I must deal with the guilt-related instinctive response my pro-social sentiments have activated in my brain for the way I've treated these neuroethical researchers and the journalists who write about their work. I wish I could just find it in my brain's hunter-gatherer wiring to be more charitable. But the neurochemistry within me is bound to do whatever it is that neurochemistry does.
© 2015 Salvo magazine. Published by The Fellowship of St. James. All rights reserved.