Are our moral intuitions irrelevant?

Mark argued this morning (follow-up here) that it is neither irrational nor morally wrong for Americans to place greater weight on the well-being of their fellow citizens than on that of unknown persons abroad when thinking about the desirability of expanded global trade. This observation will draw fire from consequentialist moral philosophers, who insist that the right course of action is the one that leads to the best consequences overall. Thus, argue the consequentialists, if trade benefits foreigners more than it harms Americans, it is morally desirable, full stop.

Although, like most economists, I’m attracted by the logic of the consequentialist position, on this issue I believe Mark has the stronger hand. As I argue here, consequentialists have been too quick to dismiss moral sentiments that conflict with their prescriptions, viewing them as largely irrelevant vestiges of our evolutionary past. Consequentialists are probably right that moral sentiments sometimes inhibit us from making the best choices. But as Mark suggests, any system that did not actively encourage these sentiments would be unlikely to deliver good consequences.

More below the fold, including some relevant neuroscience.

Consequentialists have interpreted recent findings in neuroscience as supportive of their belief that moral intuitions not only do not define morally correct conduct but actually militate against it in many cases. Consider, for example, the following moral dilemma:

A trolley car with no one at the controls is speeding along a track seconds away from striking and killing five persons. You happen to be standing next to a switch that can divert the trolley onto a parallel side loop of tracks. If you throw the switch, the trolley will strike and kill a large man standing on the side loop; the collision will derail the trolley before it can regain the main track where the five persons are standing. Should you throw the switch?

Consequentialists argue that you should, because then only one person will die instead of five. Most people seem to agree, although few people would want to have to make this choice. But now consider this variant of the same dilemma:

A trolley car with no one at the controls is speeding along a track seconds away from striking and killing five persons. You are standing on a footbridge above the tracks. A large stranger is standing next to you. If you push him off the bridge onto the tracks below, his body will derail the trolley, in the process killing him but sparing the lives of the five strangers. (It won’t work for you to jump down onto the tracks yourself, because you are too small to derail the trolley.) Should you push the stranger from the bridge?

Here, too, consequentialists argue that you should, again because only one person will die instead of five. But this time most people say that pushing the stranger is morally wrong. Joshua Greene, a cognitive neuroscientist, has suggested that people’s intuitions differ in these two examples not because the morally correct action differs, but rather because the action that results in the large stranger’s death is so much more vivid and personal in the footbridge case than in the looped-track case:

Because people have a robust, negative emotional response to the personal violation proposed in the footbridge case they immediately say that it’s wrong … At the same time, people fail to have a strong negative emotional response to the relatively impersonal violation proposed in the original trolley case, and therefore revert to the most obvious moral principle, “minimize harm,” which in turn leads them to say that the action in the original case is permissible. (Greene, 2002, p. 178.)

To test this explanation, Greene used functional magnetic resonance imaging to examine activity patterns in the brains of subjects confronted with the two decisions. He predicted that activity levels in brain regions associated with emotion would be higher when subjects considered pushing the stranger from the footbridge than when they considered diverting the trolley onto the looped track. He also reasoned that the minority of subjects who felt the right action was to push the stranger would reach that judgment only after overcoming their initial emotional reaction to the contrary. He therefore predicted that these subjects would take longer to decide than the majority who thought it wrong to push the stranger to his death, and longer than they themselves took to decide the looped-track case. Each of these predictions was confirmed.

Is it morally relevant that thinking about causing someone’s death by pushing him from a footbridge elicits stronger emotions than thinking about causing his death by throwing a switch? Consequentialists argue that it is not—that the difference is a simple, non-normative consequence of our evolutionary past.

Perhaps pushing the stranger from the bridge is the morally correct choice, just as consequentialists suggest. But that does not imply that it is generally best to ignore the evolved moral sentiments that influence such choices.

Moral systems that ignore moral emotions face multiple challenges. It is one thing, for example, to say that we would all enjoy greater prosperity if we refrained from cheating one another. But it is quite another to persuade individuals not to cheat when cheating cannot be detected and punished.

Even for persons strongly motivated to do the right thing, consequentialist moral systems can sometimes make impossible demands. Imagine, for example, that five strangers are about to be killed by a runaway trolley, which at the flip of a switch you could divert onto a side track where it would kill four of your closest friends. Many consequentialists would argue that it is your moral duty to flip the switch, since it is better that only four die instead of five. But a person capable of heeding such advice would be unlikely to have had any close friends in the first place.

The capacity to form deep bonds of sympathy and affection is important for solving a variety of commitment problems that require trust. People who have this capacity reap considerable benefits from it. It is not a capacity easily abandoned. And even if we could abandon it, the emotional and material costs would be substantial. The twist, then, is that consequentialist prescriptions that treat moral intuitions as irrelevant may not lead to very good consequences.

Author: Robert Frank

Robert H. Frank is the Henrietta Johnson Louis Professor of Management and Professor of Economics at Cornell's Johnson Graduate School of Management and the co-director of the Paduano Seminar in business ethics at NYU’s Stern School of Business. His “Economic View” column appears monthly in The New York Times. He is a Distinguished Senior Fellow at Demos. He received his B.S. in mathematics from Georgia Tech, then taught math and science for two years as a Peace Corps Volunteer in rural Nepal. He holds an M.A. in statistics and a Ph.D. in economics, both from the University of California at Berkeley. His papers have appeared in the American Economic Review, Econometrica, Journal of Political Economy, and other leading professional journals. His books, which include Choosing the Right Pond, Passions Within Reason, Microeconomics and Behavior, Principles of Economics (with Ben Bernanke), Luxury Fever, What Price the Moral High Ground?, Falling Behind, The Economic Naturalist, and The Darwin Economy, have been translated into 22 languages. The Winner-Take-All Society, co-authored with Philip Cook, received a Critic's Choice Award, was named a Notable Book of the Year by The New York Times, and was included in Business Week's list of the ten best books of 1995. He is a co-recipient of the 2004 Leontief Prize for Advancing the Frontiers of Economic Thought. He was awarded the Johnson School’s Stephen Russell Distinguished Teaching Award in 2004, 2010, and 2012, and its Apple Distinguished Teaching Award in 2005.