Joshua D. Greene uses MRIs to discover what parts of our brains are responsible for morality.


The Biology of Morality

One school of thought, established by the German philosopher Immanuel Kant, holds that our sense of morality is rooted in reason. Pulling a lever and incidentally killing a man on the side track is worth it if it saves five lives. By contrast, it's wrong to kill a person intentionally, so pushing the man onto the trolley tracks is immoral, even if it saves five others.

Early 21st-century research suggests Kant's theory may not be correct. Primates in studies have been shown to understand principles of fairness and to get angry when others in their groups behave selfishly. This undermines the idea of reason-based morality, since higher reasoning is generally thought to be unique to humans. Technology is also lending support to the idea that morality is ingrained in us.

Since its development in the 1970s, magnetic resonance imaging (MRI) has been used for everything from finding tumors hidden deep within the brain to detecting whether a person is lying. Now it's being used to discover which parts of our brains help us determine right from wrong.

Joshua Greene at Princeton University is leading the charge to explore morality through technology, using MRI scans in conjunction with the trolley problem and other moral dilemmas. He's found that when a person in an MRI machine is asked a practical question -- say, whether to take the bus or the train to work -- the brain areas that activate to form the answer are among the same ones that activate when the person works through the first version of the trolley problem. The thought of pulling a switch that will dispatch one person to save five appears to be governed by reason and problem solving.

On the other hand (or region of the brain), Greene has found that distinctly different parts of the brain activate when people consider pushing a man onto the tracks. Regions responsible for determining what other people are feeling, along with an area tied to strong emotion, swing into action when a person confronts that decision. It's possible this combination of brain functions constitutes our moral judgment.

Greene isn't alone in his quest to update our understanding of human morality. John Mikhail, a philosopher at Georgetown University, is investigating his belief that the brain handles morality in much the same way it handles grammar. In Mikhail's view, we decide whether an act is moral or immoral based on a series of clues within its context. We recognize an immoral act the same way we recognize a grammatical error in a sentence -- it just stands out.

Morality, whether an instinct akin to grammar, as Mikhail believes, or the product of competing neural processes, as Greene's scans suggest, remains elusive. But even once science determines exactly how morality works, a question will still remain: Why do we have morality at all?
