Tuesday, March 27, 2007

Relations between rich and poor countries

March 25, 2007
Idea Lab

Reverse Foreign Aid

For the last 10 years, people in China have been sending me money. I also get money from countries in Latin America and sub-Saharan Africa — really, from every poor country. I’m not the only one who’s so lucky. Everyone in a wealthy nation has become the beneficiary of the generous subsidies that poorer countries bestow upon rich ones. Here in the United States, this welfare program in reverse allows our government to spend wildly without runaway inflation, keeps many American businesses afloat and even provides medical care in parts of the country where doctors are scarce.

Economic theory holds that money should flow downhill. The North, as rich countries are informally known, should want to sink its capital into the South — the developing world, which some statisticians define as all countries but the 29 wealthiest. According to this model, money both does well and does good: investors get a higher return than they could get in their own mature economies, and poor countries get the capital they need to get richer. Increasing the transfer of capital from rich nations to poorer ones is often listed as one justification for economic globalization.

Historically, the global balance sheet has favored poor countries. But with the advent of globalized markets, capital began to move in the other direction, and the South now exports capital to the North, at a skyrocketing rate. According to the United Nations, in 2006 the net transfer of capital from poorer countries to rich ones was $784 billion, up from $229 billion in 2002. (In 1997, the balance was even.) Even the poorest countries, like those in sub-Saharan Africa, are now money exporters.

How did this great reversal take place? Why did globalization begin to redistribute wealth upward? The answer, in large part, has to do with global finance. All countries hold hard-currency reserves to cover their foreign debts or to use in case of a natural or a financial disaster. For the past 50 years, rich countries have steadily held reserves equivalent to about three months’ worth of their total imports. As money circulates more and more quickly in a globalized economy, however, many countries have felt the need to add to their reserves, mainly to head off investor panic, which can strike even well-managed economies. Since 1990, the world’s nonrich nations have increased their reserves, on average, from around three months’ worth of imports to more than eight months’ worth — or the equivalent of about 30 percent of their G.D.P. China and other countries maintain those reserves mainly in the form of supersecure U.S. Treasury bills; whenever they buy T-bills, they are in effect lending the United States money. This allows the U.S. to keep interest rates low and Washington to run up huge deficits with no apparent penalty.
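To see how the jump from a three-month to an eight-month reserve norm translates into the roughly 30 percent of G.D.P. mentioned above, here is a minimal sketch in Python. The G.D.P. and import figures are invented for illustration; the import share is an assumption chosen only so the arithmetic lines up with the article’s numbers.

```python
# Minimal sketch (illustrative numbers, not from the article) of how
# "months of import cover" maps to a reserve stock and a share of G.D.P.

def reserves_from_import_cover(annual_imports, months_of_cover):
    """Reserves needed to cover the given number of months of imports."""
    return annual_imports * months_of_cover / 12.0

# Hypothetical developing economy: G.D.P. of $100 billion, with imports
# assumed to equal 45% of G.D.P.
gdp = 100.0            # $ billion
imports = 0.45 * gdp   # $ billion of imports per year

old_norm = reserves_from_import_cover(imports, 3)   # ~$11 billion
new_norm = reserves_from_import_cover(imports, 8)   # ~$30 billion

print(f"3-month cover: ${old_norm:.0f}bn ({100 * old_norm / gdp:.0f}% of G.D.P.)")
print(f"8-month cover: ${new_norm:.0f}bn ({100 * new_norm / gdp:.0f}% of G.D.P.)")
```

At the eight-month norm, this hypothetical economy’s reserves reach the 30-percent-of-G.D.P. scale the article describes, and the portion parked in T-bills is, in effect, a loan to Washington.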

But the cost to poorer countries is very high. The benefit of T-bills, of course, is that they are virtually risk-free and thus help assure investors and achieve stability. But the problem is that T-bills earn low returns. All the money spent on T-bills — a very substantial sum — could be earning far better returns invested elsewhere, or could be used to pay teachers and build highways at home, activities that bring returns of a different type. Dani Rodrik, an economist at Harvard’s Kennedy School of Government, estimates conservatively that maintaining reserves in excess of the three-month standard costs poor countries 1 percent of their economies annually — some $110 billion every year. Joseph Stiglitz, the Columbia University economist, says he thinks the real cost could be double that.
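Rodrik’s estimate is, at bottom, a stock of excess reserves multiplied by a yield spread. The sketch below shows the shape of that calculation with assumed magnitudes; the developing-world G.D.P., the size of the excess-reserve stock and the spread between T-bill yields and alternative returns are stand-ins chosen to reproduce the article’s rough scale, not figures reported above.

```python
# Shape of a Rodrik-style carrying-cost estimate (assumed magnitudes):
# cost = (reserves above the three-month norm) x (alternative return - T-bill yield)

developing_world_gdp = 11_000.0                   # $ billion, assumed order of magnitude
excess_reserves = 0.20 * developing_world_gdp     # reserves above the 3-month norm, assumed
spread = 0.05                                     # assumed return gap, 5 percentage points

annual_cost = excess_reserves * spread
print(f"Annual carrying cost: ${annual_cost:.0f} billion "
      f"({100 * annual_cost / developing_world_gdp:.1f}% of G.D.P.)")
# Prints roughly $110 billion, about 1% of G.D.P. -- the scale Rodrik describes.
```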

In his recent book, “Making Globalization Work,” Stiglitz proposes a solution. Adapting an old idea of John Maynard Keynes, he proposes a sort of insurance pool that would provide hard currency to countries going through times of crisis. Money actually changes hands only if a country needs the reserve, and the recipient must repay what it has used.

No one planned the rapid swelling of reserves. Other South-to-North subsidies, by contrast, have been built into the rules of globalization by international agreements. Consider the World Trade Organization’s requirements that all member countries respect patents and copyrights — patents on medicines and industrial and other products; copyrights on, say, music and movies. As poorer countries enter the W.T.O., they must agree to pay royalties on such goods — and a result is a net obligation of more than $40 billion annually that poorer countries owe to American and European corporations.

There are good reasons for countries to respect intellectual property, but doing so is also an overwhelming burden on the poorest people in poorer countries. After all, the single largest beneficiary of the intellectual-property system is the pharmaceutical industry. But consumers in poorer nations do not get much in return, as they do not form a lucrative enough market to inspire research on cures for many of their illnesses. Moreover, the intellectual-property rules make it difficult for poorer countries to manufacture less-expensive generic drugs that poor people rely on. The largest cost to poor countries is not money but health, as many people simply will not be able to find or afford brand-name medicine.

The hypercompetition for global investment has produced another important reverse subsidy: the tax holidays poor countries offer foreign investors. A company that announces it wants to make cars, televisions or pharmaceuticals in, say, East Asia, will then send its representatives to negotiate with government officials in China, Malaysia, the Philippines and elsewhere, holding an auction for the best deal. The savviest corporations get not only 10-year tax holidays but also discounts on land, cheap government loans, below-market rates for electricity and water and government help in paying their workers.

Rich countries know better — the European Union, for example, regulates the incentives members can offer to attract investment. That car plant will most likely be built in one of the competing countries anyway — the incentives serve only to reduce the host country’s benefits. Since deals between corporations and governments are usually secret, it is hard to know how much investment incentives cost poorer countries — certainly tens of billions of dollars. Whatever the cost, it is growing, as country after country has passed laws enabling the offer of such incentives.

Human nature, not smart lobbying, is responsible for another poor-to-rich subsidy: the brain drain. The migration of highly educated people from poor nations is increasing. A small brain drain can benefit the South, as emigrants send money home and may return with new skills and capital. But in places where educated people are few and emigrants don’t go home again, the brain drain devastates. In many African countries, more than 40 percent of college-educated people emigrate to rich countries. Malawian nurses have moved to Britain and other English-speaking nations en masse, and now two-thirds of nursing posts in Malawi’s public health system are vacant. Zambia has lost three-quarters of its new physicians in recent years. Even in South Africa, 21 percent of graduating doctors migrate.

The financial consequences for the poorer nations can be severe. A doctor who moves from Johannesburg to North Dakota costs the South African government as much as $100,000, the price of training him there. As with patent enforcement, a larger cost may be in health. A lack of trained people — a gap that widens daily — is now the main barrier to fighting AIDS, malaria and other diseases in Africa.

Sometimes reverse subsidies are disguised. Rich-country governments spent $283 billion in 2005 to support and subsidize their own agriculture, mainly agribusiness. Artificially cheap food exported to poor countries might seem like a gift — but it is often a Trojan horse. Corn, rice or cotton exported by rich countries is so cheap that small farmers in poor countries cannot compete, so they stop farming. Three-quarters of the world’s poor people are rural. The African peasant with an acre and a hoe is losing her livelihood, and the benefits go mainly to companies like Archer Daniels Midland and Cargill.

Most costly of all, poor countries have been drafted into paying for rich nations’ energy use. On a per capita basis, Americans emit more greenhouse gases into the atmosphere — and thus create more global warming — than anyone else. What we pay to drive a car or keep an industrial plant running is not the true cost of oil or coal. The real price would include the cost of the environmental damage that comes from burning these fuels. But even as we do not pay that price, other countries do. American energy use is being subsidized by tropical coastal nations, which appear to be global warming’s first victims. Some scientists argue that Bangladesh already has more powerful monsoon downpours and Honduras fiercer cyclones because of global warming — likely indicators of worse things ahead. The islands of the Maldives may someday be completely underwater. The costs these nations will pay do not appear on the global balance sheets. But they are the ultimate subsidy.

Tina Rosenberg is a contributing writer for the magazine.

Copyright 2007 The New York Times Company

Tuesday, March 13, 2007

Brain Is Not Mind

March 11, 2007

The Brain on the Stand

I. Mr. Weinstein’s Cyst When historians of the future try to identify the moment that neuroscience began to transform the American legal system, they may point to a little-noticed case from the early 1990s. The case involved Herbert Weinstein, a 65-year-old ad executive who was charged with strangling his wife, Barbara, to death and then, in an effort to make the murder look like a suicide, throwing her body out the window of their 12th-floor apartment on East 72nd Street in Manhattan. Before the trial began, Weinstein’s lawyer suggested that his client should not be held responsible for his actions because of a mental defect — namely, an abnormal cyst nestled in his arachnoid membrane, which surrounds the brain like a spider web.

The implications of the claim were considerable. American law holds people criminally responsible unless they act under duress (with a gun pointed at the head, for example) or suffer from a serious defect in rationality — like not being able to tell right from wrong. But if you suffer from such a serious defect, the law generally doesn’t care why — whether it’s an unhappy childhood or an arachnoid cyst or both. To suggest that criminals could be excused because their brains made them do it seems to imply that anyone whose brain isn’t functioning properly could be absolved of responsibility. But should judges and juries really be in the business of defining the normal or properly working brain? And since all behavior is caused by our brains, wouldn’t this mean all behavior could potentially be excused?

The prosecution at first tried to argue that evidence of Weinstein’s arachnoid cyst shouldn’t be admitted in court. One of the government’s witnesses, a forensic psychologist named Daniel Martell, testified that brain-scanning technologies were new and untested, and their implications weren’t yet widely accepted by the scientific community. Ultimately, on Oct. 8, 1992, Judge Richard Carruthers issued a Solomonic ruling: Weinstein’s lawyers could tell the jury that brain scans had identified an arachnoid cyst, but they couldn’t tell jurors that arachnoid cysts were associated with violence. Even so, the prosecution team seemed to fear that simply exhibiting images of Weinstein’s brain in court would sway the jury. Eleven days later, on the morning of jury selection, they agreed to let Weinstein plead guilty in exchange for a reduced charge of manslaughter.

After the Weinstein case, Daniel Martell found himself in so much demand to testify as an expert witness that he started a consulting business called Forensic Neuroscience. Hired by defense teams and prosecutors alike, he has testified over the past 15 years in several hundred criminal and civil cases. In those cases, neuroscientific evidence has been admitted to show everything from head trauma to the tendency of violent video games to make children behave aggressively. But Martell told me that it’s in death-penalty litigation that neuroscience evidence is having its most revolutionary effect. “Some sort of organic brain defense has become de rigueur in any sort of capital defense,” he said. Lawyers routinely order scans of convicted defendants’ brains and argue that a neurological impairment prevented them from controlling themselves. The prosecution counters that the evidence shouldn’t be admitted, but under the relaxed standards for mitigating evidence during capital sentencing, it usually is. Indeed, a Florida court has held that the failure to admit neuroscience evidence during capital sentencing is grounds for a reversal. Martell remains skeptical about the worth of the brain scans, but he observes that they’ve “revolutionized the law.”

The extent of that revolution is hotly debated, but the influence of what some call neurolaw is clearly growing. Neuroscientific evidence has persuaded jurors to sentence defendants to life imprisonment rather than to death; courts have also admitted brain-imaging evidence during criminal trials to support claims that defendants like John W. Hinckley Jr., who tried to assassinate President Reagan, are insane. Carter Snead, a law professor at Notre Dame, drafted a staff working paper on the impact of neuroscientific evidence in criminal law for President Bush’s Council on Bioethics. The report concludes that neuroimaging evidence is of mixed reliability but “the large number of cases in which such evidence is presented is striking.” That number will no doubt increase substantially. Proponents of neurolaw say that neuroscientific evidence will have a large impact not only on questions of guilt and punishment but also on the detection of lies and hidden bias, and on the prediction of future criminal behavior. At the same time, skeptics fear that the use of brain-scanning technology as a kind of super mind-reading device will threaten our privacy and mental freedom, leading some to call for the legal system to respond with a new concept of “cognitive liberty.”

One of the most enthusiastic proponents of neurolaw is Owen Jones, a professor of law and biology at Vanderbilt. Jones (who happens to have been one of my law-school classmates) has joined a group of prominent neuroscientists and law professors who have applied for a large MacArthur Foundation grant; they hope to study a wide range of neurolaw questions, like: Do sexual offenders and violent teenagers show unusual patterns of brain activity? Is it possible to capture brain images of chronic neck pain when someone claims to have suffered whiplash? In the meantime, Jones is turning Vanderbilt into a kind of Los Alamos for neurolaw. The university has just opened a $27 million neuroimaging center and has poached leading neuroscientists from around the world; soon, Jones hopes to enroll students in the nation’s first program in law and neuroscience. “It’s breathlessly exciting,” he says. “This is the new frontier in law and science — we’re peering into the black box to see how the brain is actually working, that hidden place in the dark quiet, where we have our private thoughts and private reactions — and the law will inevitably have to decide how to deal with this new technology.”

II. A Visit to Vanderbilt Owen Jones is a disciplined and quietly intense man, and his enthusiasm for the transformative power of neuroscience is infectious. With René Marois, a neuroscientist in the psychology department, Jones has begun a study of how the human brain reacts when asked to impose various punishments. Informally, they call the experiment Harm and Punishment — and they offered to make me one of their first subjects.

We met in Jones’s pristine office, which is decorated with a human skull and calipers, like those that phrenologists once used to measure the human head; his father is a dentist, and his grandfather was an electrical engineer who collected tools. We walked over to Vanderbilt’s Institute of Imaging Science, which, although still surrounded by scaffolding, was as impressive as Jones had promised. The basement contains one of the few 7-tesla magnetic-resonance-imaging scanners in the world. For Harm and Punishment, Jones and Marois use a less powerful 3 tesla, which is the typical research M.R.I.

We then made our way to the scanner. After removing all metal objects — including a belt and a stray dry-cleaning tag with a staple — I put on earphones and a helmet that was shaped like a birdcage to hold my head in place. The lab assistant turned off the lights and left the room; I lay down on the gurney and, clutching a panic button, was inserted into the magnet. All was dark except for a screen flashing hypothetical crime scenarios, like this one: “John, who lives at home with his father, decides to kill him for the insurance money. After convincing his father to help with some electrical work in the attic, John arranges for him to be electrocuted. His father survives the electrocution, but he is hospitalized for three days with injuries caused by the electrical shock.” I was told to press buttons indicating the appropriate level of punishment, from 0 to 9, as the magnet recorded my brain activity.

After I spent 45 minutes trying not to move an eyebrow while assigning punishments to dozens of sordid imaginary criminals, Marois told me through the intercom to try another experiment: namely, to think of familiar faces and places in sequence, without telling him whether I was starting with faces or places. I thought of my living room, my wife, my parents’ apartment and my twin sons, trying all the while to avoid improper thoughts for fear they would be discovered. Then the experiments were over, and I stumbled out of the magnet.

The next morning, Owen Jones and I reported to René Marois’s laboratory for the results. Marois’s graduate students, who had been up late analyzing my brain, were smiling broadly. Because I had moved so little in the machine, they explained, my brain activity was easy to read. “Your head movement was incredibly low, and you were the harshest punisher we’ve had,” Josh Buckholtz, one of the grad students, said with a happy laugh. “You were a researcher’s dream come true!” Buckholtz tapped the keyboard, and a high-resolution 3-D image of my brain appeared on the screen in vivid colors. Tiny dots flickered back and forth, showing my eyes moving as they read the lurid criminal scenarios. Although I was only the fifth subject to be put in the scanner, Marois emphasized that my punishment ratings were higher than average. In one case, I assigned a 7 where the average punishment was 4. “You were focusing on the intent, and the others focused on the harm,” Buckholtz said reassuringly.

Marois explained that he and Jones wanted to study the interactions among the emotion-generating regions of the brain, like the amygdala, and the prefrontal regions responsible for reason. “It is also possible that the prefrontal cortex is critical for attributing punishment, making the essential decision about what kind of punishment to assign,” he suggested. Marois stressed that in order to study that possibility, more subjects would have to be put into the magnet. But if the prefrontal cortex does turn out to be critical for selecting among punishments, Jones added, it could be highly relevant for lawyers selecting a jury. For example, he suggested, lawyers might even select jurors for different cases based on their different brain-activity patterns. In a complex insider-trading case, for example, perhaps the defense would “like to have a juror making decisions on maximum deliberation and minimum emotion”; in a government entrapment case, emotional reactions might be more appropriate.

We then turned to the results of the second experiment, in which I had been asked to alternate between thinking of faces and places without disclosing the order. “We think we can guess what you were thinking about, even though you didn’t tell us the order you started with,” Marois said proudly. “We think you started with places and we will prove to you that it wasn’t just luck.” Marois showed me a picture of my parahippocampus, the area of the brain that responds strongly to places and the recognition of scenes. “It’s lighting up like Christmas on all cylinders,” Marois said. “It worked beautifully, even though we haven’t tried this before here.”

He then showed a picture of the fusiform area, which is responsible for facial recognition. It, too, lighted up every time I thought of a face. “This is a potentially very serious legal implication,” Jones broke in, since the technology allows us to tell what people are thinking about even if they deny it. He pointed to a series of practical applications. Because subconscious memories of faces and places may be more reliable than conscious memories, witness lineups could be transformed. A child who claimed to have been victimized by a stranger, moreover, could be shown pictures of the faces of suspects to see which one lighted up the face-recognition area in ways suggesting familiarity.

Jones and Marois talked excitedly about the implications of their experiments for the legal system. If they discovered a significant gap between people’s hard-wired sense of how severely certain crimes should be punished and the actual punishments assigned by law, federal sentencing guidelines might be revised, on the principle that the law shouldn’t diverge too far from deeply shared beliefs. Experiments might help to develop a deeper understanding of the criminal brain, or of the typical brain predisposed to criminal activity.

III. The End of Responsibility? Indeed, as the use of functional M.R.I. results becomes increasingly common in courtrooms, judges and juries may be asked to draw new and sometimes troubling lines between “normal” and “abnormal” brains. Ruben Gur, a professor of psychology at the University of Pennsylvania School of Medicine, specializes in doing just that. Gur began his expert-witness career in the mid-1990s when a colleague asked him to help in the trial of a convicted serial killer in Florida named Bobby Joe Long. Known as the “classified-ad rapist,” because he would respond to classified ads placed by women offering to sell household items, then rape and kill them, Long was sentenced to death after he committed at least nine murders in Tampa. Gur was called as a national expert in positron-emission tomography, or PET scans, in which patients are injected with a solution containing radioactive markers that illuminate their brain activity. After examining Long’s PET scans, Gur testified that a motorcycle accident that had left Long in a coma had also severely damaged his amygdala. It was after emerging from the coma that Long committed his first rape.

“I didn’t have the sense that my testimony had a profound impact,” Gur told me recently — Long is still filing appeals — but he has testified in more than 20 capital cases since then. He wrote a widely circulated affidavit arguing that adolescents are not as capable of controlling their impulses as adults because the development of neurons in the prefrontal cortex isn’t complete until the early 20s. Based on that affidavit, Gur was asked to contribute to the preparation of one of the briefs filed by neuroscientists and others in Roper v. Simmons, the landmark case in which a divided Supreme Court struck down the death penalty for offenders who committed crimes when they were under the age of 18.

The leading neurolaw brief in the case, filed by the American Medical Association and other groups, argued that because “adolescent brains are not fully developed” in the prefrontal regions, adolescents are less able than adults to control their impulses and should not be held fully accountable “for the immaturity of their neural anatomy.” In his majority decision, Justice Anthony Kennedy declared that “as any parent knows and as the scientific and sociological studies” cited in the briefs “tend to confirm, ‘[a] lack of maturity and an underdeveloped sense of responsibility are found in youth more often than in adults.’ ” Although Kennedy did not cite the neuroscience evidence specifically, his indirect reference to the scientific studies in the briefs led some supporters and critics to view the decision as the Brown v. Board of Education of neurolaw.

One important question raised by the Roper case was the question of where to draw the line in considering neuroscience evidence as a legal mitigation or excuse. Should courts be in the business of deciding when to mitigate someone’s criminal responsibility because his brain functions improperly, whether because of age, in-born defects or trauma? As we learn more about criminals’ brains, will we have to redefine our most basic ideas of justice?

Two of the most ardent supporters of the claim that neuroscience requires the redefinition of guilt and punishment are Joshua D. Greene, an assistant professor of psychology at Harvard, and Jonathan D. Cohen, a professor of psychology who directs the neuroscience program at Princeton. Greene got Cohen interested in the legal implications of neuroscience, and together they conducted a series of experiments exploring how people’s brains react to moral dilemmas involving life and death. In particular, they wanted to test people’s responses in the f.M.R.I. scanner to variations of the famous trolley problem, which philosophers have been arguing about for decades.

The trolley problem goes something like this: Imagine a train heading toward five people who are going to die if you don’t do anything. If you hit a switch, the train veers onto a side track and kills another person. Most people confronted with this scenario say it’s O.K. to hit the switch. By contrast, imagine that you’re standing on a footbridge that spans the train tracks, and the only way you can save the five people is to push an obese man standing next to you off the footbridge so that his body stops the train. Under these circumstances, most people say it’s not O.K. to kill one person to save five.

“I wondered why people have such clear intuitions,” Greene told me, “and the core idea was to confront people with these two cases in the scanner and see if we got more of an emotional response in one case and reasoned response in the other.” As it turns out, that’s precisely what happened: Greene and Cohen found that the brain region associated with deliberate problem solving and self-control, the dorsolateral prefrontal cortex, was especially active when subjects confronted the first trolley hypothetical, in which most of them made a utilitarian judgment about how to save the greatest number of lives. By contrast, emotional centers in the brain were more active when subjects confronted the second trolley hypothetical, in which they tended to recoil at the idea of personally harming an individual, even under such wrenching circumstances. “This suggests that moral judgment is not a single thing; it’s intuitive emotional responses and then cognitive responses that are duking it out,” Greene said.

“To a neuroscientist, you are your brain; nothing causes your behavior other than the operations of your brain,” Greene says. “If that’s right, it radically changes the way we think about the law. The official line in the law is all that matters is whether you’re rational, but you can have someone who is totally rational but whose strings are being pulled by something beyond his control.” In other words, even someone who has the illusion of making a free and rational choice between soup and salad may be deluding himself, since the choice of salad over soup is ultimately predestined by forces hard-wired in his brain. Greene insists that this insight means that the criminal-justice system should abandon the idea of retribution — the idea that bad people should be punished because they have freely chosen to act immorally — which has been the focus of American criminal law since the 1970s, when rehabilitation went out of fashion. Instead, Greene says, the law should focus on deterring future harms. In some cases, he supposes, this might mean lighter punishments. “If it’s really true that we don’t get any prevention bang from our punishment buck when we punish that person, then it’s not worth punishing that person,” he says. (On the other hand, Carter Snead, the Notre Dame scholar, maintains that capital defendants who are not considered fully blameworthy under current rules could be executed more readily under a system that focused on preventing future harms.)

Others agree with Greene and Cohen that the legal system should be radically refocused on deterrence rather than on retribution. Since the celebrated M’Naughten case in 1843, involving a paranoid British assassin, English and American courts have recognized an insanity defense only for those who are unable to appreciate the difference between right and wrong. (This is consistent with the idea that only rational people can be held criminally responsible for their actions.) According to some neuroscientists, that rule makes no sense in light of recent brain-imaging studies. “You can have a horrendously damaged brain where someone knows the difference between right and wrong but nonetheless can’t control their behavior,” says Robert Sapolsky, a neurobiologist at Stanford. “At that point, you’re dealing with a broken machine, and concepts like punishment and evil and sin become utterly irrelevant. Does that mean the person should be dumped back on the street? Absolutely not. You have a car with the brakes not working, and it shouldn’t be allowed to be near anyone it can hurt.”

Even as these debates continue, some skeptics contend that both the hopes and fears attached to neurolaw are overblown. “There’s nothing new about the neuroscience ideas of responsibility; it’s just another material, causal explanation of human behavior,” says Stephen J. Morse, professor of law and psychiatry at the University of Pennsylvania. “How is this different than the Chicago school of sociology,” which tried to explain human behavior in terms of environment and social structures? “How is it different from genetic explanations or psychological explanations? The only thing different about neuroscience is that we have prettier pictures and it appears more scientific.”

Morse insists that “brains do not commit crimes; people commit crimes” — a conclusion he suggests has been ignored by advocates who, “infected and inflamed by stunning advances in our understanding of the brain . . . all too often make moral and legal claims that the new neuroscience . . . cannot sustain.” He calls this “brain overclaim syndrome” and cites as an example the neuroscience briefs filed in the Supreme Court case Roper v. Simmons to question the juvenile death penalty. “What did the neuroscience add?” he asks. If adolescent brains caused all adolescent behavior, “we would expect the rates of homicide to be the same for 16- and 17-year-olds everywhere in the world — their brains are alike — but in fact, the homicide rates of Danish and Finnish youths are very different than American youths.” Morse agrees that our brains bring about our behavior — “I’m a thoroughgoing materialist, who believes that all mental and behavioral activity is the causal product of physical events in the brain” — but he disagrees that the law should excuse certain kinds of criminal conduct as a result. “It’s a total non sequitur,” he says. “So what if there’s biological causation? Causation can’t be an excuse for someone who believes that responsibility is possible. Since all behavior is caused, this would mean all behavior has to be excused.” Morse cites the case of Charles Whitman, a man who, in 1966, killed his wife and his mother, then climbed up a tower at the University of Texas and shot and killed 13 more people before being shot by police officers. Whitman was discovered after an autopsy to have a tumor that was putting pressure on his amygdala. “Even if his amygdala made him more angry and volatile, since when are anger and volatility excusing conditions?” Morse asks. “Some people are angry because they had bad mommies and daddies and others because their amygdalas are mucked up. The question is: When should anger be an excusing condition?”

Still, Morse concedes that there are circumstances under which new discoveries from neuroscience could challenge the legal system at its core. “Suppose neuroscience could reveal that reason actually plays no role in determining human behavior,” he suggests tantalizingly. “Suppose I could show you that your intentions and your reasons for your actions are post hoc rationalizations that somehow your brain generates to explain to you what your brain has already done” without your conscious participation. If neuroscience could reveal us to be automatons in this respect, Morse is prepared to agree with Greene and Cohen that criminal law would have to abandon its current ideas about responsibility and seek other ways of protecting society.

Some scientists are already pushing in this direction. In a series of famous experiments in the 1970s and ’80s, Benjamin Libet measured people’s brain activity while telling them to move their fingers whenever they felt like it. Libet detected brain activity suggesting a readiness to move the finger half a second before the actual movement and about 400 milliseconds before people became aware of their conscious intention to move their finger. Libet argued that this leaves 100 milliseconds for the conscious self to veto the brain’s unconscious decision, or to give way to it — suggesting, in the words of the neuroscientist Vilayanur S. Ramachandran, that we have not free will but “free won’t.”

Morse is not convinced that the Libet experiments reveal us to be helpless automatons. But he does think that the study of our decision-making powers could bear some fruit for the law. “I’m interested,” he says, “in people who suffer from drug addictions, psychopaths and people who have intermittent explosive disorder — that’s people who have no general rationality problem other than they just go off.” In other words, Morse wants to identify the neural triggers that make people go postal. “Suppose we could show that the higher deliberative centers in the brain seem to be disabled in these cases,” he says. “If these are people who cannot control episodes of gross irrationality, we’ve learned something that might be relevant to the legal ascription of responsibility.” That doesn’t mean they would be let off the hook, he emphasizes: “You could give people a prison sentence and an opportunity to get fixed.”

IV. Putting the Unconscious on Trial If debates over criminal responsibility long predate the f.M.R.I., so do debates over the use of lie-detection technology. What’s new is the prospect that lie detectors in the courtroom will become much more accurate, and correspondingly more intrusive. There are, at the moment, two lie-detection technologies that rely on neuroimaging, although the value and accuracy of both are sharply contested. The first, developed by Lawrence Farwell in the 1980s, is known as “brain fingerprinting.” Subjects put on an electrode-filled helmet that measures a brain wave called p300, which, according to Farwell, changes its frequency when people recognize images, pictures, sights and smells. After showing a suspect pictures of familiar places and measuring his p300 activation patterns, government officials could, at least in theory, show a suspect pictures of places he may or may not have seen before — a Qaeda training camp, for example, or a crime scene — and compare the activation patterns. (By detecting not only lies but also honest cases of forgetfulness, the technology could expand our very idea of lie detection.)

The second lie-detection technology uses f.M.R.I. machines to compare the brain activity of liars and truth tellers. It is based on a test called Guilty Knowledge, developed by Daniel Langleben at the University of Pennsylvania in 2001. Langleben gave subjects a playing card before they entered the magnet and told them to answer no to a series of questions, including whether they had the card in question. Langleben and his colleagues found that certain areas of the brain lighted up when people lied.

Two companies, No Lie MRI and Cephos, are now competing to refine f.M.R.I. lie-detection technology so that it can be admitted in court and commercially marketed. I talked to Steven Laken, the president of Cephos, which plans to begin selling its products this year. “We have two to three people who call every single week,” he told me. “They’re in legal proceedings throughout the world, and they’re looking to bolster their credibility.” Laken said the technology could have “tremendous applications” in civil and criminal cases. On the government side, he said, the technology could replace highly inaccurate polygraphs in screening for security clearances, as well as in trying to identify suspected terrorists’ native languages and close associates. “In lab studies, we’ve been in the 80- to 90-percent-accuracy range,” Laken says. This is similar to the accuracy rate for polygraphs, which are not considered sufficiently reliable to be allowed in most legal cases. Laken says he hopes to reach the 90-percent- to 95-percent-accuracy range — which should be high enough to satisfy the Supreme Court’s standards for the admission of scientific evidence. Judy Illes, director of Neuroethics at the Stanford Center for Biomedical Ethics, says, “I would predict that within five years, we will have technology that is sufficiently reliable at getting at the binary question of whether someone is lying that it may be utilized in certain legal settings.”

If and when lie-detection f.M.R.I.’s are admitted in court, they will raise vexing questions of self-incrimination and privacy. Hank Greely, a law professor and head of the Stanford Center for Law and the Biosciences, notes that prosecution and defense witnesses might have their credibility questioned if they refused to take a lie-detection f.M.R.I., as might parties and witnesses in civil cases. Unless courts found the tests to be shocking invasions of privacy, like stomach pumps, witnesses could even be compelled to have their brains scanned. And equally vexing legal questions might arise as neuroimaging technologies move beyond telling whether or not someone is lying and begin to identify the actual content of memories. Michael Gazzaniga, a professor of psychology at the University of California, Santa Barbara, and author of “The Ethical Brain,” notes that within 10 years, neuroscientists may be able to show that there are neurological differences when people testify about their own previous acts and when they testify to something they saw. “If you kill someone, you have a procedural memory of that, whereas if I’m standing and watch you kill somebody, that’s an episodic memory that uses a different part of the brain,” he told me. Even if witnesses don’t have their brains scanned, neuroscience may lead judges and jurors to conclude that certain kinds of memories are more reliable than others because of the area of the brain in which they are processed. Further into the future, and closer to science fiction, lies the possibility of memory downloading. “One could even, just barely, imagine a technology that might be able to ‘read out’ the witness’s memories, intercepted as neuronal firings, and translate it directly into voice, text or the equivalent of a movie,” Hank Greely writes.

Greely acknowledges that lie-detection and memory-retrieval technologies like this could pose a serious challenge to our freedom of thought, which is now defended largely by the First Amendment protections for freedom of expression. “Freedom of thought has always been buttressed by the reality that you could only tell what someone thought based on their behavior,” he told me. “This technology holds out the possibility of looking through the skull and seeing what’s really happening, seeing the thoughts themselves.” According to Greely, this may challenge the principle that we should be held accountable for what we do, not what we think. “It opens up for the first time the possibility of punishing people for their thoughts rather than their actions,” he says. “One reason thought has been free in the harshest dictatorships is that dictators haven’t been able to detect it.” He adds, “Now they may be able to, putting greater pressure on legal constraints against government interference with freedom of thought.”

In the future, neuroscience could also revolutionize the way jurors are selected. Steven Laken, the president of Cephos, says that jury consultants might seek to put prospective jurors in f.M.R.I.’s. “You could give videotapes of the lawyers and witnesses to people when they’re in the magnet and see what parts of their brains light up,” he says. A situation like this would raise vexing questions about jurors’ prejudices — and what makes for a fair trial. Recent experiments have suggested that people who believe themselves to be free of bias may harbor plenty of it all the same.

The experiments, conducted by Elizabeth Phelps, who teaches psychology at New York University, combine brain scans with a behavioral test known as the Implicit Association Test, or I.A.T., as well as physiological tests of the startle reflex. The I.A.T. flashes pictures of black and white faces at you and asks you to associate various adjectives with the faces. Repeated tests have shown that white subjects take longer to respond when they’re asked to associate black faces with positive adjectives and white faces with negative adjectives than vice versa, and this is said to be an implicit measure of unconscious racism. Phelps and her colleagues added neurological evidence to this insight by scanning the brains and testing the startle reflexes of white undergraduates at Yale before they took the I.A.T. She found that the subjects who showed the most unconscious bias on the I.A.T. also had the highest activation in their amygdalas — a center of threat perception — when unfamiliar black faces were flashed at them in the scanner. By contrast, when subjects were shown pictures of familiar black and white figures — like Denzel Washington, Martin Luther King Jr. and Conan O’Brien — there was no jump in amygdala activity.
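Mechanically, the I.A.T. reduces to comparing reaction times across the two pairing conditions. The sketch below is a simplified illustration of that logic with invented latencies; the actual published scoring procedure involves additional trials and filtering steps not shown here.

```python
# Simplified illustration of I.A.T.-style scoring with made-up reaction times.
from statistics import mean, stdev

# Milliseconds to categorize items under the two pairings.
congruent = [620, 650, 600, 640, 610, 630]      # e.g. white+positive / black+negative blocks
incongruent = [720, 760, 700, 740, 710, 730]    # e.g. black+positive / white+negative blocks

latency_gap = mean(incongruent) - mean(congruent)
pooled_sd = stdev(congruent + incongruent)
score = latency_gap / pooled_sd                  # difference expressed in pooled-SD units

print(f"Mean latency difference: {latency_gap:.0f} ms")
print(f"Standardized score: {score:.2f}")
# A positive score means slower responses in the "incongruent" pairing,
# which the test's designers interpret as a measure of implicit association.
```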

The legal implications of the new experiments involving bias and neuroscience are hotly disputed. Mahzarin R. Banaji, a psychology professor at Harvard who helped to pioneer the I.A.T., has argued that there may be a big gap between the concept of intentional bias embedded in law and the reality of unconscious racism revealed by science. When the gap is “substantial,” she and the U.C.L.A. law professor Jerry Kang have argued, “the law should be changed to comport with science” — relaxing, for example, the current focus on intentional discrimination and trying to root out unconscious bias in the workplace with “structural interventions,” which critics say may be tantamount to racial quotas. One legal scholar has cited Phelps’s work to argue for the elimination of peremptory challenges to prospective jurors — if most whites are unconsciously racist, the argument goes, then any decision to strike a black juror must be infected with racism. Much to her displeasure, Phelps’s work has been cited by a journalist to suggest that a white cop who accidentally shot a black teenager on a Brooklyn rooftop in 2004 must have been responding to a hard-wired fear of unfamiliar black faces — a version of the amygdala made me do it.

Phelps herself says it’s “crazy” to link her work to cops who shoot on the job and insists that it is too early to use her research in the courtroom. “Part of my discomfort is that we haven’t linked what we see in the amygdala or any other region of the brain with an activity outside the magnet that we would call racism,” she told me. “We have no evidence whatsoever that activity in the brain is more predictive of things we care about in the courtroom than the behaviors themselves that we correlate with brain function.” In other words, just because you have a biased reaction to a photograph doesn’t mean you’ll act on those biases in the workplace. Phelps is also concerned that jurors might be unduly influenced by attention-grabbing pictures of brain scans. “Frank Keil, a psychologist at Yale, has done research suggesting that when you have a picture of a mechanism, you have a tendency to overestimate how much you understand the mechanism,” she told me. Defense lawyers confirm this phenomenon. “Here was this nice color image we could enlarge, that the medical expert could point to,” Christopher Plourd, a San Diego criminal defense lawyer, told The Los Angeles Times in the early 1990s. “It documented that this guy had a rotten spot in his brain. The jury glommed onto that.”

Other scholars are even sharper critics of efforts to use scientific experiments about unconscious bias to transform the law. “I regard that as an extraordinary claim that you could screen potential jurors or judges for bias; it’s mind-boggling,” I was told by Philip Tetlock, a professor at the Haas School of Business at the University of California at Berkeley. Tetlock has argued that split-second associations between images of African-Americans and negative adjectives may reflect “simple awareness of the social reality” that “some groups are more disadvantaged than others.” He has also written that, according to psychologists, “there is virtually no published research showing a systematic link between racist attitudes, overt or subconscious, and real-world discrimination.” (A few studies show, Tetlock acknowledges, that openly biased white people sometimes sit closer to whites than blacks in experiments that simulate job hiring and promotion.) “A light bulb going off in your brain means nothing unless it’s correlated with a particular output, and the brain-scan stuff, heaven help us, we have barely linked that with anything,” agrees Tetlock’s co-author, Amy Wax of the University of Pennsylvania Law School. “The claim that homeless people light up your amygdala more and your frontal cortex less and we can infer that you will systematically dehumanize homeless people — that’s piffle.”

V. Are You Responsible for What You Might Do? The attempt to link unconscious bias to actual acts of discrimination may be dubious. But are there other ways to look inside the brain and make predictions about an individual’s future behavior? And if so, should those discoveries be employed to make us safer? Efforts to use science to predict criminal behavior have a disreputable history. In the 19th century, the Italian criminologist Cesare Lombroso championed a theory of “biological criminality,” which held that criminals could be identified by physical characteristics, like large jaws or bushy eyebrows. Nevertheless, neuroscientists are trying to find the factors in the brain associated with violence. PET scans of convicted murderers were first studied in the late 1980s by Adrian Raine, a professor of psychology at the University of Southern California; he found that their prefrontal cortexes, areas associated with inhibition, had reduced glucose metabolism and suggested that this might be responsible for their violent behavior. In a later study, Raine found that subjects who received a diagnosis of antisocial personality disorder, which correlates with violent behavior, had 11 percent less gray matter in their prefrontal cortexes than control groups of healthy subjects and substance abusers. His current research uses f.M.R.I.’s to study moral decision-making in psychopaths.

Neuroscience, it seems, points two ways: it can absolve individuals of responsibility for acts they’ve committed, but it can also place individuals in jeopardy for acts they haven’t committed — but might someday. “This opens up a Pandora’s box in civilized society that I’m willing to fight against,” says Helen S. Mayberg, a professor of psychiatry, behavioral sciences and neurology at Emory University School of Medicine, who has testified against the admission of neuroscience evidence in criminal trials. “If you believe at the time of trial that the picture informs us about what they were like at the time of the crime, then the picture moves forward. You need to be prepared for: ‘This spot is a sign of future dangerousness,’ when someone is up for parole. They have a scan, the spot is there, so they don’t get out. It’s carved in your brain.”

Other scholars see little wrong with using brain scans to predict violent tendencies and sexual predilections — as long as the scans are used within limits. “It’s not necessarily the case that if predictions work, you would say take that guy off the street and throw away the key,” says Hank Greely, the Stanford law professor. “You could require counseling, surveillance, G.P.S. transmitters or warning the neighbors. None of these are necessarily benign, but they beat the heck out of preventative detention.” Greely has little doubt that predictive technologies will be enlisted in the war on terror — perhaps in radical ways. “Even with today’s knowledge, I think we can tell whether someone has a strong emotional reaction to seeing things, and I can certainly imagine a friend-versus-foe scanner. If you put everyone who reacts badly to an American flag in a concentration camp or Guantánamo, that would be bad, but in an occupation situation, to mark someone down for further surveillance, that might be appropriate.”

Paul Root Wolpe, who teaches social psychiatry and psychiatric ethics at the University of Pennsylvania School of Medicine, says he anticipates that neuroscience predictions will move beyond the courtroom and will be used to make predictions about citizens in all walks of life.

“Will we use brain imaging to track kids in school because we’ve discovered that certain brain function or morphology suggests aptitude?” he asks. “I work for NASA, and imagine how helpful it might be for NASA if it could scan your brain to discover whether you have a good enough spatial sense to be a pilot.” Wolpe says that brain imaging might eventually be used to decide if someone is a worthy foster or adoptive parent — a history of major depression and cocaine abuse can leave telltale signs on the brain, for example, and future studies might find parts of the brain that correspond to nurturing and caring.

The idea of holding people accountable for their predispositions rather than their actions poses a challenge to one of the central principles of Anglo-American jurisprudence: namely, that people are responsible for their behavior, not their proclivities — for what they do, not what they think. “We’re going to have to make a decision about the skull as a privacy domain,” Wolpe says. Indeed, Wolpe serves on the board of an organization called the Center for Cognitive Liberty and Ethics, a group of neuroscientists, legal scholars and privacy advocates “dedicated to protecting and advancing freedom of thought in the modern world of accelerating neurotechnologies.”

There may be similar “cognitive liberty” battles over efforts to repair or enhance broken brains. A remarkable technique called transcranial magnetic stimulation, for example, has been used to stimulate or inhibit specific regions of the brain. It can temporarily alter how we think and feel. Using T.M.S., Ernst Fehr and Daria Knoch of the University of Zurich temporarily disrupted each side of the dorsolateral prefrontal cortex in test subjects. They asked their subjects to participate in an experiment that economists call the ultimatum game. One person is given $20 and told to divide it with a partner. If the partner rejects the proposed amount as too low, neither person gets any money. Subjects whose prefrontal cortexes were functioning properly tended to reject offers of $4 or less: they would rather get no money than accept an offer that struck them as insulting and unfair. But subjects whose right prefrontal cortexes were suppressed by T.M.S. tended to accept the $4 offer. Although the offer still struck them as insulting, they were able to suppress their indignation and to pursue the selfishly rational conclusion that a low offer is better than nothing.
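One way to picture the result is as a change in the responder’s decision rule. The toy sketch below is not the researchers’ model; the dollar figures and the “fairness threshold” are illustrative stand-ins for the indignation that an intact right prefrontal cortex apparently lets subjects act on.

```python
# Toy model of the responder's choice in the ultimatum game described above.
# The threshold value is illustrative, not taken from the study.

def responder_accepts(offer, total=20, fairness_threshold=0.25):
    """Accept unless the offer falls below the responder's notion of a fair share."""
    return offer >= fairness_threshold * total

offer = 4  # proposer keeps $16 and offers $4

# Intact prefrontal function: an insultingly low offer is rejected,
# even though rejection leaves the responder with nothing.
print(responder_accepts(offer, fairness_threshold=0.25))  # False -> reject

# With the right prefrontal cortex suppressed by T.M.S., subjects behaved
# as if the threshold had dropped: "something beats nothing" wins out.
print(responder_accepts(offer, fairness_threshold=0.0))   # True -> accept
```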

Some neuroscientists believe that T.M.S. may be used in the future to enforce a vision of therapeutic justice, based on the idea that defective brains can be cured. “Maybe somewhere down the line, a badly damaged brain would be viewed as something that can heal, like a broken leg that needs to be repaired,” the neurobiologist Robert Sapolsky says, although he acknowledges that defining what counts as a normal brain is politically and scientifically fraught. Indeed, efforts to identify normal and abnormal brains have been responsible for some of the darkest movements in the history of science and technology, from phrenology to eugenics. “How far are we willing to go to use neurotechnology to change people’s brains we consider disordered?” Wolpe asks. “We might find a part of the brain that seems to be malfunctioning, like a discrete part of the brain operative in violent or sexually predatory behavior, and then turn off or inhibit that behavior using transcranial magnetic stimulation.” Even behaviors in the normal range might be fine-tuned by T.M.S.: jurors, for example, could be made more emotional or more deliberative with magnetic interventions. Mark George, an adviser to the Cephos company and also director of the Medical University of South Carolina Center for Advanced Imaging Research, has submitted a patent application for a T.M.S. procedure that supposedly suppresses the area of the brain involved in lying and makes a person less capable of not telling the truth.

As the new technologies proliferate, even the neurolaw experts themselves have only begun to think about the questions that lie ahead. Can the police get a search warrant for someone’s brain? Should the Fourth Amendment protect our minds in the same way that it protects our houses? Can courts order tests of suspects’ memories to determine whether they are gang members or police informers, or would this violate the Fifth Amendment’s ban on compulsory self-incrimination? Would punishing people for their thoughts rather than for their actions violate the Eighth Amendment’s ban on cruel and unusual punishment? However astonishing our machines may become, they cannot tell us how to answer these perplexing questions. We must instead look to our own powers of reasoning and intuition, relatively primitive as they may be. As Stephen Morse puts it, neuroscience itself can never identify the mysterious point at which people should be excused from responsibility for their actions because they are not able, in some sense, to control themselves. That question, he suggests, is “moral and ultimately legal,” and it must be answered not in laboratories but in courtrooms and legislatures. In other words, we must answer it ourselves.

Jeffrey Rosen, a frequent contributor, is the author most recently of “The Supreme Court: The Personalities and Rivalries That Defined America.”

Copyright 2007 The New York Times Company

This Recovery

The Expansion Continues

By MICHAEL DARDA March 13, 2007; Page A23

The latest jobs report is further evidence that the doomsayers aren't right about the state of the U.S. economy. There were 97,000 new jobs last month and the unemployment rate was 4.5%. Moreover, the job-growth numbers for December and January were revised upward by 55,000.

No doubt the pessimists will home in on the declines in the number of construction jobs and hours worked during the last month. But these appear largely to be temporary, weather-related blemishes. The services sector, which accounts for 84% of total U.S. employment, continues to add jobs at a pace of about 170,000 per month -- exactly the average of the last 12 months.

As always, prosperity has its discontents. With profits and productivity strongly outperforming compensation for most of this cycle, many of the usual suspects have emerged to call for higher tax rates on upper earners, caps on executive pay and other "equalizing" measures. But history shows that profit and productivity cycles always revert to the mean, with compensation playing catch up as the demand for labor rises and unemployment rates fall.

How has the average worker fared during the current expansion compared to the 1990s boom? Real wage growth for non-supervisory workers has averaged 0.6% per year for the last 21 quarters, twice the pace of the same period during the 1991-2000 expansion. The broader measure of real labor compensation per hour has advanced at 1.4% per year during the last 21 quarters of consecutive growth, compared with a 0.6% annual average during the last recovery cycle.

The main explanation for the strong performance of wages is an unemployment rate that has averaged 5.4% during this expansion, compared to 6.4% during the comparable period of the previous cycle.

Some argue that the problems in the sub-prime mortgage market and manufacturing are only the beginning of a much broader and more intense slowdown that is likely to undermine the labor market and stifle consumption. While the problems in the sub-prime mortgage market shouldn't be taken lightly, mortgage resets are estimated at $10 billion to $15 billion during the next two years -- about 0.1% of nominal GDP and 0.02% of aggregate household net worth. Sub-prime borrowers are mostly households in the bottom 20% of the income distribution, which accounts for only about 8% of total consumption spending.
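To see the scale Darda is describing, here is a minimal back-of-envelope sketch in Python. The GDP and household net-worth denominators are rough assumptions supplied for illustration (roughly $13.4 trillion and $55 trillion for 2006); they are not figures from the column.

    # Back-of-envelope check of the reset figures (illustrative only).
    resets = 12.5e9          # midpoint of the $10-$15 billion reset estimate
    nominal_gdp = 13.4e12    # assumed 2006 nominal GDP
    net_worth = 55e12        # assumed aggregate household net worth

    print(f"Resets as a share of GDP:       {resets / nominal_gdp:.2%}")   # about 0.09%
    print(f"Resets as a share of net worth: {resets / net_worth:.3%}")     # about 0.023%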

With broad measures of commercial bank credit and household deposits expanding at a 9.8% and 10.3% annual pace, respectively, and global foreign exchange reserves expanding at a 17% year-to-year pace, it hardly seems that a broad-based liquidity squeeze or credit crunch is on the horizon.

The Fed's 2007 forecast of about 2.75% real GDP growth and about 5.25% nominal growth continues to look modest against the backdrop of still-massive global liquidity, low real interest rates, low tax rates on capital and a booming global economy. It is a little-reported fact that exports added more to GDP growth last year than housing took away. If the drag from housing is removed, the U.S. economy expanded at nearly a 4% average annual rate in 2006. Even with the housing drag, full-year growth was slightly faster in 2006 than in 2005.
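The "remove the housing drag" claim is simple contribution arithmetic: subtract the (negative) contribution of residential investment from headline growth. A minimal sketch, with placeholder numbers that are assumptions rather than figures from the column:

    # Illustrative only: headline growth minus an assumed housing drag.
    headline_growth = 3.3          # assumed 2006 average real GDP growth, in percent
    housing_contribution = -0.6    # assumed percentage-point drag from residential investment

    ex_housing = headline_growth - housing_contribution
    print(f"Growth excluding the housing drag: {ex_housing:.1f}%")   # 3.9% with these inputs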

As the drags from housing and manufacturing abate sometime later in 2007, it is more likely than not that the economy will return to an above-trend growth rate powered by strong consumption (via a tight labor market), strong global growth (exports) and a pickup in capital spending (thanks to record profits and still-tight credit spreads).

Indeed, the current environment of excess liquidity and a super-tight labor market appears to be the first such combination since the 1960s, a period during which unemployment rates fell below 4%. The mid-1960s also witnessed an inverted yield curve, a sharp contraction in residential construction and a sharp, temporary slowdown in overall growth -- but no recession. Perhaps this was because financial conditions were moderate during this period, as they are today.

The unemployment rate dropped briefly below 4% at the tail end of the last economic expansion in 2000, but this period was characterized by excessive dollar strength, weak commodities, wide credit spreads and high real interest rates. Not surprisingly, profits were weak between 1997 and 2000 as tight financial conditions eviscerated corporate pricing power. In other words, there seem to be more differences than similarities between the current environment and the situation that existed at the end of the last cycle.

The Fed's assumption of about 2% core inflation for 2007 would appear to suggest that board members believe profit margins will compress significantly this year, as was the case during the late 1990s and early 2000s. The sanguine outlook for inflation seems to be justified by modest spreads in the inflation-linked bond market and "low and stable" survey expectations for inflation five years ahead.

The problem with the inflation-linked bond market and with various surveys of inflation expectations is that they tend to rise and fall with (or after) actual inflation, not before it. And to the extent that excess liquidity is holding interest rates below levels that would prevail under more normal conditions, the bond market may be giving policy makers a false sense of security about the inflation outlook.

With unit labor costs now rising at the fastest pace in six years against the backdrop of easy financial conditions, the risks appear to be skewed toward higher inflation rather than a weaker economy. Tight labor markets should continue to support consumption and growth, but corporate pricing power may also prove to be better than most analysts suspect.

In this scenario, non-energy inflation almost surely would fail to moderate as widely expected, and the batch of Fed rate cuts now priced into the bond market likely would go up in smoke once again.

Mr. Darda is chief economist for MKM Partners.

URL for this article: http://online.wsj.com/article/SB117375326105635045.html
Copyright 2007 Dow Jones & Company, Inc. All Rights Reserved

Monday, March 05, 2007

Persistence of Religion

March 4, 2007

Darwin’s God

God has always been a puzzle for Scott Atran. When he was 10 years old, he scrawled a plaintive message on the wall of his bedroom in Baltimore. “God exists,” he wrote in black and orange paint, “or if he doesn’t, we’re in trouble.” Atran has been struggling with questions about religion ever since — why he himself no longer believes in God and why so many other people, everywhere in the world, apparently do.

Call it God; call it superstition; call it, as Atran does, “belief in hope beyond reason” — whatever you call it, there seems an inherent human drive to believe in something transcendent, unfathomable and otherworldly, something beyond the reach or understanding of science. “Why do we cross our fingers during turbulence, even the most atheistic among us?” asked Atran when we spoke at his Upper West Side pied-à-terre in January. Atran, who is 55, is an anthropologist at the National Center for Scientific Research in Paris, with joint appointments at the University of Michigan and the John Jay College of Criminal Justice in New York. His research interests include cognitive science and evolutionary biology, and sometimes he presents students with a wooden box that he pretends is an African relic. “If you have negative sentiments toward religion,” he tells them, “the box will destroy whatever you put inside it.” Many of his students say they doubt the existence of God, but in this demonstration they act as if they believe in something. Put your pencil into the magic box, he tells them, and the nonbelievers do so blithely. Put in your driver’s license, he says, and most do, but only after significant hesitation. And when he tells them to put in their hands, few will.

If they don’t believe in God, what exactly are they afraid of?

Atran first conducted the magic-box demonstration in the 1980s, when he was at Cambridge University studying the nature of religious belief. He had received a doctorate in anthropology from Columbia University and, in the course of his fieldwork, saw evidence of religion everywhere he looked — at archaeological digs in Israel, among the Mayans in Guatemala, in artifact drawers at the American Museum of Natural History in New York. Atran is Darwinian in his approach, which means he tries to explain behavior by how it might once have solved problems of survival and reproduction for our early ancestors. But it was not clear to him what evolutionary problems might have been solved by religious belief. Religion seemed to use up physical and mental resources without an obvious benefit for survival. Why, he wondered, was religion so pervasive, when it was something that seemed so costly from an evolutionary point of view?

The magic-box demonstration helped set Atran on a career studying why humans might have evolved to be religious, something few people were doing back in the ’80s. Today, the effort has gained momentum, as scientists search for an evolutionary explanation for why belief in God exists — not whether God exists, which is a matter for philosophers and theologians, but why the belief does.

This is different from the scientific assault on religion that has been garnering attention recently, in the form of best-selling books from scientific atheists who see religion as a scourge. In “The God Delusion,” published last year and still on best-seller lists, the Oxford evolutionary biologist Richard Dawkins concludes that religion is nothing more than a useless, and sometimes dangerous, evolutionary accident. “Religious behavior may be a misfiring, an unfortunate byproduct of an underlying psychological propensity which in other circumstances is, or once was, useful,” Dawkins wrote. He is joined by two other best-selling authors — Sam Harris, who wrote “The End of Faith,” and Daniel Dennett, a philosopher at Tufts University who wrote “Breaking the Spell.” The three men differ in their personal styles and whether they are engaged in a battle against religiosity, but their names are often mentioned together. They have been portrayed as an unholy trinity of neo-atheists, promoting their secular world view with a fervor that seems almost evangelical.

Lost in the hullabaloo over the neo-atheists is a quieter and potentially more illuminating debate. It is taking place not between science and religion but within science itself, specifically among the scientists studying the evolution of religion. These scholars tend to agree on one point: that religious belief is an outgrowth of brain architecture that evolved during early human history. What they disagree about is why a tendency to believe evolved, whether it was because belief itself was adaptive or because it was just an evolutionary byproduct, a mere consequence of some other adaptation in the evolution of the human brain.

Which is the better biological explanation for a belief in God — evolutionary adaptation or neurological accident? Is there something about the cognitive functioning of humans that makes us receptive to belief in a supernatural deity? And if scientists are able to explain God, what then? Is explaining religion the same thing as explaining it away? Are the nonbelievers right, and is religion at its core an empty undertaking, a misdirection, a vestigial artifact of a primitive mind? Or are the believers right, and does the fact that we have the mental capacities for discerning God suggest that it was God who put them there?

In short, are we hard-wired to believe in God? And if we are, how and why did that happen?

“All of our raptures and our drynesses, our longings and pantings, our questions and beliefs . . . are equally organically founded,” William James wrote in “The Varieties of Religious Experience.” James, who taught philosophy and experimental psychology at Harvard for more than 30 years, based his book on a 1901 lecture series in which he took some early tentative steps at breaching the science-religion divide.

In the century that followed, a polite convention generally separated science and religion, at least in much of the Western world. Science, as the old trope had it, was assigned the territory that describes how the heavens go; religion, how to go to heaven.

Anthropologists like Atran and psychologists as far back as James had been looking at the roots of religion, but the mutual hands-off policy really began to shift in the 1990s. Religion made incursions into the traditional domain of science with attempts to bring intelligent design into the biology classroom and to choke off human embryonic stem-cell research on religious grounds. Scientists responded with counterincursions. Experts from the hard sciences, like evolutionary biology and cognitive neuroscience, joined anthropologists and psychologists in the study of religion, making God an object of scientific inquiry.

The debate over why belief evolved is between byproduct theorists and adaptationists. You might think that the byproduct theorists would tend to be nonbelievers, looking for a way to explain religion as a fluke, while the adaptationists would be more likely to be believers who can intuit the emotional, spiritual and community advantages that accompany faith. Or you might think they would all be atheists, because what believer would want to subject his own devotion to rationalism’s cold, hard scrutiny? But a scientist’s personal religious view does not always predict which side he will take. And this is just one sign of how complex and surprising this debate has become.

Angels, demons, spirits, wizards, gods and witches have peppered folk religions since mankind first started telling stories. Charles Darwin noted this in “The Descent of Man.” “A belief in all-pervading spiritual agencies,” he wrote, “seems to be universal.” According to anthropologists, religions that share certain supernatural features — belief in a noncorporeal God or gods, belief in the afterlife, belief in the ability of prayer or ritual to change the course of human events — are found in virtually every culture on earth.

This is certainly true in the United States. About 6 in 10 Americans, according to a 2005 Harris Poll, believe in the devil and hell, and about 7 in 10 believe in angels, heaven and the existence of miracles and of life after death. A 2006 survey at Baylor University found that 92 percent of respondents believe in a personal God — that is, a God with a distinct set of character traits ranging from “distant” to “benevolent.”

When a trait is universal, evolutionary biologists look for a genetic explanation and wonder how that gene or genes might enhance survival or reproductive success. In many ways, it’s an exercise in post-hoc hypothesizing: what would have been the advantage, when the human species first evolved, for an individual who happened to have a mutation that led to, say, a smaller jaw, a bigger forehead, a better thumb? How about certain behavioral traits, like a tendency for risk-taking or for kindness?

Atran saw such questions as a puzzle when applied to religion. So many aspects of religious belief involve misattribution and misunderstanding of the real world. Wouldn’t this be a liability in the survival-of-the-fittest competition? To Atran, religious belief requires taking “what is materially false to be true” and “what is materially true to be false.” One example of this is the belief that even after someone dies and the body demonstrably disintegrates, that person will still exist, will still be able to laugh and cry, to feel pain and joy. This confusion “does not appear to be a reasonable evolutionary strategy,” Atran wrote in “In Gods We Trust: The Evolutionary Landscape of Religion” in 2002. “Imagine another animal that took injury for health or big for small or fast for slow or dead for alive. It’s unlikely that such a species could survive.” He began to look for a sideways explanation: if religious belief was not adaptive, perhaps it was associated with something else that was.

Atran intended to study mathematics when he entered Columbia as a precocious 17-year-old. But he was distracted by the radical politics of the late ’60s. One day in his freshman year, he found himself at an antiwar rally listening to Margaret Mead, then perhaps the most famous anthropologist in America. Atran, dressed in a flamboyant Uncle Sam suit, stood up and called her a sellout for saying the protesters should be writing to their congressmen instead of staging demonstrations. “Young man,” the unflappable Mead said, “why don’t you come see me in my office?”

Atran, equally unflappable, did go to see her — and ended up working for Mead, spending much of his time exploring the cabinets of curiosities in her tower office at the American Museum of Natural History. Soon he switched his major to anthropology.

Many of the museum specimens were religious, Atran says. So were the artifacts he dug up on archaeological excursions in Israel in the early ’70s. Wherever he turned, he encountered the passion of religious belief. Why, he wondered, did people work so hard against their preference for logical explanations to maintain two views of the world, the real and the unreal, the intuitive and the counterintuitive?

Maybe cognitive effort was precisely the point. Maybe it took less mental work than Atran realized to hold belief in God in one’s mind. Maybe, in fact, belief was the default position for the human mind, something that took no cognitive effort at all.

While still an undergraduate, Atran decided to explore these questions by organizing a conference on universal aspects of culture and inviting all his intellectual heroes: the linguist Noam Chomsky, the psychologist Jean Piaget, the anthropologists Claude Lévi-Strauss and Gregory Bateson (who was also Margaret Mead’s ex-husband), the Nobel Prize-winning biologists Jacques Monod and François Jacob. It was 1974, and the only site he could find for the conference was just outside Paris. Atran was a scraggly 22-year-old with a guitar who had learned his French from comic books. To his astonishment, everyone he invited agreed to come.

Atran is a sociable man with sharp hazel eyes, who sparks provocative conversations the way other men pick bar fights. As he traveled in the ’70s and ’80s, he accumulated friends who were thinking about the issues he was: how culture is transmitted among human groups and what evolutionary function it might serve. “I started looking at history, and I wondered why no society ever survived more than three generations without a religious foundation as its raison d’être,” he says. Soon he turned to an emerging subset of evolutionary theory — the evolution of human cognition.

Some cognitive scientists think of brain functioning in terms of modules, a series of interconnected machines, each one responsible for a particular mental trick. They do not tend to talk about a God module per se; they usually consider belief in God a consequence of other mental modules.

Religion, in this view, is “a family of cognitive phenomena that involves the extraordinary use of everyday cognitive processes,” Atran wrote in “In Gods We Trust.” “Religions do not exist apart from the individual minds that constitute them and the environments that constrain them, any more than biological species and varieties exist independently of the individual organisms that compose them and the environments that conform them.”

At around the time “In Gods We Trust” appeared five years ago, a handful of other scientists — Pascal Boyer, now at Washington University; Justin Barrett, now at Oxford; Paul Bloom at Yale — were addressing these same questions. In synchrony they were moving toward the byproduct theory.

Darwinians who study physical evolution distinguish between traits that are themselves adaptive, like having blood cells that can transport oxygen, and traits that are byproducts of adaptations, like the redness of blood. There is no survival advantage to blood’s being red instead of turquoise; it is just a byproduct of the trait that is adaptive, having blood that contains hemoglobin.

Something similar explains aspects of brain evolution, too, say the byproduct theorists. Which brings us to the idea of the spandrel.

Stephen Jay Gould, the famed evolutionary biologist at Harvard who died in 2002, and his colleague Richard Lewontin proposed “spandrel” to describe a trait that has no adaptive value of its own. They borrowed the term from architecture, where it originally referred to the V-shaped structure formed between two rounded arches. The structure is not there for any purpose; it is there because that is what happens when arches align.

In architecture, a spandrel can be neutral or it can be made functional. Building a staircase, for instance, creates a space underneath that is innocuous, just a blank sort of triangle. But if you put a closet there, the under-stairs space takes on a function, unrelated to the staircase’s but useful nonetheless. Either way, functional or nonfunctional, the space under the stairs is a spandrel, an unintended byproduct.

“Natural selection made the human brain big,” Gould wrote, “but most of our mental properties and potentials may be spandrels — that is, nonadaptive side consequences of building a device with such structural complexity.”

The possibility that God could be a spandrel offered Atran a new way of understanding the evolution of religion. But a spandrel of what, exactly?

Hardships of early human life favored the evolution of certain cognitive tools, among them the ability to infer the presence of organisms that might do harm, to come up with causal narratives for natural events and to recognize that other people have minds of their own with their own beliefs, desires and intentions. Psychologists call these tools, respectively, agent detection, causal reasoning and theory of mind.

Agent detection evolved because assuming the presence of an agent — which is jargon for any creature with volitional, independent behavior — is more adaptive than assuming its absence. If you are a caveman on the savannah, you are better off presuming that the motion you detect out of the corner of your eye is an agent and something to run from, even if you are wrong. If it turns out to have been just the rustling of leaves, you are still alive; if what you took to be leaves rustling was really a hyena about to pounce, you are dead.

A classic experiment from the 1940s by the psychologists Fritz Heider and Marianne Simmel suggested that imputing agency is so automatic that people may do it even for geometric shapes. For the experiment, subjects watched a film of triangles and circles moving around. When asked what they had been watching, the subjects used words like “chase” and “capture.” They did not just see the random movement of shapes on a screen; they saw pursuit, planning, escape.

So if there is motion just out of our line of sight, we presume it is caused by an agent, an animal or person with the ability to move independently. This usually operates in one direction only; lots of people mistake a rock for a bear, but almost no one mistakes a bear for a rock.

What does this mean for belief in the supernatural? It means our brains are primed for it, ready to presume the presence of agents even when such presence confounds logic. “The most central concepts in religions are related to agents,” Justin Barrett, a psychologist, wrote in his 2004 summary of the byproduct theory, “Why Would Anyone Believe in God?” Religious agents are often supernatural, he wrote, “people with superpowers, statues that can answer requests or disembodied minds that can act on us and the world.”

A second mental module that primes us for religion is causal reasoning. The human brain has evolved the capacity to impose a narrative, complete with chronology and cause-and-effect logic, on whatever it encounters, no matter how apparently random. “We automatically, and often unconsciously, look for an explanation of why things happen to us,” Barrett wrote, “and ‘stuff just happens’ is no explanation. Gods, by virtue of their strange physical properties and their mysterious superpowers, make fine candidates for causes of many of these unusual events.” The ancient Greeks believed thunder was the sound of Zeus’s thunderbolt. Similarly, a contemporary woman whose cancer treatment works despite 10-to-1 odds might look for a story to explain her survival. It fits better with her causal-reasoning tool for her recovery to be a miracle, or a reward for prayer, than for it to be just a lucky roll of the dice.

A third cognitive trick is a kind of social intuition known as theory of mind. It’s an odd phrase for something so automatic, since the word “theory” suggests formality and self-consciousness. Other terms have been used for the same concept, like intentional stance and social cognition. One good alternative is the term Atran uses: folkpsychology.

Folkpsychology, as Atran and his colleagues see it, is essential to getting along in the contemporary world, just as it has been since prehistoric times. It allows us to anticipate the actions of others and to lead others to believe what we want them to believe; it is at the heart of everything from marriage to office politics to poker. People without this trait, like those with severe autism, are impaired, unable to imagine themselves in other people’s heads.

The process begins with positing the existence of minds, our own and others’, that we cannot see or feel. This leaves us open, almost instinctively, to belief in the separation of the body (the visible) and the mind (the invisible). If you can posit minds in other people that you cannot verify empirically, suggests Paul Bloom, a psychologist and the author of “Descartes’ Baby,” published in 2004, it is a short step to positing minds that do not have to be anchored to a body. And from there, he said, it is another short step to positing an immaterial soul and a transcendent God.

The traditional psychological view has been that until about age 4, children think that minds are permeable and that everyone knows whatever the child himself knows. To a young child, everyone is infallible. All other people, especially Mother and Father, are thought to have the same sort of insight as an all-knowing God.

But at a certain point in development, this changes. (Some new research suggests this might occur as early as 15 months.) The “false-belief test” is a classic experiment that highlights the boundary. Children watch a puppet show with a simple plot: John comes onstage holding a marble, puts it in Box A and walks off. Mary comes onstage, opens Box A, takes out the marble, puts it in Box B and walks off. John comes back onstage. The children are asked, Where will John look for the marble?

Very young children, or autistic children of any age, say John will look in Box B, since they know that’s where the marble is. But older children give a more sophisticated answer. They know that John never saw Mary move the marble and that as far as he is concerned it is still where he put it, in Box A. Older children have developed a theory of mind; they understand that other people sometimes have false beliefs. Even though they know that the marble is in Box B, they respond that John will look for it in Box A.

The adaptive advantage of folkpsychology is obvious. According to Atran, our ancestors needed it to survive their harsh environment, since folkpsychology allowed them to “rapidly and economically” distinguish good guys from bad guys. But how did folkpsychology — an understanding of ordinary people’s ordinary minds — allow for a belief in supernatural, omniscient minds? And if the byproduct theorists are right and these beliefs were of little use in finding food or leaving more offspring, why did they persist?

Atran ascribes the persistence to evolutionary misdirection, which, he says, happens all the time: “Evolution always produces something that works for what it works for, and then there’s no control for however else it’s used.” On a sunny weekday morning, over breakfast at a French cafe on upper Broadway, he tried to think of an analogy and grinned when he came up with an old standby: women’s breasts. Because they are associated with female hormones, he explained, full breasts indicate a woman is fertile, and the evolution of the male brain’s preference for them was a clever mating strategy. But breasts are now used for purposes unrelated to reproduction, to sell anything from deodorant to beer. “A Martian anthropologist might look at this and say, ‘Oh, yes, so these breasts must have somehow evolved to sell hygienic stuff or food to human beings,’ ” Atran said. But the Martian would, of course, be wrong. Equally wrong would be to make the same mistake about religion, thinking it must have evolved to make people behave a certain way or feel a certain allegiance.

That is what most fascinated Atran. “Why is God in there?” he wondered.

The idea of an infallible God is comfortable and familiar, something children readily accept. You can see this in the experiment Justin Barrett conducted recently — a version of the traditional false-belief test but with a religious twist. Barrett showed young children a box with a picture of crackers on the outside. What do you think is inside this box? he asked, and the children said, “Crackers.” Next he opened it and showed them that the box was filled with rocks. Then he asked two follow-up questions: What would your mother say is inside this box? And what would God say?

As earlier theory-of-mind experiments already showed, 3- and 4-year-olds tended to think Mother was infallible, and since the children knew the right answer, they assumed she would know it, too. They usually responded that Mother would say the box contained rocks. But 5- and 6-year-olds had learned that Mother, like any other person, could hold a false belief in her mind, and they tended to respond that she would be fooled by the packaging and would say, “Crackers.”

And what would God say? No matter what their age, the children, who were all Protestants, told Barrett that God would answer, “Rocks.” This was true even for the older children, who, as Barrett understood it, had developed folkpsychology and had used it when predicting a wrong response for Mother. They had learned that, in certain situations, people could be fooled — but they had also learned that there is no fooling God.

The bottom line, according to byproduct theorists, is that children are born with a tendency to believe in omniscience, invisible minds, immaterial souls — and then they grow up in cultures that fill their minds, hard-wired for belief, with specifics. It is a little like language acquisition, Paul Bloom says, with the essential difference that language is a biological adaptation and religion, in his view, is not. We are born with an innate facility for language but the specific language we learn depends on the environment in which we are raised. In much the same way, he says, we are born with an innate tendency for belief, but the specifics of what we grow up believing — whether there is one God or many, whether the soul goes to heaven or occupies another animal after death — are culturally shaped.

Whatever the specifics, certain beliefs can be found in all religions. Those that prevail, according to the byproduct theorists, are those that fit most comfortably with our mental architecture. Psychologists have shown, for instance, that people attend to, and remember, things that are unfamiliar and strange, but not so strange as to be impossible to assimilate. Ideas about God or other supernatural agents tend to fit these criteria. They are what Pascal Boyer, an anthropologist and psychologist, called “minimally counterintuitive”: weird enough to get your attention and lodge in your memory but not so weird that you reject them altogether. A tree that talks is minimally counterintuitive, and you might believe it as a supernatural agent. A tree that talks and flies and time-travels is maximally counterintuitive, and you are more likely to reject it.

Atran, along with Ara Norenzayan of the University of British Columbia, studied the idea of minimally counterintuitive agents earlier this decade. They presented college students with lists of fantastical creatures and asked them to choose the ones that seemed most “religious.” The convincingly religious agents, the students said, were not the most outlandish — not the turtle that chatters and climbs or the squealing, flowering marble — but those that were just outlandish enough: giggling seaweed, a sobbing oak, a talking horse. Giggling seaweed meets the requirement of being minimally counterintuitive, Atran wrote. So does a God who has a human personality except that he knows everything or a God who has a mind but has no body.

It is not enough for an agent to be minimally counterintuitive for it to earn a spot in people’s belief systems. An emotional component is often needed, too, if belief is to take hold. “If your emotions are involved, then that’s the time when you’re most likely to believe whatever the religion tells you to believe,” Atran says. Religions stir up emotions through their rituals — swaying, singing, bowing in unison during group prayer, sometimes working people up to a state of physical arousal that can border on frenzy. And religions gain strength during the natural heightening of emotions that occurs in times of personal crisis, when the faithful often turn to shamans or priests. The most intense personal crisis, for which religion can offer powerfully comforting answers, is when someone comes face to face with mortality.

In John Updike’s celebrated early short story “Pigeon Feathers,” 14-year-old David spends a lot of time thinking about death. He suspects that adults are lying when they say his spirit will live on after he dies. He keeps catching them in inconsistencies when he asks where exactly his soul will spend eternity. “Don’t you see,” he cries to his mother, “if when we die there’s nothing, all your sun and fields and what not are all, ah, horror? It’s just an ocean of horror.”

The story ends with David’s tiny revelation and his boundless relief. The boy gets a gun for his 15th birthday, which he uses to shoot down some pigeons that have been nesting in his grandmother’s barn. Before he buries them, he studies the dead birds’ feathers. He is amazed by their swirls of color, “designs executed, it seemed, in a controlled rapture.” And suddenly the fears that have plagued him are lifted, and with a “slipping sensation along his nerves that seemed to give the air hands, he was robed in this certainty: that the God who had lavished such craft upon these worthless birds would not destroy His whole Creation by refusing to let David live forever.”

Fear of death is an undercurrent of belief. The spirits of dead ancestors, ghosts, immortal deities, heaven and hell, the everlasting soul: the notion of spiritual existence after death is at the heart of almost every religion. According to some adaptationists, this is part of religion’s role, to help humans deal with the grim certainty of death. Believing in God and the afterlife, they say, is how we make sense of the brevity of our time on earth, how we give meaning to this brutish and short existence. Religion can offer solace to the bereaved and comfort to the frightened.

But the spandrelists counter that saying these beliefs are consolation does not mean they offered an adaptive advantage to our ancestors. “The human mind does not produce adequate comforting delusions against all situations of stress or fear,” wrote Pascal Boyer, a leading byproduct theorist, in “Religion Explained,” which came out a year before Atran’s book. “Indeed, any organism that was prone to such delusions would not survive long.”

Whether or not it is adaptive, belief in the afterlife gains power in two ways: from the intensity with which people wish it to be true and from the confirmation it seems to get from the real world. This brings us back to folkpsychology. We try to make sense of other people partly by imagining what it is like to be them, an adaptive trait that allowed our ancestors to outwit potential enemies. But when we think about being dead, we run into a cognitive wall. How can we possibly think about not thinking? “Try to fill your consciousness with the representation of no-consciousness, and you will see the impossibility of it,” the Spanish philosopher Miguel de Unamuno wrote in “Tragic Sense of Life.” “The effort to comprehend it causes the most tormenting dizziness. We cannot conceive of ourselves as not existing.”

Much easier, then, to imagine that the thinking somehow continues. This is what young children seem to do, as a study at Florida Atlantic University demonstrated a few years ago. Jesse Bering and David Bjorklund, the psychologists who conducted the study, used finger puppets to act out the story of a mouse, hungry and lost, who is spotted by an alligator. “Well, it looks like Brown Mouse got eaten by Mr. Alligator,” the narrator says at the end. “Brown Mouse is not alive anymore.”

Afterward, Bering and Bjorklund asked their subjects, ages 4 to 12, what it meant for Brown Mouse to be “not alive anymore.” Is he still hungry? Is he still sleepy? Does he still want to go home? Most said the mouse no longer needed to eat or drink. But a large proportion, especially the younger ones, said that he still had thoughts, still loved his mother and still liked cheese. The children understood what it meant for the mouse’s body to cease to function, but many believed that something about the mouse was still alive.

“Our psychological architecture makes us think in particular ways,” says Bering, now at Queen’s University in Belfast, Northern Ireland. “In this study, it seems, the reason afterlife beliefs are so prevalent is that underlying them is our inability to simulate our nonexistence.”

It might be just as impossible to simulate the nonexistence of loved ones. A large part of any relationship takes place in our minds, Bering said, so it’s natural for it to continue much as before after the other person’s death. It is easy to forget that your sister is dead when you reach for the phone to call her, since your relationship was based so much on memory and imagined conversations even when she was alive. In addition, our agent-detection device sometimes confirms the sensation that the dead are still with us. The wind brushes our cheek, a spectral shape somehow looks familiar and our agent detection goes into overdrive. Dreams, too, have a way of confirming belief in the afterlife, with dead relatives appearing in dreams as if from beyond the grave, seeming very much alive.

Belief is our fallback position, according to Bering; it is our reflexive style of thought. “We have a basic psychological capacity that allows anyone to reason about unexpected natural events, to see deeper meaning where there is none,” he says. “It’s natural; it’s how our minds work.”

Intriguing as the spandrel logic might be, there is another way to think about the evolution of religion: that religion evolved because it offered survival advantages to our distant ancestors. This is where the action is in the science of God debate, with a coterie of adaptationists arguing on behalf of the primary benefits, in terms of survival advantages, of religious belief.

The trick in thinking about adaptation is that even if a trait offers no survival advantage today, it might have had one long ago. This is how Darwinians explain how certain physical characteristics persist even if they do not currently seem adaptive — by asking whether they might have helped our distant ancestors form social groups, feed themselves, find suitable mates or keep from getting killed. A facility for storing calories as fat, for instance, which is a detriment in today’s food-rich society, probably helped our ancestors survive cyclical famines.

So trying to explain the adaptiveness of religion means looking for how it might have helped early humans survive and reproduce. As some adaptationists see it, this could have worked on two levels, individual and group. Religion made people feel better, less tormented by thoughts about death, more focused on the future, more willing to take care of themselves. As William James put it, religion filled people with “a new zest which adds itself like a gift to life . . . an assurance of safety and a temper of peace and, in relation to others, a preponderance of loving affections.”

Such sentiments, some adaptationists say, made the faithful better at finding and storing food, for instance, and helped them attract better mates because of their reputations for morality, obedience and sober living. The advantage might have worked at the group level too, with religious groups outlasting others because they were more cohesive, more likely to contain individuals willing to make sacrifices for the group and more adept at sharing resources and preparing for warfare.

One of the most vocal adaptationists is David Sloan Wilson, an occasional thorn in the side of both Scott Atran and Richard Dawkins. Wilson, an evolutionary biologist at the State University of New York at Binghamton, focuses much of his argument at the group level. “Organisms are a product of natural selection,” he wrote in “Darwin’s Cathedral: Evolution, Religion, and the Nature of Society,” which came out in 2002, the same year as Atran’s book, and staked out the adaptationist view. “Through countless generations of variation and selection, [organisms] acquire properties that enable them to survive and reproduce in their environments. My purpose is to see if human groups in general, and religious groups in particular, qualify as organismic in this sense.”

Wilson’s father was Sloan Wilson, author of “The Man in the Gray Flannel Suit,” an emblem of mid-’50s suburban anomie that was turned into a film starring Gregory Peck. Sloan Wilson became a celebrity, with young women asking for his autograph, especially after his next novel, “A Summer Place,” became another blockbuster movie. The son grew up wanting to do something to make his famous father proud.

“I knew I couldn’t be a novelist,” said Wilson, who crackled with intensity during a telephone interview, “so I chose something as far as possible from literature — I chose science.” He is disarmingly honest about what motivated him: “I was very ambitious, and I wanted to make a mark.” He chose to study human evolution, he said, in part because he had some of his father’s literary leanings and the field required a novelist’s attention to human motivations, struggles and alliances — as well as a novelist’s flair for narrative.

Wilson eventually chose to study religion not because religion mattered to him personally — he was raised in a secular Protestant household and says he has long been an atheist — but because it was a lens through which to look at and revivify a branch of evolution that had fallen into disrepute. When Wilson was a graduate student at Michigan State University in the 1970s, Darwinians were critical of group selection, the idea that human groups can function as single organisms the way beehives or anthills do. So he decided to become the man who rescued this discredited idea. “I thought, Wow, defending group selection — now, that would be big,” he recalled. It wasn’t until the 1990s, he said, that he realized that “religion offered an opportunity to show that group selection was right after all.”

Dawkins once called Wilson’s defense of group selection “sheer, wanton, head-in-bag perversity.” Atran, too, has been dismissive of this approach, calling it “mind blind” for essentially ignoring the role of the brain’s mental machinery. The adaptationists “cannot in principle distinguish Marxism from monotheism, ideology from religious belief,” Atran wrote. “They cannot explain why people can be more steadfast in their commitment to admittedly counterfactual and counterintuitive beliefs — that Mary is both a mother and a virgin, and God is sentient but bodiless — than to the most politically, economically or scientifically persuasive account of the way things are or should be.”

Still, for all its controversial elements, the narrative Wilson devised about group selection and the evolution of religion is clear, perhaps a legacy of his novelist father. Begin, he says, with an imaginary flock of birds. Some birds serve as sentries, scanning the horizon for predators and calling out warnings. Having a sentry is good for the group but bad for the sentry, which is doubly harmed: by keeping watch, the sentry has less time to gather food, and by issuing a warning call, it is more likely to be spotted by the predator. So in the Darwinian struggle, the birds most likely to pass on their genes are the nonsentries. How, then, could the sentry gene survive for more than a generation or two?

To explain how a self-sacrificing gene can persist, Wilson looks to the level of the group. If there are 10 sentries in one group and none in the other, 3 or 4 of the sentries might be sacrificed. But the flock with sentries will probably outlast the flock that has no early-warning system, so the other 6 or 7 sentries will survive to pass on the genes. In other words, if the whole-group advantage outweighs the cost to any individual bird of being a sentry, then the sentry gene will prevail.
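Wilson’s sentry arithmetic can be mirrored in a small toy simulation. The sketch below is not from “Darwin’s Cathedral”; the attack counts and probabilities are arbitrary assumptions chosen only to echo the numbers in the paragraph above, with a flock of 10 sentries and 90 other birds facing the same predators as a sentry-free flock of 100.

    import random

    random.seed(1)  # fixed seed so the toy run is reproducible

    def simulate(flock, n_attacks=12):
        """Subject a flock, given as {'sentry': k, 'other': m}, to repeated
        predator attacks. With a sentry on watch the attack usually fails,
        but the calling sentry is sometimes taken; with no sentry, the
        unwarned flock loses several birds per attack. All parameters are
        arbitrary assumptions for illustration."""
        flock = dict(flock)
        for _ in range(n_attacks):
            if flock['sentry'] > 0:
                if random.random() < 0.30:                       # individual cost: the caller is exposed
                    flock['sentry'] -= 1
                if random.random() < 0.20:                       # a warned flock still loses the odd bird
                    flock['other'] = max(0, flock['other'] - 1)
            else:
                flock['other'] = max(0, flock['other'] - 5)      # an unwarned flock is hit hard
        return flock

    print("Flock with 10 sentries:", simulate({'sentry': 10, 'other': 90}))
    print("Flock with no sentries:", simulate({'sentry': 0, 'other': 100}))

In a typical run, the flock with sentries loses three or four of its callers and a couple of other birds, while the sentry-free flock loses most of its members; summed over many such groups, the whole-group benefit of early warning outweighs the individual cost borne by the callers, which is the trade-off Wilson’s argument turns on.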

There are costs to any individual of being religious: the time and resources spent on rituals, the psychic energy devoted to following certain injunctions, the pain of some initiation rites. But in terms of intergroup struggle, according to Wilson, the costs can be outweighed by the benefits of being in a cohesive group that out-competes the others.

There is another element here too, unique to humans because it depends on language. A person’s behavior is observed not only by those in his immediate surroundings but also by anyone who can hear about it. There might be clear costs to taking on a role analogous to the sentry bird — a person who stands up to authority, for instance, risks losing his job, going to jail or getting beaten by the police — but in humans, these local costs might be outweighed by long-distance benefits. If a particular selfless trait enhances a person’s reputation, spread through the written and spoken word, it might give him an advantage in many of life’s challenges, like finding a mate. One way that reputation is enhanced is by being ostentatiously religious.

“The study of evolution is largely the study of trade-offs,” Wilson wrote in “Darwin’s Cathedral.” It might seem disadvantageous, in terms of foraging for sustenance and safety, for someone to favor religious over rationalistic explanations that would point to where the food and danger are. But in some circumstances, he wrote, “a symbolic belief system that departs from factual reality fares better.” For the individual, it might be more adaptive to have “highly sophisticated mental modules for acquiring factual knowledge and for building symbolic belief systems” than to have only one or the other, according to Wilson. For the group, it might be that a mixture of hardheaded realists and symbolically minded visionaries is most adaptive and that “what seems to be an adversarial relationship” between theists and atheists within a community is really a division of cognitive labor that “keeps social groups as a whole on an even keel.”

Even if Wilson is right that religion enhances group fitness, the question remains: Where does God come in? Why is a religious group any different from groups for which a fitness argument is never even offered — a group of fraternity brothers, say, or Yankees fans?

Richard Sosis, an anthropologist with positions at the University of Connecticut and Hebrew University of Jerusalem, has suggested a partial answer. Like many adaptationists, Sosis focuses on the way religion might be adaptive at the individual level. But even adaptations that help an individual survive can sometimes play themselves out through the group. Consider religious rituals.

“Religious and secular rituals can both promote cooperation,” Sosis wrote in American Scientist in 2004. But religious rituals “generate greater belief and commitment” because they depend on belief rather than on proof. The rituals are “beyond the possibility of examination,” he wrote, and a commitment to them is therefore emotional rather than logical — a commitment that is, in Sosis’s view, deeper and more long-lasting.

Rituals are a way of signaling a sincere commitment to the religion’s core beliefs, thereby earning loyalty from others in the group. “By donning several layers of clothing and standing out in the midday sun,” Sosis wrote, “ultraorthodox Jewish men are signaling to others: ‘Hey! Look, I’m a haredi’ — or extremely pious — ‘Jew. If you are also a member of this group, you can trust me because why else would I be dressed like this?’ ” These “signaling” rituals can grant the individual a sense of belonging and grant the group some freedom from constant and costly monitoring to ensure that their members are loyal and committed. The rituals are harsh enough to weed out the infidels, and both the group and the individual believers benefit.

In 2003, Sosis and Bradley Ruffle of Ben Gurion University in Israel sought an explanation for why Israel’s religious communes did better on average than secular communes in the wake of the economic crash of most of the country’s kibbutzim. They based their study on a standard economic game that measures cooperation. Individuals from religious communes played the game more cooperatively, while those from secular communes tended to be more selfish. It was the men who attended synagogue daily, not the religious women or the less observant men, who showed the biggest differences. To Sosis, this suggested that what mattered most was the frequent public display of devotion. These rituals, he wrote, led to greater cooperation in the religious communes, which helped them maintain their communal structure during economic hard times.

In 1997, Stephen Jay Gould wrote an essay in Natural History that called for a truce between religion and science. “The net of science covers the empirical universe,” he wrote. “The net of religion extends over questions of moral meaning and value.” Gould was emphatic about keeping the domains separate, urging “respectful discourse” and “mutual humility.” He called the demarcation “nonoverlapping magisteria,” from the Latin magister, meaning “teacher.”

Richard Dawkins had a history of spirited arguments with Gould, with whom he disagreed about almost everything related to the timing and focus of evolution. But he reserved some of his most venomous words for nonoverlapping magisteria. “Gould carried the art of bending over backward to positively supine lengths,” he wrote in “The God Delusion.” “Why shouldn’t we comment on God, as scientists? . . . A universe with a creative superintendent would be a very different kind of universe from one without. Why is that not a scientific matter?”

The separation, other critics said, left untapped the potential richness of letting one worldview inform the other. “Even if Gould was right that there were two domains, what religion does and what science does,” says Daniel Dennett (who, despite his neo-atheist label, is not as bluntly antireligious as Dawkins and Harris are), “that doesn’t mean science can’t study what religion does. It just means science can’t do what religion does.”

The idea that religion can be studied as a natural phenomenon might seem to require an atheistic philosophy as a starting point. Not necessarily. Even some neo-atheists aren’t entirely opposed to religion. Sam Harris practices Buddhist-inspired meditation. Daniel Dennett holds an annual Christmas sing-along, complete with hymns and carols that are not only harmonically lush but explicitly pious.

And one prominent member of the byproduct camp, Justin Barrett, is an observant Christian who believes in “an all-knowing, all-powerful, perfectly good God who brought the universe into being,” as he wrote in an e-mail message. “I believe that the purpose for people is to love God and love each other.”

At first blush, Barrett’s faith might seem confusing. How does his view of God as a byproduct of our mental architecture coexist with his Christianity? Why doesn’t the byproduct theory turn him into a skeptic?

“Christian theology teaches that people were crafted by God to be in a loving relationship with him and other people,” Barrett wrote in his e-mail message. “Why wouldn’t God, then, design us in such a way as to find belief in divinity quite natural?” Having a scientific explanation for mental phenomena does not mean we should stop believing in them, he wrote. “Suppose science produces a convincing account for why I think my wife loves me — should I then stop believing that she does?”

What can be made of atheists, then? If the evolutionary view of religion is true, they have to work hard at being atheists, to resist slipping into intrinsic habits of mind that make it easier to believe than not to believe. Atran says he faces an emotional and intellectual struggle to live without God in a nonatheist world, and he suspects that is where his little superstitions come from, his passing thought about crossing his fingers during turbulence or knocking on wood just in case. It is like an atavistic theism erupting when his guard is down. The comforts and consolations of belief are alluring even to him, he says, and probably will become more so as he gets closer to the end of his life. He fights it because he is a scientist and holds the values of rationalism higher than the values of spiritualism.

This internal push and pull between the spiritual and the rational reflects what used to be called the “God of the gaps” view of religion. The presumption was that as science was able to answer more questions about the natural world, God would be invoked to answer fewer, and religion would eventually recede. Research about the evolution of religion suggests otherwise. No matter how much science can explain, it seems, the real gap that God fills is an emptiness that our big-brained mental architecture interprets as a yearning for the supernatural. The drive to satisfy that yearning, according to both adaptationists and byproduct theorists, might be an inevitable and eternal part of what Atran calls the tragedy of human cognition.

Robin Marantz Henig, a contributing writer, has written recently for the magazine about the neurobiology of lying and about obesity.