Thursday, January 26, 2006

Do taxes matter?

Why Do Americans Work More Than Europeans?

By EDWARD C. PRESCOTT October 21, 2004; Page A18

Last week, The Wall Street Journal published a story describing a new method of measuring a nation's progress -- "gross national happiness" (1). Maybe it's because we're nearing the end of an election season, but one hopes that this indicator does not catch on. Of all the promises that candidates find themselves making, and of all the problems they pledge to fix, one shudders at the notion of pledges to make us happier. The mind reels at the thought of the ill-conceived policies that would be concocted if the stated goal were to increase gross national happiness. It's hard enough to make everybody more prosperous, educated and healthy, but imagine if the government were responsible for keeping you in a good mood. And just think about the data problems.

I mention this not to poke fun at the idea of happiness. Indeed, our Constitution, in its elegant wisdom, allows for individuals to pursue happiness. But individual pursuit is far different from the aggregate management of happiness. This point is at the core of how we should think about many government policies, especially tax policy, which is the subject of this essay.

* * *

Let's begin by considering a commonly held view that labor supply is not affected by tax rates. On this view, labor participation remains steady whether tax rates are raised or lowered. If you are a policy maker and you subscribe to it, then you can confidently increase marginal tax rates as high as you like to attain the revenues you desire. Not only that, but you can move those tax rates up and down whenever you like and blithely assume that this will have no effect on output.

But economic theory and data have come together to prove this notion wrong, and we have many different laboratories -- or countries -- in which we can view live experiments. The most useful comparison is between the U.S. and the countries of Europe, because these economies share traits; but the data also hold when we consider other countries (more on those later).

This issue is encapsulated in one question that is currently puzzling policy makers: Why do Americans work so much more than Europeans? The answer is important because it suggests policy proposals that will improve European standards of living (which should give a boost to Europe's gross national happiness, by the way). However, an incorrect answer to that question will result in policies that will only exacerbate Europe's problems and could have implications for other countries that are looking for best practices.

Here's a startling fact: Based on labor market statistics from the Organization for Economic Cooperation and Development, Americans aged 15-64, on a per-person basis, work 50% more than the French. Comparisons between Americans and Germans or Italians are similar. What's going on here? What can possibly account for these large differences in labor supply? It turns out that the answer is not related to cultural differences or institutional factors like unemployment benefits; rather, marginal tax rates explain virtually all of this difference. I admit that when I first conducted this analysis I was surprised by this finding, because I fully expected that institutional constraints were playing a bigger role. But this is not the case. (Citations and more complete data can be found in my paper, at http://www.minneapolisfed.org (2).)
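A minimal sketch of the mechanism may help. In the standard consumption-leisure model with log utility, if labor-tax revenue is handed back to households, the income effect of the tax is offset and the substitution effect dominates, so hours worked fall sharply as the marginal rate rises. The leisure weight and the tax rates below are illustrative assumptions, not the calibration used in the paper.

```python
# Illustrative sketch (not the paper's calibration): utility log(c) + alpha*log(1 - h),
# a proportional labor tax tau whose revenue is rebated lump-sum, productivity
# normalized to one, so consumption c equals hours h in equilibrium. The
# first-order condition (1 - tau)*(1 - h) = alpha*c then gives
# h = (1 - tau) / (alpha + 1 - tau).

def hours_worked(tau, alpha=1.5):
    """Equilibrium fraction of available time spent working at marginal tax rate tau."""
    return (1 - tau) / (alpha + 1 - tau)

tau_us, tau_fr = 0.40, 0.60   # assumed effective marginal rates, for illustration only

h_us, h_fr = hours_worked(tau_us), hours_worked(tau_fr)
print(f"U.S. hours share:    {h_us:.3f}")
print(f"French hours share:  {h_fr:.3f}")
print(f"U.S./French ratio:   {h_us / h_fr:.2f}")   # about 1.36 with these inputs
```

With a larger gap in effective marginal rates, or a different leisure weight, the same formula can produce hours differences on the order of the 50% gap cited above; the point of the sketch is only that the response to tax rates is large when the revenue is rebated rather than destroyed.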

Let's take another look at the data. According to the OECD, from 1970-74 France's labor supply exceeded that of the U.S. Also, a review of other industrialized countries shows that their labor supplies either exceeded or were comparable to the U.S. during this period. Jump ahead two decades and you will find that France's labor supply dropped significantly (as did that of several other countries), while some countries improved and stayed in line with the U.S. Controlling for other factors, what stands out in these cross-country comparisons is that when European and U.S. tax rates are comparable, labor supplies are comparable.

And this insight doesn't just apply to Western industrialized economies. A review of Japanese and Chilean data reveals the same result. This is an important point because some critics of this analysis have suggested that cultural differences explain the difference between European and American labor supplies. The French, for example, are said to prefer leisure more than Americans do or, on the other side of the coin, Americans are said to like work more. This is silliness.

Again, I would point you to the data which show that when the French and others were taxed at rates similar to Americans, they supplied roughly the same amount of labor. Other research has shown that at the aggregate level, where idiosyncratic preference differences are averaged out, people are remarkably similar across countries. Further, a recent study has shown that Germans and Americans spend the same amount of time working, but the proportion of taxable market time vs. nontaxable home work time is different. In other words, Germans work just as much, but more of their work is not captured in the taxable market.

I would add another data set for certain countries, especially Italy, and that is nontaxable market time or the underground economy. Many Italians, for example, aren't necessarily working any less than Americans -- they are simply not being taxed for some of their labor. Indeed, the Italian government increases its measured output by nearly 25% to capture the output of the underground sector. Change the tax laws and you will notice a change in behavior: These people won't start working more, they will simply engage in more taxable market labor, and will produce more per hour worked.

This analysis has important implications for policy -- and not just for Europeans, but for the U.S. as well. For example, much has been made during this election season about whether the current administration's tax cuts were good or bad for the economy, but that is more a political question than a policy consideration and it misses the point. The real issue is about whether it is better to tweak the economy with short-lived stimulus plans or to establish an efficient tax system with low tax rates that do not change with the political climate.

What does this mean for U.S. tax policy? It means that we should stop focusing our attention on the recent tax cuts and, instead, start thinking about tax rates. And that means that we should roll back the 1993 tax rate increases and re-establish those from the 1986 Tax Reform Act. Just as they did in the late 1980s, and just as they would in Europe, these lower rates would increase the labor supply, output would grow and tax revenues would increase.

Now, might there be a small increase in debt as we move to a better tax system? Sure, but remember that the most important measure of debt is privately owned government debt as a percent of gross national income, which has been flat over the past three years. Also, there is a sure-fire way to handle this increase in debt, and that would be to cut expenditures. Actually, there is another way to handle it, and that would be to pray to the Gods for another high-tech boom and the debt would go "poof," and we'll praise whoever is president for being fiscally responsible.

Some say that the 1993 tax-rate hike was responsible for erasing this country's debt problems because it increased government revenues. This is false. The ratio of U.S. debt to gross national income continued to increase in the years following those rate hikes and did not fall until the fortuitous boom that occurred in the late '90s. The high-tech boom meant that people worked more, output increased, incomes climbed and tax revenues followed suit. You cannot tax your way to that sort of prosperity. Imagine the outcome of the late-'90s boom if tax rates had been lower. And by the way, lower tax rates are good for all taxpayers. We're barking up the wrong tree if we think that "taxing the rich" will solve all our problems. You know who these rich people are? They're often families with two professional wage-earners. If you tax that family too much, one wage-earner will drop out, and that's not only bad for the income of that family but also for the output of the whole economy -- and will result in lower tax revenues.

Also, we need to get away from thinking of the rich as some sort of permanent class. Many of the individuals who show up on annual millionaire lists, for example, are people who happened to have a good year and who may never appear on that list again. Consider people who worked hard for many years and built a successful business that finally goes public. The big capital gain they realize that year is really compensation for the uncompensated effort they put into building the business. They should not be penalized for their vision and tenacity. If we establish rules that punish the winners, entrepreneurs will take fewer risks and we will have less innovation, less output, less job growth. The whole economy suffers under such a scenario -- not just those few individuals who are taxed at a higher rate. And this doesn't just involve the Googles and Apples and Microsofts, but countless other companies that start small and end up making large contributions to the economy.

The important thing to remember is that the labor supply is not fixed. People, be they European or American, respond to taxes on their income. Just one more example: In 1998, Spain flattened its tax rates in similar fashion to the U.S. rate cuts of 1986, and the Spanish labor supply increased by 12%. In addition, Spanish tax revenues also increased by a few percent.

And that brings us back to our framing question about the labor supplies of the U.S. and Europe: The bottom line is that a thorough analysis of historical data in the U.S. and Europe indicates that, given similar incentives, people make similar choices about labor and leisure. Free European workers from their tax bondage and you will see an increase in gross domestic product (oh, and you might see a pretty significant increase in gross national happiness, too). The same holds true for Americans.

Mr. Prescott is co-winner of the 2004 Nobel Prize in Economics, senior monetary adviser at the Federal Reserve Bank of Minneapolis and professor of economics at Arizona State University.

URL for this article: http://online.wsj.com/article/0,,SB109830788286551061,00.html
Hyperlinks in this Article: (1) http://online.wsj.com/article/0,,SB109716629151639303,00.html (2) http://www.minneapolisfed.org/
Copyright 2004 Dow Jones & Company, Inc. All Rights Reserved

Monday, January 23, 2006

Tax on capital and balance sheet effects

Capital Offense

By JOHN RUTLEDGE January 21, 2006; Page A8

As the Senate and House get back to work on the economy, their chief order of business now -- as it was 25 years ago this month, when Ronald Reagan was inaugurated as president -- is taxes. In particular, it is the Tax Reconciliation Bill that will emerge from Senate-House Conference in early February. At stake -- tax rates on the capital that determines American productivity and our workers' paychecks.

America is not competing for jobs with China. We are competing for capital. Double taxing dividend and capital gains income drives capital to China, where it earns higher after-tax returns. When that happens, American workers are left behind with falling productivity and uncompetitive companies.

Reducing or eliminating dividend and capital-gains tax rates keeps capital in America, where it makes workers productive and supports high incomes. Congress must act now to keep rates from rising in 2008 by extending the lower rates or eliminating dividend and capital-gains taxes altogether.

The 2003 cuts in both dividend and capital-gains tax rates were a substantial boost for the stock market and corporate boardrooms. The Dow Jones Industrial Average is up 32% since Dec. 31, 2002, one week before President Bush announced the 2003 tax cuts. The S&P 500 large-cap index is up 47%. Mid-caps are up 79%, and small-caps up 81%.

Overall, the value of U.S. equities increased $6 trillion (up 50% from $11.9 trillion to $17.9 trillion on Sept. 30, 2005) since the dividend tax cut first appeared in the headlines. Household net worth increased $12.1 trillion to $51.1 trillion over the same period, an increase of $40,631 for every person in America. These gains accrue to the 91 million Americans who own shares of stock directly or through mutual funds, and to more than 80 million private and government workers through their pension funds. Output, profits and investment spending also grew, and we have created 4.4 million jobs. Tax cuts were a major factor in producing these gains.

Dividend and capital-gains tax cuts are not trickle-down economics as claimed by opponents. They work by jolting asset markets, stock prices, and capital spending, and by altering business decisions about capital structure, dividend payout and capital deployment.

In December 2002, I prepared a report for a White House working group detailing how the dividend tax cut would impact the U.S. stock market through two different channels: 1) recapitalizing the stock market and 2) restructuring corporate balance sheets.

Tax cuts initially impact asset prices by making investors recapitalize, or revalue, the equities of existing companies to reflect higher after-tax returns relative to interest-bearing securities, tangible assets like land and collectibles, and foreign assets. The return gap -- more than 100 basis points for the 2003 tax cuts -- makes investors sell relatively low-return assets, driving their prices down, and buy relatively high-return assets, driving those prices up, until after-tax returns have been driven together again. My estimates showed an initial positive impact on equity values of $560 billion to $938 billion, or 6% to 10%.
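A back-of-the-envelope sketch of that recapitalization logic may make it concrete. It uses assumed numbers, not the model behind the estimates above: if equity returns scale inversely with price, closing an after-tax return gap of about 100 basis points requires prices to rise by roughly the gap divided by the required return.

```python
# Back-of-the-envelope sketch of the recapitalization channel, with illustrative
# assumptions rather than the author's model: a tax cut lifts equities' after-tax
# return by `gap` above the economy-wide required return `r`; investors bid prices
# up until the after-tax return falls back to r, so the one-time price impact is
# roughly gap / r.

def price_impact(gap, r):
    """Proportional price rise needed to close an after-tax return gap."""
    return gap / r

gap = 0.01  # "more than 100 basis points"

for r in (0.10, 0.125, 0.167):   # assumed required returns, for illustration
    print(f"required return {r:.1%} -> price impact {price_impact(gap, r):.1%}")
# prints roughly 10%, 8% and 6% -- the same order of magnitude as the 6% to 10%
# initial impact cited above
```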

The restructuring impact of tax cuts on stock prices plays out over several years but is potentially several times larger than the initial price impact. The 2003 tax cuts were larger for dividend income (from 38.6% to 15%) than for capital-gains income (20% to 15%); tax rates on interest income were unchanged. The positive impact on a stock's value will be greater the more profitable the company, the greater the percentage of equity rather than debt on its balance sheet, the greater its payout rate, and the greater its duration (a stock with a greater duration is more sensitive to changes in the cost of capital).

In 2003, U.S. companies were poorly structured to benefit from the changes. Decades of high tax rates on dividends had prompted managers to reinvest profits and hoard cash for acquisitions rather than pay out dividends, regardless of the company's prospects. Meanwhile, deductible interest payments had encouraged managers to finance companies with debt instead of equity, which reduced profits and increased bankruptcy risk. According to the American Shareholders Association, the number of S&P 500 companies paying dividends fell from 469 in 1980 to 351 in 2002. By 2002 the S&P 900 large- and mid-cap companies paid out just 53% of profits and were financed with only 27% equity and 73% debt.

Once tax rates were cut in 2003, managers quickly learned they could profit from lower tax rates by restructuring balance sheets. Companies like Nextel issued equity to buy back debt. Other companies, like Microsoft, initiated new dividends and cleaned out their cash hoards through one-time special dividends. Most increased dividend payout ratios: Dividend payments received by shareholders have doubled since the tax cuts.

As companies, one by one, made these changes, their equity values increased further. But changing capital structure takes time, one reason I believe equities will enjoy strong returns for years if tax rates remain low.

We need permanent tax cuts, not temporary extensions, to fully realize these benefits. Managers do not make decisions about leverage and dividend payouts lightly; they will do so only if they believe tax rates will remain low. But Congress gives them temporary rate cuts and temporary extensions in order to comply with the bizarre Congressional budget scoring ritual.

Equities are a long-term investment. Based on our estimates, the duration of the S&P 500 is over 22 years. Each of the first two years of expected free cash flow determines only about 5% of the stock market's intrinsic value. That means roughly 90% of the value of the stock market depends on expected after-tax profits after year two, the date when the tax cuts are currently scheduled to expire. We need to make tax cuts permanent for them to be fully reflected in stock prices.
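A simple discounted-cash-flow sketch shows where numbers like these come from. The discount and growth rates below are assumptions chosen only to match a duration of roughly 22 years; they are not the estimates behind the article.

```python
# Illustrative DCF sketch with assumed inputs: for a growing perpetuity with
# discount rate r and cash-flow growth g, the share of intrinsic value contributed
# by the first n years is 1 - ((1 + g) / (1 + r))**n, and Macaulay duration is
# approximately (1 + r) / (r - g).

r, g = 0.08, 0.031   # assumed discount and growth rates (duration of about 22 years)

duration = (1 + r) / (r - g)
share_first_two_years = 1 - ((1 + g) / (1 + r)) ** 2

print(f"implied duration:           {duration:.0f} years")
print(f"value from first two years: {share_first_two_years:.1%}")      # about 9%
print(f"value beyond year two:      {1 - share_first_two_years:.1%}")  # about 91%
```

Under those assumptions, each of the first couple of years of cash flow is worth only about 4% to 5% of intrinsic value, leaving roughly 90% of the market's value riding on profits earned after the scheduled expiration date.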

Congress could adopt the two-year extension in the House bill and keep the recovery strong and net worth growing. Better still, they could make current tax rates permanent, which would encourage managers to speed up restructuring activities, accelerate stock-market gains, reduce cost of capital and increase capital spending. Best, they should end double taxation by making both dividend and capital-gains tax rates permanently zero.

America enjoys the highest living standard in the world because American workers enjoy the use of the largest and most advanced stock of tools in the world. But tools are mobile; workers are not. While America continues to double-tax capital income through dividend and capital-gains taxes, China, India, and other countries are aggressively competing for American capital with investor-friendly policies. When the capital leaves, the paycheck goes with it. We can't afford to let that happen.

Mr. Rutledge is chairman of Rutledge Capital and president of Mundell International University School of Business in Beijing.

URL for this article: http://online.wsj.com/article/SB113780947395852687.html
Copyright 2006 Dow Jones & Company, Inc. All Rights Reserved

Monday, January 16, 2006

Cost of war

January 15, 2006
Economic View

When Talk of Guns and Butter Includes Lives Lost

As the toll of American dead and wounded mounts in Iraq, some economists are arguing that the war's costs, broadly measured, far outweigh its benefits.

Studies of previous wars focused on the huge outlays for military operations. That is still a big concern, along with the collateral impact on such things as oil prices, economic growth and interest on the debt run up to pay for the war. Now some economists have added in the dollar value of a life lost in combat, and that has fed antiwar sentiment.

"The economics profession in general is paying more attention to the cost of lives cut short or curtailed by injury and illness," said David Gold, an economist at the New School. "The whole tobacco issue has encouraged this research."

The economics of war is a subject that goes back centuries. But in the cost-benefit analyses of past American wars, a soldier killed or wounded in battle was typically thought of not as a cost but as a sacrifice, an inevitable and sad consequence in achieving a victory that protected and enhanced the country. The victory was a benefit that offset the cost of death.

That halo still applies to World War II, which sits in the American psyche as a defensive war in response to attack. The lives lost in combat helped preserve the nation, and that is a considerable and perhaps immeasurable benefit.

Through the cold war, economists generally avoided calculations of the cost of a human life. Even during Vietnam, the focus of economic studies was on guns and butter - the misguided insistence of the Johnson administration that America could afford a full-blown war and uncurtailed civilian spending. The inflation of the 1970's was partly a result of that Vietnam-era spending.

Cost-benefit analysis, applied to war, all but ceased after Vietnam and did not pick up again until the fall of 2002 as President Bush moved the nation toward war in Iraq. "We are doing this research again," said William D. Nordhaus, a Yale economist, "because the Iraq war is so contentious."

Mr. Nordhaus is the economist who put the subject back on the table with the publication of a prescient prewar paper that compared the coming conflict to a "giant roll of the dice." He warned that "if the United States had a string of bad luck or misjudgments during or after the war, the outcome could reach $1.9 trillion," once all the secondary costs over many years were included.

So far, the string of bad luck has materialized, and Mr. Nordhaus's forecast has been partially fulfilled. In recent studies by other economists, the high-end estimates of the war's actual cost, broadly measured, are already moving into the $1 trillion range. For starters, the outlay just for military operations totaled $251 billion through December, and that number is expected to double if the war runs a few more years.

The researchers add to this the cost of disability payments and of lifelong care in Veterans Administration hospitals for the most severely injured - those with brain and spinal injuries, roughly 20 percent of the 16,000 wounded so far. Even before the Iraq war, these outlays were rising to compensate the aging veterans of World War II and Korea. But those wars were accepted by the public, and the costs escape public notice.

Not so Iraq. In a war that has lost much public support, the costs stand out and the benefits - offsetting the costs and justifying the war - are harder to pinpoint. In a paper last September, for example, Scott Wallsten, a resident scholar at the conservative American Enterprise Institute, and Katrina Kosec, a research assistant, listed as benefits "no longer enforcing U.N. sanctions such as the 'no-fly zone' in northern and southern Iraq and people no longer being murdered by Saddam Hussein's regime."

Such benefits, they found, fall well short of the costs. "Another possible impact of the conflict is a change in the probability of future major terrorist attacks," they wrote. "Unfortunately, experts do not agree on whether the war has increased or decreased this probability. Clearly, whether the direct benefits of the war exceed the costs ultimately relies at least in part on the answer to that question."

The newest research was a paper posted last week on the Web (www2.gsb.columbia.edu/faculty/jstiglitz/cost_of_war_in_iraq.pdf) by two antiwar Democrats from the Clinton administration: Joseph E. Stiglitz of Columbia University and Linda Bilmes, now at the Kennedy School of Government at Harvard. Their upper-end, long-term cost estimate tops $1 trillion, based on the death and damage caused by the war to date. They assumed an American presence in Iraq through at least 2010, and their estimate includes the war's contribution to higher domestic petroleum prices. They also argue that while military spending has contributed to economic growth, that growth would have been greater if the outlays had gone instead to highways, schools, civilian research and other more productive investment.

The war has raised the cost of Army recruiting, they argue, and has subtracted from income the wages given up by thousands of reservists who left civilian jobs to fight in Iraq at lower pay.

Just as Mr. Wallsten and Ms. Kosec calculated the value of life lost in battle or impaired by injury, so did Mr. Stiglitz and Ms. Bilmes - putting the loss at upwards of $100 billion. That is more than double the Wallsten-Kosec estimate. Both studies draw on research undertaken since Vietnam by W. Kip Viscusi, a Harvard law professor.

The old way of valuing life calculated the present value of lost earnings, a standard still used by the courts to compensate accident victims, generally awarding $500,000 a victim, at most. Mr. Viscusi, however, found that Americans tend to value risk differently. He found that society pays people an additional $700 a year, on average, to take on risky work in hazardous occupations. Given one death per 10,000 risk-takers, on average, the cost to society adds up to $7 million for each life lost, according to Mr. Viscusi's calculation. Mr. Stiglitz and Ms. Bilmes reduced this number to about $6 million, keeping their estimate on the conservative side, as they put it.
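The arithmetic behind that $7 million figure is simple enough to spell out, using only the numbers quoted above.

```python
# Value-of-a-statistical-life arithmetic from the figures quoted above: workers
# accept about $700 a year in extra pay for jobs carrying roughly a 1-in-10,000
# annual risk of death, so 10,000 such workers collectively receive about
# $7 million per expected fatality.

wage_premium = 700                 # extra pay per worker per year, in dollars
annual_fatality_risk = 1 / 10_000  # one death per 10,000 risk-takers

value_per_statistical_life = wage_premium / annual_fatality_risk
print(f"implied value per statistical life: ${value_per_statistical_life:,.0f}")
# -> $7,000,000; Stiglitz and Bilmes trim this to about $6 million to stay conservative
```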

None of the heroism or sacrifice for country shows up in the recent research, and for a reason.

"We did not have to fight this war, and we did not have to go to war when we did," Mr. Stiglitz said. "We could have waited until we had more safe body armor and we chose not to wait."

More government interference in health care

HillaryCare Returns January 16, 2006; Page A14

Readers with long memories will recall that one of the reasons HillaryCare was defeated in 1994 was its unpopular employer mandate. Well, its diktat that all businesses provide health insurance is making a comeback, albeit at the state level and at first only for the largest companies. But all employers are on Big Labor's target list here.

That's the larger meaning of last week's events in Maryland, where the state legislature overturned the veto of GOP Governor Robert Ehrlich and passed a bill forcing any employer in the state with more than 10,000 employees to spend at least 8% of its payroll on health care or pay the state the difference. There are only a handful of companies that large in Maryland -- Johns Hopkins University and Giant Food, a grocery chain competing with Wal-Mart, among them. Only one meets all the criteria, however, so the legislation was dubbed "The Wal-Mart Bill," which in part it is.

But no one should think this will be an isolated political event. The state AFL-CIO threw everything it had into the bill, including a vow to withdraw support from any lawmaker who didn't vote to override the veto. Democrats who dominate both houses of the Maryland legislature went along. The national AFL-CIO now plans to use the Maryland law as a model for legislation in other states. Union chief John Sweeney has announced a campaign to enact "Fair Share Health Care Legislation" in more than 30 states. Washington and New Hampshire will be early targets.

The details vary by state, but already it's clear the new tax would eventually hit companies a lot smaller than Wal-Mart. In Rhode Island, proposed legislation takes aim at businesses with only 1,000 employees. In other states proposals would mandate payouts of 9% or more. Once the principle is established that employers must allocate a certain share of their payroll to health care, it becomes easier to gradually extend the mandate to all businesses.

Unions and Democrats argue that companies must be commanded to do this because employees without health insurance often turn up on Medicaid, which is busting state budgets. But rather than reform Medicaid to control its costs or stop its rampant fraud, the politicians find it easier to sock it to private business. One result will be that companies will create fewer new jobs, as in Old Europe.

As for Wal-Mart, it is hardly an ogre as an employer for 1.3 million Americans. It now offers an array of health plans to all full and part-time employees with monthly premiums as low as $23 for an individual and $65 for a family anywhere in the country (less in some areas). Employees can also choose to set up health savings accounts with Wal-Mart matching contributions up to $1,000.

The company doesn't disclose its health care spending. But last year officials told Maryland's legislature that Wal-Mart typically spends between 7% and 8% of its payroll on health care. In the fiscal year ending this month, the company expects to have spent $4.7 billion on all benefits (including health, dental, retirement and more) while turning a profit of about $10 billion. One perverse aspect of the Maryland bill is that Wal-Mart won't be able to count any savings from negotiating lower prices with doctors and hospitals toward the 8% threshold. So the bill works against the oft-invoked liberal goal of reducing the nation's overall health-care costs.

Wal-Mart hasn't said how it will respond to the new Maryland law, but we'd suggest at a minimum that it cancel its plans for a new regional distribution center in the state that would create about 800 jobs. Maryland's politicians need to understand that policies punishing business have bad economic consequences. Let a more enlightened state benefit from Wal-Mart's prosperity.

What's really going on here is an attempt to pass the runaway burdens of the welfare state on to private American employers. As we're learning from Old Europe and General Motors, this is bad news for both business and workers in the long run. The U.S. doesn't need a revival of HillaryCare on the installment plan.

URL for this article: http://online.wsj.com/article/SB113737238753647283.html
Copyright 2006 Dow Jones & Company, Inc. All Rights Reserved

Saturday, January 14, 2006

Dictating to Others

The War Against Suburbia

By JOEL KOTKIN January 14, 2006; Page A8

Suburbia, the preferred way of life across the advanced capitalist world, is under an unprecedented attack -- one that seeks to replace single-family residences and shopping centers with an "anti-sprawl" model beloved of planners and environmental activists. The latest battleground is Los Angeles, which gave birth to the suburban metropolis. Many in the political, planning and media elites are itching to use the regulatory process to turn L.A. from a sprawling collection of low-rise communities into a dense, multistory metropolis on the order of New York or Chicago. Mayor Antonio Villaraigosa has outlined this vision, and it does not conform to the way that most Angelenos prefer to live: "This old concept that all of us are going to live in a three-bedroom home, you know this 2,500 square feet, with a big frontyard and a big backyard -- well, that's an old concept."

This kind of imposed "vision" is proliferating in major metropolitan regions around the world. From Australia to Great Britain (and points in between), there is a drive to use the public purse to expand often underused train systems, downtown condominiums, hotels, convention centers, sports stadia and "star-chitect"-designed art museums, often at the expense of smaller businesses, single-family neighborhoods and local shopping areas. All this reflects a widespread prejudice endemic to planning departments in universities, within city bureaucracies, and in much of the media. Across a broad spectrum of planning schools and practitioners, suburbs and single-family neighborhoods are linked to everything from obesity, rampant consumerism and environmental degradation to the current energy crisis -- and even the predominance of conservative political tendencies.

Acolytes of such worldviews in our City Halls are now working overtime to find ways to snuff out "sprawl" in favor of high-density living. Portland's "urban growth boundary" and the "smart growth" policies promoted by former Maryland Governor Parris Glendening, for example, epitomize the preference of planners to cram populations into ever denser, expensive housing by choking off new land to development. More recently, this notion even has spread to areas where single family homes and suburbs are de rigueur. Planners in Albuquerque have suggested banning backyards -- despised as wasteful and "anti-social" by new urbanists and environmentalists, although it is near-impossible to find a family that doesn't want one. Even the mayor of Boise, Idaho, advocates tilting city development away from private homes, which now dominate the market, toward apartments.

Perhaps the best-known case of anti-sprawl legislation has been Portland's "urban growth boundary," adopted in the late '70s to confine development to areas close to established urban centers. To slow the spread of suburban, single-family-home growth, the Portland region adopted a "grow up, not out" planning regime, which stressed dense, multistory development. Mass transit was given priority over road construction, which was deemed to be sprawl-inducing.

Experts differ on the impact of these regulations, but they certainly have not created the new urbanist nirvana widely promoted by Portland's boosters. Strict growth limits, in part by raising the price of land within the growth boundary, have driven population and job growth farther out -- to communities across the Columbia River in Washington state and to distant places in Oregon. Suburbia has not been crushed, but simply pushed farther away. Portland's dispersing trend appears to have intensified since 2000: The city's population growth has slowed considerably, and 95% of regional population increase has taken place outside the city limits.

This experience may soon be repeated elsewhere as planners and self-proclaimed visionaries run up against people's aspirations for a single-family home and low-to-moderate-density environment. Such desires may constitute, as the late Robert Moses once noted, "details too intimate" to merit the attention of the university-trained. Even around cities like Paris, London, Toronto and Tokyo -- all places with a strong tradition of central planning -- growth continues to follow the preference of citizens to look for lower-density communities. High energy prices and convenient transit have not stopped most of these cities from continuing to lose population to their ever-expanding suburban rings.

But nowhere is this commitment to low-density living greater than in the U.S. Roughly 51% of Americans, according to recent polls, prefer to live in the suburbs, while only 13% opt for life in a dense urban place. A third would go for an even more low-density existence in the countryside. The preference for suburban-style living continues to be particularly strong among younger families. Market trends parallel these opinions. Despite widespread media exposure about a massive "return to the city," demographic data suggest that the tide continues to go out toward suburbia, which now accounts for two-thirds of the population in our large metropolitan areas. Since 2000, suburbs have accounted for 85% of all growth in these areas. And much of the growth credited to "cities" has actually taken place in the totally suburb-like fringes of places like Phoenix, Orlando and Las Vegas.

These facts do not seem to penetrate the consciousness of the great metropolitan newspapers any more than the minds of their favored interlocutors in the planning profession and academia. Newspapers from Boston and San Francisco to Los Angeles are routinely filled with anecdotal accounts of former suburbanites streaming into hip lofts and high-rises in the central core. Typical was a risible story that ran in last Sunday's New York Times, titled "Goodbye, Suburbia." The piece tracked the hegira back to the city by sophisticated urbanites who left their McMansions to return to Tribeca (rhymes with "Mecca"). Suburbia, one returnee sniffed, is "just a giant echoing space."

Such reports confirm the cognoscenti's notion that the cure for the single-family house lies in the requisite lifting of consciousness, not to mention a couple of spare million in the bank. Yet demographic data suggest the vast majority of all growth in greater New York comes not from migration from the suburbs, but from abroad. Among domestic migrants, far more leave for the "giant echoing spaces" than come back to the city. As a whole, greater New York -- easily the most alluring traditional urban center -- is steadily becoming more, not less, suburban. Since 2000, notes analyst Wendell Cox, New York City has gained fewer than 95,000 people while the suburban rings have added over 270,000. Growth in "deathlike" places like Suffolk County on Long Island, Orange County, N.Y., and Morris County, N.J., has been well over three times faster than in the city.

So as he unfolds the details of his new urban "vision," Mr. Villaraigosa might do well to consider such sobering statistics. Californians, too, like single-family homes. According to a 2002 poll, 84% prefer them to apartments. Instead of dismissing the suburban single-family neighborhood as "an old concept," L.A.'s mayor might consider how to capitalize on the success of such sections of his city as the San Fernando Valley, where a large percentage of the housing stock is made up of owner-occupied houses and low-rise condominiums. The increasingly multi-ethnic valley already boasts both the city's largest base of homeowners and its strongest economy, including roughly two-thirds of the employment in the critical entertainment industry.

It is time politicians recognized how their constituents actually want to live. If not, they will only hurt their communities, and force aspiring middle-class families to migrate ever farther out to the periphery for the privacy, personal space and ownership that constitute the basis of their common dreams.

Mr. Kotkin, Irvine Senior Fellow with the New America Foundation, is the author of "The City: A Global History" (Modern Library, 2005).

URL for this article: http://online.wsj.com/article/SB113720150260446647.html
Copyright 2006 Dow Jones & Company, Inc. All Rights Reserved

Friday, January 13, 2006

Revenue Rebound

States of Plenty January 11, 2006; Page A14

Just prior to his state of the state address last week, Governor George Pataki announced that New York is looking at a $2 billion surplus this year, or twice what was predicted just three months earlier. Two days later, Governor Arnold Schwarzenegger said California would end the fiscal year with a whopping $5.2 billion in reserves. Arkansas, Florida, Maryland, Oklahoma, Virginia and about three dozen other states are also reporting revenue above forecasts. What's going on here?

A lot of underreported economic good news, that's what. According to the National Conference of State Legislatures, states have closed an aggregate $264 billion budget gap since 2001. "State budget conditions continue to improve, showing signs of recovery in nearly every state," says the report. "Buoyed by robust revenue performance, states are collecting more revenue than they projected for nearly every major tax. In some cases, collections are significantly above forecasts."

[Chart: Revenue Rebound]

There's some policy irony here. In order to vote for the Bush tax cuts, some liberal Republicans insisted on a $20 billion giveaway to the states to offset what they thought would be "lost" revenue. But the revenue data show that the Bush tax cuts and the economic expansion that has followed have been a windfall for state coffers.

Since the tax cuts passed in mid-2003, GDP growth has averaged close to 4%, the jobless rate is down to 4.9%, and federal tax receipts have climbed at the fastest pace in more than two decades. More workers and rising incomes translate into more taxable income for states. And because most states tax investment income too, state budgets are also benefiting from an increase in corporate dividend payouts. If the states were smart, they'd offer to repay the $20 billion if Washington will make the tax cuts permanent.

As usual, strong profits on Wall Street are playing a central role in New York's recovery. California cites higher-than-expected corporate tax revenues. In Connecticut, where a $524 million surplus is anticipated, officials point to capital gains on stock sales as the biggest factor. And in Virginia -- where Democratic Governor Mark Warner pled poverty two years ago in order to push through a record tax increase -- a hefty $1.1 billion surplus has been projected. Even some Republican governors, like Mitch Daniels of Indiana, found themselves pushing prematurely for tax increases. For the first three quarters of 2005, Indiana's state tax receipts are up 6.3%.

In fact, the state budget "crisis" that we've been reading about for the past few years always had more to do with overspending than revenue shortfalls. Using Census data, Chris Edwards of the Cato Institute calculates that state revenues did drop 3.2% in 2002, but they rebounded by 4.2% the next year. States saw revenue growth of 8.7% in 2004 and an estimated 8% last year. In other words, if budgets weren't being balanced, it's not because the taxpayers weren't doing their part.

State politicians will claim these newfound riches shouldn't be returned to taxpayers because of rising Medicaid costs. And it's certainly true that that program is claiming a higher and higher percentage of state budgets. But that's not an argument against tax cuts; it's an argument for Medicaid reform. And history shows that, given their druthers, state politicians would much rather use boom cycles to placate special interests by expanding entitlements than reform them.

Voters might also keep all this in mind during the next economic downturn, when their state politicians come asking for more money to tackle problems that they lacked the discipline to address when their coffers were full.

URL for this article: http://online.wsj.com/article/SB113695191556343524.html
Copyright 2006 Dow Jones & Company, Inc. All Rights Reserved

Tuesday, January 10, 2006

Human capital

Catch 'em Young

By JAMES J. HECKMAN January 10, 2006; Page A14

It is a rare public policy initiative that promotes fairness and social justice and, at the same time, promotes productivity in the economy and in society at large. Investing in disadvantaged young children is such a policy. The traditional argument for providing enriched environments for disadvantaged young children is based on considerations of fairness and social justice. But another argument can be made that complements and strengthens the first one. It is based on economic efficiency, and it is more compelling than the equity argument, in part because the gains from such investment can be quantified -- and they are large.

There are many reasons why investing in disadvantaged young children has a high economic return. Early interventions for disadvantaged children promote schooling, raise the quality of the work force, enhance the productivity of schools, and reduce crime, teenage pregnancy and welfare dependency. They raise earnings and promote social attachment. Even focusing solely on earnings gains, the return on each dollar invested is as high as 15% to 17%.

The equity-efficiency trade-off that plagues so many public policies can be avoided because of the importance of skills in the modern economy and the dynamic nature of the skill-acquisition process. A large body of research in social science, psychology and neuroscience shows that skill begets skill; that learning begets learning. There is also substantial evidence of critical or sensitive periods in the lives of young children. Environments that do not cultivate both cognitive and noncognitive abilities (such as motivation, perseverance and self-restraint) place children at an early disadvantage. Once a child falls behind in these fundamental skills, he is likely to remain behind. Remediation for impoverished early environments becomes progressively more costly the later it is attempted.

* * *

Families are the major source of inequality in American social and economic life. The accident of birth has substantial lifetime consequences. Adverse early environments are powerful predictors of adult failure on several social and economic dimensions. The source of the adversity is the lack of stimulation afforded young children. Experimental interventions that enrich early childhood environments have been shown to produce more successful adults by raising both cognitive and noncognitive skills. At current levels of spending, early interventions targeted toward disadvantaged children have much higher economic returns than later interventions, such as reduced pupil-teacher ratios, public job training, convict rehabilitation programs, tuition subsidies or expenditure on police.

Adverse early environments contribute to many major social problems. One prominent example is the slowdown in the growth of labor force quality. The U.S. will add many fewer college graduates to its work force in the next 20 years than it did in the last 20 years. The percentage of each cohort of Americans who attend college has stagnated in recent decades. Properly counted, the high school dropout rate is increasing at a time when the economic return to schooling has increased. This increase is occurring among native populations and is not solely due to immigration. Crime is another social problem. The estimated net cost of crime in American society is $1.3 trillion per year -- $4,818 per capita. Crime reduction is extremely expensive, and spending on the criminal justice system is still increasing.

When we look at the origins of these and other problems, we find that shortfalls in cognitive and noncognitive ability are major predictors of these social ills. Noncognitive ability is neglected in many public policy discussions, yet it is a major determinant of socioeconomic success. Cognitive and noncognitive ability are both important in explaining schooling attainment, participation in crime and a variety of other outcomes. Moving persons from the bottom to the top of either cognitive or noncognitive distributions has equally strong effects on many measures of social and economic success.

Gaps in rankings of both cognitive and noncognitive ability by socioeconomic status emerge early in the life of the child, widen slightly in the early years of schooling, and stay constant after age eight. Research shows that schooling and school quality play only a small role in accounting for these gaps or in widening or narrowing them. Controlling for early family environments narrows the gaps greatly.

Family environments are major predictors of adult cognitive and noncognitive abilities. This is a source of concern because these environments have deteriorated. Using a variety of measures, relatively more U.S. children are born into disadvantaged environments compared to 40 years ago. The percentage of children born to single parent families, for example, has jumped from less than 5% in 1968 to more than 22% in 2000. Few of those families are headed by well-educated mothers. Interventions that enrich the early years of disadvantaged children improve both cognitive and noncognitive skills and produce successful adults.

A great deal of American public policy discussion judges the success or failure of education programs by their effects on cognitive test-score measurements. Head Start, for instance, was deemed a failure because it did not raise IQ scores. But such judgments are unwise. Consider the Perry Preschool Program, a family-environment enrichment program for disadvantaged minority children that was evaluated by a randomized trial. The Perry intervention group had no higher IQ test scores than the control group. Yet in a follow-up to age 40, the Perry treatment children had higher achievement test scores than the control children, and on many dimensions they were far more successful than the controls. In terms of employment, schooling and participation in crime, among other measures, early interventions can partially compensate for early disadvantage. The Perry program's economic benefits are substantial: Rates of return are 15% to 17%, and the benefit-cost ratio is eight to one. Participants' noncognitive skills were raised even if their IQs were not.

Perry intervened relatively late (at ages four to six) in the lives of the disadvantaged children. Earlier interventions, like the Carolina Abecedarian program, which also targeted disadvantaged children and began when subjects were four months old, permanently raised the IQ and the noncognitive skills of the treatment group compared to the control group.

Although much public policy discussion focuses on the failings of schools, a major finding from the research literature is that schools and school quality contribute little to the emergence of test-score gaps among children. By the second grade, gaps in ranks of test scores across socioeconomic groups are stable, suggesting that later schooling has little effect in reducing or widening the gaps that appear before students enter school. In work with Pedro Carneiro, I performed a cost-benefit analysis of the effect of class-size reduction on adult earnings. While smaller classes raise the adult earnings of students, the earnings gains do not offset the costs of hiring additional teachers. The best way to improve schools is to improve the students sent to them. A substantial benefit of early interventions is improvement of the performance of disadvantaged children in schools.

Because of the dynamics of human skill formation, the abilities and motivations that children bring to school play a far greater role in promoting performance than do the traditional schooling input measures that receive so much attention in public policy debates. Other evidence suggests that resources available to children when they make their college attendance decisions play only a small role in accounting for socioeconomic and ethnic differentials. At most 8% of the families in America cannot afford to send their children to college. While policies targeted to this 8% are cost-effective, the major source of the gaps in college attendance across socioeconomic groups is gaps in the abilities that children have in their late teens, which are formed much earlier in life.

Many politicians and citizens place their faith in adolescent and young-adult remediation programs. America is a second-chance society, fundamentally optimistic about the possibility of human change. However, the track records of criminal rehabilitation programs, adult literacy programs and public job-training programs are poor. A few selectively targeted versions yield modest benefits. They do not lift the vast majority of their participants out of poverty.

Studies of the dynamics of human skill formation show that later compensation for deficient early family environments is very costly. A lack of early skill and motivation begets a lack of future skill and motivation. If society waits too long to compensate for the accident of birth, it is economically inefficient to invest in the skills of the disadvantaged. A serious trade-off exists between equity and efficiency for skill policies directed towards adolescents and young adults. There is no such trade-off for policies targeted toward disadvantaged young children.

Important operational details of investment programs for disadvantaged children remain to be determined. Children from advantaged environments, by and large, receive substantial early investment, while children from disadvantaged environments more often do not. There is little basis for providing universal programs at zero cost, although some advocate such a policy. While there is a strong case for public support for funding interventions in the early childhood of disadvantaged children, there is no reason for the interventions to be conducted in public centers. Vouchers that can be used in privately run programs would promote competition and efficiency in the provision of early enrichment programs. They would allow parents to choose the venues and values offered in the programs that enrich their child's earliest years.

Mr. Heckman, Nobel laureate in economics in 2000, is a professor at the University of Chicago.

URL for this article: http://online.wsj.com/article/SB113686119611542381.html
Copyright 2006 Dow Jones & Company, Inc. All Rights Reserved

Reagan redux

The Great Communicator: A revisionist profile of Reagan shows a formidable leader who molded reality to suit his purposes.

Reviewed by Jon Meacham Sunday, January 8, 2006; BW03

PRESIDENT REAGAN

The Triumph of Imagination

By Richard Reeves

Simon & Schuster. 571 pp. $30

He finally got it. In the end, after the tantrums, after hanging up on Nancy, after hearing about his own firing from a CNN report, Donald Regan at last came to see the truth about Ronald Reagan, the man he served as secretary of the treasury and chief of staff.

"What was the biggest problem in the White House when you were there?" the biographer Richard Reeves asked Regan.

"Everyone there thought he was smarter than the President," Regan replied.

"Including you?"

"Especially me."

That brief exchange tells us much about Reeves's illuminating new President Reagan and about a significant shift in elite opinion about our 40th president. Long dismissed and derided by the upper reaches of the press and by denizens of the blue-state bubble, the man who swept two national elections, helped bring down the Soviet Union and fundamentally changed the terms of the American debate over government is no longer being viewed as "an unwitting tool of a manipulative staff," in Reeves's phrase. In a way, Reeves took up "Doonesbury" creator Garry Trudeau's challenge and went "In Search of Reagan's Brain." He found a formidable one.

President Reagan marks a surrender of sorts. The establishment has, for the moment at least, given in and decided that Reagan was a great historical figure after all. That Reeves arrived at such a conclusion is particularly notable. Twenty years ago, in 1985, he published The Reagan Detour, arguing that "the Reagan years would be a detour, necessary if sometimes nasty, in the long progression of American liberal democracy."

As it turned out, Reagan's America was neither coldly conservative nor intractably hawkish, and we are still living in the nation he seduced and shaped. Before him, it was difficult to imagine a Democratic president saying, "The era of big government is over," but in 1996, Bill Clinton did, and no Democrat since has tried very hard to make a case for traditional 20th-century American liberalism.

As in two earlier works -- the excellent President Kennedy: Profile of Power (1993), and President Nixon: Alone in the White House (2001), a very strong account that strangely stopped the day senior White House advisers H.R. Haldeman and John Ehrlichman resigned -- Reeves puts a premium on evidence of what the president knew and did moment by moment. Rich in anecdote yet sparingly written, President Reagan puts us in the room with a president who lived what Reeves calls a "life imagined." Like Winston Churchill, Reagan had a remarkable capacity to recast reality to suit his emotional and political purposes.

The child of a pious, theatrical mother and an alcoholic Midwestern shoe salesman, Reagan did not have the happiest of childhoods. In the winter of 1922, when "Dutch" was 11, he found his father, Jack, passed out on the front porch. "He was drunk, dead to the world," Reagan recalled. The boy's first instinct, he admitted, was to "pretend he wasn't there." Something else, though, stirred in him on that cold night. This was the time to take command, "the first moment of accepting responsibility." So he saved the old man, bringing him in out of the cold.

Reagan liked playing the rescuer. His years as a lifeguard on the Rock River were the stuff of legend -- a legend Reagan carefully cultivated even then. When he pulled a swimmer from danger, Reagan would put a notch on a log, in much the same way he would later keep track of his box office in Hollywood or count votes in Sacramento and Washington. From his youth forward, he was never offstage for long. Moving from lifeguard to sportscaster to movie star to union president to GE spokesman to TV host to governor to president, he undertook roles in which he was the central player -- and he was never, as a bitter House Speaker Thomas P. "Tip" O'Neill once said, "lazy and short-sighted." Reagan may not have been the most brilliant man in the room, but he was generally the most powerful, and that he made his rise through the world look so effortless is a tribute to his grace. As for his alleged short-sightedness -- well, you do not hear many Americans speaking of how we are living in the long shadow of Tip O'Neill.

The Reagan that Reeves gives us is a man who did more real work in the White House than many people believed at the time. Reagan hit the phones to sell his tax cuts and programs to Congress, representative by representative, recording his impressions of each call. He was, Reeves reports, the first president since Eisenhower to sit through a nuclear wargame. He understood the importance of words -- not just images, as his critics reflexively say, but words. He said what he thought, Reeves writes, perhaps more than any other politician of the era. (He had a simple strategy for dealing with the Soviets, he said: "We win, they lose!")

Though Reeves argues Reagan was a man not of vision but of imagination, the two are inseparable, at least in Reagan's case. Reagan imagined the world he would like to make, and by convincing so many others of the virtues of that world, he led us there -- not only in fantasy but in reality, by tough, one-on-one negotiation with a Democratic House of Representatives and the Kremlin. To dream it takes imagination; to make it happen, as Reagan did, requires vision.

He thought big but could also be an effective retail politician. When Jerry Falwell attacked the Supreme Court nomination of Sandra Day O'Connor, Sen. Barry Goldwater (R-Ariz.) remarked, "Every good Christian ought to kick Falwell right in the ass." Reagan probably agreed but took what Reeves calls a "less direct" approach. In a call to Falwell, Reagan said: "Jerry, I am going to put forth a lady on the Supreme Court. You don't know anything about her. Nobody does, but I want you to trust my judgment on this one." Falwell immediately caved. "I'll do that, sir," he replied.

The foibles are all here, too. Reagan found refuge in Hollywood stories when he was uncomfortable or wanted to deflect something or someone. On Inauguration Day in 1981, on the awkward ride with President Carter up Pennsylvania Avenue, Reagan told old Tinseltown tales. Afterward Carter asked his communications director, Gerald Rafshoon: "He kept talking about Jack Warner. Who's Jack Warner?" Accustomed to rotating movie casts and crews, Reagan broadly referred to those around him as "the fellas," never forging intimate personal bonds with those who served him. (Nancy was all he needed.) Once, when his longtime aide and image-meister Michael Deaver heard Reagan say "Thank you," Deaver figured it was "about the third time in twenty years he had heard Reagan use those words in private." When an aide wrote in a memo for the president about "members of the FDR" -- the Spanish acronym for the Democratic Revolutionary Front, the political arm of El Salvador's insurgents -- Richard V. Allen, then the national security adviser, sent it back with an irked note: "How many times must I mention that items like 'FDR' are not household terms for the President? Use an asterisk and explain, dammit!"

The drama is vivid throughout. There is White House aide Richard Darman in the Situation Room on the day Reagan was shot, collecting documents about invoking the 25th Amendment and locking them in his safe. There is Secretary of State Alexander Haig, in an early discussion of Cuban influence in Central America, telling Reagan, "Give me the word and I'll turn that island into a [expletive] parking lot." (A shocked Deaver tried to make sure the president was never left alone with his chief diplomat.) There is Reagan himself, worn down by the first lady's relentless talk about this problem or that aide, barking, "That's enough, Nancy!" There is pollster Richard Wirthlin telling the president in early 1983 that his approval rating has hit its lowest point ever, prompting Reagan to reply with a smile: "I know what I can do about that. I'll go out and get shot again." There is the lovely detail that, as Reagan finished calling the Soviet Union "the focus of evil in the modern world" in a March 1983 speech to the National Association of Evangelicals, the band struck up "Onward, Christian Soldiers." There is a churlish president, after blowing the first 1984 debate with Walter Mondale, grumbling, "If I'd had as much make-up as he did I would have looked younger, too." And there is Reagan in Reykjavik murmuring at a televised image of Mikhail Gorbachev's arrival in Iceland for their 1986 summit, "When you stop trying to take over the world, then maybe we can do some business."

Reagan had his dark hours too. He mangled facts; caricatured welfare recipients; opened his 1980 campaign in Philadelphia, Miss., where three civil rights workers were murdered for trying to overthrow Jim Crow; presided over the grim recession of 1981-82; seemed uncaring about the emerging HIV/AIDS crisis; and, in the Iran-contra scandal, came close to -- and may have committed -- impeachable offenses. It is Reeves's achievement that Reagan's complexities and contradictions seem plausible; we can see him in full measure, the good and the bad.

This book could also be usefully read at the highest levels of the Bush administration. Reagan was much more complicated than the Gipper of popular conservative mythology. He was not an uncompromising, inflexible cold warrior who ignored the natterings of critics and the press. He was, instead, a deft negotiator -- the old Screen Actors Guild president doing his thing. Moreover, through Nancy, he knew what Washington was saying about him -- and corrected course when he had to.

In April 1986, at a Library of Congress symposium on the presidency, former secretary of state Henry Kissinger -- a man, Reeves points out, who had been "routinely attacked by Ronald Reagan over the years" for insufficient idealism in foreign policy -- said: "You ask yourself 'How did it ever occur to anybody that Reagan should be governor, much less President?' On the other hand, you have to say also that a man who dominated California for eight years, and now dominates the American political process for five and a half years, as he has, cannot be a trivial figure. It is perfectly possible history will judge Reagan as a most significant President."

It will indeed. Readers are in Reeves's debt for this entertaining, deeply reported and revealing portrait of a man destined to be in death what he was in life: a figure of enduring fascination.

Jon Meacham, the managing editor of Newsweek, is the author of the forthcoming "American Gospel: God, the Founding Fathers, and the Making of a Nation."

© 2006 The Washington Post Company

Thursday, January 05, 2006

It's the Demography, Stupid

THE CENTURY AHEAD

It's the Demography, Stupid
The real reason the West is in danger of extinction.

BY MARK STEYN
Wednesday, January 4, 2006 12:01 a.m.

Most people reading this have strong stomachs, so let me lay it out as baldly as I can: Much of what we loosely call the Western world will not survive this century, and much of it will effectively disappear within our lifetimes, including many if not most Western European countries. There'll probably still be a geographical area on the map marked as Italy or the Netherlands--probably--just as in Istanbul there's still a building called St. Sophia's Cathedral. But it's not a cathedral; it's merely a designation for a piece of real estate. Likewise, Italy and the Netherlands will merely be designations for real estate. The challenge for those who reckon Western civilization is on balance better than the alternatives is to figure out a way to save at least some parts of the West.

One obstacle to doing that is that, in the typical election campaign in your advanced industrial democracy, the political platforms of at least one party in the United States and pretty much all parties in the rest of the West are largely about what one would call the secondary impulses of society--government health care, government day care (which Canada's thinking of introducing), government paternity leave (which Britain's just introduced). We've prioritized the secondary impulses over the primary ones: national defense, family, faith and, most basic of all, reproductive activity--"Go forth and multiply," because if you don't you won't be able to afford all those secondary-impulse issues, like cradle-to-grave welfare.

Americans sometimes don't understand how far gone most of the rest of the developed world is down this path: In the Canadian and most Continental cabinets, the defense ministry is somewhere an ambitious politician passes through on his way up to important jobs like the health department. I don't think Don Rumsfeld would regard it as a promotion if he were moved to Health and Human Services.

The design flaw of the secular social-democratic state is that it requires a religious-society birthrate to sustain it. Post-Christian hyperrationalism is, in the objective sense, a lot less rational than Catholicism or Mormonism. Indeed, in its reliance on immigration to ensure its future, the European Union has adopted a 21st-century variation on the strategy of the Shakers, who were forbidden from reproducing and thus could increase their numbers only by conversion. The problem is that secondary-impulse societies mistake their weaknesses for strengths--or, at any rate, virtues--and that's why they're proving so feeble at dealing with a primal force like Islam.

Speaking of which, if we are at war--and half the American people and significantly higher percentages in Britain, Canada and Europe don't accept that proposition--then what exactly is the war about?

We know it's not really a "war on terror." Nor is it, at heart, a war against Islam, or even "radical Islam." The Muslim faith, whatever its merits for the believers, is a problematic business for the rest of us. There are many trouble spots around the world, but as a general rule, it's easy to make an educated guess at one of the participants: Muslims vs. Jews in "Palestine," Muslims vs. Hindus in Kashmir, Muslims vs. Christians in Africa, Muslims vs. Buddhists in Thailand, Muslims vs. Russians in the Caucasus, Muslims vs. backpacking tourists in Bali. Like the environmentalists, these guys think globally but act locally.

Yet while Islamism is the enemy, it's not what this thing's about. Radical Islam is an opportunistic infection, like AIDS: It's not the HIV that kills you, it's the pneumonia you get when your body's too weak to fight it off. When the jihadists engage with the U.S. military, they lose--as they did in Afghanistan and Iraq. If this were like World War I with those fellows in one trench and us in ours facing them over some boggy piece of terrain, it would be over very quickly. Which the smarter Islamists have figured out. They know they can never win on the battlefield, but they figure there's an excellent chance they can drag things out until Western civilization collapses in on itself and Islam inherits by default.

That's what the war's about: our lack of civilizational confidence. As a famous Arnold Toynbee quote puts it: "Civilizations die from suicide, not murder"--as can be seen throughout much of "the Western world" right now. The progressive agenda--lavish social welfare, abortion, secularism, multiculturalism--is collectively the real suicide bomb. Take multiculturalism. The great thing about multiculturalism is that it doesn't involve knowing anything about other cultures--the capital of Bhutan, the principal exports of Malawi, who cares? All it requires is feeling good about other cultures. It's fundamentally a fraud, and I would argue was subliminally accepted on that basis. Most adherents to the idea that all cultures are equal don't want to live in anything but an advanced Western society. Multiculturalism means your kid has to learn some wretched native dirge for the school holiday concert instead of getting to sing "Rudolph the Red-Nosed Reindeer" or that your holistic masseuse uses techniques developed from Native American spirituality, but not that you or anyone you care about should have to live in an African or Native American society. It's a quintessential piece of progressive humbug.

Then September 11 happened. And bizarrely the reaction of just about every prominent Western leader was to visit a mosque: President Bush did, the prince of Wales did, the prime minister of the United Kingdom did, the prime minister of Canada did . . . The premier of Ontario didn't, and so 20 Muslim community leaders had a big summit to denounce him for failing to visit a mosque. I don't know why he didn't. Maybe there was a big backlog, it was mosque drive time, prime ministers in gridlock up and down the freeway trying to get to the Sword of the Infidel-Slayer Mosque on Elm Street. But for whatever reason he couldn't fit it into his hectic schedule. Ontario's citizenship minister did show up at a mosque, but the imams took that as a great insult, like the Queen sending Fergie to open the Commonwealth Games. So the premier of Ontario had to hold a big meeting with the aggrieved imams to apologize for not going to a mosque and, as the Toronto Star reported it, "to provide them with reassurance that the provincial government does not see them as the enemy."

Anyway, the get-me-to-the-mosque-on-time fever died down, but it set the tone for our general approach to these atrocities. The old definition of a nanosecond was the gap between the traffic light changing in New York and the first honk from a car behind. The new definition is the gap between a terrorist bombing and the press release from an Islamic lobby group warning of a backlash against Muslims. In most circumstances, it would be considered appallingly bad taste to deflect attention from an actual "hate crime" by scaremongering about a purely hypothetical one. Needless to say, there is no campaign of Islamophobic hate crimes. If anything, the West is awash in an epidemic of self-hate crimes. A commenter on Tim Blair's Web site in Australia summed it up in a note-perfect parody of a Guardian headline: "Muslim Community Leaders Warn of Backlash from Tomorrow Morning's Terrorist Attack." Those community leaders have the measure of us.

Radical Islam is what multiculturalism has been waiting for all along. In "The Survival of Culture," I quoted the eminent British barrister Helena Kennedy, Queen's Counsel. Shortly after September 11, Baroness Kennedy argued on a BBC show that it was too easy to disparage "Islamic fundamentalists." "We as Western liberals too often are fundamentalist ourselves," she complained. "We don't look at our own fundamentalisms."

Well, said the interviewer, what exactly would those Western liberal fundamentalisms be? "One of the things that we are too ready to insist upon is that we are the tolerant people and that the intolerance is something that belongs to other countries like Islam. And I'm not sure that's true."

Hmm. Lady Kennedy was arguing that our tolerance of our own tolerance is making us intolerant of other people's intolerance, which is intolerable. And, unlikely as it sounds, this has now become the highest, most rarefied form of multiculturalism. So you're nice to gays and the Inuit? Big deal. Anyone can be tolerant of fellows like that, but tolerance of intolerance gives an even more intense frisson of pleasure to the multiculti masochists. In other words, just as the AIDS pandemic greatly facilitated societal surrender to the gay agenda, so 9/11 is greatly facilitating our surrender to the most extreme aspects of the multicultural agenda.

For example, one day in 2004, a couple of Canadians returned home, to Lester B. Pearson International Airport in Toronto. They were the son and widow of a fellow called Ahmed Said Khadr, who back on the Pakistani-Afghan frontier was known as "al-Kanadi." Why? Because he was the highest-ranking Canadian in al Qaeda--plenty of other Canucks in al Qaeda, but he was the Numero Uno. In fact, one could argue that the Khadr family is Canada's principal contribution to the war on terror. Granted they're on the wrong side (if you'll forgive my being judgmental) but no one can argue that they aren't in the thick of things. One of Mr. Khadr's sons was captured in Afghanistan after killing a U.S. Special Forces medic. Another was captured and held at Guantanamo. A third blew himself up while killing a Canadian soldier in Kabul. Pa Khadr himself died in an al Qaeda shootout with Pakistani forces in early 2004. And they say we Canadians aren't doing our bit in this war!

In the course of the fatal shootout of al-Kanadi, his youngest son was paralyzed. And, not unreasonably, Junior didn't fancy a prison hospital in Peshawar. So Mrs. Khadr and her boy returned to Toronto so he could enjoy the benefits of Ontario government health care. "I'm Canadian, and I'm not begging for my rights," declared the widow Khadr. "I'm demanding my rights."

As they always say, treason's hard to prove in court, but given the circumstances of Mr. Khadr's death it seems clear that not only was he providing "aid and comfort to the Queen's enemies" but that he was, in fact, the Queen's enemy. The Princess Patricia's Canadian Light Infantry, the Royal 22nd Regiment and other Canucks have been participating in Afghanistan, on one side of the conflict, and the Khadr family had been over there participating on the other side. Nonetheless, the prime minister of Canada thought Boy Khadr's claim on the public health system was an excellent opportunity to demonstrate his own deep personal commitment to "diversity." Asked about the Khadrs' return to Toronto, he said, "I believe that once you are a Canadian citizen, you have the right to your own views and to disagree."

That's the wonderful thing about multiculturalism: You can choose which side of the war you want to fight on. When the draft card arrives, just tick "home team" or "enemy," according to taste. The Canadian prime minister is a typical late-stage Western politician: He could have said, well, these are contemptible people and I know many of us are disgusted at the idea of our tax dollars being used to provide health care for a man whose Canadian citizenship is no more than a flag of convenience, but unfortunately that's the law and, while we can try to tighten it, it looks like this lowlife's got away with it. Instead, his reflex instinct was to proclaim this as a wholehearted demonstration of the virtues of the multicultural state. Like many enlightened Western leaders, the Canadian prime minister will be congratulating himself on his boundless tolerance even as the forces of intolerance consume him.

That, by the way, is the one point of similarity between the jihad and conventional terrorist movements like the IRA or ETA. Terror groups persist because of a lack of confidence on the part of their targets: The IRA, for example, calculated correctly that the British had the capability to smash them totally but not the will. So they knew that while they could never win militarily, they also could never be defeated. The Islamists have figured similarly. The only difference is that most terrorist wars are highly localized. We now have the first truly global terrorist insurgency because the Islamists view the whole world the way the IRA view the bogs of Fermanagh: They want it, and they've calculated that our entire civilization lacks the will to see them off.

We spend a lot of time at The New Criterion attacking the elites, and we're right to do so. The commanding heights of the culture have behaved disgracefully for the last several decades. But if it were just a problem with the elites, it wouldn't be that serious: The mob could rise up and hang 'em from lampposts--a scenario that's not unlikely in certain Continental countries. But the problem now goes way beyond the ruling establishment. The annexation by government of most of the key responsibilities of life--child-raising, taking care of your elderly parents--has profoundly changed the relationship between the citizen and the state. At some point--I would say socialized health care is a good marker--you cross a line, and it's very hard then to persuade a citizenry enjoying that much government largesse to cross back. In National Review recently, I took issue with that line Gerald Ford always uses to ingratiate himself with conservative audiences: "A government big enough to give you everything you want is big enough to take away everything you have." Actually, you run into trouble long before that point: A government big enough to give you everything you want still isn't big enough to get you to give anything back. That's what the French and German political classes are discovering.

Go back to that list of local conflicts I mentioned. The jihad has held out a long time against very tough enemies. If you're not shy about taking on the Israelis, the Russians, the Indians and the Nigerians, why wouldn't you fancy your chances against the Belgians and Danes and New Zealanders?

So the jihadists are for the most part doing no more than giving us a prod in the rear as we sleepwalk to the cliff. When I say "sleepwalk," it's not because we're a blasé culture. On the contrary, one of the clearest signs of our decline is the way we expend so much energy worrying about the wrong things. If you've read Jared Diamond's bestselling book "Collapse: How Societies Choose to Fail or Succeed," you'll know it goes into a lot of detail about Easter Island going belly up because they chopped down all their trees. Apparently that's why they're not a G-8 member or on the U.N. Security Council. Same with the Greenlanders and the Mayans and Diamond's other curious choices of "societies." Indeed, as the author sees it, pretty much every society collapses because it chops down its trees.

Poor old Diamond can't see the forest because of his obsession with the trees. (Russia's collapsing even as it's undergoing reforestation.) One way "societies choose to fail or succeed" is by choosing what to worry about. The Western world has delivered more wealth and more comfort to more of its citizens than any other civilization in history, and in return we've developed a great cult of worrying. You know the classics of the genre: In 1968, in his bestselling book "The Population Bomb," the eminent scientist Paul Ehrlich declared: "In the 1970s the world will undergo famines--hundreds of millions of people are going to starve to death." In 1972, in their landmark study "The Limits to Growth," the Club of Rome announced that the world would run out of gold by 1981, of mercury by 1985, tin by 1987, zinc by 1990, petroleum by 1992, and copper, lead and gas by 1993.

None of these things happened. In fact, quite the opposite is happening. We're pretty much awash in resources, but we're running out of people--the one truly indispensable resource, without which none of the others matter. Russia's the most obvious example: it's the largest country on earth, it's full of natural resources, and yet it's dying--its population is falling calamitously.

The default mode of our elites is that anything that happens--from terrorism to tsunamis--can be understood only as deriving from the perniciousness of Western civilization. As Jean-Francois Revel wrote, "Clearly, a civilization that feels guilty for everything it is and does will lack the energy and conviction to defend itself."

And even though none of the prognostications of the eco-doom blockbusters of the 1970s came to pass, all that means is that 30 years on, the end of the world has to be rescheduled. The amended estimated time of arrival is now 2032. That's to say, in 2002, the United Nations Global Environmental Outlook predicted "the destruction of 70 percent of the natural world in thirty years, mass extinction of species. . . . More than half the world will be afflicted by water shortages, with 95 percent of people in the Middle East with severe problems . . . 25 percent of all species of mammals and 10 percent of birds will be extinct . . ."

Etc., etc., for 450 pages. Or to cut to the chase, as the Guardian headlined it, "Unless We Change Our Ways, The World Faces Disaster."

Well, here's my prediction for 2032: unless we change our ways the world faces a future . . . where the environment will look pretty darn good. If you're a tree or a rock, you'll be living in clover. It's the Italians and the Swedes who'll be facing extinction and the loss of their natural habitat.

There will be no environmental doomsday. Oil, carbon dioxide emissions, deforestation: none of these things is worth worrying about. What's worrying is that we spend so much time worrying about things that aren't worth worrying about that we don't worry about the things we should be worrying about. For 30 years, we've had endless wake-up calls for things that aren't worth waking up for. But for the very real, remorseless shifts in our society--the ones truly jeopardizing our future--we're sound asleep. The world is changing dramatically right now, and hysterical experts twitter about a hypothetical decrease in the Antarctic krill that might conceivably possibly happen so far down the road there are unlikely to be any Italian or Japanese enviro-worriers left alive to be devastated by it.

In a globalized economy, the environmentalists want us to worry about First World capitalism imposing its ways on bucolic, pastoral, primitive Third World backwaters. Yet, insofar as "globalization" is a threat, the real danger is precisely the opposite--that the peculiarities of the backwaters can leap instantly to the First World. Pigs are valued assets and sleep in the living room in rural China--and next thing you know an unknown respiratory disease is killing people in Toronto, just because someone got on a plane. That's the way to look at Islamism: We fret about McDonald's and Disney, but the big globalization success story is the way the Saudis have taken what was 80 years ago a severe but obscure and unimportant strain of Islam practiced by Bedouins of no fixed abode and successfully exported it to the heart of Copenhagen, Rotterdam, Manchester, Buffalo . . .

What's the better bet? A globalization that exports cheeseburgers and pop songs or a globalization that exports the fiercest aspects of its culture? When it comes to forecasting the future, the birthrate is the nearest thing to hard numbers. If only a million babies are born in 2006, it's hard to have two million adults enter the workforce in 2026 (or 2033, or 2037, or whenever they get around to finishing their Anger Management and Queer Studies degrees). And the hard data on babies around the Western world is that they're running out a lot faster than the oil is. "Replacement" fertility rate--i.e., the number you need for merely a stable population, not getting any bigger, not getting any smaller--is 2.1 babies per woman. Some countries are well above that: the global fertility leader, Somalia, is 6.91, Niger 6.83, Afghanistan 6.78, Yemen 6.75. Notice what those nations have in common?

Scroll way down to the bottom of the Hot One Hundred top breeders and you'll eventually find the United States, hovering just at replacement rate with 2.07 births per woman. Ireland is 1.87, New Zealand 1.79, Australia 1.76. But Canada's fertility rate is down to 1.5, well below replacement rate; Germany and Austria are at 1.3, the brink of the death spiral; Russia and Italy are at 1.2; Spain 1.1, about half replacement rate. That's to say, Spain's population is halving every generation. By 2050, Italy's population will have fallen by 22%, Bulgaria's by 36%, Estonia's by 52%. In America, demographic trends suggest that the blue states ought to apply for honorary membership of the EU: In the 2004 election, John Kerry won the 16 states with the lowest birthrates; George W. Bush took 25 of the 26 states with the highest. By 2050, there will be 100 million fewer Europeans, 100 million more Americans--and mostly red-state Americans.
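
A rough back-of-the-envelope way to see why these fertility numbers imply the generational shrinkage the column describes (this illustrative calculation is mine, not the author's, and it ignores migration and mortality before childbearing age): if roughly 2.1 births per woman keep a population stable, then each generation is approximately TFR/2.1 times the size of the one before it.

\[
\frac{N_{\text{next generation}}}{N_{\text{current generation}}} \approx \frac{\mathrm{TFR}}{2.1},
\qquad\text{so}\qquad
\text{Spain: } \frac{1.1}{2.1} \approx 0.52,\quad
\text{Germany: } \frac{1.3}{2.1} \approx 0.62,\quad
\text{U.S.: } \frac{2.07}{2.1} \approx 0.99.
\]

On that crude arithmetic, Spain's factor of roughly 0.52 is the "halving every generation" in the text, and compounding it over two generations leaves a cohort only about a quarter the size of its grandparents'.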

As fertility shrivels, societies get older--and Japan and much of Europe are set to get older than any functioning societies have ever been. And we know what comes after old age. These countries are going out of business--unless they can find the will to change their ways. Is that likely? I don't think so. If you look at European election results--most recently in Germany--it's hard not to conclude that, while voters are unhappy with their political establishments, they're unhappy mainly because they resent being asked to reconsider their government benefits and, no matter how unaffordable they may be a generation down the road, they have no intention of seriously reconsidering them. The Scottish executive recently backed down from a proposal to raise the retirement age of Scottish public workers. It's presently 60, which is nice but unaffordable. But the reaction of the average Scots worker is that that's somebody else's problem. The average German worker now puts in 22% fewer hours per year than his American counterpart, and no politician who wishes to remain electorally viable will propose closing the gap in any meaningful way.

This isn't a deep-rooted cultural difference between the Old World and the New. It dates back all the way to, oh, the 1970s. If one wanted to allocate blame, one could argue that it's a product of the U.S. military presence, the American security guarantee that liberated European budgets: instead of having to spend money on guns, they could concentrate on butter, and buttering up the voters. If Washington's problem with Europe is that these are not serious allies, well, whose fault is that? Who, in the years after the Second World War, created NATO as a postmodern military alliance? The "free world," as the Americans called it, was a free ride for everyone else. And having been absolved from the primal responsibilities of nationhood, it's hardly surprising that European nations have little wish to reshoulder them. In essence, the lavish levels of public health care on the Continent are subsidized by the American taxpayer. And this long-term softening of large sections of the West makes them ill-suited to resisting a primal force like Islam.

There is no "population bomb." There never was. Birthrates are declining all over the world--eventually every couple on the planet may decide to opt for the Western yuppie model of one designer baby at the age of 39. But demographics is a game of last man standing. The groups that succumb to demographic apathy last will have a huge advantage. Even in 1968 Paul Ehrlich and his ilk should have understood that their so-called population explosion was really a massive population adjustment. Of the increase in global population between 1970 and 2000, the developed world accounted for under 9%, while the Muslim world accounted for 26%. Between 1970 and 2000, the developed world declined from just under 30% of the world's population to just over 20%, while the Muslim nations increased from about 15% to 20%.

Nineteen seventy doesn't seem that long ago. If you're the age many of the chaps running the Western world today are wont to be, your pants are narrower than they were back then and your hair's less groovy, but the landscape of your life--the look of your house, the layout of your car, the shape of your kitchen appliances, the brand names of the stuff in the fridge--isn't significantly different. Aside from the Internet and the cell phone and the CD, everything in your world seems pretty much the same but slightly modified.

And yet the world is utterly altered. Just to recap those bald statistics: In 1970, the developed world had twice as big a share of the global population as the Muslim world: 30% to 15%. By 2000, they were the same: each had about 20%.

And by 2020?

So the world's people are a lot more Islamic than they were back then and a lot less "Western." Europe is significantly more Islamic, having taken in during that period some 20 million Muslims (officially)--or the equivalents of the populations of four European Union countries (Ireland, Belgium, Denmark and Estonia). Islam is the fastest-growing religion in the West: In the U.K., more Muslims than Christians attend religious services each week.

Can these trends continue for another 30 years without having consequences? Europe by the end of this century will be a continent after the neutron bomb: The grand buildings will still be standing, but the people who built them will be gone. We are living through a remarkable period: the self-extinction of the races who, for good or ill, shaped the modern world.

What will Europe be like at the end of this process? Who knows? On the one hand, there's something to be said for the notion that America will find an Islamified Europe more straightforward to deal with than M. Chirac, Herr Schroeder & Co. On the other hand, given Europe's track record, getting there could be very bloody. But either way this is the real battlefield. The al Qaeda nutters can never find enough suicidal pilots to fly enough planes into enough skyscrapers to topple America. But unlike us, the Islamists think long-term, and, given their demographic advantage in Europe and the tone of the emerging Muslim lobby groups there, much of what they're flying planes into buildings for they're likely to wind up with just by waiting a few more years. The skyscrapers will be theirs; why knock 'em over?

The latter half of the decline and fall of great civilizations follows a familiar pattern: affluence, softness, decadence, extinction. You don't notice yourself slipping through those stages because usually there's a seductive pol on hand to provide the age with a sly, self-deluding slogan--like Bill Clinton's "It's about the future of all our children." We on the right spent the 1990s gleefully mocking Mr. Clinton's tedious invocation, drizzled like syrup over everything from the Kosovo war to highway appropriations. But most of the rest of the West can't even steal his lame bromides: A society that has no children has no future.

Permanence is the illusion of every age. In 1913, no one thought the Russian, Austrian, German and Turkish empires would be gone within half a decade. Seventy years on, all those fellows who dismissed Reagan as an "amiable dunce" (in Clark Clifford's phrase) assured us the Soviet Union was likewise here to stay. The CIA analysts' position was that East Germany was the ninth biggest economic power in the world. In 1987 there was no rash of experts predicting the imminent fall of the Berlin Wall, the Warsaw Pact and the USSR itself.

Yet, even by the minimal standards of these wretched precedents, so-called post-Christian civilizations--as a prominent EU official described his continent to me--are more prone than traditional societies to mistake the present tense for a permanent feature. Religious cultures have a much greater sense of both past and future, as we did a century ago, when we spoke of death as joining "the great majority" in "the unseen world." But if secularism's starting point is that this is all there is, it's no surprise that, consciously or not, its adherents invest the here and now with far greater powers of endurance than it's ever had. The idea that progressive Euro-welfarism is the permanent resting place of human development was always foolish; we now know that it's suicidally so.

To avoid collapse, European nations will need to take in immigrants at a rate no stable society has ever attempted. The CIA is predicting the EU will collapse by 2020. Given that the CIA's got pretty much everything wrong for half a century, that would suggest the EU is a shoo-in to be the colossus of the new millennium. But even a flop spook is right twice a generation. If anything, the date of EU collapse is rather a cautious estimate. It seems more likely that within the next couple of European election cycles, the internal contradictions of the EU will manifest themselves in the usual way, and that by 2010 we'll be watching burning buildings, street riots and assassinations on American network news every night. Even if they avoid that, the idea of a childless Europe ever rivaling America militarily or economically is laughable. Sometime this century there will be 500 million Americans, and what's left in Europe will either be very old or very Muslim. Japan faces the same problem: Its population is already in absolute decline, the first gentle slope of a death spiral it will be unlikely ever to climb out of. Will Japan be an economic powerhouse if it's populated by Koreans and Filipinos? Very possibly. Will Germany if it's populated by Algerians? That's a trickier proposition.

Best-case scenario? The Continent winds up as Vienna with Swedish tax rates.

Worst-case scenario: Sharia, circa 2040; semi-Sharia, a lot sooner--and we're already seeing a drift in that direction.

In July 2003, speaking to the U.S. Congress, Tony Blair remarked: "As Britain knows, all predominant power seems for a time invincible but, in fact, it is transient. The question is: What do you leave behind?"

Excellent question. Britannia will never again wield the unrivalled power she enjoyed at her imperial apogee, but the Britannic inheritance endures, to one degree or another, in many of the key regional players in the world today--Australia, India, South Africa--and in dozens of island statelets from the Caribbean to the Pacific. If China ever takes its place as an advanced nation, it will be because the People's Republic learns more from British Hong Kong than Hong Kong learns from the Little Red Book. And of course the dominant power of our time derives its political character from 18th-century British subjects who took English ideas a little further than the mother country was willing to go.

A decade and a half after victory in the Cold War and end-of-history triumphalism, the "what do you leave behind?" question is more urgent than most of us expected. "The West," as a concept, is dead, and the West, as a matter of demographic fact, is dying.

What will London--or Paris, or Amsterdam--be like in the mid-'30s? If European politicians make no serious attempt this decade to wean the populace off their unsustainable 35-hour weeks, retirement at 60, etc., then to keep the present level of pensions and health benefits the EU will need to import so many workers from North Africa and the Middle East that it will be well on its way to majority Muslim by 2035. As things stand, Muslims are already the primary source of population growth in English cities. Can a society become increasingly Islamic in its demographic character without becoming increasingly Islamic in its political character?

This ought to be the left's issue. I'm a conservative--I'm not entirely on board with the Islamist program when it comes to beheading sodomites and so on, but I agree Britney Spears dresses like a slut: I'm with Mullah Omar on that one. Why then, if your big thing is feminism or abortion or gay marriage, are you so certain that the cult of tolerance will prevail once the biggest demographic in your society is cheerfully intolerant? Who, after all, are going to be the first victims of the West's collapsed birthrates? Even if one were to take the optimistic view that Europe will be able to resist the creeping imposition of Sharia currently engulfing Nigeria, it remains the case that the Muslim world is not notable for setting much store by "a woman's right to choose," in any sense.

I watched that big abortion rally in Washington in 2004, where Ashley Judd and Gloria Steinem were cheered by women waving "Keep your Bush off my bush" placards, and I thought it was the equivalent of a White Russian tea party in 1917. By prioritizing a "woman's right to choose," Western women are delivering their societies into the hands of fellows far more patriarchal than a 1950s sitcom dad. If any of those women marching for their "reproductive rights" still have babies, they might like to ponder demographic realities: A little girl born today will be unlikely, at the age of 40, to be free to prance around demonstrations in Eurabian Paris or Amsterdam chanting "Hands off my bush!"

Just before the 2004 election, that eminent political analyst Cameron Diaz appeared on the Oprah Winfrey show to explain what was at stake:

"Women have so much to lose. I mean, we could lose the right to our bodies. . . . If you think that rape should be legal, then don't vote. But if you think that you have a right to your body," she advised Oprah's viewers, "then you should vote."

Poor Cameron. A couple of weeks later, the scary people won. She lost all rights to her body. Unlike Alec Baldwin, she couldn't even move to France. Her body was grounded in Terminal D.

But, after framing the 2004 presidential election as a referendum on the right to rape, Miss Diaz might be interested to know that men enjoy that right under many Islamic legal codes around the world. In his book "The Empty Cradle," Philip Longman asks: "So where will the children of the future come from? Increasingly they will come from people who are at odds with the modern world. Such a trend, if sustained, could drive human culture off its current market-driven, individualistic, modernist course, gradually creating an anti-market culture dominated by fundamentalism--a new Dark Ages."

Bottom line for Cameron Diaz: There are worse things than John Ashcroft out there.

Mr. Longman's point is well taken. The refined antennae of Western liberals mean that whenever one raises the question of whether there will be any Italians living in the geographical zone marked as Italy a generation or three hence, they cry, "Racism!" To fret about what proportion of the population is "white" is grotesque and inappropriate. But it's not about race, it's about culture. If 100% of your population believes in liberal pluralist democracy, it doesn't matter whether 70% of them are "white" or only 5% are. But if one part of your population believes in liberal pluralist democracy and the other doesn't, then it becomes a matter of great importance whether the part that does is 90% of the population or only 60%, 50%, 45%.

Since the president unveiled the so-called Bush Doctrine--the plan to promote liberty throughout the Arab world--innumerable "progressives" have routinely asserted that there's no evidence Muslims want liberty and, indeed, that Islam is incompatible with democracy. If that's true, it's a problem not for the Middle East today but for Europe the day after tomorrow. According to a poll taken in 2004, over 60% of British Muslims want to live under Sharia--in the United Kingdom. If a population "at odds with the modern world" is the fastest-breeding group on the planet--if there are more Muslim nations, more fundamentalist Muslims within those nations, more and more Muslims within non-Muslim nations, and more and more Muslims represented in more and more transnational institutions--how safe a bet is the survival of the "modern world"?

Not good.

"What do you leave behind?" asked Tony Blair. There will only be very few and very old ethnic Germans and French and Italians by the midpoint of this century. What will they leave behind? Territories that happen to bear their names and keep up some of the old buildings? Or will the dying European races understand that the only legacy that matters is whether the peoples who will live in those lands after them are reconciled to pluralist, liberal democracy? It's the demography, stupid. And, if they can't muster the will to change course, then "What do you leave behind?" is the only question that matters.

Mr. Steyn is a syndicated columnist and theater critic for The New Criterion, in whose January issue this article appears.

Copyright © 2006 Dow Jones & Company, Inc. All Rights Reserved.