The path from supply-side economics to deficit spending

I got an e-mail a couple of weeks ago from Ben Etheridge, a high school senior in Marietta, Georgia, who had come across a 2003 article I wrote on the Bush tax cuts. Ben said the article was “more helpful in trying understand supply side economics than many other sources on the Internet” but that, well, he still didn’t understand supply-side economics.

This may indicate that I don’t understand the subject either, but Ben asked me if I could take another stab at explaining it. With the midterm elections less than two weeks away for a Congress loaded with apparent supply-siders, now seems as good a time as any to try:

(Sadly, the great popularizer of supply-side economics, former Wall Street Journal editorial writer Jude Wanniski, is no longer around to critique what I come up with–although you can read his annotation of my 2003 article here.)

At its core, supply-side economics is the economics that reigned before John Maynard Keynes came along. You could also call it traditional economics, neoclassical economics, or mainstream economics. It assumes that people respond rationally to economic incentives, and that unfettered markets arrive at something close to optimal results. Saving, in this worldview, is a good thing–because savings are always put to use in productive investments that make the economy grow.

During the Great Depression of the 1930s, with banks failing and people stuffing the money they still had in mattresses, English economist/investor Keynes became convinced that savings weren’t always put to good use and government needed to intervene to stimulate economic activity with tax cuts or–better yet, since the money from the tax cuts might get stuffed in mattresses too–spending.

Keynes’s argument was vindicated by the American experience during World War II, when massive deficit spending brought full employment and strong economic growth. A few years later, monetary policy was added to the picture–many economists came to believe that the Federal Reserve could reliably fight unemployment by keeping interest rates low (and putting up with moderate inflation). Economic policymaking in the U.S. thus came to focus on manipulating demand through taxing, spending and tweaking interest rates. This wasn’t just a Democrat thing. Declared Republican President Richard Nixon in 1971: “Now, I am a Keynesian.”

Not long after Nixon said that, though, Keynesianism seemed to stop working. Despite government deficits and high inflation, the economy sputtered. The strong growth in productivity (usually measured as economic output per hour worked) that had brought vastly increased prosperity from the 1940s through the 1960s slowed to a Perimeter-at-rush-hour crawl.

To explain why this was happening, economists found themselves returning to pre-Keynesian ideas about incentives and the importance of savings and investment. I think it’s fair to say that most academic economists now think that while Keynes was onto something about short-run economic fluctuations, it’s more productive to focus on what drives long-run growth. That means things like the incentive effects of tax policy, the human capital created by education, and the ways in which legal and regulatory systems enable investment and entrepreneurship. It’s the supply side (labor supply, capital supply, etc.) that interests them more than the demand side.

Most of these economists would, however, cringe at being called “supply-siders.” That’s partly because the term has become identified with the Republican Party and, even though economists are perceived as the right-wingers on most college campuses, they’re still on college campuses, which means they’re usually Democrats. But it’s also because Wanniski attached the label to a wildly oversimplified version of traditional economics in which the only thing that mattered was tax policy, and tax cuts were always a good idea.

Wanniski arrived at the Journal editorial page in 1972 knowing nothing about economics. Watching how flummoxed the Keynesians were by the strange events that followed, he soon concluded that most economists didn’t know much about economics either. But he was impressed by two professors who had seen at least some of the troubles of the mid-1970s coming: Robert Mundell of Columbia University (who won a Nobel in 1999 for his work in international economics) and Arthur Laffer of the University of Southern California (who now runs an economic consulting firm).

Wanniski’s contribution was to take what he learned from Mundell and Laffer and adapt it to political reality. He adopted the term “supply-sider” after being labeled as such by the chairman of Nixon’s Council of Economic Advisers, Herb Stein (Ben Stein’s dad). He converted his boss at the Journal, Robert Bartley, to the cause and wrote a 1978 book, How the World Works, that laid out his philosophy in detail. He became an adviser to presidential hopeful Ronald Reagan, and after Reagan won in 1980 he helped craft the dramatic tax cuts that Reagan pushed through Congress in 1981.

Wanniski’s rallying cry was what he dubbed the “Laffer curve,” a simple chart illustrating how lower tax rates can bring in higher revenue by stimulating economic activity (or at least cutting back on tax avoidance). According to Wanniski, Laffer sketched the curve on a napkin during a December 1974 dinner at the Two Continents restaurant in the Hotel Washington with him and White House aides Dick Cheney and Donald Rumsfeld, whom you might have heard of. Laffer himself later cast some minor aspersions on this account. He also disclaimed authorship of the idea, giving earlier economists (among them Keynes!) all the credit. But the name “Laffer curve” stuck.

The Laffer curve enabled Wanniski to sell his supply-side ideas as a free lunch, which is what made them so politically successful. You could cut taxes, yet not cut spending–the best of all worlds for an elected official.

Economists generally don’t believe in free lunches, but most agree with Laffer that when tax rates get high enough, lowering them brings in more revenue. The question is how high the rates have to be, and no one has a reliable answer to that. With personal income tax, it’s probably somewhere upwards of a 50% marginal rate. (The top marginal rate was 70% when Reagan took office and 28% when he left; it’s 35% now.) With taxes on capital gains, dividends, and interest, the cutoff is probably lower. That’s partly because such taxes are easier to avoid, but also because they weigh more directly on the savings and investment that bring long-run growth.
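The logic of the curve is easy to sketch in code. This is a toy illustration only–the revenue function and the way the taxable base shrinks with the rate are invented for the sketch, not estimates of the real economy:

```python
# Toy Laffer curve: revenue as a function of the marginal tax rate.
# Assumption (mine, purely illustrative): the taxable base shrinks
# linearly to zero as the rate approaches 100%, as people work less
# or avoid the tax more.

def revenue(rate, base=100.0):
    """Revenue collected at a given marginal rate, with the taxed
    base shrinking as the rate rises."""
    return rate * base * (1 - rate)

# Revenue is zero at a 0% rate and at a 100% rate, and peaks somewhere
# in between -- here, by construction, at 50%.
rates = [i / 10 for i in range(11)]
peak = max(rates, key=revenue)
```

The point of the exercise: on the far side of the peak, *cutting* the rate raises revenue–that’s the free-lunch region Wanniski was selling. Where the peak sits in the real world is exactly the question no one has a reliable answer to.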

The reduction of the top income tax rate in the Reagan years did have a Laffer effect, but his tax cuts as a whole did not. Are we in Laffer curve territory now? I wouldn’t be entirely surprised if a reduction in the U.S. corporate income tax rate eventually brought in higher receipts, given that it’s currently among the highest on the planet. Beyond that, I’m doubtful.

Serious economists with supply-side leanings–like former Bush economic adviser Glenn Hubbard, now the dean of Columbia Business School–think the dividend, capital gains and income tax cuts enacted during the Bush presidency can increase economic growth by several tenths of a percentage point a year. (That may not sound like much, but compound three-tenths of a percentage point in added growth over 50 years and you get an extra $7,000 a year in the pocket of the average American.)

I haven’t been able to find any such economists, though, claiming that the tax cuts paid for themselves, Laffer-style. That sort of talk has been the sole province of polemicists and politicians. Here’s how President Bush put it in a speech in February:

What happened was we cut taxes and in 2004, revenues increased 5.5 percent. And last year those revenues increased 14.5 percent, or $274 billion. And the reason why is cutting taxes caused the economy to grow, and as the economy grows there is more revenue generated in the private sector, which yields more tax revenues.

The problem with this argument is that the economy, and with it tax receipts, would have grown in 2004 and 2005 even if there hadn’t been any tax cuts. Growing happens to be something the U.S. economy does most every year (you can look it up). The tax cuts may have made it grow a little bit faster, but not enough to make up for the revenue loss caused by the lower tax rates.
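You can see the scale problem in the President’s own numbers. The quoted figures pin down the prior year’s receipts, and even an ordinary year of growth–the 5% rate below is my illustrative assumption, not a figure from the speech–accounts for a healthy chunk of the celebrated increase:

```python
# Arithmetic implied by the quoted figures: a 14.5% rise worth
# $274 billion tells you what the prior year's receipts were.
increase = 274e9
pct = 0.145
receipts_2004 = increase / pct          # roughly $1.9 trillion

# For scale: at an assumed (illustrative) 5% ordinary nominal growth
# rate, baseline growth alone would have added:
baseline_gain = receipts_2004 * 0.05    # on the order of $95 billion
```

In other words, a big share of the revenue jump is what you’d expect from a growing economy with no tax cuts at all.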

This isn’t just my opinion; it’s also the verdict of the Congressional Budget Office, the nonpartisan maker of deficit projections currently run by a former Bush administration economist. Even after making some pretty liberal assumptions about how much the tax cuts will boost long-run economic growth, the CBO estimated earlier this year that extending them past 2010 would still reduce government revenue, not increase it.

Even tax cuts that don’t pay for themselves can be a good idea–I happen to be a big fan of the cut in taxes on dividend income that the President (egged on by Hubbard) pushed through Congress in 2003. But such cuts do eventually have to be paid for, either by cutting spending or raising some other tax. The current administration has so far opted to shunt this burden to future generations (or current generations, a few years down the road).

As I’ve written before, the Bush administration’s deficit spending isn’t necessarily a disaster. But neither is it really supply-side economics, because the increased saving by individuals and businesses enabled by the tax cut has been largely gobbled up by increased government borrowing. That makes it either (1) a wartime necessity, (2) closet Keynesianism, or (3) buck passing.

UPDATE: I’ve responded to one of the comments, which claims that “every time major individual tax cuts have gone through, tax receipts go up considerably quicker than they did during the preceding period,” here.