
Monday, June 19, 2023

Hope from the left

One ray of hope in the current political scene comes from the land of deep blue.  However one views the immense expenditure on solar panels, windmills and electric cars (produced in the US by US union labor, of course), plus the forced electrification of heat and cooking, a portion of the blue-state left has noticed that this program cannot possibly work given laws and regulations that have basically shut down all new construction. And a substantial reform may follow.  

I am prodded to write by Ezra Klein's interesting op-ed in the New York Times, "What the Hell Happened to the California of the ’50s and ’60s?," a question repeatedly put to Governor Gavin Newsom. The answer is, of course, "you happened to it." For those who don't know, California in the 50s and 60s was famous for quickly building new dams, aqueducts, freeways, a superb public education system, and more.   

Gavin Newsom states the issue well. 

"..we need to build. You can’t be serious about climate and the environment without reforming permitting and procurement in this state.”

You can't be serious about business, housing, transportation, wildfire control, water, and a whole lot else without reforming permitting and procurement, but heck it's a start. 

Hitting these [climate] goals requires California to almost quadruple the amount of electricity it can generate — and shift what it now gets from polluting fuels to clean sources. That means turning huge areas of land over to solar farms, wind turbines and geothermal systems.

or, heaven forbid, nuclear, which among other things works at night.  I don't think most of San Francisco's progressive gentry really understand how massive their envisioned "transition" really is. 

It means building the transmission lines to move that energy from where it’s made to where it’s needed. It means dotting the landscape with enough electric vehicle charging stations to make the state’s proposed ban on cars with internal combustion engines possible. Taken as a whole, it’s a construction task bigger than anything the state has ever attempted, and it needs to be completed at a speed that nothing in the state’s recent history suggests is possible....John Podesta, a senior adviser to President Biden on clean energy, said in a speech last month. “We got so good at stopping projects that we forgot how to build things in America.”

Newsom:

“I watched as a mayor and then a lieutenant governor and now governor as years became decades on high-speed rail,” he said. “People are losing trust and confidence in our ability to build big things.”

Losing? That train left long ago, unlike the high speed one. 

The part that really caught my eye: Klein complains that Newsom's current proposal is 

 a collection of mostly modest, numbingly specific policies. When a lawsuit is brought under the California Environmental Quality Act, should all emails sent between agency staff members be part of the record or only those communications seen by the decision makers? Should environmental litigation be confined to 270 days for certain classes of infrastructure? Should the California Department of Transportation contract jobs out by type, or does it need to run a new contracting process for each task? Should 15 endangered species currently classified as fully protected be reclassified as threatened to make building near them less onerous? And on it goes.

Maybe, as Klein suggests, this is a measure of the bill being small and marginal. But I think the point is deeper: this is what regulatory reform is all about. Which is why regulatory reform is so hard. "Stimulus" is easy to understand: hand out money. Regulatory reform, especially reform to stop the litany of lawsuits and dozens of veto points that are the central problem in the US, is all about the mind-numbing details.  "Should all emails sent between agency staff members be part of the record" sounds like a mind-numbing detail. But think how these lawsuits work. Is discovery and testimony going to allow this entire record to be searched for an email in which staffer Jane writes to staffer Bob one line that can be used to restart the whole proceedings? "Only" 270 days rather than 10 years? That matters a lot. And the contracting process, too, can be the basis for a lawsuit. 

I'll retell a joke. Fixing regulation is a Marie Kondo job: long, hard, and unpleasant, one drawer at a time. 

The article is also interesting on the fight within the left. There is really a deep philosophical divide. On the one hand are basically technocrats who really do see climate as an issue, and want to do something about it. If they believe their own ideology, time matters too. If it takes 10 years to permit every high power line, Al Gore's oceans will boil before anything gets done. 

On the other side are basically conservatives and degrowthers. "Conservative" really is the appropriate word -- people who want to keep things exactly the way they are and not build anything new. Save our neighborhoods, they say, though those were built willy-nilly by developers in the 1950s. (Palo Alto now applies historic preservation to 1950s tract houses, and forbids second stories in those neighborhoods to preserve the look and feel. How can you not call this "conservative?") "Degrowth" is a self-chosen word for the Greta Thunberg branch of the environmental movement. Less, especially less for the lower classes, not really for us who jet around the world to climate conferences. Certainly do not allow the teeming billions of India and Africa to approach our prosperity. I think "deliberate impoverishment" is a better term. Some of it has an Amish view of technology as evil. And some is, I guess, just habit: we've been saying no to everything since 1968, so why stop now? 

Klein characterizes the opponents: 

More than 100 environmental groups — including the Sierra Club of California and The Environmental Defense Center — are joining to fight a package Newsom designed to make it easier to build infrastructure in California.

... opposition groups say that moving so fast “excludes the public and stakeholders and avoids open and transparent deliberation of important and complicated policies.”

...The California Environmental Justice Alliance sent me a statement that said, in bold type, “Requiring a court to resolve an action within 270 days to the extent feasible is harmful to low-income and EJ” — which stands for environmental justice — “communities.” It doesn’t get much clearer than that.

I am delighted to see in the New York Times, finally, the word "communities" adorned with scare quotes. But there is the tension: You can't both be really serious that climate change is a looming existential threat to humanity that demands an end to carbon emissions by year 20X in the near future, and hold the view that in 270 days we cannot possibly figure out how to do so in a way that protects "communities." Climate must not really be that bad, or perhaps it was just an unserious talking point in a larger political project. 

These are the beginning stages of a transition from a liberalism that spends to a liberalism that builds. It’s going to be messy. Until now, progressives have been mostly united in the fight against climate change. They wanted more money for clean energy and more ambitious targets for phasing out fossil fuels and got them. Now that new energy system needs to be built, and fast. And progressives are nowhere near agreement on how to do that.

The last three sentences are telling. Did they really want just to announce goals and spend a few hundred billions and feel good? Or did they actually want all the windmills, solar cells, and power lines involved?  

But the fight isn’t just about this package. Everyone involved believes there are many permitting reforms yet to come, as the world warms and the clock ticks down on California’s goals and the federal government begins to apply more pressure.

Once something becomes partisan in the US, it freezes and little gets done. I am hopeful here, because it plays out within one party. California is a one-party state, but that does not put it above politics. It does mean that progress is more likely. Can we hope that "a liberalism that builds," in reasonable time and somewhat less than astronomical cost, projects that might be actually useful, could emerge from all this? 

In the larger picture, a movement among good progressive democrats in places like California has figured out that if we want more housing at more reasonable prices, just letting people build houses might be a good idea. Houses, apartments, any houses and apartments, not just dollops of incredibly expensive government-allocated ("affordable") and homeless housing. This is the YIMBY movement in California. It is, sadly, instantly opposed by Republicans, but maybe that's for the better given how reviled that brand is in Sacramento. And it is also making slow headway. 

Sunday, June 18, 2023

The perennial fantasy

Two attacks, and one defense, of classical liberal ideas appeared over the weekend. "War and Pandemic Highlight Shortcomings of the Free-Market Consensus" announces Patricia Cohen on p.1 of the New York Times news section.  As if the Times had ever been part of such a "consensus." And Deirdre McCloskey reviews Simon Johnson and Daron Acemoglu's "Power and Progress," whose central argument is, per Deirdre, "The state, they argue, can do a better job than the market of selecting technologies and making investments to implement them." (I have not yet read the book. This is a review of the review only.) 

I'll give away the punchline. The case for free markets never was their perfection. The case for free markets always was centuries of experience with the failures of the only alternative, state control. Free markets are, as the saying goes, the worst system, except for all the others. 

In this sense the classic teaching of economics does  a disservice. We start with the theorem that free competitive markets can equal -- only equal -- the allocation of an omniscient benevolent planner. But then from week 2 on we study market imperfections -- externalities, increasing returns, asymmetric information -- under which markets are imperfect, and the hypothetical planner can do better. Regulate, it follows. Except econ 101 spends zero time on our extensive experience with just how well -- how badly -- actual planners and regulators do. That messy experience underlies our prosperity, and prospects for its continuance. 

Starting with Ms. Cohen at the Times, 

The economic conventions that policymakers had relied on since the Berlin Wall fell more than 30 years ago — the unfailing superiority of open markets, liberalized trade and maximum efficiency — look to be running off the rails.

During the Covid-19 pandemic, the ceaseless drive to integrate the global economy and reduce costs left health care workers without face masks and medical gloves, carmakers without semiconductors, sawmills without lumber and sneaker buyers without Nikes.

That there ever was a "consensus" in favor of "the unfailing superiority of open markets, liberalized trade and maximum efficiency" seems a mighty strange memory. But if the Times wants to think now that's what they thought then, I'm happy to rewrite a little history. 

Face masks? The face mask snafu in the pandemic is now, in the Times' rather hilarious memory, the prime example of how a free and unfettered market fails. Was it really a result of "the ceaseless drive to integrate the global economy and reduce costs"? 

(Here, I have a second complaint -- the ceaseless drive to remove subjects from sentences. Who is doing this "ceaseless drive?" Where is the great conspiracy, the secret meeting of old white men "driving" the economy? Nowhere. That's the point of free markets.)

The free market has a plan, imperfect as it might be, for masks in a pandemic. Prices rise. People who really want and need masks -- doctors, nurses, police -- pay what it takes to get them. People who don't really need them -- nursery schools -- look at the price, think about the benefit, and say, "maybe not," or take other measures. People reuse masks. Producers, seeing high prices, work day and night to produce more masks. Others, knowing that every 10 years there is a spike in prices, pay the costs of storing masks to make great profits when the time comes. 

The actual story of masks in the pandemic is the exact opposite. Price controls, of course. Instantly, governments started prosecuting businesses who dared to raise the price of toilet paper for "price gouging." Governments redistribute income; markets allocate resources efficiently. As usual, the desire to redistribute tiny amounts of income to those willing to stand in line to get toilet paper won out. An entrepreneur tried to start producing masks. The FDA shut him down. (I hope I recall that story right; send comments if not.) China wanted to ship us masks. Yes, China, the new villain of globalization gone mad. But their masks were certified and labeled by EU rules, not US rules, so like baby formula they couldn't be imported and sold. 

More deeply: even I, devoted free-marketer, even at the late-night beer sessions at the Cato Institute, would not put mask distribution in a pandemic as the first job of free markets. There is supposed to be a public health function of government; infectious diseases are something of an externality; safety protocols in government labs doing government-funded research are not a free-market function. As we look at the covid catastrophe, do we not see failures of government all over the place, not failures of some hypothetical free market? California even had mobile hospitals after H1N1. Governor Brown shut them down to save money for his high speed train. We might as well blame free markets for the lines at the DMV.  

The idea that trade and shared economic interests would prevent military conflicts was trampled last year under the boots of Russian soldiers in Ukraine.

Does anyone think a prime function of free market economics is to stop wars, usually prosecuted by, eh, governments? The standard history of WWI is enough. We do allege that free markets, and free markets alone, make a country wealthy enough to fight and win wars, if the country has the will and desire to do so. The US and NATO military budget vs. Russia's, larger by a factor of 10 at least, seems to bear that out, along with the much greater quality of our weapons.  Heaven help us militarily once the protectionists lead us to state-directed penury. 

inflation, thought to be safely stored away with disco album collections, returned with a vengeance.

Did anyone ever vaguely hint that inflation control is a function of free markets? Inflation comes from government monetary and fiscal policy.  

And increasing bouts of extreme weather that destroyed crops, forced migrations and halted power plants has illustrated that the market’s invisible hand was not protecting the planet.

Does the Times even vaguely think of news as fact not narrative? There have been a lot of migrations. "Forced?" Many due to violence, poverty, ill government. None due to temperature. Halted power plants (more passive voice)? Yes, it was that pesky unfettered free market that shut down power plants... 

The favored economic road map helped produce fabulous wealth, lift hundreds of millions of people out of poverty and spur wondrous technological advances.

Well, a peek of sunlight, an actual correct fact! 

But there were stunning failures as well. Globalization hastened climate change and deepened inequalities. 

More fact-free narrative spinning. How are "inequalities" plural? Globalization brought the sharpest decline in global inequality in the history of our species. Perhaps it "hastened climate change" in that if China had stayed desperately poor it wouldn't be building a new coal-fired power plant a week. US emissions went down because of... choose one: 1) enlightened policy, or 2) fracking, a shift to natural gas made possible only by the curious US property rights system absent in Europe, and pretty much over the dead body of the entire energy regulatory apparatus. 

***

Meanwhile over at WSJ, Deirdre is in classic form. (Again, I have not read the book, so this is Deirdre coverage.) The paragraph that caught my attention and demanded a blog post: 

We need [according to Acemoglu and Johnson] ... the legislation currently being pushed by left and right to try again the policies of antitrust, trade protection, minimum wage and, above all, subsidy for certain technologies. Messrs. Acemoglu and Johnson are especially eager to regulate digital technologies such as artificial intelligence. “Technology should be steered in a direction that best uses a workforce’s skills,” they write, “and education should . . . adapt to new skill requirements.” How the administrators of the Economic Development Administration at the Department of Commerce would know the new direction to steer, or the new skills required, remains a sacred mystery.

"Technology should be steered." There it is, the full glory of the regulatory passive voice. Steered by who? Deirdre answers the question with that gem of rhetoric, specificity.  "Administrators of the Economic Development Administration at the Department of Commerce" for example. 

The theme uniting the two essays: if there is one lesson of the last 20 years, it is the catastrophic failure of our government institutions. From bungled wars, to the snafu of financial regulation in 2008 just now repeated in FTX, SVB, and inflation, to the evident collapse of the FDA, the CDC, and plain common sense in the pandemic, the free market is bravely forestalling a collapse of government (and associated, e.g. university) institutions. 

we need the state to use its powers “to induce the private sector to move away from excessive automation and surveillance, and toward more worker-friendly technologies.” Fear of surveillance is a major theme of the book; therefore “antitrust should be considered as a complementary tool to the more fundamental aim of redirecting technology away from automation, surveillance, data collection, and digital advertising.”

The question of what institution has the technical competence to do this goes begging.  

“Government subsidies for developing more socially beneficial technologies,” the authors declare, “are one of the most powerful means of redirecting technology in a market economy.” 

Well, interpreting the sentence literally,  you have to give it to them. Government subsidies are powerful means of "redirecting technology." Usually to ratholes. 

Messrs. Acemoglu and Johnson warmly admire the U.S. Progressive Movement of the late 19th century as a model for their statism: experts taking child-citizens in hand.

Their chapters then skip briskly through history...seeking to show how at each turn new innovations tended to empower certain sections of society at the expense of others. The “power” that concerns them, in other words, is private power.

This is, in fact, the central question dividing free-marketers and others. Private power being subject to competition, we worry more about state power. The essence of state power is monopoly, and a monopoly of coercion, fundamentally violence.  

The heart of the book is that technological gains create winners and losers, and Acemoglu and Johnson want that directed by a nebulous bureaucracy. Which will somehow never be infected by, oh, Republicans, or turn into the endless stagnation of most of the last millennium, which actually did pursue policies that forbade technological improvement in order to sustain the incomes of incumbents. Deirdre, who coined the lovely phrase "trade tested betterment," takes it on. 

 During the past two centuries, the world has become radically better off, by fully 3,000% inflation adjusted. Even over the past two decades the lives of the poor have improved. The “great enrichment” after 1800 and its resulting superabundance has brought us out of misery. Even the poor workers who did not benefit in the short run have done so enormously in the long run. In 1960, 4 billion of the 5 billion people on the planet lived on $2 a day. Now it’s fallen to 1 billion out of 8, and the income average is $50 a day. The state didn’t do it, and forcing short-run egalitarianism or handing power to the Office of Economic Development can kill it, as it regularly has. Messrs. Acemoglu and Johnson see great imperfections in the overwhelmingly private sources of the enrichment. With such imperfections, who needs perfection?

Another way to see the problem is to remember the common sense, refined in Economics 101 and Biology 101, of entry at the smell of profit. ...The great fortunes they deprecate have the economic function of encouraging entry into the economy by other entrepreneurs who want to get rich. This competition cheapens goods and services, which then accrues to the poor as immense increases in real income.

Many fortunes, for instance, were made by the invention of the downtown department store. The profit attracted suburban competitors, and at the mall the department-store model began to fail. Jeff Bezos reinvented the mail-order catalog. He is imitated, and the fortunes are dissipated in enormous benefit to consumers called workers. 

.... It’s what happened and happens in a liberal economy.

The book uses a lot of history, surveyed by McCloskey. As before, it's criticized a bit as history lite. The history Deirdre covers has the usual imperfections of the free market. 

I wonder if the book has any history of success of this plan, of governments successfully guiding technological transformations to protect the rights and incomes of incumbents, without in the process killing technical change.  Governments habitually screw up basics like rent control. Figuring out what new technology will do is pretty much beyond the capacity of private investors and book-writing economists. The  idea that bureaucracy has the capacity to figure out not just what new technology will work, but to guide its social and distributional consequences seems... far beyond the historical record of bureaucratic accomplishment. But I am straying beyond my promise to review the review, not the book, before reading the latter. 

****

I recognize the desire on both sides. Partisan politics needs "new" ideas and a "new" propaganda. In particular, the right is aching for something shiny and new that it can sell to voters, whom it regards with the same sort of noblesse-oblige intellectual disdain as the left does. Mind the store, mend the institutions, freedom, rights, opportunity and make your own prosperity are, apparently, not sexy enough. So both sides need new initiatives, expanded governments, to excite the rabble. But we're not here to supply that demand, merely to meditate on the actual cause-and-effect truth of what works. Beware the temptation. 

Update:  In retrospect, perhaps the issue is much simpler. The bulk of economic regulation serves exactly the purpose McCloskey basically alleges of Acemoglu and Johnson: Preserve rents of incumbents against the threats of technological improvements. From medieval guilds to trade protection to taxis vs. Ubers, that is really its main function. So we have an extensive bureaucracy that is very good at it, and extensive experience of just how well it works. Which is, very well, at protecting rents and stifling growth.  



Tuesday, June 13, 2023

The barn door

Kevin Warsh has a nice WSJ oped warning of financial problems to come.  The major point of this essay: "countercyclical capital buffers" are another bright regulatory idea of the 2010s that now has fallen flat. 

As in previous posts, a lot of banks have lost asset value equal to or greater than their entire equity due to plain vanilla interest rate risk. The ones that haven't suffered runs are now staying afloat only because you and I keep our deposits there at ridiculously low interest rates. Commercial real estate may be next. Perhaps I'm over-influenced by the zombie-apocalypse goings-on in San Francisco -- the $755 million default on the Hilton and Parc 55, the $558 million default on the whole Westfield mall after Nordstrom departed, and on and on. How much of this debt is parked in regional banks? I would have assumed that the Fed's regulatory army could see something so obvious coming, but since they completely missed plain vanilla interest rate risk, and the fact that you don't have to stand in line any more to run on your bank, who knows?
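A back-of-the-envelope sketch of that arithmetic (the numbers are made up for illustration -- a ten-year 2% coupon portfolio repriced at 5% yields against a 10% equity cushion -- not any particular bank's balance sheet):

```python
# Mark-to-market loss on a long-duration bond portfolio when yields rise,
# compared with an assumed equity cushion.  All numbers are illustrative.

def bond_price(coupon_rate, face, yield_rate, maturity):
    """Price of an annual-coupon bond."""
    coupons = sum(coupon_rate * face / (1 + yield_rate) ** t
                  for t in range(1, maturity + 1))
    principal = face / (1 + yield_rate) ** maturity
    return coupons + principal

face, maturity, coupon = 100.0, 10, 0.02           # ten-year 2% bonds bought at par
p_old = bond_price(coupon, face, 0.02, maturity)   # = 100 at purchase
p_new = bond_price(coupon, face, 0.05, maturity)   # repriced after yields rise to 5%

loss = (p_old - p_new) / p_old                     # roughly a 23% mark-to-market loss
equity_ratio = 0.10                                # assumed equity cushion: 10% of assets

print(f"asset loss: {loss:.1%}, equity cushion: {equity_ratio:.0%}")
print("equity wiped out" if loss > equity_ratio else "equity survives")
```

A loss of roughly a quarter of asset value against a cushion of ten percent is the sense in which plain vanilla interest rate risk can exceed a bank's entire equity.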

So, banks are at risk; the Fed now knows it, and is reportedly worried that further interest rate increases to lower inflation will cause more problems. To some extent that's a feature not a bug -- the whole theory behind the Fed lowering inflation is that higher interest rates "cool economic activity," i.e. make banks hesitant to lend, people lose their jobs, and through the Phillips curve (?) inflation comes down. But the Fed wants a minor contraction, not full-on 2008. (That did bring inflation down, though!) 

I don't agree with all of Kevin's essay, but I always cherry pick wisdom where I find it, and there is plenty. On what to do: 

Ms. Yellen and the other policy makers on the Financial Stability Oversight Council should take immediate action to mitigate these risks. They should promote the private recapitalization of small and midsize banks so they survive and thrive.

Yes! But. I'm a capital hawk -- my answer is always "more." But we shouldn't be here in the first place. 

Repeating a complaint I've been making for a while, everything since the great treasury market bailout of March 2020 reveals how utterly broken the premises and promises of post-2008 financial regulation are. One of the most popular ideas was "countercyclical capital buffers." A nice explainer from Kaitlyn Hoevelmann at the St. Louis Fed (picked because it came up first on a Google search), 

"A countercyclical capital buffer would raise banks’ capital requirements during economic expansions, with banks required to maintain a higher capital-to-asset ratio when the economy is performing well and loan volumes are growing rapidly.  " 

Well, that makes sense, doesn't it? Buy insurance on a clear day, not when the forest fire is half a mile upwind. 

More deeply, remember "capital" is not "reserves" or "liquid assets." "Capital" is one way banks have of getting money, by selling stock, rather than selling bonds or taking deposits. (There is lots of confusion on this point. If someone says "hold" capital that's a sign of confusion.) It has the unique advantage that equity holders can't run to get their money out at any time. In bad times, the stock price goes down and there's nothing they can do about it. But also obviously, it's a lot easier to sell bank stock for a high price in good times than it is just after it has been revealed that the bank has lost a huge amount of money, i.e. like now.  

Why don't banks naturally issue more equity in good times? Well, because buying insurance is expensive, and most of all there is no deposit insurance or too big to fail guarantee subsidizing stock. So banks always leverage as much as they can. Behavioralists will add that bankers get over enthusiastic and happy to take risks in good times. Why don't regulators demand more capital in good times, so banks are ready for the bad times ahead? That's the natural idea of "countercyclical capital buffers." And after 2008, all worthy opinion said regulators should do that. Only some cynical types like me opined that the regulators will be just as human, just as behavioral, just as procyclically risk averse, just as prey to political pressures in the future as they were in the past. 

And so it has turned out. Despite 15 years of writing about procyclical capital, of "managing the credit cycle," here we are again -- no great amounts of capital issued in the good times, and now we want banks to do it when they're already in trouble, and anyone buying bank stock will be providing money that first of all goes to bail out depositors and other debt holders. As the ship is sinking, go on Amazon to buy lifeboats. Just as in 2008, regulators will be demanding capital in bad times, after the horse has left the barn. So, the answer has to be, more capital always! 

Kevin has more good points:  

Bank regulators have long looked askance at capital from asset managers and private equity firms, among others. But this is no time for luxury beliefs.

Capital is capital, even from disparaged sources. 

Policy makers should also green-light consolidation among small, midsize and even larger regional banks. I recognize concerns about market power. But the largest banks have already secured a privileged position with their “too big to fail” status. Hundreds of banks need larger, stronger franchises to compete against them, especially in an uncertain economy. Banks need prompt regulatory approval to be confident that proposed mergers will close. Better to allow bank mergers before weak institutions approach the clutches of the Federal Deposit Insurance Corp.’s resolution process. Voluntary mergers at market prices are preferable to rushed government auctions that involve large taxpayer losses and destruction of significant franchise value.

It is a bit funny to see the Administration set against all mergers, and then, when a bank fails, let Chase swallow up the failing bank with government sweeteners. Big-is-bad is another luxury belief. 

Yes, banks are uncompetitive. Look at the interest on your deposits (mine, Chase, 0.01%) and you'll see it just as clearly as you can see the lack of competition in a medical bill. But most of that lack of competition comes from regulation, not evil behavior. As per Kevin: 

The past decade’s regulatory policies have undermined competition and weakened resiliency in the banking business. 

A final nice point: 

The Fed’s flawed inflation forecasts in the past couple of years are a lesson in risk management. Policy makers shouldn’t bet all their chips on hopes for low prices or anything else. Better to evaluate the likely costs if the forecast turns out to be wrong.

Maybe the lesson of the massive failure to forecast inflation is that inflation is just bloody hard to forecast. Rather than spend a lot of effort improving the forecast, spend effort recognizing the uncertainty of any forecast, and being ready to react to contingencies as they arise. (I'm repeating myself, but that's the blogger's prerogative.)  

Monday, June 12, 2023

Papers: Dew-Becker on Networks



I've been reading a lot of macro lately. In part, I'm just catching up from a few years of book writing. In part,  I want to understand inflation dynamics, the quest set forth in "expectations and the neutrality of interest rates," and an  obvious next step in the fiscal theory program. Perhaps blog readers might find interesting some summaries of recent papers, when there is a great idea that can be summarized without a huge amount of math. So, I start a series on cool papers I'm reading. 

Today: "Tail risk in production networks" by Ian Dew-Becker, a beautiful paper. A "production network" approach recognizes that each firm buys from others, and models this interconnection. It's a hot topic for lots of reasons, below.  I'm interested because prices cascading through production networks might induce a better model of inflation dynamics. 

(This post uses Mathjax equations. If you're seeing garbage like [\alpha = \beta] then come back to the source  here.) 

To Ian's paper: Each firm uses other firms' outputs as inputs. Now, hit the economy with a vector of productivity shocks. Some firms get more productive, some get less productive. The more productive ones will expand and lower prices, but that changes everyone's input prices too. Where does it all settle down? This is the fun question of network economics. 

Ian's central idea: The problem simplifies a lot for large shocks. Usually when problems are complicated we look at first or second order approximations, i.e. for small shocks, obtaining linear or quadratic ("simple") approximations. 


Ian's key figure makes the point. On the x axis, take a vector of productivity shocks for each firm, and scale it up or down; the x axis represents this overall scale. The y axis is GDP. The right-hand panel is Ian's point: for large shocks, log GDP becomes linear in log productivity -- really simple. 

Why? Because for large enough shocks, all the networky stuff disappears. Each firm's output moves up or down depending only on one critical input. 

To see this, we have to dig deeper to complements vs. substitutes. Suppose the price of an input goes up 10%. The firm tries to use less of this input. If the best it can do is to cut use by 5%, then the firm ends up paying 5% more overall for this input: the "expenditure share" of this input rises. That is the case of "complements." But if the firm can cut use of the input by 15%, then it pays 5% less overall for the input, even though the price went up. That is the case of "substitutes." This is the key concept for the whole question: when an input's price goes up, does its share of overall expenditure go up (complements) or down (substitutes)? 
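To put rough numbers on that distinction, here is a minimal sketch (my own illustration, not from the paper), holding the firm's output and all other input prices fixed, so that CES demand for the input falls roughly by the factor \((1.10)^{-\sigma}\) when its price rises 10%:

```python
# Minimal CES illustration: a 10% price rise changes quantity demanded by
# roughly the factor (1.10)**(-sigma), so expenditure on that input changes
# by the factor (1.10)**(1 - sigma).  sigma < 1: the expenditure share rises
# (complements); sigma > 1: it falls (substitutes).  Numbers are illustrative.

price_factor = 1.10
for sigma in (0.5, 1.0, 1.5):
    quantity_factor = price_factor ** (-sigma)
    expenditure_factor = price_factor * quantity_factor   # = price_factor**(1 - sigma)
    print(f"sigma = {sigma}: quantity x{quantity_factor:.3f}, "
          f"expenditure x{expenditure_factor:.3f}")
```

The \(\sigma=0.5\) row is roughly the "cut use 5%, pay 5% more" complements case above; the \(\sigma=1.5\) row is roughly the "cut use 15%, pay 5% less" substitutes case.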

Suppose inputs are complements. Again, this vector of technology shocks hits the economy. As the size of the shock gets bigger, the expenditure of each firm, and thus the price it charges for its output, becomes more and more dominated by the one input whose price grows the most. In that sense, all the networkiness simplifies enormously. Each firm is only "connected" to one other firm. 

Turn the shock around. Each firm that was getting a productivity boost now gets a productivity reduction. Each price that was going up now goes down. Again, in the large shock limit, our firm's price becomes dominated by the price of its most expensive input. But it's a different input.  So, naturally, the economy's response to this technology shock is linear, but with a different slope in one direction vs. the other. 

Suppose instead that inputs are substitutes. Now, as prices change, the firm expands more and more its use of the cheapest input, and its costs and price become dominated by that input instead. Again, the network collapsed to one link.  

Ian: "negative productivity shocks propagate downstream through parts of the production process that are complementary (\(\sigma_i < 1\)), while positive productivity shocks propagate through parts that are substitutable (\(\sigma_i > 1\)). ...every sector’s behavior ends up driven by a single one of its inputs....there is a tail network, which depends on \(\theta\) and in which each sector has just a single upstream link."

Equations: Each firm's production function is (somewhat simplifying Ian's (1)) \[Y_i = Z_i L_i^{1-\alpha} \left( \sum_j A_{ij}^{1/\sigma} X_{ij}^{(\sigma-1)/\sigma} \right)^{\alpha \sigma/(\sigma-1)}.\]Here \(Y_i\) is output, \(Z_i\) is productivity, \(L_i\) is labor input, \(X_{ij}\) is how much good j firm i  uses as an input, and \(A_{ij}\) captures how important each input is in production. \(\sigma>1\) are substitutes, \(\sigma<1\) are complements. 

Firms are competitive, so price equals marginal cost, and each firm's price is \[ p_i = -z_i + \frac{\alpha}{1-\sigma}\log\left(\sum_j A_{ij}e^{(1-\sigma)p_j}\right).\; \; \; (1)\]Small letters are logs of big letters.  Each price depends on the prices of all the inputs, plus the firm's own productivity.  Log GDP, plotted in the above figure is \[gdp = -\beta'p\] where \(p\) is the vector of prices and \(\beta\) is a vector of how important each good is to the consumer. 

In the case \(\sigma=1\), (1) reduces to a linear formula. We can easily solve for prices and then gdp as a function of the technology shocks: \[p_i = - z_i + \alpha \sum_j A_{ij} p_j\] and hence \[p=-(I-\alpha A)^{-1}z,\] where the letters represent vectors and matrices across \(i\) and \(j\). This expression shows some of the point of networks, that the pattern of prices and output reflects the whole network of production, not just individual firm productivity. But with \(\sigma \neq 1\), (1) is nonlinear without a known closed-form solution. Hence approximations. 
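Here is a minimal numerical sketch of this \(\sigma=1\) case, with a made-up three-sector network (my numbers, purely for illustration):

```python
import numpy as np

# sigma = 1 (Cobb-Douglas) case: log prices solve p = -(I - alpha*A)^{-1} z
# and log GDP = -beta'p.  The 3-sector network, weights and shock below are
# made up for illustration; rows of A are assumed to sum to 1.

alpha = 0.5
A = np.array([[0.0, 0.7, 0.3],        # row i: input weights of sector i
              [0.2, 0.0, 0.8],
              [0.5, 0.5, 0.0]])
beta = np.array([0.5, 0.3, 0.2])      # consumption weights
z = np.array([0.0, 0.0, -0.10])       # a 10% negative productivity shock to sector 3

p = -np.linalg.solve(np.eye(3) - alpha * A, z)   # equilibrium log prices
gdp = -beta @ p                                  # log GDP

print("log prices:", p.round(4))
print("log GDP   :", round(gdp, 4))
# The GDP effect exceeds beta[2]*z[2] in magnitude: the shock propagates through the network.
```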

You can see Ian's central point directly from (1). Take the \(\sigma<1\) case, complements. Parameterize the size of the technology shocks by a fixed vector \(\theta = [\theta_1, \ \theta_2, \ ...\theta_i,...]\) times a scalar \(t>0\), so that \(z_i=\theta_i \times t\). Then let \(t\) grow keeping the pattern of shocks \(\theta\) the same. Now, as the \(\{p_i\}\) get larger in absolute value, the term with the greatest \(p_i\) has the greatest value of \( e^{(1-\sigma)p_j} \). So, for large technology shocks \(z\), only that largest term matters, the log and e cancel, and \[p_i \approx -z_i + \alpha \max_{j} p_j.\] This is linear, so we can also write prices as a pattern \(\phi\) times the scale \(t\), in the large-t limit \(p_i = \phi_i t\),  and  \[\phi_i =  -\theta_i + \alpha \max_{j} \phi_j.\;\;\; (2)\] With substitutes, \(\sigma>1\), the firm's costs, and so its price, will be driven by the smallest (most negative) upstream price, in the same way. \[\phi_i \approx -\theta_i + \alpha \min_{j} \phi_j.\] 
To express gdp scaling with \(t\), write \(gdp=\lambda t\), or when you want to emphasize the dependence on the vector of technology shocks, \(\lambda(\theta)\). Then we find gdp by \(\lambda =-\beta'\phi\). 

In this big price limit, the \(A_{ij}\) contribute a constant term, which also washes out. Thus the actual "network" coefficients stop mattering at all so long as they are not zero -- the max and min are taken over all non-zero inputs. Ian: 
...the limits for prices, do not depend on the exact values of any \(\sigma_i\) or \(A_{i,j}.\) All that matters is whether the elasticities are above or below 1 and whether the production weights are greater than zero. In the example in Figure 2, changing the exact values of the production parameters (away from \(\sigma_i = 1\) or \(A_{i,j} = 0\)) changes...the levels of the asymptotes, and it can change the curvature of GDP with respect to productivity, but the slopes of the asymptotes are unaffected.
...when thinking about the supply-chain risks associated with large shocks, what is important is not how large a given supplier is on average, but rather how many sectors it supplies...
For a full solution, look at the (more interesting) case of complements, and suppose every firm uses a little bit of every other firm's output, so all the \(A_{ij}>0\). The largest input  price in (2) is the same for each firm \(i\), and you can quickly see then that the biggest price will be the smallest technology shock. Now we can solve the model for prices and GDP as a function of technology shocks: \[\phi_i \approx -\theta_i - \frac{\alpha}{1-\alpha} \theta_{\min},\] \[\lambda \approx  \beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min}.\] We have solved the large-shock approximation for prices and GDP as a function of technology shocks. (This is Ian's example 1.) 
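As a toy check of this limit (my own three-sector calibration, not the paper's), one can iterate the nonlinear price equation (1) directly and compare the slope of log GDP in \(t\) with the asymptotic slope \(\beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min}\):

```python
import numpy as np

# Solve p_i = -z_i + (alpha/(1-sigma)) * log( sum_j A_ij * exp((1-sigma)*p_j) )
# by fixed-point iteration (a contraction with modulus alpha), with z = theta*t,
# and compare the numerical slope of log GDP in t with the asymptotic slope.
# Calibration is made up; all A_ij > 0 (as the example assumes) and rows are
# normalized to sum to 1 (my assumption).

alpha, sigma = 0.5, 0.5                       # sigma < 1: complements
A = np.array([[0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5],
              [0.4, 0.4, 0.2]])
beta = np.array([0.5, 0.3, 0.2])              # consumption weights
theta = np.array([0.2, -0.1, -1.0])           # pattern of shocks, z = theta * t

def log_gdp(t, iters=500):
    z = theta * t
    p = np.zeros(3)
    for _ in range(iters):                    # iterate the price equation to convergence
        p = -z + (alpha / (1 - sigma)) * np.log(A @ np.exp((1 - sigma) * p))
    return -beta @ p

slope_numerical = (log_gdp(20.0) - log_gdp(10.0)) / 10.0
slope_asymptote = beta @ theta + alpha / (1 - alpha) * theta.min()
print("numerical slope :", round(slope_numerical, 4))
print("asymptotic slope:", round(slope_asymptote, 4))   # both close to -1.13 here
```

The \(A_{ij}\) shift the level of log GDP by a constant in this limit but not the slope, which is the sense in which they wash out.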

The graph is concave when inputs are complements, and convex when they are substitutes. Let's do complements. We do the graph to the left of the kink by changing the sign of \(\theta\).  If the identity of \(\theta_{\min}\) did not change, \(\lambda(-\theta)=-\lambda(\theta)\) and the graph would be linear; it would go down on the left of the kink by the same amount it goes up on the right of the kink. But now a different \(j\) has the largest price and the worst technology shock. Since this must be a worse technology shock than the one driving the previous case, GDP is lower and the graph is concave.  \[-\lambda(-\theta) = \beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\max} \ge\beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min} = \lambda(\theta).\] Therefore  \(\lambda(-\theta)\le-\lambda(\theta),\) the left side falls by more than the right side rises. 

Does all of this matter? Well, surely more for questions when there might be a big shock, such as the big shocks we saw in a pandemic, or big shocks we might see in a war. One of the big questions that network theory asks is, how much does GDP change if there is a technology shock in a particular industry? The \(\sigma=1\) case in which expenditure shares are constant gives a standard and fairly reassuring result: the effect on GDP of a shock in industry i is given by the ratio of i's output to total GDP. ("Hulten's theorem.") Industries that are small relative to GDP don't affect GDP that much if they get into trouble. 

You can intuit that constant expenditure shares are important for this result. If an industry has a negative technology shock, raises its prices, and others can't reduce use of its inputs, then its share of expenditure will rise, and it will all of a sudden be important to GDP. Continuing our example, if one firm has a negative technology shock, then it is the minimum technology, and \[d\,gdp/dz_i = \beta_i + \frac{\alpha}{1-\alpha}.\] For small firms (industries) the latter term is likely to be the most important. All the A and \(\sigma\) have disappeared, and basically the whole economy is driven by this one unlucky industry and labor.   

Ian: 
...what determines tail risk is not whether there is granularity on average, but whether there can ever be granularity – whether a single sector can become pivotal if shocks are large enough.
For example, take electricity and restaurants. In normal times, those sectors are of similar size, which in a linear approximation would imply that they have similar effects on GDP. But one lesson of Covid was that shutting down restaurants is not catastrophic for GDP, [Consumer spending on food services and accommodations fell by 40 percent, or $403 billion between 2019Q4 and 2020Q2. Spending at movie theaters fell by 99 percent.] whereas one might expect that a significant reduction in available electricity would have strongly negative effects – and that those effects would be convex in the size of the decline in available power. Electricity is systemically important not because it is important in good times, but because it would be important in bad times. 
Ben Moll turned out to be right and Germany was able to substitute away from Russian Gas a lot more than people had thought, but even that proves the rule: if it is hard to substitute away from even a small input, then large shocks to that input imply larger expenditure shares and larger impacts on the economy than its small output in normal times would suggest.

There is an enormous amount more in the paper and voluminous appendices, but this is enough for a blog review. 

****

Now, a few limitations, or really thoughts on where we go next. (No more in this paper, please, Ian!) Ian does a nice illustrative computation of the sensitivity to large shocks:


Ian assumes \(\sigma<1\), so the main ingredients are how many downstream firms use your products and, to a lesser extent, their labor shares. No surprise, trucks and energy have big tail impacts. But so do lawyers and insurance. Can we really not do without lawyers? Here I hope the next step looks hard at substitutes vs. complements.

That raises a bunch of issues. Substitutes vs. complements surely depends on time horizon and size of shocks. It might be easy to use a little less water or electricity initially, but then really hard to reduce more than, say, 80%. It's usually easier to substitute in the long run than the short run. 

The analysis in this literature is "static," meaning it describes the economy when everything has settled down.  The responses -- you charge more, I use less, I charge more, you use less of my output, etc. -- all happen instantly, or equivalently the model studies a long run where this has all settled down. But then we talk about responses to shocks, as in the pandemic.  Surely there is a dynamic response here, not just including capital accumulation (which Ian studies). Indeed, my hope was to see prices spreading out through a production network over time, but this structure would have all price adjustments instantly. Mixing production networks with sticky prices is an obvious idea, which some of the papers below are working on. 

In the theory and data handling, you see a big discontinuity. If a firm uses any inputs at all from another firm, if \(A_{ij}>0\), that input can take over and drive everything. If it uses no inputs at all, then there is no network link and the upstream firm can't have any effect. There is a big discontinuity at \(A_{ij}=0.\) We would prefer a theory that does not jump from zero to everything when the firm buys one stick of chewing gum. Ian had to drop small but nonzero elements of the input-output matrix to produce sensible results. Perhaps we should regard very small inputs as always substitutes? 

How important is the network stuff anyway? We tend to use industry categorizations, because we have an industry input-output table. But how much of the US industry input-output is simply vertical: Loggers sell trees to mills who sell wood to lumberyards who sell lumber to Home Depot who sells it to contractors who put up your house? Energy and tools feed each stage, but don't use a whole lot of wood to make those. I haven't looked at an input-output matrix recently, but just how "vertical" is it? 

****

The literature on networks in macro is vast. One approach is to pick a recent paper like Ian's and work back through the references. I started to summarize, but gave up in the deluge. Have fun. 

One way to think of a branch of economics is not just "what tools does it use?" but "what questions is it asking?" Long and Plosser's "Real Business Cycles," a classic, went after the idea that the central defining feature of business cycles (since Burns and Mitchell) is comovement. States and industries all go up and down together to a remarkable degree. That pointed to "aggregate demand" as a key driving force. One would think that "technology shocks," whatever they are, would be local or industry specific. Long and Plosser showed that an input-output structure led idiosyncratic shocks to produce business-cycle common movement in output. Brilliant. 

Macro went in another way, emphasizing time series -- the idea that recessions are defined, say, by two quarters of aggregate GDP decline, or by the greater decline of investment and durable goods than consumption -- and in the aggregate models of Kydland and Prescott, and the stochastic growth model as pioneered by King, Plosser and Rebelo, driven by a single economy-wide technology shock.  Part of this shift is simply technical: Long and Plosser used analytical tools, and were thereby stuck in a model without capital, plus they did not inaugurate matching to data. Kydland and Prescott brought numerical model solution and calibration to macro, which is what macro has done ever since.  Maybe it's time to add capital, solve numerically, and calibrate Long and Plosser (with up to date frictions and consumer heterogeneity too, maybe). 

Xavier Gabaix (2011) had a different Big Question in mind: Why are business cycles so large? Individual firms and industries have large shocks, but \(\sigma/\sqrt{N}\) ought to dampen those at the aggregate level. Again, this was a classic argument for aggregate "demand" as opposed to "supply." Gabaix notices that the US has a fat-tailed firm distribution with a few large firms, and those firms have large shocks. He amplifies his argument via the Hulten mechanism, a bit of networkiness, since the impact of a firm on the economy is sales / GDP, not value added / GDP. 

The enormous literature since then  has gone after a variety of questions. Dew-Becker's paper is about the effect of big shocks, and obviously not that useful for small shocks. Remember which question you're after.

My quest for a new Phillips curve in production networks is better represented by Elisa Rubbo's "Networks, Phillips curves and Monetary Policy" and Jennifer La'o and Alireza Tahbaz-Salehi's “Optimal Monetary Policy in Production Networks.” If I can boil those down for the blog, you'll hear about it eventually.  

The "what's the question" question is doubly important for this branch of macro that explicitly models heterogeneous agents and heterogenous firms. Why are we doing this? One can always represent the aggregates with a social welfare function and an aggregate production function. You might be interested in how aggregates affect individuals, but that doesn't change your model of aggregates. Or, you might be interested in seeing what the aggregate production or utility function looks like -- is it consistent with what we know about individual firms and people? Does the size of the aggregate production function shock make sense? But still, you end up with just a better (hopefully) aggregate production and utility function. Or, you might want models that break the aggregation theorems in a significant way; models for which distributions matter for aggregate dynamics, theoretically and (harder) empirically. But don't forget you need a reason to build disaggregated models. 

Expression (1) is not easy to get to. I started reading Ian's paper in my usual way:  to learn a literature start with the latest paper and work backward. Alas, this literature has evolved to the point that authors plop results down that "everybody knows" and will take you a day or so of head-scratching to reproduce. I complained to Ian, and he said he had the same problem when he was getting in to the literature! Yes, journals now demand such overstuffed papers that it's hard to do, but it would be awfully nice for everyone to start including ground up algebra for major results in one of the endless internet appendices.  I eventually found Jonathan Dingel's notes on Dixit Stiglitz tricks, which were helpful. 

Update:

Chase Abram's University of Chicago Math Camp notes here  are also a fantastic resource. See Appendix B starting p. 94 for  production network math. The rest of the notes are also really good. The first part goes a little deeper into more abstract material than is really necessary for the second part and applied work, but it is a wonderful and concise review of that material as well. 


Friday, June 9, 2023

The Fed and the Phillips curve


I just finished a new draft of "Expectations and the neutrality of interest rates," which includes some ruminations on inflation that may be of interest to blog readers. 

A central point of the paper is to ask whether and how higher interest rates lower inflation, without a change in fiscal policy. That's intellectually interesting, answering what the Fed can do on its own. It's also a relevant policy question. If the Fed raises rates, that raises interest costs on the debt. What if Congress refuses to tighten to pay those higher interest costs? Well, to avoid a transversality condition violation (debt that grows forever) we get more inflation, to devalue outstanding debt. That's a hard nut to avoid.  

But my point today is some intuition questions that come along the way. An implicit point: The math of today's macro is actually pretty easy. Telling the story behind the math, interpreting the math, making it useful for policy, is much harder. 

1. The Phillips curve

The Phillips curve is central to how the Fed and most policy analysts think about inflation. In words, inflation is related to expected future inflation and to some measure of economic tightness, factor \(x\). In equations, \[ \pi_t = E_t \pi_{t+1} + \kappa x_t.\] Here \(x_t\) represents the output gap (how much output is above or below potential output), measures of labor market tightness like unemployment (with a negative sign), or labor costs. (Fed Governor Chris Waller has a great speech on the Phillips curve, with a nice short clear explanation. There are lots of academic explanations of course, but this is how a sharp sitting member of the FOMC thinks, which is what we want to understand. BTW, Waller gave an even better speech on climate and the Fed. Go Chris!)  

So how does the Fed change inflation? In most analysis, the Fed raises interest rates; higher interest rates cool down the economy lowering factor x; that pushes inflation down. But does the equation really say that? 

This intuition thinks of the Phillips curve as a causal relation, from right to left. Lower \(x\) causes lower inflation. That's not so obvious. In one story, the Phillips curve represents how firms set prices, given their expectation of other's prices and costs. But in another story, aggregate demand raises prices, and that causes firms to hire more (Chris Waller emphasized these stories). 

This reading may help to digest an otherwise puzzling question: Why are the Fed and its watchers so obsessed with labor markets? This inflation certainly didn't start in labor markets, so why put so much weight on causing a bit of labor market slack? Well, if you read the Phillips curve from right to left, that looks like the one lever you have. Still, since inflation clearly came from left to right, we should put more emphasis on curing it that way. 

2. Adjustment to equilibrium vs. equilibrium dynamics. 

But does the story work? Lower \(x_t\) lowers inflation \(\pi_t\) relative to expected future inflation \(E_t \pi_{t+1}\). Thus, it describes inflation that is rising over time. This does not seem at all to be what the intuition wants. 

So how do we get to the intuition that lower \(x_t\) leads to inflation that goes down over time? (This is on p. 16 of the paper, by the way.) An obvious answer is adaptive expectations: \(E_t \pi_{t+1} = \pi_{t-1}\). Then lower \(x_t\) does mean inflation today lower than it was in the past. But the Fed and most commenters really don't want to go there. Expectations may not be "rational," and in most commentary they are either "anchored" by faith in the Fed, or driven by some third force. But they aren't mechanically last year's inflation. If they were, we would need much higher interest rates to get real interest rates above zero. Perhaps the intuition comes from remembering these adaptive-expectations dynamics, and not realizing that the new view that expectations are forward-looking, even if not rational, undermines those dynamics. 
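Here is a minimal simulation (my own illustration, not from the paper) of the difference: the same path of slack \(x_t\), run through the Phillips curve under the two expectational assumptions.

```python
import numpy as np

# Same path of slack x_t (negative for 10 periods, then zero) through the
# Phillips curve pi_t = E_t pi_{t+1} + kappa * x_t, under
#  (a) adaptive expectations, E_t pi_{t+1} = pi_{t-1}; and
#  (b) forward-looking (perfect-foresight) expectations with long-run
#      inflation anchored at zero, so pi_t = kappa * sum of current and future x.
# Numbers are made up for illustration.

kappa = 0.2
T = 20
x = np.zeros(T)
x[:10] = -1.0                                  # ten periods of slack

# (a) adaptive: pi_t = pi_{t-1} + kappa * x_t, starting from pi_{-1} = 0
pi_adaptive = np.zeros(T)
prev = 0.0
for t in range(T):
    pi_adaptive[t] = prev + kappa * x[t]
    prev = pi_adaptive[t]

# (b) forward-looking: pi_t = kappa * (x_t + x_{t+1} + ...), anchored at zero
pi_forward = kappa * np.cumsum(x[::-1])[::-1]

print("adaptive:", pi_adaptive.round(2))       # drifts steadily down, stays down
print("forward :", pi_forward.round(2))        # jumps down at once, then rises back to zero
```

Slack produces gently declining inflation only under the adaptive version; under the forward-looking version it produces inflation that is low now and rising, which is exactly the tension described above.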

Another answer  may be confusion between adjustment to equilibrium and movement of equilibrium inflation over time. Lower \(x_t\) means lower inflation \(\pi_t\) than would otherwise be the case. But that  reduction is an adjustment to equilibrium. It's not how inflation we observe -- by definition, equilibrium inflation -- evolves over time. 

This is, I think, a common confusion. It's not always wrong. In some cases, adjustment to equilibrium does describe how an equilibrium quantity changes, and in a more complex model that adjustment plays out as a movement over time. For example, a preference or technology shock might give a sudden increase in capital; add adjustment costs and capital increases slowly over time. A fiscal shock or money supply shock gives a sudden increase in the price level; add sticky prices and you get a slow increase in the price level over time. 

But we already have sticky prices. This is supposed to be the model, the dynamic model, not a simplified model. Here, inflation lower than it otherwise would be is not the same thing as inflation that goes down slowly over time. It's just a misreading of equations. 

Another possibility is that verbal intuition refers to the future, \[ E_t \pi_{t+1} = E_t \pi_{t+2} + \kappa E_t x_{t+1} .\]Now, perhaps, raising interest rates today lowers future factor x, which then lowers future inflation \(E_t\pi_{t+1}\) relative to today's inflation \(\pi_t\). That's still a stretch however. First, the standard new-keynesian model does not have such a delay. \[x_t = E_t x_{t+1} - \sigma(i_t - E_t \pi_{t+1})\]says that higher interest rates also immediately lower output, and lower output relative to future output. Higher interest rates also raise output growth. This one is more amenable to adding frictions -- habits, capital accumulation, and so forth -- but the benchmark model not only does not have long and variable lags, it doesn't have any lags at all.  Second, maybe we lower inflation \(\pi_{t+1}\) relative to its value \(\pi_t\), in equilibrium, but we still have inflation growing from \(t+1\) to \( t+2\). We do not have inflation gently declining over time, which the intuition wants. 

We are left -- and this is some of the point of my paper -- with a quandary. Where is a model in which higher interest rates lead to inflation that goes down over time? (And, reiterating the point of the paper, without implicitly assuming that fiscal policy comes to the rescue.) 

3. Fisherian intuition

A famous economist, who thinks largely in the ISLM tradition, once asked me to explain in simple terms just how higher interest rates might raise inflation. Strip away all price stickiness to make it simple; still, the Fed raises interest rates and... now what? Sure, point to the equation \( i_t = r + E_t\pi_{t+1} \), but what's the story? How would you explain this to an undergraduate or MBA class? I fumbled a bit, and it took me a good week or so to come up with the answer. From p. 15 of the paper, 

First,  consider the full consumer first-order condition \[x_t = E_t x_{t+1} - \sigma(i_t -E_t \pi_{t+1})\] with no pricing frictions.  Raise the nominal interest rate \(i_t\).  Before prices change, a higher nominal interest rate is a higher real rate, and induces people  to demand less today \(x_t\) and more next period \(x_{t+1}\).  That change in demand pushes down the price level today \(p_t\) and hence current inflation \(\pi_t = p_t - p_{t-1}\), and it pushes up  the expected price level next period \(p_{t+1}\) and thus expected future inflation \(\pi_{t+1}=p_{t+1}-p_t\). 

So, standard intuition is correct, and refers to a force that can lower current inflation. Fisherian intuition is correct too, and refers to a natural force that can raise expected future inflation. 

But which is it, lower \(p_t\) or higher \(p_{t+1}\)? This consumer first-order condition, capturing an intertemporal substitution effect, cannot tell us. Unexpected inflation and the overall price level are determined by a wealth effect. If we pair the higher interest rate with no change in surpluses, and thus no wealth effect, then the initial price level \(p_t\) does not change [there is no devaluation of outstanding debt] and the entire effect of higher interest rates is a rise in \(p_{t+1}\). A concurrent rise in expected surpluses leads to a lower price level \(p_t\) and less current inflation \(\pi_t\). Thus, in this context, standard intuition also implicitly assumes that fiscal policy acts in concert with monetary policy. 
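In case it helps, here is the frictionless bottom line in one equation (my gloss, not the paper's): with fully flexible prices output is at potential, so \(x_t = E_t x_{t+1}\), and the first-order condition collapses to \[ 0 = -\sigma\left(i_t - E_t \pi_{t+1}\right) \quad \Rightarrow \quad E_t \pi_{t+1} = i_t, \] in deviations from steady state (with the real rate written explicitly, \(i_t = r + E_t \pi_{t+1}\)). A higher nominal rate must be matched one-for-one by higher expected inflation; the intertemporal-substitution and wealth-effect stories above describe which mix of lower \(p_t\) and higher \(p_{t+1}\) delivers it. 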

In both these stories, notice how much intuition depends on describing how equilibrium forms. It's not rigorous. Walrasian equilibrium is just that, and does not come with a price adjustment process. It's a fixed point, the prices that clear markets, period. But believing and understanding how a model works needs some sort of equilibrium formation story. 

4. Adaptive vs. rational expectations 

The distinction between rational (or at least forward-looking) and adaptive (backward-looking) expectations is central to how the economy behaves. That's a central point of the paper. It would seem easy to test, but I realize it's not. 

Writing in May 2022, I contrasted adaptive (backward-looking) and rational (forward-looking) expectations, and noted among other points that under adaptive expectations we need nominal interest rates above current inflation -- i.e. much higher than they are -- to produce positive real interest rates, while that isn't necessarily true with forward-looking expectations. You might be tempted to test for rational expectations, or to look at surveys to pronounce them "rational" vs. "behavioral," a constant temptation. I realize now it's not so easy (p. 44): 

Expectations may seem adaptive.  Expectations must always be, in equilibrium, functions of variables that people observe, and likely weighted to past inflation. The point of "rational expectations'' is that those forecasting rules are likely to change as soon as a policy maker changes policy rules, as Lucas  famously pointed out in his "Critique."  Adaptive expectations may even be model-consistent [expectations of the model equal expectations in the model] until you change the model.

That observation is important in the current policy debate. The proposition that interest rates must be higher than current inflation in order to lower inflation assumes that expected inflation equals current inflation -- the simple one-period lagged adaptive expectations that I have specified here. Through 2021-2022, market and survey expectations were much lower than current (year on year) inflation. Perhaps that means that markets and surveys have rational expectations: Output is temporarily higher than the somewhat reduced post-pandemic potential, so inflation is higher than expected future inflation (\(\pi_t = E_t \pi_{t+1} + \kappa x_t\)). But that observation could also mean that inflation expectations are a long slow-moving average of lagged inflation, just as Friedman speculated in 1968 (\(\pi^e_t = \sum_{j=1}^\infty \alpha_j \pi_{t-j}\)). In either case, expected inflation is much lower than current inflation, and interest rates only need to be higher than that low expectation to reduce inflation. Tests are hard, and you can't just look at in-sample expectations to proclaim them rational or not. 
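To illustrate the second possibility, here is a minimal sketch in Python. The inflation path and the geometric weights are hypothetical numbers of mine, purely for illustration; they are not estimates.

    # Expected inflation as a slow-moving geometric average of lagged inflation,
    # pi_e(t) = sum over j>=1 of a_j * pi(t-j), with a_j = (1 - lam) * lam**(j-1).
    lam = 0.7                          # persistence of the weights (hypothetical)
    history = [2.0] * 30 + [8.0]       # years of 2% inflation, then one year at 8% (hypothetical)
    weights = [(1 - lam) * lam**j for j in range(len(history))]
    pi_e = sum(w * p for w, p in zip(weights, reversed(history)))
    print(round(pi_e, 1))              # about 3.8 -- far below the 8% current inflation

Backward-looking expectations of this kind sit well below an inflation spike for a long time, so low survey and market expectations in 2021-2022 do not, by themselves, tell the two stories apart. 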

Rational expectations change when policy deviates from a rule, or when the policy rule changes. That's their key feature. Perhaps we should talk about rational vs. exogenous expectations. 

5. A few final Phillips curve potshots

It is still a bit weird that so much commentary is so focused on the labor market to judge pressure on inflation. This inflation did not come from the labor market! 

Some of this labor market focus makes sense in the new-Keynesian interpretation of the Phillips curve: firms set prices based on the expected future prices of their competitors and on marginal costs, which are largely labor costs. That echoes the 1960s "cost push" view of inflation (as opposed to its nemesis, "demand pull" inflation). But it raises the question: well, why are labor costs going up? The link from interest rates to wages is about as direct as the link from interest rates to prices. This inflation did not come from labor costs; maybe we should fix the actual problem? Put another way, the Phillips curve is not a model. It is part of a model, and lots of equations have inflation in them. Maybe our focus should be elsewhere. 

Back to Chris Waller, whose speech seems to me to capture well sophisticated thinking at the Fed. Waller points out how unreliable the Phillips curve is: 

What do economic data tell us about this relationship? We all know that if you simply plot inflation against the unemployment rate over the past 50 years, you get a blob. There does not appear to be any statistically significant correlation between the two series.


In more recent years, as unemployment went up and down but inflation didn't move much, the Phillips curve seemed "flat": 

the Phillips curve was very flat for the 20-plus years before the pandemic, 

You can see this in the decline of unemployment through early 2020, as marked, with no change in inflation. Then unemployment surged in 2020, again with no deflation. 2009 was the last time there was any slope at all to the Phillips curve. 

But is it "flat" -- a stable, exploitable, flat relationship -- or is it just a stretched out "blob", two series with no stable relationship, one of which just got stable? 

In any case, as unemployment went back down to 3.5 percent in 2022, inflation surged. You can forgive the Fed a bit: We had 3.5% unemployment with no inflation in 2020, why should we worry about 3.5% unemployment in 2022? I think the answer is, because inflation is driven by a whole lot more than unemployment -- stop focusing on labor markets! 

A flat curve, if it is a curve, is depressing news: 

 Based on the flatness of the Phillips curve in recent decades, some commentators argued that unemployment would have to rise dramatically to bring inflation back down to 2 percent. 

At best, we retrace the curve back to 2021 unemployment. But (I'll keep harping on this), note the focus on the error-free Phillips curve as if it is the entire economic model. 

Waller views the new Phillips curve as a "curve" that has become steeper, and cites confirming evidence that prices are changing more often and thus becoming more flexible. 

... considering the data for 2021... the Phillips curve suddenly looked relatively steep... since January 2022, the Phillips curve is essentially vertical: The unemployment rate has hovered around 3.6 percent, and inflation has varied from 7 percent (in June) to 5.3 percent (in December).

Waller concludes: 

A steep Phillips curve means inflation can be brought down quickly with relatively little pain in terms of higher unemployment. Recent data are consistent with this story.

Isn't that nice -- from horizontal to vertical all on its own, and in the latest data points inflation going straight down. 

Still, perhaps the right answer is that this is a cloud of coincidence, not the central, causal, structural relationship with which to think about how interest rates affect inflation. 

If only I had a better model of inflation dynamics...




Thursday, 08 June 2023

Cost Benefit Comments



The Biden Administration is proposing major changes to the cost-benefit analysis used in all regulations. The preamble is here, and the full text here. It is open for public comment until June 20. 

Economists don't often comment on proposed regulations. We should do so more often; agencies take such comments seriously. And comments can have an afterlife: I have seen them cited in litigation and in judicial decisions. Even if you doubt the Biden Administration's desire to hear you on cost-benefit analysis, a comment is a marker that the inevitable eventual Supreme Court case might well consider. Comments tend to come only from interested parties and lawyers. Regular economists really should weigh in more. I don't do it enough either. 

You can see existing comments:  Search for Circular A-4 updates to get to https://www.regulations.gov/docket/OMB-2022-0014, then select “browse all comments.” (Thanks to a good friend who sent this tip.) 

Take a look at comments from an MIT team led by Deborah Lucas here and by Josh Rauh. These are great models of comments. You don't have to review everything.   Make one good point. 

Cost benefit analysis is useful even if imprecise. Lots of bright ideas in Washington (and Sacramento!) would struggle to document any net benefits at all. Yes, these exercises can lie, cheat, and steal, but having to come up with a quantitative lie can lay bare just how hare-brained many regulations are. 

Both Josh and the MIT response focus on the draft proposal's use of ultra-low discount rates, ranging from historic TIPS yields to arguments for zero or negative "social" discount rates. Josh emphasizes a beautiful compromise: always show the annual stream of costs and benefits. Then it's easy enough to apply different discount rates. No Black Boxes. 

Discount rates seem like a technical issue. But they matter a lot for climate policies, or for policies with substantial cost but putatively permanent benefits, because of the long horizons. For example, climate change is alleged to create costs of 5% of GDP in 100 years. So, let's assume a 0% discount rate -- treat the future just like the present. How much is it worth spending this year to eliminate additional climate change in 2100? Spend means real spending, real reductions in everyone's standard of living, not just funny money billions on twitter. 

If you answered "5% of GDP" (roughly $3,500 per person), that's wrong, for two crucial reasons. First, the economy grows over time. At a modest 2% real growth, US GDP will be 7.4 times as large in 100 years as it is today, or 640% greater (e^2=7.4). Thus, 5% of GDP in 100 years, discounted at 0%, is 7.4 x 5%, or 37% of today's GDP -- roughly $26,000 per person today. Second, the gain is forever -- not just 5% of 2123 GDP, but 5% of 2124 GDP, and so on. Discounted at a zero rate, 5% of GDP from 2123 forever after is worth... an infinite amount today. And GDP keeps growing after 2123, so if you discount at anything less than the growth rate of GDP -- 2% in my example -- 5% of (growing) GDP forever is still worth an infinite amount! So what if $250 billion subsidizing huge-battery, long-range electric cars, made by union labor in the USA from hypothetical US-made lithium mines, might, all in, save only a thimbleful of carbon per car (is it even positive?) -- if the benefits are infinite, go for it. 

If you discount instead at a still modest but somewhat more reasonable number like 7%, then a dollar in 100 years is worth 0.09 cents today (100 x e^-7). Now you know where to put your thumb on the climate scales! 
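Here is that arithmetic as a quick sketch in Python; the 2% growth rate, the 5%-of-GDP damage, and the 7% discount rate are simply the numbers used above, not estimates of mine.

    import math

    g, T = 0.02, 100
    growth_factor = math.exp(g * T)              # about 7.4: GDP in 2123 relative to today
    share_of_todays_gdp = 0.05 * growth_factor   # about 0.37: 5% of 2123 GDP, valued today at a 0% discount rate
    print(round(growth_factor, 1), round(share_of_todays_gdp, 2))

    # A perpetuity of 5% of growing GDP -- 0.05 * exp((g - r) * t) summed over t --
    # has no finite value whenever the discount rate r is at or below the growth rate g.

    r = 0.07
    value_of_2123_dollar = math.exp(-r * T)      # about 0.0009 dollars
    print(round(100 * value_of_2123_dollar, 2))  # about 0.09 cents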

You might be wondering: if our great-grandchildren are going to be so fantastically better off than we are, let them deal with it. Or you may be wondering whether there are other things we could do with the money today that might speed up this magical growth process and do better than that 5%. For an infinite amount of money, is there nothing we can do to raise the growth rate from 2% to 2.05%? 

The latter, opportunity-cost question is, I think, a good way to think of discount rates. The average real return on stocks is something like 5%, at least. The average pre-tax marginal product of capital is higher; pick your number, but it's in the range of 10%, not 1%. The right "discount rate" is the rate of return on alternative uses of money. Josh and the MIT team are exactly right to point out that using the rate of return on risk-free government bonds is a completely mistaken way to discount the very risky costs of climate damage -- that 5% is a very poorly known number -- and the even riskier benefits of the government's shifting climate policy passions. But I think phrasing the experiment in terms of opportunity costs rather than proper discounting of risky streams makes it more salient, despite the decades I have spent (and an entire book!) on the latter approach. Businesses can take $1 today and turn it into, on average, $1.07 next year. Why take away that money for a project that yields $1.00 or $1.01 next year? 
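Compounded over the century-long horizons at issue here, that gap is enormous (my arithmetic): \( \$1 \times e^{0.07 \times 100} \approx \$1{,}100 \) left in the business, versus \( \$1 \times e^{0.01 \times 100} \approx \$2.72 \) in the low-return project. 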

The former question has a deeper consequence. Why should we suffer to help people, even our grandchildren, who will be on average 7.4 times better off than we are? How much would you ask your great grandparents to sacrifice to make you 5% better off than you are today? 

Here the low discount rate clashes interestingly with another part of the proposal: equity and transfers. 

From the preamble p. 12: 

A standard assumption in economics, informed by empirical evidence (as discussed below), is that an additional $100 given to a low-income individual increases the welfare of that individual more than an additional $100 given to a wealthy individual. Traditional benefit-cost analysis, which applies unitary weights to measures of willingness to pay, does not usually take into account how distributional effects may affect aggregate welfare because of differences in individuals’ marginal utility of income. Related to the topic of distributional analysis is the question of whether agencies should be permitted or encouraged to develop estimates of net benefits using weights that take account of these differences.26 The proposed revisions to Circular A-4 suggest that agencies may wish to consider weights for each income group affected by a regulation that equal the median income of the group divided by median U.S. income, raised to the power of the elasticity of marginal utility times negative one.

Now wait a darn-tootin' minute. The "standard" doctrine in economics is that you cannot make interpersonal utility comparisons. Utility is ordinal, not cardinal. Here, cardinal-utility utilitarianism with equal Pareto weights is about to be carved into federal stone. (To decide the social benefit of taking from A and giving to B, you construct a social welfare function \(u(c_A) + \lambda u(c_B)\). This requires you to use the same \(u(\cdot)\) for A and B, and to agree on a Pareto weight \(\lambda\), implicitly one here.) 

Imagine a simple regulation: take a dollar from Joe ($100,000 income) and give it to Kathy ($50,000 income). By this standard, such a straightforward transfer passes a cost-benefit test. 
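To see how the proposed weighting delivers that result, here is a minimal sketch in Python, treating Joe and Kathy as their own income groups. The $70,000 figure for median U.S. income and the elasticity of 1.4 are round illustrative numbers of mine, not values from the draft.

    # Proposed weight: (group median income / U.S. median income) ** (-elasticity of marginal utility)
    us_median = 70_000     # rough illustrative figure
    eta = 1.4              # elasticity of marginal utility, inside the draft's 1-to-2 range

    def weight(income):
        return (income / us_median) ** (-eta)

    joe, kathy = 100_000, 50_000
    # A $1 transfer: the cost carries Joe's weight, the benefit carries Kathy's weight.
    net_benefit = weight(kathy) * 1.0 - weight(joe) * 1.0
    print(round(weight(joe), 2), round(weight(kathy), 2), round(net_benefit, 2))
    # roughly 0.61, 1.6, 0.99: Kathy's weight exceeds Joe's, so the pure transfer "passes."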

But this does not get applied over time. Taking a dollar from you and me and, at a discount rate of 0%, giving it to our great-grandchildren, who will be 7.4 times better off, should set off massive inequity alarm bells. Nope. 

Indeed, you can deduce a discount rate from the inequality goal. Pure undiscounted intergenerational equity requires a discount rate proportional to the economic growth rate. 

(With power utility, an intervention that costs A $1 to give B \(\$e^{rt}\) just passes a cost-benefit test if \[c_A^{-\gamma} = e^{rt} (c_B)^{-\gamma}.\] If B is \(e^{gt}\) times as well off as A, so \(c_B=e^{gt} c_A\), the condition becomes \(c_A^{-\gamma} = e^{(r-\gamma g)t} c_A^{-\gamma}\), and we need \(r=\gamma g\). \(\gamma\) is usually a number a bit bigger than one.) The preamble's discussion of \(\gamma\) values is pretty good, settling on a number between one and two. However, they haven't really heard of the finance literature: 

Evidence on risk aversion can be used to estimate the elasticity of marginal utility. In a constant-elasticity utility specification, the coefficient of relative risk aversion is the elasticity of marginal utility. There are numerous different estimates of the coefficient of relative risk aversion (CRRA), using data from a variety of different markets, including labor supply markets,29 the stock market,30 and insurance markets.31 Relevant estimates vary widely, though assumed values of the CRRA between 1 and 2 are common.32

30 Robert S. Pindyck, “Risk Aversion and Determinants of Stock Market Behavior,” The Review of Economics and Statistics 70, no. 2 (1988): 183-90 uses stock market data and estimates the CRRA to be “in the range of 3 to 4"

Since then, of course, the whole equity premium literature sprang up with coefficients 10 to 50. Shh. That would justify insane levels of equity. 
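To put numbers on "insane" (my arithmetic, not the preamble's): with \(\gamma = 10\), the proposed weight on a household at half the median income is \(0.5^{-10} \approx 1{,}000\) times the weight on a median household, and the implied intergenerational discount rate from the algebra above is \(\gamma g = 10 \times 2\% = 20\%\). 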

The draft also encourages all sorts of unquantifiable non-economic "benefits," but I'll leave that for another day. 

Read and comment. 

BTW, despite my negative tone and my picking on these elements, much of the draft is quite good. Here is a particularly nice piece, from p. 26 of the full text: 

j. A Note Regarding Certain Types of Economic Regulation

In light of both economic theory and actual experience, it is particularly difficult to demonstrate positive net benefits for any of the following types of regulations:

• price controls in well-functioning competitive markets;

• production or sales quotas in well-functioning competitive markets;

• mandatory uniform quality standards for goods or services, if the potential problem can be adequately dealt with through voluntary standards or by disclosing information of the hazard to buyers or users; or

• controls on entry into employment or production, except (a) where needed to protect health and safety (e.g., Federal Aviation Administration tests for commercial pilots) or (b) to manage the use of common property resources (e.g., fisheries, airwaves, Federal lands, and offshore areas).

Well, FAA tests and rules for commercial pilots are not actually quite so obvious, and really need a cost-benefit test. "Commercial pilot" does not mean "airline pilot"; it means doing anything in an airplane and getting paid for it. But leave that for another day; these principles, if applied, could clean out a lot of mischief. Of course, I guess many on the progressive left or the nascent national-conservative right would deny there is such a thing as a "well-functioning competitive market." 

Update: 

I should have added: it's insane to do a once-and-for-all cost-benefit analysis, especially for projects with 100-year horizons. All regulations should be re-evaluated every 5 to 10 years, using experience to update costs and benefits. 

 

   

Monday, 05 June 2023

Stephens at Chicago; effective organizations

Bret Stephens gave a great commencement speech (NYT link, HT Luis Garicano) at the University of Chicago. One part stood out to me, and is worthy of comment. Bret starts with the problem of groupthink:

Why did nobody at Facebook — sorry, Meta — stop Mark Zuckerberg from going all in on the Metaverse, possibly the worst business idea since New Coke? Why were the economists and governors at the Federal Reserve so confident that interest rates could remain at rock bottom for years without running a serious risk of inflation? Why did the C.I.A. believe that the government of Afghanistan could hold out against the Taliban for months but that the government of Ukraine would fold to the Russian Army in days? Why were so few people on Wall Street betting against the housing market in 2007? Why were so many officials and highly qualified analysts so adamant that Saddam Hussein had weapons of mass destruction? Why were so many people convinced that overpopulation was going to lead to catastrophic food shortages, and that the only sensible answers were a one-child policy and forced sterilizations?

Oh, and why did so many major polling firms fail to predict Donald Trump’s victory in 2016?

Conspicuous institutional failures are the question of our age. We could add the SVB regulatory fiasco, the 2007 financial regulatory failure, the CDC, FDA, and numerous governments under Covid, and many more. Systemic incompetence doesn't just include disasters, but also ongoing wounds, from the Jones Act to California's billions wasted on obviously ineffective homeless spending. 

The list is a bit unfair, of course. Selection bias: These are the grand failures, but large organizations occasionally produce some successes. For every Metaverse there is an iPhone, which I certainly thought a dumb idea at the time. And it's always easy to see idiocy with hindsight, but it's a lot harder in real time. De-growthing our economies and spending trillions in the name of carbon reduction will be seen, 20 years from now, either as a farsighted visionary move that saved civilization, or a grand collective delusion. Which is it? Who is the naked emperor and who is the little girl on the sidelines of the parade? Remember too that the gadflies are usually wrong. 

But the question on my mind is this: How do you structure large organizations to avoid such catastrophic mistakes? As an economist, and a macroeconomist at that, it's something I don't know anywhere near enough about. 

Bret: 

... Why is it that, when you bring together a lot of smart people in a room, their collective intelligence tends to go down, not up? Why do they always seem to press the mute button on their critical faculties when confronted with propositions that, as an old colleague of mine liked to say, ought to vanish in the presence of thought? 

It's not obvious that people's critical faculties are impaired, but their incentives to speak out certainly are. 

First, the problem isn’t that people aren’t smart. It’s that they are scared.

To yell stop when everyone else says go — or go when everyone else says stop — takes guts, and guts aren’t part of any kind of normal college curriculum. In my generation, the hardest people to say “no” to were the people who had professional power over us. In your generation, I think, it’s the people who are in your own ideological tribe. Whatever it is, how many of us, if we’re honest with ourselves, really have that kind of courage?

Second, there is the problem of rationalization — of smart people convincing themselves, and others, of some truly dumb things.

Robert McNamara, one of the original “Whiz Kids” and probably one of the brighter bulbs in 20th-century American public life, was one of the fathers of the Vietnam War when he was at the Pentagon, and of the Third World debt crisis when he was at the World Bank. Somehow, he always managed to convince the other smart people in the room that he was right. Will you be able to notice the underlying flaw in an idea when the arguments for it sound so persuasive?

 Or, he convinced them to silence their doubts and go along. 

Third, there is the psychological dimension.

Some people are inveterate truth seekers. They are almost congenitally willing to risk rejection, ostracism, even hatred for the sake of being right. But most people just want to belong, and the most essential elements of belonging are agreeing and conforming. ...the usual emotional companion to intellectual independence isn’t pride or self-confidence. It’s loneliness and sometimes crippling self-doubt.

This is insightful, but it's not getting us to the question on my mind: Why do some institutions seem more prone to groupthink disasters than others? Bret's final insight gets to that: 

here’s a fourth factor, maybe the most crucial. It’s culture. Does the culture of a society, or of an institution, encourage us to stand out or to fit in; to speak up or to bury our doubts? Does it serve as a conduit to groupthink, or as an obstacle to it?

I mentioned a moment ago that all of us like to think of ourselves as independent thinkers, even if comparatively few of us really are. There’s an institutional corollary. Nearly every American institution outside of certain religious orders claims to encourage open debate and — that awful cliché — thinking “outside the box.” Apple’s famous slogan, “Think Different,” was one of the most successful ad campaigns of my lifetime...

But, at least in my experience, very few institutions truly welcome it, at least when it exposes them to any sort of pressure or criticism, much less loss of social capital or potential revenue...

But this doesn’t always have to be the case. Institutions can, in fact, practice what they preach. They can declare principles, set a tone, announce norms and expectations — and then live up to their principles through regular practice. They can explain to every incoming class of students or new employees that they champion independent thinking and free expression in both word and deed. They can prove that they won’t cave to outrage mobs and other forms of public pressure, either by canceling invited speakers or by never inviting controversial speakers in the first place.

There’s a way this is done. It’s called leadership. You have one magnificent example of it right on this stage, in the person of John Boyer. And you have had a historic example of it in the person of Bob Zimmer. I want to say a few words about him.

That's as far as Bret goes, appropriately for a graduation speech at Chicago. So we have one answer to my question: some institutions have cultures that welcome "the emperor is naked" commentary, and most do not. Leaders can set cultures. 

I think this just scratches the surface. A college's free-speech culture is nowhere near as consequential as a government's decision to go to war, or any of Bret's other examples. Institutions eventually have to have mechanisms for coming to a decision, closing ranks, and pursuing it. Whether you're going to go to war or not, you have to make a decision and not keep arguing about it forever. If you've ever participated in any group decision, you know there are gadflies bringing up stupid points over and over, and if you have too much discussion you're never going to get anywhere. I think institutions in today's government are in CYA mode for good political reasons. The Fed doesn't have a groupthink culture because it wants to, but because in today's Washington admitting mistakes would lead to a completely ineffective institution under constant attack. Again, the gadflies are mostly wrong too! 

I do think there are additional institutional structures that could help promote good decision-making. An official devil's advocate for big decisions -- with assurance that the role isn't a career dead end -- is one useful concept I've heard of. But the larger question of just what those structures are remains something I'd like to know more about. 

