Inflation Targeting Revisited

If factors outside the monetary system create inflation, how can the U.S. Federal Reserve’s Federal Open Market Committee (FOMC) be confident it can use its inflation-targeting model to steer inflation toward the 2% goal?  There is no consensus opinion on the matter.  The model was pioneered by the central bank of New Zealand in 1990 and later codified into law in Great Britain and Canada, among others; the U.S. Federal Reserve adopted it as policy in 2012 under then-Chairman Ben Bernanke, with then-Vice Chair Janet Yellen widely credited with determining the appropriateness of the 2% figure.  A New York Times article from December 2014 explains: “The practice was so successful in making the high inflation of the 1970s and ’80s a thing of the past that all of the world’s most advanced nations have emulated it in one form or another,” such that “the 2% inflation target is now the norm across much of the world.”  It’s a nice story.

Supporters of the approach argue that companies and households are better positioned, and more incentivized, to invest capital for the future when prices are increasing in a modest and predictable manner, often called the ‘normalized rate’ of inflation by economists.  The FOMC and other central banks put this model into practice by changing the short-term benchmark interest rate, on the idea that increases in the rate reduce the amount of available credit and decreases in the rate augment lending.  As such, the model assumes inflation is driven in a top-down manner: decisions made by central bankers on rate levels determine the availability of credit for companies and households, which changes the demand for goods and services and, in turn, consumer prices.  That’s the flow chart.

Challengers of the inflation-targeting model focus on the fact that it has never actually been deployed to create inflation.  There is no debate over the validity of the original “Taylor Rule,” on which the inflation-targeting model hinges.  John Taylor (1993) observed inflation fall precipitously after FOMC Chairman Paul Volcker held the benchmark lending rate above 17% for 28 consecutive months in the early 1980s, and his original work, which posits a trade-off between higher interest rates and lower inflation, simply made that relationship a formal one.  But the argument changed in 1999, when the “Taylor Rule” was reformulated so that the relationship between interest rates and inflation became a two-way one, meaning central bankers could use the benchmark lending rate to also create inflation in a ‘knob-like’ fashion.  Unfortunately, non-preventative government policies rarely work as intended.  While it is easy and common for regulators to use policy to slow or stop activity, the opposite is unusual because it is ineffective.  Imagine a traffic cop trying to get cars to drive faster.  Most of us have seen it, and it is, sadly, ineffective.  To be fair to policy makers, and more importantly traffic cops, the idea of incentivizing buyers with lower costs to pull future purchases forward is common practice, readily observed in corporate boardrooms and across kitchen tables.  But without identifiable demand, one cannot expect companies to build out additional capacity, right?  And would we even want that?  Instead of increasing consumer inflation, the FOMC policy of low interest rates has made more credit available to channels already in place, in particular property, shale energy, and student loans.  This has simply led to asset price appreciation.
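For concreteness, Taylor’s original rule can be stated in its standard textbook form (the notation below is the conventional one found in the literature, not quoted from the works cited here):

```latex
% Taylor (1993) rule, standard statement:
%   i_t            : recommended nominal federal funds rate
%   \pi_t          : current inflation rate
%   \pi^*          : inflation target (Taylor used 2%)
%   r^*            : equilibrium real interest rate (Taylor assumed 2%)
%   y_t - \bar{y}_t : output gap (percent deviation of output from potential)
i_t = \pi_t + r^* + 0.5\,(\pi_t - \pi^*) + 0.5\,(y_t - \bar{y}_t)
```

With inflation at target and a zero output gap, the rule prescribes a 4% nominal rate (a 2% real rate plus 2% inflation); each point of inflation above target raises the prescribed rate by 1.5 points. This is the formal version of the trade-off between higher rates and lower inflation described above.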

“Most banking system actions and most credit creation in modern economies funds a competition between ourselves for the ownership of assets that already exist, in particular that’s real estate,” notes Lord Adair Turner (2015).  “This means that when we try to stimulate the economy simply by having low interest rates, the vast majority of the impetus of that simply goes into asset purchases.”  Turner supports this view with data showing the property sector accounting for 60% of all bank loans across 17 advanced countries in 2007, compared with just 30% in 1928.

Ultimately, debt is not a zero-sum game in which losses to creditors are equally offset by gains to borrowers.  Rather, when loans are secured against property and prices fall sharply, both the lender and the borrower suffer: the borrower loses his deposit (and possibly his home), while the lender has to write down the value of the loan.  The same situation is unfolding in the U.S. shale sector, where $1 trillion has been invested since 2011 and 50% could be forced offline at today’s prices.  This means that losses in energy will translate into higher borrowing costs for all corporate credits.  It’s already in motion, too: yields on bonds issued by non-energy firms are up to 7.7% from 5.3% in June 2014.  What this tells us is that the financial markets, rather than the U.S. Federal Reserve, will determine the future path of corporate rates.  This will happen quickly, and when defaults rise, rates will rise sharply.  With all due respect to New Zealand, there is a strong argument that the monetary policy of a 4.5-million-person island nation with 30 million cows may not be the best fit for the world’s reserve-currency nation.  As Dylan Grice (2013) observed, the last time the U.S. Federal Reserve practiced a policy of price stabilization was in the 1920s under Chairman Benjamin Strong, and that did not end very well.  The build-up in asset prices will come undone.