First advanced in the Great Moderation,
That would have the Fed set
The cost of our debt
Based on output as well as inflation.
Then one of those Keynesian guys
Came and asked: "Do you think it is wise
To posit, post-crisis,
The worst is that prices
May lag, as the model implies?"
"A recession of untold ferocity
And slow monetary velocity
Demand a new means
To grease the machines,
And yours lacks the needed viscosity."
The Taylor Rule is one of those economic concepts of which I often hear mention, but on which I rarely focus. Created by Stanford professor John Taylor and others in the early '90s, the Rule would have the Federal Reserve raise (or lower) the base interest rate by about 1.5 percentage points for every one-percentage-point change in inflation. A one-percentage-point change in GDP would call for a half-percentage-point change in rates. (For those who appreciate the beauty of algebra, the Taylor equation below is explained in the link above.) Depending on which economist you talk to, Prof. Taylor has given us either a useful rule of thumb or an article of faith.
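For the curious, the arithmetic above can be sketched in a few lines of Python. This is a minimal illustration of the textbook form of the rule, not the Fed's actual procedure; the 2% neutral real rate and 2% inflation target are standard textbook assumptions, not figures from this post.

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Suggested nominal policy rate, in percent.

    The 0.5 weights reproduce the behavior described above: a
    one-point rise in inflation moves the prescribed rate by 1.5
    points, and a one-point change in the output gap moves it by
    0.5 points. r_star and pi_target are assumed values.
    """
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# At target inflation with no output gap, the rule returns the neutral rate:
print(taylor_rate(inflation=2.0, output_gap=0.0))   # 4.0
# One extra point of inflation raises the prescribed rate by 1.5 points:
print(taylor_rate(inflation=3.0, output_gap=0.0))   # 5.5
```

Note that nothing in the formula stops it from prescribing a negative rate in a deep slump, which is part of the post-crisis complaint discussed below.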
This weekend, I was absorbed by a blog post from University of Oregon professor Mark Thoma, which questions adherence to Taylor Rule orthodoxy in these days of the deleveraging-driven Great Recession. At Economist's View, Prof. Thoma and others argue that it's silly to hold the Fed to a rule that assumes no economic frictions except for "mild price stickiness." Evidently, the Taylor Rule would have the Fed set rates much higher than zero, but even future Nobel Prize winners should know that unquestioning adherence to a model may have adverse real-world consequences.