I will be running a MOOC (massive open online course) this fall. Follow the link for information. The class will roughly parallel my PhD asset pricing class. We'll run through most of the "Asset Pricing" textbook. The videos are all shot; now I'm putting together quizzes... which accounts for some of my recent blog silence.
So, if you're interested in the theory of academic asset pricing, or you've wanted to work through the book, here's your chance. It's designed for PhD students, aspiring PhD students, advanced MBAs, financial engineers, people who are working in industry who might like to study PhD level finance but don't have the time, and so on. It's not easy: we start with a stochastic calculus review! But I'm emphasizing the intuition, what the models mean, why we use them, and so on, over the mathematics.
Friday, August 23, 2013
Tuesday, June 18, 2013
Two seconds
The weekend Wall Street Journal had an interesting article about high speed trading, Traders Pay for an Early Peek at Key Data. Through Thomson Reuters, traders can get the University of Michigan consumer confidence survey results two seconds ahead of everyone else. They then trade S&P500 ETFs on the information.
Naturally, the article was about whether this is fair and ethical, with a pretty strong sense of no (and surely pressure on the University of Michigan not to offer the service.)
It didn't ask the obvious question: Traders need willing counterparties. Knowing that this is going on, who in their right mind is leaving limit orders on the books in the two seconds before the confidence surveys come out?
OK, you say, mom and pop are too unsophisticated to know what's going on. But even mom and pop place their orders through institutions which use trading algorithms to minimize price impact. It takes one line of code to add "do not leave limit orders in place during the two seconds before the consumer confidence surveys come out."
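For what it's worth, here is a toy Python sketch of what that "one line of code" amounts to inside an execution algorithm. The release time, the two-second window, and all names here are hypothetical placeholders, not any real firm's logic:

```python
from datetime import datetime, timedelta

# Hypothetical scheduled-release blackout: pull resting limit orders in the
# two seconds before the consumer-confidence release (time is illustrative).
RELEASES = [datetime(2013, 6, 14, 9, 55, 0)]
WINDOW = timedelta(seconds=2)

def ok_to_rest_orders(now):
    """False inside the two-second window before any scheduled release."""
    return all(not (r - WINDOW <= now < r) for r in RELEASES)

print(ok_to_rest_orders(datetime(2013, 6, 14, 9, 54, 59)))  # False: pull orders
print(ok_to_rest_orders(datetime(2013, 6, 14, 9, 50, 0)))   # True
```

An execution algorithm would simply cancel resting orders whenever this check turns False, and replace them afterward.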
In short, the article leaves this impression that investors are getting taken. But it's so easy to avoid being taken, so it seems a bit of a puzzle that anyone can make money at this game.
I hope readers with more market experience than I can answer the puzzle: Who is it out there that is dumb enough to leave limit orders for S&P500 ETFs outstanding in the 2 seconds before the consumer confidence surveys come out?
(Chart. Source: Wall Street Journal)
Friday, March 8, 2013
Crunch time
David Greenlaw, Jim Hamilton, Peter Hooper and Rick Mishkin have a nice op-ed in the Wall Street Journal summarizing their recent paper, Crunch Time: Fiscal Crises and the Role of Monetary Policy. (The link goes to Jim's website; there is also an executive summary.)
David, Jim, Peter and Rick are after the same question as my last WSJ op-ed and blog post: Suppose the Fed wants to raise interest rates with a huge debt outstanding. With, say, $18 trillion outstanding, raising interest rates to 5% means raising the deficit by $900 billion a year. That's real fiscal resources. In a present value sense, monetary tightening costs someone $900 billion a year of taxes. There is no chance that current tax revenues can go up that much, or current spending can go down that much. So, raising interest rates to 5% with a lot of debt outstanding means we will borrow it, the debt will grow $900 billion a year faster, and the larger taxes/lower spending will come someday in the far-off future.
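The arithmetic behind that number is just debt times the rate. Here it is in a couple of lines (the $18 trillion and 5% are the round numbers from the paragraph above, not forecasts):

```python
debt = 18e12     # federal debt outstanding, dollars (round number)
rate = 0.05      # interest rate after tightening (round number)

# Extra annual interest cost once the whole stock rolls over at the new rate
extra_interest = debt * rate
print(f"${extra_interest / 1e9:,.0f} billion a year")  # $900 billion a year
```

In practice the cost phases in as debt rolls over at the new rate, but with today's short average maturity that happens quickly.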
Or maybe not. David, Jim, Peter and Rick delve into the "tipping point" I alluded to.
Countries with high debt loads are vulnerable to an adverse feedback loop in which doubts by lenders about fiscal sustainability lead to higher government bond rates, which in turn make debt problems more severe.
Southern Europe was basically on a similar death spiral until the ECB stepped in and said it would print euros to buy up any debt as needed. The big contribution of the paper: facts.
Using statistical methods, case studies and a wealth of recent data on fiscal crises, we have found that countries with gross debt above 80% of GDP and persistent current-account deficits—as is currently the case in the United States—face sharply increasing risk of escalating interest payments on their debt. This means even higher budget deficits and debt levels and could lead to a fiscal crunch—a point where government bond rates shoot up and a funding crisis ensues.

The vitally important point: it's nonlinear. Evidence from times and countries with lower debts does not apply.
When the Fed raised real rates in the late 1970s, Federal debt was “only” 32% of GDP. Interest payments did swell, from 1.5% to 3% of GDP, accounting for more than half of the Reagan deficits. And long-term real interest rates were high for a decade, usually interpreted as the market's worry that we would go back to inflation, which is the same thing as saying that the government might not have the stomach to pay off all this debt. But strong growth and tax reform led the US to large primary surpluses, and we paid off that extra debt.
We go into this one with a debt-to-GDP ratio over 100%, and much weaker growth prospects. The experience of how "easy" tightening was in the early 1980s should not lull us into a sense of security.
They made a small but, I think, crucial omission:
With sufficient political will, the U.S. government can avoid fiscal dominance and achieve long-run budget sustainability by gradually reining in spending on entitlement programs such as Medicare, Medicaid and Social Security, while increasing tax revenue by broadening the base.

Quiz question: What's missing here?
Growth. Tax revenue = tax rate × income. You can broaden the base as much as you want; without economic growth, the long-term US budget is a disaster. And the current alarming projections assume that we will, someday, return to strong growth. All the reining in, soaking the rich, and base broadening in the world will not save us without growth. We prescribe "structural reform" for Greece. Why not for the US?
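To see how completely growth dominates, here is a back-of-the-envelope comparison: a one-time 10% base broadening with no growth, versus 3% annual growth over 30 years. All numbers are illustrative assumptions for the sketch, not projections:

```python
income = 16e12    # rough current GDP, dollars (illustrative assumption)
rate = 0.18       # effective tax take as a share of income (illustrative)

def revenue(years, growth):
    """Annual tax revenue after `years` of compound income growth."""
    return rate * income * (1 + growth) ** years

no_growth_broader_base = 1.10 * revenue(30, 0.00)  # one-time 10% broader base
with_growth = revenue(30, 0.03)                    # 3% growth, base unchanged

print(no_growth_broader_base / 1e12)  # about 3.2 trillion
print(with_growth / 1e12)             # about 7.0 trillion
```

Compounding beats any one-off base broadening over a long horizon, which is the point of the quiz question above.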
Note to graduate students: The theory here is actually less well worked out than you think. Suppose the Fed follows a Taylor rule, hoping to control inflation by raising interest rates when inflation breaks out. But suppose there is a Laffer limit on taxes, so that total tax revenue cannot exceed some bound T. In this paper and my own speculations there is a conjecture that inflation can get out of control, a sense of multiple run-prone equilibria, and a sense that current debt/GDP is an important state variable. It needs better working out.
Tuesday, February 28, 2012
Weird stuff in high frequency markets
On the left is a graph from a really neat paper, "Low-Latency Trading" by Joel Hasbrouck and Gideon Saar (2011). You're looking at the flow of "messages"--limit orders placed or canceled--on the NASDAQ. The x axis is time, modulo 10 seconds. So, you're looking at the typical flow of messages over any 10 second time interval.
As you can see, there is a big crush of messages on the top of the second, which rapidly tails off in the milliseconds following the even second. There is a second surge between 500 and 600 milliseconds.
Evidently, lots of computer programs reach out and look at the markets once per second, or once per half second. The programs' clocks are tightly synchronized to the exchange's clock, so if you program a computer "go look once per second," it's likely to go look exactly on the second (or half second). The result is a flurry of activity on the even second.
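The pattern is easy to reproduce in a toy simulation. This is invented data, not the NASDAQ feed in the paper: mix a mass of on-the-second messages, each lagging a whole second by a few milliseconds of latency, with a uniform background flow, and histogram arrival times modulo ten seconds:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 "agency" messages land just after a whole second (a few ms of
# latency); 30,000 background messages arrive uniformly over the window.
agency = rng.integers(0, 10, 10_000) * 1000 + rng.exponential(20, 10_000)
background = rng.uniform(0, 10_000, 30_000)
ms = np.concatenate([agency, background]) % 10_000  # time mod 10 seconds

counts, _ = np.histogram(ms, bins=100)   # ~100ms buckets over 10 seconds
spike = counts[::10].mean()              # buckets containing whole seconds
other = np.delete(counts, slice(0, None, 10)).mean()
print(spike > other)                     # True: message flow bunches on the second
```

The simulated histogram shows the same crush of messages at the top of each second as the Hasbrouck-Saar figure.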
It's likely the even-second traders are what Joel and Gideon call "Agency traders." They're trying to buy or sell a given quantity, but spread it out to avoid price impact. Their on-the-second activity spawns a flurry of responses from the high frequency traders, whose computers monitor markets constantly.
There's a natural question: Is this an accident, or is there intentional "on the second" bunching? You can see that a programmer who didn't think about it would check once per second, not realizing that means exactly on the top of the second. But sometimes there is more liquidity when we all agree to meet at the same time. Volume has always been higher at the open and close. Joel and Gideon show the pattern lasted from 2007 to 2008, so was not an obvious short-term programming bug. (Do notice the vertical scale however. The range is from 9 to 13, not 0 to 13.) I'd be curious to know if it's still going on.
Here's another one, found by one of my students on nanex.net here. (Teaching has many benefits when the students know more about markets than you do!)
You're looking at bids, asks, and (white dot) trades in the natural gas futures markets. From nanex:
On June 8, 2011, starting at 19:39 Eastern Time, trade prices began oscillating almost harmonically along with the depth of book. However, prices rose as bids were executed, and prices declined when offers were executed ... price oscillates from low to high when trades are executing against the highest bid price level. After reaching a peak, prices then move down as trades execute against the highest ask price level. This is completely opposite of normal market behavior ... It's almost as if someone is executing a new algorithm that has its buying/selling signals crossed. Most disturbing to us is the high volume violent sell off that affects not only the natural gas market, but all the other trading instruments related to it.

I generally give efficient markets the benefit of the doubt, but it's hard not to suspect that some programming bugs are working against each other here. It's hard enough to debug a program to work alone, but when 17 programs work against each other all sorts of interesting weirdness can spill out. I am reminded of work in game theory in which computer programs fight out the prisoner's dilemma and all sorts of weird stuff erupts. If so, this will settle down, but it may take a while.
The Economist reports an interesting related story.
ON FEBRUARY 3RD 2010, at 1.26.28 pm, an automated trading system operated by a high-frequency trader (HFT) called Infinium Capital Management malfunctioned. Over the next three seconds it entered 6,767 individual orders to buy light sweet crude oil futures... Enough of those orders were filled to send the market jolting upwards.
A NYMEX business-conduct panel investigated what happened that day ... Infinium had finished writing the algorithm only the day before it introduced it to the market, and had tested it for only a couple of hours in a simulated trading environment to see how it would perform ... When the algorithm started its frenetic buying spree, the measures designed to shut it down automatically did not work. One was supposed to turn the system off if a maximum order size was breached, but because the machine was placing lots of small orders rather than a single big one the shut-down was not triggered. The other measure was meant to prevent Infinium from selling or buying more than a certain number of contracts, but because of an error in the way the rogue algorithm had been written, this, too, failed to spot a problem.

High frequency trading presents a lot of interesting puzzles. The Booth faculty lunchroom has hosted some interesting discussions: "What possible social use is it to have price discovery in a microsecond instead of a millisecond?" "I don't know, but there's a theorem that says if it's profitable it's socially beneficial." "Not if there are externalities." "OK, where's the externality?" At which point we all agree we don't know what the heck is going on.
There is also the more prosaic question whether high frequency traders "provide liquidity" and thus are in some sense beneficial to markets, or if they are somehow making markets worse. A question for another day (there is some interesting new research).
There are lots of reports of how profitable high frequency trading is. But it is a zero-sum game. Anything you do in milliseconds can only talk to another computer. By definition, they can't all be making money off each other.
Tuesday, January 31, 2012
Consumer financial protection, 1984
The Financial Times reports an amazing interview with Martin Wheatley, the "head of the UK's new consumer protection watchdog."
Investors cannot be counted on to make rational choices, so regulators need to “step into their footprints” and limit or ban the sale of potentially harmful products.
“You have to assume that you don’t have rational consumers. Faced with complex decisions or too much information, they default ... They hide behind credit rating agencies or behind the promises that are given to them by the salesperson,” said Mr Wheatley.
The new approach rests on research in behavioural economics that shows investors often make decisions contrary to their own interests because of their aversion to losses or unwillingness to ditch a losing strategy. It represents a profound shift in regulatory stance.
Rather than simply ensuring that consumers are provided with complete and accurate information, the FCA will be monitoring firms to make sure that the right kinds of products get sold to the right kinds of people.
I can't wait to see the Nanny State plan to help day traders to ditch those losing stocks faster.
Behavioral economics does not imply aristocratic paternalism. Behavioral economics, if you take it seriously, leads to a much more libertarian outlook.
Which kinds of institutions are likely to lead to behavioral biases: highly competitive, free institutions that must adapt or fail? Or a government bureaucracy, pestered by rent-seeking lobbyists, free to indulge in the Grand Theory of the Day, able to move the lives of millions on a whim and by definition immune from competition?
Sure, the market will get it wrong. But behavioral economics, if you take it seriously, predicts that the regulator (the regulatory committee) will get it far worse. For regulators, even those that went to the right schools, are just as human and "behavioral" as the rest of us, and they are placed in institutions that lack many protections against bad decisions.
More generally, the case for free markets never was that markets always get it right. The case has always been based on the centuries of experience that governments get it far more wrong.
Serious behaviorists know this. Thaler and Sunstein's "Nudge" is pretty careful not to jump from "people make mistakes" to "a benevolent bureaucracy must take care of the charming moronic peasantry." Alas, fans of 19th century aristocratic paternalism, who call themselves "liberals" today, make the jump with alacrity. They love to (mis-)cite behavioral economics as cover for their interventions. As, apparently, do Mr. Wheatley and the UK "protection" scheme he will now lead.
If he were to take behavioralism seriously, the interview would reveal a deep reflection on how he was going to keep his new agency from displaying all those biases likely to lead to bad decisions.
For example, his new power to tell bank A that its products are "mis-sold" will quickly and predictably lead to bank B taking his employees out to lunch to explain how terrible bank A's products are and how it must be stopped. "Consumer protection" has quickly morphed into "protection from competitors" the world over, and the behavioral biases of regulators (salience, social networks, etc.) are part of the story. "Watchdogs" become lap-dogs.
Where are the behavioral Stigler and Buchanan? It seems high time for a thoroughgoing behavioral analysis of the functioning of government bureaucracy, legislation, and regulation.
Here's some real "financial protection" advice: Look at the elephants in the room.
The first thing the average American should do is get out of a highly leveraged, very illiquid investment that poses huge idiosyncratic risk. That's called an "owner-occupied home." Rent, and put the money in the stock market. Or buy a smaller home that you can afford. Our government is still nudging us in exactly the wrong direction.
The second thing the average American should do is save a whole lot more. Our government is pushing more subsidies for student, homeowner, and business loans, and dramatically raising the already high taxes on saving and investment. When the American consumer tried to start saving a bit more in 2008, our government responded with massive "stimulus" whose explicit purpose was to undo this bout of national thriftiness and get us to consume more, now.
Who's behavioral here?
Update: (response to some comments).
There is a huge difference between the justifications for regulation. 1) Protecting people from fraud. This is enforcing contracts and property rights, which is an obvious function of government. 2) Protecting people from definable and remediable market failures. That's more tenuous, but still a justifiable form of regulation. Though it's dangerous (see the capture examples) and often backfires. 3) "Protecting" people because the bureaucracy just thinks it knows how to run people's lives better than they do. This used to be called aristocratic paternalism. Now it's defended by a misreading of behavioral economics. That's what the post is about. I hope that helps. I see it's an issue worth revisiting.
Thursday, January 26, 2012
A brief parable of over-differencing
The Grumpy Economist has sat through one too many seminars with triple differenced data, 5 fixed effects and 30 willy-nilly controls. I wrote up a little note (7 pages, but too long for a blog post), relating the experience (from a Bob Lucas paper) that made me skeptical of highly processed empirical work.
The graph here shows velocity and interest rates. You can see the nice sensible relationship.
(The graph has an important lesson for policy debates. There is a lot of puzzling why people and companies are sitting on so much cash. Well, at zero interest rates, the opportunity cost of holding cash is zero, so it's a wonder they don't hold more. This measure of velocity is tracking interest rates with exactly the historical pattern.)
But when you run the regression, the econometrics books tell you to use first differences, and then the whole relationship falls apart. The estimated coefficient falls by a factor of 10, and a scatterplot shows no reliable relationship. See the note for details, but you can see in the second graph how differencing throws out the important variation in the data.
The perils of over-differencing, too many fixed effects, too many controls, and of GLS or maximum likelihood jumping on silly implications of necessarily simplified theories are well known in principle. But a few clear parables might make people more wary in practice. Needed: a similarly clear panel-data example.
Saturday, January 21, 2012
New Keynesian Stimulus
One piece of interesting economics did come up while I was looking through the stimulus blogwars.
Paul Krugman pointed to New Keynesian stimulus models in a recent post, When Some Rigor Helps.
I wrote a paper about New Keynesian models, published in the Journal of Political Economy (appendix, html on JSTOR). I haven't totally digested the NK stimulus literature -- In addition to Mike's paper, Christiano, Eichenbaum and Rebelo; Gauti Eggertsson; Leeper Traum and Walker; Cogan, Cwik, Taylor, and Wieland are on my reading list -- but I've gotten far enough to have some sharp questions worth passing on in a blog post.
Krugman continues,
One thing I know for sure: This is wrong. (It's an understandable mistake, and many people make it.) The New Keynesian models are radically different from Old-Keynesian ISLM models. They are not a magic wand that lets you silence Lucas and Sargent and go back to the good old days.
New-Keynesian models have multiple equilibria. The model's responses -- such as the response of output to government spending or to monetary policy shocks -- are not controlled by demand and supply. They occur by cajoling the economy to jump to a different one of many possible equilibria. If you're going to write an honest op-ed about New Keynesian models, you really have to say "government spending will make the economy jump from one equilibrium to another." Good luck!
New Keynesian models offer a fundamentally different mechanism from the IS-LM or standard stories that Krugman -- and Bernanke, and lots of sensible people who think about policy -- find "actually more useful."
For example, the common-sense story for inflation control via the Taylor rule is this: Inflation rises 1%, the Fed raises rates 1.5% so real rates rise 0.5%, "demand" falls, and inflation subsides. In a new-Keynesian model, by contrast, if inflation rises 1%, the Fed engineers a hyperinflation where inflation will rise more and more! Not liking this threat, the private sector jumps to an alternative equilibrium in which inflation doesn't rise in the first place. New Keynesian models try to attain "determinacy" -- choose one of many equilibria -- by supposing that the Fed deliberately introduces "instability" (eigenvalues greater than one in system dynamics). Good luck explaining that honestly!
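That logic can be compressed into a few lines. This is a minimal sketch of the determinacy argument, not any particular published model: combining a frictionless Fisher equation i = r + pi(t+1) with a Taylor rule i = r + phi * pi(t) gives pi(t+1) = phi * pi(t). With phi > 1, every path except pi = 0 explodes; "determinacy" rules out explosive paths by assumption, so the economy is supposed to jump to pi = 0:

```python
phi = 1.5  # Taylor-rule coefficient; phi > 1 makes dynamics explosive

def inflation_path(pi0, periods=10):
    """Path of inflation under pi(t+1) = phi * pi(t), starting from pi0."""
    path, pi = [], pi0
    for _ in range(periods):
        path.append(pi)
        pi = phi * pi
    return path

print(inflation_path(0.0)[-1])    # 0.0: the selected equilibrium
print(inflation_path(0.01)[-1])   # 0.01 * 1.5**9: the threatened explosion
```

The "stabilizing" rule works not by pushing inflation back, but by threatening an ever-worse path for any deviation, which is the point of the paragraph above.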
In the context of the zero bound and multipliers, not even this mechanism can work, because the interest rate is stuck at zero. There are "multiple locally-bounded equilibria." Some stimulus models select equilbria by supposing that for any but the chosen one, people expect that the Fed will l hyperinflate many years in the future once the zero bound is lifted. Hmmm.
These problems can be fixed, and my paper shows how. Alas, the fix completely changes the model dynamics and predictions for the economy's reaction to shocks.
Or maybe not. I know the simple New Keynesian models suffer these problems. (That's what the JPE paper is about.) Do they apply to the stimulus models? I don't know yet. I certainly have some sharp questions to ask, and I don't see anything in the models I've looked at with a hope of solving these problems.
Moreover, even taken at face value, the predictions of New Keynesian models are a lot different from Krugman's advertisement that more G gives more Y.
Every NK stimulus model that I have read is "Ricardian." Government spending has very large effects, even if it is financed by current taxes. Good luck writing an op-ed that says, "The government should grab a trillion of new taxes this year and spend it. We'll all be a trillion and a half better off by Christmas." The popular appeal of stimulus comes from the idea that borrowed money doesn't transparently reduce demand as much as taxed money. But that's the iron discipline of models -- you can't take one prediction without the other. If you don't believe in taxed stimulus, you can't use a Ricardian New Keynesian model to defend borrowed stimulus. (Or you have to construct one in which there is a big difference, which I have not found so far.)
More weird stuff, from Gauti Eggertsson's introduction
I also notice that "deflationary spirals" are a big part of the analysis. For example, in Christiano et al.,
Back to reading. I'll post again if I get more NK stimulus insights. It may take a while. I still think it's yesterday's news. Sovereign default seems more important for the future.
Paul Krugman pointed to New Keynesian stimulus models in a recent post, When Some Rigor Helps.
But take an NK [New-Keynesian] model like Mike Woodford’s (pdf) — a model in which everyone maximizes given a budget constraint, in which by construction all the accounting identities are honored, and in which it is assumed that everyone perfectly anticipates future taxes and all that — and you find immediately that a temporary rise in G produces a rise in Y...
So I guess I’d urge all the people now engaging in contorted debates about what S=I does and does not imply to read Mike first, and see whether you have any point left.
As it happens, I've spent a lot of time reading and teaching New Keynesian models.
I wrote a paper about New Keynesian models, published in the Journal of Political Economy (appendix, html on JSTOR). I haven't totally digested the NK stimulus literature -- in addition to Mike's paper, Christiano, Eichenbaum and Rebelo; Gauti Eggertsson; Leeper, Traum and Walker; and Cogan, Cwik, Taylor, and Wieland are on my reading list -- but I've gotten far enough to have some sharp questions worth passing on in a blog post.
Krugman continues,
That doesn’t mean that you have to use Mike’s model or something like it every time you think about policy; by and large, ad hoc models like IS-LM are actually more useful, in my judgment
One thing I know for sure: This is wrong. (It's an understandable mistake, and many people make it.) The New Keynesian models are radically different from Old-Keynesian ISLM models. They are not a magic wand that lets you silence Lucas and Sargent and go back to the good old days.
New-Keynesian models have multiple equilibria. The model's responses -- such as the response of output to government spending or to monetary policy shocks -- are not controlled by demand and supply. They occur by cajoling the economy to jump to a different one of many possible equilibria. If you're going to write an honest op-ed about New Keynesian models, you really have to say "government spending will make the economy jump from one equilibrium to another." Good luck!
New Keynesian models offer a fundamentally different mechanism from the IS-LM or standard stories that Krugman -- and Bernanke, and lots of sensible people who think about policy -- find "actually more useful."
For example, the common-sense story for inflation control via the Taylor rule is this: Inflation rises 1%, the Fed raises rates 1.5% so real rates rise 0.5%, "demand" falls, and inflation subsides. In a new-Keynesian model, by contrast, if inflation rises 1%, the Fed engineers a hyperinflation where inflation will rise more and more! Not liking this threat, the private sector jumps to an alternative equilibrium in which inflation doesn't rise in the first place. New Keynesian models try to attain "determinacy" -- choose one of many equilibria -- by supposing that the Fed deliberately introduces "instability" (eigenvalues greater than one in system dynamics). Good luck explaining that honestly!
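To see this determinacy logic in the starkest terms, here is a toy simulation (my own illustration, a stripped-down one-equation model, not any particular paper's): with a Taylor coefficient greater than one, any inflation path that starts away from target explodes, so the only bounded equilibrium is the one that never deviates in the first place.

```python
# Stylized one-equation model: pi_{t+1} = phi * pi_t, where phi is the
# Taylor-rule response coefficient. With phi > 1 (an eigenvalue greater
# than one), any nonzero initial inflation explodes; only pi = 0 stays
# bounded. That "instability" is exactly how the model selects a unique
# equilibrium.

def inflation_path(pi0, phi=1.5, periods=20):
    """Iterate pi_{t+1} = phi * pi_t starting from pi0."""
    path = [pi0]
    for _ in range(periods):
        path.append(phi * path[-1])
    return path

explosive = inflation_path(0.01)   # starts 1% above target
bounded = inflation_path(0.0)      # the equilibrium the model selects

print(f"after 20 periods, a 1% deviation grows to {explosive[-1]:.1%}")
print(f"a zero deviation stays at {bounded[-1]:.1%}")
```

The point of the sketch: the model never says higher rates push inflation back down; it says the threat of explosion rules deviating paths out.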
In the context of the zero bound and multipliers, not even this mechanism can work, because the interest rate is stuck at zero. There are "multiple locally-bounded equilibria." Some stimulus models select equilibria by supposing that for any but the chosen one, people expect that the Fed will hyperinflate many years in the future once the zero bound is lifted. Hmmm.
These problems can be fixed, and my paper shows how. Alas, the fix completely changes the model dynamics and predictions for the economy's reaction to shocks.
Or maybe not. I know the simple New Keynesian models suffer these problems. (That's what the JPE paper is about.) Do they apply to the stimulus models? I don't know yet. I certainly have some sharp questions to ask, and I don't see anything in the models I've looked at with a hope of solving these problems.
Moreover, even taken at face value, the predictions of New Keynesian models are a lot different from Krugman's advertisement that more G gives more Y.
Every NK stimulus model that I have read is "Ricardian." Government spending has very large effects, even if it is financed by current taxes. Good luck writing an op-ed that says, "The government should grab a trillion of new taxes this year and spend it. We'll all be a trillion and a half better off by Christmas." The popular appeal of stimulus comes from the idea that borrowed money doesn't transparently reduce demand as much as taxed money. But that's the iron discipline of models -- you can't take one prediction without the other. If you don't believe in taxed stimulus, you can't use a Ricardian New Keynesian model to defend borrowed stimulus. (Or you have to construct one in which there is a big difference, which I have not found so far.)
More weird stuff, from Gauti Eggertsson's introduction:
Cutting taxes on labor or capital is contractionary under the special circumstances the United States is experiencing today. Meanwhile, the effect of temporarily increasing government spending is large, much larger than under normal circumstances. Similarly, some other forms of tax cuts, such as a reduction in sales taxes and investment tax credits, as suggested, for example, by Feldstein (2002) in the context of Japan’s “Great Recession,” are extremely effective...
At positive interest rates, a labor tax cut is expansionary, as the literature has emphasized in the past. But at zero interest rates, it flips signs and tax cuts become contractionary. Similarly, while capital tax cuts are almost irrelevant in the model at a positive interest rate (up to the second decimal point), they become strongly negative at zero. Meanwhile, the multiplier of government spending not only stays positive at zero interest rates but becomes almost five times larger.
Tax cuts are contractionary? The stimulus failed because the large tax cut component dragged output down? That's new, and I didn't hear Krugman complaining! Maybe it's right, but you can see we're a long, long way from simple ISLM logic. Also, it's clear that these models make a sharp distinction between zero and nonzero rates, a distinction that stimulus advocates certainly do not make.
I also notice that "deflationary spirals" are a big part of the analysis. For example, in Christiano et al.,
But, in contrast to the textbook scenario, the zero-bound scenario studied in the modern literature involves a deflationary spiral which contributes to and accompanies the large fall in output.
OK, but we have near-zero short-term government rates, a 3% positive rate of inflation, and far-from-zero corporate and long-term rates. Does the analysis apply?
Back to reading. I'll post again if I get more NK stimulus insights. It may take a while. I still think it's yesterday's news. Sovereign default seems more important for the future.
Wednesday, January 4, 2012
The VAT, a libertarian dilemma
Dan Mitchell wrote an interesting op-ed in the Wall Street Journal (Cato link for those without WSJ access), highlighting a great libertarian dilemma: is a consumption tax (VAT or similar) a good thing?
Every bit of economic analysis says yes. Economists hate distortions, taxes that lead to bad economic behavior. Our tax system is full of them. Broaden the base, lower the rate, tax consumption not savings, dramatically simplify the code, and you can get the same revenue with much less economic damage.
A political argument disagrees: an efficient tax code can also raise a lot more revenue. Dan opposes the VAT (and similar consumption taxes) on those grounds. Yes, it looks good at the start, but politicians will soon raise the rate to the sky and spend the results. (Becker and Posner have also tackled this one several times.)
It's a striking dilemma: should we keep an atrocious tax system to limit the size of government? Is there no way to get an efficient tax system and a limited government?
Implicit in Dan's argument is a deeply pessimistic view of our Government: sooner or later, Congress will get to the top of the Laffer curve of a given tax structure. (The top of the Laffer curve is the point where the government is getting the most possible tax revenue. If the government raises tax rates any more, people work less, hire lawyers and lobbyists to evade taxes, businesses move offshore or just don't get started, and so on. The government can end up with less total money even though tax rates are higher.)
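The shape of that curve is easy to see with a toy example (my numbers are purely illustrative, not calibrated to any real tax system): if the tax base shrinks as the rate rises, revenue first rises with the rate, peaks, and then falls.

```python
# Toy Laffer curve, illustrative numbers only: revenue = rate * base(rate),
# where the taxable base shrinks as the rate rises (people work less,
# evade more, move offshore). With base(rate) = 1 - rate, revenue is
# rate * (1 - rate), which peaks at a 50% rate.

def revenue(rate):
    base = 1.0 - rate          # taxable activity falls as the rate rises
    return rate * base

rates = [i / 100 for i in range(101)]
peak_rate = max(rates, key=revenue)

print(f"revenue-maximizing rate: {peak_rate:.0%}")
print(f"revenue at a 40% rate: {revenue(0.40):.3f}")
print(f"revenue at a 60% rate: {revenue(0.60):.3f}")
```

Note that a 40% rate and a 60% rate raise exactly the same revenue in this toy: past the peak, higher rates mean less money, not more.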
Most economists seem to disagree that we are at the top of the Laffer curve. They think Congress has not squeezed every drop out of the current tax code, so they would place more hope in future Congressional restraint. In their view, Europe first decided on a welfare state and then decided how to fund it, not the other way around.
I'm almost as pessimistic as Dan. Sure, raising tax rates can generate more revenue for a few years. But most economic analyses don't look at the long-run, growth effects of tax distortions. The full disincentive effects don't show up for years. If taxes just lower growth a few fractions of a percent, that soon compounds to drastic reductions in income and tax revenue.
The U.S. Federal income tax seems to take in about 18% of GDP with top rates anywhere from 35% to 90%. And the disincentives are bigger than you think. They result from the full sum of federal, state, local, estate, sales, etc. taxes. Greg Mankiw figured his marginal tax rate at 93% -- and he forgot sales taxes. (A bit more on long-run Laffer curves on p. 20 here)
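One simple way such wedges compound (a generic sketch with made-up rates, not Mankiw's actual calculation): when each layer of tax applies to what the previous layer leaves behind, the combined marginal rate is one minus the product of the "keep" fractions, which climbs toward 100% faster than the individual rates suggest.

```python
# Generic sketch with made-up rates (not Mankiw's actual numbers):
# layered marginal taxes each take a slice of what the previous layer
# leaves, so the combined wedge is 1 minus the product of keep-shares.

def combined_marginal_rate(rates):
    """Combined marginal rate of taxes applied in sequence."""
    keep = 1.0
    for r in rates:
        keep *= 1.0 - r
    return 1.0 - keep

# hypothetical layers: income tax, payroll, state income, sales tax
rates = [0.35, 0.15, 0.05, 0.08]
print(f"combined marginal rate: {combined_marginal_rate(rates):.1%}")
```

Four moderate-looking rates already take more than half of a marginal dollar in this example; add estate and corporate layers and numbers like Mankiw's stop looking outlandish.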
But I'm not quite that pessimistic. Perhaps I'm just being soft-hearted, but I surely hope our political system is not quite so broken. On the other hand, it seems naive to simply count on self-restraint to keep spending under control if the Laffer curve all of a sudden is less painful.
This whole question begs for a lot more serious thought:
What is the long-run Laffer curve tradeoff, really? How much do distorting taxes affect growth, and hence long-run revenues? Is our government really close to the top of the long-run Laffer curve?
Economic distortions are not exactly the same as revenue reductions. If we accept the dark view of Government, how do we design a tax system that minimizes economic distortions, yet keeps the top of the long-run Laffer curve at, say, 20% of GDP? That is a fun optimal-taxation optimization problem. Surely the current abomination is not the answer to that question!
In one sense the top-of-the-Laffer curve view is demonstrably wrong. Dan argues that the government grabs as much revenue as it can given the current tax system, but it can't easily change the tax system. If so, it would have already enacted a VAT! Is there hope for similar hard-to-change constraints on overall tax revenue within a better system? Can we pass an effective law that says revenue may be no more than 20% of GDP?
What are the relative welfare effects of a government that is too large vs. a distorting tax system? Maybe firing all the tax lawyers, accountants, and lobbyists is worth putting up with a slightly bigger welfare state?
Any PhD students out there looking for thesis topics?