

Thursday 30 August 2012

The pernicious politics of immigration


There can be a rational debate about the costs and benefits of immigration, and what that implies about immigration controls. And there can be political debate, which is nearly always something different. I know this is an issue in the US, but I suspect we in the UK probably have more experience in how to be really nasty to foreigners who want to come here.

Probably most UK academics have some experience of how the UK authorities deal with student visas. A recent case I was involved with concerned a student who had their visa refused because of a mistake that the immigration officials acknowledged was their own. However they would only overturn the decision if the student went through an expensive appeals process, or reapplied through a solicitor, which was still expensive but less so. They did the latter, successfully and with university support, but the whole process took time and caused considerable distress. Not only did the bureaucracy make a mistake, it also made the innocent party pay for the bureaucracy’s mistake.

The latest example is the UK Border Agency’s decision to revoke the right of the London Metropolitan University to sponsor students from outside the EU. The Agency has problems with the university’s monitoring of these students. Whether or not the Agency has a case against the university, the decision means that over 2,500 students, many of whom are midway through their course, have 60 days to find an alternative institution to sponsor them or face deportation. The Agency has no reason to believe, and has not claimed, that the majority of them are not perfectly genuine students who have paid good money to study in the UK. The Agency does not have to punish innocent students to punish the university. I guess it might call them collateral damage, but in this case the damage seems easily avoidable.

The government has apparently set up a ‘task force’ to help these students. Its work will not be easy, but it is certainly not going to make the emotional distress these students are currently suffering go away. What it does illustrate is that this decision is no unhappy accident due to an overzealous arm of government. It looks like a deliberate government attempt to show that it is being ‘tough on immigration’.

Aside from the human cost, there is the economic damage this does to an important UK export industry. There are around 300,000 overseas students in the UK. Universities UK estimates that these students contribute £5 billion a year (0.3% of GDP) in fees and off-campus expenditure. Unlike the rest of the UK economy, this is an export industry that has been growing rapidly, but in a highly competitive market. Changes to visa regulations already announced have led to study visas issued in the year to June 2012 falling sharply compared with the previous 12 months. It is pretty obvious what impact the most recent decision involving London Met will have on prospective students trying to decide whether to come to the UK or go elsewhere.
    
Student visas are not the only area involving immigration where rational argument and sensible cost benefit analysis (of the economic or more general kind) go out of the window when political decisions are made. Jonathan Portes notes renewed pressure from parts of government to further deregulate the UK labour market. This seems a little strange for a labour market which is already much less regulated than most in Europe, and it also ignores the huge increase in regulation the government has created by tightening immigration rules. He says "The extra employment regulation that the Government has imposed on employers wishing to employ migrant workers—the cap on skilled migration—will, using the Government's own methodology, reduce UK output by between £2 and 4 billion by the end of the Parliament."

Numbers like this are important, and it makes you wonder how serious the government is about doing everything it can to get the economy moving again. But what really makes me angry is the human misery this kind of decision causes. Having seen one case at first hand, I can imagine what 2,500 others are currently going through. But of course they do not have a vote, and it would seem that in the eyes of the Minister responsible, Damian Green, the votes he thinks he has gained by this decision are worth this collateral damage.

Arguments for ending the microfoundations hegemony


Should all macroeconomic models in good journals include their microfoundations? In terms of current practice the answer is almost certainly yes, but is that a good thing? In earlier posts I’ve tried to suggest why there might be a case for sometimes starting with an aggregate macro model, and discussing the microfoundations of particular relationships (or lack of) by reference to other papers. This is a pretty controversial suggestion, which will appear to many to be a move backwards not just in time but in terms of progress. As a result I started with what I thought would be one fairly uncontroversial (but not exactly essential) reason for doing this. However let me list here what I think are the more compelling reasons for this proposal.

1) Empirical evidence. There may be strong empirical evidence in favour of an aggregate relationship which has as yet no clear microfoundation. A microfoundation may emerge in time, but policy makers do not have time to wait for this to happen. (It may take decades, as in the microfoundations for price rigidity.) Academics may have useful things to say to policy makers about the implications of this, as yet not microfounded, aggregate relationship. A particularly clear case is where you model what you can see rather than what you can microfound. For further discussion see this post.

2) Complexity. In a recent post I discussed how complexity driven by uncertainty may make it impossible to analytically derive microfounded relationships, and the possible responses to this. Two of the responses I discussed stayed within microfoundations methodology, but both had unattractive features. A much more tractable alternative may be to work directly with aggregate relationships that appear to capture some of this complexity. (The inspiration for this post was Carroll’s paper that suggested Friedman’s PIH did just that.)

3) Heterogeneity. At first sight heterogeneity that matters should spur the analysis of heterogeneous agent models of the kind analysed here, which remain squarely within the microfoundations framework. Indeed it should. However in some cases this work could provide a rationalisation for aggregate models that appear robust to this heterogeneity, and which are more tractable. Alan Blinder famously found that there was no single front runner for causes of price rigidity. If this is because an individual firm is subject to all these influences at once, then this is an example of complexity. However if different types of firm have different dominant motives, then this is an example of heterogeneity. Yet a large number of microfoundations for price rigidity appear to result in an aggregate equation that looks like a Phillips curve. (For a recent example, see Gertler and Leahy here.) This might be one case where working with aggregate relationships that appear to come from a number of different microfoundations gives you greater generality, as I argued here.

4) Aggregate behaviour might not be reducible to the summation of individuals optimising. This argument has a long tradition, associated with Alan Kirman and others. I personally have not been that persuaded by these arguments because I’ve not seen clear examples where it matters for bread and butter macro, but that may be my short-sightedness.

5) Going beyond simple microeconomics. The microeconomics used to microfound macromodels is normally pretty simple. But what if the real world involves a much more substantial departure from these simple models? Attitudes to saving, for example, may be governed by social norms that are not always mimicked by our simple models, but which may be fairly invariant over some macro timescales, as Akerlof has suggested. This behaviour may be better captured by aggregate approximations (that can at least be matched to the data) than a simple microfoundation. We could include under this umbrella radical departures from simple microfoundations associated with heterodox economists. I do not think the current divide between mainstream and heterodox macro is healthy for either side.

If this all seems very reasonable to you, then you are probably not writing research papers in the macroeconomics mainstream. Someone who is could argue that once you lose the discipline of microfoundations, then anything goes. My response is that empirical evidence should, at least in principle, be able to provide an alternative discipline. In my earlier post I suggested that the current hegemony of microfoundations owed as much to a loss of faith in structural time series econometrics as it did to the theoretical shortcomings of non-microfounded analysis. However difficulties involved in doing time series econometrics should not mean that we give up on looking at how individual equations fit. In addition, there is no reason why we cannot compare the overall fit of aggregate models to microfounded alternatives.

While this post lists all the reasons why sometimes starting with aggregate models would be a good idea, I find it much more difficult to see how what I suggest might come about. Views among economists outside macro, and policy makers, about the DSGE approach can be pretty disparaging, yet it is unclear how this will have any influence on publications in top journals. The major concern amongst all but the most senior (in terms of status) academic macroeconomists is to get top publications, which means departing from the DSGE paradigm is much too risky. Leaders in the field have other outlets when they want to publish papers without microfoundations (e.g. Michael Woodford here).

Now if sticking with microfoundations meant that macroeconomics as a whole gradually lost relevance, then you could see why the current situation would become unsustainable. Some believe the recent crisis was just such an event. While I agree that insistence on microfoundations discouraged research that might have been helpful during and after the crisis, there is now plenty of DSGE analysis of various financial frictions (e.g. Gertler and Kiyotaki here) that will take the discipline forward. I think microfoundations macro deserves to be one of the major ways, if not the major way, macro is done. I just do not think it is the only route to macroeconomic wisdom, but the discipline at the moment acts as if it is.

Saturday 25 August 2012

Costing Incomplete Fiscal Plans: Ryan and the CBO


Some of the regular blogs I read are currently preoccupied (understandably) with the US Presidential election. This is not my territory, but the role of fiscal councils – in this case the CBO – in costing budget proposals is, and the two connect with the analysis of the Ryan budget plan. The Ryan ‘plan’ involves cutting the US budget deficit, but contains hardly any specifics about how that will be done.

There is nothing unique to the US here. In the 2010 UK elections, both main parties acknowledged the need for substantial reductions in the budget deficit over time, but neither party fully specified how these would be achieved. Now as the appropriate speed of deficit reduction was a key election issue, this might seem surprising. In particular, why did one party not fully specify its deficit reduction programme, and then gain votes by suggesting the other was not serious about the issue?

The answer has to be that any gains in making the plans credible would be outweighed by the political costs of upsetting all those who would lose out from specific measures. People can sign up to lower deficits, as long as achieving them does not involve increasing their taxes or reducing their benefits. However, I think it’s more than this. If people thought through what deficit reduction plans might entail, you would guess that a lack of information could be even more damaging than full information. As people tend to be risk averse, the (more widespread) fear that their benefits might be cut could be more costly in electoral terms than a smaller number knowing the truth.

The fact that this logic does not operate suggests to me that (at least among swing voters) there is a large disconnect in people’s minds between aggregate deficit plans and specific measures. Saying you will be tough on the deficit does not panic swing voters, but adds to your credibility in being serious about the deficit ‘problem’. Indeed, from my memory of the UK election, claims by one side about the secret plans of the other were effectively neutralised as scaremongering.

This can be seen as the reverse side of a familiar cause of deficit bias. A political party can gain votes by promising things to specific sections of the electorate, but does not lose as many votes because of worries about how this will be paid for. The media can correct this bias by insisting on asking where the money will come from (or in the reverse case, where the cuts will come from), but they may have limited ability to check or interrogate the answer. This is where a fiscal council, which has authority as a result of being set up by government but also being independent of government, can be useful.

For some time the Netherlands Bureau for Economic Policy Analysis (often called the CPB) has offered to cost political parties’ fiscal proposals before elections. The interesting result is that all the major parties take up this offer. Not having your fiscal plans independently assessed appears to be a net political cost.

What the fiscal council is doing in this case is conferring an element of legitimacy on aggregate fiscal plans, a legitimacy that is more valuable than uncosted fiscal sweeteners. Which brings me to the question of what a fiscal council should do if these plans are clearly incomplete. In particular, suppose plans include some specific proposals that are deficit increasing or neutral, but unspecified plans to raise taxes or cut spending which lead to the deficit being reduced. By ‘should do’ here I do not mean what it is legally obliged to do, but what would be the right thing to do.

It seems to me clear that the right thing to do is not to cost the overall budget. What, after all, is being achieved by doing so? Many people or organisations can put a set of numbers for aggregate spending and taxes into a spreadsheet and calculate implied deficits, and the adding up can easily be checked. Getting the fiscal council to do this fairly trivial task serves no other purpose than to give the plan a legitimacy that it does not have.

In this situation, a fiscal council that does calculate deficit numbers for a plan that leaves out all the specifics is actually doing some harm. Instead of asking the difficult questions, it is giving others cover to avoid answering them. It is no excuse to say that what was done is clear in the text of the report. The fiscal council is there partly so people do not have to read the report. So I wonder if the CBO had any discretion in this respect. If Ryan was playing the system, perhaps the system needs changing to give the CBO a little more independence. 

Friday 24 August 2012

Multiplier theory: one is the magic number


I have written a bit about multipliers, particularly of the balanced budget kind, but judging by comments some recap and elaboration may be useful. So here is why, for all government spending multipliers, one is the number to start from. To make it a bit of a challenge (for me), I’ll not use any algebra.

Any discussion has to be context specific. Imagine a two period world. The first period is demand deficient because interest rates are stuck at the zero lower bound[1], but in the (longer) second period monetary policy ensures output is fixed at some level independent of aggregate demand (i.e. it is supply determined). Government spending increases in period 1 only. That is the context in which these multipliers are likely to be important as a policy tool.

1) Balanced budget multiplier

To recap, for a balanced budget multiplier (BBM), here is a simple proof in terms of sector balances for a closed economy. A BBM by definition does not change the public sector’s financial balance (FB). It seems very reasonable to assume that consumers consume a proportion less than one of any change to their first period post-tax income. So if higher taxes reduced post-tax income, consumption would fall by less, and the private sector’s FB would move towards deficit. But as the public and private sectors’ FBs sum to zero, and the public sector’s is unchanged, this cannot happen. So post-tax income cannot fall. Hence pre-tax income must rise to just offset the impact of higher taxes. The BBM is one.
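For readers who do want a little algebra after all, here is a minimal sketch of the same result in the simplest static textbook setting (closed economy, consumption a fixed fraction c < 1 of post-tax income). The sector balance argument above is more general, but the arithmetic may help:

```latex
Y = C + G, \qquad C = a + c\,(Y - T), \qquad \Delta G = \Delta T
```
```latex
\Delta Y = c\,(\Delta Y - \Delta T) + \Delta G = c\,\Delta Y + (1-c)\,\Delta G
\;\;\Rightarrow\;\; (1-c)\,\Delta Y = (1-c)\,\Delta G \;\;\Rightarrow\;\; \Delta Y = \Delta G
```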

The nice thing about this result is that it holds whatever fraction of current income is consumed (as long as it’s less than one), so it is independent of the degree of consumption smoothing. What about lower consumption in the second period? No need to worry, as monetary policy ensures demand is adequate in the second period.

Although one is a good place to start, allowing for an impact on expected inflation and therefore real interest rates will raise this number above one. In addition, as DeLong and Summers discuss, hysteresis effects will also raise period 2 output and income from the supply side, some of which consumers will consume in period 1. We would get similar effects if the higher government spending was in the form of useful infrastructure investment. So in this case one is the place to start, but it looks like a lower bound.

2) BBM in an open economy

I’m still seeing people claim that the BBM in an open economy is small. It could be, if the government acts foolishly. Suppose the government increases its spending entirely on defence, which in turn consists of buying a new fighter jet from an overseas country. The impact on the demand for domestic output is zero. But consumers are paying for this through higher taxes, so their spending decreases – we get a negative multiplier.

Now consider the opposite: the additional government spending involves no imported goods whatsoever. The multiplier is one. You can do the maths, but it is easy to show that this is a solution by thinking about the BBM in a closed economy. There consumption does not change, because a BBM=1 raises pre-tax income to offset higher taxes. But if consumption does not change, neither will imports, so this is also the solution in the open economy case.

What the textbooks do is apply a marginal propensity to import to total output, which implicitly assumes that the same proportion of government spending is imported as consumption spending. For most economies that is not the case, as the ‘home bias’ for government spending is much larger. Furthermore, if the government is increasing its spending with the aim of raising output, it can choose to spend it on domestically produced output rather than imports. So, a multiplier of one is again a good place to start. Allowing some import leakage will reduce the multiplier, but this could easily be offset by the real interest rate effects discussed above, particularly as in an open economy these would also depreciate the real exchange rate.
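To check the zero import content case algebraically, here is a minimal sketch, assuming (purely for illustration) that imports are a fraction m of consumption only, so that none of the extra government spending leaks abroad:

```latex
Y = C + G + X - M, \qquad C = c\,(Y - T), \qquad M = m\,C, \qquad \Delta G = \Delta T, \quad \Delta X = 0
```
```latex
\Delta Y = (1-m)\,\Delta C + \Delta G, \qquad \Delta C = c\,(\Delta Y - \Delta G)
```

Trying ΔY = ΔG gives ΔC = 0, so imports are unchanged and both equations are satisfied: the multiplier is again one, exactly as the closed economy reasoning suggests.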

3) Debt financed government spending with future tax increases

Although this is the standard case, from a pedagogical point of view I think it’s better to start with the BBM, and note that under Ricardian Equivalence the two are equivalent. We can then have a discussion about which are the quantitatively important reasons why Ricardian Equivalence does not hold. All of these raise the multiplier above one. You have to add, however, some discussion about the impact that distortionary tax increases will have on output in the second period, which reduces second period output and, through consumption smoothing, the size of the first period multiplier.

4) Debt financed government spending without tax increases

In an earlier post I queried why arguments for the expansionary impact of government spending increases always involved raising taxes at some point. For debt finance, why not assume lower government spending in the future rather than higher taxes? The advantage is that you do not need to worry about supply side tax effects. Monetary policy ensures there is no impact on output of lower government spending in the second period. Now, unlike the BBM case, we do need to make some assumptions about the degree of consumption smoothing. If you think the first period is short enough, and consumers smooth enough, such that the impact of higher income on consumption in the first period is negligible, then we have a multiplier of one again.


[1] I assume Quantitative Easing cannot negate the ZLB problem, and that inflation targets are in place and fixed. This is not about fiscal stimulus versus NGDP targeting, but just about macro theory.

Thursday 23 August 2012

Hayek versus Keynes and the Eurozone


The editors of the EUROPP blog, run by the Public Policy Group at the London School of Economics, wanted to contrast Hayekian and Keynesian views of the Eurozone crisis, by running posts from either side. Here is the Hayekian view, from Steven Horwitz, and for better or worse I provide the Keynesian view here. To be honest it is my view of the Eurozone crisis, which I think owes a lot to Keynesian ideas – it is absolutely not an attempt to guess what Keynes would have said if he could speak from the grave.

While regular readers of my blog will not find anything very new here, I personally found it useful to put my various posts into a brief but coherent whole. What struck me when I did so was the gulf between my own perspective (which is not particularly original, and borrows a great deal from the work of others like Paul De Grauwe), and that of most Eurozone policymakers. It is a gulf that goes right back to when the Euro was formed.

Much of the academic work before 2000 looking at the prospects for the Euro focused on asymmetric or country specific shocks, or asymmetric adjustment to common shocks due to structural differences between countries. My own small contribution, and those of many others, looked at the positive role that fiscal policy could play in mitigating this problem. Yet most European policymakers did not want to hear about this. Instead they were focused on the potential that a common currency had for encouraging fiscal profligacy, because market discipline would be reduced.

Now this was a legitimate concern – as some Greek politicians subsequently showed. However what I could not understand back then, and still cannot today, is how this concern can justify ignoring the problem of asymmetric shocks. I can still remember my surprise and incomprehension when first reading the terms of the Stability and Growth Pact – what were Eurozone policymakers thinking? My incredulity has certainly been validated by events, as the Eurozone was hit by a huge asymmetric shock as capital flowed into periphery countries and excess demand there remained unchecked. Now countercyclical fiscal policy in those countries would not have eliminated the impact of that shock, at least not according to my own work, but it would have significantly reduced its impact.

When I make this point, many respond that fiscal policy in Ireland or Spain was probably contractionary during this time – am I really suggesting it should have been tighter still? Absolutely I am, and the fact that this question is so often asked partly reflects the complete absence of discussion of countercyclical fiscal policy by Eurozone policymakers. Brussels was too busy fretting about breaches of the SGP deficit limits, and largely ignoring the growing competitiveness divide between Germany and most of the rest. (Maybe this is a little unfair on the Commission. I have been told that when the Commission did raise concerns of this kind, they were dismissed by their political masters.)

If periphery countries had pursued aggressive countercyclical fiscal policies before 2007, would the Eurozone crisis have started and ended with Greece? Who knows, but it certainly would have been less of a crisis than the one we have now.

This is just one aspect of the policy failure that is the Eurozone crisis. Another is the fiction of expansionary austerity, and yet another is the obsession by the ECB with moral hazard (or even worse their balance sheet). As I say at the end of my EUROPP post, there is a pattern to all these mistakes. It reflects a world view that governments are always the problem, and private sector behaviour within competitive markets never requires any intervention. Whether you attribute that view to Hayek, or Ordoliberalism, or something else is an interesting academic question. But what the Eurozone crisis shows all too clearly is the damage that this world view can do when it becomes the cornerstone of macroeconomic policy.

Monday 20 August 2012

Facts and Spin about Fiscal Policy under Gordon Brown


Below is a chart of UK net debt to GDP from the mid 1970s until the onset of the Great Recession. This post is about the right hand third of this chart, from 1998 to 2007, which was the period during which Gordon Brown was Chancellor.

UK Net Debt as a Percentage of GDP (financial years) – Source OBR

In general looking at figures for debt can give you a rather misleading impression of what fiscal policy is doing, particularly over short intervals. However, having finished trawling through budget reports and other data for a paper I am writing, I can safely say that this chart tells a pretty accurate story. (For those who cannot wait for the detail that will be in my paper, there is an excellent account by Alan Budd here.) In the first two years of his Chancellorship, Brown continued his predecessor’s policy of tightening fiscal policy. The budget moved into small surplus, so that the debt to GDP ratio fell to near 30% of GDP. Policy then shifted in the opposite direction, with a peak deficit of over 3% of GDP, a period which included substantial additional funding to the NHS. The remaining five budgets were either broadly neutral or mildly contractionary in the way they moved policy, but as this was starting from a significant deficit, the net result was a continuing (if moderating) rise in debt.

Why was fiscal policy insufficiently tight over most of this period? Despite what Gordon Brown said at the end of his term, I do not think this had anything to do with the business cycle. In one sense there is nothing unusual to explain: we are used to politicians being reluctant to raise taxes by enough to cover their spending, which leads to just this kind of deficit bias. However this should not have happened this time because policy was being constrained by two fiscal rules designed to prevent this. So what went wrong with the rules?

The first answer is in one sense rather mundane. The rules, as all sensible fiscal rules should, tried to correct for the economic cycle. However, rather than use cyclically adjusted deficit figures, Gordon Brown’s rules looked at average deficits over the course of an economic cycle. That allowed Brown to trade off excessively tight policy in the early years against too loose policy towards the end, and still (just) meet his rule. As we can roughly see from the chart, debt ends up about where it started under his stewardship, which also roughly coincided with a full cycle.

Was this intended? To some extent the answer is no, which brings us to the second reason policy was too loose: forecast error. One of the striking things about reading through the budget reports is how persistent these errors were. Outturns seemed always more favourable than expected over the first part of this period, until they became persistently unfavourable in the second. The former encouraged forecasters to believe that higher than expected tax receipts represented a structural shift, and they were reluctant to give up that view in the second period. Unlucky, or an aspect of the wishful thinking that is often part of deficit bias?

To their credit, the current Conservative-led government learnt from both these mistakes. Most notably, they set up the independent Office for Budget Responsibility with the task of producing forecasts without any wishful thinking. In addition their fiscal mandate is defined in terms of a cyclically adjusted deficit figure, which does not have the backward looking bias inherent in averaging over the past cycle. Their mistake was to try to meet that mandate when the recovery had only just begun.

What this chart does not show are the actions of a spendthrift Chancellor who left the economy in a dire state just before the Great Recession. He stopped being Chancellor with debt roughly where it was when he started, and a deficit only moderately above the level required to keep it there. The spin that our current woes are the result of the awful mess Gordon Brown left the UK economy in is a distortion based on a half-truth. The half-truth is that it would have been better if fiscal policy had been tighter, leaving debt at 30% rather than 37% when the recession hit. The distortion is to ignore that the high deficit and debt when Labour left office in 2010 were a consequence of the recession, and of commendable attempts to limit its impact on output and employment.

Saturday 18 August 2012

The Lucas Critique and Internal Consistency


For those interested in microfoundations macro. Unlike earlier posts, I make no judgement about the validity or otherwise of the microfoundations approach, but instead just try and clarify two different motivations behind microfoundations.

When I discuss the microfoundations project, I say that internal consistency is the admissibility criterion for microfounded models. I am not alone in stressing the role of internal consistency: for example in the preface to their highly acclaimed macroeconomics textbook, Obstfeld and Rogoff (1996) argue that a key problem with the pre-microfoundations literature is that it “lacks the microfoundations needed for internal consistency”. However when others talk about microfoundations, they often say they are designed to avoid the Lucas critique. This post argues that the latter is just a particular case of the former.

What do we mean when we say a model is internally consistent? Most obviously, we mean that individual agents within the model behave consistently in making their own decisions. A trivial example is if the model contains a labour supply equation and a consumption function that are supposed to represent the behaviour of the same agent. In that case we would want the agent to behave consistently. An agent that became more impatient, and so wanted to consume more by borrowing, but also wanted to work more hours (and so exhibit less impatience in their consumption of leisure), would appear to behave inconsistently unless their preferences or prices also changed.

Suppose instead of a labour supply equation, we had wage setting by unions. In this case we have a consistency issue between two sets of agents: consumers and unions. If we wanted to model unions as representing consumers as workers, we would want to align their preferences, so we are back to the previous case. However, there may be reasons why we do not want to do this. If we did not, we would want to make sure these agents interrelated in a sensible way.

What is meant by a sensible way? Consumers’ decisions will almost certainly depend on expectations about the wages unions set. Lucas called rational expectations a ‘consistency axiom’. If, for example, the union started being more concerned about employment than wages, we might expect consumers to recognise this in thinking about how their future income might evolve.

The Lucas critique is just an example of consistency between agents. The question is whether the private sector agents in the model react in a sensible way to policy changes. The classical example of the Lucas critique is inflation expectations. If monetary policy changes to become much harder on inflation, then rational agents will incorporate that into the way they form inflation expectations. A model that did not have that feedback would be ‘subject to the Lucas critique’.

Discussion of the Lucas critique often involves the need to model in terms of ‘deep’ parameters. A deep parameter (like impatience) is one that is independent of (exogenous to) the rest of the model. Here the parameters of the rule agents use to forecast inflation are not deep parameters, because (under rational expectations) they depend on how policy is made. But we can have a similar discussion about workers and unions: if the latter aimed at representing the former, then union attitudes to the wage/employment trade off should not be independent of worker preferences. Internal consistency is again more general than the Lucas critique.
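A deliberately stripped-down example may help fix ideas (the model here is purely illustrative). Suppose prices are flexible and the quantity theory holds, so p_t = m_t, and the money supply follows the policy rule m_t = μ + ρ m_{t-1} + ε_t. Then the rational forecasting rule is

```latex
E_t\, p_{t+1} = \mu + \rho\, p_t
```

Its coefficients are just the policy parameters (μ, ρ), so an econometric equation with those coefficients estimated under one regime would break down if policy changed: that is the Lucas critique. Impatience, or the parameters of the utility function, are deep precisely because they do not depend on the policy rule.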

Now obviously the Lucas critique is a particularly important kind of inconsistency if you are interested in analysing policy. But it is not the only kind of inconsistency that matters. A very good example of this is Woodford’s derivation of a social welfare function from the utility function of agents. Before this work, macroeconomists had typically assumed that a benevolent policy maker would minimise some quadratic combination of excess inflation and output, but this was disconnected from consumers’ utility. This had no bearing on the Lucas critique, which applies to any policy, benevolent or not. However it was a glaring example of inconsistency – why wasn’t the policy maker maximising the representative agent’s utility? After Woodford’s analysis, nearly every macroeconomics paper followed his example: not because it did anything about the Lucas critique, but because it solved an internal consistency issue.

Why does putting the Lucas critique in its proper place matter? I can think of two reasons. First, if you believe that avoiding the Lucas critique means you necessarily have a microfounded model, you are wrong. (In contrast, an internally consistent model will avoid the Lucas critique.) Second, it has a bearing on the idea often put forward that microfounded models are just for policy analysis, but not for forecasting. If we think that microfoundations is all about the Lucas critique, then this mistake is understandable (although still a mistake). But if microfoundations is about internal consistency, then it is easier to see how a microfounded model could be much better at forecasting as well as policy analysis.

Thursday 16 August 2012

Why do European Economists write Letters while US Economists Endorse Candidates?


In February 2010, 20 economists including a number of academics of note signed a letter that endorsed the Conservative Party’s deficit reduction plan for the UK. Although 20 is a small number (I’m sure many more – like me – were asked to sign and did not), they made up in quality what they lacked in quantity. The New Statesman magazine recently had the bright idea of asking them “whether they regretted signing the letter and what they would do to stimulate growth”. It published the results yesterday.

Half of the signatories replied. The headline was that most have changed their mind. Actually the responses are more varied, but interesting given that they are mostly well known academics. For example Ken Rogoff simply says “I have always favoured investment in high-return infrastructure projects that significantly raise long-term growth” which you can interpret how you want. A few are brave enough to say they have changed their minds. Only Albert Marcet says that he has no regrets.

400 economists have signed up in favour of Romney for President. Of course we all know that everything is always done bigger and louder in the US, but I think Andrew Watt is right when he says that it is “unusual in Europe, at least in the countries I know, for academic economists to ally themselves party-politically in such a clear fashion”. I only know the UK well enough to judge, but in that case I think he is right, and the New Statesman responses illustrate this. They do not represent the comments of those who would support a party or ideological position come what may. The 42 French economists who wrote a letter endorsing Hollande’s recovery plan seem more in the UK tradition of supporting particular policies in their own words. In contrast the 400 seem to be signing up to something that could only have been written by a political machine.   

So if there is a difference between the US and at least some parts of Europe here, why is this? Andrew Watt wonders whether the more fluid nature of the civil service in the US has something to do with it. While that might explain the actions of those with a real chance of a top job, can it really explain what appears to be a much more widespread difference? Perhaps European economists just attach greater value to masking their political or ideological prejudices, but that answer just moves the question sideways – why do they attach more value to this?

Yet perhaps I’m asking the wrong question here. Is the issue about US/European differences, or is it about what drives those who support the Republican Party in the US? The fact that parts of the Republican Party appear quite anti-science (evolution is just one theory), as well as anti-economics (tax cuts reduce the deficit), would surely have the effect of putting academics off publicly associating themselves with that party. I can see why that would make a Republican candidate particularly keen to be seen to have academic support, but not why so many seem happy to give their blanket support.

When I get asked to sign letters, there is always an internal debate between part of me that agrees with the cause and another that does not agree with everything that is written in the letter supporting that cause. Sometimes one side wins and I sign, and sometimes the other side wins and I don’t. Applying the same logic to the 400, the cause must be really important. Either the prospect of a Romney victory must be so appealing, or the threat of another Obama Presidency so awful, that those signing have been willing to put all their normal critical faculties and sensibilities to one side.  Or to go further, and write supporting documents in a way that either ignores what the evidence suggests or tries to suggest the evidence says what it does not, something that neither scientists nor engineers would do.

In a world still suffering greatly from the consequences of ineffective financial regulation, is the threat of marginally more effective regulation that dire? In an economy where tax rates on the rich have fallen and inequality has increased massively (whatever John Cochrane may want to believe), is the prospect of that not continuing so appalling? Is the prospect of just a bit less rather than a lot less government so terrifying that you are happy to sign up to obvious distortions like “Obama has offered no plan to reduce federal spending and stop the growth of the debt-to-GDP ratio”? It is this I find hard to understand.

Martin Wolf comments that it would be naive to think that economics could ever be as free from ideological or political influence as science or engineering, and I agree. However that does not mean that it is wrong to try and expose and reduce that influence. So it is therefore interesting if the influence of right wing politics and free market ideology is less powerful in some parts of Europe than it is in the US. Unfortunately I have little idea quite why that is and what it implies. 

Wednesday 15 August 2012

House prices, consumption and aggregation

A simplistic view of the link between house prices and consumption is that lower house prices reduce consumers’ wealth, and wealth determines consumption, so consumption falls. But think about a closed economy, where the physical housing stock is fixed. Housing does not provide a financial return. So if house prices fall, but aggregate labour income is unchanged, then if aggregate consumption falls permanently the personal sector will start running a perpetual surplus. This does not make sense.

The mistake is that although an individual can ‘cash in’ the benefits of higher house prices by downgrading their house, if the housing stock is fixed that individual’s gain is a loss for the person buying their house. Higher house prices are great for the old, and bad for the young, but there is no aggregate wealth effect.

As a result, a good deal of current analysis looks at the impact house prices may have on collateral, and therefore on house owners’ ability to borrow. Higher house prices in effect relax a liquidity or credit constraint. Agents who are credit constrained borrow and spend more when they become less constrained. There is no matching reduction in consumption elsewhere, so aggregate consumption rises. If it turns out that this was a house price bubble, the process goes into reverse, and we have a balance sheet recession[1]. In this story, it is variations in the supply of credit caused by house prices that are the driving force behind consumption changes. Let’s call this a credit effect.

There is clear US evidence that house price movements were related to changes in borrowing and consumption. That would be consistent with a wealth effect as well as with a credit constraint story, but as we have noted, in aggregate the wealth effect should wash out.

Or should it? Let’s go back to thinking about winners and losers. Suppose you are an elderly individual who is about to go into some form of residential home. You have no interest in the financial position of your children, and the feeling is mutual. You intend to finance the residential home fees and additional consumption in your final years from the proceeds of selling your house. If house prices unexpectedly fall, you have less to consume, so the impact of lower house prices on your consumption will be both large and fairly immediate. Now think about the person the house is going to be sold to. They will be younger, and clearly better off as a result of having to fork out much less for the house. If they are the archetypal (albeit non-altruistic) intertemporal consumer, they will smooth their additional wealth over the rest of their life, which is longer than the house seller’s. So their consumption rises by less than the house seller’s consumption falls, which means aggregate consumption declines for some time. This is a pure distributional effect, generated by life-cycle differences in consumption.

In aggregate, following a fall in house prices, the personal sector initially moves into surplus (as the elderly consume less), and then it moves into deficit (as the elderly disappear and the young continue to spend their capital gains). In the very long run we go back to balance. This reasoning assumes that the house buyer is able to adjust to any capital gains/losses over their entire life. But house buyers tend to be borrowers, and are therefore more likely to be credit constrained. So credit effects could reverse the sign of distributional effects.
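To make the life-cycle arithmetic concrete, here is a minimal simulation sketch of the pure distributional effect (the numbers are mine and purely illustrative, with perfect smoothing, no interest rate and no credit constraints):

```python
# Minimal sketch of the pure distributional effect of a house price fall.
# Illustrative numbers only: perfect smoothing, no interest, no credit constraints.

price_fall = 20_000      # unexpected fall in the house price (pounds)
years_old = 5            # remaining lifetime of the elderly seller
years_young = 40         # remaining lifetime of the young buyer

# Perfect consumption smoothing: spread the capital loss/gain evenly
# over each agent's remaining lifetime.
cut_old_per_year = price_fall / years_old        # seller consumes this much less
rise_young_per_year = price_fall / years_young   # buyer consumes this much more

horizon = 40
aggregate_path = []
for year in range(1, horizon + 1):
    old_effect = -cut_old_per_year if year <= years_old else 0.0
    young_effect = rise_young_per_year if year <= years_young else 0.0
    aggregate_path.append(old_effect + young_effect)

for year, change in enumerate(aggregate_path, start=1):
    print(f"year {year:2d}: change in aggregate consumption = {change:8.0f}")

# Years 1-5: aggregate consumption is 4000 - 500 = 3500 a year lower (the seller dominates).
# Years 6-40: it is 500 a year higher (only the buyer's gain remains).
# Cumulatively: -5*3500 + 35*500 = 0, so in the very long run we are back to balance.
```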

This is a clear case where micro to macro modelling, of the kind surveyed in the paper by Heathcote, Storesletten and Violante, is useful in understanding what might happen. An example related to UK experience is a paper by Attanasio, Leicester and Wakefield (earlier pdf here). This tries to capture a great deal of disaggregation, and allows for credit constraints, limited (compared to the Barro ideal) bequests and much more, in a partial equilibrium setting where house price and income processes are exogenous. The analysis is only as good as its parts, of course, and I do not think it allows for the kind of irrationality discussed here. In addition, as housing markets differ significantly between countries, some of their findings could be country specific.

Perhaps the most important result of their analysis is that house prices are potentially very important in determining aggregate consumption. According to the model, most movements in UK consumption since the mid-1980s are caused by house price shocks rather than income shocks. In terms of the particular mechanism outlined above, their model suggests that the impact of house prices on the old dominates that on the young, despite credit constraints influencing the latter more. In other words the distributional effect of lower house prices on consumption is negative. Add in a collateral credit effect, and the model predicts lower house prices will significantly reduce aggregate consumption, which is the aggregate correlation we tend to observe.

But there remains an important puzzle which the paper discusses but does not resolve. In the data, in contrast to the model, consumption of the young is more responsive to house price changes than consumption of the old. The old appear not to adjust their consumption following house price changes as much as theory suggests they should, even when theory allows a partial bequest motive. So there remain important unresolved issues about how house prices influence consumption in the real world.



[1] This is like the mechanism in the Eggertsson and Krugman paper, although that paper is agnostic about why borrowing limits fall. They could fall as a result of greater risk aversion by banks, for example.

Monday 13 August 2012

ECB conditionality exceeds their mandate


To get a variety of views on this issue, read this post from Bruegel. Here is my view.

We can think of the governments of Ireland or Spain facing a multiple equilibria problem when trying to sell their debt. There is a good equilibrium, where interest rates on this debt are low  and fiscal policy is sustainable. There is a bad equilibrium, where interest rates are high, and because of this default is possible at some stage. Because default is possible, a high interest rate makes sense – hence the term equilibrium.

Countries with their own central bank and sustainable fiscal policy can avoid the bad equilibrium, because the central bank would buy sufficient government debt to move from the bad to the good. (See this pdf by Paul De Grauwe.) The threat that they would do this means they may not need to buy anything. Anyone who speculates that interest rates will rise will lose money, so the interest rate immediately drops to the low equilibrium.
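Here is a minimal numerical sketch of that multiple equilibria logic (the functional form and all parameter values are mine, purely for illustration, and are not taken from De Grauwe). Risk-neutral investors require the safe return in expectation, so (1 + r)(1 − loss × p) = 1 + r_safe, while the perceived default probability p rises with r because higher debt service makes default more likely:

```python
# Minimal sketch of self-fulfilling debt crisis equilibria (illustrative numbers only).
# Investors are risk neutral: (1 + r) * (1 - loss * p(r)) = 1 + r_safe,
# and the perceived default probability p(r) rises with the interest rate r.

import math

r_safe = 0.02    # safe interest rate
loss = 0.5       # loss given default (haircut)
p_max = 0.12     # maximum perceived default probability
steepness = 300  # how sharply default risk rises with the interest rate
r_mid = 0.05     # interest rate at which default fears take off

def default_prob(r):
    """Perceived default probability as an increasing function of r."""
    return p_max / (1 + math.exp(-steepness * (r - r_mid)))

def required_rate(r):
    """Interest rate investors require, given the default risk implied by r."""
    return (1 + r_safe) / (1 - loss * default_prob(r)) - 1

# An equilibrium is a fixed point: the rate investors require equals the rate
# they expect. Scan a grid and report sign changes of required_rate(r) - r.
grid = [i / 10000 for i in range(100, 2000)]   # rates from 1% to 20%
gaps = [required_rate(r) - r for r in grid]
for r_lo, r_hi, g_lo, g_hi in zip(grid, grid[1:], gaps, gaps[1:]):
    if g_lo * g_hi <= 0:
        print(f"equilibrium interest rate near {(r_lo + r_hi) / 2:.3%}")
```

With these illustrative numbers the scan finds three crossings: a good equilibrium near 2%, a bad one near 8.5%, and an unstable one in between. A credible central bank backstop works by removing the beliefs that support the bad crossing.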

How do markets know the central bank will do this, if that central bank is independent? They might reason that independence would be taken away by the government if the central bank refused. But suppose independence was somehow guaranteed. Well, they might look at what the central bank is doing. If it is already buying government debt as part of a Quantitative Easing (QE) programme, then as long as the same conditions remain the high interest outcome would not be an equilibrium.

Suppose instead that the central bank does not have a QE programme, and announces that it will only undertake one if the country concerned agrees to sell some of its debt to other countries under certain onerous conditions, and agreement is uncertain. We are of course talking about the ECB. Now the bad equilibrium becomes a possibility again. Perhaps the country will not agree to these onerous conditions. As Kevin O’Rourke points out, this possibility is quite conceivable for a country like Italy. Equally, based on past experience, the lenders may only agree if there is partial default. Neither of these things needs to be inevitable, just moderately possible – after all, interest rates are high only because there is a non-negligible chance of default. The ECB also says that even if the country and its potential creditors agree, it may still choose not to buy that country’s bonds. This throws another lifeline to the existence of a bad equilibrium.

So, we have moved from a situation where the bad equilibrium does not exist, to one where it can. As the good equilibrium is clearly better than the bad one, there must be some very good reason for the ECB to impose this kind of conditionality. What could it be?

The ECB’s mandate is price stability. So without conditionality, would there be an increased risk of inflation? One concern is that printing more money to buy government debt will raise inflation. But that does not appear to be a concern in the UK and US, for two very good reasons. First, the economy is in recession, or experiencing a pretty weak recovery. Second, central bank purchases of government debt are reversible, if inflation did look like it was becoming a serious problem.

What about the danger that by buying bonds now, when there is no inflation risk, governments will be encouraged to follow imprudent fiscal policies at other times when inflation is an issue? But why would the ECB buy government bonds in that situation? Buying bonds now does not commit the ECB to do so in the future. No one thinks the Fed will be doing QE in a boom. OK, what about all those ‘structural reforms’ that might not occur if the bad equilibria disappeared? Well, quite simply, that is none of the ECB’s business. It has nothing to do with price stability. If the ECB is worrying about structural reforms, it is exceeding its mandate.

Cannot the same argument – that an issue is not germane to price stability – be used about choosing between the good and bad equilibria? No. The bad equilibrium, because it forces countries like Ireland and Spain to undertake excessive austerity (and because it may influence the provision of private sector credit in those countries), is reducing output and will therefore eventually reduce inflation below target. The only ‘conditionality’ the ECB needs to avoid moral hazard is that intervention will take place only if the country in the bad equilibrium is suffering an unnecessarily severe recession. The ECB can decide itself whether this is the case by just looking at the data.

So, in my view, to embark on unconditional and selective QE in the current situation is within the price stability mandate of the ECB. To impose conditionality in the way it is doing is not within its mandate. Unfortunately, as Karl Whelan points out, this is not the first time the ECB has exceeded its mandate. As he also says, if the Fed or Bank of England made QE conditional on their governments undertaking certain ‘structural reforms’ or fiscal actions, there would be outrage. So why do so many people write as if it is acceptable for the ECB to do this?

Saturday 11 August 2012

Handling complexity within microfoundations macro


In a previous post I looked at a paper by Carroll which suggested that the aggregate consumption function proposed by Friedman looked rather better than more modern intertemporal consumption theory might suggest, once you took the issue of precautionary saving seriously. The trouble was that to show this you had to run computer simulations, because the problem of income uncertainty was mathematically intractable. So how do you put the results of this finding into a microfounded model?

While I want to use the consumption and income uncertainty issue as an example of a more general problem, the example itself is very important. For a start, income uncertainty can change, and we have some evidence that its impact could be large. In addition, allowing for precautionary savings could make it a lot easier to understand important issues, like the role of liquidity constraints or balance sheet recessions.

I want to look at three responses to this kind of complexity, which I will call denial, computation and tricks. Denial is straightforward, but it is hardly a solution. I mention it only because I think that it is what often happens in practice when similar issues of complexity arise. I have called this elsewhere the streetlight problem, and suggested why it might have had unfortunate consequences in advancing our understanding of consumption and the recent recession.

Computation involves embracing not only the implications of the precautionary savings results, but also the methods used to obtain them as well. Instead of using computer simulations to investigate a particular partial equilibrium problem (how to optimally plan for income uncertainty), we put lots of similar problems together and use the same techniques to investigate general equilibrium macro issues, like optimal monetary policy.

This preserves the internal consistency of microfounded analysis. For example, we could obtain the optimal consumption plan for the consumer facing a particular parameterisation of income uncertainty. The central bank would then do its thing, which might include altering that income uncertainty. We then recompute the optimal consumption plan, and so on, until we get to a consistent solution.
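To give a flavour of what such a computation involves in its simplest partial equilibrium form, here is a minimal sketch (a toy parameterisation of my own, not Carroll’s model): a finite horizon consumer with CRRA utility faces a small probability of a low income draw each period, and the optimal consumption rule is computed by backward induction on a grid.

```python
# Minimal sketch: precautionary saving with income uncertainty, solved by
# backward induction on a grid (toy parameterisation, purely illustrative).

import numpy as np

beta, R, rho = 0.96, 1.03, 2.0          # discount factor, gross return, risk aversion
incomes = np.array([1.0, 0.3])          # normal income, low ("unemployed") income
probs = np.array([0.95, 0.05])          # probabilities of each income draw
T = 50                                  # remaining lifetime in periods

m_grid = np.linspace(0.05, 10.0, 200)   # cash on hand (assets plus current income)
c_grid = np.linspace(0.01, 10.0, 400)   # candidate consumption levels

def u(c):
    return c ** (1 - rho) / (1 - rho)

# Terminal period: consume everything.
V_next = u(m_grid)
policy = np.empty((T, m_grid.size))

for t in reversed(range(T)):
    V_now = np.empty_like(m_grid)
    for i, m in enumerate(m_grid):
        c = c_grid[c_grid <= m]                 # no borrowing in this sketch
        assets = R * (m - c)                    # savings carried into next period
        # expected continuation value over the two income draws
        EV = sum(p * np.interp(assets + y, m_grid, V_next)
                 for p, y in zip(probs, incomes))
        values = u(c) + beta * EV
        best = np.argmax(values)
        V_now[i] = values[best]
        policy[t, i] = c[best]
    V_next = V_now

# With income risk, consumption at a given level of cash on hand is lower than
# with certain income: the gap is precautionary saving.
print("consumption when cash on hand = 2.0:", np.interp(2.0, m_grid, policy[0]))
```

In a full general equilibrium exercise of the kind described above, a step like this would sit inside an outer loop that also solves for policy and for the income process itself.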

We already have plenty of papers where optimal policy is not derived analytically but through simulation.(1) However these papers typically include microfounded equations for the model of the economy (the consumption function etc). The extension I am talking about here, in its purest form, is where nothing is analytically derived. Instead the ingredients are set out (objectives, constraints etc), and (aside from any technical details about computation) the numerical results are presented – there are no equations representing the behaviour of the aggregate economy.

I have no doubt that this approach represents a useful exercise, if robustness is investigated appropriately. Some of the very interesting comments to my earlier post did raise the question of verification, but while that is certainly an issue, I do not see it as a critical problem. But could this ever become the main way we do macroeconomics? In particular, if results from these kinds of black box exercises were not understandable in terms of simpler models or basic intuition, would we be prepared to accept them? I suspect they would be a complement to other forms of modelling rather than a replacement, and I think Nick Rowe agrees, but I may be wrong. It would be interesting to look at the experience in other fields, like Computable General Equilibrium models in international trade for example.

The third way forward is to find a microfoundations 'trick'. By this I mean a set up which can be solved analytically, but at the cost of realism or generality. Recently Carroll has done just that for precautionary saving, in a paper with Patrick Toche. In that model a representative consumer works, has some probability of becoming unemployed (the income uncertainty), and once unemployed can never be employed again until they die. The authors suggest that this set-up can capture a good deal of the behaviour that comes out of the computer simulations that Carroll discussed in his earlier paper.

I think Calvo contracts are a similar kind of trick. No one believes that firms plan on the basis that the probability of their prices changing is immutable, just as everyone knows that one spell of unemployment does not mean that you will never work again. In both cases they are a device that allows you to capture a feature of the real world in a tractable way.
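The payoff from the Calvo trick is an aggregate equation that is easy to work with. In the simplest textbook New Keynesian case, where each firm resets its price with probability 1 − θ each period, log-linearising and aggregating delivers the familiar Phillips curve:

```latex
\pi_t = \beta\, E_t\, \pi_{t+1} + \frac{(1-\theta)(1-\beta\theta)}{\theta}\, \widehat{mc}_t
```

where the last term is real marginal cost (often proxied by an output gap). The unrealistic constant probability plays the same role here as the assumption of permanent unemployment in Carroll and Toche: it buys analytical tractability.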

However, these tricks do come at a cost, which is how certain we can be of their internal consistency. If we derive a labour supply and consumption function from the same intertemporal optimisation problem, we know these two equations are consistent with each other. We can mathematically prove it. Furthermore, we are content that the underlying parameters of that problem (impatience, the utility function) are independent of other parts of the model, like monetary policy. Now Noah Smith is right that this contentment is a judgement call, but it is a familiar call. With tricks like Calvo contracts, we cannot be that confident. This is something I hope to elaborate on in a subsequent post. 

This is not to suggest that these tricks are not useful – I have used Calvo contracts countless times. I think the model in Carroll and Toche is neat. It is instead to suggest that the methodological ground on which these models stand is rather shakier as a result of these tricks. We can never write ‘I can prove the model is internally consistent’, but just ‘I have some reasons for believing the model may be internally consistent’. Invariance to the Lucas critique becomes a much bigger judgement call.

There is another option that is implicit in Carroll’s original paper, but perhaps not a microfoundations option. We use computer simulations of the kind he presents to justify an aggregate consumption function of the kind Friedman suggested. Aggregate equations would be microfounded in this sense (there need be no reference to aggregate data), but they would not be formally (mathematically) derived. Now the big disadvantage of this approach is that there is no procedure to ensure the aggregate model is internally consistent. However, it might be much more understandable than the computation approach (we could see and potentially manipulate the equations of the aggregate model), and it could be much more realistic than using some trick. I would like to add it as a fourth possible justification for starting macro analysis with an aggregate model, where aggregate equations were justified by references to papers that simulated optimal consumer behaviour.  

(1) Simulation analysis can make use of mathematically derived first order conditions, so the distinction here is not black and white. There are probably two aspects to the distinction that are important for the point at hand: generality and transparency of analysis, with the latter perhaps being more important. My own thoughts on this are not as clear as I would like.

Thursday 9 August 2012

Giving Economics a Bad Name


Greg Mankiw is known to every economist and economics student, if only because of his best-selling textbook. John Taylor is known to every macroeconomist, if only because of the large number of bits of macro with his name on them (the Taylor rule, Taylor contracts etc). Both are respected by other academics because of the quality and influence of their academic work.

With two others, they recently wrote this about the Obama administration’s attempts to stimulate the economy through fiscal policy after the recession: “The negative effect of the administration’s ‘stimulus’ policies has been documented in a number of empirical studies.” They then quote from two studies. The first looks at a minor aspect of the stimulus packages, the Cash for Clunkers attempt to bring forward car purchases. There are other studies of this programme which are more favourable. The second study is co-authored by John Taylor, and others have interpreted his findings differently.

No other studies are directly referred to. That might just be because the overwhelming majority suggest that the stimulus package worked. Dylan Matthews on Ezra Klein's blog documents them here. As I wrote in a recent post, the evidence is about as clear as it ever is in macro. Which is not too surprising, as it is what Mankiw’s textbook suggests, and it is what the New Keynesian theory both authors have contributed to suggests.

Now the quote comes from a paper prepared for the Romney presidential campaign. It is clearly political in tone and intent. As both academics are Republican supporters, it may therefore seem par for the course. But it should not be. The Romney campaign publicised this paper because it was written by academics – experts in their field. It allows those who oppose fiscal stimulus to continue to claim that the evidence is on their side – look, these distinguished academics say so.

It is one thing for economists to disagree about policy. It would also be fine to say ‘I know the evidence is mixed, but I think some evidence is more reliable’. It is not fine to imply that the evidence points in one direction when it points in the other. I say imply, because the authors do not explicitly say that the majority of studies suggest stimulus is ineffective. If they chose their words carefully, then you have to ask whether ‘intending to mislead’ is any better than ‘misrepresenting the facts’. Was that the intent, or just an isolated unfortunate piece of bad phrasing? All I can say is read the paper and judge for yourself, or this post from Brad DeLong.

This is sad, because it tells us as much about economics as an academic discipline as it does about the individuals concerned. In the past I have imagined something similar happening in physics. It actually stretches the imagination to do so, but if it did, the academics concerned would immediately lose their academic reputation. The credibility of their work would be questioned. Responding to evidence rather than ignoring it is what distinguishes real science from pseudoscience, and doctors from snake oil salesmen.

What can economics as a discipline do about this sad state of affairs? The answer is pretty obvious, to economists in particular, and that is changing the incentives where we can. However we cannot do much about the incentives provided by politics and the media. I have been pretty pessimistic about this in the past, but in a future post I will try and be more positive and talk about one possible way forward. 

Wednesday 8 August 2012

One more time – good policy takes account of risks, and what happens if they materialise


From the Guardian's report of Mervyn King’s press conference today, where the Bank of England lowered its forecast for UK growth this year to zero.

Paul Mason of Newsnight suggests that the Bank of England should stop trying to use monetary policy to offset the impact of chancellor George Osborne's fiscal tightening, and call for a Plan B instead.
King rejects the idea, saying that Osborne's plan looked "pretty sensible" back in 2010. Overseas factors have undermined it, he argues.
Now Mervyn King had little choice but to say this, but he is wrong (and probably knows he is wrong) for a simple reason. Even if the post-2010 Budget forecast of 2.8% growth in 2012 had been pretty sensible, there were risks either side. There always are, although the nature of the recession probably made these risks greater than normal. It is what you can do if those risks materialise that matters.

Now if growth had appeared to be stronger than 2.8%, and inflation had become excessive, the solution was obvious, well tested and effective – the Bank of England raises interest rates. But if growth had looked like falling well short of 2.8%, the solution – more Quantitative Easing – was untested and very unclear in its effectiveness. (And before anyone comments, the government has no intention of telling the Bank to abandon inflation targets.) With this basic asymmetry, you do not cross your fingers and hope your forecasts are correct. Instead you bias policy towards trying as far as possible to avoid the bad outcome. You go for 3.5% or 4% growth, knowing that if this produced undesirable inflation you could do something about it. That in turn meant not undertaking the Plan A of severe austerity.
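One stylised way to put this (my own illustration, not anything the Bank or the Treasury would recognise): suppose the policymaker plans for growth g, the outcome is g + \epsilon with \epsilon = +\sigma or -\sigma equally likely, and the loss is the square of any shortfall below the desired outturn y^*. Overshoots can be pulled back to y^* at negligible cost by raising rates; undershoots cannot be corrected at all. Planning g = y^* then gives an expected loss of \tfrac{1}{2}\sigma^2, while planning g = y^* + b for any 0 < b \leq \sigma gives \tfrac{1}{2}(\sigma - b)^2, which is smaller. The asymmetry in the available instruments, not the central forecast, is what argues for aiming high.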

So all the talk about how much austerity, or the Eurozone, or anything else, caused the current UK recession is beside the point when it comes to assessing the wisdom of 2010 austerity. Criticising the Bank of England for underestimating inflation in the past is even more pointless – do those making the criticism really think interest rates should have been higher two or three years ago? Even if the Euro crisis has been unforeseeable bad luck for the government (although I think excessive austerity is having its predictable effect there too), the government should not have put us in a position where we seem powerless to do anything about it.

If you are sailing a ship near land, you keep well clear of the coast, even if it means the journey may take longer.  So the fact that the economy has run aground does not mean the government was just unlucky. You do not embark on austerity when interest rates are near zero. Keynes taught us that, it is in all the textbooks, and a government bears responsibility when it ignores this wisdom. To the extent that the government was encouraged to pursue this course by the Governor of the Bank of England, that responsibility is shared.

Saturday 4 August 2012

Watching the ECB play chess


Watching Mario Draghi trying gradually to outmanoeuvre some of his colleagues in order to rescue the Eurozone has a certain intellectual fascination, as long as you forget the stakes involved. I’m not an expert on the rules of this game, so I’m happy to leave the blow-by-blow account to others, such as Storbeck, Fatas, Varoufakis and Whelan.

What I cannot help reflecting on is the intellectual weakness of the position adopted by Draghi’s opponents. These opponents appear obsessed with a particular form of moral hazard: if the ECB intervenes to reduce the interest rates paid by certain governments, this will reduce the pressure on these governments to cut their debt and undertake certain structural reforms. (Alas this concern is often repeated in otherwise more reasonable analysis.) Now one, quite valid, response is to say that in a crisis you have to put moral hazard concerns to one side, as every central bank should know when it comes to a financial crisis. But a difficulty with this line is that it implicitly concedes a false diagnosis of the major problem faced by the Eurozone.

For most Eurozone countries, the crisis was not caused by their governments spending in an unsustainable way, but by their private sectors doing so (for example, Martin Wolf here). The politics are such that the government ends up picking up the tab for imprudent lending by banks. If you want to avoid this happening again, you focus on making sure governments do what they can to prevent excess private sector spending, which means countercyclical fiscal policy, and perhaps breaking the political power that banks have over local politicians.

Trying to do either of these things by forcing excessive austerity on governments is completely counterproductive. You do not encourage countercyclical fiscal policy by making it more pro-cyclical. In addition, creating major recessions in these countries makes it more, not less, likely that banks will be bailed out. Forcing excessive austerity, as well as doing nothing to deal with the underlying causes of the crisis, may even have made the short term problem of default risk worse. Not only has the size of any bank bailout increased because of domestic recession, but in the case of Greece excessive austerity has generated political instability, which also increases default risk.

In a monetary union, a ‘punishment’ for allowing excessive private sector spending (and therefore the incentive to avoid it) is automatic: the economy becomes uncompetitive and must deflate relative to its partners to bring its prices back into line. Adjustment should be painful for creditors and debtors alike. However, there are two clear-cut reasons why this deflation should be gradual rather than sharp. The first is the Phillips curve: gradual deflation to adjust the price level is much more efficient than rapid deflation. The second is aversion to nominal wage cuts, which makes getting significant negative inflation very costly.
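A stylised way to see the first point, deliberately ignoring discounting and the forward-looking element of modern Phillips curves: suppose each period’s negative output gap x_t lowers the price level by k x_t, so restoring competitiveness requires gaps summing to \sum_t x_t = \Delta p / k, and suppose the cost of those gaps is convex, say \sum_t x_t^2. Spreading the adjustment evenly over T periods then costs T (\Delta p / (kT))^2 = (\Delta p / k)^2 / T, which falls as T rises, whereas squeezing the same price adjustment into a single sharp recession costs the full (\Delta p / k)^2. The second reason compounds this: the short, sharp route requires outright nominal wage cuts rather than wage growth merely below that of trading partners.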

It is in this context that the game of chess being played at the ECB seems so divorced from macroeconomic reality. By delaying intervention, and insisting on conditionality, the ECB is complicit in creating unnecessarily severe recessions in many Eurozone countries, and may even be making the problem of high interest rates on government debt worse. As the interest rate the ECB sets is close to the zero lower bound, it is almost powerless to deal with the consequences for aggregate Eurozone activity, so the Eurozone as a whole enters an unnecessary recession. The OECD is forecasting a -4% output gap for the Euro area in 2013, and only an inflation nutter would call that a success for the ECB.

It gets worse. By not using its power (which no one doubts) to lower interest rates on government debt, it has allowed a crisis of market confidence to become a distributional struggle between Eurozone countries. So in effect one set of governments started financing another, on terms that make it very difficult for debtors to pay, and so the crisis becomes one that could threaten the cohesion of the Eurozone itself.  The ‘you will have to leave’ threats to Greece are just a particularly nasty manifestation of this.

There is a line that some people take that the current crisis shows that a partial economic union, where fiscal policy remains under the control of nation states, is inevitably flawed, and that the only long term solution for the Euro area is fiscal as well as monetary union. I think that case is unproven. If the ECB had undertaken a programme of Quantitative Easing, directed (as any such programme should be) at markets where high interest rates were damaging the economy, then economies would have been able to focus on restoring competitiveness in a controlled and efficient manner. That was never going to be easy or painless, but it need not have led to the scale of recession, and the political discord, that we are now seeing.

The current crisis certainly reveals shortcomings in the original design of the Euro. In my view these shortcomings could have been (and still could be) solved, if those in charge had looked at what was actually happening and applied basic macroeconomic principles and ideas. We have perpetual crisis today because too many European policymakers (and, with politicians’ encouragement, perhaps also voters) are looking at events through a kind of Ordoliberal and anti-Keynesian prism. If the current crisis reveals anything, it is how misguided this ideological perspective is.