Is economic output an accurate measure of the COVID-19 impact?
By Emeritus Professor Michael Haynes
Coronavirus has put all governments under pressure. One measure of their success or failure has been the number of deaths; another is the hit taken to economic output as a result of lockdown measures.
But how accurate is the data? Measuring deaths is a nightmare, but is the measurement of economic output any better?
The answer is no.
The UK Q2 data: a 20%+ fall?
Total output, or gross domestic product (GDP), is said to be ‘the most powerful statistical figure in human history’. It’s the denominator of many key ratios, including the debt-to-GDP ratio, and governments and their policies are judged by it. The Brexit hit or gain will be measured by it, and so will the impact of the pandemic.
So important is GDP that advanced countries try to issue output data on a quarterly basis. Bad data can spook the markets, and the UK data for the second quarter of 2020 looked terrible – a seeming 20.4% fall. The Financial Times said Britain was experiencing ‘the worst slump in Europe’.
The government’s many critics did not hold off. But honesty demands that we also recognise the difficulties of making international comparisons.
Every student of basic economics learns the elements of national income accounting, but even at more advanced levels few explore it in detail.
Top commentators seemed quite confused about the scale of the Q2 output decline, and failed to realise that internationally comparable GDP figures are an added victim of the pandemic.
The mysterious production boundary
GDP is a measure of the total output of the economy, but it only measures market output. At first sight this leaves out two big areas where goods and services are not bought and sold: labour in the home, and the public sector, from defence to education and health.
The internationally agreed accounting approach talks of a production boundary. Within it, things are counted. Beyond it, they are not. Beyond the boundary sits a significant amount of unpaid (largely female) labour in the home. But what of the state sector?
Early on it was decided that government services and those of non-profit institutions serving households (that’s NPISH in the insider jargon) could not be left out. But how could they be valued?
The simple solution, still in use in many countries, is to assume that in the government sector the value of output equals the value of the inputs.
If inputs increase by one unit then so does output: input = output. The output of primary schools, for example, is mostly just assumed to equal the value of teachers’ salaries. This makes life easier, but it means that no account is taken of quality, and there can be no measured productivity change.
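To see what the convention implies, here is a minimal sketch in Python. The figures are invented index numbers, purely for illustration; they are not real data.

```python
# Minimal sketch of the input = output convention for valuing government
# services. All figures are hypothetical index numbers, not real data.

teacher_salaries_normal = 100.0    # salaries paid in a normal quarter
teacher_salaries_lockdown = 100.0  # salaries kept being paid in lockdown

# Under the convention, measured output simply equals measured input:
output_normal = teacher_salaries_normal
output_lockdown = teacher_salaries_lockdown

# Schools were largely shut, yet no fall in output is recorded:
print(output_normal, output_lockdown)  # 100.0 100.0
```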
From 1998, the UK’s Office for National Statistics (ONS) began to pioneer a new set of what are called volume measures. These try to measure the output of some government services more directly.
In education you might look at the number of pupils in schools and colleges, their grades, and so on. In health you would look at patients, procedures, and outcomes.
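A volume measure might weight together activity indicators of the kind just described. The sketch below is a deliberately simplified, hypothetical version: the indicators, levels, and weights are invented, and the actual ONS methodology is far more elaborate.

```python
# Hypothetical volume index for education output (base period = 100).
# Indicators and weights are invented for illustration only.

def volume_index(levels, weights):
    """Weighted average of activity indicators."""
    return sum(level * weight for level, weight in zip(levels, weights))

weights = [0.8, 0.2]               # taught pupil-days vs. exam outcomes
normal_quarter = [100.0, 100.0]
lockdown_quarter = [10.0, 100.0]   # most face-to-face teaching stopped

print(volume_index(normal_quarter, weights))    # 100.0
print(volume_index(lockdown_quarter, weights))  # 28.0 -- a steep measured fall
```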
Coronavirus complications
Unfortunately, the coronavirus lockdown made this far more complex. No country has stopped paying teachers, so in countries that used the input = output method the contribution of education to output held up.
But in the UK, the ONS statisticians took account, among other things, of the reduced formal schooling, so output fell. The same applied to the health sector, with operations cancelled and A&E attendance down.
If we look at the UK data, we see that some of the biggest output falls were in these large areas. Of the 20% fall in GDP, around 4 percentage points came from falls in education and health output.
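The arithmetic behind such a contribution is simple: a sector’s contribution to GDP growth is roughly its share of GDP multiplied by its own growth rate. The shares and growth rates below are invented to roughly match the magnitudes quoted, not taken from ONS tables.

```python
# Rough contribution arithmetic: contribution = GDP share * sector growth.
# Shares and growth rates are invented to roughly match the quoted magnitudes.

sectors = {
    "education and health": (0.15, -0.27),  # ~15% of GDP, falling ~27%
    "rest of the economy":  (0.85, -0.19),
}

total = 0.0
for name, (share, growth) in sectors.items():
    contribution = share * growth
    total += contribution
    print(f"{name}: {contribution:+.1%} of GDP")  # about -4% and -16%

print(f"overall GDP growth: {total:+.1%}")  # about -20%
```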
The UK is not alone in using these measures. But whereas there is only one input = output method, volume measures can be calculated in different ways; and the UK appears to have been much more upfront than many other countries in taking account of reductions in output in the public sector.
At the start of the crisis Eurostat, which sets the rules and checks they are being followed, gave instructions about how to respond to these complications, including the need to explain these added uncertainties.
But this has been largely ignored, including by Eurostat itself. Meanwhile, the ONS included an international comparison in its GDP publication without noting these difficulties.
One further effect of this was to create the illusion of a hidden inflation surge in the UK. Inflation is normally measured by the consumer price index, which in August rose by 1%. But the GDP deflator is a second measure, and it went haywire, rising by nearly 8%.
Some panicked, not realising that this was because government spending in cash terms had increased while measured output had dropped as schools closed and all but emergency healthcare was cancelled.
This made the price of government inputs appear to rise. The government was paying more and getting less, but this is not ‘inflation’ in any meaningful sense.
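A worked sketch of the deflator arithmetic makes the point. The GDP deflator is nominal GDP divided by real (volume) GDP, so if cash spending holds up while measured volume falls, the implied ‘price’ of government output jumps. The numbers below are invented.

```python
# Why the deflator spiked: deflator = nominal / real * 100.
# Index numbers below are invented for illustration.

nominal_gov_spending = 105.0  # cash spending rose slightly
real_gov_output = 80.0        # measured volume of services fell sharply

implied_deflator = nominal_gov_spending / real_gov_output * 100
print(round(implied_deflator, 1))  # 131.2 -- a big apparent 'price' rise

# Nothing was repriced: the same cash bought less *measured* output, so the
# implied price per unit of government output jumped.
```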
More statistical perversities?
None of this is easy to correct. In both methods of measuring output, labour in the home does not count. The considerable, if uneven, additional home-schooling effort of parents is ‘beyond the boundary’.
The same logic means that unpaid overtime to help the recovery also goes uncounted. But the perversities of volume measures are greater.
If output in education in the second quarter went down because of a lack of teaching, then logically shouldn’t it go down even more in the third, because of the school holidays? No – because education output is always low in the summer, and the statistics are seasonally adjusted to take account of this.
Paradoxically, this will mean the UK will see a sharp ‘rise’ in education output in August!
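The mechanics of seasonal adjustment explain the paradox. A seasonally adjusted figure is, roughly, raw output divided by a factor reflecting the normal seasonal pattern. The factors and levels in this sketch are invented.

```python
# Sketch of the seasonal-adjustment paradox. Adjusted output is roughly
# raw output divided by a seasonal factor. All numbers are invented.

seasonal_factor = {"term_time": 1.10, "august": 0.40}  # August is always low

raw_output = 20.0  # schools largely shut in both periods

# In term time, closed schools are a huge shortfall against the seasonal norm:
sa_term_time = raw_output / seasonal_factor["term_time"]  # ~18.2

# In August, closed schools are exactly what the seasonal norm expects:
sa_august = raw_output / seasonal_factor["august"]        # 50.0, a 'rise'

print(round(sa_term_time, 1), round(sa_august, 1))  # 18.2 50.0
```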
Or take the little matter of the UK exam crisis. One of the indicators used in measuring the output of the education sector is exam performance. So how do the political fix and the resulting grade inflation affect the statistics?
Looking across Eurostat and the OECD, the level of complexity is huge. It will take time to get a measure of it. Before we make international comparisons to praise or damn governments, we need to make sure we understand the data. If we don’t, we end up comparing apples with pears.