In many aspects of commercial and public life, key performance indicators are used to quantify outcomes so that they can be compared numerically. In energy management there is a common KPI called the ‘Specific Energy Ratio’ or SER: it is simply energy used divided by the quantity of product output.

Unfortunately, although the use of SER is widespread and the concept is written into UK national legislation and international standards, it is a weak and misleading metric. Its weakness is that it implicitly assumes that consumption is proportional to output. It almost never is. Usually there is some degree of fixed overhead consumption and that means, through simple arithmetic, that SER values go up when throughput is low and down when it is high, irrespective of energy performance. This fact, acknowledged in UK government energy advice as long ago as 1947, ought to disqualify SER as a performance indicator.
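The arithmetic is easy to see with a sketch. Assume a hypothetical plant with a fixed overhead load of 500 kWh/day plus 2 kWh per unit produced (both figures invented for illustration):

```python
# Why SER is misleading when there is fixed overhead consumption.
# Illustrative plant: 500 kWh/day fixed load plus 2 kWh per unit of output.
FIXED_KWH = 500.0      # assumed overhead (lighting, standby losses, etc.)
KWH_PER_UNIT = 2.0     # assumed marginal energy per unit produced

def ser(units_per_day: float) -> float:
    """Specific Energy Ratio: total energy used divided by output."""
    return (FIXED_KWH + KWH_PER_UNIT * units_per_day) / units_per_day

for units in (100, 500, 1000):
    print(f"{units:>5} units/day -> SER = {ser(units):.2f} kWh/unit")
# 100 units/day gives SER 7.00; 1000 units/day gives SER 2.50
```

Nothing about the plant's energy performance changes between the three days; only throughput does. Yet SER swings from 7.0 down to 2.5 kWh/unit, purely because the fixed 500 kWh is spread over more units.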

Readers won’t be surprised to hear that the concept of ‘expected consumption’ solves the SER problem. If you measure actual consumption, and can calculate expected consumption to match, you can divide one by the other to arrive at a robust performance indicator called the ‘Energy Performance Coefficient’ (EnPC). When the EnPC is greater than 1.0, you have used more than you should have done; when it is less than 1.0, you have used less than would previously have been required. Simple as that. Furthermore, the result is inoculated against variation in product throughput, weather, daylight availability or whatever other factors naturally affect consumption. Why? Because the relevant factors were taken into account when calculating the expected consumption which is part of the arithmetic.
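Continuing the hypothetical plant from before, suppose expected consumption comes from a simple driver model fitted to historical data (the coefficients here are illustrative, not fitted):

```python
# EnPC sketch: actual consumption divided by expected consumption,
# where 'expected' comes from an assumed fitted model of the drivers.
FIXED_KWH = 500.0      # hypothetical fixed term from the fitted model
KWH_PER_UNIT = 2.0     # hypothetical variable term per unit of output

def expected_kwh(units: float) -> float:
    """Model-based expected consumption for a given output level."""
    return FIXED_KWH + KWH_PER_UNIT * units

def enpc(actual_kwh: float, units: float) -> float:
    """Energy Performance Coefficient: actual / expected."""
    return actual_kwh / expected_kwh(units)

# Low-throughput day, consumption exactly on the historical line:
print(f"{enpc(700.0, 100):.2f}")    # 1.00 -- no penalty for low output
# High-throughput day with genuine waste:
print(f"{enpc(2750.0, 1000):.2f}")  # 1.10 -- 10% over expected
```

Note that the low-output day scores 1.00 even though its SER would look terrible: the fixed overhead is already inside the expected figure, so throughput variation cancels out of the ratio.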

And if you don’t like the idea of a performance indicator with a starting value of 1.0, you can just multiply it by whatever fixed constant scales it up to the numerical value people are more used to.

But just using a more rational performance indicator is not the complete answer. All conventional performance indicators suffer from a fundamental drawback: they only report *relative* performance and they say nothing about the scale of what they are measuring. Put simply, if something small is performing really badly the penalty could well be less than for something gigantic performing only slightly worse than it should. The answer is simple: look not at the ratio between actual and expected consumption, but at their difference. But that is for future issues.
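A pair of invented figures shows the scale problem. Take a small load performing dreadfully and a large one only slightly off its expected line:

```python
# Ratio vs difference: two hypothetical loads with the same EnPC arithmetic.
small_actual, small_expected = 150.0, 100.0        # EnPC 1.50: terrible ratio
big_actual, big_expected = 10_500.0, 10_000.0      # EnPC 1.05: mildly off

for actual, expected in ((small_actual, small_expected),
                         (big_actual, big_expected)):
    ratio = actual / expected        # relative performance (EnPC)
    excess = actual - expected       # absolute avoidable consumption, kWh
    print(f"EnPC = {ratio:.2f}, excess = {excess:.0f} kWh")
```

The small load's EnPC of 1.50 looks far worse than the big load's 1.05, yet the big load is wasting 500 kWh against the small load's 50: ten times as much. The difference, not the ratio, tells you where the kilowatt-hours are going.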