Category Archives: Energy analysis and reporting

Why league tables don’t work

League tables are highly unsuitable for reporting energy performance, because small measurement errors can propel participants up and down the table. As a result, the wrong people get praised or blamed, and real opportunities go missing while resources are wasted pursuing phantom problems.

Figure 1

To illustrate this, let’s look at a fashionable application for league tables: driver behaviour. The table on the right (Figure 1) shows 26 drivers, each of whom is actually achieving a true fuel economy of between 45 and 50 mpg in identical vehicles doing the same duties. This is a very artificial scenario, but to make it a little more realistic let us accept that there will be some error in measurement: alongside their ‘true’ mpg I have put the ‘measured’ values. These differ from the true value by a random amount, normally distributed with a standard deviation of 1 mpg, meaning that two-thirds of them fall within 1 mpg either side of the true value, and that big discrepancies, although rare, are not impossible. Errors of this magnitude (around 2%) are highly plausible given that (a) it is difficult to fill the tank consistently to the brim at the start and end of the assessment period and (b) there could easily be a 5% error in the recorded mileage. Check your speedometer against a satnav if you doubt that.

In the right-hand column of Figure 1 we see the resulting ranking based on a spreadsheet simulating the random errors. The results look fine: drivers A, B and C at the top and X, Y and Z at the bottom, in line with their true performance. But I cheated to get this result: I ran the simulation several times until this outcome occurred.
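The spreadsheet simulation can be sketched in a few lines of Python. The 26 drivers and the 1 mpg error standard deviation follow the scenario above; the evenly spread ‘true’ values and the random seed are my own illustrative assumptions:

```python
import random

# A minimal sketch of the league-table simulation: 26 drivers whose true
# fuel economy is spread between 45 and 50 mpg, ranked on 'measured'
# values that carry a normally-distributed error (sd = 1 mpg).
random.seed(1)

drivers = [chr(ord("A") + i) for i in range(26)]
# True mpg spread evenly from 50 (driver A) down to 45 (driver Z)
true_mpg = {d: 50 - 5 * i / 25 for i, d in enumerate(drivers)}

# 'Measured' mpg: true value plus a random measurement error
measured = {d: true_mpg[d] + random.gauss(0, 1) for d in drivers}

# League table: rank by measured mpg, best first
ranking = sorted(drivers, key=lambda d: measured[d], reverse=True)
for pos, d in enumerate(ranking, start=1):
    print(f"{pos:2d}. driver {d}  true {true_mpg[d]:.1f}  measured {measured[d]:.1f}")
```

Re-running with a different seed makes the point: the ordering churns from run to run even though nobody’s true performance has changed.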

Figure 2

Figure 2 shows an extract of two other outcomes I got along the way. The top table has driver B promoted from second to first place (benefiting from a 2.3 mpg error), while in the bottom table the same error, combined with bad luck for some of the others, propels driver K into first place from what should have been 11th.

In neither case does the best driver get recognised, and in the top case driver P, actually rather average at 16th, ends up at the bottom thanks to an unlucky but not impossible 7% adverse measurement error.

A league table is pretty daft as a reporting tool. The winners crow about it (deservedly or not) while those at the bottom find excuses or (justifiably) blame the methodology. As a motivational tool: forget it. When the results are announced, the majority of participants, including those who made a big effort, will see themselves advertised as having failed to get to the top.

Download the simulation to see all this for yourself.

Estimating savings from building-fabric improvements

If you improve a building’s insulation, or reduce its ventilation rate, the resulting energy saving can be estimated using simple formulae in combination with relevant weather-data tables. In the case of an improvement to insulation of an individual element of the building envelope, the approximate formula for annual fuel savings is

0.024 × (U_OLD – U_NEW) × A × DDA / EFF                         (kWh)

where U_OLD and U_NEW are the original and improved U-values (W/m²K), and A is the area of the building element being improved (m²). EFF is the heating-system efficiency, for which it would be reasonable to assume a value in the range 0.8 to 0.9, reflecting the fact that 10-20% of the fuel used is accounted for by combustion losses.

DDA meanwhile is the annual heating degree-day figure, which is a measure of how cold the weather was in aggregate. Degree-day totals tend to be higher in the north and lower in the south; and they also depend on the outside temperature below which a given building’s heating needs to be turned on (the ‘base’ temperature). Selected totals are given in Table 1 for various regions and base temperatures. Buildings with high space temperatures and low casual heat gains have higher base temperatures, implying higher annual degree-day totals and thus bigger expected savings for a given improvement to their insulation.
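As a worked example, the insulation formula can be wrapped in a small Python function. The degree-day figure below is the Midland 15°C total from Table 1; the U-values, area and efficiency are my own illustrative assumptions:

```python
# Sketch of the annual fabric-saving formula:
# saving (kWh) = 0.024 x (U_old - U_new) x A x DDA / EFF
def fabric_saving_kwh(u_old, u_new, area_m2, degree_days, efficiency):
    """Annual fuel saving (kWh) from improving one element of the envelope."""
    return 0.024 * (u_old - u_new) * area_m2 * degree_days / efficiency

# Example: 100 m2 of wall improved from 1.5 to 0.3 W/m2K, in the
# Midland region at a 15degC base (2,033 degree days), with a
# heating-system efficiency of 0.85
saving = fabric_saving_kwh(1.5, 0.3, 100, 2033, 0.85)
print(round(saving), "kWh per year")  # roughly 6,900 kWh
```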

Turning to the effect of reducing the building’s ventilation rate, we need to know the reduction in air throughput, RDV. If we express RDV in m³/day, the annual energy savings are given by this approximate formula:

(0.008 × RDV × DDA) / EFF                   (kWh)

DDA and EFF have the same meanings as before.
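The ventilation formula translates the same way; again the 2,033 degree-day figure is the Midland 15°C total from Table 1, and the airflow reduction and efficiency are illustrative assumptions:

```python
# Sketch of the ventilation-saving formula:
# saving (kWh) = 0.008 x RDV x DDA / EFF, with RDV in m3/day
def ventilation_saving_kwh(rdv_m3_per_day, degree_days, efficiency):
    """Annual fuel saving (kWh) from cutting air throughput by RDV m3/day."""
    return 0.008 * rdv_m3_per_day * degree_days / efficiency

# Example: airflow reduced by 1,000 m3/day, 2,033 degree days,
# heating-system efficiency 0.85
print(round(ventilation_saving_kwh(1000, 2033, 0.85)), "kWh per year")
```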

Use for air conditioning

The same techniques can be used to gauge the effect of reduced cooling load. In this case we use cooling degree days (examples in Table 2) and EFF is likely to be in the range 2 to 4, representing the chiller coefficient of performance. Saving one kWh of cooling effect saves much less than a kWh of electricity.
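For cooling, the fabric formula is unchanged except that cooling degree days are used and EFF becomes the chiller coefficient of performance. The 233 degree-day figure below is the South West 15°C total from Table 2; the other inputs are illustrative assumptions:

```python
# Same fabric formula applied to cooling: EFF is now the chiller's
# coefficient of performance (COP), typically 2 to 4, so each kWh of
# cooling effect avoided saves only 1/COP kWh of electricity.
def cooling_saving_kwh(u_old, u_new, area_m2, cooling_degree_days, cop):
    """Annual electricity saving (kWh) from a reduced cooling load."""
    return 0.024 * (u_old - u_new) * area_m2 * cooling_degree_days / cop

# Example: the same 100 m2 wall improvement, South West region at a
# 15degC base (233 cooling degree days), chiller COP of 3
print(cooling_saving_kwh(1.5, 0.3, 100, 233, 3.0), "kWh per year")
```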

Base temperatures

The base temperature for heating depends on the temperature set-point, the construction of the building, how it is used, how densely it is populated and how much casual heat gain it experiences from lighting and equipment. It is invariably below the internal set-point temperature. How far below can be determined in various ways but there would typically be about 4°C difference.

Similar considerations apply to cooling: the cooling base temperature is the temperature above which it becomes necessary to run air conditioning. If you know air-conditioning is used throughout the year, a very low base (say 5°C) is appropriate. Otherwise something of the order of 15°C could be a reasonable assumption.

Table 1: Annual heating degree days1

Base temperature:   20°C    15°C    10°C
South West         3,189   1,576     503
Midland            3,632   2,033     860
N E Scotland       4,075   2,355   1,003

Table 2: Annual cooling degree days1

Base temperature:   25°C    15°C     5°C
South West             2     233   2,386
Midland                6     274   2,111
N E Scotland           0     111   1,649

1 The full tables can be downloaded from www.vesma.com. Click on ‘D’ in the A-Z index and look for ‘degree days’.

Accurate meter readings: a managing director’s view

More from the museum of energy management…

The UK’s first energy manager was Oliver Lyle, managing director of the eponymous sugar refinery in London. He was successful not only because he was in a position of influence but also because he was a very capable engineer. Fuel efficiency was mission critical to him both during the war (because of rationing) and afterwards when the effect of rationing was compounded by economic growth.

Lyle’s book The Efficient Use of Steam, published by the Ministry of Fuel and Power in 1947, remains a technical classic and is written in a most engaging style.  In one of my favourite passages he talks about my pet subject, the analysis of fuel consumption. He notes that energy performance seemed to be systematically better in weeks when the factory had been bombed. He remarks that common sense suggests that the opposite would seem more likely; “I can only conclude”, he writes, “that people were too busy clearing up the mess to take proper charts and meter readings”.

Accounting for the weather: an early degree-day meter


I’m indebted to Dr Peter Harris for unearthing this curiosity, published in the journal of the Institute of Heating and Ventilating Engineers in 1936. It is a design for a “degree-day meter” whose purpose was to summarise how cold the weather had been over a given period (a month, say). This is how it works: a resistance thermometer a is mounted outdoors and connected via a Wheatstone bridge b to a moving-coil galvanometer c whose pointer d moves horizontally across a scale e, marked from 60°F on the left to -20°F on the right, thereby indicating the outside air temperature.

Above the pointer is a tapered chopper bar f, moved up and down by a light spring g driven by a rotating cam h. Because the chopper bar is tapered, its vertical travel is constrained to an increasing extent as the pointer moves leftward, indicating higher temperatures; conversely, its vertical travel is greater the lower the temperature, as the pointer moves to the right. The intermittent vertical travel of the chopper bar is transmitted via a pawl i and ratchet-wheel j to a cyclometer counter k which shows the total vertical travel. The counter advances more rapidly when it is colder outside and more slowly when it is warmer, and it is so arranged that when the temperature exceeds 60°F there is no vertical play and the counter does not advance at all.

Because a building’s heating power requirement at any given moment is proportional to the temperature deficit, the accumulated deficit over any given period of days (as measured by this meter) is proportional to the total thermal energy lost from the building, which needs to be made up by the heating system. Because it measures the time integral of temperature deficit, its units of measurement are degree-days (analogous to man-hours) and the threshold temperature of 60°F survives today as the common degree-day base temperature of 15.5°C.
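The meter’s accumulation can be expressed in software in a couple of lines. This is a hypothetical sketch using daily mean temperatures (real degree-day calculations often use finer-grained methods) with the 15.5°C base noted above:

```python
# Software equivalent of the 1936 meter: accumulate the daily deficit of
# outside temperature below a base of 15.5 degC (the metric equivalent
# of the meter's 60 degF threshold).
def heating_degree_days(daily_mean_temps_c, base_c=15.5):
    """Sum (base - T) over days where T is below base; warmer days add nothing."""
    return sum(max(0.0, base_c - t) for t in daily_mean_temps_c)

# A sample week: the two days at or above 15.5 degC contribute zero,
# just as the meter's counter stops advancing above its threshold
week = [3.0, 5.5, 10.0, 15.5, 18.0, 7.5, 12.0]
print(heating_degree_days(week))  # 39.5 degree days
```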

Choosing an assessment interval

By default, I tend to favour a weekly assessment interval for routine exception reporting and associated analysis. Monthly is too long for all except the smallest users (although it may be appropriate for passive top-management reports for users of all sizes) and months are also too inconsistent in terms both of duration and number of working days.

In some applications, daily analysis may be viable, for example:

  • in buildings such as data centres which operate seven days a week and respond rapidly to changing weather conditions; or
  • in energy-intensive manufacturing processes.

More frequent near-real-time assessment is sometimes attempted but this brings complications that tend to outweigh the benefits. Firstly, there will be error induced by short-term effects such as transients, lags, latencies, and factors which are not practical to take into account but whose random influences would have cancelled out over a longer time interval. Secondly, the cash values of excess consumptions over a short interval are very small. Thus with too-frequent reporting the user is continually bombarded with trivial alerts which often prove fleeting. Not the best recipe for engagement.

Having said that, where fine-grained data are being collected they can be an invaluable diagnostic aid; but the best reporting tactic is to review performance at, say, a daily or weekly interval and use the real-time record for diagnosis by exception.

How to waste energy No. 7: meter reading

A big part of wasting energy is not knowing how much you use, when, where or for what. Most keen energy wasters rely on their energy suppliers to read their meters for them, but here are some top tips for those who want to be proactively bad:

1. Make it difficult to get access, for example by installing meters at height, or leaving the keys to the meter room with an obnoxious jobsworth.

2. Try to have meters installed in positions where you cannot see their dials.

3. Never have a reliable check-reading taken by somebody who knows what they are doing.

4. Do not create a meter schedule; if you have one, don’t keep it up to date.

5. Do not try to find out what each meter serves.

6. If in doubt about units of measurement or scale multiplier factors, make whatever assumptions you like.

7. When a meter is swapped out, dispose of the old one without noting its final reading.

8. Do not train anybody to read meters.

9. Do not appoint stand-ins to cover for sickness or holiday absence.

10. Allow meter readers to be lax about when they take readings, and let them record the date they were supposed to take the readings rather than the actual date and time.

11. Allow meter readers to include or ignore decimal fractions as they feel inclined, if possible being inconsistent between visits to the same meter.

12. Rely on paper returns, and lose them.

Link: Energy management training


Struggling to verify savings?

When people ask me for advice on how to verify energy savings, it is usually because their analysis is not giving the results they expected. Often they have left it too late, developing a methodology after the event or even making it up as they go along. So if you are contemplating an energy-saving project the first plea I would make is this: agree a measurement and verification plan between the interested parties before the project starts. That way, everyone is forced to think about the calculation methodology and (just as importantly) focus on what data will be needed, who will collect it, and even how much uncertainty there is likely to be in the conclusions. It also pays to think about what non-routine changes might occur (patterns of occupation, extensions, demolitions, etc) and agree how those will be factored in if they occur.

Sometimes, fortunately, it is possible to rescue the verification of a project where the “shoot first, ask questions later” approach has been used. To achieve a resolution one needs two things: first, a willingness on both sides to accept a retrospective definition of procedure; and second, at least some accurate prior consumption data. That consumption data can, however, be sparse, so the presence of a lot of estimates (a common situation) need not necessarily be a problem. The analysis in such circumstances is done using a technique called “back-casting”.

Recall that in a normal evaluation, accurate and complete pre-project baseline data are needed so as to establish the prior relationship between consumption and relevant independent driving factors (such as degree days, hours of darkness, production and so on). A formula is derived, typically using regression analysis, for predicting consumption from those driving factors. After the energy conservation measure (ECM) has been installed, that same baseline formula can be fed with driving-factor data and will yield an estimate of what consumption would have been in the absence of the ECM. The spread between this estimate and actual consumption is a measure of the ‘avoided’ energy consumption.

The back-casting method is different. It turns that logic on its head. Using post-ECM rather than pre-ECM measurements, a formula is developed which relates consumption to driving factors for the improved installation (rather than its original performance). Thus you can say that the analysis “baseline” period follows, rather than precedes, the ECM, which some people find odd. In this scenario, pre-ECM actual consumptions can be compared with what they would have been if the ECM had been active all along, and one would expect those actual consumptions to be higher than the model’s predictions (the opposite of the conventional approach where post-ECM consumptions turn out lower than the baseline model predicts).

Back-casting is no less valid as a method, but it enjoys one big advantage in that you only need two firm meter readings predating the ECM. They should be as far apart in time as possible, and you need to be able to retrieve driving-factor data spanning exactly the entire period between the meter readings, but if those conditions are met, your model formula can tell you what the expected consumption of the installation would have been over that entire period if the ECM had already been in place, and hence how much more was actually used in the absence of the ECM. This back-to-front approach is attractive because regular meter readings are generally easier to assure after the project than before.
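The mechanics of back-casting can be illustrated with a simple linear model of consumption against degree days, fitted on post-ECM data and then applied to the pre-ECM period. All of the figures below are invented for illustration; in practice the model might use several driving factors and a proper regression package:

```python
# Hypothetical back-casting sketch: fit consumption vs degree days on
# POST-ECM data, then predict what the PRE-ECM period would have used
# if the ECM had already been in place.
def fit_line(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    b = num / den
    return my - b * mx, b

# Post-ECM "baseline": monthly degree days vs fuel use (kWh), invented data
dd_post = [100, 200, 300, 400]
kwh_post = [1050, 1980, 3020, 3950]
a, b = fit_line(dd_post, kwh_post)

# Back-cast: only two firm pre-ECM meter readings are needed, spanning a
# period whose total degree days are known (say 1,800 over 6 months)
months_pre = 6
dd_pre_total = 1800
expected_pre = months_pre * a + b * dd_pre_total  # model's total for the period
actual_pre = 22000                                # difference of the two readings

# Pre-ECM actual exceeds the model: the excess is the consumption the
# ECM would have avoided had it been in place all along
print(f"estimated avoidable consumption: {actual_pre - expected_pre:.0f} kWh")
```

Note the direction of the comparison: here the actual pre-ECM consumption is expected to come out higher than the model predicts, the mirror image of a conventional baseline evaluation.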

Link: Energy management training