Category Archives: Energy analysis and reporting

Nice try, but…

A recent issue of the CIBSE Journal, which one would have thought ought to have high editorial standards, published an article which was basically a puff piece for a certain boiler water additive. It contained some fairly odd assertions, such as that the water in the system would heat up faster but somehow cool down more slowly. Leaving aside the fact that large systems in fact operate at steady water temperatures, this would be magic indeed. The author suggested that the additive reduced the insulating effect of steam bubbles on the heat-exchanger surface, and thus improved heat transfer. He may have been taking the word ‘boiler’ too literally, because of course steam bubbles don’t normally occur in a low or medium-temperature hot water boiler, and if they did, I defy him to explain how they would interfere with heat transfer in the heat emitters.

But for me the best bit was a chart relating to an evaluation of the product in situ. A scatter diagram compared the before-and-after relationships between fuel consumption and degree days (a proxy for heating load). This is good: it is the sort of analysis one might expect to see.

The chart looked like this, and I can’t deny that performance was better after than before. The problem is that this chart does not tell quite the story they wanted. The claim for the additive is that it improves heat transfer; the reduction in fuel consumption should therefore be proportional to load, and the ‘after’ line ought really to have a shallower gradient as well as a lower intercept. If the intercept reduces but the gradient stays the same, as happened here, it is because some fixed load (such as boiler standing losses) has disappeared. One cannot help wondering whether they had idle boilers in circuit before the system was dosed, but not afterwards.
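To make the point concrete, here is a minimal sketch of that diagnosis, using invented before-and-after figures (not the data from the article):

```python
# Fit before/after regression lines of weekly fuel use against degree days
# and compare their gradients and intercepts. All numbers are invented.
import numpy as np

degree_days = np.array([10, 25, 40, 55, 70, 85])             # weekly degree days
fuel_before = np.array([900, 1540, 2210, 2830, 3490, 4120])  # kWh per week
fuel_after  = np.array([610, 1260, 1890, 2560, 3180, 3820])  # kWh per week

m_b, c_b = np.polyfit(degree_days, fuel_before, 1)
m_a, c_a = np.polyfit(degree_days, fuel_after, 1)

print(f"before: gradient {m_b:.1f} kWh/DD, intercept {c_b:.0f} kWh")
print(f"after:  gradient {m_a:.1f} kWh/DD, intercept {c_a:.0f} kWh")

# A genuine heat-transfer improvement should show as a shallower gradient;
# an unchanged gradient with a lower intercept points instead to the
# removal of a fixed load such as boiler standing losses.
```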

The analysis illustrated here is among the useful techniques people learn on my energy monitoring and targeting courses.

MAVCON17 WAS A HIT

We’ve had some very enthusiastic feedback from delegates at MAVCON17, the third National Measurement and Verification Conference, which we held on 16 November.

Delegates wrestle with the thorny issue of non-routine adjustments

Adam Graveley of Value Retail, for example, described it as “a very informative and well-organised conference that provided a great deal of practical insight”.

The event consistently attracts around 70 M&V practitioners who value not only the networking opportunity but also what they call the ‘geek element’ (expert technical papers with extended question-and-answer sessions), group exercises, and a no-holds-barred expert panel discussion for which this year’s theme was “when M&V goes wrong”.

(l. to r.) Chairman Richard Hipkiss, keynote speaker Denis Tanguay and organiser Vilnis Vesma

Our keynote speaker was Denis Tanguay, Executive Director of the Efficiency Valuation Organisation, the body responsible for the International Performance Measurement and Verification Protocol (IPMVP). We are planning to run MAVCON again in early November 2018 and are open for offers of technical papers and ideas for group exercises.

We are grateful to our other speakers Dave Worthington, Hilary Wood, Colin Grenville, Steve Barker and Emma Hutchinson, and our expert panellists Sandeep Nair, Ellen Salazar and Quinten Babcock. You can read more about them at the conference web site, www.MAVCON.uk.

We should also acknowledge the venue, the Priory Rooms, for the quality of their service, including the excellent catering, which drew much favourable comment.


Daylight-linked consumption

When monitoring consumption in outside lighting circuits with photocell control, it is reasonable to expect weekly consumption to vary according to how many hours of darkness there were. And that’s exactly what we can see here in this Spanish car park:

It is a textbook example: with the exception of two weeks, it shows the tightest correlation that I have ever seen in any energy-consuming system.

The negative intercept is interesting, and a glance at the daily demand profile (viewed as a heatmap) shows how it comes about:

Moving left to right, we see the duration of daylight (zero consumption, in blue) increase from January to March. High consumption starts at dusk and finishes at dawn, but from about 10 p.m. to 5 a.m. it drops back to a low level. It is this “missing” consumption for about seven hours in the night which creates the negative intercept. If they kept all the lights on from dusk to dawn, the line would go through the origin.
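To see how the overnight switch-off produces a negative intercept, here is a back-of-envelope sketch. The load figure is invented; only the seven-hour off period comes from the heatmap above.

```python
# Hypothetical: lights drawing P kW are off for 7 hours in each of the
# week's 7 nights, i.e. 49 hours of darkness per week with no consumption.
# Weekly kWh = P * (darkness_hours - 49): a straight line with gradient P
# and a negative intercept of -49 * P.
P = 10  # kW, assumed total lighting load
for darkness_hours in (60, 80, 100):  # weekly hours of darkness
    print(darkness_hours, "h dark:", P * (darkness_hours - 49), "kWh")
```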

For weekly and monthly tabulations of hours of darkness (suitable for England and other countries on similar latitudes) click here.


Energy Savings Opportunity Scheme

ESOS is the UK government’s scheme for mandatory energy assessments which must be reviewed and signed off by assessors who are on one of the approved registers. We are now in Phase 2 with a submission deadline in December 2019, but the Environment Agency is trying to get participants to act now.

I run a closed LinkedIn group for people actively engaged with ESOS; it provides a useful forum with lots of high-quality discussion.

Background reading

Useful resources

These are documents which I have developed to support the ESOS assessment process. I used them for my assignments during the first phase and have since revised them in the light of experience:

Pitfalls of regression analysis: case study

I began monitoring this external lighting circuit at a retail park in the autumn of 2016. The scatter diagram below shows weekly consumption which is well correlated with changing daylight availability, expressed as effective hours of darkness per week.

The only anomaly is the implied negative intercept, which I will return to later; when you view actual against expected consumption, as below, the relationship seems perfectly rational:


Consumption follows the annual sinusoidal profile that you might expect.

But what about that negative intercept? The model appears to predict close to zero consumption in the summer weeks, when there would still be roughly six hours a night of darkness. One explanation could be that the lights are actually habitually turned off in the middle of the night for six hours when there is no activity. That is entirely plausible, and it is a regime that does apply in some places, but not here. For evidence see the ‘heatmap’ view of half-hourly consumption from September to mid November:


As you can see, lighting is only off during hours of daylight; note, by the way, how the duration of daylight gradually diminishes as winter draws on. But the other very clear feature is the difference before and after 26 October, when the overnight power level abruptly increased. When I questioned that change, the explanation was rather simple: they had turned on the Christmas lights (you can even see that they tested them mid-morning on the day of the switch-on).

So that means we must disregard that week and subsequent ones when setting our target for basic external lighting consumption. This puts a different complexion on our regression analysis. If we use only the first four weeks’ data we get the relationship shown with a red line:

In this modified version, the negative intercept is much less marked and the data-points at the top right-hand end of the scatter are anomalous because they include Christmas lighting. There are, in effect, two behaviours here.

The critical lesson we must draw is that regression analysis is just a statistical guess at what is happening: you must moderate the analysis by taking into account any engineering insights that you may have about the case you are analysing.


Lego shows why built form affects energy performance

Just to illustrate why building energy performance indicators can’t really be expected to work: here we have four buildings with identical volumes and floor areas (the same set of Lego blocks), but just look at the different amounts of external wall, roof and ground-floor perimeter – even exposed soffit in two of them.

But all is not lost: there are techniques we can use to benchmark dissimilar buildings, in some cases leveraging submeters and automatic meter reading, but also using good old-fashioned whole-building weekly manual meter readings if that’s all we have. Join me for my lunchtime lecture on 23 February to find out more.

Advanced benchmarking of building heating systems

The traditional way to compare buildings’ fuel consumptions is to use annual kWh per square metre. When they are in the same city, evaluated over the same interval, and just being compared with each other, there is no need for any normalisation. So it was with “Office S” and “Office T” which I recently evaluated. I found that Office S uses 65 kWh per square metre and Office T nearly double that. Part of the difference is that Office T is an older building; and it is open all day Saturday and Sunday morning, not just five days a week. But desktop analysis of consumption patterns showed that Office T also has considerable scope to reduce its demand through improved control settings.

Two techniques were used for the comparison. The first is to look at the relationship between weekly gas consumption and the weather (expressed as heating degree days).

The chart on the right shows the characteristic for Office S. Although not a perfect correlation, it exhibits a rational relationship.

Office T, by contrast, has a quite anomalous relationship which actually looks like two different behaviours: a high one during the heating season and another in milder weather.

The difference in the way the two heating systems behave can be seen by examining their half-hourly consumption patterns. These are shown below using ‘heat map’ visualisations for the period 3 September to 10 November, i.e., spanning the transition from summer to winter weather. In an energy heatmap each vertical stripe is one day, midnight to midnight GMT from top to bottom, and each cell represents half an hour. First, Office S. You can see its daytime load progressively becoming heavier as the heating season progresses:

Compare Office T, below. It has some low background consumption (for hot water) but note how, after its heating system is brought into service at about 09:00 on 3 October, it abruptly starts using fuel at similar levels every day:

Office T displays classic signs of mild-weather overheating, symptomatic of faulty heating control. It was no surprise to find that its heating system uses radiators with weather compensation and no local thermostatic control. In all likelihood the compensation slope has been set too shallow – a common and easily-rectified failing.

By the way, although it does not represent major energy waste, note how the hot water system evidently comes on at 3 in the morning and runs until after midnight seven days a week.
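For anyone wanting to reproduce this kind of chart, the mechanics are simple. Here is a minimal sketch using matplotlib, with random stand-in data where a real exercise would load the half-hourly meter readings:

```python
# Build a heatmap from a half-hourly series: one column per day, one row
# per half-hour, midnight at the top. The data here are random stand-ins.
import numpy as np
import matplotlib.pyplot as plt

days = 69                                # e.g. 3 September to 10 November
rng = np.random.default_rng(0)
kwh = rng.random(days * 48)              # stand-in for real meter readings

grid = kwh.reshape(days, 48).T           # rows = half-hours, columns = days
plt.imshow(grid, aspect="auto", origin="upper", cmap="viridis")
plt.xlabel("day")
plt.ylabel("half-hour of day (midnight at top)")
plt.colorbar(label="kWh per half-hour")
plt.show()
```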

This case history showcases two of the advanced benchmarking techniques that will be covered in my lunchtime lecture in Birmingham on 23 February 2017 (click here for more details).

Air-compressor benchmarking

Readers with reliably-metered compressed-air installations are invited to participate in an exercise using a comparison technique called parametric benchmarking.

Background

Traditionally, air-compressor installations have been benchmarked against each other by comparing their simple specific energy ratios (SER) expressed typically as kWh per normal cubic metre. However, as this daily data kindly supplied by a reader shows, there may be an element of fixed consumption which confounds the analysis because the SER will be misleadingly higher at low output:

Note: a four-day period of anomalous performance has been hidden in this diagram.
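The arithmetic behind that distortion is worth spelling out. If daily consumption follows a straight line kWh = m × V + c (where V is air output), then SER = kWh/V = m + c/V, and the fixed term c/V inflates the ratio at low output. A quick sketch with invented figures:

```python
# Illustrative figures only: gradient m in kWh per normal cubic metre and
# a fixed daily consumption c in kWh.
m, c = 0.12, 500

for V in (2_000, 5_000, 10_000):   # daily air output, Nm3
    ser = (m * V + c) / V          # equals m + c/V
    print(f"output {V:>6} Nm3: SER = {ser:.3f} kWh/Nm3")
# The same installation looks far less 'efficient' at low output even
# though m, the marginal energy per cubic metre, has not changed.
```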

It seems to me that the gradient of the regression line would be a much better parameter for comparison; broadly speaking, on a simple thermodynamic view, one would expect similar gradients for compressors with the same output pressure, and differences would imply differences in the efficiency of compression. The intercept on the other hand is a function of many other factors. It may include parasitic loads; it will certainly depend on the size of the installation, which the gradient should not.

I am proposing to run a pilot exercise pooling anonymous data from readers of the Energy Management Register to try “parametric” benchmarking, in which the intercepts and gradients of regression lines are compared separately.
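As a sketch of what the pooled comparison might look like, here are three hypothetical installations, each reduced to a gradient and an intercept by linear regression (all figures invented):

```python
# Parametric benchmarking sketch: fit a line per installation and compare
# gradients and intercepts separately. Data are invented for illustration.
import numpy as np

sites = {
    "site A": ([4000, 6000, 8000, 10000], [980, 1230, 1470, 1700]),
    "site B": ([1000, 2000, 3000, 4000], [420, 540, 660, 790]),
    "site C": ([5000, 7000, 9000, 11000], [1650, 1980, 2300, 2610]),
}

for name, (air_nm3, kwh) in sites.items():
    m, c = np.polyfit(air_nm3, kwh, 1)
    print(f"{name}: gradient {m:.3f} kWh/Nm3, intercept {c:.0f} kWh")

# At similar delivery pressures one would expect similar gradients; a
# markedly steeper gradient suggests less efficient compression, while a
# large intercept suggests parasitic loads or idle running.
```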

Call for data

Participants must have reliable data for electricity consumption and air output at either daily or weekly intervals: we will also need to know what compressor technology they use, the capacity of each compressor, and the air delivery pressures.

In terms of the metered data the ideal would be to have an electricity and air meter associated with each individual compressor. However, metering arrangements may force us to group compressors together, the aim being to create the smallest possible block model whose electricity input and air output is measurable.

Please register your interest by email to vilnis@vesma.com with ‘compressor benchmarking’ in the subject line: once I have a reasonable group of participants I will approach them for the data.

Vilnis Vesma

4 January 2017

The meaning of R-squared

In statistical analysis the coefficient of determination (more commonly known as R²) is a measure of how well variation in one variable explains the variation in something else, for instance how well the variation in hours of darkness explains variation in electricity consumption of yard lighting.

R² varies between zero, meaning there is no effect, and 1.0, which would signify total correlation between the two with no error. It is commonly held that higher R² is better, and you will often see a value of (say) 0.9 stated as the threshold below which you cannot trust the relationship. But that is nonsense, and one reason can be seen from the diagrams below, which show how, for two different objects, energy consumption on the vertical or y axis might relate to a particular driving factor or independent variable on the horizontal or x axis.

Figure: two models with the same CV(RMSE) but very different R²

In both cases, the relationship between consumption and its driving factor is imperfect. But the data were arranged to have exactly the same degree of dispersion. This is shown by the CV(RMSE) value, which is the root mean square deviation expressed as a percentage of the average consumption. R² is 0.96 (so-called “good”) in one case but only 0.10 (“bad”) in the other. But why would we regard the right-hand model as worse than the left? If we were to use either model to predict expected consumption, the absolute error in the estimates would be the same.
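For anyone who wants to check the arithmetic, this is how the two statistics are computed. The data below are invented to make the same point as the diagrams: identical scatter about the trend line, hence near-identical CV(RMSE), but wildly different R².

```python
import numpy as np

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

def cv_rmse(y, y_hat):
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    return 100 * rmse / np.mean(y)   # per cent of average consumption

x = np.arange(1, 11, dtype=float)
noise = np.array([3., -2., 4., -3., 2., -4., 3., -2., 4., -5.])

# Same mean and same scatter about the line; only the gradient differs.
steep = 265 + 30.0 * (x - x.mean()) + noise   # strong driving factor
flat  = 265 +  0.5 * (x - x.mean()) + noise   # weak driving factor

for label, y in (("steep", steep), ("flat", flat)):
    m, c = np.polyfit(x, y, 1)
    y_hat = m * x + c
    print(f"{label}: R2 = {r_squared(y, y_hat):.2f}, "
          f"CV(RMSE) = {cv_rmse(y, y_hat):.1f}%")
```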

By the way, if anyone ever asks how to get R² = 1.0, the answer is simple: use only two data points. By definition, the two points will lie exactly on the best-fit line through them!

Another common misconception is that a low value of R² in the case of heating fuel signifies poor control of the building. This is not a safe assumption. Try this thought experiment. Suppose that a building’s fuel consumption is being monitored against locally-measured degree days. You can expect a linear relationship with a certain R² value. Now suppose that the local weather monitoring fails and you switch to using published degree-day figures from a meteorological station 35 km away. The error in the driving-factor data caused by using remote weather observations will reduce R² because the estimates of expected consumption are less accurate; more of the apparent variation in consumption will be attributable to error and less to the measured degree days. Does the reduced R² signify worse control? No; the building’s performance hasn’t changed.
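The thought experiment is easy to simulate. In the sketch below (all figures invented) the building’s behaviour never changes; only the quality of the degree-day data does:

```python
# Simulate a building driven by local degree days, then regress its
# consumption against error-laden 'remote' degree days. Invented figures.
import numpy as np

rng = np.random.default_rng(1)
local_dd = rng.uniform(5, 80, 52)                   # weekly local degree days
kwh = 500 + 45 * local_dd + rng.normal(0, 150, 52)  # unchanging behaviour

remote_dd = local_dd + rng.normal(0, 8, 52)         # distant-station error

def r2(x, y):
    m, c = np.polyfit(x, y, 1)
    resid = y - (m * x + c)
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

print(f"R2 against local degree days:  {r2(local_dd, kwh):.2f}")
print(f"R2 against remote degree days: {r2(remote_dd, kwh):.2f}")
```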

Degree-day base temperature

When considering the consumption of fuel for space heating, the degree-day base temperature is the outside air temperature above which heating is not required, and the presumption is that when the outside air is below the base temperature, heat flow from the building will be proportional to the deficit in degrees. Similar considerations apply to cooling load, but for simplicity this article deals only with heating.

In UK practice, free published degree-day data have traditionally been calculated against a default base temperature of 15.5°C (60°F). However, this is unlikely to be truly reflective of modern buildings and the ready availability of degree-day data to alternative base temperatures makes it possible to be more accurate. But how does one identify the correct base temperature?

The first step is to understand the effect of getting the base temperature wrong. Perhaps the most common symptom is the negative intercept that can be seen in Figure 1, which plots consumption against degree days. This is what most often alerts you to a problem:

Figure 1: the classic symptom

It should be evident that in Figure 1 we are trying to fit a straight line to what is actually a curved characteristic. The shape of the curve depends on whether the base temperature was too low or too high, and Figure 2 shows the same consumptions plotted against degree days computed to three different base temperatures: one too high (as Figure 1), one too low, and one just right.

Figure 2: the effect of varying base temperature

Notice in Figure 2 that the characteristics are only curved near the origin. They are parallel at their right-hand ends, that is to say, in weeks when the outside air temperature never went above the base temperature. The gradients of the straight sections are all the same, including of course the case where the base temperature was appropriate. This is significant because, although in real life we only have the distorted view represented by Figure 1, we now know that the gradient of its straight section is equal to the true gradient of the correct line.

So let’s revert to our original scenario: the case where we had a single line whose base temperature was too high. Figure 3 shows that a projection of the straight segment of the line intersects the vertical axis at -1000 kWh per week, well below the true position, which from Figure 1 we can judge to be around 500 kWh per week. The gradient of the straight section, incidentally, is 45 kWh per degree day.

Figure 3: correct gradient but wrong intercept

To correct the distortion we need to shift the line in Figure 3 to the left by a certain number of degree days so that it ends up looking like Figure 4 below. The change in intercept we are aiming for is 1,500 kWh (the difference between the apparent intercept of -1000 and the true intercept, 500*). We can work out how far left to move the line by dividing the required change in the intercept by the gradient: 1500/45 = 33.3 degree days. Given that the degree-day figures are calculated over a 7-day interval, the required change in base temperature is 33.3/7 = 4.8 degrees.

Figure 4: degree-day values moved leftwards by lowering the base temperature

Note that only the points in the straight section moved the whole distance to the left: in the curved sections, the further left the point originally sits, the less it moves. This can best be visualised by looking again at Figure 2.

In more general terms the base-temperature adjustment is given by (Ct − Ca) / (m × t), where:

Ct is the true intercept;
Ca is the apparent intercept when projecting the straight portion of the distorted characteristic;
m is the gradient of that straight portion; and
t is the observing-interval length in days.
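Expressed as a function and checked against the worked example above:

```python
# Base-temperature adjustment: (Ct - Ca) / (m * t), using the symbols
# defined above.
def base_temp_adjustment(c_true, c_apparent, gradient, interval_days):
    """Degrees by which to lower the degree-day base temperature."""
    return (c_true - c_apparent) / (gradient * interval_days)

# Worked example: true intercept 500 kWh/week, apparent intercept -1000
# kWh/week, gradient 45 kWh per degree day, weekly (7-day) data.
print(base_temp_adjustment(500, -1000, 45, 7))   # about 4.8 degrees
```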


* The intercept could be judged or estimated by a variety of methods: empirically, by averaging the consumption in non-heating weeks; by ‘eyeball’; or by fitting a curved regression line.