
Advanced benchmarking of building heating systems

The traditional way to compare buildings’ fuel consumptions is to use annual kWh per square metre. When they are in the same city, evaluated over the same interval, and just being compared with each other, there is no need for any normalisation. So it was with “Office S” and “Office T” which I recently evaluated. I found that Office S uses 65 kWh per square metre and Office T nearly double that. Part of the difference is that Office T is an older building; and it is open all day Saturday and Sunday morning, not just five days a week. But desktop analysis of consumption patterns showed that Office T also has considerable scope to reduce its demand through improved control settings.

Two techniques were used for the comparison. The first is to look at the relationship between weekly gas consumption and the weather (expressed as heating degree days).

The chart on the right shows the characteristic for Office S. Although the correlation is not perfect, the relationship is a rational one.

Office T, by contrast, has a quite anomalous relationship which actually looks like two different behaviours: a high one during the heating season and another in milder weather.

The difference in the way the two heating systems behave can be seen by examining their half-hourly consumption patterns. These are shown below using ‘heat map’ visualisations for the period 3 September to 10 November, i.e., spanning the transition from summer to winter weather. In an energy heat map each vertical stripe is one day, running midnight to midnight GMT from top to bottom, and each cell represents half an hour. First, Office S. You can see its daytime load progressively becoming heavier as the heating season progresses:

Compare Office T, below. It has some low background consumption (for hot water) but note how, after its heating system is brought into service at about 09:00 on 3 October, it abruptly starts using fuel at similar levels every day:

Office T displays classic signs of mild-weather overheating, symptomatic of faulty heating control. It was no surprise to find that its heating system uses radiators with weather compensation and no local thermostatic control. In all likelihood the compensation slope has been set too shallow – a common and easily-rectified failing.

By the way, although it does not represent major energy waste, note how the hot water system evidently comes on at 3 in the morning and runs until after midnight seven days a week.

This case history showcases two of the advanced benchmarking techniques that will be covered in my lunchtime lecture in Birmingham on 23 February 2017 (click here for more details).

Air-compressor benchmarking

In energy-intensive manufacturing processes there is a need to benchmark production units against each other and against yardstick figures. Conventional wisdom has it that you should compare specific energy ratios (SER), of which kWh per gross tonne is one common example. It seems simple and obvious but, as anybody who has tried it will know, it does not really work, because a simple SER varies with output and this clouds the picture.

To illustrate the problem and to suggest a solution, this article picks some of the highlights from a pilot exercise to benchmark air compressors. These are the perfect thing for the purpose not least because they are universally used and obey fairly straightforward physical laws. Furthermore, because they are all making a similar product from the same raw material, they should in principle be highly comparable with each other.

Various conventions are used for expressing compressors’ SERs but I will use kWh per cubic metre of free air. From the literature on the subject you might expect a given compressor’s SER to fall in the range 0.09 to 0.14 kWh/m3 (typically). Lower SER values are taken to represent better performance.

The drawback of the SER approach is that some compressor installations, like any energy-intensive process, have a certain fixed standing load independent of output. The compressor installation in Figure 1, for example, has a standing load of 161 kWh per day, and this has a distorting effect: if you divide kWh by output at an output of 9,000 m3 you find the SER is just under 0.12 kWh/m3, but at a low daily output, say 4,000 m3, you get 0.14 kWh/m3. The fixed consumption makes performance look more variable than it really is, and changes in throughput change the SER, whereas in reality, with a small number of obvious exceptions, the performance of this particular compressor looks quite consistent.
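The distortion is easy to demonstrate in a few lines of Python. A minimal sketch, assuming the standing load of 161 kWh per day quoted above and a marginal rate of about 0.097 kWh/m3 (my assumed value, chosen to reproduce the SERs quoted in the text):

```python
def simple_ser(output_m3, standing_kwh=161.0, marginal_kwh_per_m3=0.097):
    """Naive specific energy ratio: total daily kWh divided by daily output,
    taking no account of the fixed standing load."""
    total_kwh = standing_kwh + marginal_kwh_per_m3 * output_m3
    return total_kwh / output_m3

for output in (9000, 4000):
    print(f"{output:>5} m3/day -> SER {simple_ser(output):.3f} kWh/m3")
# 9000 m3/day -> SER 0.115 kWh/m3
# 4000 m3/day -> SER 0.137 kWh/m3
```

The underlying performance is identical on both days; only the throughput has changed, yet the simple SER moves by nearly 20%.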

Figure 1

When I say it looks consistent I mean that consumption has a consistent straight-line relationship with output. The gradient of the best-fit straight line does not change across the normal operating range: it is said to be a ‘parameter’. In parametric benchmarking we compare compressors’ marginal SERs, that is, the gradients of their energy-versus-output scatter diagrams. The other parameter that we might be interested in is the standing load, i.e., where the diagonal characteristic crosses the vertical (kWh) axis.
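In practice the two parameters drop out of an ordinary least-squares fit of kWh against output. A sketch with invented daily readings (any real analysis would of course use the installation’s own meter data; the 161 kWh and 0.097 kWh/m3 here are illustrative values echoing Figure 1):

```python
def fit_line(outputs, kwh):
    """Least-squares straight line: returns (marginal SER, standing load)."""
    n = len(outputs)
    mx = sum(outputs) / n
    my = sum(kwh) / n
    sxx = sum((x - mx) ** 2 for x in outputs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(outputs, kwh))
    gradient = sxy / sxx            # marginal SER, kWh per m3
    intercept = my - gradient * mx  # standing load, kWh per day
    return gradient, intercept

# Illustrative daily readings: 161 kWh/day standing load plus 0.097 kWh/m3
outputs = [4000, 5500, 7000, 8200, 9000]
kwh = [161 + 0.097 * x for x in outputs]
m, c = fit_line(outputs, kwh)
print(f"marginal SER {m:.3f} kWh/m3, standing load {c:.0f} kWh/day")
# marginal SER 0.097 kWh/m3, standing load 161 kWh/day
```

The gradient and intercept recovered by the fit are exactly the two parameters used in the benchmarking table below.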

The compressor installation in Figure 1 is one of eight that I compared in a pilot study whose results were as follows:

===================================
Case   Marginal SER   Standing load
No     (kWh/m3)       (kWh per day)
-----------------------------------
 8        0.085            115
 5        0.090             62
 1        0.092          3,062
 2        0.097            161
 7        0.105             58
 6        0.124             79
 3        0.161            698
===================================

As you can see, the marginal SERs are mainly fairly comparable and may prove to be more so once we have taken proper account of inlet temperatures and delivery pressures. But their standing kWh per day are wildly different. It makes little sense to try comparing the standing loads. In part they are a function of the scale of the installation (Case 1 is huge) but also the metering may be such that unrelated constant-ish loads are contributing to the total. The variation in energy with variation in output is the key comparator.

In order to conduct this kind of analysis one needs frequent meter readings, and the installations in the pilot study were analysed using either daily or weekly figures (although some participants provided minute-by-minute records). Rich data like this can be filtered using cusum analysis to identify inconsistencies. In Case 3, for example, although there is no space to go into the specifics here, we found that performance tended to change dramatically from time to time, and the marginal SER quoted in the table is the best that was consistently achieved.

Case 7 was found to toggle between two different characteristics depending on its loading: see Figure 2. At higher outputs its marginal SER rose to 0.134 kWh/m3, reflecting the relatively worse performance of the compressors brought into service to match higher loads.

Figure 2

In Case 8, meanwhile, the compressor plant changed performance abruptly at the start of June 2016. Figure 3 compares performance in May with that on working days in June, and we obtained the following explanation. The plant consists of three compressors. No.1 is a 37 kW variable-speed machine which takes the lead, while Nos 2 and 3 are identical fixed-speed machines, also of 37 kW rating. Normally No.2 takes the load when demand is high, but during June they had to use No.3 instead, and the result was a fixed additional consumption of 130 kWh per day. The only plausible explanation is that No.3 leaks 63 m3 per day before the meter, quite possibly internally because of defective seals or non-return valves. Enquiries with the owner revealed that they had indeed been skimping on maintenance, and they have now had a quote to have the machines overhauled with an efficiency guarantee.

Figure 3

This last case is one of three where we found variations in performance through time on a given installation and were able to isolate the period of best performance. It improves a benchmarking exercise if one can focus on best achievable, rather than average, performance; this is impossible with the traditional SER approach, as is the elimination of rogue data. Nearly all the pilot cases were found to include clear outliers which would have contaminated a simple SER.

Deliberately excluding fixed overhead consumption from the analysis has two significant benefits:

  • It enables us to compare installations of vastly differing sizes, and
  • it means we can tolerate unrelated equipment sharing the meter as long as its contribution to demand is reasonably constant.

Meaningless claims

MEANINGLESS CLAIMS No. 9,461

Seen in a product brochure for a control system: “The theory states that if you allow the indoor temp to vary by 8ºC in a commercial or public building the heat saving will be 80%. In practice a span of 3-4ºC is usually more realistic (20-24ºC is common) resulting in heat savings of 20-40%. The use of a temperature range does not mean that the indoor temperature will change 3-4ºC over 24h, the average change in indoor temp over 24h is less than 1ºC, which is enough to utilise thermal storage. If no range is allowed, none of the excess free or purchased energy can be stored in the building.”

MEANINGLESS CLAIMS No. 9,462

I recently reported the new fashion for describing boiler-water additives as ‘organic’ to make them sound benign. As I pointed out, cyanide is an organic compound. Now here’s a new twist: a report on the efficacy of a certain boiler water additive says “[it] is 100% organic so the embodied carbon is 0.58kg of CO2 per bottle”. Er… How do they figure that?

MEANINGLESS CLAIMS No. 9,463

The same report cited another which said that a certain programme of domestic energy-conservation refits had yielded “up to a 42% increase in living room temperature”. Cold comfort indeed if your room started at zero degrees Celsius; 42% of zero is zero. Oh wait: what if you had used Fahrenheit, where freezing point is 32°F? A 42% increase on 32°F gives you 45.4°F (7.5°C). So it depends what temperature scale you use, and the truth is you can only talk about a percentage increase in temperature relative to absolute zero (-273°C). If we start at an absolute 273K (0°C), a 42% increase takes us to 388K or 115°C. To be honest, that doesn’t sound too comfortable either.
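For anyone who wants to check the arithmetic, here it is in Python:

```python
def pct_increase(value, pct=42.0):
    """Apply a percentage increase to a number on whatever scale it is given."""
    return value * (1 + pct / 100)

f = pct_increase(32)           # 42% up from freezing point in Fahrenheit
print(f, (f - 32) * 5 / 9)     # 45.44 F, i.e. about 7.5 C

k = pct_increase(273)          # 42% up from 0 C measured on the absolute scale
print(k, k - 273)              # about 388 K, i.e. about 115 C
```

The same claimed percentage yields completely different temperatures depending on the scale, which is precisely why the claim is meaningless.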

Refrigeration nonsense

The vapour-compression cycle at the heart of most air-conditioning systems consists of a closed loop of volatile fluid. In the diagram below the fluid in vapour form at (1) is compressed, which raises its temperature (2), after which it passes through a heat exchanger (the “condenser”) where it is cooled by water or ambient air. At (3) it reaches its dewpoint temperature and condenses, changing back to liquid (4). The liquid passes through an expansion valve. The abrupt drop in pressure causes a drop in temperature as some of the fluid turns to vapour: the resulting cold liquid/vapour mixture passes through a heat exchanger (the “evaporator”) picking up heat from the space and turning back to vapour (1).

Figure 1: the vapour-compression refrigeration cycle schematically and on a temperature-entropy diagram

The condenser has two jobs to do. It needs to dump latent heat (3->4) but first it must dump sensible heat just to reduce the vapour’s temperature to its dewpoint. This is referred to as removing superheat.

It has been claimed that it is possible to improve the efficiency of this process by injecting heat between the compressor and condenser (for example by using a solar panel). Could this work?

Figure 2: showing the effect of injecting heat

The claim is based on the idea that injecting heat reduces the power drawn by the compressor. It is an interesting claim because it contains a grain of truth, but there is a catch: the drop in power would be inextricably linked to a drop in the cooling capacity of the apparatus. This is because we have now superheated the vapour even more than before, so the condenser now needs to dump more sensible heat. This reduces its capacity to dump latent heat. The evaporator can only absorb as much latent heat as the condenser can reject: if the latter is reduced, so is the former. Any observed reduction in compressor power is the consequence of the cooling capacity being constrained.

The final nail in the coffin of this idea is that reduced power is not the same as reduced energy consumption: the compressor will need to run for longer to pump out the same amount of heat. Thus there is no kWh saving, whatever the testimonials may say.

View a vendor’s response

Fifty years of degree-day data

Another exhibit for the Museum of Energy Management: thanks to David Bridger for unearthing UK monthly degree-day data for the period 1966 to 1975 (view the complete archive file here).

These data have mainly curiosity value and should not be relied upon for any kind of trend analysis. Observing stations have sometimes been moved or affected by nearby building development and urban expansion. Region 18 (North West Scotland) was not included at all until I launched the Degree Days Direct subscription service in 1992, and there had been two other main data providers before I got the contract to supply the official figures in 2003, so it would be risky to assume continuity over the whole fifty years.

Below: degree-day figures reported in the government’s “Energy Management” newspaper in 1986


Effect of voltage on motor efficiency

Proponents of voltage reduction (“optimisation” as they like to call it) have started suggesting that equipment is more energy-efficient at lower voltage. In fact this is quite often not the case. For an electric motor, this diagram shows how various aspects of energy performance vary as you deviate from its nominal voltage. The red line shows that peak efficiency occurs, if anything, at slightly above rated voltage.

motor_voltage_characteristics

Reduced voltage is associated with reduced efficiency. The reason is that to deliver the same output at lower voltage, the motor will need to draw a higher current, and that increases its resistive losses.
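The underlying arithmetic is simple: for a fixed mechanical output the motor draws roughly the same power, so current rises in inverse proportion to voltage, and resistive (copper) losses rise with the square of the current. A back-of-envelope sketch, using round numbers and deliberately ignoring changes in iron losses and power factor:

```python
def copper_loss_ratio(voltage_fraction):
    """Relative change in I^2*R (copper) losses when supply voltage falls to
    voltage_fraction of nominal, assuming the motor still delivers the same
    output and therefore draws roughly the same power (current rises as 1/V)."""
    current_ratio = 1.0 / voltage_fraction
    return current_ratio ** 2

# A 6% voltage reduction raises copper losses by about 13%
print(f"{copper_loss_ratio(0.94):.3f}")  # prints 1.132
```

This is only one component of total motor losses, but it illustrates why reduced voltage pushes efficiency the “wrong” way for a loaded motor.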

On-off testing to prove energy savings

When you install an energy-saving measure, it is important to evaluate its effect objectively. In the majority of cases this will be achieved by a “before-and-after” analysis making due allowance for the effect of the weather or other factors that cause known variations.

There are, however, some types of energy-saving device which can be temporarily bypassed or disabled at will, and for these it may be possible to do interleaved on-off tests. The idea is that by averaging out the ‘on’ and ‘off’ consumptions you can get a fair estimate of the effect of having the device enabled. The distorting effect of any hard-to-quantify external influences—such as solar gain or levels of business activity—should tend to average out.

A concrete example may help. Here is a set of weekly kWh consumptions for a building where a certain device had been fitted to the mains supply, with the promise of 10% reductions. The device could easily be disconnected and was removed on alternate weeks:

Week	kWh	Device?
----    -----   -------
1	241.8	without
2	223.0	with
3	221.4	without
4	196.4	with
5	200.1	without
6	189.6	with
7	201.9	without
8	181.3	with
9	185.0	without
10	208.5	with
11	181.7	without
12	188.3	with
13	172.3	without
14	180.4	with

The mean of the even-numbered weeks, when the device was active, is 195.4 kWh compared with 200.6 kWh in weeks when it was disconnected, giving an apparent saving on average of 5.2 kWh per week. This is much less than the promised ten percent but there is a bigger problem. If you look at the figures you will see that the “with” and “without” weeks both have a spread of values, and their ranges overlap. The degree of spread can be quantified through a statistical measure called the standard deviation, which in this case works out at 19.7 kWh per week. I will not go into detail beyond pointing out that it means that about two-thirds of measurements in this case can be expected to fall within a band of ±19.7 kWh of the mean purely by chance. Measured against that yardstick, the 5.2 kWh apparent saving is clearly not statistically significant and the test therefore failed to prove that the device had any effect (as a footnote, when the analysis was repeated taking into account sensitivity to the weather, the conclusion was that the device apparently increased consumption).
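The statistics above can be reproduced in a few lines of Python, using the weekly figures from the table (the standard deviation is the sample standard deviation across all fourteen weeks):

```python
with_device = [223.0, 196.4, 189.6, 181.3, 208.5, 188.3, 180.4]     # even weeks
without_device = [241.8, 221.4, 200.1, 201.9, 185.0, 181.7, 172.3]  # odd weeks

mean_with = sum(with_device) / len(with_device)
mean_without = sum(without_device) / len(without_device)

# Sample standard deviation of all fourteen weekly readings
all_weeks = with_device + without_device
mean_all = sum(all_weeks) / len(all_weeks)
sd = (sum((x - mean_all) ** 2 for x in all_weeks) / (len(all_weeks) - 1)) ** 0.5

print(f"with: {mean_with:.1f}  without: {mean_without:.1f}  "
      f"apparent saving: {mean_without - mean_with:.1f}  sd: {sd:.2f}")
# with: 195.4  without: 200.6  apparent saving: 5.2  sd: 19.75
```

An apparent saving of 5.2 kWh per week against a spread of nearly 20 kWh (the 19.7 quoted above) is well inside the noise.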

When contemplating tests of this sort it is important to choose the length of on-off interval carefully. In the case cited, a weekly interval was used because the building had weekend/weekday differences. A daily cycle would also be inappropriate for monitoring heating efficiency in some buildings because of the effect of heat storage in the building fabric: a shortfall in heat input one day might be made up the next. Particular care is always needed where a device which reduces energy input might result in a shortfall in output which then has to be made up in the following interval when it is disconnected. This will notably tend to happen with voltage reduction in electric heating applications. During the low-voltage interval the heaters will run at lower output, and this may result in a heat deficit being ‘exported’ to the succeeding high-voltage period, when additional energy will need to be consumed to make up the shortfall, making the high-voltage interval look worse than the low-voltage one. To minimise this distortion, be sure to set the interval length several times longer than the typical equipment cycle time.

Otherwise there are perhaps two other stipulations to add. Firstly, the number of ‘on’ and ‘off’ cycles should be equal; secondly, although there is no objection to omitting an interval for reasons beyond the control of either party (such as metering failure) it could be prudent to insist that intervals are omitted only in pairs, and that tests should always recommence consistently in either the ‘off’ or ‘on’ state. This is to avoid the risk of skewing the results by selectively removing individual samples.

The meaning of R-squared

In statistical analysis the coefficient of determination (more commonly known as R2) is a measure of how well variation in one variable explains the variation in something else, for instance how well the variation in hours of darkness explains variation in electricity consumption of yard lighting.

R2 varies between zero, meaning there is no effect, and 1.0, which would signify total correlation between the two with no error. It is commonly held that higher R2 is better, and you will often see a value of (say) 0.9 stated as the threshold below which you cannot trust the relationship. But that is nonsense, and one reason can be seen from the diagrams below, which show how, for two different objects, energy consumption on the vertical or y axis might relate to a particular driving factor or independent variable on the horizontal or x axis.

r2_vs_CV(RMSE)

In both cases, the relationship between consumption and its driving factor is imperfect. But the data were arranged to have exactly the same degree of dispersion. This is shown by the CV(RMSE) value, which is the root mean square deviation expressed as a percentage of the average consumption. R2 is 0.96 (so-called “good”) in one case but only 0.10 (“bad”) in the other. But why would we regard the right-hand model as worse than the left? If we were to use either model to predict expected consumption, the absolute error in the estimates would be the same.
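The effect is easy to reproduce with synthetic data. In this sketch (invented numbers throughout) two models share exactly the same noise; only the range of the driving factor differs, so the fitted residuals, and hence the absolute prediction error, are identical while R2 collapses:

```python
import random

def fit_stats(x, y):
    """Least-squares straight line; returns (R-squared, RMSE)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    grad = sxy / sxx
    icpt = my - grad * mx
    ss_res = sum((yi - (grad * xi + icpt)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5

random.seed(1)
noise = [random.gauss(0, 5) for _ in range(26)]

x_wide = [10 * i for i in range(26)]     # driving factor covers a wide range
x_narrow = [0.2 * i for i in range(26)]  # driving factor barely moves

y_wide = [100 + 2 * x + e for x, e in zip(x_wide, noise)]
y_narrow = [100 + 2 * x + e for x, e in zip(x_narrow, noise)]

print(fit_stats(x_wide, y_wide))      # R2 close to 1.0
print(fit_stats(x_narrow, y_narrow))  # much lower R2, identical RMSE
```

Because the two driving-factor series are proportional, the fit removes exactly the same component of the noise in both cases and the RMSEs come out identical; only the spread of the driving factor, and with it R2, changes.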

By the way, if anyone ever asks how to get R2 = 1.0 the answer is simple: use only two data points. By definition, the two points will lie exactly on the best-fit line through them!

Another common misconception is that a low value of R2 in the case of heating fuel signifies poor control of the building. This is not a safe assumption. Try this thought experiment. Suppose that a building’s fuel consumption is being monitored against locally-measured degree days. You can expect a linear relationship with a certain R2 value. Now suppose that the local weather monitoring fails and you switch to using published degree-day figures from a meteorological station 35 km away. The error in the driving factor data caused by using remote weather observations will reduce R2 because the estimates of expected consumption are less accurate; more of the apparent variation in consumption will be attributable to error and less to the measured degree days. Does the reduced R2 signify worse control? No; the building’s performance hasn’t changed.

Footnote: for a deeper, informative and highly readable treatment of this subject see this excellent paper by Mark Stetz. 

Degree-day base temperature

When considering the consumption of fuel for space heating, the degree-day base temperature is the outside air temperature above which heating is not required, and the presumption is that when the outside air is below the base temperature, heat flow from the building will be proportional to the deficit in degrees. Similar considerations apply to cooling load, but for simplicity this article deals only with heating.

In UK practice, free published degree-day data have traditionally been calculated against a default base temperature of 15.5°C (60°F). However, this is unlikely to be truly reflective of modern buildings and the ready availability of degree-day data to alternative base temperatures makes it possible to be more accurate. But how does one identify the correct base temperature?

The first step is to understand the effect of getting the base temperature wrong. Perhaps the most common symptom is the negative intercept that can be seen in Figure 1, which plots weekly consumption against degree days. This is what most often alerts you to a problem:

Figure 1: the classic symptom

It should be evident that in Figure 1 we are trying to fit a straight line to what is actually a curved characteristic. The shape of the curve depends on whether the base temperature was too low or too high, and Figure 2 shows the same consumptions plotted against degree days computed to three different base temperatures: one too high (as Figure 1), one too low, and one just right.

Figure 2: the effect of varying base temperature

Notice in Figure 2 that the characteristics are only curved near the origin. They are parallel at their right-hand ends, that is to say, in weeks when the outside air temperature never went above the base temperature. The gradients of the straight sections are all the same, including of course the case where the base temperature was appropriate. This is significant because, although in real life we only have the distorted view represented by Figure 1, we now know that the gradient of its straight section is equal to the true gradient of the correct line.

So let’s revert to our original scenario: the case where we had a single line where the base temperature was too high. Figure 3 shows that a projection of the straight segment of the line intersects the vertical axis at -1000 kWh per week, well below the true position, which from Figure 1 we can judge to be around 500 kWh per week. The gradient of the straight section, incidentally, is 45 kWh per degree day.

Figure 3: correct gradient but wrong intercept

To correct the distortion we need to shift the line in Figure 3 to the left by a certain number of degree days so that it ends up looking like Figure 4 below. The change in intercept we are aiming for is 1,500 kWh (the difference between the apparent intercept of -1,000 and the true intercept of 500*). We can work out how far left to move the line by dividing the required change in the intercept by the gradient: 1500/45 = 33.3 degree days. Given that the degree-day figures are calculated over a 7-day interval, the required change in base temperature is 33.3/7 = 4.8 degrees.

Figure 4: degree-day values moved leftwards by lowering the base temperature

Note that only the points in the straight section moved the whole distance to the left: in the curved sections, the further left the point originally sits, the less it moves. This can best be visualised by looking again at Figure 2.

In more general terms the base-temperature adjustment is given by (Ct − Ca) / (m × t), where:

Ct is the true intercept;
Ca is the apparent intercept when projecting the straight portion of the distorted characteristic;
m is the gradient of that straight portion; and
t is the observing-interval length in days
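A quick numerical check of the adjustment, using the figures quoted above (weekly data, so t = 7):

```python
def base_temp_adjustment(true_intercept, apparent_intercept, gradient, interval_days):
    """Number of degrees by which to lower the degree-day base temperature."""
    return (true_intercept - apparent_intercept) / (gradient * interval_days)

adj = base_temp_adjustment(true_intercept=500,       # kWh/week, judged from Figure 1
                           apparent_intercept=-1000, # kWh/week, projected straight section
                           gradient=45,              # kWh per degree day
                           interval_days=7)
print(f"{adj:.1f} degrees")  # 4.8 degrees
```

A positive result means the base temperature should be lowered (as here); a negative one would mean it should be raised.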


* The intercept could be judged or estimated by a variety of methods: empirically, for example by averaging the consumption in non-heating weeks; by ‘eyeball’; or by fitting a curved regression line.

A new dark age?

Is this the worst energy dashboard ever?

The worst energy dashboard ever?

It’s an anonymised but accurate reconstruction of something I recently saw touted as an example of a ‘visual energy display’ suitable for a reception area. Apart from patently being an advertisement for an equipment supplier — name changed to protect the innocent (guilty?) — the only numerical information in the display is in small type against a background which makes it hard to read. Also, one might ask, “so what?”. There is no context. What proportion was 3.456 kWh? What were we aiming for? What is the trend?

There’s a bigger picture here: in energy reporting generally, system suppliers have descended into “content-lite” bling warfare (why do bar charts now have to bounce into view with a flourish?). And nearly always the displays are just passive and uncritical statements of quantities consumed. Anybody who wants to display energy information graphically should read Stephen Few’s book Information Dashboard Design . It is clear that almost no suppliers of energy monitoring systems have ever done so, but perhaps if their customers did, and became more discerning and demanding, we might see more useful information and less meaningless noise and clutter.