
“Science-based targets”: sounds good, means very little

WHEN I FIRST heard the term science-based target (SBT) bandied around in the public arena I thought “oh good – they are advocating a rational approach to energy management”. I thought they were promoting the idea that I always push, which is to compare your actual energy consumption against an expected quantity calculated, on a scientific basis, from the prevailing conditions of weather, production activity, or whatever other measurable factors drive variation in consumption.

How wrong I was. Firstly, SBTs are targets for emissions, not energy consumption;  and secondly a target is defined as ‘science-based’ if, to quote the Carbon Trust, “it is in line with the level of decarbonisation required to keep the global temperature increase below 2°C compared to pre-industrial temperatures”. I have three problems with all of this.

Firstly I have a problem with climate change. I believe it is real, of course; and I am sure that human activity, fuel use in particular, is the major cause. What I don’t agree with is using it as a motivator or to define goals. It is too remote, too big, and too abstract to be relevant to the individual enterprise. And it is too contentious. To mention climate change is to invite debate: to debate is to delay.

Secondly, global targets cannot be transcribed directly into local ones. If your global target is a reduction of x% and you set x% as the target for every user, you will fail because some people will be unable or unwilling to achieve a cut of x% while those who do achieve x% will stop when they have done so. In short there will be too few over-achievers to compensate for the laggards.

Finally I object to the focus on decarbonisation. Not that decarbonisation itself is valueless; quite the opposite. It is the risk that people prioritise decarbonisation of supply, rather than reduction of demand. If you decarbonise the supply to a wasteful operation, you have denied low-carbon energy to somebody somewhere who needed it for a useful purpose. We should always put energy saving first, and that is where effective monitoring and targeting, including rational comparisons of actual and expected consumption, has an essential part to play.

Monitoring external lighting

The diagram below shows the relationship, over the past year, between weekly electricity consumption and the number of hours of darkness per week for a surface car park. It is among the most consistent cases I have ever seen:

Figure 1: relationship between kWh and hours of darkness

There is a single outlier (caused by meter error).

Although both low daylight availability and cold weather occur in the winter, heating degree days cannot be used as the driving factor for daylight-linked loads.  Plotting the same consumption data against heating degree days gives a very poor correlation:

Figure 2: relationship between kWh and heating degree days

There are two reasons for the poor correlation. One is the erratic nature of the weather (compared with very regular variations in daylight availability) and the other is the phase difference of several weeks between the shortest days and the coldest weather. If we co-plot the data from Figure 2 as a time-series chart we see this illustrated perfectly. In Figure 3 the dots represent actual electricity consumption and the green trace shows what consumption was predicted by the best-fit relationship with heating degree days:

Figure 3: actual kWh compared with a weather-linked model of expected consumption

Compare Figure 3 with the daylight-linked model:

Figure 4: actual and expected kWh co-plotted using daylight-linked model
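For readers who want to try this sort of analysis on their own data, here is a minimal sketch in Python of how a daylight-linked model of expected consumption can be fitted by least squares. The numbers in it are invented placeholders, not the car-park data plotted above:

```python
# Minimal sketch of fitting a daylight-linked expected-consumption model.
# The figures below are invented placeholders -- substitute your own weekly
# kWh readings and the corresponding hours of darkness per week.
import numpy as np

hours_dark = np.array([60, 70, 85, 100, 110, 115, 110, 100, 85, 70, 60, 55])    # h/week
kwh        = np.array([310, 355, 430, 500, 545, 570, 550, 505, 425, 350, 305, 280])

slope, intercept = np.polyfit(hours_dark, kwh, 1)   # straight-line best fit
expected = intercept + slope * hours_dark           # expected consumption each week
deviation = kwh - expected                          # deviations worth investigating

r_squared = 1 - np.sum(deviation**2) / np.sum((kwh - kwh.mean())**2)
print(f"expected kWh = {intercept:.1f} + {slope:.2f} x hours of darkness (R2 = {r_squared:.3f})")
```

The same calculation can be repeated with heating degree days in place of hours of darkness to confirm how much weaker that correlation is, as in Figure 2.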

One significant finding (echoed in numerous other cases) is that it is not necessary to measure actual hours of darkness: standard weekly figures work perfectly well. It is evident that occasional overcast spells and variable cloud cover do not introduce perceptible errors. Moreover, figures for the UK appear to work acceptably at other latitudes: the case examined here is in northern Spain (41°N) but used my standard darkness-hour table for 52°N.

You can download my standard weekly and monthly hours-of-darkness tables here.
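For readers who would rather generate their own figures for a different latitude, a simple approximation based on the standard sunset-hour-angle formula gets close enough for monitoring purposes. The Python sketch below illustrates the idea; it is not necessarily how the downloadable tables were produced, and it ignores twilight and atmospheric refraction:

```python
# Approximate weekly hours of darkness for a given latitude, using the
# standard solar-declination and sunset-hour-angle formulae.
import math

def daylight_hours(day_of_year: int, latitude_deg: float) -> float:
    """Approximate hours of daylight on a given day of the year."""
    # Solar declination in degrees (simple cosine approximation)
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    cos_omega = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    cos_omega = max(-1.0, min(1.0, cos_omega))   # clamp for polar day/night
    omega = math.degrees(math.acos(cos_omega))   # sunset hour angle in degrees
    return 2.0 * omega / 15.0                    # the Earth rotates 15 degrees per hour

def weekly_darkness_hours(latitude_deg: float):
    """Hours of darkness per week, for 52 weeks starting 1 January."""
    darkness = [24.0 - daylight_hours(d, latitude_deg) for d in range(1, 366)]
    return [round(sum(darkness[i:i + 7]), 1) for i in range(0, 364, 7)]

print(weekly_darkness_hours(52.0)[:4])   # first four weeks at 52 degrees north
```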

This article is promoting my advanced energy monitoring and targeting workshop in Birmingham on 11 September.

ISO 50001: transition to 2018 edition

The story so far: ISO 50001 is an international standard which lays down a harmonised recommended method for managing energy. Published in 2011, it is analogous to ISO 14001, which covers environmental management, and ISO 9001, which covers quality management. Organisations can be certified to ISO 50001 to show that they have energy-management procedures which meet certain criteria.

At the time of writing, the original 2011 edition of ISO 50001 is due to be phased out and replaced with a new 2018 version. To help understand the differences, I have approached it from the point of view of the main topics that you or an auditor might explore when establishing compliance, and the questions that would be asked. I give the section references of both old (2011) and new (2018) editions, and where necessary there is a note of any material differences.


Review of ISO50001:2018

I ALWAYS THOUGHT that the diagrammatic representation of the “plan, do, check, act” cycle in ISO50001:2011 was a little strangely drawn (left-hand side of the picture below), although it does vaguely give the sense of a preparatory period followed by a repetitive cycle and occasional review. It turns out, though, that it was wrong all along, because in the 2018 version of the Standard, the final draft of which is available to buy in advance of publication in August, it seems to have been “corrected” (right-hand side). For my money the new version is less meaningful than the old one.

Spot any similarity?

ISO50001 has been revised not because there was much fundamentally wrong with the 2011 version but as a matter of standards policy: it and other management-system standards such as ISO9001 (quality) and ISO14001 (environment) have a lot in common and are all being rewritten to match a new common “High Level Structure” with identical core text and harmonized definitions. ISO50001’s requirements, with one exception, will remain broadly the same as they were in 2011.

It is just a pity that ISO50001:2018 fails in some respects to meet its own stated objective of clarity, and there is evidence of muddled thinking on the part of the authors. The PDCA diagram is a case in point. I see also, for example, that the text refers to relevant variables (i.e. driving factors such as degree days) affecting energy ‘performance’ whereas what they really affect is energy consumption. To take a trivial example, if you drive twice as many miles one week as another, your fuel consumption will be double but your fuel performance (expressed as miles per gallon) might well be the same. Mileage in this case is the relevant variable, but it is the consumption, not the performance, that it affects.

This wrong-headed view of ‘performance’ pervades the document, and looking in the definitions section of the Standard you can see why: to most of us, energy performance means the effectiveness with which energy is converted into useful output or service; ISO50001:2018, however, defines it as ‘measurable result(s) related to energy efficiency, energy use, and energy consumption’. I struggle to find practical meaning in that, and I suspect the drafting committee members themselves got confused by it.

Furthermore, the committee have ignored warnings about ambiguity in the way they use the term Energy Performance Indicator (EnPI). There are always two aspects to an EnPI: (a) the method by which it is calculated—what we might call the EnPI formulation—and (b) its numerical value at a given time. Where the new standard means the latter, it says so, and uses the phrase ‘EnPI value’ in such cases. However, when referring to the EnPI formulation, it unwisely expresses this merely as ‘EnPI’, which is open to misinterpretation by the unwary. For example Section 6.4, Energy Performance Indicators, says that the method for determining and updating the EnPI(s) shall be maintained as documented information. I bet a fair proportion of people will take the phrase ‘determining and updating the EnPI(s)’ to mean calculating their values. It does not. The absence of the word ‘values’ means that you should be determining and updating what EnPIs you use and how they are derived.

Failure to explicitly label EnPI ‘formulations’ as such has also led to an error in the text: section 9.1.1 bullet (a) (2) says that EnPIs need to be monitored and measured. That should obviously have said EnPI values.

The new version adds an explicit requirement to ‘demonstrate continual energy performance improvement’. No such explicit requirement appeared in the 2011 text but, since last year, thanks to the rules governing certification bodies, you cannot even be certified in the first place if you do not meet it. There was a lot of debate on this during consultation, but the new requirement survived even though it does not appear in the much-vaunted High Level Structure to which ISO50001 was supposedly being rewritten to conform. That being the case, it is paramount that users adopt energy performance indicators that accurately reflect progress. Simple ratio-based metrics like kWh/tonne (or, in data centres, Power Usage Effectiveness) are not fit for purpose, and their users risk losing their certification because EnPIs of that kind often give perverse results and may fail to reflect savings that have really been achieved.
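To illustrate the problem with simple ratios, consider a hypothetical site whose weekly consumption is a fixed overhead plus a genuinely production-related element. The sketch below uses invented figures purely for illustration:

```python
# Illustrative sketch (hypothetical figures) of why a ratio EnPI such as
# kWh/tonne can report a deterioration when nothing has actually gone wrong.
FIXED_KWH = 20_000      # weekly fixed load (lighting, standby losses, etc.)
KWH_PER_TONNE = 50      # genuinely variable energy per tonne of product

def expected_kwh(tonnes: float) -> float:
    return FIXED_KWH + KWH_PER_TONNE * tonnes

for tonnes in (1_000, 500):
    kwh = expected_kwh(tonnes)      # consumption exactly as expected, no waste
    print(f"{tonnes:>5} t: {kwh:>6.0f} kWh  ->  {kwh / tonnes:.1f} kWh/tonne")

# Output: 70.0 kWh/tonne at full output but 90.0 kWh/tonne at half output,
# even though consumption matched expectation in both weeks. A comparison of
# actual against expected consumption would correctly show no deviation.
```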

On a positive note, the new version of the Standard retains the requirement to compare actual and expected energy consumption, and to investigate significant deviations in energy performance. These requirements are actually central to effective ongoing energy management. Moreover, a proper understanding of how to calculate expected consumption is the key to the computation of accurate EnPIs, making it a mission-critical concept for anyone wanting to keep their certification.

This article is promoting my training courses on energy monitoring and targeting which include (a) the use of consumption data to detect hidden waste and (b) how to derive meaningful energy performance indicators.

This review is based on the Final Draft of ISO50001:2018 which has been released on sale prior to formal publication in August 2018.

Magnets: mutual repulsion

One ironic and highly satisfying way to debunk the claims for magnetic fuel conditioning is to pitch one supplier against another. I have been digging in the archive for claims made by different suppliers, and with assistance from eagle-eyed newsletter reader Mark J., have compiled the following account. Let’s start with Magnatech. Their web site makes a bald assertion that passing fuel through a magnet’s negative and positive (sic) fields makes it easier for the fuel to bond with oxygen and burn. They offer no explanation of how this works but say it creates a rise in flame temperature of “an extra 120°C or more”.  However, their competitor Maximus Green says that the flame temperature only rises by 20°C, but they gamely have a crack at explaining how: they claim that hydrocarbon fuel molecules clump together in large “associations” because they are randomly charged positive and negative (although even if that were true, wouldn’t they just pair up?). Passing through a magnetic field, they say, gives all the molecules a positive charge, breaking up these supposed big clusters of fuel molecules. They don’t say where all the resulting spare electrons go.

Or at least that’s what Maximus Green used to say. In a recent (unsuccessful) submission to the Advertising Standards Authority they offered a completely different story. Quoted in the ASA ruling they said that “the hydrogen and carbon compound of gas and oil had two distinct isometric (sic) forms – ‘Ortho-state’ and ‘Para-state’ – which were characterised by different, opposite nucleus spins. The Ortho-state was more unstable and reactive in comparison to the Para-state, and therefore that state was desired because it resulted in a higher rate of combustion. They said that when fuel passed through the magnetic field the hydrocarbon molecule changed from the para-hydrogen state to the ortho-hydrogen state, and that the higher energised spin state of the ortho-hydrogen molecules produced high electrical potential (reactivity), which attracted additional oxygen and therefore increased combustion efficiency”.

Another player, Maxsys, are having none of this ionised oil, lumpy gas or nuclear-spin stuff. Their 2014 brochure lays the blame on very fine dust in the fuel. By applying a magnetic field, they say “nanoparticles that would normally pass through the combustion or reduce heat transfer efficiency, by clinging to and fouling surfaces, begin to cluster together”, an effect which forms “larger colloids, less likely to create a film deposit and compromise a plant’s performance”. Now pardon my scientific knowledge, but a “colloid” is a stable suspension of very fine particles in a liquid. Milk is a good example. Be that as it may, Maxsys are saying that magnetic fields cause things to clump together, in direct contradiction to what we heard earlier from Maximus Green in one of their versions of how magnetism supposedly works.

Someone is telling porkies and I will leave it to you, dear reader, to work out who.

Footnote: an independent test of the efficacy of magnets on fuel lines was carried out by Exeter University in 1997. Their report, which strangely is never quoted by vendors, can be downloaded here.

Errors in solid-state electricity meters

Recent press reports suggest that some types of electricity meter (including so-called ‘smart’ meters) are susceptible to gross errors when feeding low-energy lamps, variable-speed drives and other equipment that generates electromagnetic interference.

According to an investigation and review by metering expert Kris Szajdzicki, such measurement errors do occur and their magnitude depends upon the current-sensing technology used by the meter, although the effect may be negligible in normal situations in the domestic market. However, potential for gross error remains in unfavourable circumstances, particularly in industrial or commercial installations or where there is deliberate intent to fool the meter.

Kris has made his assessment available here.

On-off testing to prove energy savings

When you install an energy-saving measure, it is important to evaluate its effect objectively. In the majority of cases this will be achieved by a “before-and-after” analysis making due allowance for the effect of the weather or other factors that cause known variations.

There are, however, some types of energy-saving device which can be temporarily bypassed or disabled at will, and for these it may be possible to do interleaved on-off tests. The idea is that by averaging out the ‘on’ and ‘off’ consumptions you can get a fair estimate of the effect of having the device enabled. The distorting effect of any hard-to-quantify external influences—such as solar gain or levels of business activity—should tend to average out.

A concrete example may help. Here is a set of weekly kWh consumptions for a building where a certain device had been fitted to the mains supply, with the promise of 10% reductions. The device could easily be disconnected and was removed on alternate weeks:

Week	kWh	Device?
----    -----   -------
1	241.8	without
2	223.0	with
3	221.4	without
4	196.4	with
5	200.1	without
6	189.6	with
7	201.9	without
8	181.3	with
9	185.0	without
10	208.5	with
11	181.7	without
12	188.3	with
13	172.3	without
14	180.4	with

The mean of the even-numbered weeks, when the device was active, is 195.4 kWh compared with 200.6 kWh in weeks when it was disconnected, giving an apparent saving on average of 5.2 kWh per week. This is much less than the promised ten percent, but there is a bigger problem.

If you look at the figures you will see that the “with” and “without” weeks both have a spread of values, and their ranges overlap. The degree of spread can be quantified through a statistical measure called the standard deviation, which in this case works out at 19.7 kWh per week. I will not go into detail beyond pointing out that it means that about two-thirds of measurements in this case can be expected to fall within a band of ±19.7 kWh of the mean purely by chance. Measured against that yardstick, the 5.2 kWh apparent saving is clearly not statistically significant, and the test therefore failed to prove that the device had any effect. (As a footnote, when the analysis was repeated taking into account sensitivity to the weather, the conclusion was that the device apparently increased consumption.)
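For anyone who wants to reproduce the arithmetic, here is a minimal sketch using Python’s statistics module and the weekly figures from the table above:

```python
# Means, apparent saving and spread for the interleaved on-off test above.
from statistics import mean, stdev

without_device = [241.8, 221.4, 200.1, 201.9, 185.0, 181.7, 172.3]   # odd weeks, device off
with_device    = [223.0, 196.4, 189.6, 181.3, 208.5, 188.3, 180.4]   # even weeks, device on

apparent_saving = mean(without_device) - mean(with_device)   # average weekly difference
spread = stdev(without_device + with_device)                 # sample standard deviation, all 14 weeks

print(f"mean without: {mean(without_device):.1f} kWh, mean with: {mean(with_device):.1f} kWh")
print(f"apparent saving: {apparent_saving:.1f} kWh/week; spread: {spread:.1f} kWh")
# The apparent saving (about 5 kWh/week) is small compared with the
# week-to-week spread (nearly 20 kWh), so it is not statistically significant.
```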

When contemplating tests of this sort it is important to choose the length of the on-off interval carefully. In the case cited, a weekly interval was used because the building had weekend/weekday differences. A daily cycle would also be inappropriate for monitoring heating efficiency in some buildings because of the effect of heat storage in the building fabric: a shortfall in heat input one day might be made up the next. Particular care is always needed where a device which reduces energy input might result in a shortfall in output which then has to be made up in the following interval, when the device is disconnected. This will notably tend to happen with voltage reduction in electric heating applications. During the low-voltage interval the heaters will run at lower output, and this may result in a heat deficit being ‘exported’ to the succeeding high-voltage period, when additional energy will need to be consumed to make up the shortfall, making the high-voltage interval look worse than the low-voltage one. To minimise this distortion, be sure to set the interval length several times longer than the typical equipment cycle time.

Finally, there are perhaps two other stipulations to add. Firstly, the number of ‘on’ and ‘off’ cycles should be equal; secondly, although there is no objection to omitting an interval for reasons beyond the control of either party (such as metering failure), it could be prudent to insist that intervals are omitted only in pairs, and that tests always recommence consistently in either the ‘off’ or ‘on’ state. This is to avoid the risk of skewing the results by selectively removing individual samples.