Category Archives: Uncategorized

LED versus metal halide lamps

Clare C., a regular reader of my energy-management bulletins, was perplexed when she started researching the cost advantages of LEDs as replacements for metal halide (MH) high-bay fittings. She discovered that MH lamps have luminous efficacies very similar to LEDs, with both, broadly speaking, yielding about 100 lumens per watt. Certainly she wasn't going to get the 50% saving she was after, and she asked my opinion.

There are a couple of factors that would tip the balance in favour of LEDs. Firstly, she needed to account for the fact that unlike LEDs, MH lamps need control gear which would add some parasitic load (say 20 watts on a 400-watt lamp).  Secondly, LEDs are more directional and can deliver all their output more effectively to the working space; MH lamps are omnidirectional and need reflectors which may lose some of the light output. So in terms of useful light output per circuit watt, a well-specified and correctly-installed LED fitting may have a moderate advantage.
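The "useful light per circuit watt" comparison can be sketched in a few lines. The 20 W gear loss and the 100 lm/W efficacies come from the text above; the 80% reflector factor is my own illustrative assumption, not a measured value:

```python
# Rough comparison of useful light per circuit watt: metal halide fitting
# versus a like-for-like LED replacement. Figures are illustrative.

MH_LAMP_W = 400            # nominal lamp rating (from the text)
MH_GEAR_W = 20             # parasitic control-gear load (from the text)
MH_EFFICACY = 100          # lumens per lamp watt, broadly similar to LED
MH_REFLECTOR_FACTOR = 0.8  # fraction usefully directed downwards (assumed)

LED_W = 400
LED_EFFICACY = 100
LED_DELIVERY_FACTOR = 1.0  # directional output assumed fully useful

# Useful lumens divided by total circuit watts (lamp plus gear)
mh_useful = (MH_LAMP_W * MH_EFFICACY * MH_REFLECTOR_FACTOR) / (MH_LAMP_W + MH_GEAR_W)
led_useful = (LED_W * LED_EFFICACY * LED_DELIVERY_FACTOR) / LED_W

print(round(mh_useful, 1))   # ~76.2 useful lm per circuit watt
print(round(led_useful, 1))  # 100.0
```

On these (assumed) numbers the LED fitting delivers roughly 30% more useful light per circuit watt, which is a moderate advantage but still well short of a 50% saving.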

But the big gain is in controllability. MH lamps have a warm-up time measured in minutes and a ‘restrike’ time (after turning off) which is longer still, to allow them to cool before being turned on again. This is common to all high-intensity discharge (HID) lamps. It does not matter how long the delay is; it discourages the use of automatic control, so HID lamps are often turned on well before they are needed, and then stay on for the duration. LEDs by contrast can be turned off at will and come on instantly when they are needed again. This is where Clare might get her 50% saving.

High-intensity discharge lamps in a sports hall – a good candidate for LEDs because of erratic occupancy

Water treatment

Scale build-up from hard water is often cited as a cause of energy waste in hot-water systems (I am talking here about ‘domestic hot water’ supply, not closed loops within central heating systems; I will come to those later). Actually though, contrary to claims for some water treatment devices, it is not necessarily the case that energy waste in DHW systems will be significant. Indeed with an electric immersion heater on a 24-hour service, all the supplied energy still gets into the water; there is no loss. Of course the rate at which the water temperature recovers will be reduced, and the heating element will fail prematurely, but those are service and reliability issues not energy waste.

The story is a little different on intermittent hot water storage of any kind. Here, because scaling will retard temperature recovery, users may extend preheat times and that will result in a marginal increase in standing heat loss. If the heat supply is from a primary (boiler-fed) water loop, the primary return temperature will be higher because scale impedes heat transfer, and this also will increase standing losses although in reality not to a significant extent in the grand scheme of things. If hot-water recovery times deteriorate markedly, users may of course dispense with time control altogether and in those circumstances avoidable standing heat loss might become significant if thermal insulation is poor.

Turning now to the effect on wet central heating circuits, scaling will affect efficiency. Scale within heat emitters (radiators and so on) will reduce heat transfer and result in higher circulating temperatures because the heat cannot escape from the water so readily. Meanwhile within the boiler itself, impaired transfer of heat into the (now hotter) system water will result in excessive heat going up the chimney, evidenced by elevated exhaust temperatures.

Furthermore in both heating and DHW systems, scale could interfere with the operation of control valves and either result in excessive heat output – with a corresponding excessive use of fuel – or inadequate heat output, which will cause people to interfere with the controls, deploy electric heaters, or take other actions that incur excess costs.

Preventing scale build-up

Simplifying the story somewhat, the main constituent of scale is calcium carbonate, which starts to form above about 35°C through breakdown of the more soluble calcium hydrogen carbonate that is present to varying degrees in the public water supply, with  ‘hard’ water containing higher concentrations of it. Calcium carbonate crystals of the normal ‘calcite’ form stick to surfaces and each other, and that is what constitutes limescale.

One way to deal with this is softening which (in its strict sense) involves a chemical process to turn calcium carbonate into sodium carbonate which does not precipitate as crystals but stays in solution. The process is costly in terms of chemicals; a waste product, calcium chloride, needs to be flushed away periodically; and the softened water is unsuitable for drinking and cooking because of its high sodium content.

The alternative to chemical treatment is physical conditioning. Various proprietary methods are available. Some involve electric or magnetic fields which are supposed to affect the calcite crystals in some way (for example giving them an electric charge so that they repel each other, or in some other manner inhibiting their tendency to agglomerate).

Another class of conditioner is electrolytic. Electrolytic devices release minute quantities of zinc or iron into the water, which convert the calcium carbonate to its ‘aragonite’ form. Aragonite crystals, unlike calcite, do not stick together, so they stay in suspension and do not contribute to scale formation.


With the exception of electrolytic devices, there is no scientific explanation of how or why these physical conditioners work, and there are no accepted tests of efficacy. There is only anecdotal evidence, but if it works, it works.

The one method of physical conditioning which is definitely effective (and I can vouch for it personally) is polysilicate-polyphosphate dosing. This has a dual action. It modifies the carbonate crystals to stop them sticking to each other, and it coats the inner surfaces of pipework and appliances to inhibit scale formation.

For anybody wanting further references, this note from WRc commissioned by Southern Water is what I currently regard as the most authoritative advice on the subject of water treatment techniques.

The value of a tree

We all know that trees are good and absorb carbon dioxide. But how good are they? Let’s work it out…

Trees absorb carbon dioxide at different rates depending upon their age, species and other factors but as a rough order of magnitude you can say the figure for a typical established tree is 10 kg per year. The carbon dioxide emissions associated with energy use are 0.2 kg per kWh for natural gas and (in the UK in 2018, including transmission losses) an average of 0.3 kg per kWh for electricity.

So 50.0 kWh of gas or about 33.3 kWh of electricity each generate the 10 kg of CO2 that a single tree can absorb in a year. Take that figure for electricity. As a year is 8760 hours, 33.3 kWh equates to a continuous load of only 3.8W. So one entire tree compensates for one broadband router, a TV on standby, or a couple of electric toothbrushes or cordless phones (roughly).

And as for gas consumption: remember pilot lights? The little flame that burns continuously to ignite the main gas burner? If you had a pilot flame with a rating of 100 watts, in the course of a year it would use 876 kWh and require no fewer than 17 trees to offset its CO2 emissions.
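The arithmetic above can be checked in a few lines, using only the emission factors and the 10 kg/year uptake figure given in the text:

```python
# Sense-check: how much continuous load one tree's annual CO2 uptake
# offsets, and how many trees a continuously-burning pilot light needs.

TREE_CO2_KG_PER_YEAR = 10.0  # rough order-of-magnitude figure (from the text)
GAS_KG_PER_KWH = 0.2         # natural gas emission factor
ELEC_KG_PER_KWH = 0.3        # UK grid average 2018, incl. transmission losses
HOURS_PER_YEAR = 8760

gas_kwh_per_tree = TREE_CO2_KG_PER_YEAR / GAS_KG_PER_KWH    # 50.0 kWh
elec_kwh_per_tree = TREE_CO2_KG_PER_YEAR / ELEC_KG_PER_KWH  # ~33.3 kWh

# Continuous electric load whose annual emissions one tree absorbs
watts_per_tree = elec_kwh_per_tree * 1000 / HOURS_PER_YEAR  # ~3.8 W

# A 100 W gas pilot flame burning all year
pilot_kwh = 0.1 * HOURS_PER_YEAR                            # 876 kWh
trees_for_pilot = pilot_kwh * GAS_KG_PER_KWH / TREE_CO2_KG_PER_YEAR

print(round(watts_per_tree, 1))   # 3.8
print(round(trees_for_pilot, 1))  # 17.5
```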

Are the assumptions correct?

The first time I published this piece in the Energy Management Register bulletin my estimate of CO2 takeup rates was challenged. Fair enough: I plucked it from stuff I had found on the Web knowing that it might be out by an order of magnitude. So let’s do a sense check.

The chemical composition of wood is 50% carbon (on a dry-matter basis) and all that carbon came from CO2 in the air. So 1 kg of dry woody matter contains 0.5 kg of carbon, which in turn was derived from 0.5 x 44/12 = 1.833 kg of CO2. Thus if we know the growth rate of a tree in dry mass per year, we can multiply that by 1.833 to estimate its CO2 takeup. Fortunately a 2014 article in ‘Nature’ has the growth figures we need. Although there is wide variability in the results, for European species with trunk diameters of 10 cm the typical growth in above-ground dry mass is 1.6 kg per year, equating to a CO2 takeup of only 2.9 kg per year (although this rises to 18 and 58 kg per year for diameters of 40 and 100 cm). So newly-planted trees (which is what we are talking about) are going to fall well short of my 10 kg/year estimate, and it will be years before they reach a size where their offsetting contribution reaches even modest levels.
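The conversion from dry-mass growth to CO2 uptake is simple enough to express as a one-line function; the growth figures are those quoted above from the 2014 Nature article:

```python
# CO2 uptake from dry-mass growth. Wood is ~50% carbon on a dry basis,
# and each kg of carbon corresponds to 44/12 kg of CO2 (ratio of the
# molecular weight of CO2 to the atomic weight of carbon).

CARBON_FRACTION = 0.5
CO2_PER_KG_CARBON = 44.0 / 12.0  # = 3.667, so 0.5 * 3.667 = 1.833 per kg dry mass

def co2_uptake_kg_per_year(dry_mass_growth_kg_per_year):
    """Annual CO2 absorption implied by above-ground dry-mass growth."""
    return dry_mass_growth_kg_per_year * CARBON_FRACTION * CO2_PER_KG_CARBON

print(round(co2_uptake_kg_per_year(1.6), 1))  # 10 cm trunk: 2.9 kg/year
```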

I like trees – don’t get me wrong – by all means plant them for shade, wildlife habitat, fruit or aesthetic appearance. But when it comes to saving the planet I just think that given the choice between (a) planting a tree and waiting a few years, and (b) cutting my electricity demand by 3.8 watts now, I know what I would go for.

“Science-based targets”: sounds good, means very little

WHEN I FIRST heard the term science-based target (SBT) bandied around in the public arena I thought “oh good – they are advocating a rational approach to energy management”. I thought they were promoting the idea that I always push, which is to compare your actual energy consumption against an expected quantity calculated, on a scientific basis, from the prevailing conditions of weather, production activity, or whatever other measurable factors drive variation in consumption.

How wrong I was. Firstly, SBTs are targets for emissions, not energy consumption; and secondly a target is defined as ‘science-based’ if, to quote the Carbon Trust, “it is in line with the level of decarbonisation required to keep the global temperature increase below 2°C compared to pre-industrial temperatures”. I have three problems with all of this.

Firstly I have a problem with climate change. I believe it is real, of course; and I am sure that human activity, fuel use in particular, is the major cause. What I don’t agree with is using it as a motivator or to define goals. It is too remote, too big, and too abstract to be relevant to the individual enterprise. And it is too contentious. To mention climate change is to invite debate: to debate is to delay.

Secondly, global targets cannot be transcribed directly into local ones. If your global target is a reduction of x% and you set x% as the target for every user, you will fail because some people will be unable or unwilling to achieve a cut of x% while those who do achieve x% will stop when they have done so. In short there will be too few over-achievers to compensate for the laggards.

Finally I object to the focus on decarbonisation. Not that decarbonisation itself is valueless; quite the opposite. It is the risk that people prioritise decarbonisation of supply, rather than reduction of demand. If you decarbonise the supply to a wasteful operation, you have denied low-carbon energy to somebody somewhere who needed it for a useful purpose. We should always put energy saving first, and that is where effective monitoring and targeting, including rational comparisons of actual and expected consumption, has an essential part to play.

Review of ISO50001:2018

I ALWAYS THOUGHT that the diagrammatic representation of the “plan, do, check, act” cycle in ISO50001:2011 was a little strangely drawn (left-hand in picture below), although it does vaguely give the sense of a preparatory period followed by a repetitive cycle and occasional review. Turns out, though, that it was wrong all along because in the 2018 version of the Standard, the final draft of which is available to buy in advance of publication in August, it seems to have been “corrected” (right-hand below). For my money the new version is less meaningful than the old one.

Spot any similarity?

ISO50001 has been revised not because there was much fundamentally wrong with the 2011 version but as a matter of standards policy: it and other management-system standards such as ISO9001 (quality) and ISO14001 (environment) have a lot in common and are all being rewritten to match a new common “High Level Structure” with identical core text and harmonized definitions. ISO50001’s requirements, with one exception, will remain broadly the same as they were in 2011.

It is just a pity that ISO50001:2018 fails in some respects to meet its own stated objective of clarity, and there is evidence of muddled thinking on the part of the authors. The PDCA diagram is a case in point. I see also, for example, that the text refers to relevant variables (i.e., driving factors like degree days etc) affecting energy ‘performance’ whereas what they really affect is energy consumption. To take a trivial example, if you drive twice as many miles one week as another, your fuel consumption will be double but your fuel performance (expressed as miles per gallon) might well be the same. Mileage in this case is the relevant variable but it is the consumption, not the performance, that it affects. This wrong-headed view of ‘performance’ pervades the document and looking in the definitions section of the Standard you can see why: to most of us, energy performance means the effectiveness with which energy is converted into useful output or service; ISO50001:2018 however defines it as ‘measurable result(s) related to energy efficiency, energy use, and energy consumption’. I struggle to find practical meaning in that, and I suspect the drafting committee members themselves got confused by it.
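The mileage example can be made concrete with a couple of lines of arithmetic (the 40 mpg vehicle efficiency is an arbitrary assumption for illustration):

```python
# Doubling the relevant variable (miles driven) doubles consumption,
# but leaves the performance ratio (mpg) unchanged.

gallons_per_mile = 0.025  # assumed constant efficiency, i.e. 40 mpg

week1_miles, week2_miles = 200, 400
week1_fuel = week1_miles * gallons_per_mile  # 5.0 gallons
week2_fuel = week2_miles * gallons_per_mile  # 10.0 gallons

mpg1 = week1_miles / week1_fuel  # 40.0
mpg2 = week2_miles / week2_fuel  # 40.0 -- performance unchanged
```

Mileage, the relevant variable, drove consumption up by a factor of two while performance stayed the same, which is exactly the distinction the Standard's wording blurs.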

Furthermore, the committee have ignored warnings about ambiguity in the way they use the term Energy Performance Indicator (EnPI). There are always two aspects to an EnPI: (a) the method by which it is calculated—what we might call the EnPI formulation—and (b) its numerical value at a given time. Where the new standard means the latter, it says so, and uses the phrase ‘EnPI value’ in such cases. However, when referring to the EnPI formulation, it unwisely expresses this merely as ‘EnPI’, which is open to misinterpretation by the unwary. For example Section 6.4, Energy Performance Indicators, says that the method for determining and updating the EnPI(s) shall be maintained as documented information. I bet a fair proportion of people will take the phrase ‘determining and updating the EnPI(s)’ to mean calculating their values. It does not. The absence of the word ‘values’ means that you should be determining and updating what EnPIs you use and how they are derived.

Failure to explicitly label EnPI ‘formulations’ as such has also led to an error in the text: section 9.1.1 bullet (a) (2) says that EnPIs need to be monitored and measured. That should obviously have said EnPI values.

The new version adds an explicit requirement to ‘demonstrate continual energy performance improvement’. No such explicit requirement appeared in the 2011 text, but since last year, thanks to the rules governing certifying bodies, you cannot even be certified in the first place if you don’t meet this requirement. There was a lot of debate on this during consultation, but the new requirement survived even though it does not appear in the much-vaunted High Level Structure to which ISO50001 was supposedly rewritten to conform. That being the case, it is paramount that users adopt energy performance indicators that accurately reflect progress. Simple ratio-based metrics like kWh/tonne (or in data centres, Power Usage Effectiveness) are not fit for purpose, and their users risk losing their certification because EnPIs of that kind often give perverse results and may fail to reflect savings that have really been achieved.

On a positive note, the new version of the Standard retains the requirement to compare actual and expected energy consumption, and to investigate significant deviations in energy performance. These requirements are actually central to effective ongoing energy management. Moreover, a proper understanding of how to calculate expected consumption is the key to the computation of accurate EnPIs, making it a mission-critical concept for anyone wanting to keep their certification.

This article is promoting my training courses on energy monitoring and targeting which include (a) the use of consumption data to detect hidden waste and (b) how to derive meaningful energy performance indicators.

This review is based on the Final Draft of ISO50001:2018 which has been released on sale prior to formal publication in August 2018.

Magnets: mutual repulsion

One ironic and highly satisfying way to debunk the claims for magnetic fuel conditioning is to pitch one supplier against another. I have been digging in the archive for claims made by different suppliers, and with assistance from eagle-eyed newsletter reader Mark J., have compiled the following account. Let’s start with Magnatech. Their web site makes a bald assertion that passing fuel through a magnet’s negative and positive (sic) fields makes it easier for the fuel to bond with oxygen and burn. They offer no explanation of how this works but say it creates a rise in flame temperature of “an extra 120°C or more”.  However, their competitor Maximus Green says that the flame temperature only rises by 20°C, but they gamely have a crack at explaining how: they claim that hydrocarbon fuel molecules clump together in large “associations” because they are randomly charged positive and negative (although even if that were true, wouldn’t they just pair up?). Passing through a magnetic field, they say, gives all the molecules a positive charge, breaking up these supposed big clusters of fuel molecules. They don’t say where all the resulting spare electrons go.

Or at least that’s what Maximus Green used to say. In a recent (unsuccessful) submission to the Advertising Standards Authority they offered a completely different story. Quoted in the ASA ruling they said that “the hydrogen and carbon compound of gas and oil had two distinct isometric (sic) forms – ‘Ortho-state’ and ‘Para-state’ – which were characterised by different, opposite nucleus spins. The Ortho-state was more unstable and reactive in comparison to the Para-state, and therefore that state was desired because it resulted in a higher rate of combustion. They said that when fuel passed through the magnetic field the hydrocarbon molecule changed from the para-hydrogen state to the ortho-hydrogen state, and that the higher energised spin state of the ortho-hydrogen molecules produced high electrical potential (reactivity), which attracted additional oxygen and therefore increased combustion efficiency”.

Another player, Maxsys, meanwhile, are having none of this ionised oil, lumpy gas or nuclear spin stuff. Their 2014 brochure lays the blame on very fine dust in the fuel. By applying a magnetic field, they say “nanoparticles that would normally pass through the combustion or reduce heat transfer efficiency, by clinging to and fouling surfaces, begin to cluster together”, an effect which forms “larger colloids, less likely to create a film deposit and compromise a plant’s performance”. Now pardon my scientific knowledge, but a “colloid” is a stable suspension of very fine particles in a liquid. Milk is a good example. Be that as it may, Maxsys are saying that magnetic fields cause things to clump together, in direct contradiction to what we heard earlier from Maximus Green in one of their versions of how magnetism supposedly works.

Someone is telling porkies and I will leave it to you, dear reader, to work out who.

Footnote: an independent test of the efficacy of magnets on fuel lines was carried out by Exeter University in 1997. Their report, which strangely is never quoted by vendors, can be downloaded here.

Errors in solid-state electricity meters

Recent press reports suggest that some types of electricity meter (including so-called ‘smart’ meters) are susceptible to gross errors when feeding low-energy lamps, variable-speed drives and other equipment that generates electromagnetic interference.

According to an investigation and review by metering expert Kris Szajdzicki, such measurement errors do occur and their magnitude depends upon the current-sensing technology used by the meter, although the effect may be negligible in normal situations in the domestic market. However, potential for gross error remains in unfavourable circumstances, particularly in industrial or commercial installations or where there is deliberate intent to fool the meter.

Kris has made his assessment available here.

On-off testing to prove energy savings

When you install an energy-saving measure, it is important to evaluate its effect objectively. In the majority of cases this will be achieved by a “before-and-after” analysis making due allowance for the effect of the weather or other factors that cause known variations.

There are, however, some types of energy-saving device which can be temporarily bypassed or disabled at will, and for these it may be possible to do interleaved on-off tests. The idea is that by averaging out the ‘on’ and ‘off’ consumptions you can get a fair estimate of the effect of having the device enabled. The distorting effect of any hard-to-quantify external influences—such as solar gain or levels of business activity—should tend to average out.

A concrete example may help. Here is a set of weekly kWh consumptions for a building where a certain device had been fitted to the mains supply, with the promise of 10% reductions. The device could easily be disconnected and was removed on alternate weeks:

Week	kWh	Device?
----    -----   -------
1	241.8	without
2	223.0	with
3	221.4	without
4	196.4	with
5	200.1	without
6	189.6	with
7	201.9	without
8	181.3	with
9	185.0	without
10	208.5	with
11	181.7	without
12	188.3	with
13	172.3	without
14	180.4	with

The mean of the even-numbered weeks, when the device was active, is 195.4 kWh compared with 200.6 kWh in weeks when it was disconnected, giving an apparent saving on average of 5.2 kWh per week. This is much less than the promised ten percent but there is a bigger problem. If you look at the figures you will see that the “with” and “without” weeks both have a spread of values, and their ranges overlap. The degree of spread can be quantified through a statistical measure called the standard deviation, which in this case works out at 19.7 kWh per week. I will not go into detail beyond pointing out that it means that about two-thirds of measurements in this case can be expected to fall within a band of ±19.7 kWh of the mean purely by chance. Measured against that yardstick, the 5.2 kWh apparent saving is clearly not statistically significant and the test therefore failed to prove that the device had any effect (as a footnote, when the analysis was repeated taking into account sensitivity to the weather, the conclusion was that the device apparently increased consumption).
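The analysis above can be reproduced in a few lines using the figures from the table:

```python
# On-off test analysis: compare mean consumption with and without the
# device, and use the standard deviation of all readings as a yardstick.

import statistics

without = [241.8, 221.4, 200.1, 201.9, 185.0, 181.7, 172.3]      # odd weeks
with_device = [223.0, 196.4, 189.6, 181.3, 208.5, 188.3, 180.4]  # even weeks

mean_without = statistics.mean(without)      # 200.6 kWh/week
mean_with = statistics.mean(with_device)     # 195.4 kWh/week
apparent_saving = mean_without - mean_with   # ~5.2 kWh/week

# Sample standard deviation of all 14 weekly readings together
sd = statistics.stdev(without + with_device)  # ~19.75, the ~19.7 yardstick

print(round(apparent_saving, 1))  # 5.2
print(round(sd, 1))
```

Because the apparent saving (5.2 kWh) is only about a quarter of the standard deviation (about 19.7 kWh), it is well within the range that chance alone could produce, which is exactly the conclusion drawn above.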

When contemplating tests of this sort it is important to choose the length of on-off interval carefully. In the case cited, a weekly interval was used because the building had weekend/weekday differences. A daily cycle would also be inappropriate for monitoring heating efficiency in some buildings because of the effect of heat storage in the building fabric: a shortfall in heat input one day might be made up the next. Particular care is always needed where a device which reduces energy input might result in a shortfall in output which then has to be made up in the following interval when it is disconnected. This will notably tend to happen with voltage reduction in electric heating applications. During the low-voltage interval the heaters will run at lower output, and this may result in a heat deficit being ‘exported’ to the succeeding high-voltage period, when additional energy will need to be consumed to make up the shortfall, making the high-voltage interval look worse than the low-voltage one. To minimise this distortion, be sure to set the interval length several times longer than the typical equipment cycle time.

Finally, there are two other stipulations to add. Firstly, the number of ‘on’ and ‘off’ cycles should be equal; secondly, although there is no objection to omitting an interval for reasons beyond the control of either party (such as metering failure), it would be prudent to insist that intervals are omitted only in pairs, and that tests always recommence consistently in either the ‘off’ or ‘on’ state. This is to avoid the risk of skewing the results by selectively removing individual samples.