Clare C., a regular reader of my energy-management bulletins, was perplexed when she started researching the cost advantages of LEDs as replacements for metal halide (MH) high-bay fittings. She discovered that MH lamps have luminous efficacies very similar to LEDs, with both, broadly speaking, yielding about 100 lumens per watt. Certainly she wasn’t going to get the 50% saving she was after, and she asked my opinion.
But the big gain is in controllability. MH lamps have a warm-up time measured in minutes and a ‘restrike’ time (after turning off) which is longer still, to allow them to cool before being turned on again. This is common to all high-intensity discharge (HID) lamps. However long the delay, it discourages the use of automatic control, so HID lamps are often turned on well before they are needed and then stay on for the duration. LEDs, by contrast, can be turned off at will and come on instantly when needed again. This is where Clare might get her 50% saving.
Ice storage is sometimes used in central air conditioning systems as a way of smoothing demand for chilling, thereby reducing the installed chiller capacity or allowing demand to be time-shifted. It’s attractive because the latent heat absorbed or released as the water changes phase between liquid and solid is an order of magnitude more than can be stored and recovered just by heating or cooling liquid water.
With phase-change storage established as a legitimate and effective element of central air conditioning and heating plant, it should come as no surprise that we now see vendors offering phase-change materials (PCM) to be embedded in the fabric of buildings as a way of stabilising internal temperatures and thus (according to their claims) saving energy. Are such claims likely to have any merit?
The concept of a PCM such as ice is that as any substance melts (or solidifies), it absorbs (or releases) heat without a change in temperature. PCMs for use in building elements such as walls or ceilings are usually either salts or waxes that change phase at the building’s internal set-point temperature. The argument goes that when daily outside-air temperatures swing above and below the internal set-point, heat stored during the hot part of the day is released during the cold part, avoiding the need for artificial cooling or heating. However, such circumstances are rare. What would happen in a more realistic scenario where, say, the weather is cold and the space needs heating? Firstly, if the space needs heating continuously, the PCM will never change phase and will thus be redundant. It will either be permanently solid or permanently liquid, depending on which side of its melting point the space is being held.
Now suppose the space is heated intermittently. If the internal set-point were below the PCM’s melting point, it would never melt, so it would have no effect. But if the heating set-point were above the PCM’s melting point then it would absorb heat during the warm-up part of the heating cycle. The problem with this is that it would retard the rise in space temperature and delay the achievement of set-point. This would call for a longer pre-heat period — which incurs an energy penalty. At the end of occupation the heating would go off and any heat stored in the PCM would dissipate to no effect, first to the outside and then — once the internal temperature was sufficiently low — back into the unoccupied space.
Similar considerations apply to cooling. If the PCM is effective it will retard the effect of the room air-conditioning system. This could result in complaints, for example in settings such as hotels where this technology is being actively promoted.
Skeptical of the online video purporting to demonstrate the effect (see diagram), I set up a simple mathematical simulation of the warm-up cycle of a heated room with PCM embedded behind plasterboard in its outer wall. The outcome was even worse than the description I gave earlier. At 5°C outside, the PCM did not start to store heat until the room temperature rose nearly two degrees above the PCM’s melting point. This is because the intervening plasterboard acts as a thermal insulator, keeping the PCM below room temperature even though I had allowed 100mm of insulation on its cold side.
On a positive note, PCM layers may have a role to play in moderating overheating from solar gain to roofs in particular, for example in attic rooms. I have measured external roof-tile surface temperatures of 50°C in the UK which, even with insulation behind the tiles, result in uncomfortable internal temperatures on the sloping ceiling below. 25mm of PCM under the roofing felt would absorb, by my calculations, the first 2.5 kWh per square metre of solar gain. Keeping the internal surface cooler would help alleviate discomfort, and with internal insulation the stored heat would dissipate preferentially to the night sky.
If you or a colleague need a wide-ranging introduction to energy-saving methods and techniques, look out for my one-day events “Energy efficiency A to Z“
“StoreDot and BP present world-first full charge of an electric vehicle in five minutes” runs the headline on this news item from BP, which actually talks about an electric scooter. The StoreDot website is a bit more gung-ho about their new battery technology, which they think would enable a 5-minute full recharge of an electric car with a 300-mile range. Really?
Quick sense check: for a 300 mile range you’d be talking probably about a 100-kWh battery for which a 5-minute full recharge would demand 1.2 megawatts of charging capacity. That’s going to be some meaty charger. Moreover, even upping the charger voltage to 1,000 volts you’ll be drawing 1,200 amps, so I reckon the charger cables are going to need a pair of conductors of (say) 4 square centimetres cross section. And cars would need to be engineered with DC charging circuits to match …
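That sense check takes only a few lines to reproduce (a rough sketch: the 100 kWh pack and 1,000 V charger are the assumed figures from the paragraph above):

```python
# Sense check: assumed 100 kWh pack (for ~300 mile range), 1,000 V charger.
battery_kwh = 100
charge_time_h = 5 / 60                             # five minutes

power_kw = battery_kwh / charge_time_h             # ~1,200 kW, i.e. 1.2 MW
charger_volts = 1_000
current_amps = power_kw * 1_000 / charger_volts    # ~1,200 A
```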
I put these points to StoreDot and they pointed me to ChargePoint’s website, which talks about “up to 500 kW” Express Plus charging using the CCS Type 2 connector, although as far as I know CCS2 goes nowhere near that rating, and when those kinds of powers are achieved they are going to need thousand-volt water-cooled charging cables with thermal sensing on the plug because of the risk of overheated contacts.
Our next course on transport energy and carbon is on 17 October: click here for details
Back to the scooter that BP had seen recharged in 5 minutes. The model in question has two 48V 31.9 Ah batteries (so about 3.1 kWh) which to recharge in 5 minutes would require a 37 kW charger – plausible in a non-domestic setting. I imagine the demonstration to BP involved removing the batteries to recharge them because obviously the scooter’s onboard electrics would not be designed to handle a charging current of 800 amps.
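The scooter arithmetic can be sketched the same way (pack figures from the paragraph above; the result of roughly 770 A at 48 V is the same order of magnitude as the 800 A quoted):

```python
# Scooter figures from the text: two 48 V, 31.9 Ah batteries.
volts, amp_hours, packs = 48, 31.9, 2
capacity_kwh = volts * amp_hours * packs / 1_000   # ~3.06 kWh

charge_time_h = 5 / 60                             # five minutes
power_kw = capacity_kwh / charge_time_h            # ~37 kW
current_amps = power_kw * 1_000 / volts            # ~770 A at 48 V
```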
My colleague Daniel did some digging and unearthed this priceless video from StoreDot in 2014, purporting to show a smartphone being completely recharged in 30 seconds using battery technology that would be released in 2016 (I’m still waiting…). The sceptical comments are worth reading — especially the ones about fake phone screens, and indeed the ones about exploding phones — but you can’t help but notice in the video itself they are actually “charging” a huge battery glued to the back of the phone. So a big dose of scepticism is in order, I think, and if the link to the video no longer works you can guess why.
More credible is the news this April about battery developments using vanadium disulphide cathodes stabilised with a microscopic layer of titanium disulphide: this promises faster charging, but they are careful not to say how much faster.
Scale build-up from hard water is often cited as a cause of energy waste in hot-water systems (I am talking here about ‘domestic hot water’ supply, not closed loops within central heating systems). Actually though, contrary to claims for some water treatment devices, it is not necessarily the case that energy waste will be significant. Indeed with an electric immersion heater on a 24-hour service, all the supplied energy still gets into the water; there is no loss. Of course the rate at which the water temperature recovers will be reduced, and the heating element will fail prematurely, but those are service and reliability issues not energy waste.
The story is a little different on intermittent hot water storage of any kind. Here, because scaling will retard temperature recovery, users may extend preheat times and that will result in a marginal increase in standing heat loss. If the heat supply is from a primary (boiler-fed) water loop, the primary return temperature will be higher because scale impedes heat transfer, and this also will increase standing losses although in reality not to a significant extent in the grand scheme of things. If hot-water recovery times deteriorate markedly, users may of course dispense with time control altogether and in those circumstances avoidable standing heat loss might become significant if thermal insulation is poor.
Preventing scale build-up
Simplifying the story somewhat, the main constituent of scale is calcium carbonate, which starts to form above about 35°C through breakdown of the more soluble calcium hydrogen carbonate that is present to varying degrees in the public water supply, with ‘hard’ water containing higher concentrations of it. Calcium carbonate crystals of the normal ‘calcite’ form stick to surfaces and each other, and that is what constitutes limescale.
One way to deal with this is softening which (in its strict sense) involves a chemical process to turn calcium carbonate into sodium carbonate which does not precipitate as crystals but stays in solution. The process is costly in terms of chemicals; a waste product, calcium chloride, needs to be flushed away periodically; and the softened water is unsuitable for drinking and cooking because of its high sodium content.
The alternative to chemical treatment is physical conditioning. Various proprietary methods are available. Some involve electric or magnetic fields which are supposed to affect the calcite crystals in some way (for example giving them an electric charge so that they repel each other, or in some other manner inhibiting their tendency to agglomerate).
Another class of conditioner is electrolytic. Electrolytic devices release minute quantities of zinc or iron into the water, which convert the calcium carbonate to its ‘aragonite’ form. Aragonite crystals, unlike calcite, do not stick together, so they stay in suspension and do not contribute to scale formation.
With the exception of electrolytic devices, there is no scientific explanation of how or why these physical conditioners work, and there are no accepted tests of efficacy. There is only anecdotal evidence; but if it works, it works.
The one method of physical conditioning which is definitely effective (and I can vouch for it personally) is polysilicate-polyphosphate dosing. This has a dual action: it modifies the carbonate crystals to stop them sticking to each other, and it coats the inner surfaces of pipework and appliances to inhibit scale formation.
For anybody wanting further references, this note from WRc commissioned by Southern Water is what I currently regard as the most authoritative advice on the subject of water treatment techniques.
We all know that trees are good and absorb carbon dioxide. But how good are they? Let’s work it out…
Trees absorb carbon dioxide at different rates depending upon their age, species and other factors but as a rough order of magnitude you can say the figure for a typical established tree is 10 kg per year. The carbon dioxide emissions associated with energy use are 0.2 kg per kWh for natural gas and (in the UK in 2018, including transmission losses) an average of 0.3 kg per kWh for electricity.
So 50.0 kWh of gas, or about 33.3 kWh of electricity, generates the 10 kg of CO2 that a single tree can absorb in a year. Take the figure for electricity: as a year is 8,760 hours, 33.3 kWh equates to a continuous load of only 3.8 W. So one entire tree compensates, roughly, for one broadband router, a TV on standby, or a couple of electric toothbrushes or cordless phones.
And as for gas consumption: remember pilot lights? The little flame that burns continuously to ignite the main gas burner? If you had a pilot flame with a rating of 100 watts, in the course of a year it would use 876 kWh and require no fewer than 17 trees to offset its CO2 emissions.
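The tree arithmetic above can be checked in a few lines (the 10 kg/year uptake and the emission factors are the figures assumed in the text):

```python
# Tree-offset arithmetic: 10 kg CO2 absorbed per tree per year;
# 0.2 kg CO2/kWh for gas, 0.3 kg CO2/kWh for electricity.
tree_kg_per_year = 10
gas_factor, elec_factor = 0.2, 0.3

gas_kwh = tree_kg_per_year / gas_factor        # 50 kWh of gas per tree-year
elec_kwh = tree_kg_per_year / elec_factor      # ~33.3 kWh of electricity

hours_per_year = 8760
continuous_watts = elec_kwh / hours_per_year * 1_000   # ~3.8 W

pilot_kwh = 0.1 * hours_per_year               # 100 W pilot flame: 876 kWh/yr
trees_needed = pilot_kwh * gas_factor / tree_kg_per_year   # ~17.5 trees
```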
Are the assumptions correct?
The first time I published this piece in the Energy Management Register bulletin my estimate of CO2 takeup rates was challenged. Fair enough: I plucked it from stuff I had found on the Web knowing that it might be out by an order of magnitude. So let’s do a sense check.
The chemical composition of wood is 50% carbon (on a dry-matter basis) and all that carbon came from CO2 in the air. So 1 kg of dry woody matter contains 0.5 kg of carbon, which in turn was derived from 0.5 x 44/12 = 1.833 kg of CO2. Thus if we know the growth rate of a tree in dry mass per year, we can multiply that by 1.833 to estimate its CO2 takeup. Fortunately a 2014 article in ‘Nature’ has the growth figures we need. Although there is wide variability in the results, for European species with trunk diameters of 10 cm the typical growth in above-ground dry mass is 1.6 kg per year, equating to a CO2 takeup of only 2.9 kg per year (although this rises to 18 and 58 kg per year for diameters of 40 and 100 cm). So newly-planted trees (which is what we are talking about) are going to fall well short of my 10 kg/year estimate, and it will be years before they reach a size where their offsetting contribution reaches even modest levels.
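That conversion is easily scripted (the growth figure is from the Nature article as quoted above):

```python
# Dry-mass growth to CO2 uptake, per the sense check above.
co2_per_kg_dry_mass = 0.5 * 44 / 12    # 1 kg dry wood derives from ~1.833 kg CO2
growth_kg_per_year = 1.6               # 10 cm trunk, European species (as quoted)
co2_uptake = growth_kg_per_year * co2_per_kg_dry_mass   # ~2.9 kg CO2/year
```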
I like trees – don’t get me wrong – by all means plant them for shade, wildlife habitat, fruit or aesthetic appearance. But when it comes to saving the planet I just think that given the choice between (a) planting a tree and waiting a few years, and (b) cutting my electricity demand by 3.8 watts now, I know what I would go for.
ON FACTORY compressed air systems it’s good practice to fit air isolation valves like the one below (A), fitted to a stamping press. It shuts off the air when the press is idle, but in case of valve failure a bypass (B) is provided. This one is closed now, but moments before the picture was taken we had found it open, defeating the automatic air shut-off.
Just before we moved on, I noticed a hose connected to the valve at one end and to nothing at the other. It was the air supply to the pneumatic actuator on the air valve itself; without it the valve could never close anyway. Somebody had decided to adopt a belt-and-braces approach to wasting air by disconnecting it (C).
The problem of open bypass valves was commonplace and well known, but nobody had thought to establish the root cause. What compelled operators to defeat the system? It turned out that they sometimes needed air on the press to apply the pneumatic brakes on the flywheel after the main motor had been turned off. A simple push-button over-ride will solve that issue.
Our client relies on an extensive network of automatically-read submeters throughout his estate and asked us to prepare a recovery manual in case his data-collection contractor should cease trading. As part of the exercise we set up a temporary online storage location, proved that the output from a typical data-logging installation can be rerouted, and established what format the data arrive in.
We are also discussing with the incumbent contractor what additional information will need to be available in escrow to permit an orderly handover.
Additional metering may be required for all sorts of reasons. There are three relatively clear-cut cases where the decision will be dictated by policy:
- Departmental accountability or tenant billing: it is often held that making managers accountable for the energy consumed in their departments encourages economy. Where this philosophy prevails, departmental sub-metering must be provided unless estimates (which somewhat defeat the purpose) are acceptable. Similar considerations would apply to tenant billing (I am talking about commercial rather than domestic tenants here).
- Environmental reporting: accurate metering is essential if, for example, consumption data is used in an emissions trading scheme: an assessor could refuse certification if measurements are held to be insufficiently accurate.
- Budgeting and product costing: this use of meter data is important in industries where energy is a significant component of product manufacturing cost, and where different products (or different grades of the same product) are believed to have different energy intensities.
The fourth case is where metering is contemplated purely for detecting and diagnosing excessive consumption in the context of a targeting and monitoring scheme. This may well be classified as discretionary investment and will require justification. This could be based on a rule of thumb, or on the advice in the Building Regulations (for example). A more objective method is to identify candidates for submetering on the basis of the risk of undetected loss (RoUL). The RoUL method attempts to quantify the absolute amount of energy that is likely to be lost through inability to detect adverse changes in consumption characteristics. It comprises four steps for each candidate branch:
- Estimate the annual cost of the supply to the branch in question (see below).
- Decide on the level of risk (see table below) and pick the corresponding factor.
- Multiply the cost in step 1 by the factor in step 2, to get an estimate of the annual average loss.
- Use the result from step 3 to set a budget limit for installing, reading and maintaining the proposed meter.
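The four steps above reduce to a one-line calculation. This sketch uses the suggested risk factors (20% / 5% / 1%); the £12,000 branch cost is an illustrative figure only:

```python
# Sketch of the RoUL method: annual average loss = supply cost x risk factor.
RISK_FACTORS = {"high": 0.20, "medium": 0.05, "low": 0.01}

def roul_budget(annual_supply_cost, risk):
    """Estimated annual average loss: use it as the budget ceiling for
    installing, reading and maintaining the proposed submeter."""
    return annual_supply_cost * RISK_FACTORS[risk]

budget = roul_budget(12_000, "medium")   # ~600 per year
```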
| Risk | Description | Factor |
|---|---|---|
| High | Usually associated with highly-intermittent or very variable loads under manual control, or under automatic control at unattended installations (the risk is that equipment is left to run continually when it should only run occasionally, or is allowed to operate ‘flat out’ when its output ought to modulate in response to changes in demand). Examples of highly-intermittent loads include wash-down systems, transfer pumps, frost protection schemes, and in general any equipment which spends significant time on standby. Typical continuous but highly-variable loads would include space heating and cooling systems. It should be borne in mind that oversized plant, or any equipment which necessarily runs at low load factor, is at increased risk. | 20% |
| Medium | Typified by variable loads and intermittently-used equipment operating at high load factor under automatic control, in manned situations where failure of the automatic controls would probably become apparent quickly. | 5% |
| Low | Anything which necessarily runs at high load factor (and therefore has little capacity for excessive operation) or where loss or leakage, if able to occur at all, would be immediately detected and rectified. | 1% |
*Note: the risk percentages are suggestions only; the reader should use his or her judgment in setting percentages appropriate to individual circumstances.*
The RoUL method tries to quantify the cost of not having a meter, but this relies on knowing the consumption in the as-yet-unmetered circuit. The circular argument has to be broken by estimating consumption:
- by step testing
- using regression analysis to determine sensitivity to driving factors such as product throughput and prevailing weather
- using ammeter readings for electricity, condensate flow for steam, etc.
- multiplying installed capacity by assumed (or measured) load factors
- from temporary metering
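For example, the installed-capacity method in the last-but-one bullet is just a multiplication (the 75 kW capacity and 40% load factor here are illustrative figures only):

```python
# Estimating unmetered consumption: installed capacity x load factor x hours.
installed_kw = 75
load_factor = 0.4          # assumed; a measured value is better if available
hours_per_year = 8760
annual_kwh = installed_kw * load_factor * hours_per_year   # ~262,800 kWh
```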
To prove that energy performance has improved, we calculate the energy performance indicator (EnPI) first for a baseline period and again during the subsequent period which we wish to evaluate. Let us represent the baseline EnPI value as P1 and the subsequent period’s value as P2.
Most people would then say that as long as P2 is less than P1 we have proved the case. But there is uncertainty in both P1 and P2 and this will be translated into uncertainty in the estimate of their difference. We strictly need to show not only that the difference (P1 – P2) is positive, but that the difference exceeds the uncertainty in its calculation. Here’s how we can do that.
In the example which follows I will use a particular form of EnPI called the ‘Energy Performance Coefficient’ (EnPC), although any numerical indicator could be used. The EnPC is the ratio of actual to expected consumption. By definition this has a value of 1.00 over your baseline period, falling to lower values if energy-saving measures result in consumption less than otherwise expected. To avoid a long explanation of the statistics I’ll also draw on Appendix B of the International Performance Measurement and Verification Protocol (IPMVP, 2012 edition) which can be consulted for deeper explanations.
IPMVP recommends evaluation based on the Standard Error, SE, of (in this case) the EnPC. To calculate SE you first calculate the EnPC at regular intervals and measure the Standard Deviation (SD) of the results; then divide SD by the square root of the number of EnPI observations. In my sample data I use 2016 and 2017 as the baseline period, and calculate the EnPC month by month.
In my sample data the standard deviation of the EnPC during the baseline period was 0.04423 and there being 24 observations the baseline Standard Error was thus
SE1 = 0.04423 / √24 = 0.00903
The cusum analysis shows that performance continued unchanged after the baseline period but then in July 2018 it improved. We see that the final five months show apparent improvement; the mean EnPC after the change was 0.94, and these five observations had a Standard Deviation of 0.02402. Their Standard Error was therefore
SE2 = 0.02402 / √5 = 0.01074
SEdiff, the Standard Error of the difference (P1 – P2), is given by

SEdiff = √( SE1² + SE2² )

= √( 0.00903² + 0.01074² )

= 0.01403
SE on its own does not express the true uncertainty. It must be multiplied by a safety factor t which will be smaller if we have more observations (or if we can accept lower confidence) and vice versa. This table is a subset of t values cited by IPMVP:
| Observations | 90% confidence | 80% | 50% |
|---|---|---|---|
| 5 | 2.13 | 1.53 | 0.74 |
| 10 | 1.83 | 1.38 | 0.70 |
| 12 | 1.80 | 1.36 | 0.70 |
| 24 | 1.71 | 1.32 | 0.69 |
| 30 | 1.70 | 1.31 | 0.68 |
Let us suppose we want to be 90% confident that the true reduction in the EnPC lies within a certain range. We therefore need to pick a t-value from the “90%” column of the table above. But do we pick the value corresponding to 24 observations (the baseline case) or 5 (the post-improvement period)? To be conservative, as required by IPMVP, we take the smaller number of observations, meaning we must in this case use a t-value of 2.13.
Now in the general case ∆P, the EnPC reduction, is given by
∆P = (P1 – P2) ± t × SEdiff
Which, substituting the values from our example, would yield
∆P = (1.00 – 0.94) ± (2.13 x 0.01403)
∆P = 0.06 ± 0.03
The lowest probable value of the improvement ∆P is thus (0.06 – 0.03) = 0.03. It may in reality be less, but the chances of that are only 1 in 20: we are 90% confident that the true value falls within the stated range, which leaves a 5% chance that it lies below the lower limit (and 5% that it lies above the upper).
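The whole worked example can be reproduced end to end in a few lines (all figures as given above):

```python
# Uncertainty in the EnPC improvement, per the worked example.
import math

sd1, n1 = 0.04423, 24     # baseline: standard deviation and observations
sd2, n2 = 0.02402, 5      # post-improvement period

se1 = sd1 / math.sqrt(n1)              # ~0.00903
se2 = sd2 / math.sqrt(n2)              # ~0.01074
se_diff = math.sqrt(se1**2 + se2**2)   # ~0.01403

t = 2.13                  # 90% confidence, 5 observations (conservative choice)
p1, p2 = 1.00, 0.94
delta = p1 - p2           # 0.06
margin = t * se_diff      # ~0.03
```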
Footnote: example data
The analysis is based on real data (preview below). These are from an anonymous source and multiplied by a secret factor to disguise their true values. Anybody wishing to verify the analysis can download the anonymous data as a spreadsheet here.
Note: to compute the baseline EnPC
- do a regression of MWh against tonnes using the months labelled ‘B’
- create a column of ‘expected’ consumptions by substituting tonnage values in the regression formula
- divide each actual MWh figure by the corresponding expected value
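A minimal sketch of those three steps, using ordinary least squares in pure Python; the data below are made up for illustration and are not the article's spreadsheet:

```python
# Compute baseline EnPC ratios: regress MWh against tonnes, then divide
# each actual MWh by its regression-expected value.
def baseline_enpc(mwh, tonnes):
    n = len(mwh)
    mean_x = sum(tonnes) / n
    mean_y = sum(mwh) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(tonnes, mwh))
             / sum((x - mean_x) ** 2 for x in tonnes))
    intercept = mean_y - slope * mean_x
    return [y / (intercept + slope * x) for x, y in zip(tonnes, mwh)]

# Months labelled 'B' only (illustrative figures):
ratios = baseline_enpc([52.0, 61.0, 47.0, 56.0], [100.0, 120.0, 90.0, 110.0])
```

By definition the ratios average very close to 1.00 over the baseline period, which is a useful check that the regression step has been done correctly.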