Category Archives: Uncategorized

Remote measurement and verification

Note: this guidance was developed because of the Covid-19 pandemic. However, by its nature, the pandemic is an unexpected major non-routine event which will in many cases completely invalidate baselines developed before March 2020. Readers should not expect this advice to remedy the resultant disruption to evaluations. It may prove to be applicable only to ‘retrofit isolation’ assessments and trials using extended sequences of on-off mode changes.

THIS GUIDANCE proposes enhancements to standard measurement and verification protocols to cover the situation where, for reasons beyond the control of the parties involved, a measurement and verification practitioner (MVP) is unable to participate in person.

Firstly the proposal for the energy-saving project should indicate not only the quantity or proportion by which consumption is expected to be reduced, but wherever possible the nature of the expected reduction. For example where data are to be analysed at weekly or monthly intervals, it may be possible to say whether reductions are expected in the fixed or variable components of demand or both; while for data collected at intervals of an hour or less it may be possible to define the expected change in daily demand profile or other parameters related to the pattern of demand.

Setting the expectations more precisely in this manner will help the MVP to detect whether post-implementation results may have been affected by unrelated factors.
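
As a sketch of how such expectations might be framed quantitatively, the weekly model below splits consumption into fixed and variable components by least-squares regression against a driving factor. The data and variable names are illustrative, not taken from any real project:

```python
import numpy as np

def fit_baseline(driver, consumption):
    """Fit consumption = fixed + slope * driver by least squares.

    Returns the fixed (standing) weekly component and the variable
    component per unit of the driving factor."""
    slope, fixed = np.polyfit(driver, consumption, 1)
    return fixed, slope

# Illustrative weekly data: driving factor (e.g. degree days) and kWh
driver = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
kwh = np.array([300.0, 400.0, 500.0, 600.0, 700.0])

fixed, slope = fit_baseline(driver, kwh)
print(fixed, slope)  # fixed component ~200 kWh/week, slope ~10 kWh per unit
```

An expected saving can then be stated as, say, “the fixed component should fall by 50 kWh/week”, which gives the MVP something specific to verify after implementation.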

Secondly a longer period than usual of pre-implementation data should be collected and analysed. This is necessary in order not only to establish prevailing levels of measurement and modelling uncertainty, but potentially to expose pre-existing irregularities in performance. Such monitoring should employ the same metering devices which will be used for post-implementation assessment.

The causes of historical irregular performance should be traced as they could provide clues about foreseeable non-routine events (NRE) which would then need to be allowed for in post-implementation assessment in case they recur. If NREs are not adequately allowed for, they will at best degrade the analysis and at worst lead to incorrect conclusions.

Thirdly, all parties should remember that as well as foreseeable NREs, there will be unforeseen ones. Dealing with them is part of standard practice, but because the appointed MVP is unable to visit the subject site and interview key personnel, he or she is likely to miss important clues about potential NREs which would otherwise have been evident from his or her professional experience. It is therefore imperative that a planning teleconference takes place involving local personnel who are thoroughly conversant with the operation and maintenance of the facility. As part of this meeting a knowledgeable client representative should provide the MVP with a clear account of how the facility is used, with particular emphasis on non-standard situations such as temporary closures. Pertinent input from client representatives at large would include (to give some examples) control set-points, time schedules, plant sequencing and duty-cycling regimes, occupation patterns, and the nature and timing of maintenance interventions. Information about other projects – both active and contemplated – should be disclosed. The MVP has a duty to probe and ask searching questions. It should never be assumed that something is irrelevant to the matter in hand, and as a general rule no question asked by the MVP should go unanswered.

It may also be helpful to give the MVP a walk-through video tour of the site, which can of course take place on a separate occasion.

We will be holding this year’s measurement and verification conference, MAVCON20, as a weekly series of two-hour sessions in October and November. Follow this link for details and booking information.

Monitoring solar installations

In a recent newsletter I suggested that somebody wishing to monitor the health of their solar PV installations could do so using ‘back-to-back’ comparisons between them. Reader Ben Whittle knows a lot more about these matters and wrote to put me right. His emails are reproduced here:

I would point out that whilst it may possibly be interesting to compare solar installations and account for cloud cover, personally I wouldn’t bother!

  1. variable cloud cover is just that – you can’t control it, and it is fair to say that annual variation in the UK rarely exceeds ±5%
  2. if you have a monitoring system, it will be capable of telling you immediately by email when there is a fault, rather than waiting for you to do analysis

Inverter manufacturers’ own monitoring systems will report faults directly and immediately, usually based on

  1. an actual fault code being generated by the inverter – typically either being switched off and failing to report at all, string insulation resistance faults or other major failures
  2. output not matching other inverters in the same installation or sometimes against a base case / prediction based on yield expected due to weather forecasts
  3. possibly against a self-defined target, and a failure to meet it

Third-party monitoring manufacturers will typically do the same as inverter manufacturers’ monitoring (with the exception of not reporting actual fault codes), but they have the advantage of being able to report on installations with mixed inverter manufacturers (possibly new and historic installations in one location, or a portfolio of installations from different installers).

One classic mistake made with solar monitoring is having no clear idea of what you are going to do with all the information! It is time-consuming and takes a bit of experience to make sense of it all.

So I asked Ben if there was a cost attached and this was his reply:

Most inverter manufacturers provide a solution. Three of the biggest brands (SMA, Fronius, SolarEdge) all have very competent systems, which are hosted for free, but you can get additional services by paying extra.

A typical domestic setup (which in theory could cover any installation of any size) would have basic info on annual, monthly and daily yield, and may also display self-consumption rates assuming you have bought the requisite sub-meter. Other info can include energy sent to a battery if you have one. It would also notify you of lost grid connection or communication faults. Communication is typically managed over wifi for domestic setups and ethernet in commercial setups. Remote solar farms do all this over 3G or 4G if there is no nearby telephone infrastructure.

Where you would pay money for a service is for an enterprise solution: this would allow you to also compare multiple installations and give you more detailed performance info, possibly also automating equipment replacement or engineer visits if malfunctions were being detected. (You would only get this from the major manufacturers with a dedicated team in this country, or an O&M service provider who was being paid to keep an eye on performance).

Third-party systems typically only work using generation meter and export meter info, but a surprising amount of knowledge can be gleaned from this – you are, after all, only trying to find anomalies, and once you have defined the expected performance this is quite straightforward. The advantage is that if you are managing lots of different installations with different inverters, you can pull all the data into one database. Big O&M companies may insist on this being added where a service level is being defined – such as 98% availability or emergencies responded to in under 24 hours. The service will also include additional data points such as pyranometer info and other weather data, depending on the scale of the installation.

The companies who operate big solar farms are often hedge funds and they don’t like leaving systems down and not running for any length of time given the income from feed in tariffs. Though they quite often don’t manage the farms as well as they could do…

Ben Whittle (07977 218473, ) is with the  Welsh Government Energy Service

Heat meters using ultrasonic flow measurement

Clamp-on ultrasonic flow meters are tricky things to deploy and I always get a sinking feeling when somebody says they’re going to use them. In this case they were fitted to measure cooling energy as part of a measurement and verification project.  Provisional analysis in the early weeks of the project showed that all was not well: there were big apparent swings in performance, which were unrelated to what we knew was going on on the plant.

Data from the meters, which were downloaded at one-minute intervals, contained computed kWh values which I was consolidating into hourly totals. The person sending me the data was extracting the kWh figures into a spreadsheet for me, but some instinct prompted me to request the raw data, which I noticed contained the flow and temperature readings as well as the kWh results. My colleague Daniel wrote a fast conversion routine which saved our friend the trouble, and we discovered that there were occasional huge spikes in the one-minute kWh records which were caused by errors in the volumetric flow rates. The following crude diagram of the minute-by-minute flows over several weeks shows that, as well as plausible results (under 500 cubic metres per hour), there were families of high readings spaced at multiples of about 750 above that:

Minute-interval flow measurements over several weeks

The discrepancies were sporadic, rare, and clearly delineated so Dan was able to modify his software to skip the anomalous readings and average over the gaps. We were lucky that flow rates and temperatures were relatively constant, meaning that the loss of an occasional minute per hour was not fatal. He also discovered that the heat meter was zeroing out low readings below a certain threshold, and he plugged those holes by using the flow and differential-temperature data to compute the values which the meter had declined to output.
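
A minimal sketch of the kind of clean-up described above, assuming water at roughly 1.163 kWh per cubic metre per kelvin and an assumed plausibility limit of 500 m³/h (both figures would need setting for the actual plant):

```python
import numpy as np

# A heat meter on water yields roughly 1.163 kWh per m3 per kelvin
KWH_PER_M3_K = 1.163

def clean_kwh(flow_m3h, delta_t, meter_kwh, max_flow=500.0):
    """Skip anomalous minute readings and recompute zeroed-out ones.

    - Minutes with implausibly high flow (above max_flow, an assumed
      plant limit) are treated as spikes and replaced by the mean of
      the remaining valid minutes.
    - Minutes where the meter output zero kWh despite non-zero flow
      and delta-T are recomputed from first principles."""
    flow = np.asarray(flow_m3h, float)
    dt = np.asarray(delta_t, float)
    kwh = np.asarray(meter_kwh, float).copy()

    spikes = flow > max_flow
    zeroed = (kwh == 0) & (flow > 0) & (dt > 0)

    # Recompute suppressed low readings (energy per minute = hourly rate / 60)
    kwh[zeroed] = flow[zeroed] * dt[zeroed] * KWH_PER_M3_K / 60.0

    # Average over the gaps left by the spikes
    kwh[spikes] = kwh[~spikes].mean()
    return kwh

cleaned = clean_kwh([100, 100, 850, 100], [5, 5, 5, 5],
                    [9.69, 0.0, 999.0, 9.69])
print(cleaned)  # all four minutes now close to 9.69 kWh
```

This only works because, as noted above, flow rates and temperatures were relatively constant; with a rapidly varying load, averaging over the gaps would not be safe.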

The next diagram shows the relationship between the meter’s kWh output, aggregated to eight-hourly intervals (on the vertical axis) with what we believe to be the true readings (on the horizontal axis). The straight line represents a 1:1 relationship and shows that, quite apart from the gross discrepancies, readings were anomalously high in almost every eight-hour interval.

Relationship between raw eight-hourly kWh and the estimated true values

The effect on our analysis was dramatic. Instead of erratic changes in performance not synchronised with the energy-saving measure being turned on and off, we were able to see clear confirmation that it was having the required effect.


Sankey diagram software

Sankey diagrams depict the flows of energy through a system or enterprise using arrows whose widths are proportional to the magnitudes of the flows.

My friend and former colleague Kevin Cardall recently challenged my newsletter readers to come up with improvements on an Excel-based Sankey Diagram generator which he had devised (illustrated right, and attached here). His work was inspired by this website. Reader David B. responded rapidly with a neat enhancement but we also had a number of alternatives suggested by other readers.

Readers’ recommendations

When the subject of Sankey diagram software came up in 2003 one of my clients recommended SDRAW.  It is a commercial product but there is a free demonstration version.

Readers Colin G., Gary C. and Chris S. mentioned a free online tool, and reader Andy drew my attention to this toolkit for those who want to do it in Excel.

“Energy Service Company in a Box”

ESCO in a box is a concept being developed with government support with the aim of improving takeup of energy-saving measures among small and medium enterprises.

In essence, the project is designed to enable respected local community organisations (for example) to set up as providers of energy-saving services to the SMEs in their area, by equipping them with a complete package of technical, analytical, legal and financial components.

The originators of the idea, EnergyPro Ltd, have produced an  overview of the scheme  and I interviewed their managing partner, Steven Fawkes, about it on 15 April. The recording will be available until 14 May at  (apologies that the recording is missing the first couple of minutes).

Monitoring vehicle performance

Normally when we track vehicle performance we think in terms of miles per gallon or kilometres per litre. In Figure 1, for example, we are looking at the weekly km/litre figure for a 32-tonne flatbed lorry delivering building materials:

Figure 1: trend in kilometres per litre

It is just about possible to discern worsening performance towards the end of the trace. But by taking a slightly different approach we can not only confirm that there is an issue, but also learn more about its timing, nature and magnitude. We start by plotting weekly fuel consumption against weekly distance travelled, as in Figure 2. (Distance travelled is the “driving factor” in this analysis not in the sense of driving the lorry, but in the sense that variation in weekly distance travelled “drives” variation in weekly fuel use):

Figure 2: relationship between weekly fuel consumption and distance driven

What we see is that there is an element of consumption (about 40 litres per week in this case) that is unrelated to distance driven. Most likely, this is fuel consumed while stationary. The straight-line relationship gives us a more precise gauge of performance because it allows us to deduce expected consumption each week quite accurately. We can thus show the deviation from expected fuel consumption as a time-history chart (Figure 3):
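
The steps above can be sketched in a few lines; the fixed element and litres-per-km slope here are illustrative assumptions, not the lorry’s actual regression coefficients:

```python
import numpy as np

def expected_fuel(distance_km, fixed_litres=40.0, litres_per_km=0.35):
    """Expected weekly fuel use from the straight-line relationship:
    a fixed element plus a distance-related element.
    (The coefficients are assumptions for illustration.)"""
    return fixed_litres + litres_per_km * np.asarray(distance_km, float)

# Illustrative weekly distances (km) and actual fuel use (litres)
distance = np.array([800.0, 1000.0, 900.0])
actual = np.array([320.0, 440.0, 405.0])

deviation = actual - expected_fuel(distance)
print(deviation)  # positive values flag weeks of excess consumption
```

Plotting that deviation series week by week is what produces a chart like Figure 3.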

Figure 3: weekly deviation from expected fuel consumption

From this it is clear that there was a change in behaviour on or about 7 October, which has manifested itself as a fairly consistent 50-litre-per-week excess almost every week since (see the highlighted points).

Furthermore, we can compare the adverse and achievable behaviour on the scatter diagram (Figure 4) in which the post-change points are marked:

Figure 4: comparison of behaviour before and after the change

The red straight line is a best fit through all the post-change points, and it shows us that the apparent excess fuel consumption is not distance-related. It might be a permanent change in terrain or traffic conditions or a new pattern of deliveries with more waiting time… Or it might be a new driver who doesn’t turn their engine off while waiting. It probably isn’t a mechanical fault, because that would tend to change the gradient of the line. But at least we know when the change occurred (which will help trace the cause), its nature (which helps eliminate some kinds of fault) and its magnitude (which helps us decide whether to bother pursuing the case).

Try getting those insights from tracking the MPG.

This method of monitoring energy performance also applies to buildings and industrial processes, and you can find training on the method at

ISO50001 Q&A

One of my newsletter readers, A.M., wrote from New Zealand with a series of questions about ISO50001, the management-systems standard for energy management. He has just started to get to grips with the 2018 edition. Here are his questions and my answers:

A.M.: How do we distinguish between boundaries and scope? If the boundary is simply the physical borders of the system (e.g. the office buildings), what is the scope? And if the scope is, for example, “transportation”, why in SEU [significant energy use] terms do we say “transportation” could be an SEU as a process?

V.V.: “Scope” means the range of activities covered: for example “manufacturing processes”, “heating, ventilation and air conditioning” or, as you say, “transportation”. Within transportation you might have, for example, “freight” as an SEU, but equally you could declare all transport as significant. There is no paradox here.

A.M.: In the new edition, top management shall take on all the responsibilities that the representative had in the last edition. It seems impossible to hand all these tasks to top management. How do we cope with this?

V.V.: If you are responsible for a task you can delegate it but still keep responsibility, i.e., it is your fault if the people you delegated it to fail to carry it out properly. Managers are accountable for the actions of subordinates.

A.M.: In section 4.3, page 8, after b) we have the statement “The organization shall not exclude an energy type within the scope and boundaries”. I do not understand the idea – why are we not allowed to do so?

V.V.: The requirement seems logical to me. For one example: if you have transport as your scope and you have plug-in hybrid vehicles, it is reasonable to insist that you cannot exclude any electricity used by them. Another example: if you had an oil-fired boiler and replaced it with a wood-fired one, it would evidently be wrong to exclude the wood fuel from consideration.

A.M.: If a new opportunity were replacing a diesel boiler with wood pellet, it means we are changing the energy type, which does not necessarily reduce energy costs. Can we still call it an action plan?

V.V.: ISO50001 is about managing energy performance, not costs or carbon. If substituting a different fuel improves the energy performance, it will contribute to your aims and objectives, so it would make sense to classify the work as an action plan.

A.M.: I understand that for each energy type we identify SEU(s), and for each SEU we list the action plans. What if one action plan reduces diesel but increases electricity? Do we still keep it as an action plan for diesel?

V.V.: What matters is the overall energy performance. If the amount of electricity consumption that you add exceeds the amount of diesel energy saved, your energy performance would be worse after the project and it would therefore make no sense to include the project in an action plan within your EnMS. If the project is going to improve energy performance, you could declare it as part of an action plan.

LED versus metal halide lamps

Clare C., a regular reader of my energy-management bulletins, was perplexed when she started researching the cost advantages of LEDs as replacements for metal halide (MH) high-bay fittings. She discovered that MH lamps have luminous efficacies very similar to LEDs, with both, broadly speaking, yielding about 100 lumens per watt. Certainly she wasn’t going to get the 50% saving she was after, and she asked my opinion.

There are a couple of factors that would tip the balance in favour of LEDs. Firstly, she needed to account for the fact that unlike LEDs, MH lamps need control gear which would add some parasitic load (say 20 watts on a 400-watt lamp).  Secondly, LEDs are more directional and can deliver all their output more effectively to the working space; MH lamps are omnidirectional and need reflectors which may lose some of the light output. So in terms of useful light output per circuit watt, a well-specified and correctly-installed LED fitting may have a moderate advantage.
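
A back-of-envelope comparison of useful light per circuit watt can be sketched as follows; the gear load and delivery fractions are assumed figures for illustration, not measured values for any particular fitting:

```python
def useful_lumens_per_circuit_watt(lamp_watts, gear_watts,
                                   efficacy_lm_per_w, delivery_fraction):
    """Useful light delivered to the working plane per circuit watt.

    delivery_fraction accounts for reflector/optic losses; all the
    figures passed below are illustrative assumptions."""
    lumens = lamp_watts * efficacy_lm_per_w * delivery_fraction
    return lumens / (lamp_watts + gear_watts)

# 400 W MH lamp with 20 W gear and 75% reflector delivery,
# versus a directional 400 W LED fitting delivering 95%
mh = useful_lumens_per_circuit_watt(400, 20, 100, 0.75)
led = useful_lumens_per_circuit_watt(400, 0, 100, 0.95)
print(mh, led)  # the LED delivers roughly a third more useful light per watt
```

On these assumptions the LED’s advantage is real but moderate, which is consistent with Clare’s finding that efficacy alone would not deliver her 50% saving.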

But the big gain is in controllability. MH lamps have a warm-up time measured in minutes and a ‘restrike’ time (after turning off) which is longer still, to allow them to cool before being turned on again. This is common to all high-intensity discharge (HID) lamps. However long the delay, it discourages the use of automatic control, so HID lamps are often turned on well before they are needed and then stay on for the duration. LEDs by contrast can be turned off at will, and come on as soon as they are needed again. This is where Clare might get her 50% saving.

High-intensity discharge lamps in a sports hall – a good candidate for LEDs because of erratic occupancy

Water treatment

Scale build-up from hard water is often cited as a cause of energy waste in hot-water systems (I am talking here about ‘domestic hot water’ supply, not closed loops within central heating systems; I will come to those later). Actually though, contrary to claims for some water treatment devices, it is not necessarily the case that energy waste in DHW systems will be significant. Indeed with an electric immersion heater on a 24-hour service, all the supplied energy still gets into the water; there is no loss. Of course the rate at which the water temperature recovers will be reduced, and the heating element will fail prematurely, but those are service and reliability issues not energy waste.

The story is a little different with intermittent hot water storage of any kind. Here, because scaling will retard temperature recovery, users may extend preheat times and that will result in a marginal increase in standing heat loss. If the heat supply is from a primary (boiler-fed) water loop, the primary return temperature will be higher because scale impedes heat transfer, and this also will increase standing losses, although in reality not to a significant extent in the grand scheme of things. If hot-water recovery times deteriorate markedly, users may of course dispense with time control altogether, and in those circumstances avoidable standing heat loss might become significant if thermal insulation is poor.

Turning now to the effect on wet central heating circuits, scaling will affect efficiency. Scale within heat emitters (radiators and so on) will reduce heat transfer and result in higher circulating temperatures because the heat cannot escape from the water so readily. Meanwhile within the boiler itself, impaired transfer of heat into the (now hotter) system water will result in excessive heat going up the chimney, evidenced by elevated exhaust temperatures.

Furthermore in both heating and DHW systems, scale could interfere with the operation of control valves and either result in excessive heat output – with a corresponding excessive use of fuel – or inadequate heat output, which will cause people to interfere with the controls, deploy electric heaters, or take other actions that incur excess costs.

Preventing scale build-up

Simplifying the story somewhat, the main constituent of scale is calcium carbonate, which starts to form above about 35°C through breakdown of the more soluble calcium hydrogen carbonate that is present to varying degrees in the public water supply, with ‘hard’ water containing higher concentrations of it. Calcium carbonate crystals of the normal ‘calcite’ form stick to surfaces and each other, and that is what constitutes limescale.

One way to deal with this is softening which (in its strict sense) involves a chemical process to turn calcium carbonate into sodium carbonate which does not precipitate as crystals but stays in solution. The process is costly in terms of chemicals; a waste product, calcium chloride, needs to be flushed away periodically; and the softened water is unsuitable for drinking and cooking because of its high sodium content.

The alternative to chemical treatment is physical conditioning. Various proprietary methods are available. Some involve electric or magnetic fields which are supposed to affect the calcite crystals in some way (for example giving them an electric charge so that they repel each other, or in some other manner inhibiting their tendency to agglomerate).

Another class of conditioner is electrolytic. Electrolytic devices release minute quantities of zinc or iron into the water, which convert the calcium carbonate to its ‘aragonite’ form, whose crystals, unlike calcite, do not stick together; they stay in suspension and do not contribute to scale formation.

For a wide-ranging introduction to energy-saving technologies look out for my one-day ‘A to Z’ courses advertised at

With the exception of electrolytic devices, there is no scientific explanation of how or why these physical conditioners work, and there are no accepted tests of efficacy. There is only anecdotal evidence – but if it works, it works.

The one method of physical conditioning which is definitely effective (and I can vouch for it personally) is polysilicate-polyphosphate dosing. This has a dual action: it modifies the carbonate crystals to stop them sticking to each other, and it coats the inner surfaces of pipework and appliances to inhibit scale formation.

For anybody wanting further references, this note from WRc commissioned by Southern Water is what I currently regard as the most authoritative advice on the subject of water treatment techniques.

The value of a tree

We all know that trees are good and absorb carbon dioxide. But how good are they? Let’s work it out…

Trees absorb carbon dioxide at different rates depending upon their age, species and other factors but as a rough order of magnitude you can say the figure for a typical established tree is 10 kg per year. The carbon dioxide emissions associated with energy use are 0.2 kg per kWh for natural gas and (in the UK in 2018, including transmission losses) an average of 0.3 kg per kWh for electricity.

So 50.0 kWh of gas or about 33.3 kWh of electricity each generate the 10 kg of CO2 that a single tree can absorb in a year. Take that figure for electricity. As a year is 8760 hours, 33.3 kWh equates to a continuous load of only 3.8W. So one entire tree compensates for one broadband router, a TV on standby, or a couple of electric toothbrushes or cordless phones (roughly).

And as for gas consumption: remember pilot lights? The little flame that burns continuously to ignite the main gas burner? If you had a pilot flame with a rating of 100 watts, in the course of a year it would use 876 kWh and require no fewer than 17 trees to offset its CO2 emissions.

Are the assumptions correct?

The first time I published this piece in the Energy Management Register bulletin my estimate of CO2 takeup rates was challenged. Fair enough: I plucked it from stuff I had found on the Web knowing that it might be out by an order of magnitude. So let’s do a sense check.

The chemical composition of wood is 50% carbon (on a dry-matter basis) and all that carbon came from CO2 in the air. So 1 kg of dry woody matter contains 0.5 kg of carbon, which in turn was derived from 0.5 x 44/12 = 1.833 kg CO2. Thus if we know the growth rate of a tree in dry mass per year, we can multiply that by 1.833 to estimate its CO2 takeup. Fortunately a 2014 article in ‘Nature’ has the growth figures we need. Although there is wide variability in the results, for European species with trunk diameters of 10 cm the typical growth in above-ground dry mass is 1.6 kg per year, equating to a CO2 takeup of only 2.9 kg per year (although this rises to 18 and 58 kg per year for diameters of 40 and 100 cm). So newly-planted trees (which is what we are talking about) are going to fall well short of my 10 kg/year estimate, and it will be years before they reach a size where their offsetting contribution reaches even modest levels.
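
All the arithmetic in this piece can be checked in a few lines, using the emission factors quoted above:

```python
# Emission factors quoted above (kg CO2 per kWh, UK 2018)
GAS, ELEC = 0.2, 0.3
HOURS_PER_YEAR = 8760
# Wood is ~50% carbon (dry basis); CO2/C mass ratio is 44/12
CO2_PER_KG_DRY_WOOD = 0.5 * 44 / 12

# kWh that emit the 10 kg a notional established tree absorbs per year
print(10 / GAS, 10 / ELEC)                 # 50 kWh gas, ~33.3 kWh electricity

# Continuous electrical load offset by one such tree, in watts
print(10 / ELEC / HOURS_PER_YEAR * 1000)   # ~3.8 W

# 100 W pilot light: annual kWh and trees needed to offset it
pilot_kwh = 0.1 * HOURS_PER_YEAR
print(pilot_kwh, pilot_kwh * GAS / 10)     # 876 kWh, ~17.5 trees

# Sense check from growth rates: a 10 cm tree adding 1.6 kg dry mass/year
print(1.6 * CO2_PER_KG_DRY_WOOD)           # ~2.9 kg CO2 per year
```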

I like trees – don’t get me wrong – by all means plant them for shade, wildlife habitat, fruit or aesthetic appearance. But when it comes to saving the planet I just think that given the choice between (a) planting a tree and waiting a few years, and (b) cutting my electricity demand by 3.8 watts now, I know what I would go for.