
Training: Energy Conscious Organisation

The Energy Services and Technology Association is offering training for people wishing to improve their skills in the field of behaviour change under the ‘Energy Conscious Organisation’ (EnCO) banner.

By attending the programme’s four 90-minute modules and passing a short exam you can become an EnCO Registered Consultant, and there is then the option of progressing to Approved EnCO Practitioner by implementing the principles, submitting a case study and attending an interview.

If you work for an ESTA member company or are registered as an RPEC consultant, the training is free of charge on a first-come, first-served basis but it is also available to others for a fee. You can download details here and as a subscriber to the Energy Management Register you can claim a 5% discount by quoting the code EMR2012.

Energy audits and surveys: rule of three

ENERGY surveys and audits – deliberate studies to find energy-saving opportunities – can be done with three levels of depth and thoroughness, can look at three broad aspects of operations, and will generally adopt one of three approaches.

Depth and thoroughness

Let’s take depth and thoroughness first. Level 1 would be an opportunity scan. This typically has a wide scope, and is based on a walk-through inspection. It will use only readily-available data, and provide at most only rough estimates of savings with no implementation cost estimates. It will yield only low-risk recommendations (the “no-brainers”) but should identify items for deeper study.

Level 2 is likely to have a selective scope (based perhaps on the findings from a Level 1 exercise). It is best preceded by a desktop analysis of consumption patterns and relationships, which means first collecting additional data on consumption and the driving factors which influence it. It should yield reasonably accurate assessments of expected savings but probably at best only rough cost estimates. It can therefore provide some firm recommendations relating to ‘safe bets’ and otherwise identify possible candidates for investment.

Level 3 is the investment-grade audit. This may have a narrow scope – perhaps one individual project – and will demand a sketch design and feasibility study, with accurate assessments of expected savings, realistic quotations for implementation, sound risk evaluation and (I would recommend) a measurement and verification plan.

Aspects covered

Next we will look at the three broad aspects of operations that the audit could cover. These are ‘technical’, ‘human factors’, and ‘procedural’.

Technical aspects will encompass a spectrum from less to more intrusive (starting with quality of automatic control and set points through energy losses to component efficiencies). In manufacturing operations the range continues through process layouts, potential for process integration and substitution of alternative processes.

Human-factors aspects meanwhile will cover good housekeeping, compliance with operating instructions, maintenance practices, training needs and enhanced vigilance.

Thirdly, procedural aspects will include the scope for improved operating and maintenance instructions, better plant loading and scheduling, effective monitoring and exception handling, and ensuring design feedback.

Approaches to the audit

The final dimension is the style of the audit, which I characterise as checklist-based, product-led, or opportunity-led.

The checklist-based approach suits simple repetitive surveys and less-experienced auditors.

Product-led audits have a narrow focus and exploit the expertise of a trusted technology supplier. Because the chosen focus is often set by advertising or on a flavour-of-the-month basis, the risk is that the wrong focus will be chosen and more valuable opportunities will be missed. Or worse still, the agenda will be captured by snake-oil merchants.

Finally we have the ‘opportunity-led’ style of audit. This is perhaps the ideal, although not always attainable because it needs competent auditors with diverse experience and will include the prior analysis and preliminary survey mentioned earlier.

These ideas, together with other advice on energy auditing, are to be covered in a new optional add-on module for my “Energy efficiency A to Z” course, which explains a wide range of technical energy-saving opportunities. Details of all my forthcoming training and conferences on energy saving can be found at

Remote measurement and verification

Note: this guidance was developed because of the Covid-19 pandemic. However, by its nature, the pandemic is an unexpected major non-routine event which will in many cases completely invalidate baselines developed before March 2020. Readers should not expect this advice to remedy the resultant disruption to evaluations. It may prove to be applicable only to ‘retrofit isolation’ assessments and trials using extended sequences of on-off mode changes.

THIS GUIDANCE proposes enhancements to standard measurement and verification protocols to cover the situation where, for reasons beyond the control of the parties involved, a measurement and verification practitioner (MVP) is unable to participate in person.

Firstly the proposal for the energy-saving project should indicate not only the quantity or proportion by which consumption is expected to be reduced, but wherever possible the nature of the expected reduction. For example where data are to be analysed at weekly or monthly intervals, it may be possible to say whether reductions are expected in the fixed or variable components of demand or both; while for data collected at intervals of an hour or less it may be possible to define the expected change in daily demand profile or other parameters related to the pattern of demand.

Setting the expectations more precisely in this manner will help the MVP to detect whether post-implementation results may have been affected by unrelated factors.
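To make the fixed-versus-variable idea concrete, here is a minimal Python sketch of the kind of least-squares decomposition involved. Everything here (the data, the degree-day driving factor, the function names) is invented for illustration and drawn from no real project:

```python
# Sketch: decompose weekly consumption into fixed and variable components
# with an ordinary least-squares fit of consumption against its driving
# factor. All numbers below are illustrative.

def fit_line(x, y):
    """Least-squares fit y = fixed + rate * x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    rate = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)
    fixed = my - rate * mx
    return fixed, rate

# Hypothetical weekly driving-factor and consumption data (baseline period)
degree_days = [52, 61, 48, 70, 55, 66]
kwh = [1240, 1395, 1180, 1560, 1290, 1480]

fixed, rate = fit_line(degree_days, kwh)
# If the project is expected to cut the fixed component (e.g. standing
# losses), post-implementation fits should show a lower intercept with a
# broadly unchanged slope; for variable-load measures, the reverse.
```

Stating in advance which of the two fitted parameters the project should move gives the MVP exactly the kind of precise expectation described above.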

Secondly a longer period than usual of pre-implementation data should be collected and analysed. This is necessary in order not only to establish prevailing levels of measurement and modelling uncertainty, but potentially to expose pre-existing irregularities in performance. Such monitoring should employ the same metering devices which will be used for post-implementation assessment.

The causes of historical irregular performance should be traced as they could provide clues about foreseeable non-routine events (NRE) which would then need to be allowed for in post-implementation assessment in case they recur. If NREs are not adequately allowed for, they will at best degrade the analysis and at worst lead to incorrect conclusions.

Thirdly, all parties should remember that as well as foreseeable NREs, there will be unforeseen ones. Dealing with these is part of standard practice, but because the appointed MVP is unable to visit the subject site and interview key personnel, he or she is likely to miss important clues about potential NREs which would otherwise have been evident from his or her professional experience.

It is therefore imperative that a planning teleconference takes place involving local personnel who are thoroughly conversant with the operation and maintenance of the facility. As part of this meeting a knowledgeable client representative should provide the MVP with a clear account of how the facility is used, with particular emphasis on non-standard situations such as temporary closures. Pertinent input from client representatives at large would include (to give some examples) control set-points, time schedules, plant sequencing and duty-cycling regimes, occupation patterns, and the nature and timing of maintenance interventions. Information about other projects, both active and contemplated, should be disclosed.

The MVP has a duty to probe and ask searching questions. It should never be assumed that something is irrelevant to the matter in hand and, as a general rule, no question asked by the MVP should go unanswered.

It may be helpful to provide the facility of a walk-through video tour for the benefit of the MVP, which can of course be on a separate occasion.

We will be holding this year’s measurement and verification conference, MAVCON20, as a weekly series of two-hour sessions in October and November. Follow this link for details and booking information.

Monitoring solar installations

In a recent newsletter I suggested that somebody wishing to monitor the health of their solar PV installations could do so using ‘back-to-back’ comparisons between them. Reader Ben Whittle knows a lot more about these matters and wrote to put me right. His emails are reproduced here:

I would point out that whilst it may possibly be interesting to compare solar installations and account for cloud cover, personally I wouldn’t bother!

  1. variable cloud cover is variable – you can’t control it, and it is generally fair to say that annual variation in the UK rarely exceeds ±5%
  2. if you have a monitoring system, it will be capable of telling you immediately by email when there is a fault, rather than waiting for you to do analysis

In the case of inverter manufacturers’ own monitoring systems, they will directly report faults immediately, usually based on

  1. an actual fault code being generated by the inverter – typically either being switched off and failing to report at all, string insulation resistance faults or other major failures
  2. output not matching other inverters in the same installation or sometimes against a base case / prediction based on yield expected due to weather forecasts
  3. possibly against a self-defined target, and a failure to meet it

Third-party monitoring manufacturers will typically do the same as inverter-manufacturer monitoring (with the exception of reporting actual fault codes), but they have the advantage of being able to report on installations where mixed inverter manufacturers are used (possibly new and historic installations in one location, or a portfolio of installations from different installers)

One classic mistake made with solar monitoring information is having no clear idea of what you are going to do with all the information, or how! It is time consuming to do and takes a bit of experience to make sense of it all.

So I asked Ben if there was a cost attached and this was his reply:

Most inverter manufacturers provide a solution. Three of the biggest brands (SMA, Fronius, SolarEdge) all have very competent systems, which are hosted for free, but you can get additional services by paying extra.

A typical domestic setup (which could in theory cover an installation of any size) would have basic info on annual, monthly and daily yield, and may also display self-consumption rates assuming you have bought the requisite sub-meter. Other info can include energy sent to a battery if you have one. It would also notify you if you lost grid connection, or of communication faults. Communication is typically managed over wi-fi for domestic setups and ethernet in commercial setups. Remote solar farms do all this over 3G or 4G if there is no nearby telephone infrastructure.

Where you would pay money for a service is for an enterprise solution: this would allow you to also compare multiple installations and give you more detailed performance info, possibly also automating equipment replacement or engineer visits if malfunctions were being detected. (You would only get this from the major manufacturers with a dedicated team in this country, or an O&M service provider who was being paid to keep an eye on performance).

Third-party systems typically only work using generation-meter and export-meter info, but a surprising amount of knowledge can be gleaned from this – you are after all only trying to find anomalies, and once you have defined the expected performance this is quite straightforward. The advantage is that if you are managing lots of different installations with different inverters then you can pull all the data into one database. Big O&M companies may insist on this being added where a service level is being defined – such as 98% availability or emergencies responded to in under 24 hours. The service will also include additional data points such as pyranometer info and other weather data, depending on the scale of the installation.

The companies who operate big solar farms are often hedge funds and they don’t like leaving systems down and not running for any length of time given the income from feed in tariffs. Though they quite often don’t manage the farms as well as they could do…

Ben Whittle (07977 218473, ) is with the Welsh Government Energy Service.

Heat meters using ultrasonic flow measurement

Clamp-on ultrasonic flow meters are tricky things to deploy and I always get a sinking feeling when somebody says they’re going to use them. In this case they were fitted to measure cooling energy as part of a measurement and verification project. Provisional analysis in the early weeks of the project showed that all was not well: there were big apparent swings in performance, which were unrelated to what we knew was going on on the plant.

Data from the meters, which were downloaded at one-minute intervals, contained computed kWh values which I was consolidating into hourly totals. The person sending me the data was extracting the kWh figures into a spreadsheet for me but some instinct prompted me to request the raw data, which I noticed contained the flow and temperature readings as well as the kWh results. My colleague Daniel wrote a fast conversion routine which saved our friend the trouble and we discovered that there were occasional huge spikes in the one-minute kWh records which were caused by errors in the volumetric flow rates. The following crude diagram of the minute-by-minute flows over several weeks shows that as well as plausible results (under 500 cubic metres per hour) there were families of high readings spaced at multiples of about 750 above that:

Minute-interval flow measurements over several weeks

The discrepancies were sporadic, rare, and clearly delineated so Dan was able to modify his software to skip the anomalous readings and average over the gaps. We were lucky that flow rates and temperatures were relatively constant, meaning that the loss of an occasional minute per hour was not fatal. He also discovered that the heat meter was zeroing out low readings below a certain threshold, and he plugged those holes by using the flow and differential-temperature data to compute the values which the meter had declined to output.
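For the curious, the clean-up logic can be sketched in a few lines of Python. This is not Daniel’s actual code: the 500 m³/h plausibility limit comes from the chart above, but the water energy constant, the record format and the function names are my own assumptions:

```python
# Sketch of the clean-up described above: discard minute records whose flow
# reading is implausibly high, and reconstruct the kWh values the meter
# zeroed out from flow and differential temperature. Thresholds and the
# 1.163 kWh/(m3.K) factor for water are illustrative assumptions.

FLOW_LIMIT = 500.0      # m3/h; plausible readings sat below this
KWH_PER_M3_K = 1.163    # energy to change 1 m3 of water by 1 K

def minute_kwh(record):
    """Return a cleaned kWh value for one minute record, or None to skip it."""
    flow, delta_t, kwh = record["flow"], record["delta_t"], record["kwh"]
    if flow > FLOW_LIMIT:
        return None      # sporadic spike: skip and average over the gap
    if kwh == 0.0 and flow > 0.0:
        # Meter zeroed out a low reading: recompute from flow and delta-T.
        return flow * delta_t * KWH_PER_M3_K / 60.0
    return kwh

def hourly_total(records):
    """Consolidate minute records to an hourly total, scaling up to cover
    any skipped (spiky) minutes."""
    cleaned = [v for v in (minute_kwh(r) for r in records) if v is not None]
    if not cleaned:
        return 0.0
    return sum(cleaned) * len(records) / len(cleaned)
```

The scale-up in `hourly_total` is what makes "averaging over the gaps" safe here: it assumes, as we could in this case, that flows and temperatures were relatively constant within the hour.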

The next diagram shows the relationship between the meter’s kWh output, aggregated to eight-hourly intervals (on the vertical axis) with what we believe to be the true readings (on the horizontal axis). The straight line represents a 1:1 relationship and shows that, quite apart from the gross discrepancies, readings were anomalously high in almost every eight-hour interval.

Relationship between raw eight-hourly kWh and the estimated true values

The effect on our analysis was dramatic. Instead of erratic changes in performance not synchronised with the energy-saving measure being turned on and off, we were able to see clear confirmation that it was having the required effect.


Sankey diagram software

Sankey diagrams depict the flows of energy through a system or enterprise using arrows whose widths are proportional to the magnitudes of the flows.
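The underlying rule is simple enough to state in code: every flow is scaled by one common factor, so the widths of the branches leaving a node sum to the width entering it. A minimal sketch, with an invented energy balance:

```python
# Minimal sketch of the proportionality rule behind a Sankey diagram:
# each arrow's drawn width is its flow multiplied by a common scale
# factor, chosen here so the largest arrow gets a given width.

def arrow_widths(flows_kwh, max_width_mm=40.0):
    """Scale flows so the largest arrow is drawn max_width_mm wide."""
    biggest = max(flows_kwh.values())
    return {name: max_width_mm * kwh / biggest
            for name, kwh in flows_kwh.items()}

# Hypothetical site energy balance: input splits into three outputs
widths = arrow_widths({"fuel in": 1000, "useful heat": 650,
                       "flue losses": 250, "radiation losses": 100})
```

Because the flows balance (650 + 250 + 100 = 1000), the three outgoing arrow widths add up exactly to the incoming one, which is what makes the diagram readable at a glance.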

My friend and former colleague Kevin Cardall recently challenged my newsletter readers to come up with improvements on an Excel-based Sankey Diagram generator which he had devised (illustrated right, and attached here). His work was inspired by this website. Reader David B. responded rapidly with a neat enhancement but we also had a number of alternatives suggested by other readers.

Readers’ recommendations

When the subject of Sankey diagram software came up in 2003 one of my clients recommended SDRAW.  It is a commercial product but there is a free demonstration version.

Readers Colin G., Gary C. and Chris S. mentioned a free online tool, and reader Andy drew my attention to this toolkit for those who want to do it in Excel.



“Energy Service Company in a Box”

ESCO in a box is a concept being developed with government support with the aim of improving uptake of energy-saving measures among small and medium enterprises.

In essence, the project is designed to enable respected local community organisations (for example) to set up as providers of energy-saving services to the SMEs in their area, by equipping them with a complete package of technical, analytical, legal and financial components.

The originators of the idea, EnergyPro Ltd, have produced an overview of the scheme and I interviewed their managing partner, Steven Fawkes, about it on 15 April. The recording will be available until 14 May at  (apologies that the recording is missing the first couple of minutes).

Monitoring vehicle performance

Normally when we track vehicle performance we think in terms of miles per gallon or kilometres per litre. So in figure 1 for example we are looking at the weekly km/litre figure for a 32-tonne flatbed lorry delivering building materials:

Figure 1: trend in kilometres per litre

It is just about possible to discern worsening performance towards the end of the trace. But by taking a slightly different approach we can not only confirm that there is an issue, but also learn more about its timing, nature and magnitude. We should start by plotting weekly fuel consumption against weekly distance travelled as in Figure 2. (Distance travelled is the “driving factor” in this analysis not in the sense of driving the lorry, but in the sense that variation in weekly distance travelled “drives” variation in weekly fuel use):

Figure 2: relationship between weekly fuel consumption and distance driven

What we see is that there is an element of consumption (about 40 litres per week in this case) that is unrelated to distance driven. Most likely, this is fuel consumed while stationary. The straight-line relationship gives us a more precise gauge of performance because it allows us to deduce expected consumption each week quite accurately. We can thus show the deviation from expected fuel consumption as a time-history chart (Figure 3):

Figure 3: weekly deviation from expected fuel consumption

From this it is clear that there was a change in behaviour on or about 7 October, which manifests itself as a fairly consistent 50-litre-per-week excess almost every week since (see the highlighted points).
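The deviation calculation behind Figure 3 is straightforward. In this Python sketch the 40-litre fixed term comes from the article; the slope and the weekly data are invented for illustration:

```python
# Sketch of "deviation from expected" monitoring. Expected fuel use is
# fixed consumption plus a distance-related term; the deviation series
# exposes a step change far more clearly than km/litre would.

FIXED_L_PER_WEEK = 40.0   # consumption unrelated to distance (Figure 2)
RATE_L_PER_KM = 0.35      # slope of the fitted line (assumed value)

def expected_litres(km):
    """Expected weekly fuel use for a given weekly distance."""
    return FIXED_L_PER_WEEK + RATE_L_PER_KM * km

def deviations(weeks):
    """Actual minus expected fuel use for each (km, litres) pair."""
    return [litres - expected_litres(km) for km, litres in weeks]

# Hypothetical weeks: the last two show the ~50 litre/week excess
weeks = [(1200, 460), (1350, 515), (1280, 538), (1180, 502)]
dev = deviations(weeks)
```

Note that the excess in the last two weeks is roughly constant despite quite different distances, which is exactly the distance-independent signature discussed below.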

Furthermore, we can compare the adverse and achievable behaviour on the scatter diagram (Figure 4) in which the post-change points are marked:

Figure 4: comparison of behaviour before and after the change

The red straight line is a best fit through all the post-change points, and it shows us that the apparent excess fuel consumption is not distance-related. It might be a permanent change in terrain or traffic conditions or a new pattern of deliveries with more waiting time…  Or it might be a new driver who doesn’t turn their engine off while waiting. It probably isn’t a mechanical fault, because that would tend to change the gradient of the line. But at least we know when the change occurred (which will help trace the cause), its nature (which helps eliminate some kinds of fault) and its magnitude (which helps us decide whether to bother pursuing the case).

Try getting those insights from tracking the MPG.

This method of monitoring energy performance also applies to buildings and industrial processes, and you can find training on the method at

ISO50001 Q&A

One of my newsletter readers, A.M., wrote from New Zealand with a series of questions about ISO50001, the management-systems standard for energy management. He has just started to get to grips with the 2018 edition. Here are his questions and my answers:

A.M.: How do we distinguish between boundaries and scope? If the boundary is simply the physical borders of the system (e.g. the office buildings), what is the scope? And if the scope is, for example, “transportation”, why do we say that “transportation” could be an SEU [significant energy use] as a process?

V.V.: “Scope” means the range of activities covered. For example “manufacturing processes” or “heating, ventilation and air conditioning” or, as you say “transportation”. Within transportation you might have, for example, “freight” as an SEU, but equally you could declare all transport as significant. There is no paradox here.

A.M.: In the new edition, top management shall take on all the responsibilities that the management representative had in the last edition. It sounds impossible to delegate all these tasks to top management. How do we cope with this?

V.V.: If you are responsible for a task you can delegate it but still keep responsibility, i.e., it is your fault if the people you delegated it to fail to carry it out properly. Managers are accountable for the actions of subordinates.

A.M.: In section 4.3, page 8, after b) there is the statement “The organization shall not exclude an energy type within the scope and boundaries”. I do not understand the idea: why are we not allowed to do so?

V.V.: The requirement seems logical to me. For one example: if you have transport as your scope and you have plug-in hybrid vehicles, it is reasonable to insist that you cannot exclude any electricity used by them. Another example: if you had an oil-fired boiler and replaced it with a wood-fired one, it would evidently be wrong to exclude the wood fuel from consideration.

A.M.: Suppose a new opportunity were to replace a diesel boiler with a wood-pellet one: we would be changing the energy type, which does not necessarily reduce energy costs. Can we still call it an action plan?

V.V.: ISO50001 is about managing energy performance, not costs or carbon. If substituting a different fuel improves the energy performance, it will contribute to your aims and objectives, so it would make sense to classify the work as an action plan.

A.M.: I understand that for each energy type, we identify SEU(s) and for each SEU, we list the action plans. What if one action plan reduces diesel and increases electricity? Do we still keep it as an action plan for diesel?

V.V.: What matters is the overall energy performance. If the amount of electricity consumption that you add exceeds the amount of diesel energy saved, your energy performance would be worse after the project and it would therefore make no sense to include the project in an action plan within your EnMS. If the project is going to improve energy performance, you could declare it as part of an action plan.
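A trivial worked example makes the test concrete (all figures invented):

```python
# The comparison in the answer above, made concrete: a fuel-switching
# project belongs in an action plan only if its net effect on energy
# performance is positive. Figures are purely illustrative.

diesel_saved_kwh = 12000.0       # annual diesel energy the project removes
electricity_added_kwh = 9000.0   # annual electricity the project adds

net_saving_kwh = diesel_saved_kwh - electricity_added_kwh
improves_performance = net_saving_kwh > 0   # qualifies for an action plan
```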

LED versus metal halide lamps

Clare C., a regular reader of my energy-management bulletins, was perplexed when she started researching the cost advantages of LEDs as replacement for metal halide (MH) high-bay fittings. She discovered that MH lamps have luminous efficacies very similar to LEDs with both, broadly speaking, yielding about 100 lumens per watt. Certainly she wasn’t going to get the 50% saving she was after, and she asked my opinion.

There are a couple of factors that would tip the balance in favour of LEDs. Firstly, she needed to account for the fact that unlike LEDs, MH lamps need control gear which would add some parasitic load (say 20 watts on a 400-watt lamp).  Secondly, LEDs are more directional and can deliver all their output more effectively to the working space; MH lamps are omnidirectional and need reflectors which may lose some of the light output. So in terms of useful light output per circuit watt, a well-specified and correctly-installed LED fitting may have a moderate advantage.
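Putting rough numbers on this: with the 100 lm/W efficacy quoted above, 20 W of control gear on a 400 W MH lamp, and an assumed 10% reflector loss, the useful-light advantage works out at around 17%. A quick sketch (the reflector-loss figure is my assumption, not Clare’s data):

```python
# Back-of-envelope comparison of useful light per circuit watt, using the
# figures in the text: 100 lm/W for both lamp types, ~20 W of control gear
# on a 400 W MH lamp. The 10% reflector loss is an illustrative assumption.

LAMP_EFFICACY = 100.0   # lumens per lamp watt, both technologies

def mh_useful_efficacy(lamp_w=400.0, gear_w=20.0, reflector_loss=0.10):
    """Useful lumens per circuit watt for a metal halide fitting."""
    lumens = lamp_w * LAMP_EFFICACY * (1.0 - reflector_loss)
    return lumens / (lamp_w + gear_w)

def led_useful_efficacy(fitting_w=400.0):
    """LED fitting: directional output and negligible gear loss assumed."""
    return fitting_w * LAMP_EFFICACY / fitting_w

mh = mh_useful_efficacy()
led = led_useful_efficacy()
```

A moderate advantage, in other words, but nowhere near the 50% Clare was after on efficacy alone.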

But the big gain is in controllability. MH lamps have a warm-up time measured in minutes and a ‘restrike’ time (after turning off) which is longer still, to allow them to cool before being turned on again. This is common to all high-intensity discharge (HID) lamps. It does not matter how long the delay is; it discourages the use of automatic control, so HID lamps are often turned on well before they are needed, and then stay on for the duration. LEDs by contrast can be turned off at will and come back on as soon as they are needed. This is where Clare might get her 50% saving.

High-intensity discharge lamps in a sports hall – a good candidate for LEDs because of erratic occupancy