Category Archives: Energy analysis and reporting

Bulk measurement and verification

Anyone familiar with the principles of monitoring and targeting (M&T) and measurement and verification (M&V) will recognise the overlap between the two. Both involve establishing the mathematical relationship between energy consumption and one or more independently-variable ‘driving factors’, of which one important example would be the weather expressed numerically as heating or cooling degree days.

One of my clients deals with a huge chain of retail stores with all-electric services. They are the subject of a rolling refit programme, during which the opportunity is taken to improve energy performance. Individually the savings, although a substantial percentage, are too small in absolute terms to warrant full-blown M&V. Nevertheless he wanted some kind of process to confirm that savings were being achieved and to estimate their value.

My associate Dan Curtis and I set up a pilot process dealing in the first instance with a sample of a hundred refitted stores. We used a basic M&T analysis toolkit capable of cusum analysis and regression modelling with two driving factors, plus an overspend league table (all in accordance with Carbon Trust Guide CTG008). Although historical half-hourly data are available we based our primary analysis on weekly intervals.

The process

The scheme will work like this. After picking a particular dataset for investigation, the analyst will identify a run of weeks prior to the refit and use their data to establish a degree-day-related formula for expected consumption. This becomes the baseline model (note that in line with best M&V practice we talk about a ‘baseline model’ and not a baseline quantity; we are interested in the constant and coefficients of the pre-refit formula). Here is an example of a store whose electricity consumption was weakly related to heating degree days prior to its refit:
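For readers who like to see the mechanics behind such a chart, here is a minimal sketch in Python of how a baseline model of this kind can be fitted. The weekly consumption and degree-day figures are invented, and this is an illustration rather than the toolkit we actually used:

```python
import numpy as np

# Invented pre-refit weekly data: heating degree days and consumption (kWh)
degree_days = np.array([5, 12, 20, 28, 35, 42, 30, 18])
consumption = np.array([9100, 9200, 9350, 9500, 9600, 9750, 9520, 9300])

# Fit expected kWh = intercept + coefficient x degree days (the baseline model)
coefficient, intercept = np.polyfit(degree_days, consumption, 1)

def expected_kwh(dd):
    """Baseline estimate of weekly consumption for a given degree-day total."""
    return intercept + coefficient * dd

print(f"baseline model: {intercept:.0f} kWh/week + {coefficient:.1f} kWh per degree day")
```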

Cusum analysis using this baseline model yields a chart which starts horizontal but then turns downwards when the energy performance improves after the refit:
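The cusum trace itself is nothing more than a running total of the weekly differences between actual and expected consumption. A minimal sketch, again with invented figures:

```python
import numpy as np

# Invented weekly actual consumption and baseline-model estimates (kWh)
actual   = np.array([9480, 9510, 9490, 9530, 9200, 9150, 9100, 9120])
expected = np.array([9500, 9505, 9495, 9525, 9510, 9490, 9480, 9500])

# Cusum: cumulative sum of (actual - expected). A roughly level trace means
# performance matches the baseline; a sustained downward slope means savings.
cusum = np.cumsum(actual - expected)
print(cusum)
```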

Thanks to the availability of half-hourly data, the M&T software can display a ‘heatmap’ chart showing half-hourly consumption before, during and after the refit. In this example it is interesting to note that savings did not kick in until two weeks after completion of the refit:

Once enough weeks have passed (as in the case under discussion) the analyst can carry out a fresh regression analysis to establish the new performance characteristic, and this becomes the target for every subsequent week. The diagram below shows the target (green) and baseline (grey) characteristics, at a future date when most of the pre-refit data points are no longer plotted:

A CTG008-compliant M&T scheme retains both the baseline and target models. This has several benefits:

  • Annual savings can be projected on a fair basis even if the pre- or post-refit periods are less than a year;
  • The baseline model enables savings to be tracked objectively: each week’s ‘avoided energy consumption’ is the difference between actual consumption and what the baseline model yielded as an estimate (given the prevailing degree-day figures); and
  • The target model provides a dynamic yardstick for ongoing weekly consumptions. If the energy-saving measures cease to work, actual consumption will exceed what the target model predicts (again given the prevailing degree-day figures). See the final section below on routine monitoring, and the sketch immediately after this list.
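To make the dual-model arithmetic concrete, here is a small sketch of both calculations for a single week; the model coefficients and weekly figures are invented:

```python
# Invented characteristics: expected kWh = intercept + coefficient * degree days
baseline_intercept, baseline_coeff = 9050.0, 14.0   # pre-refit (baseline) model
target_intercept, target_coeff     = 8600.0, 11.5   # post-refit (target) model

degree_days, actual_kwh = 25.0, 8950.0              # this week's figures (invented)

baseline_estimate = baseline_intercept + baseline_coeff * degree_days
target_estimate   = target_intercept + target_coeff * degree_days

avoided_kwh   = baseline_estimate - actual_kwh   # savings tracked against the baseline model
overspend_kwh = actual_kwh - target_estimate     # deviation from the target model

print(f"avoided consumption this week: {avoided_kwh:.0f} kWh")
print(f"excess over target this week:  {overspend_kwh:.0f} kWh")
```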

I am covering advanced M&T methods in a workshop on 11 September in Birmingham

A legitimate approach?

Doing measurement and verification this way is a long way off the requirements of IPMVP. In the circumstances we are talking about – a continuous pipeline of refits managed by dozens of project teams – it would never be feasible to have M&V plans for every intervention. Among the implications of this is that no account is taken (yet) of static factors. However, the deployment of heat-map visualisations means that certain kinds of change (for example altered opening hours) can be spotted easily, and others will be similarly evident. I would expect that with the sheer volume of projects being monitored, my client will gradually build up a repertoire of common static-factor events and their typical impact. This makes the approach essentially a pragmatic one of judging by results after the event; the antithesis of IPMVP, but much better aligned to real-world operations.

Long-term routine monitoring

The planned methodology, particularly when it comes to dealing with erosion of savings performance, relies on being able to prioritise adverse incidents. Analysts should only be investigating in depth cases where something significant has gone wrong. Fortunately the M&T environment is perfect for this, since ranked exception reporting is one of its key features. Every week, the analyst will run the Overspend League Table report which ranks any discrepancies in descending order of apparent weekly cost:

Any important issues are therefore at the top of page 1, and a significance flag is also provided: a yellow dot indicating variation within normal uncertainty bands, and a red dot indicating unusually high deviation. Remedial effort can then be efficiently targeted, and expected-consumption formulae retuned if necessary.
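By way of illustration only (the real report comes straight out of the M&T software), a league table of this kind boils down to ranking each metering point’s weekly deviation by cost and flagging those that fall outside its normal uncertainty band. The store names, prices and figures below are invented:

```python
# Invented data: (name, actual kWh, expected kWh, uncertainty band in kWh, price per kWh)
streams = [
    ("Store 041", 10400, 9800, 250, 0.14),
    ("Store 187",  7250, 7300, 300, 0.14),
    ("Store 299", 12100, 11650, 500, 0.14),
]

rows = []
for name, actual, expected, band, price in streams:
    deviation = actual - expected
    cost = deviation * price
    flag = "red" if abs(deviation) > band else "yellow"   # red = beyond normal variation
    rows.append((cost, name, deviation, flag))

# Descending order of apparent weekly cost, worst offenders at the top
for cost, name, deviation, flag in sorted(rows, reverse=True):
    print(f"{name}: {deviation:+.0f} kWh, £{cost:+.2f} [{flag}]")
```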

Using M&T techniques on billing patterns

One of my current projects is to help someone with an international estate to forecast their monthly energy consumption and hence develop a monthly budget profile. Their budgetary control will be that much tighter because it has seasonality built into it in a realistic fashion.

Predicting kWh consumptions at site level is reasonably straightforward because one can use regression analysis against appropriate heating and cooling degree-day values, and then extrapolate using (say) ten-year average figures for each month. The difficulty comes in translating predicted consumptions into costs. To do this rigorously one would mimic the tariff model for each account, but apart from being laborious this method needs inputs relating to peak demand and other variables, and it presumes being able to get information from local managers in a timely manner. To get around these practical difficulties, I have been trying a different approach. Using monthly book-keeping figures I analysed, in each case, the variation in spend against the variation in consumption. Gratifyingly, nearly all the accounts I looked at displayed a straight-line relationship, i.e., a certain fixed monthly spend plus a flat rate per kWh. Although these were only approximations, many of them were accurate to half a percent or so. Here is an example in which the highlighted points represent the most recent nine months, which are evidently on a different tariff from before:

I am not claiming this approach would work in all circumstances but it looks like a promising shortcut.
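For what it is worth, the arithmetic of the shortcut is just a straight-line fit of monthly spend against monthly kWh, which recovers an apparent fixed charge plus a flat unit rate. A sketch with invented figures (not the actual accounts):

```python
import numpy as np

# Invented monthly book-keeping figures for one account
kwh   = np.array([41000, 38500, 36000, 30000, 27500, 29000])
spend = np.array([ 5150,  4860,  4570,  3880,  3590,  3760])

# Straight-line fit: spend ~ fixed monthly charge + flat rate per kWh
rate, fixed_charge = np.polyfit(kwh, spend, 1)
residuals = spend - (fixed_charge + rate * kwh)

print(f"apparent fixed charge: {fixed_charge:.0f} per month")
print(f"apparent unit rate:    {rate:.4f} per kWh")
print(f"worst-case error:      {100 * max(abs(residuals / spend)):.2f}%")
```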

Cusum analysis also had a part to play because it showed if there had been tariff changes, allowing me to limit the analysis to current tariffs only.

The methods discussed in this article are taught as part of my energy monitoring and targeting courses: click here for details

Furthermore, in one or two instances there were clear anomalies in the past bills where spends exceeded what would have been expected. This suggests it would be possible to include bill-checking in a routine monitoring and targeting scheme without the need for thorough scrutiny of contract tariffs.

Data centres and ISO 50001

Certification to ISO 50001 can yield benefits, but these would be fatally compromised if a misleading energy performance indicator were used to track progress.

Power Usage Effectiveness (PUE) is the data-centre industry’s common way of reporting energy performance, but it does not work. It is distorted by weather conditions and (worse still) gives perverse results if users improve the energy efficiency of the IT equipment housed in a centre through (for example) virtualisation.

This presentation given at Data Centres North in May 2018 explains the problem and shows how a corrected PUE should be computed.

Pie charts

 

In his highly-recommended book Information dashboard design, data-presentation guru Stephen Few criticises pie charts as being a poor way to present numerical data and I quite strongly agree. Although they seem to be a good way to compare relative quantities, they have real limitations especially when there are more than about five categories to compare. A horizontal bar chart is nearly always going to be a better choice because:

  1. there is always space to put a label against each item;
  2. you can accommodate more categories;
  3. relative values are easier to judge;
  4. you can rank entries for greater clarity;
  5. it will take less space while being more legible; and
  6. you don’t need to rely on colour coding (meaning colours can be used to emphasise particular items if needed).

Pie charts with numerous categories and a colour-coded key can be incredibly difficult to interpret, even for readers with perfect colour perception, and bad luck if you ever have to distribute black-and-white photocopies of them.


Data presentation is one of the topics I cover in my advanced M&T master classes. For forthcoming dates click here

 

Common weaknesses in M&T software

ONE OF MY GREAT FRUSTRATIONS when training people in the analysis and presentation of energy consumption data is that there are very few commercial software products that do the job sufficiently well to deserve recommendation. If any developers out there are interested, these are some of the things you’re typically getting wrong:

1. Passive cusum charts: energy M&T software usually includes cusum charting because it is widely recognised as a desirable feature. The majority of products, however, fail to exploit cusum’s potential as a diagnostic aid, and treat it as nothing more than a passive reporting tool. What could you do better? The key thing is to let the user interactively select segments of the cusum history for analysis. This allows them, for example, to pick periods of sustained favourable performance in order to set ‘tough but achievable’ performance targets; or to diagnose behaviour during abnormal periods. Being able to identify the timing, magnitude and nature of an adverse change in performance as part of a desktop analysis is a powerful facility that good M&T software should provide.

2. Dumb exception criteria: if your M&T software flags exceptions based on a global percentage threshold, it is underpowered in two respects. For one thing the cost of a given percentage deviation crucially depends on the size of the underlying consumption and the unit price of the commodity in question. Too many users are seeing a clutter of alerts about what are actually trivial overspends.

Secondly, different percentages are appropriate in different cases. Fixed-percentage thresholds are weak because they are arbitrary: set the limit too low, and you clutter your exception reports with alerts which are in reality just normal random variations. Set the threshold too high, and solvable problems slip unchallenged under the radar. The answer is to set a separate threshold individually for each consumption stream. It sounds like a lot of work, but it isn’t; it should be easy to build the required statistical analysis into the software.
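One workable way to do it, sketched below with invented figures, is to derive each stream’s threshold from the scatter of its own historical differences between actual and expected consumption; the two-standard-deviation multiplier is illustrative rather than prescriptive:

```python
import numpy as np

def exception_threshold(actual_history, expected_history, k=2.0):
    """Per-stream alert threshold: k standard deviations of the stream's own
    historical (actual - expected) differences, not a fixed global percentage."""
    deviations = np.asarray(actual_history) - np.asarray(expected_history)
    return k * deviations.std(ddof=1)

# Invented weekly history for one consumption stream (kWh)
actual   = [9480, 9510, 9490, 9530, 9470, 9520, 9505]
expected = [9500, 9505, 9495, 9525, 9490, 9510, 9500]

threshold = exception_threshold(actual, expected)
this_week_deviation = 9620 - 9508          # invented current-week figures
print("flag for investigation" if this_week_deviation > threshold else "within normal variation")
```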

3. Precedent-based targets: just comparing current consumption with past periods is a weak method. Not only is it based on the false premise that prevailing conditions will have been the same; if the user happens to suffer an incident that wastes energy, it also creates a licence to do the same a year later. There are fundamentally better ways to compute comparison values, based on known relationships between consumption and relevant driving factors.

Tip: if your software does not treat degree-day figures, production statistics etc as equal to consumption data in importance, you have a fundamental problem

4. Showing you everything: sometimes the reporting philosophy seems to be “we’ve collected all this data so we’d better prove it”, and the software makes no attempt to filter or prioritise the information it handles. A few simple rules are worth following.

  1. Your first line of defence can be a weekly exception report (daily if you are super-keen);
  2. The exception report should prioritise incidents by the cost of the deviations from expected consumption;
  3. It should filter out or de-emphasise those that fall within their customary bounds of variability;
  4. Only in significant and exceptional cases should it be necessary to examine detailed records.

5. Bells and whistles: presumably in order to give salesmen something to wow prospective customers, M&T software commonly employs gratuitous animation, 3-D effects, superfluous colour and tricksy elements like speedometer dials. Ridiculously cluttered ‘dashboards’ are the order of the day.

Tip: please, please read Stephen Few’s book “Information dashboard design”


Current details of my courses and masterclasses on monitoring and targeting can be found here

Energy monitoring of multi-modal objects

Background: conventional energy monitoring

In classic monitoring and targeting practice, consumption is logged at regular intervals along with relevant associated driving factors and a formula is derived which computes expected consumption from those factors. A common example would be expected fuel consumption for space heating, calculated from measured local degree-day values via a simple straight-line relationship whereby expected consumption equals a certain fixed amount per week plus so many kWh per degree-day. Using this simple mathematical model, weekly actual consumptions can then be judged against expected values to reveal divergence from efficient operation regardless of weather variations. The same principle applies in energy-intensive manufacturing, external lighting, air compressors, vehicles and any other situation where variation in consumption is driven by variation in one or more independently measurable factors. The expected-consumption models may be simple or complex.
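As a concrete illustration of a model with more than one driving factor, the sketch below fits expected consumption to degree days and production throughput by ordinary least squares; the figures are invented and the example is deliberately simple:

```python
import numpy as np

# Invented weekly history: heating degree days, production tonnes, consumption (kWh)
degree_days = np.array([30, 25, 40, 10, 5, 20, 35, 15], dtype=float)
tonnes      = np.array([120, 140, 100, 150, 160, 130, 110, 145], dtype=float)
kwh         = np.array([26400, 26900, 26500, 26700, 26900, 26600, 26350, 26850], dtype=float)

# Least-squares fit: expected kWh = fixed + a * degree days + b * tonnes
X = np.column_stack([np.ones_like(kwh), degree_days, tonnes])
fixed, per_dd, per_tonne = np.linalg.lstsq(X, kwh, rcond=None)[0]

# Weekly divergence of actual from expected consumption
expected = fixed + per_dd * degree_days + per_tonne * tonnes
print(np.round(kwh - expected))
```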

Comparing actual and expected consumptions through time gives us valuable graphical views such as control charts and cusum charts. These of course rely on the data being sequential, i.e., in the correct chronological sequence, but they do not necessarily need the data to be consecutive. That is to say, it is permissible to have gaps, for instance to skip invalid or missing measurements.

The Brigadoon method

“Brigadoon” is a 1940s Broadway musical about a mythical Highland village that appears in the real world for only one day a year (although as far as its inhabitants are concerned time is continuous), and its plot concerns two tourists who happen upon this remote spot on the day that the village is there. The story came to mind some years ago when I was struggling to deal with energy monitoring of student residences. Weekly fuel consumption naturally dropped during vacations (or should do) and I realised I would need two different expected-consumption models, one for occupied weeks and another for unoccupied weeks using degree days computed to a lower base temperature. One way to accommodate this would have been a single, more complex model that took the term/vacation state into account. In the event I opted for splitting the data history into two: one for term weeks, and the other for vacation weeks. Each history thus had very long gaps in it, but there is no objection to closing up the gaps so that in effect the last week of each term is immediately followed by the first week of the next, and likewise for vacations.
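A minimal sketch of the splitting idea, with invented figures (in a real scheme the vacation history would of course use degree days to a lower base temperature, which is glossed over here):

```python
import numpy as np

# Invented weekly records for a student residence: (degree days, kWh, occupied?)
weeks = [
    (32, 14800, True), (28, 14350, True), (25, 13900, True),   # term
    (30,  6200, False), (22,  5400, False),                    # vacation
    (20, 13350, True), (15, 12800, True),                      # term again
    (10,  4300, False), (8,   4100, False),                    # vacation again
]

def fit(history):
    """Straight-line degree-day model for one sub-history."""
    dd  = np.array([w[0] for w in history])
    kwh = np.array([w[1] for w in history])
    coeff, base = np.polyfit(dd, kwh, 1)
    return base, coeff

# Closing up the gaps: each occupancy state becomes its own continuous history
term     = [w for w in weeks if w[2]]
vacation = [w for w in weeks if not w[2]]

for label, history in (("term", term), ("vacation", vacation)):
    base, coeff = fit(history)
    print(f"{label}: {base:.0f} kWh/week + {coeff:.1f} kWh per degree day")
```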

This strategy effectively turned the single building into two different ones. Somewhat like Brigadoon, the ‘vacant’ manifestation of the building, for instance, only comes into existence outside term time, but it appears to have a continuous history. The diagram below shows the control chart using a single degree-day model on the left, as per conventional practice, while on the right we see the separate control charts for the two virtual buildings, plotted with the same limits to show the reduction in modelling error.

Not just space heating

This principle can be used in many situations. I have used it very successfully on distillation columns in a chemical works to eliminate non-steady-state operation. I recommended it for a dairy processing plant with automatic meter reading where the night shift only does cleaning while the day shift does production: the meters can be read at shift change to give separate ‘active’ and ‘cleaning’ histories for every week. A friend recently asked me to look at data collected from a number of kilns with batch firing times extending over days, processing different products; here it will be possible to split the histories by firing programme: one history for programme 20, another for 13, and so on.

Nice try, but…

A recent issue of the CIBSE Journal, which one would have thought ought to have high editorial standards, published an article which was basically a puff piece for a certain boiler water additive. It contained some fairly odd assertions, such as that the water in the system would heat up faster but somehow cool down more slowly. Leaving aside the fact that large systems in fact operate at steady water temperatures, this would be magic indeed. The author suggested that the additive reduced the insulating effect of steam bubbles on the heat-exchanger surface, and thus improved heat transfer. He may have been taking the word ‘boiler’ too literally because of course steam bubbles don’t normally occur in a low- or medium-temperature hot water boiler, and if they did, I defy him to explain how they would interfere with heat transfer in the heat emitters.

But for me the best bit was a chart relating to an evaluation of the product in situ. A scatter diagram compared the before-and-after relationships between fuel consumption and degree days (a proxy for heating load). This is good: it is the sort of analysis one might expect to see.

The chart looked like this, and I can’t deny that performance is better after than before. The problem is that this chart does not tell quite the story they wanted. The claim for the additive is that it improves heat transfer; the reduction in fuel consumption should therefore be proportional to load, and the ‘after’ line ought really to have a shallower gradient as well as a lower intercept. If the intercept reduces but the gradient stays the same, as happened here, it is because some fixed load (such as boiler standing losses) has disappeared. One cannot help wondering whether they had idle boilers in circuit before the system was dosed, but not afterwards.
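To see why the shape of the change matters, compare invented before-and-after characteristics in which the intercept falls but the gradient stays the same:

```python
# Invented characteristics: kWh = intercept + gradient * degree days
before_intercept, before_gradient = 5200.0, 95.0
after_intercept,  after_gradient  = 4300.0, 95.0   # intercept down, gradient unchanged

for dd in (0, 20, 40):
    saving = (before_intercept + before_gradient * dd) - (after_intercept + after_gradient * dd)
    print(f"{dd:>2} degree days: saving {saving:.0f} kWh")

# The saving is identical at every load: consistent with a fixed load (such as
# idle-boiler standing losses) having been removed, not with improved heat
# transfer, which would make the saving grow in proportion to the load.
```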

The analysis illustrated here is among the useful techniques people learn on my energy monitoring and targeting courses.

MAVCON17 WAS A HIT

We’ve had some very enthusiastic feedback from delegates at MAVCON17, the third National Measurement and Verification Conference, which we held on 16 November.

Delegates wrestle with the thorny issue of non-routine adjustments

Adam Graveley of Value Retail, for example, described it as “a very informative and well-organised conference that provided a great deal of practical insight”.

The event consistently attracts around 70 M&V practitioners who value not only the networking opportunity but also what they call the ‘geek element’ (expert technical papers with extended question-and-answer sessions), group exercises, and a no-holds-barred expert panel discussion for which this year’s theme was “when M&V goes wrong”.

(l. to r.) Chairman Richard Hipkiss, keynote speaker Denis Tanguay and organiser Vilnis Vesma

Our keynote speaker was Denis Tanguay, Executive Director of the Efficiency Valuation Organisation, the body responsible for the International Performance Measurement and Verification Protocol (IPMVP). We are planning to run MAVCON again in early November 2018 and are open for offers of technical papers and ideas for group exercises.

We are grateful to our other speakers Dave Worthington, Hilary Wood, Colin Grenville, Steve Barker and Emma Hutchinson and our expert panellists Sandeep Nair, Ellen Salazar and Quinten Babcock. You can read more about them at the conference web site www.MAVCON.uk

We should also acknowledge the venue, the Priory Rooms, for the quality of their service including excellent catering which also drew much favourable comment.

 

Daylight-linked consumption

When monitoring consumption in outside lighting circuits with photocell control, it is reasonable to expect weekly consumption to vary according to how many hours of darkness there were. And that’s exactly what we can see here in this Spanish car park:

It is a textbook example: with the exception of two weeks, it shows the tightest correlation that I have ever seen in any energy-consuming system.

The negative intercept is interesting, and a glance at the daily demand profile (viewed as a heatmap) shows how it comes about:

Moving left to right, we see that from January to March the duration of daylight (zero consumption, shown in blue) increases. High consumption starts at dusk and finishes at dawn, but from about 10 p.m. to 5 a.m. it drops back to a low level. It is this “missing” consumption for about seven hours in the night which creates the negative intercept. If they kept all the lights on from dusk to dawn, the line would go through the origin.
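The arithmetic is straightforward. Assuming, purely for illustration, a 40 kW circuit that is switched back to a negligible level for about seven hours on each of seven nights a week:

```python
# Why part-night switch-off produces a negative intercept (invented figures)
full_load_kw   = 40.0
off_hours_week = 7 * 7        # hours per week at the reduced level (treated as zero)

# Weekly kWh ~ full_load * (hours of darkness - off hours)
#            = full_load * hours_of_darkness - full_load * off_hours
# so the regression gradient is ~ full_load and the intercept is negative.
for darkness in (80, 100, 120):                     # weekly hours of darkness
    print(darkness, "h dark:", full_load_kw * (darkness - off_hours_week), "kWh")

print("implied intercept:", -full_load_kw * off_hours_week, "kWh")
```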

For weekly and monthly tabulations of hours of darkness (suitable for England and other countries on similar latitudes) click here.

 

Energy Savings Opportunity Scheme

ESOS is the UK government’s scheme for mandatory energy assessments which must be reviewed and signed off by assessors who are on one of the approved registers. We are now in Phase 2 with a submission deadline in December 2019, but the Environment Agency is trying to get participants to act now.

I run a closed LinkedIn group for people actively engaged with ESOS; it provides a useful forum with lots of high-quality discussion.

Background reading

Useful resources

These are documents which I have developed to support the ESOS assessment process. I used them for my assignments during the first phase and have since revised them in the light of experience: