All posts by Editor

SECR in a nutshell

Updated 30/4/21 and 4/4/22

“Streamlined energy and carbon reporting” (SECR) is the term commonly used to describe the regime introduced with the Companies (Directors’ Report) and Limited Liability Partnerships (Energy and Carbon Report) Regulations 2018, Statutory Instrument 1155. This is not a self-contained set of regulations like ESOS; instead it consists of nothing but dozens of amendments to existing company reporting law. In short, undertakings covered by SECR simply need to collate annual total energy and emissions data and give them to their company secretary or accountant for inclusion in the annual report that they already have to prepare.

As this is an extension of financial reporting, compliance will be policed by the Financial Reporting Council, and not, as one might have thought, by the Environment Agency. The good news is that in terms of accuracy and completeness, your SECR reports need only be free of material misstatements, and according to the Government’s published guidance it is fine for a company to omit 2-5% of its energy or emissions if it considers them not to be material in the grand scheme of things.

Who is affected?

SECR applies to all quoted companies, and to unquoted companies and limited liability partnerships (LLPs) which meet two of the following three criteria:

  1. At least 250 employees;
  2. £36 million annual turnover or more;
  3. Balance sheet total of £18 million or more.

This is not quite the same as the ESOS regulations, in which an undertaking would be obliged to participate if it met criterion (1) alone.

Undertakings which consumed less than 40,000 kWh in the year being reported do not have to report their actual figures but must still state that they fell below that threshold.

It is fine for a company to omit 2-5% of its energy or emissions if it considers them not to be material

Group reports should include the figures for all subsidiaries apart from those that would be exempt. Under these circumstances a subsidiary need not report its own figures although, of course, it will still need to collate the data for group use.

What must be reported?

The requirement covers energy use and greenhouse gas emissions arising from all use of electricity, gas, and transport fuels. Incidentally, the definition of “gas” is not limited to natural gas but refers to any gaseous fuel, so it even includes hydrogen. The inclusion of electricity on an equal footing with other energy sources means that SECR differs from emissions reporting, in which fuels and purchased electricity are considered under different ‘scopes’. Somewhat bizarrely, liquid and solid fuels do not have to be accounted for, unlike in CRC (which SECR supposedly replaces), ESOS and the EU ETS. Bought-in heat, steam and cooling are included, but not compressed air.

Quoted companies must report global figures, but LLPs and unquoted companies only have to declare UK consumption and emissions.

In the main, therefore, any undertaking that already keeps even very basic monthly fuel and electricity consumption records for its fixed assets will have no trouble collating the necessary energy data. Transport fuel, of course, is a different issue. As many an ESOS participant has found, transport fuel data are disproportionately hard to collect relative to their importance in the mix. Luckily, if you can reasonably assert that your transport energy and emissions are not material to the overall picture, you can just leave them out.

My advice would therefore be to look first at transport fuels, decide whether they are material, and if so put resources into capturing the data or estimating the figures.

SECR requires emissions to be reported as well as energy consumptions. The necessary factors are published by the government and undertakings would be well advised to set up a methodical procedure for carrying out the calculations, because they must include details of their methodology alongside the data that they report.
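Since the methodology must be documented alongside the data, it pays to make the calculation itself simple and repeatable. The sketch below shows the general shape of such a procedure; the conversion factors are invented placeholders, not the official government figures, which change each reporting year.

```python
# Sketch of a methodical energy-to-emissions calculation for SECR.
# NOTE: the factors below are invented placeholders for illustration;
# substitute the official UK government conversion factors published
# for the reporting year in question.
FACTORS_KG_CO2E_PER_KWH = {
    "electricity": 0.21,   # placeholder grid-average factor
    "natural_gas": 0.18,   # placeholder
    "diesel": 0.25,        # placeholder
}

def emissions_kg(consumption_kwh):
    """Convert annual kWh per energy type into kg CO2e."""
    return {
        fuel: kwh * FACTORS_KG_CO2E_PER_KWH[fuel]
        for fuel, kwh in consumption_kwh.items()
    }

annual = {"electricity": 1_200_000, "natural_gas": 800_000, "diesel": 150_000}
for fuel, kg in emissions_kg(annual).items():
    print(f"{fuel}: {kg / 1000:.1f} tonnes CO2e")
```

Keeping the factors in a single table keyed by fuel makes the methodology easy to describe in the report and easy to update each year.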

Undertakings must report intensity metrics, of which an example would be kWh per unit of saleable product output. The idea is that stakeholders will be able to see, once a year, what progress the company is making in energy efficiency. This is actually a somewhat naïve and fanciful aim, given all the ways that such simple ratios can be distorted by external factors nothing to do with energy performance. Even more implausible is the idea of making ‘benchmarking’ comparisons between enterprises, but that is the government’s stated objective.

Companies are entitled not to report intensity metrics if, in their opinion, it would be prejudicial to their interests to do so. For example it might entail disclosing sensitive information about their sales volume. One option is to quote a metric based on financial turnover (which is already disclosed anyway). This may not be meaningful, but then neither is anything else they might report.

Finally, annual reports must now include descriptions of the principal measures taken to improve energy efficiency during the year in question, if there were any.

What is the compliance deadline?

Energy, emissions, intensity metrics and associated methodologies must be stated in annual reports covering accounting years starting in April 2019 or later, so by now all companies will have had full reporting years covered by the scheme (the last wave was for reporting years ending in February 2021). Actual report submission deadlines fall six months later for public companies, nine for private companies.


See links to SECR resources

Bulletin 26 April: realities of decarbonisation; T is for targets

Good morning

FEATURED EVENT

We’re hearing a lot in the news, including ambitious government announcements, about ‘decarbonising heat’. Most of the media coverage is about the domestic sector but the chances are that, as a reader, your focus is more on commercial, public-sector or industrial buildings or larger-scale residential facilities. You may have heard that experience with biomass and heat-pump installations has not always been positive, and you will want to understand the problems and pitfalls. You may also be getting questions about how hydrogen in the public supply might play out, and need ready answers.

We’ve therefore arranged a half-day intensive workshop on the practical realities of decarbonising heat in non-domestic buildings, with a team of experienced experts to talk about the lessons that have been learned and the technical and other issues that organisations face in coming decades. Details are at https://vesma.com/z200 and as ever your readers’ discount code is EMR2012.

–o–

T IS FOR TARGETS

In energy management the word ‘target’ has two distinct meanings. The first is the ‘aspirational’ target, usually set from on high without regard to practicability, to reduce consumption by x% within a certain time. It’s not a particularly smart approach. In fact in a large organisation it is almost guaranteed to fail, and here’s why. Top management sets a reduction target of x%. There being no easy, transparent and equitable way to do anything else, all departments adopt the same x% target and pass it down the chain to the lowest-tier managers. For some of them, x% is impossible to achieve so they fail. For others it will be achievable or even easy. They will save x% and then probably stop trying. Why would they over-achieve? They have other work to worry about and anyway we all know if we beat our target our managers will just give us a harder target next time. So take the roughly x% saved by the successful ones, blend this with the lower savings achieved by the first group, and you have an aggregate failure.

For me, achievability is key, and when I talk about a performance ‘target’ I mean just maintaining the best performance you can demonstrably achieve. In other words, avoid accidental excess consumption (see next article). This may not be ambitious but it is worth doing; using regression or other modelling methods supported by cusum analysis it is possible to ensure that everything has its own achievable performance characteristic. The word ‘achievable’ is crucial: it’s much more likely to get buy-in than the megaphone-management targeting that I described earlier.

I said achievable ‘characteristic’ because my concept of a target differs from common understanding in another important respect. My ‘target’ is not expressed as an annual kWh figure, nor indeed as a performance indicator, but in terms of an expected consumption quantity dynamically linked to relevant driving factors, meaning that you can track performance at whatever interval you want.
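To make the idea concrete, here is a minimal sketch of such a dynamically linked target: expected consumption is modelled by regression against a driving factor (weekly degree days in this invented example), so performance can be checked at whatever interval you like rather than once a year.

```python
# Minimal sketch of a target expressed as expected consumption linked
# to a driving factor, rather than as a fixed annual kWh figure.
# All data values are invented for illustration.
import numpy as np

degree_days = np.array([12, 25, 40, 55, 70, 85])        # driving factor
actual_kwh  = np.array([310, 420, 540, 660, 770, 900])  # metered use

# Fit kWh = base + slope * degree_days by least squares
slope, base = np.polyfit(degree_days, actual_kwh, 1)

def expected_kwh(dd):
    """Achievable target for any period, given its degree days."""
    return base + slope * dd

for dd, kwh in zip(degree_days, actual_kwh):
    print(f"dd={dd:3d}  actual={kwh}  expected={expected_kwh(dd):.0f}")
```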

–o–

£50 FOR THE CHARITY OF YOUR CHOICE?

If you’ve ever stumbled over something that was wasting energy in a manner that was easily avoidable, I’d like to hear about it. Every month we’ll donate £50 (or the equivalent) to the charity nominated by the person whose entry is deemed the best by our guest judge. Details at http://EnManReg.org/starspot

Kind regards
Vilnis

Make appliances rate again

DATELINE 1 APRIL, 2021: The Government is keen to nudge people to choose more energy-efficient household appliances and for many years has helped consumers by getting manufacturers to put energy labels on products, typically rating them A to G to signify that they are more or less energy efficient (a concept too complex for most people to grasp). And for people who find the concept of A, B, C etc too complex to grasp they add coloured arrows of different lengths. The shorter the arrow, the higher the efficiency.

The march of progress has caused problems because many products are now more energy efficient than the bureaucrats foresaw. They are crowded into the ‘A’ rating band, and unfortunately there are no letters before A in the alphabet so ‘A’ is now sometimes subdivided into A+, A++ and A+++. However, most people find this concept too difficult to grasp, so the efficiency scales for affected appliances will be regraded A to G so that for example what was A++ will now become B, A will become D, and so on (they will take F off).

Meanwhile a rival scheme for washer-dryers caught my attention. This gives a three-letter rating signifying the efficiencies of washing, spinning, and tumble-drying parts of the cycle. Thus a machine that is in the most efficient category in every respect gets an ‘AAA’ rating. With a bit of forethought they could have started later in the alphabet to allow room for future improvement. They could even have helped people by using the sequences W to Z for Washing, S to V for Spinning, and D to G for Drying. Then a machine currently labelled ‘ABC’ would become ‘WTF’.

Another satisfied customer

Delighted to receive this unsolicited testimonial from a client who is moving to a new job:

“Thanks so much for the fantastic service you have offered over the years.  Vesma has consistently provided [our company] with the kind of flexible, responsive service that has met our demand for an innovative approach to energy management time and again.  I will continue to recommend your training and consultancy services to others.”

Credit should go to my fellow-director Daniel Curtis who developed the innovative but simple data infrastructure we built our services on, and who was generally the front-line responder when it was needed.

Control charts in energy performance monitoring

Once you have discovered how to routinely calculate expected consumptions for comparison with actual recorded values, you can get some very useful insights into the energy behaviour of the processes, buildings and vehicles under your supervision. One thing you can do is chart the history of how actual and expected consumption compare. In this example we are looking at the daily electricity consumption of a large air-compressor installation:

Comparison of actual daily consumptions with what they should have been given the output of the compressors

The green trace represents expected kWh (computed by a formula based on the daily air output) and the individual points represent the actual metered kWh. Most of the time the two agree, but there were times in this case when they diverged.

It is illuminating to concentrate on the extent to which actual consumption has deviated from expected values, so in the following chart we focus on the difference between them:

The difference between actual and expected consumption.

There will always be some discrepancy between actual and expected consumptions. Part of the difference is purely random, and the limits of this typical background variation are signified by the red dotted lines. If the difference goes outside these bounds, it is probably because of an underlying shift in how the object is performing. In the above diagram there were three episodes (one moderate, two more severe) of abnormal performance. Significant positive deviations (above the upper control limit) are more usual than negative ones because consuming more energy than required for a given output is much more likely than using less.

For training in energy consumption analysis look for ‘monitoring and targeting’ at VESMA.COM

In a well-constructed energy monitoring and targeting scheme, every stream of consumption that has a formula for expected consumption will also have its own control limit. The limits will be narrow where data are reliable, the formula is appropriate, and the monitored object operates in a predictable way. The limits will be wider where it is harder to model expected consumption accurately, and where there is uncertainty in the measurements of consumption or driving factors. However, it is not burdensome to derive specific control limits for every individual consumption stream because there are reliable statistical methods which can largely automate the process.
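As a sketch of how those statistical methods might work, the snippet below derives a control limit for one stream from the spread of its own historical residuals (actual minus expected). The ±2 standard deviation multiplier and the residual history are illustrative assumptions, not a universal rule.

```python
# Sketch of deriving a control limit for one consumption stream from
# the scatter of its historical residuals (actual minus expected kWh).
# The +/-2 sigma convention and the data below are assumptions made
# for illustration only.
import statistics

residuals = [5, -12, 8, -3, 14, -9, 2, -6, 11, -7]  # invented history, kWh

sigma = statistics.stdev(residuals)
upper_limit = 2 * sigma
lower_limit = -2 * sigma

def out_of_control(deviation):
    """Flag a deviation outside the typical background variation."""
    return deviation > upper_limit or deviation < lower_limit

print(f"control limits: +/-{upper_limit:.1f} kWh")
print(out_of_control(25))   # a big overspend gets flagged
print(out_of_control(4))    # normal scatter does not
```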

Control charts are useful as part of an energy awareness-raising programme. It is easy for people to understand that the trace should normally fall between the control limits, and that will be true regardless of the complexity of the underlying calculations. If people see it deviate above the upper limit, they know some energy waste or losses have occurred; so will the person responsible, and he or she will know that everyone else could be aware of it as well. This creates some incentive to resolve the issue, and once it has been sorted out everyone will see the trace come back between the limits.

Demand visualisation with heatmap views

The principle

Widespread adoption of automatic meter reading has given many energy users a huge volume of fine-grained data about energy consumption. How best to use it? A ‘heat-map’ chart is a powerful visualisation technique that can easily show ten weeks’ half-hourly data in a single screen. This for example is the pattern of a building’s gas consumption between November and January:

Each vertical slice of the chart is one day, running midnight to midnight top to bottom, with each half-hourly cell colour-coded according to demand. This creates a contour-map effect, and when you look at this specific example, you can see numerous features:

  • Fixed ‘off’ time;
  • Optimised startup time (starts later when the building has not cooled down as much overnight);
  • Peak output during startup;
  • Off at weekends but with some heating early on Saturday mornings;
  • Shut-down over Christmas and New Year; and
  • A brief burst of consumption during the Christmas break, presumably frost protection.
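For anyone wanting to build such a view, the data-shaping step is straightforward: arrange the half-hourly readings into a grid of 48 rows (midnight to midnight) with one column per day, then hand it to any charting tool that can colour cells by value. A sketch with invented readings:

```python
# Sketch of shaping half-hourly meter readings into a heat-map grid:
# 48 rows (one per half-hour slot) and one column per day. Plotting is
# left to your charting tool of choice; random values stand in for
# real meter data here.
import random

random.seed(1)
days = 70                      # ten weeks
readings = [random.uniform(0, 100) for _ in range(days * 48)]

# One column per day, one row per half-hour slot
grid = [[readings[d * 48 + hh] for d in range(days)] for hh in range(48)]

print(len(grid), len(grid[0]))   # 48 rows x 70 columns
```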

Further examples

This building’s gas consumption pattern is quite similar to the previous one’s (they both belong to the same organisation), but the early-morning startup boost is much more evident and occurs even during the Christmas and New Year break:

Next we have a fairly typical profile for electricity consumption in an office building. What is slightly questionable is the higher daytime consumption near the start (April) compared with the end (June). This suggests the use of portable heaters. Note also that the peak half-hourly demands can easily be seen (Friday of the second week and Wednesday of the fifth week). In both cases it is evident that those peaks occurred not because of any specific incident but because consumption had generally been higher than usual all day:

In this final example we are looking at short-term heatmap views of electricity feeding a set of independent batch processes in a pharmaceutical plant. The left-hand diagram is the actual measured consumption while the right-hand diagram is the expected profile based on a mathematical model of the plant into which we had put information about machine scheduling:


Proving savings: case history using fine-grained energy data

This case history shows how fine-grained energy measurements enabled savings to be verified in the face of assorted practical problems.

THE STORY concerns a large air-conditioned establishment in the Middle East which had been fitted with adiabatic cooling sprays on its central chillers. It is one of a number belonging to an international chain, and my purpose was to establish whether similar technology should be contemplated elsewhere in the chain.

I had originally been commissioned to give a second opinion on the savings claims made by the equipment supplier. Although the claims were quite plausible, they lacked some credibility because they were based on an extremely short evaluation. So in an initial effort at independent checking I obtained several years’ worth of monthly consumption data at the whole-site level and analysed it against local cooling degree days. The results were ambiguous because there appeared to be some unrelated phenomenon at play which resulted in the site toggling on a seasonal cycle between two performance characteristics, one significantly worse than the other, with an impact sufficient to mask the beneficial effect of the energy conservation measure (ECM).

Without reliable evidence I declined to verify the supplier’s assessment, and recommended a deliberate test using a ‘retrofit isolation’ approach based on three existing electricity submeters (one per chiller) and a new heat meter in the common chilled-water circuit. Because nobody wanted to pay for a proper heat meter, a clamp-on ultrasonic flow element was specified. At this point the pandemic struck.

Because we had data-logged metering, and because the adiabatic cooling system sounded as if it would lend itself to being turned on and off, the measurement and verification plan was based on day-on, day-off testing. I had proposed a ten-week testing campaign which would give us 35 observations in each state. The plan called for two regression models to be developed: one for the ECM ‘on’ days and the other for the ‘off’ days. Extrapolation from the regression formulae would indicate a percentage difference and variance within each would confirm how much uncertainty there was.
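The two-model comparison can be sketched as follows; the consumption and degree-day figures are invented, and in practice the regressions would be accompanied by the variance checks mentioned above.

```python
# Sketch of the day-on/day-off comparison: fit separate regression
# lines against cooling degree days for the ECM-on and ECM-off days,
# then express the gap as a percentage saving. All data are invented.
import numpy as np

cdd     = np.array([10, 14, 18, 22, 26, 30])        # cooling degree days
kwh_off = np.array([520, 610, 700, 790, 880, 970])  # ECM disabled
kwh_on  = np.array([440, 515, 590, 665, 740, 815])  # ECM enabled

fit_off = np.polyfit(cdd, kwh_off, 1)
fit_on  = np.polyfit(cdd, kwh_on, 1)

# Compare predicted consumption at a representative condition
dd = 20
pred_off = np.polyval(fit_off, dd)
pred_on  = np.polyval(fit_on, dd)
saving_pct = 100 * (pred_off - pred_on) / pred_off
print(f"estimated saving at {dd} CDD: {saving_pct:.1f}%")
```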

Then came the collision with reality. The ECM supplier said disabling it wasn’t as simple as turning the cooling sprays off; he insisted that the mesh screens which were part of the installation should come off as well and be reinstated when the spray was re-enabled. This was a quite reasonable stance because it gave a more valid comparison, but unfortunately he couldn’t afford to send a man to site every day to do what was necessary. Meanwhile the establishment’s manager had got wind of the project and put pressure on the site engineer not to disable the ECM, which he was convinced was saving him a lot of money which (thanks to lockdown) the business could not afford to lose. He also had a point. Luckily we agreed a compromise: in exchange for them coming down to a three-day on-off cycle, I promised to monitor things closely and terminate the test as soon as conclusive results emerged.

Needless to say, the ultrasonic meter let us down. The technician responsible for downloading its data reported that some of its hourly totals looked suspiciously high, and he proposed filtering them out. But I feared that he might not be able to capture all the rogue points. Some of them might only be slightly wrong, but wrong nonetheless. When we drilled down we discovered that the raw data were stored at one-minute intervals, with the measurement fault manifesting itself as gross errors confined to occasional one-minute records. You can see this from this figure, which spans several months but plots every single one-minute record:

Figure 1: one-minute interval flow measurements fell into three bands during June and July. Only values in the bottom band can be trusted.

Valid readings at the one-minute interval clearly fell below a threshold of about 450 m3/hr, and abnormal readings were (a) very clear and (b) sparsely distributed through time, so my colleague Daniel Curtis was able to push the records through a sieve, take out the lumps and thereby cleanse the data (and that included reinstating hourly values which the meter software had censored because they appeared to be too low). We were helped in this by the flow normally being relatively constant, so that simple interpolation was an accurate gap-filling strategy. When we compared reported flow measurements with corrected values at eight-hour intervals we saw that in reality almost every reported value had been wrong:

Figure 2: raw flow measurements in eight-hour intervals compared with their cleansed values. In the absence of errors all the points would fall on the 1:1 line
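The cleansing logic can be sketched in a few lines: discard one-minute readings above the plausibility threshold, and interpolate across the gaps from the neighbouring valid readings. The 450 m3/hr threshold follows the figure quoted above; the sample readings are invented.

```python
# Sketch of cleansing one-minute flow data: readings above a
# plausibility threshold are treated as gross errors and replaced by
# linear interpolation between neighbouring valid readings.
THRESHOLD = 450.0  # m3/hr: valid readings fell below this

def cleanse(flows):
    """Replace out-of-band readings with interpolated values."""
    flows = list(flows)
    valid = [i for i, f in enumerate(flows) if f <= THRESHOLD]
    for i, f in enumerate(flows):
        if f > THRESHOLD:
            before = max((j for j in valid if j < i), default=None)
            after  = min((j for j in valid if j > i), default=None)
            if before is not None and after is not None:
                frac = (i - before) / (after - before)
                flows[i] = flows[before] + frac * (flows[after] - flows[before])
            else:  # no valid neighbour on one side: copy the nearest
                flows[i] = flows[before if before is not None else after]
    return flows

raw = [410.0, 412.0, 9800.0, 414.0, 415.0]   # one gross error
print(cleanse(raw))
```

Simple interpolation is defensible here only because, as noted above, the flow was normally relatively constant.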

Analysis then proceeded using cleaned-up eight-hour interval data, but we were still not out of the woods. From the individual chillers’ electricity meters we could see that the site operations team were changing the chiller sequence from time to time. Fortunately their manager agreed that barring emergencies they could leave the same chillers in service, with the same one in the lead, until the test finished. They also agreed not to change the chilled-water set point temperature, which we discovered they were in the habit of tweaking. Of course in a textbook measurement and verification exercise these static factors would have emerged during planning but this project was being conducted in a hurry on a shoestring and managed remotely. That wasn’t the end of it: later in the test we would have interruptions because the chiller maintenance firm was scheduled to clean the condenser coils. More on that later.

Nor were those the only complications. A few weeks after we started testing in anger in June, the plant did one of its random, unexplained and unexpected changes in energy performance. In Figure 3 you can see the deviation from predicted consumption for intervals when the ECM was active:

Figure 3: deviation from expected consumption, based on performance observed in June

The chillers change their behaviour about two-thirds of the way through this history. If we look at the regression model for the ‘ECM on’ condition we can see the unexplained shift very clearly:

Figure 4: with the ECM active, performance in July was consistently different from what it was in June

So we ended up, in effect, with two tests: one for June and one for July. Looking now at the comparison between ‘on’ and ‘off’ performance in July alone we saw a clear difference:

A similar picture was obtained from observations made in June and the conclusions were that savings of 16.1% and 17.5% were attributable to the ECM in June and July respectively. However, as a further bonus we observed:

  • The chiller installation’s performance spontaneously deteriorated by 8% at the end of June, echoing behaviour first witnessed in 2019. Identifying the cause will probably save money quite easily; and
  • Cleaning the condenser coils made no difference to performance. They were probably clean and it was a waste of money, so I suggested not cleaning them until condenser temperatures started to rise.

What lessons would I draw from this episode? That the verifier needs to be vigilant, sceptical, and cautious but flexible. In fact flexibility is needed on all sides, and that is best served by developing trust; trust in this case was built up through openness and continuous communication.

Energy audits and surveys: rule of three

ENERGY surveys and audits – deliberate studies to find energy-saving opportunities – can be done with three levels of depth and thoroughness, can look at three broad aspects of operations, and will generally adopt one of three approaches.

Depth and thoroughness

Let’s take depth and thoroughness first. Level 1 would be an opportunity scan. This typically has a wide scope, and is based on a walk-through inspection. It will use only readily-available data, and provide at most only rough estimates of savings with no implementation cost estimates. It will yield only low-risk recommendations (the “no-brainers”) but should identify items for deeper study.

Level 2 is likely to have a selective scope (based perhaps on the findings from a Level 1 exercise). It is best preceded by a desktop analysis of consumption patterns and relationships, which means first collecting additional data on consumption and the driving factors which influence it. It should yield reasonably accurate assessments of expected savings but probably at best only rough cost estimates. It can therefore provide some firm recommendations relating to ‘safe bets’ and otherwise identify possible candidates for investment.

Level 3 is the investment-grade audit. This may have a narrow scope – perhaps one individual project – and will demand a sketch design and feasibility study, with accurate assessments of expected savings, realistic quotations for implementation, sound risk evaluation and (I would recommend) a measurement and verification plan.

Aspects covered

Next we will look at the three broad aspects of operations that the audit could cover. These are ‘technical’, ‘human factors’, and ‘procedural’.

Technical aspects will encompass a spectrum from less to more intrusive (starting with quality of automatic control and set points through energy losses to component efficiencies). In manufacturing operations the range continues through process layouts, potential for process integration and substitution of alternative processes.

Human-factors aspects meanwhile will cover good housekeeping, compliance with operating instructions, maintenance practices, training needs and enhanced vigilance.

Thirdly, procedural aspects will include the scope for improved operating and maintenance instructions, better plant loading and scheduling, effective monitoring and exception handling, and ensuring design feedback.

Approaches to the audit

The final dimension relates to audit style, which I characterise as checklist-based, product-led, or opportunity-led.

The checklist-based approach suits simple repetitive surveys and less-experienced auditors.

Product-led audits have a narrow focus and exploit the expertise of a trusted technology supplier. Because the chosen focus is often set by advertising or on a flavour-of-the-month basis, the risk is that the wrong focus will be chosen and more valuable opportunities will be missed. Or worse still, the agenda will be captured by snake-oil merchants.

Finally we have the ‘opportunity-led’ style of audit. This is perhaps the ideal, although not always attainable because it needs competent auditors with diverse experience and will include the prior analysis and preliminary survey mentioned earlier.

These ideas, together with other advice on energy auditing, are to be covered in a new optional add-on module for my “Energy efficiency A to Z” course, which explains a wide range of technical energy-saving opportunities. Details of all my forthcoming training and conferences on energy saving can be found at https://vesma.com/training.

Remote measurement and verification

Note: this guidance was developed because of the Covid-19 pandemic. However, by its nature, the pandemic is an unexpected major non-routine event which will in many cases completely invalidate baselines developed before March 2020. Readers should not expect this advice to remedy the resultant disruption to evaluations. It may prove to be applicable only to ‘retrofit isolation’ assessments and trials using extended sequences of on-off mode changes.


THIS GUIDANCE proposes enhancements to standard measurement and verification protocols to cover the situation where, for reasons beyond the control of the parties involved, a measurement and verification practitioner (MVP) is unable to participate in person.

Firstly the proposal for the energy-saving project should indicate not only the quantity or proportion by which consumption is expected to be reduced, but wherever possible the nature of the expected reduction. For example where data are to be analysed at weekly or monthly intervals, it may be possible to say whether reductions are expected in the fixed or variable components of demand or both; while for data collected at intervals of an hour or less it may be possible to define the expected change in daily demand profile or other parameters related to the pattern of demand.

Setting the expectations more precisely in this manner will help the MVP to detect whether post-implementation results may have been affected by unrelated factors.
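One way to express such an expectation is in terms of the regression components: a project that eliminates standing losses should show up in the intercept (fixed demand), while one that improves marginal efficiency should show up in the slope. A sketch with invented data:

```python
# Sketch of checking the *nature* of a reduction, not just its size:
# compare baseline and post-implementation regressions to see whether
# the fixed (intercept) or variable (slope) component of demand
# changed. All numbers are invented for illustration.
import numpy as np

dd = np.array([5, 10, 15, 20, 25, 30])                # driving factor

baseline = np.array([260, 320, 380, 440, 500, 560])   # kWh, before ECM
post     = np.array([210, 270, 330, 390, 450, 510])   # kWh, after ECM

slope_b, fixed_b = np.polyfit(dd, baseline, 1)
slope_p, fixed_p = np.polyfit(dd, post, 1)

print(f"fixed component:    {fixed_b:.0f} -> {fixed_p:.0f} kWh")
print(f"variable component: {slope_b:.1f} -> {slope_p:.1f} kWh per unit")
# Here the whole saving sits in the fixed component, as a project
# that eliminates standing losses would predict.
```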

Secondly a longer period than usual of pre-implementation data should be collected and analysed. This is necessary in order not only to establish prevailing levels of measurement and modelling uncertainty, but potentially to expose pre-existing irregularities in performance. Such monitoring should employ the same metering devices which will be used for post-implementation assessment.

The causes of historical irregular performance should be traced as they could provide clues about foreseeable non-routine events (NRE) which would then need to be allowed for in post-implementation assessment in case they recur. If NREs are not adequately allowed for, they will at best degrade the analysis and at worst lead to incorrect conclusions.

Thirdly, all parties should remember that as well as foreseeable NREs, there will be unforeseen ones as well. Dealing with them is part of standard practice, but because the appointed MVP is unable to visit the subject site and interview key personnel, he or she is likely to miss important clues about potential NREs which otherwise would have been evident based on his or her professional experience. It is therefore imperative that a planning teleconference takes place involving local personnel who are thoroughly conversant with the operation and maintenance of the facility. As part of this meeting a knowledgeable client representative should provide the MVP with a clear account of how the facility is used, with particular emphasis on non-standard situations such as temporary closures. Pertinent input from client representatives at large would include (to give some examples) control set-points, time schedules, plant sequencing and duty-cycling regimes, occupation patterns, the nature and timing of maintenance interventions and so on. Information about other projects—both active and contemplated—should be disclosed. The MVP has a duty to probe and ask searching questions. It should never be assumed that something is irrelevant to the matter in hand, and as a general rule no question asked by the MVP should go unanswered.

It may be helpful to offer the MVP a walk-through video tour of the site, which can of course take place on a separate occasion.

We will be holding this year’s measurement and verification conference, MAVCON20, as a weekly series of two-hour sessions in October and November.  Follow this link for details and booking information

Monitoring solar installations

In a recent newsletter I suggested that somebody wishing to monitor the health of their solar PV installations could do so using ‘back-to-back’ comparisons between them. Reader Ben Whittle knows a lot more about these matters and wrote to put me right. His emails are reproduced here:

I would point out that whilst it may be interesting to compare solar installations and account for cloud cover, personally I wouldn’t bother!

  1. variable cloud cover is just that – you can’t control it, and it is fair to say that annual variation in the UK rarely exceeds ±5%
  2. if you have a monitoring system, it will be capable of telling you immediately, by email, when there is a fault, rather than waiting for you to do the analysis

Inverter manufacturers’ own monitoring systems will report faults directly and immediately, usually based on

  1. an actual fault code being generated by the inverter – typically the inverter being switched off and failing to report at all, string insulation-resistance faults, or other major failures
  2. output not matching other inverters in the same installation, or sometimes a base case / prediction of the yield expected given weather forecasts
  3. possibly a self-defined target, and a failure to meet it
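The second of these checks – spotting an inverter whose output does not match its peers – can be sketched in a few lines. This is a hypothetical illustration only, not any manufacturer’s actual logic; the inverter names and the 20% tolerance are assumptions for the example.

```python
# Hypothetical peer-comparison check: flag any inverter whose output
# falls well below the median of the others in the same installation.
# The 20% tolerance is an illustrative assumption.
from statistics import median

def flag_underperformers(outputs_kw, tolerance=0.20):
    """Return inverter IDs producing more than `tolerance` below the peer median."""
    peer_median = median(outputs_kw.values())
    if peer_median == 0:  # e.g. at night there is nothing to compare against
        return []
    return [inv for inv, kw in outputs_kw.items()
            if kw < (1 - tolerance) * peer_median]

readings = {"inv1": 4.8, "inv2": 5.0, "inv3": 2.1, "inv4": 4.9}
print(flag_underperformers(readings))  # → ['inv3']
```

A real monitoring system would of course run such a comparison continuously and raise an email alert, as Ben describes, rather than leaving it to manual analysis.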

Third-party monitoring systems will typically do the same as the inverter manufacturers’ monitoring (except that they cannot report actual fault codes), but they have the advantage of being able to report on installations with a mix of inverter manufacturers (perhaps new and historic installations in one location, or a portfolio of installations from different installers).

One classic mistake made with solar monitoring is having no clear idea of what you are going to do with all the information, or how! Dealing with it is time-consuming and takes a bit of experience to make sense of it all.

So I asked Ben if there was a cost attached and this was his reply:

Most inverter manufacturers provide a solution. Three of the biggest brands (SMA, Fronius, SolarEdge) all have very competent systems, which are hosted for free, but you can get additional services by paying extra.

A typical domestic set-up (which in theory could cover an installation of any size) would have basic info on annual, monthly and daily yield, and may also display self-consumption rates, assuming you have bought the requisite sub-meter. Other info can include energy sent to a battery if you have one. It would also notify you of loss of grid connection or of communication faults. Communication is typically managed over wi-fi for domestic set-ups and Ethernet in commercial ones. Remote solar farms do all this over 3G or 4G if there is no nearby telephone infrastructure.

Where you would pay for a service is an enterprise solution: this would allow you to compare multiple installations and give you more detailed performance info, possibly also automating equipment replacement or engineer visits when malfunctions are detected. (You would only get this from the major manufacturers with a dedicated team in this country, or from an O&M service provider being paid to keep an eye on performance.)

Third-party systems typically work using only generation-meter and export-meter info, but a surprising amount of knowledge can be gleaned from this – you are, after all, only trying to find anomalies, and once you have defined the expected performance initially this is quite straightforward. The advantage is that if you are managing lots of different installations with different inverters, you can pull all the data into one database. Big O&M companies may insist on this being added where a service level is being defined – such as 98% availability, or emergencies responded to in under 24 hours. The service will also include additional data points such as pyranometer info and other weather data, depending on the scale of the installation.
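The anomaly-finding approach Ben describes – comparing metered generation against a predefined expectation – can be sketched very simply. The period labels, figures and the 10% tolerance below are illustrative assumptions, not part of any third-party product.

```python
# Minimal sketch of anomaly detection from generation-meter data:
# flag any period whose metered output falls short of the expected
# figure by more than a tolerance (10% here, an arbitrary choice).
def find_anomalies(actual_kwh, expected_kwh, tolerance=0.10):
    """Return (period, actual, expected) tuples where generation falls short."""
    return [(period, actual_kwh.get(period, 0), exp)
            for period, exp in expected_kwh.items()
            if actual_kwh.get(period, 0) < (1 - tolerance) * exp]

expected = {"Jun": 520, "Jul": 540, "Aug": 500}
actual   = {"Jun": 515, "Jul": 430, "Aug": 498}
print(find_anomalies(actual, expected))  # → [('Jul', 430, 540)]
```

The hard part in practice is the `expected_kwh` baseline, which would normally come from the installation’s rated yield adjusted for weather data such as the pyranometer readings Ben mentions.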

The companies which operate big solar farms are often hedge funds, and given the income from feed-in tariffs they don’t like leaving systems down for any length of time. Even so, they quite often don’t manage the farms as well as they could…


Ben Whittle (07977 218473) is with the Welsh Government Energy Service.