Last updated: 21.08.19, 10:00
Many thanks to ml for the translation!

On Friday, 9 August 2019, beginning at 16:54 (15:54 UTC), a major disturbance occurred in the UK’s electricity transmission system, presumably caused by the failure of a gas-fired power station (664 MW) and, shortly afterwards, the additional failure of a wind farm (406 MW). This initially resulted in a massive under-frequency of 48.889 Hz (extranet.nationalgrid.com); under the ENTSO-E Network Codes, the first security level is triggered at 49 Hz. This meant the immediate load shedding (disconnection) of approximately 10 per cent of consumers, affecting approximately one million people. As a result, by 17:01 the frequency rose again to 50.246 Hz, which successfully prevented the major disturbance from spreading. By 17:40 the power supply was fully restored. (The power stations were back in action after 15 minutes, and National Grid says local power suppliers were meeting demand by 17:40, but the knock-on effects were likely to be felt for several hours to come.)

Although a blackout did not occur, the major disturbance had a significant impact on other infrastructures. At one hospital, problems arose once again with the emergency power supply (see also: Berlin-Köpenick). Restoring railway services took a particularly long time, which once more points to the massive problems that must be anticipated should the entire rail and air traffic come to an unscheduled and chaotic standstill. How can these systems be restarted, and how long will that take, if the power failure lasts not just 15 minutes but several hours or even days, and it then takes further days until telecommunications services function again? What happens in the meantime to staff who are stranded somewhere? Many more such questions will present themselves!

Update 21.08.19: Interim Report nationalgridESO 

into the Low Frequency Demand Disconnection (LFDD) following Generator Trips and Frequency Excursion on 9 Aug 2019; Source: nationalgridESO 

Infographic; Source: nationalgridESO 

Summary of event

Prior to 4:52pm on Friday 09 August, Great Britain’s electricity system was operating as normal. There was some heavy rain and lightning, and it was windy and warm – not unusual weather for this time of year. Overall demand for the day was forecast to be similar to the previous Friday. Around 30% of generation was from wind, 30% from gas, 20% from nuclear and 10% from interconnectors.

At 4:52pm there was a lightning strike on a transmission circuit (the Eaton Socon – Wymondley Main). The protection systems operated and cleared the lightning in under 0.1 seconds. The line then returned to normal operation after c. 20 seconds. There was some loss of small embedded generation which was connected to the distribution system (c. 500 MW) due to the lightning strike. All of this is normal and expected for a lightning strike on a transmission line.

However, immediately following the lightning strike and within seconds of each other:

  • Hornsea offshore wind farm reduced its energy supply to the grid
  • Little Barford gas power station reduced its energy supply to the grid
  • The total generation lost from these two transmission-connected generators was 1,378 MW. This unexpected loss of generation meant that the frequency fell very quickly and went outside the normal range of 49.5 Hz – 50.5 Hz.

The ESO was holding 1,000 MW of automatic “backup” power at the time – the level required under the regulatory approved Security and Quality of Supply Standards (SQSS), which is designed to cover the loss of the single biggest generator on the grid.

All the “backup” power and tools the ESO normally uses and had available to manage the frequency were used (this included 472 MW of battery storage). However, the scale of the generation loss meant that the frequency fell to a level (48.8 Hz) at which secondary backup systems were required to disconnect some demand (the Low Frequency Demand Disconnection scheme); these automatically kicked in to recover the frequency and ensure the safety and integrity of the network.
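The reserve arithmetic behind these two paragraphs can be stated in a few lines. The figures come from the text; the one-step comparison is my simplification, since in practice reserve is deployed dynamically:

```python
# SQSS-style sizing holds automatic backup for the single largest credible
# loss; a near-simultaneous double loss can therefore exceed it.
# Figures below are taken from the interim report quoted in the text.
reserve_mw = 1000.0   # automatic "backup" power held by the ESO
lost_mw = 1378.0      # combined loss from Hornsea and Little Barford
shortfall_mw = max(0.0, lost_mw - reserve_mw)
print(shortfall_mw)   # 378.0 MW of the loss not covered by the held reserve
```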

This system automatically disconnected customers on the distribution network in a controlled way and in line with parameters pre-set by the Distribution Network Operators. In this instance c. 5% of GB’s electricity demand was turned off (c. 1 GW) to protect the other 95%. This has not happened in over a decade and is an extremely rare event. This resulted in approximately 1.1m customers being without power for a period.

The disconnection of demand along with the actions of the ESO Control Room to dispatch additional generation returned the system to a normal stable state by 5:06pm. The DNOs then commenced reconnecting customers and supply was returned to all customers by 5:37pm.

Immediate consequences

There were a number of very significant consequences from these events, the most significant of which include:

  • 1.1 million electricity customers were without power for between 15 and 50 minutes. Trains of one particular class operating in the South-East were unable to stay operational throughout the event and, in a number of cases, required an engineer to be sent out to the individual train. This was likely a significant factor in the travel disruption on the rail network (nearly 1,500 trains cancelled or delayed).
  • Some other critical facilities were affected including Ipswich hospital (lost power due to the operation of their own protection systems) and Newcastle airport (disconnected by the Low Frequency Demand Disconnection scheme).

Preliminary findings

Our preliminary findings based on analysis to date are:

  • Two almost simultaneous unexpected power losses at Hornsea and Little Barford occurred independently of one another – but each associated with the lightning strike. As generation would not be expected to trip off or de-load in response to a lightning strike, this appears to represent an extremely rare and unexpected event.
  • This was one of many lightning strikes that hit the electricity grid on the day, but this was the only one to have a significant impact; lightning strikes are routinely managed as part of normal system operations.
  • The protection systems on the transmission system operated correctly to clear the lightning strike and the associated voltage disturbance was in line with what was expected.
  • The lightning strike also initiated the operation of Loss of Mains (LoM) protection on embedded generation in the area and added to the overall power loss experienced. This is a situation that is planned for and managed by the ESO, and the loss was in line with ESO forecasts for such an event.
  • These events resulted in an exceptional cumulative level of power loss, greater than the level required to be secured under the Security Standards, and as a result a large frequency drop outside the normal range occurred.
  • The Low Frequency Demand Disconnection (LFDD) system worked largely as expected.
  • The Distribution Network Operators quickly restored supplies within 31 minutes once the system was returned to a stable position.
  • Several critical loads were affected for a number of hours by the action of their own systems, in particular rail services.

 

Initial deductions by Herbert Saurugg

  • This event is surprising insofar as it had been assumed until now that massive problems in the UK power supply would arise mainly in winter and at sub-zero temperatures: Großbritannien kämpft gegen den Blackout [Great Britain fights against the blackout]
  • As the analysis of the power generation landscape at the time of the disturbance shows, the readily controllable gas-fired power stations were probably a decisive factor in the disturbance being rectified within a very short time. In addition, at approximately 16:50 more than 18 GW (coal, gas, nuclear and hydro) were on the grid, which also ensured adequate momentary reserve!
  • David Hunter, energy analyst at Schneider Electric, told the BBC that although the grid is “pretty safe and pretty reliable”, this was a “wake-up call” to the energy industry and businesses with critical infrastructure.
  • Two power stations shutting down almost simultaneously is “a very rare event”, says David Hunter. “That took the National Grid by surprise.” He says an investigation into the causes may show that the two failures were “coincidental and unconnected”, adding there have been occasions when two generators shut down independently before. But he said a power station dropping off the grid can also create a “domino effect”, where other generators buckle under the strain of making up for the shortfall in power.
  • Unfortunately, this shows once again that major disturbances are possible. Including in Europe. However, it should not be assumed that damage limitation will always be as successful as it was in this event.

Preparations for a blackout are vital and essential for survival in modern society!

If such a disturbance had occurred in Germany last weekend, for example, there would have been very little chance of staving it off: against a load of about 58 GW, about 53 GW came from renewable energy and only 16 GW from conventional power stations (momentary reserve: 1.5 GW from rapidly reacting gas-fired power stations).

Details and background information

Frequency response

 

 

Data source: www2.bmreports.com

By way of comparison, in January 2019 the frequency in the ENTSO-E network fell to 49.8 Hz (normal range: 49.8 – 50.2 Hz). During the largest major disturbance to date, on 4 November 2006, the frequency in Western Europe fell to “only” 49 Hz.

On 4 November 2006 at 22:10, the UCTE system split into three areas with pronounced under- and over-frequency: down to 49.0 Hz, up to 50.6 Hz and down to 49.7 Hz (as shown below).

Under the European Network Codes, from a frequency of 49 Hz an immediate load shedding of about 12.5% must follow (here affecting about one million people) in order to prevent a system collapse (a blackout). This measure took effect and was successful. The next stage would have been triggered at 48.8 Hz.

Load shedding according to ENTSO-E Operation Handbook Policy 5

The load-shedding steps must be spaced at most 200 mHz apart. A finer gradation (e.g. 100 mHz, or a maximum of 10% of load per step) can be expedient for network reasons.

For manageability, at least 4 steps are recommended.

  • 49.0 Hz: 1st step of automatic load shedding, reducing the network load by about 12.5% (total load shed: ca. 12.5%)
  • 48.8 Hz: 2nd step of automatic load shedding, reducing the network load by a further c. 12.5% (total load shed: ca. 25.0%)
  • 48.6 Hz: 3rd step of automatic load shedding, reducing the network load by a further c. 12.5% (total load shed: ca. 37.5%)
  • 48.4 Hz: 4th step of automatic load shedding, reducing the network load by a further c. 12.5% (total load shed: at least 50.0%)
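The four-step scheme above can be sketched as a simple lookup. Thresholds and percentages are taken from the table; the function name and structure are my own illustration:

```python
# Cumulative automatic load shedding per the four-step scheme above:
# each threshold sheds a further ~12.5% of the network load.
SHED_STEPS = [
    (49.0, 12.5),  # 1st step
    (48.8, 12.5),  # 2nd step
    (48.6, 12.5),  # 3rd step
    (48.4, 12.5),  # 4th step (total at least 50%)
]

def cumulative_shed_percent(frequency_hz: float) -> float:
    """Cumulative share of network load shed once the frequency
    has fallen to the given value."""
    return sum(share for threshold, share in SHED_STEPS
               if frequency_hz <= threshold)

print(cumulative_shed_percent(49.5))   # 0.0  - above all thresholds
print(cumulative_shed_percent(48.9))   # 12.5 - only the first step
print(cumulative_shed_percent(48.5))   # 37.5 - first three steps
```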

 

ENTSO-E Network

The British grid is connected with the continental European grid. In principle, however, no disturbance should be able to propagate through the direct-current (HVDC) interconnection, which is why there was no immediate danger for Central Europe.

Comment by Franz Hein: 

In my view there is, however, the problem that a (too) large load step can (or could) occur. Until now we have assumed that the momentary reserve together with primary control, especially in the Central European grid, is so large and effective that the (currently still somewhat “loose”) 3,000 MW reference incident can (and must) be handled. But that reflects only the present situation. We are in the process of taking more and more power stations out of operation and thereby reducing the momentary reserve. This makes the system behaviour “sharper”: a load step changes the frequency more rapidly, and the entire system becomes more fragile. More electronically controlled primary response would in turn provide (more) control, but there is a limit to this. The momentary reserve “produces” the frequency of the alternating current, via the principle of conservation of energy, from the mechanical energy of the rotating masses. That frequency is the parameter for the control tasks: it must be measured (and be measurable), and the measuring process must deliver results suitable for control. At some point this takes us to physical limits beyond which even rapidly reacting electronic control has no effect. We must keep the whole in mind even when we focus on particular aspects.
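Franz Hein’s point about shrinking momentary reserve can be illustrated with the standard swing-equation estimate of the initial rate of change of frequency (RoCoF). The inertia constants and synchronised capacity below are illustrative assumptions, not measured values for any real grid:

```python
# Initial RoCoF after a sudden power imbalance (swing equation):
#   df/dt = -delta_P * f0 / (2 * H * S)
# H: system inertia constant (s), S: synchronised capacity (MW).
# All parameter values below are illustrative assumptions.
F0 = 50.0  # nominal frequency, Hz

def rocof_hz_per_s(delta_p_mw: float, h_seconds: float, s_mw: float) -> float:
    """Initial frequency gradient after losing delta_p_mw of generation."""
    return -delta_p_mw * F0 / (2 * h_seconds * s_mw)

loss = 1378.0  # MW, the combined loss reported on 9 August
print(round(rocof_hz_per_s(loss, h_seconds=6.0, s_mw=30000.0), 3))  # -0.191
print(round(rocof_hz_per_s(loss, h_seconds=3.0, s_mw=30000.0), 3))  # -0.383
```

Halving the assumed inertia doubles the initial frequency gradient – exactly the “sharper” system behaviour described above.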

 

Possible causes

It is always the case that a major disturbance/blackout is triggered not by a single event but by the interconnection of several events which, taken individually, would be controllable. Caution must therefore be exercised when attributing possible causes. 

  • Industry experts said that a gas-fired power station at Little Barford, Bedfordshire, failed at 16:58 (664 MW), followed two minutes later by the Hornsea offshore wind farm disconnecting from the grid (406 MW). The two generator failures meant a loss of about five per cent of the grid’s power over 90 minutes.
  • RWE, owner of the Little Barford power station, said it shut down temporarily as a routine response to a technical issue, and called for National Grid and Ofgem to investigate the “wider system issues”.
  • And Orsted, the owner of the Hornsea offshore wind farm, said automatic systems on Hornsea One “significantly reduced” power around the same time the others failed.
  • It was a rare and unusual event, the almost simultaneous loss of two large generators, one gas and one offshore wind, at 4.54pm.

Franz Hein’s comment on “a rare and unusual event”

And that was “only” the simultaneous failure of two functioning components which were completely independent of each other. The further explanations then showed the effects on components which are interlinked and mutually dependent as parts of process chains. This view is important and, unfortunately, is overlooked (or suppressed) far too often. However good networked process chains are, they have the fatal characteristic that the failure of one link leads to the failure of the chain as a whole. For this, all that is needed is for the dependencies between the chain links to be too inflexible, with no “breathing” and no autonomous operation of the individual links. In the energy logistics I advocate, I call this “breathing” “energy provision” or “energy buffering”, “energy roll” and “long-term storage”. And it must happen autonomously, because otherwise there will again be some central (common) component somewhere whose failure leads to a total blackout. “Breathing” and autonomous operation create tolerance ranges and permit extensions of the chain connections, which in turn is likely to prevent a rupture. But, of course, even that is finite.

 

Installed power station capacity in Great Britain

  • 28 GW natural gas
  • 16 GW coal
  • 22 GW wind
  • 13 GW PV
  • 9 GW nuclear
  • 2.5 GW hydro

Sources: Wikipedia; further details: gridwatch.co.uk

 

Power production at 16:54 (BST)

At 15:50, 2.83 GW of PV electricity was being produced; apparently the 50.2 Hz problem played no role here. Other gas-fired power stations were apparently able to compensate for the failure of the Little Barford gas-fired power station: no loss of output is visible here, which also explains the rapid restoration of the frequency and hence the stability of the system. Nevertheless, it remains an open question why a loss of about 1 GW – while production exceeded 30 GW – caused such a serious drop in frequency. 
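One way to see why even ~1 GW out of more than 30 GW can matter is a toy frequency model: the frequency falls under the deficit until a delayed primary response ramps up with the deviation. Every parameter here – inertia constant, droop gain, response delay – is an invented illustration, not UK data:

```python
# Toy frequency-dynamics sketch (Euler integration, invented parameters):
# a 1 GW deficit on a 30 GW system, with primary response that starts
# after a short delay and grows with the frequency deviation.
F0 = 50.0             # nominal frequency, Hz
H = 4.0               # assumed system inertia constant, s
S = 30000.0           # assumed synchronised capacity, MW
DEFICIT = 1000.0      # MW of generation suddenly lost
DROOP_GAIN = 1500.0   # assumed MW of primary response per Hz of deviation
DELAY = 2.0           # assumed seconds before primary response acts

def frequency_nadir(dt: float = 0.01, t_end: float = 30.0) -> float:
    """Lowest frequency reached in the simulation."""
    f, t, nadir = F0, 0.0, F0
    while t < t_end:
        response = DROOP_GAIN * (F0 - f) if t >= DELAY else 0.0
        imbalance = min(response, DEFICIT) - DEFICIT   # net MW deficit
        f += dt * imbalance * F0 / (2 * H * S)         # swing equation step
        nadir = min(nadir, f)
        t += dt
    return nadir

print(round(frequency_nadir(), 2))  # 49.33 - well below the 49.5 Hz limit
```

In this toy model a smaller inertia constant deepens the nadir, because more frequency is lost during the response delay – one plausible ingredient of the open question above.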

 

Effects

  • Power was lost across Ipswich Hospital amid a national power failure after the back-up generator failed to work. The site was hit by a 30-minute power cut and staff are looking into why the back-up generators did not kick in. People at the hospital said “sirens went off” as power was lost.
  • King’s Cross was one of the worst-hit stations, with all trains suspended for several hours.
  • David Hunter, energy analyst at Schneider Electric, said a power station dropping off the grid can also create a “domino effect”, where other generators buckle under the strain of making up for the shortfall in power.
  • People were stuck in trains for up to nine hours. Trains began to run out of King’s Cross late on Friday night after the station had been shut down for several hours. Throughout the Friday evening rush hour there was huge disruption on the railways: police officers were called in to help travellers, and delayed passengers were stranded for hours. Lawal Brown boarded a Thameslink train at Stevenage at 16:45; it took nearly six hours before he was evacuated onto another train and many more before he made it home. Network Rail said the blackout affected signalling systems and power supply equipment across a large part of the rail network, but backup systems stepped in. That still meant some delays because of safety requirements, says Nick King, network services director for Network Rail, but he says further difficulties were caused by a “major systems failure” on “one particular fleet of trains”. Thameslink has acknowledged that its trains required a technician to restart them after the power cut. Nearly 1,500 trains were cancelled or delayed.
  • According to power utilities around the country, nearly one million people had to grapple with the blackout, including 300,000 in London and southeast England and 500,000 in the Midlands, southwest England and Wales. Some 110,000 were affected in Yorkshire and northeast England.

 

Statements

The Observer view on Britain’s blackout

https://www.theguardian.com/commentisfree/2019/aug/11/observer-view-on-britains-need-for-new-energy-policy 

And while a multiple generator failure may be a rare event, it is far from unheard of – the last was in 2008. Britain’s energy network should be able to cope with a once-in-a-decade event without causing so much potentially dangerous disturbance. Even though the culprit in this instance was not a cyber attack, it illustrates just how vulnerable we may be to a malign attack of this nature.

This should ring alarm bells about the resilience of British infrastructure to rare but far from unprecedented events. Resilience planning requires a joint effort by industry and government. But because Whitehall has been so consumed by Brexit in recent years, resilience planning – alongside many other big policy challenges facing the country, like a solution to the social care crisis – appears to have fallen by the wayside. What other vulnerabilities are there in the system that could be exploited by our enemies?

The power outage should function as a rude awakening to the brittleness of core parts of British infrastructure to cope with events that should not be debilitating. Is this really a country ready for the huge strain a no-deal Brexit would probably place on essential services? It hardly looks like it.

It could take months to work through the lessons of Friday’s failure (see also 10 January 2019, where it took almost 5 months for the report to be available!)

Power Cut Statement

https://www.nationalgrideso.com/media-test/power-cut-9-august-eso-statement

A National Grid Electricity System Operator spokesperson said: “We appreciate the disruption caused by yesterday’s power outage and investigations have continued overnight to better understand the situation.

“As the Electricity System Operator we do not generate power directly, but use the power made available by the industry to manage the system and balance supply and demand. The root cause of yesterday’s issue was not with our system but was a rare and unusual event, the almost simultaneous loss of two large generators, one gas and one offshore wind, at 16.54pm. We are still working with the generators to understand what caused the generation to be lost. 

“Following the event, the other generators on the network responded to the loss by increasing their output as expected. However due to the scale of the generation losses this was not sufficient, and to protect the network and ensure restoration to normal operation could be completed as quickly as possible, a backup protection system was triggered which disconnects selected demand across GB. 

“Following the incident, the system was secured, and the Electricity System Operator gave the all clear to the Distribution Network Operators (DNOs), power companies who are responsible for supply at a local level, within 15 minutes, so that they could start to restore demand. All demand was reconnected by the DNOs by 17:40. We appreciate the disruption caused and will continue to investigate, with the generators involved and wider stakeholders, to understand the lessons learned.”

U.K. Power Failure Leaves Commuters Frustrated at Infrastructure

The incident raises questions about the state of the country’s infrastructure, which has had less investment than most other countries in the Organization for Economic Co-operation and Development over the last three decades. Chancellor of the Exchequer Sajid Javid said Friday, before the outage, that he will publish a National Infrastructure Strategy in the autumn as the U.K. seeks to boost investment in areas including transport and digital connectivity.

Chaos loomed three times before

https://www.theguardian.com/world/2019/aug/13/tuesday-briefing-alarming-frequency-of-near-blackouts

The National Grid put Friday’s chaotic blackout down to “incredibly rare” circumstances, but the electricity network has had three near-misses in as many months leading up to it.

The Guardian understands that in every month since May there has been a “frequency excursion” – a severe dip in the grid’s frequency from its normal range of around 50Hz. On Friday the blackout was triggered when the frequency slumped to 48.88Hz. Industry sources have confirmed the grid’s frequency has also fallen below 49.6Hz three other times in recent months – the deepest falls seen on the UK grid since 2015.