Since electricity has become the lifeblood of modern society, any interruption in supply has a huge impact on individuals, businesses and communities. While the transmission and distribution sector of the power industry makes great efforts to avoid such disruptions, the question must still be asked: is everything possible being done?
The electrical power business is going through another major transformation. The need for affordable, sustainable energy that still meets ever-increasing demand is driving growth in renewables (often remotely located) and in international electricity trading. That dynamic in turn has driven the rise of so-called ‘super grids’ – extremely high voltage networks that transport electricity over much greater distances, often between countries and sometimes even between continents. These large networks place added pressure on grid reliability. Losing power in a single region would be bad enough, but imagine what might happen if supply to half a continent were interrupted. Such a threat already attracts media headlines speculating on when and where ‘the big one’ might strike.
Is there anything more that network operators can do to further reduce the risk of major power disruptions?
Consider how outages arise. The 2013 Eaton Blackout Tracker, for example, reported that bad weather was the leading cause of unscheduled disturbances in the U.S., accounting for 30% of major outages. Faulty equipment and human error together contributed a further 29%. Weather is obviously uncontrollable and, if current models of climate change prove accurate, will become even more extreme. Still, we can lessen its impact by ensuring that localized problems do not escalate across the grid and by installing equipment that can withstand even extreme conditions.
Experience during the 2011 earthquake in eastern Japan demonstrates that such a strategy is feasible. A total of 848 pieces of key equipment at 200 high voltage substations were damaged by the temblor, causing the immediate shutdown of 80 stations – clearly a major disruption. Yet electricity was fully restored in less than a week, thanks to the stringent seismic specifications for all equipment installed in the region. Reducing the impact of equipment faults and human error calls for much the same approach: higher quality components, and better training of personnel in how to respond when problems arise.
Most will agree that one of the most effective ways to ensure the quality of T&D components – and so reduce the frequency of major disturbances caused by their failure – is independent, testing-based certification. Such certification gives network operators confidence that equipment will perform correctly and safely under normal as well as fault conditions. That helps reduce not only the frequency of failures but also the broader risk that any one failure might cascade across the network.
Certification must of course be based on international standards. But which? Present standards were developed largely from experience with yesterday’s networks and service conditions. The power industry, however, is changing significantly: the growth of renewables, increased grid utilization and longer transmission lines, together with climate change, could all alter the service conditions that components must routinely face in the future. In particular, more intelligence within networks raises the threat of cyber-attacks, adding yet another level of risk. Current standards therefore need to be reviewed to ensure they remain suitable for these future challenges.
That could mean adapting existing standards or creating entirely new ones, both for new types of components and for verifying power systems as a whole. Ensuring that the standards remain appropriate is a task that requires the combined expertise of the industry, working together through bodies such as IEC, IEEE, and CIGRE.
While making efforts to improve quality, it’s also important to remember that modern grid components are not simply ‘plug-and-play’. Highly complex interactions can occur between them, and these can greatly alter the stresses any single component might face. Hence, it’s essential that utility personnel fully understand their networks – not only at the system level but also in terms of the special stresses equipment might face under ‘non-standard’ conditions.
Unfortunately, outsourcing practices by the power industry in recent years suggest that this kind of knowledge may increasingly be flowing away from network operators. That leaves them at risk. To combat the ‘brain drain’, more and more testing institutions (including DNV GL) offer training courses that provide the skills and knowledge engineers need for a more complete understanding of their system.
Such insight is essential for both planning and operating grids. In the planning stage, if you don’t fully understand all the conditions a component might face in your system, how can you write the correct specifications when procuring it? In operations, a systems-level understanding helps staff identify which unusual conditions pose a particular threat to the network. Ongoing education is the best way to maintain such an understanding – and indeed opens up the intriguing possibility of one day certifying people just as we currently certify components.