Among the recent trends impacting the electrical products industry is intense global competition across the supply chain, placing unprecedented emphasis on cost and product optimization. This has caused products to be designed within ‘tighter envelopes’ as electrical equipment OEMs rely on their experience with materials to establish natural boundaries, then employ sophisticated computing methods such as FEM to confirm these limits. Given this, it might seem reasonable to conclude that these products are performing better than ever. But experience spanning many years at one of the world’s leading test laboratories suggests otherwise. For example, initial failure rates during type tests are reported not to have changed over the past 5 years, or indeed over the past two decades. The underlying message is that independent testing and certification are as relevant today as when they were the only tool available to assure that equipment and components performed as expected. Users should therefore not treat critical assets such as breakers, transformers and insulators as ‘off-the-shelf’ commodities. Rather, these are highly specialized items whose performance must always be verified. For example, of 166 power arc tests conducted on insulator strings at one large laboratory, 34 percent failed. When it came to cables and accessories, failure rates during initial type tests, due to issues such as tracking and erosion on sheds or mechanical deformation, have been even higher – sometimes on the order of 50 percent.
Even sophisticated equipment such as large power transformers has not fared much better during testing: since 1996, an average of 23 percent have not passed initial type tests. In fact, combining data on assorted power equipment – from disconnectors to line traps, from switchgear panels to distribution transformers – about one-quarter of units undergoing initial type tests at one laboratory did not pass. This rate of test failures is attributed to several factors. First has been pressure across the industry to design more compact and slimmer components to reduce the ‘footprint’ of power infrastructure, which also reduces safety margins. Second is the fact that manufacturers, facing intense cost pressure, try to squeeze maximum physical ratings from products without adding more material. Finally, customers are demanding faster turnaround, and such time pressures mean that manufacturers must find ways to respond to tight project deadlines.
On the other side of this same discussion, key performance indicators in the utility sector are driven by investor expectations of how much must be spent on the network to yield the desired level of system performance. There are also regulatory and public relations pressures to avoid outages and to demonstrate that the network has been designed and built to the required standard. In this regard, data taken from documents that track outages in North America, for example, show that these have trended upward and reached a seven-year high in 2014. One of the factors behind this is believed to be interconnection of networks, which has increased short circuit currents from the more typical 20 kA to as much as 50 kA. There is also more frequent switching to deal with unexpected network conditions and events. All this places greater stress on equipment such as breakers and is indeed one of the reasons behind renewed interest in DC as well as in fault current limiters. Networks are also more heavily loaded, meaning that all components, from cables to transformers, experience higher nominal loading and are running ‘hotter’. At the same time, faulty equipment has become one of the main contributors to outages. In 2014, for example, poorly performing equipment accounted for 28 percent of all outages in the U.S. and 30 percent of the total in Canada.
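The jump in fault current described above is more punishing than the raw numbers suggest, because the electromagnetic force between current-carrying conductors scales with the square of the current. Going from 20 kA to 50 kA therefore means more than six times the mechanical force on windings and busbars. A minimal sketch of that arithmetic (the function name is illustrative):

```python
def force_ratio(i_new_ka: float, i_old_ka: float) -> float:
    """Electromagnetic force between conductors carrying the same
    fault current scales as I squared, so the ratio of mechanical
    forces at two current levels is (I_new / I_old) ** 2."""
    return (i_new_ka / i_old_ka) ** 2

# Short circuit current rising from 20 kA to 50 kA:
print(force_ratio(50, 20))  # 6.25, i.e. more than six-fold the force
```

This squared relationship is why equipment designed against 20 kA margins cannot simply be assumed adequate on a more heavily interconnected network.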
Among the critical assets in any network are power transformers. The classic utility nightmare is an exploded unit that burns for days and takes an entire substation out of service for an extended period. Some utilities rely on spares to ensure fast response to such rare events, but a far less costly solution is to ensure that only high quality units are installed in the first place and that these have passed all required testing. Large power transformers are often unique to a specific application and are typically produced as single units or in small batches. Verification of performance by a simple design review or by calculation may not be sufficient, and transformer failure rates during testing support this conclusion. Transformers often fail on impedance testing, or sometimes pass but then fail upon visual inspection of their tanks. Laboratory testing permits detection of a fault as it starts to evolve, so that the power supply can be switched off to clear it quickly. The advantage is that the energy put into the fault is reduced and damage is limited, so the unit can be returned for re-work. Studies place the global transformer failure rate at around 0.6 percent, and up to 20 percent of these failures are the direct result of short circuits, where large currents and severe mechanical forces risk deforming a unit that is not structurally well designed. Verification of a transformer’s ability to survive short circuit is therefore key, and there are two ways to do it. The first, referred to as design review, sees consultants brought in to verify the calculations of forces and stresses and compare them with critical values based on past testing or internal rules. However, since the manufacturer already knows which questions will be asked, the risk of failure is virtually zero. Moreover, this process cannot always account for transient phenomena and excludes the impact on key sub-components such as the bushings.
The second methodology is laboratory testing, whereby a transformer is subjected to actual short circuit current as might be encountered in service. Experience shows this is the most secure way to reduce the risk of transformer failure. For example, at one test laboratory, between 20 and 30 percent of all large power transformers tested failed. Most often, this was due to an increase in reactance beyond the limit set in the standards, an indication of internal deformation. At the same time, unexpected outcomes can also be triggered by short circuit current, including damage to bushings, oil spills or internal flashover. The highest reliability in verifying short circuit withstand is therefore through full-scale testing, as per international standards. Circuit breakers are another key system component and traditionally account for a large proportion of the high power testing business. Data assembled at one major testing organization on 2000 samples has been classified by why samples failed and during which test, as well as by which voltage classes yielded the highest failure rates. All this points to the conclusion that laboratory-based testing remains the most secure process for the power industry to mitigate risks of equipment failure, especially since it is utilities themselves that must bear the consequences of failures.
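The reactance criterion mentioned above reduces to a simple pass/fail check: the short-circuit reactance is measured before and after the test, and the relative change must stay within the limit set in the applicable standard (IEC 60076-5 specifies limits that depend on winding type and construction; the 1 percent default below is purely illustrative). A minimal sketch, with hypothetical function names:

```python
def reactance_change_percent(x_before: float, x_after: float) -> float:
    """Relative change in short-circuit reactance, in percent,
    between measurements taken before and after the test."""
    return abs(x_after - x_before) / x_before * 100.0

def passes_reactance_criterion(x_before: float, x_after: float,
                               limit_percent: float = 1.0) -> bool:
    """True if the measured change stays within the permitted limit.
    The 1.0 percent default is an illustrative assumption; the actual
    limit depends on the winding type per the governing standard."""
    return reactance_change_percent(x_before, x_after) <= limit_percent

# A unit measuring 12.0 % impedance before the test and 12.3 % after:
# the 2.5 % relative change exceeds the illustrative 1 % limit.
print(passes_reactance_criterion(12.0, 12.3))  # False
```

A change beyond the limit does not pinpoint the damage, which is why standards also call for repeat dielectric tests and internal inspection before a unit is accepted.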