Temporal Trends in U.S. Electricity Reliability
Written by Joe Eto
A major study of reliability trends shows that power interruptions have increased at a steady rate of about 2 percent per year over a period of ten years. These findings stand up robustly to analysis of measurement error and bias, and set the stage for study of what causal factors are at work.
Since the 1960s, the U.S. electric power system has experienced a major blackout about once every ten years. Each has been a vivid reminder of the importance society places on the continuous availability of electricity and has led to calls for changes to enhance reliability. Such calls imply judgments about what reliability is worth and how much should be paid to ensure it.
In an effort to inform discussions of power system reliability, researchers at Lawrence Berkeley National Laboratory (LBNL) conducted an assessment of trends in electrical interruptions experienced by U.S. consumers. The analysis considered up to ten years of electricity reliability information collected from a convenience sample of 155 U.S. electric utilities, which account for roughly 50 percent of total U.S. electricity sales. (A convenience sample is one based on readily available information rather than strictly randomized surveying.)
Using statistical methods that account for utility-specific effects, LBNL found that reported reliability has been getting worse over time: The reported average duration and frequency of power interruptions have been increasing at a rate of approximately 2 percent annually.
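The kind of trend estimate described above can be sketched as a "within" (fixed-effects) regression: demean each utility's series to absorb its baseline level, then regress on the demeaned year. The sketch below uses simulated data, not LBNL's; the panel dimensions, the 2 percent trend, and the noise level are assumptions built into the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 155 utilities observed over 10 years (sizes echo
# the article's sample, but every number here is simulated).
n_utilities, n_years = 155, 10
years = np.arange(n_years)

# Each utility gets its own baseline log-duration, a common assumed
# ~2%/yr upward trend, and year-to-year noise.
baseline = rng.uniform(np.log(60), np.log(300), size=n_utilities)
true_trend = 0.02
log_duration = (baseline[:, None] + true_trend * years[None, :]
                + rng.normal(0.0, 0.15, size=(n_utilities, n_years)))

# Within estimator: demeaning each utility's series removes the
# utility-specific effect; the slope on demeaned year is the trend.
y = log_duration - log_duration.mean(axis=1, keepdims=True)
x = np.tile(years - years.mean(), (n_utilities, 1))
beta = (x * y).sum() / (x ** 2).sum()

print(f"estimated annual trend: {beta:.2%}")
```

Because the utility-specific baselines are differenced away, persistent cross-utility differences (size, climate, reporting culture) cannot masquerade as a trend; only common movement over time is picked up.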
Although these findings are statistically significant, it is important to place them in the appropriate context. These average annual trends are modest in comparison to the routinely larger year-to-year variations in the average duration and frequency of power interruptions experienced by utility customers. In other words, the number and length of interruptions an average customer experiences can swing widely from one year to the next, masking the slower underlying trend toward declining reported reliability.
LBNL makes no claims regarding the applicability of these findings to the reliability of the U.S. electric power system as a whole. Strictly speaking, these findings apply only to the sample of utilities from which LBNL was able to collect reliability information. Some regions of the country are under-represented and the analysis is based largely on information from investor-owned utilities.
In an effort to explore potential sources of measurement error, LBNL found statistically significant evidence that the installation or upgrade of an automated outage management system correlates with an increase in the reported duration of power interruptions. This finding confirms what the industry has long suspected anecdotally: older manual measurement methods under-count interruptions, so reported reliability was overstated.
One might therefore suspect that more accurate measurement, rather than lower actual reliability, "explains" the statistically significant trend of decreasing reported reliability over time. That is, if service interruptions are counted more accurately, it might appear that there are more or longer interruptions. However, the analysis takes this factor into account explicitly by accounting for utility-specific effects and still finds statistically significant secular trends of declining reliability over time.
In the study, LBNL went on to examine a potential source of measurement bias in the form of utility reporting practices. They found that reliance on IEEE Standard 1366-2003 correlates with higher average reported reliability compared to reliability reported using locally established reporting standards. Taking this into account, the trend of decreasing reported reliability over time remains statistically significant and of roughly the same magnitude—that is, reliability declining at roughly 2 percent annually.
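IEEE Standard 1366 is where the "average duration and frequency" indices come from: SAIFI (interruptions per customer served) and SAIDI (interruption minutes per customer served). A minimal sketch of how a utility computes them, using made-up outage records rather than any data from the study:

```python
# Hypothetical outage log for one utility-year.
# Each record: (customers interrupted, minutes of interruption).
outages = [(1200, 90), (300, 45), (5000, 20), (800, 180)]
customers_served = 50_000  # total customers on the system

# SAIFI: total customer interruptions / customers served.
saifi = sum(n for n, _ in outages) / customers_served

# SAIDI: total customer-minutes interrupted / customers served.
saidi = sum(n * m for n, m in outages) / customers_served

print(f"SAIFI = {saifi:.3f} interruptions per customer")
print(f"SAIDI = {saidi:.2f} minutes per customer")
```

The standard also prescribes a statistical screen for excluding "major event days" from these indices, which is one concrete way a common standard can yield systematically different reported values than locally defined practices.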
In summary, consideration of the two potential sources of measurement error and bias did not change the direction of these trends nor their statistical significance. With these findings, the focus can now be turned towards understanding the causal factors to explain the observed trends.
LBNL has begun this process by examining potential links with aggregated measures of weather variability (for example, heating and cooling degree days) and a simple measure of utility size. To date, we have found neither to be statistically significant.
In the future, there are several factors LBNL believes should be considered to better understand the drivers behind these trends. These factors include: more disaggregated measures of weather variability, such as lightning strikes and severe storms; utility characteristics, like the number of rural versus urban customers, or the extent to which transmission and distribution lines are overhead versus underground; and utility spending on transmission and distribution maintenance and upgrades, including advanced smart grid technologies.
The author hopes this work will help solidify the factual basis upon which future decisions about U.S. reliability policy, practices and technology will be made.
Joseph H. Eto, an IEEE member, is a staff scientist at the Lawrence Berkeley National Laboratory where he manages a national lab/university/industry R&D partnership called the Consortium for Electric Reliability Technology Solutions (CERTS). He has been involved in the preparation of every major electricity policy study conducted by the U.S. Department of Energy over the past decade, including the Power Outage Study Team (2000), the National Transmission Grid Study (2002), the US-Canada Final Report on the August 14, 2003 Blackout and both DOE National Electric Transmission Congestion Studies (2006 and 2009). He holds an AB in philosophy and an MS in energy and resources, both from the University of California at Berkeley, and is a registered professional mechanical engineer in the state of California.