High Performance Computing and Security Analysis
By Bruce Wollenberg
It was shown almost fifty years ago how to apply sparsity programming techniques to power flow analysis, and several decades later ways were found to adapt those techniques to vector processing. But this kind of processing for security analysis required expensive supercomputers. Now, thanks to inexpensive processing cards spun off from those used in computerized gaming, we are on our way to being able to process large numbers of AC power flow cases on power system models and identify worrisome cases.
Security analysis of power systems requires providing operators with a prediction of how outages will affect the power system. Traditionally this means building a model of the power system in a central operations computer and subjecting it to repeated outage analysis as one line after another and one generator after another is taken out, one at a time. The resulting line flows and bus voltages are read off the model’s calculated results to predict whether any flow limits or bus voltage limits would be violated. If the security analysis predicts any outage problems, operators can take action to redispatch system generation, reconfigure the transmission system, adjust interchange with neighboring systems, or even shed load if warranted.
The main idea in this kind of analysis has been to prevent any single event from resulting in a state of overload or bus voltage violation that could lead to even more outages. The challenge to engineers designing security analysis software was to cover enough cases, fast enough, to give operators adequate warning of the consequences of an outage.
The same kind of security analysis is executed by engineers planning new additions to generation or transmission. Here, power system models are adjusted to reflect future conditions, and here too the ability to cover thousands of cases rapidly is important.
This article will stress real-time operators’ needs, but the techniques apply to planners as well.
My early career was spent working for companies that built energy management systems (EMSs). Many early EMS designs specified that security analysis be done with what we now call Line Outage Distribution Factors (LODFs), which can calculate approximate post-outage line flows very rapidly. It became obvious later on that many of the major power blackouts were caused not by line overloads but by voltage problems leading to voltage collapse. This necessitated using full AC power flow models, not LODF calculations, for security analysis. Now the execution of thousands of AC power flow cases became a real impediment to providing operators with a fast analysis of how secure their system was. All those AC power flow cases simply took too long to calculate.
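The LODF idea can be stated in one line: the approximate post-outage flow on a monitored line is its pre-outage flow plus a precomputed distribution factor times the flow the outaged line was carrying. A minimal Python sketch, with the numbers invented purely for illustration (real factors are precomputed from the network model):

```python
def post_outage_flow(f_l, f_k, lodf_lk):
    """Approximate flow on monitored line l after line k is outaged.

    The flow f_k that line k carried redistributes onto line l in
    proportion to the precomputed factor LODF[l][k].  Fast, but a
    linear approximation: it says nothing about bus voltages.
    """
    return f_l + lodf_lk * f_k

# Hypothetical numbers: line l at 300 MW, line k at 200 MW, and 40%
# of k's flow assumed to shift onto l when k trips.
print(post_outage_flow(300.0, 200.0, 0.4))  # 380.0
```

The speed comes from the factors being computed once, offline; each contingency then costs only one multiply-add per monitored line, which is exactly why LODFs cannot reveal the voltage problems described above.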
I left industry for the University of Minnesota in 1989. Later, in the early 1990s, a graduate student and I were funded by EPRI to explore the use of supercomputers in performing security analysis. At that time there was a large CRAY supercomputer on campus (a Cray X-MP-EN/4-64), and we were able to obtain funding to give us an account to use it through the campus network.
The secret to the speed of these supercomputers was their ability to do vector processing, in which one vector of numbers is multiplied by another vector, or by the column of a matrix: the first element of one vector is multiplied by the first element of the other and the product stored; then the second element of one is multiplied by the second element of the other, added to the first product, and the result stored; and so on down the vectors. The vector hardware in these computers was built to carry out such multiply-accumulate operations very rapidly on long vectors.
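The multiply-accumulate pattern described above is just a dot product. A minimal Python sketch of what the vector hardware pipelines in a single streamed operation:

```python
def dot(a, b):
    """Multiply-accumulate over two vectors.

    Element i of a is multiplied by element i of b, and the products
    are accumulated into a running sum -- the operation a vector
    processor streams through its pipeline for long vectors.
    """
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y   # one multiply-add per element pair
    return acc

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

The hardware advantage comes from keeping that pipeline full, which is only possible when the vectors are long and dense.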
The problem with using vector processing computers for power system calculations is that we do not need to multiply long vectors together in which every term is of interest. Rather, power system equations are built from sparse matrices, and it was by exploiting this sparsity that engineers like William Tinney, working at the Bonneville Power Administration, were able to develop reliable power flow calculations in the 1960s using “sparsity programming” techniques. If the Y matrix or the Jacobian matrix of Tinney’s Newton power flow were full (all terms non-zero), then a vector processor could help speed up the calculation. But these matrices are sparse, and therefore vector processing offers little advantage.
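To see why, consider a sketch of sparsity programming: store only the nonzero terms (here in compressed sparse row, or CSR, form) and loop only over them. The matrix below is a hypothetical 4-bus example, not taken from the article:

```python
# CSR storage for a hypothetical 4x4 Y-bus-like matrix: only the
# 8 nonzero entries are kept.
vals = [4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0, 4.0]
cols = [0,    1,    0,    1,    2,    1,    2,   3]
rowp = [0, 2, 5, 7, 8]     # row i occupies vals[rowp[i]:rowp[i+1]]

def spmv(vals, cols, rowp, x):
    """Sparse matrix-vector product y = A @ x.

    Each row's inner loop covers only its few nonzeros: short,
    irregular loops that cannot keep a long vector pipeline full.
    """
    n = len(rowp) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(rowp[i], rowp[i + 1]):
            y[i] += vals[k] * x[cols[k]]
    return y

print(spmv(vals, cols, rowp, [1.0, 1.0, 1.0, 1.0]))  # [3.0, 2.0, 3.0, 4.0]
```

In a real network matrix each row might have only three or four nonzeros regardless of system size, so the vector lengths never grow with the problem.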
What we did with our campus CRAY computer was to recast the security analysis problem so that we performed the same step of each outage-case power flow in close succession. By forming the problem so that each vector contained the corresponding element from every contingency case, we ended up with full vectors, and as a result the vector processing computer was able to run thousands of power flow outage cases very fast.
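The idea can be sketched in Python (a simplified illustration under my own assumptions, not the actual CRAY code): rather than finishing one case before starting the next, put the loop over cases innermost, so each multiply-add runs over one element drawn from every contingency case. That inner vector is dense no matter how sparse the matrix is:

```python
def spmv_all_cases(vals, cols, rowp, xs):
    """Compute y = A @ x for every contingency case at once.

    vals/cols/rowp hold one sparse matrix in CSR form; xs[c] is the
    state vector of case c.  The innermost loop over cases is the
    dense, full-length vector a CRAY (or a modern GPU) pipelines.
    """
    ncases, n = len(xs), len(rowp) - 1
    ys = [[0.0] * n for _ in range(ncases)]
    for i in range(n):                       # sparse structure: outer
        for k in range(rowp[i], rowp[i + 1]):
            v, j = vals[k], cols[k]
            for c in range(ncases):          # dense vector: inner
                ys[c][i] += v * xs[c][j]
    return ys

# Hypothetical 2x2 matrix and three contingency cases.
vals, cols, rowp = [2.0, -1.0, -1.0, 2.0], [0, 1, 0, 1], [0, 2, 4]
xs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(spmv_all_cases(vals, cols, rowp, xs))
```

The vector length now equals the number of contingency cases, which for security analysis runs into the thousands, so the pipeline stays full.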
We published a paper in IEEE Transactions, and there was a little interest, but mainly I heard utility engineers say what I had expected: “We don’t buy CRAY computers.” And indeed, the cost of a CRAY would amount to a large percentage of the cost of an entire EMS.
Enter the computer gaming industry. In the past twenty years, computer games have been developed whose demands soon outstripped the ability of graphics display electronics to render the fast-changing, high-definition images desired. Besides that, gaming programs are among the applications most demanding of raw computing power on desktop-scale machines, and they have spawned new developments such as liquid cooling systems, giving users the ability to overclock the computer, that is, to speed up the system clock so the entire system runs faster. NVIDIA, a supplier of subsystems and components for gaming systems, developed specialized cards for driving graphics displays, and an important part of these cards was a vector processor. Once this became known, people interested in scientific programming started using the NVIDIA cards’ capability for scientific calculations. This gave NVIDIA the idea of creating a card whose purpose was not to drive a graphics display at all but to provide vector processing capability at a very low price.
For the last two years we have been adapting the same security analysis techniques we developed for the CRAY computer to a PC equipped with an NVIDIA TESLA card. It is our hope eventually to show the industry a high-speed security analysis engine running on a PC with a high-speed vector processing card costing under a thousand dollars. This should open the door to running large numbers of AC power flow cases on power system models and should at last bring some ability to explore those worrisome (n-2) cases as well. A new era of speed in security analysis is about to break, both for real-time operational security analysis and for power system planners.
Bruce Wollenberg, a Life Fellow of the IEEE and a member of the National Academy of Engineering, has been a professor of electrical engineering at the University of Minnesota since 1989. He is Director of the University of Minnesota Center for Electric Energy. He is co-author of the textbook Power Generation, Operation, and Control (John Wiley), and his main interests are the application of mathematical analysis to power system operation and planning problems. Wollenberg graduated from Rensselaer Polytechnic Institute with a bachelor’s in electrical engineering in 1964 and a master’s in electric power engineering in 1966; his doctoral degree in systems engineering was conferred by the University of Pennsylvania in 1974. He worked for Leeds and Northrup Co. in North Wales, Pa., from 1966 to 1974; Power Technologies Inc. in Schenectady, N.Y., from 1974 to 1984; and Control Data Corporation Energy Management System Division in Plymouth, Minn., from 1984 to 1989.