Interview with Dr. Mladen Kezunovic
Dr. Kezunovic holds several leadership roles at his university: Director, Smart Grid Center; Site Director, NSF Power Systems Engineering Research Center; and Director, Power Systems Control and Protection Lab. As the Principal Consultant of XpertPower™ Associates, he has provided consulting services to over 50 utilities and vendors worldwide over the past 25 years. He has been a Principal Investigator on over 100 R&D projects, published more than 550 papers, and given over 100 invited lectures, short courses and seminars around the world. He is an IEEE Fellow and Distinguished Speaker, a CIGRE Honorary Member and Fellow, and a Registered Professional Engineer in Texas. He is the recipient of the inaugural 2011 IEEE Educational Activities Board Standards Education Award "for educating students and engineers about the importance and benefits of interoperability standards" and of the 2013 CIGRE Technical Committee Award for "remarkable technical contribution to the study committee B5, protection and automation."
In this interview, Dr. Kezunovic answers questions regarding his IEEE Smart Grid webinar. To view this webinar on-demand, click here.
QUESTION: Knowing how far behind we are in terms of adopting technology, what needs to happen to accelerate that adoption so we can proceed with the presented framework?
We need to develop testing tools that allow one to perform various types of tests throughout the life-cycle management of a given technology: acceptance, commissioning, routine maintenance, and troubleshooting.
What is BIL?
Basic Insulation Level, the standardized impulse-voltage withstand level of equipment insulation. You may search online for further explanation.
What was the approximate increase in data (in megabytes or gigabytes) in Example-1, where historical lightning data and historical weather data were added to BIL_old? Is this really a Big Data problem, or just a new model?
We are dealing with historical data, which can quickly build up to terabytes. You may look up NOAA and Vaisala, sources of weather and lightning data respectively, to learn more about the sizes involved. Scaling the data analytics to ingest such large volumes of data is a challenge.
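To make the "terabytes" claim concrete, here is a back-of-envelope estimate of the volume of a historical gridded-weather archive. Every figure in this sketch is an illustrative assumption, not a NOAA or Vaisala specification:

```python
# Back-of-envelope estimate of historical gridded-weather data volume.
# All figures below are illustrative assumptions, not NOAA/Vaisala specs.

grid_cells   = 1_000_000   # assumed grid cells covering a service territory
variables    = 10          # assumed fields: temperature, wind, humidity, ...
bytes_each   = 4           # one 32-bit float per value
steps_per_yr = 24 * 365    # hourly snapshots
years        = 20          # assumed archive depth

total_bytes = grid_cells * variables * bytes_each * steps_per_yr * years
print(f"~{total_bytes / 1e12:.1f} TB")  # ~7.0 TB
```

Even with these modest assumptions the raw archive reaches several terabytes, which is why ingestion and storage scaling become real engineering concerns.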
With the increasing application of smart grid technologies, managing data will be a huge task for utilities. How can such large volumes of data be stored and managed?
It depends on the purpose and expected benefits. There are many options for storing and managing data, from doing it in-house to using external services. In all cases, historical data should be preserved, since predictive techniques will definitely benefit from as long a historical record as it is feasible to store.
In the slide on causes of outages, there was a curve labeled "unclassified". Are there classified causes of outage? Why are they classified? Is it a matter of cybersecurity?
I am not sure which slide this relates to, but in general we are not referring to data security issues. The terms classified and unclassified are used in the context of supervised and unsupervised learning: a "classified" outage record carries a known cause label, while an "unclassified" one does not.
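The distinction can be sketched in a few lines of code. This is a toy illustration, not the webinar's actual method: the records, features, and cause labels are hypothetical, and the "classifier" is a deliberately minimal 1-nearest-neighbour assignment:

```python
# Toy sketch of "classified" vs "unclassified" outage records.
# All records and labels are hypothetical.

labeled = [  # "classified" outages: cause known -> usable for supervised learning
    ({"wind_kmh": 95, "lightning": 0}, "vegetation"),
    ({"wind_kmh": 20, "lightning": 1}, "lightning"),
]
unlabeled = [  # "unclassified" outages: cause unknown
    {"wind_kmh": 90, "lightning": 0},
]

def nearest_cause(record, training):
    """Assign a cause label via 1-nearest-neighbour (a minimal supervised model)."""
    def dist(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)
    return min(training, key=lambda pair: dist(record, pair[0]))[1]

for rec in unlabeled:
    print(rec, "->", nearest_cause(rec, labeled))  # -> vegetation
```

In practice, unclassified records can either be labeled by a model trained on the classified ones (as above) or grouped by unsupervised clustering when no labels exist at all.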
Do you think that power system protection will rely more on the big data to detect faults in the future?
Relaying has the role of disconnecting (isolating) faults as soon as possible to preserve safety and operational reliability. This role of conventional protection will always be needed. Big Data may be used to predict where faults are likely to occur under certain environmental (weather) conditions. That prediction helps in developing more efficient mitigation strategies, including restoration management.
How do you ensure that you've selected the most appropriate variables to determine the risks for cost effective risk mitigation?
Every new effort in risk analysis has to be evaluated against an existing approach or data set, or else it will be hard to establish an absolute risk reference. Hence it is extremely important to understand the phenomena associated with the hazard and vulnerability that affect the risk, and to be able to test the assumptions using field data. In the case of risk prediction associated with tree trimming or insulator replacement, it is relatively easy to validate the risk assumptions by looking at what has actually happened in the system, in terms of failures/faults, at the locations with high predicted risk.
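The validation step described above can be sketched as a simple comparison of predicted high-risk locations against observed failures from field data. The location identifiers and sets here are hypothetical, and the metrics are standard precision/recall rather than any specific utility's scoring method:

```python
# Hedged sketch: validating risk predictions against observed field failures.
# Location IDs and set contents are hypothetical.

predicted_high_risk = {"feeder_02", "feeder_07", "feeder_12", "feeder_31"}
observed_failures   = {"feeder_12", "feeder_31", "feeder_44"}

hits = predicted_high_risk & observed_failures          # confirmed predictions
precision = len(hits) / len(predicted_high_risk)        # predictions that came true
recall    = len(hits) / len(observed_failures)          # failures that were anticipated

print(f"precision={precision:.2f}, recall={recall:.2f}")  # precision=0.50, recall=0.67
```

A high precision suggests the selected risk variables capture the hazard well; a low recall flags failures the model did not anticipate, pointing to variables that may be missing from the risk assessment.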