Interview with John McDonald on Technological Advancements Beyond Smart Grid, Part 1
John D. McDonald, P.E., is Smart Grid Business Development Leader for GE Power’s Grid Solutions business. John has 44 years of experience in the electric utility transmission and distribution industry. John received his B.S.E.E. and M.S.E.E. (Power Engineering) degrees from Purdue University, and an M.B.A. (Finance) degree from the University of California-Berkeley. John is a Life Fellow of IEEE, and was awarded the IEEE Millennium Medal, the IEEE Power & Energy Society (PES) Excellence in Power Distribution Engineering Award, the IEEE PES Substations Committee Distinguished Service Award, and the IEEE PES Meritorious Service Award. John is Past President of the IEEE PES, the VP for Technical Activities for the US National Committee (USNC) of CIGRE, the Past Chair of the IEEE PES Substations Committee, and the IEEE Division VII Past Director. John was on the Board of Governors of the IEEE-SA (Standards Association). John was elected to Chair the NIST Smart Grid Interoperability Panel (SGIP) Board from 2010-2014. Currently, John is on the NIST Smart Grid Advisory Committee (SGAC) to provide updates on NIST Smart Grid activities. John received the 2009 Outstanding Electrical and Computer Engineer Award from Purdue University. John teaches a Smart Grid course at the Georgia Institute of Technology, a Smart Grid course for GE, and Smart Grid courses for various IEEE PES local chapters as an IEEE PES Distinguished Lecturer. John has published one hundred papers and articles and has co-authored five books.
In this interview, John answers questions from the first two sessions of his tutorial, "Technological Advancements Beyond Smart Grid." For more details regarding these questions, please view the tutorial on demand on the IEEE Smart Grid Resource Center.
What is the Field Grid Agent in the integrations diagram? (slide 13)
On slide 13 you see several Field Grid Agents throughout the grid enterprise. From a functional point of view, the agent grid knows not only about agents, but also about their computational requirements (e.g., how they can be broken up into processes so they can be distributed across multiple computers) and about available computational (and other) resources. The agent grid provides a unified, heterogeneous distributed computing environment in which computing resources are seamlessly linked. In addition, the agent grid extends the idea "upward" to agents. These agents play the roles of applications whose computations can be distributed within this distributed computing environment, of resources that can be used within this environment, and of infrastructure components of this environment. At the same time, there is an interface between the computational and agent layers, so that at least some agents, e.g., those that do load balancing, can operate on the computing-level grid. Agents also serve as wrappers of resources (and mediators between them) in these architectures. The assumption is that "agent technology" (viewed broadly) provides mechanisms for late binding, reconfiguration, load-balancing optimizations, achieving and maintaining systemic properties such as survivability and scalability, and coordinating teams and organizations.
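The role the agent grid plays, tracking agents, their process decompositions, and resource loads, can be sketched as follows. This is a minimal illustrative sketch, not GE's implementation; all class names, method names, and node names are assumptions introduced for the example.

```python
# Illustrative sketch of an agent grid: it tracks compute resources and
# distributes an agent's processes to the least-loaded node (load balancing).
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    capacity: int                 # number of processes this node can host
    assigned: list = field(default_factory=list)

    @property
    def load(self):
        return len(self.assigned) / self.capacity

class AgentGrid:
    """Knows about agents, their process decomposition, and resources."""
    def __init__(self, resources):
        self.resources = resources

    def deploy(self, agent_name, processes):
        """Distribute an agent's processes across the least-loaded nodes."""
        placement = {}
        for proc in processes:
            node = min(self.resources, key=lambda r: r.load)
            node.assigned.append((agent_name, proc))
            placement[proc] = node.name
        return placement

# Hypothetical agent broken into three processes, spread over two nodes
grid = AgentGrid([Resource("node-a", 4), Resource("node-b", 4)])
placement = grid.deploy("volt-var-agent", ["estimator", "optimizer", "reporter"])
print(placement)  # processes alternate between node-a and node-b
```

The same registry could also host the load-balancing agent itself, which is the "interface between the computational and agent layers" mentioned above.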
Who will own the SG data? Squabbles over who owns the data might delay adoption.
Consumer Smart Grid data (e.g., smart meter readings) are owned by the consumer. The electric utility uses the data to generate customers' bills but cannot give or sell personally identifiable data to any third party. What constitutes "personal information" on the Smart Grid is the subject of much discussion.
Personal information is defined by the Freedom of Information and Protection of Privacy Act (FIPPA) and the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) as "recorded information about an identifiable individual." Once it becomes apparent that a Smart Grid technology, system or project will involve the collection of personal information, either directly or through some form of data linkage, privacy considerations immediately apply. Digital smart meter data, like all digital data, is vulnerable to unauthorized access, copying, matching, merging and widespread dissemination.
What are the practical steps a utility can take to adopt the technologies and methods you are advocating?
Whenever a utility adopts new technology there are two important steps to be taken. One is to examine current business processes. To effectively use the new technology, the utility will need to revise some, eliminate some, and prepare some new ones. The second step is to examine the organizational structure and employee skill sets. Normally, changes are needed to the organizational structure and new skill sets are needed by employees.
One example: utilities used to deploy separate microprocessor-based boxes for protection, communications and Remote Terminal Unit (RTU) functions, and had separate groups in their organizations to support the three different boxes. Technology evolved so that these three functions are now in the same box. Utilities have transitioned from three separate groups to one group, and a new skill set of "super IED specialist" was created.
Will utilities need to be concerned about increased liability created by increased monitoring and diagnostics?
Increased monitoring and diagnostics results in more data, and more analytics to analyze the data to provide value to the utility. Many of these monitoring and diagnostic system platforms have cloud components. Increased liability can result from personal identifiable data being given to a third party, and from a cyber security breach with cloud-based platforms. The manufacturers and utilities continue to emphasize privacy of information, cyber security and physical security in every product development cycle and every project.
Could you discuss the business case for postponing an asset investment by two years, resulting in a 10% cost reduction?
Deferring a capital expenditure is a balancing act between inflation and the present worth of money. By waiting, the capital expenditure will cost more due to inflation, but that future cost is worth less in today's dollars. For example, deferring a $1,000 investment two years saves $91, or approximately 10% of the expenditure.
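The arithmetic behind that example can be sketched as follows. The interview does not state the rates used, so the 3% inflation and 8% discount rate below are assumptions; with them, the savings come to about $90, close to the $91 / roughly 10% figure cited.

```python
# Present-worth arithmetic for deferring a capital expenditure.
# Rates are assumed for illustration; the interview does not specify them.
inflation = 0.03    # assumed annual inflation rate
discount = 0.08     # assumed annual discount rate (cost of capital)
years = 2
cost_today = 1000.0

# Spending in two years costs more nominally due to inflation...
future_cost = cost_today * (1 + inflation) ** years
# ...but is worth less in today's dollars when discounted back.
present_worth = future_cost / (1 + discount) ** years

savings = cost_today - present_worth
print(f"future cost:   ${future_cost:.2f}")    # $1060.90
print(f"present worth: ${present_worth:.2f}")  # $909.55
print(f"savings:       ${savings:.2f}")        # $90.45, roughly 10%
```

The savings exist because the discount rate exceeds inflation; if inflation outpaced the utility's cost of capital, deferral would cost money instead.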
Could you also elaborate on the benefits of PMU applications?
A PMU provides 30 to 60 measurements per second, compared to SCADA data that is sampled every 2 to 4 seconds. The PMU measurements provide much more insight into the behavior of the grid. Benefits of PMU applications include:
- Enhancing situational awareness in real time
- Setting rate-of-change-of-frequency (df/dt) relays for power system emergency control schemes
- Forensic analysis of faults and disturbances, including determining the exact instant of a fault and the fault clearing time
- Detection and analysis of oscillations in the power system
- Using the angular separation observed through the PMU to validate the network model in SCADA as well as in off-line simulation software applications
- Validation and determination of the need for Special Protection Schemes (SPS)
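To illustrate the df/dt application, the sketch below estimates the rate of change of frequency from a 60-reports-per-second PMU frequency stream, which is the quantity a df/dt relay scheme acts on. The function name and the sample values are assumptions for the example, not from the tutorial.

```python
# Illustrative sketch: estimating rate of change of frequency (df/dt)
# from consecutive PMU frequency reports at a 60 Hz reporting rate.
def rocof(freq_samples, reporting_rate_hz=60):
    """Return df/dt values (Hz/s) between consecutive frequency reports."""
    dt = 1.0 / reporting_rate_hz
    return [(f2 - f1) / dt for f1, f2 in zip(freq_samples, freq_samples[1:])]

# A hypothetical generation-loss event: frequency decaying from 60.00 Hz,
# one report every 1/60 s.
samples = [60.00, 59.995, 59.99, 59.985]
rates = rocof(samples)
print(rates)  # each step: -0.005 Hz per 1/60 s, i.e., -0.3 Hz/s
```

A SCADA scan every 2 to 4 seconds would see only the endpoints of this decay; the PMU stream resolves the trajectory itself, which is what makes relay setting and oscillation analysis possible.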
What percentage of utilities have a 3-phase network model?
Today all utilities have a 3-phase network model. Their primary use of the model is for an off-line software modeling and analysis program, such as CYME, MILSOFT (Windmil), DNV GL (SynerGEE Electric), and DIgSILENT (PowerFactory). Utilities implementing an Advanced Distribution Management System (ADMS) have a 3-phase network model in their Geographic Information System (GIS), Outage Management System (OMS) and ADMS.
With so much data coming into the control center, are there any good practices for alarming in distribution?
For distribution applications it is critical to utilize intelligent alarm processing. Primary alarms are those that need the attention of the operator. Secondary alarms are the result of a primary alarm, and do not need the operator’s attention. For every 10 alarms, typically 8 to 9 of the alarms are secondary alarms. Effective alarm filtering can ease the burden on the operator of reviewing the alarms in the alarm summary display. Intelligent alarm processing techniques include areas of responsibility partitioning / routing, alarm point priority filtering, timed alarm suppression, and knowledge-based alarm filtering (direct and indirect linkages). In addition, status points should have multiple levels of memory to capture multiple state changes between master station scans. These status points with memory are called “status with memory (SWM)” points or “multiple change detect (MCD)” points. More information is contained in my article titled “Overwhelmed by Alarms: Blackout Puts Filtering and Suppression Technologies in the Spotlight” in Electricity Today magazine, Issue 8, Volume 15, 2003. I was asked to write this article after the 2003 blackout.
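The knowledge-based filtering with direct linkages described above can be sketched as follows. This is a minimal assumed illustration, not the design from the article; the alarm names and linkage field are hypothetical.

```python
# Minimal sketch of knowledge-based alarm filtering: secondary alarms with
# a direct linkage to an active primary alarm are suppressed, so the
# operator's alarm summary shows only the primary.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alarm:
    point: str
    caused_by: Optional[str] = None  # direct linkage to a primary alarm point

def filter_alarms(alarms):
    """Return only the alarms an operator needs to see (primaries)."""
    primary_points = {a.point for a in alarms if a.caused_by is None}
    return [a for a in alarms
            if a.caused_by is None or a.caused_by not in primary_points]

# One breaker trip (primary) generating two consequential (secondary) alarms
incoming = [
    Alarm("BKR-12 TRIP"),
    Alarm("FDR-12 LOW VOLTS", caused_by="BKR-12 TRIP"),
    Alarm("FDR-12 LOAD LOSS", caused_by="BKR-12 TRIP"),
]
shown = filter_alarms(incoming)
print([a.point for a in shown])  # ['BKR-12 TRIP']
```

With the typical 8-to-9-out-of-10 secondary-alarm ratio cited above, this kind of filtering cuts the operator's alarm summary by roughly an order of magnitude during a disturbance.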
Do you believe a DERMS should have a network model or should the ADMS be the master and only real-time network model that applications would use?
Since the DERMS is a new development, it should have a network model and be a standalone system as its functionality matures with each field installation. In the future, the DERMS functionality may be integrated into the ADMS.
What is a good IEEE Standard, paper, etc., for an OT/IT network configuration for the design of Monitoring Network to alert Substation maintenance personnel in utilities?
I am not aware of any IEEE or IEC standards, but I have listed 6 articles and 1 book I have authored or co-authored on big data, analytics, and enterprise data management:
- McDonald, John D., “Substation Automation – IED Integration and Availability of Information”, IEEE Power & Energy, March/April 2003.
- McDonald, John D., Carrasco, Joe, Wong, Chiu, “Riverside Initiates Substation Automation, Plans SCADA and Data Warehouse”, Electricity Today, Issue 8, 2004.
- McDonald, John D., Rajagopalan, Shankar, Waizenegger, Jack R., Pardo, Fernando, “Realizing the Power of Data Marts – A Water and Power Utility Taps Nonoperational Data with a Power System Data Mart Project”, IEEE Power & Energy, May/June 2007.
- McDonald, John, “Data Marts – A Wealth of Untapped Information”, The Bridge (the magazine of Eta Kappa Nu – Electrical Engineering Honorary), Autumn 2009.
- McDonald, John, Ipakchi, Ali, "Roadmap to the Future: Integrating Substation Information and Enterprise Level Applications", Electric Energy T&D Magazine, September-October 2006 issue.
- McDonald, John, “Embracing Holistic Data Management Prepares Utilities for the 21st Century: How Power Utilities Can Best Manage Data in the Future, and Be Transformed”, 2018 TechCon North America, February 2018.
- McDonald, John, "A Holistic Approach to Becoming a Data-Driven Utility", Chapter 1 in Big Data Application in Power Systems, R. Arghandeh and Y. Zhou (eds.), Elsevier, 2018.
To view past interviews, please visit the IEEE Smart Grid Resource Center.