No DMS is an Island

By Jeffrey Katz

The total effort in some Advanced Distribution Management System (ADMS) projects has been more than an epsilon above the estimates. Twenty-twenty hindsight tends to point to data management, system integration, and configuration challenges, although the latter is usually traceable to aspects of data management. This article looks at some of the causes of speed bumps in these projects, in the hope that those who are preparing the next request for proposals will take some of these points into consideration.

One project, two views

From the information technology/computer science perspective, there is a large system integration scope, a large software and hardware infrastructure component, and a concise, well-defined ADMS application. From the operational technology/electrical engineering perspective, there is a large equipment scope, a medium-size application scope, and a small component for computer hardware and infrastructure software.

Some thoughts come to mind. How can this disparity exist? Neither view is completely correct. What project scope does the utility see, endure, and pay for?

An ADMS tends to be a transformation project, not a drop-in replacement such as changing the oil filter on a car. The emphasis is on the “S”; it is a system. Cloud computing tends to reduce system integration and IT infrastructure complexities, and there has been at least one completed proof-of-concept project for an ADMS in the Cloud.

Systems integration and equipment must both be considered. The effects of hardware, the core application, configuration, and data verification, validation, and velocity tend to be additive, in that they affect each other. For everyone’s success, it may be useful for the utility to consider more of the whole picture, and to discuss the whole transformation, not just one new application, with both the ADMS and system integration (SI) vendors. As much as possible, there should be a reference architecture for the ADMS project, for the fewer the repeatable components and methods, the larger the lifetime total cost of ownership. The truth is that no two utility situations, or even use cases, for the ADMS are going to be identical. The project problems, though, tend to be more similar. The following steps seem to be useful:

  1. Qualify the request for proposal (RFP) requirements against the vendors’ capabilities (a minimal traceability sketch follows this list). Typically, thought is given to mapping the functional requirements; mapping the non-functional requirements (often difficult because people make assumptions); the organizational implications (who is affected by what the new system does); the business process implications (what is different given how the new system will operate); key metrics and timelines (what else is in motion at the same time that will be expected to fit in, and what immovable deadlines there are); documenting the known organizations involved (just knowing that someone in the utility knows certain answers to support the DMS upgrade does not mean they are available for the project, or even believe in it); risk areas (Murphy’s Law); and issue areas (what may become a problem because the quality and availability of data is assumed).
  2. Assess the client requirements against the vendors’ methods. The IT people will often call this a framework, or a high-level reference architecture. Seeing what the vendors do not do, and what is often necessary for the project to succeed but was not on the utility’s radar, can be informative. It is just as important for the vendors to identify what the utility is specifically asking for. Certainly, there will be technical components and solutions. However, there may not be enough attention paid to integration with existing systems, and to whether those connected systems and applications are identified. If the utility does not specify which integration technologies are referenced or preferred, it may be harder to operate, or to connect other systems, around the ADMS. The business needs have to be identified, because the vendors need to be sure that what they are proposing solves those needs. The lines of business need to be identified too, since everyone from planning to procurement, management to maintenance, is likely to be either involved or affected. Concurrent projects that may be in progress, or on the near horizon, need to be considered, because there may be IT enterprise architecture choices that can make other projects that work with the ADMS easier, if everyone knows they are coming. Benefits to be realized are not necessarily the same as business needs; often the needs are spoken and the benefits implied. If there are expectations that were shown to those authorizing the budget, everyone should know what those are, so that everyone can assess the success.
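
As an illustration of the first step, the sketch below shows one lightweight way to record the mapping from RFP requirements to a vendor’s claimed coverage and to surface the gaps. The requirement IDs, categories, and coverage labels are invented for illustration; this is a minimal sketch of the idea, not a prescribed method or any vendor’s actual response format.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    # Hypothetical RFP requirement record.
    req_id: str
    category: str      # e.g. "functional", "non-functional", "organizational"
    description: str

@dataclass
class VendorResponse:
    # Hypothetical mapping of requirement IDs to how a vendor claims to meet them.
    vendor: str
    coverage: dict = field(default_factory=dict)  # req_id -> "standard" | "configuration" | "custom" | "not met"

def gap_report(requirements, response):
    """List requirements the vendor leaves unaddressed or meets only with custom work."""
    gaps = []
    for req in requirements:
        how = response.coverage.get(req.req_id, "not addressed")
        if how in ("custom", "not met", "not addressed"):
            gaps.append((req.req_id, req.category, how))
    return gaps

# Example usage with invented entries.
reqs = [
    Requirement("F-01", "functional", "Fault location, isolation and service restoration"),
    Requirement("N-03", "non-functional", "Failover to the backup control center within five minutes"),
    Requirement("O-02", "organizational", "Planning department read-only access to the network model"),
]
vendor_a = VendorResponse("Vendor A", {"F-01": "standard", "N-03": "custom"})

for req_id, category, how in gap_report(reqs, vendor_a):
    print(f"{req_id} ({category}): {how}")
```

Even a table this simple makes the non-functional and organizational gaps visible before contract signing rather than after commissioning.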

Besides benefits and needs, there are likely to be business changes expected. Techniques such as Component Business Modeling help bring clarity to what actually happens now, and what will happen, so that processes that may need to be re-engineered do not come as a post-commissioning aftershock.

Training is often an afterthought, but people are the most variable part of a large project. Current system experts need to be immersed throughout the project; that bodes better than two crammed weeks at the end. The end users, and the internal tech support groups, also need to be included, not just told. No project is perfect, so transition and support needs and expectations should not end up as contract additions. New software may demand new skills from IT, and these should be developed while the ADMS project is executing. Solving an inevitable problem that drops from the sky is much harder than solving one in which the in-house staff can participate.

Building something this substantial within an existing infrastructure requires that everyone involved knows what the utility is keeping or replacing, what is needed new, and which preferences or standards are in place.

Dealing with the data deluge

First, inconsistent data can be worse than no data. Multiple sources of the same data will, after the Murphy filter, become inconsistent. Data that is in a formal data model is much easier to check, verify, extract, and re-use. That model may be buried in an Engineering Data Warehouse, a Data Lake, or any number of other terms for data stores. What is important to the utility is that the data is valuable, and that the engineering staff is likely to invest many unbudgeted hours getting it organized, which increases its value further. Therefore, an architecture in which the data is available to the whole company, and not just locked inside the ADMS, can contribute extra value to the utility in the long term. Enquire whether the ADMS has import and export capabilities, to avoid having the utility do the engineering and data science work more than once.
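
To make the multiple-sources point concrete, the sketch below compares the same attribute, a transformer’s rated kVA, as reported by two hypothetical extracts, say a GIS export and an ADMS data store export. The file names, column names, and tolerance are assumptions for illustration and do not reflect any particular product’s export format.

```python
import csv

def load_ratings(path, id_col, kva_col):
    """Read equipment id -> rated kVA from a CSV extract (column names are assumed)."""
    ratings = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ratings[row[id_col]] = float(row[kva_col])
    return ratings

def find_inconsistencies(gis, adms, tolerance=0.01):
    """Flag equipment missing from either source or whose ratings disagree."""
    issues = []
    for equip_id in sorted(set(gis) | set(adms)):
        if equip_id not in gis:
            issues.append((equip_id, "missing from GIS extract"))
        elif equip_id not in adms:
            issues.append((equip_id, "missing from ADMS extract"))
        elif abs(gis[equip_id] - adms[equip_id]) > tolerance * gis[equip_id]:
            issues.append((equip_id, f"rating mismatch: GIS {gis[equip_id]} vs ADMS {adms[equip_id]}"))
    return issues

# Hypothetical file and column names.
gis_ratings = load_ratings("gis_transformers.csv", "equipment_id", "rated_kva")
adms_ratings = load_ratings("adms_transformers.csv", "mrid", "ratedS_kva")
for equip_id, problem in find_inconsistencies(gis_ratings, adms_ratings):
    print(equip_id, problem)
```

Checks like these are trivial once the data sits in a formal model with export capabilities; the point is that someone has to run them, and reconcile the differences, before the ADMS is asked to trust the data.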

For a downloadable copy of the March 2017 eNewsletter which includes this article, please visit the IEEE Smart Grid Resource Center.

Contributors 

 


Jeff Katz is the Chief Technology Officer of the Energy and Utilities industry at IBM. He has contributed to the industry’s framework, Solution Architecture For Energy (SAFE), the IBM Innovation Jam workshops and the IBM Intelligent Utility Network initiative, and he is the primary industry liaison with IBM Research. He has presented at many conferences on smart grid architecture, innovation, and cyber security, including IEEE’s ISGT. Before joining IBM, he was the manager of the computer science department at what was at first ABB’s U.S. Corporate Research Center and then became ALSTOM’s Power Plant Laboratory. Prior to that, he was with ABB Power Generation, managing development of computer systems for nuclear power plants. He is a member of the IBM Academy of Technology and of Sigma Xi, the science honors society, and he serves on the IEEE Standards Association Standards Board.

