By James Mater
Once a standard for smart grid interoperation has been adopted by an official international standards development organization (SDO), such as IEEE or IEC, products developed to that standard should be able to communicate easily and work together, i.e., interoperate correctly with no additional integration engineering or debugging, a property commonly called “plug and play”. Unfortunately, the approval of a communication standard such as OpenADR, IEEE 2030.5, or IEC 61850 is only the starting point.
Quality of a standard
The key quality attributes that determine whether a standard is likely to mature into something approaching “plug and play” are the specificity of the standard itself, the amount of optionality it allows, and industry agreement on what constitutes conformance to the standard.
One major issue in technical software standards is the temptation for the writers to simply assume that others understand what is meant by a particular function defined in the specification. That assumption leads to differing interpretations, which in turn cause interoperability problems. The world of software implementation is replete with examples. Thus, the more precise the technical specification, the more likely the standard is to mature into something useful.
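To make the point concrete, here is a deliberately contrived sketch (the message format, field names, and vendor behaviors are all invented for illustration) of how an underspecified field produces two “conforming” but incompatible implementations:

```python
# Hypothetical example: a spec defines an "interval" field on an event
# message but never states its units. Both vendors parse the same wire
# value; each picks a different, equally defensible interpretation.

RAW_MESSAGE = {"event_id": "evt-1", "interval": 15}

def vendor_a_duration_seconds(msg):
    # Vendor A assumes the field is already expressed in seconds.
    return msg["interval"]

def vendor_b_duration_seconds(msg):
    # Vendor B assumes the field is expressed in minutes and converts.
    return msg["interval"] * 60

a = vendor_a_duration_seconds(RAW_MESSAGE)
b = vendor_b_duration_seconds(RAW_MESSAGE)
print(a, b)  # 15 vs 900: the two products disagree by a factor of 60
```

Neither vendor has violated the written text, yet an event scheduled through one and executed by the other runs for the wrong duration, exactly the class of defect that precise specification language prevents.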
There is built-in tension between being prescriptive in writing a standard and providing a wide latitude to implementers to choose from a range of options to better meet application requirements. Unless there is a clear set of core functions required and validated for each implementation, there is a risk that different vendors will implement differing options that can lead to interoperability issues.
Lastly, a critical indicator of a standard’s likely success is the industry’s ability to agree on what is called a “Protocol Implementation Conformance Statement”, or PICS. A PICS is a defined subset of the standard’s functionality and becomes the basis for a certification program that verifies implementations conform to it. As an indicator of whether a standard will become useful and deployable at scale, this is perhaps the clearest one; if it is missing, that should raise a red flag.
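In spirit, a PICS is a checklist of mandatory and optional capabilities that a certification program audits an implementation against. The following minimal sketch (the capability names and mandatory/optional assignments are invented, not drawn from any real PICS) shows the basic check a conformance program performs:

```python
# Hypothetical PICS: a checklist mapping each capability of the standard
# to its status. A real PICS enumerates the standard's actual clauses.
PICS = {
    "register_endpoint": "mandatory",
    "report_telemetry":  "mandatory",
    "schedule_event":    "mandatory",
    "encrypt_transport": "optional",
}

def check_conformance(declared_capabilities):
    """Return the mandatory PICS items the implementation fails to declare."""
    return sorted(
        item for item, status in PICS.items()
        if status == "mandatory" and item not in declared_capabilities
    )

# A vendor declaring only two of the three mandatory items fails:
missing = check_conformance({"register_endpoint", "report_telemetry"})
print(missing)  # ['schedule_event']
```

The value of an industry-agreed PICS is precisely that the mandatory column is the same for every vendor, so certification means the same thing across products.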
There are different flavors of certification programs, and their contribution to interoperability and “plug and play” differs from one flavor to another. Self-certification programs can be effective for more mature standards if there is a rigorous set of test specifications, test tools, and an independent results-validation process.
Conformance certification based on a detailed industry PICS for a standard and conducted by a third-party test lab is one of the best indicators of the interoperability of products that implement the specific standard.
Finally, the most thorough and effective certification programs include both conformance and interoperability testing. While conformance testing typically validates that the industry PICS was correctly implemented, it does not validate that products will interoperate with other products using the standard. Interoperability can be achieved the hard way – i.e., issues found in the field during integration testing – or through so-called “plugfests” or “interop” events in which vendors find out where the interoperability issues occur so they can be fixed before deployment in the field. As the ecosystem matures, interoperability-specific test tools can shorten the process of finding and fixing interoperability problems.
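What a plugfest produces, in essence, is a pairwise interoperability matrix: every vendor pairing runs a shared scenario, and the failures show exactly which combinations need fixing before field deployment. A toy sketch (vendor names, the “units” field, and the pass/fail rule are all invented for illustration):

```python
from itertools import combinations

# Hypothetical plugfest: run each vendor pair through one shared scenario
# and record pass/fail, building the interoperability matrix.

def run_scenario(client, server):
    # Stand-in for a real end-to-end test: here the pair interoperates
    # only if both sides use the same interpretation of a shared field.
    return client["units"] == server["units"]

vendors = {
    "A": {"units": "seconds"},
    "B": {"units": "minutes"},
    "C": {"units": "seconds"},
}

matrix = {
    (x, y): run_scenario(vendors[x], vendors[y])
    for x, y in combinations(sorted(vendors), 2)
}
print(matrix)  # {('A', 'B'): False, ('A', 'C'): True, ('B', 'C'): False}
```

Note that every vendor here could individually pass a conformance test; the matrix exposes the pairing problems that conformance testing alone cannot, which is why the two kinds of testing are complementary.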
Conformance and interoperability testing are complementary; together they accelerate a standard’s maturation in the market. Key indicators of a standard’s viability are a formal conformance test program, offered by an industry trade alliance or another entity, along with customers who require certification of the products they buy. The more vendors that submit products for this testing, and the more customers that require it, the more confidence one can have in the ultimate usefulness of the standard.
As noted, the best indicator of a successful standard is broad market adoption. In the absence of that evidence, there are at least three key indicators of likely success:
- A formal industry trade alliance for the standard exists and is thriving. This demonstrates industry commitment and investment, suggesting that the standard will be successfully deployed and evolved as needed. The more members and funding the alliance has, the more likely the standard is to reach critical mass in the market. Good examples are the Wi-Fi Alliance, the Bluetooth SIG, and the OpenADR Alliance.
- Pilot, demonstration and research programs that demonstrate use of the standard have been funded and completed or are in process at leading utilities or labs. These should have published results showing the efficacy of the standard for the use cases.
- A strong indicator of market adoption is the formal mandate of a standard by regulators or major utilities. Such a mandate almost guarantees that the industry will respond by supplying products that incorporate the standard as specified by the regulatory body or by the utilities’ purchasing requirements.
In summary, several key indicators can be used to assess the likelihood that a new smart grid standard will become successful and useful for the industry. These include the quality of the standard itself, the availability of a recognized test and certification program, and the existence of an industry trade alliance to evolve and promote the standard. The use of the standard in pilot and demonstration projects, and/or its mandated use by regulators and leading utilities, can provide additional confidence in a given standard’s viability.
James Mater founded and has held several executive positions at QualityLogic Inc. from June 1994 to the present. He is currently co-founder and general manager of smart grid. He is a member of the GridWise Architecture Council, founder and past chair of Smart Grid NW, an original member of the Test and Certification Committee of the Smart Grid Interoperability Panel, and a prolific author and lecturer on interoperability and smart grid standards. From 2001 to October 2008, James led QualityLogic as President and CEO. From 1994 to 1999, he founded and built Revision Labs, which merged with Genoa Technologies in 1999 to become QualityLogic. Prior to QualityLogic, James held product management roles at Tektronix, Floating Point Systems, Sidereal, and the Solar Division of International Harvester. He is a graduate of Reed College and the Wharton School, University of Pennsylvania.