Computational needs of the electric grid of the future
A July 2019 Navigant report on the growth of distributed energy resources (DER) annual capacity predicts that we will reach 600 GW of total DER capacity within eight years, of which nearly 250 GW will come from distributed solar assets. Managing grid stability with such a massive influx of these highly variable energy resources is non-trivial. It requires that utilities invest significantly in the ability to effectively and reliably dispatch these resources while maintaining the grid’s transient stability, grid inertia needs, and fault levels. The North American Transmission Forum outlined the challenges posed by this influx of intermittent DER at both the planning and operational stages. These challenges echo the key next steps that the NERC DER Review outlined to meet the needs for grid stability. An excerpt from the NERC guidelines is quoted below to spur conversation on how these needs could be met with the on-demand scalability and computational capabilities provided by AWS.
The NERC report states: “Data requirements and sharing of information across the transmission-distribution (T-D) interface should be further evaluated to allow for adequate assessment of future DER deployments. The important near-term issue is sharing of information to facilitate accurate modeling for transmission planning and operations. At some point, additional consideration may be needed for stability, protection, forecasting, reactive needs, and real-time estimates for operating needs.” Utilities can most efficiently meet the need for dynamic load flow analyses and state estimation for systems with ever-increasing variable DER by running these workloads in the cloud. The Department of Energy (DOE) funded research to develop a “Visualization and Analytics of Distribution Systems with Deep Penetration of Distributed Energy Resources” (VADER) platform, which was built on AWS. All utilities will need to rapidly deploy, extend, and mature platforms like VADER in the next few years. AWS endeavors to lead the industry in commercializing such models so that they are readily deployable by utilities.
The NERC report calls out the modeling needs for such a distribution grid. “Based on reliability considerations for modeling purposes, generation from DER should not be netted with load as penetration increases. Load and DER should be explicitly modeled in
- Steady-state power flow and short-circuit studies
- Dynamic disturbance ride-through studies and transient stability studies for Bulk Power System (BPS) planning with a level of detail that is appropriate to represent the aggregate impact of DER on the modeling results over a 5–10 year planning horizon.”
Running these highly complex load flows and dynamic state estimations in cloud infrastructure is the technically, operationally, and economically prudent approach. As state estimation becomes available as an analytics function from many advanced distribution management system (ADMS) vendors, it will become increasingly necessary to run some of these ADMS components in the cloud as well.
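To make the steady-state power flow studies mentioned above concrete, here is a minimal sketch of a DC power flow, the linearized form of the steady-state load flow problem, on a toy three-bus network. All bus, line, and injection values are hypothetical illustrations, not data from any utility system; a production study would use a full AC formulation over thousands of buses, which is where cloud-scale compute matters.

```python
import numpy as np

# Hypothetical 3-bus network; bus 0 is the slack bus.
# Lines: (from_bus, to_bus, susceptance in per-unit)
lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 2, 5.0)]

# Net injections (generation minus load) in per-unit at buses 1 and 2.
p_injection = np.array([0.5, -0.9])  # bus 1 injects, bus 2 is a net load

# Build the nodal susceptance matrix B.
n = 3
B = np.zeros((n, n))
for f, t, b in lines:
    B[f, f] += b
    B[t, t] += b
    B[f, t] -= b
    B[t, f] -= b

# Drop the slack bus row/column and solve B_red * theta = P
# for the bus voltage angles (radians); slack angle is 0.
B_red = B[1:, 1:]
theta = np.concatenate(([0.0], np.linalg.solve(B_red, p_injection)))

# Line flows in the DC approximation: P_ft = b * (theta_f - theta_t)
flows = {(f, t): b * (theta[f] - theta[t]) for f, t, b in lines}
for (f, t), p in flows.items():
    print(f"line {f}->{t}: {p:+.3f} p.u.")
```

The slack bus absorbs the system imbalance, so the flows leaving bus 0 sum to the negative of the other buses’ net injections, which is a quick sanity check on any power flow result.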
The report calls out that “dynamic models for different DER technologies are available and should presently be used to model the evolving interconnection requirements and related performance requirements.” As numerous examples across the industry show, these models are best built in the cloud. By using AWS services like Amazon SageMaker, utility engineers can easily benefit from the latest data science and modeling techniques. Using AWS artificial intelligence and machine learning to develop these dynamic models equips utility operations to understand and forecast grid performance. APN Partners like Opus One, Enbala, and AutoGrid offer cloud-native solutions in this area.
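The forecasting side of such dynamic models can be sketched in a few lines. The example below trains a simple autoregressive predictor of next-hour solar output on synthetic data; it is a stand-in for the managed model training a service like Amazon SageMaker would host, and the data, lag length, and model choice are all illustrative assumptions rather than anything from the NERC report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic hourly solar output (per-unit) for 30 days: a daytime
# bell curve scaled by cloud-driven noise. Purely illustrative data.
hours = np.arange(30 * 24)
clear_sky = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)
output = clear_sky * rng.uniform(0.6, 1.0, size=hours.size)

# Autoregressive forecaster: predict the next hour from the previous
# 24 hours using ordinary least squares.
lag = 24
X = np.stack([output[i : i + lag] for i in range(output.size - lag)])
y = output[lag:]
split = int(0.8 * y.size)  # train on the first 80%, test on the rest
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)

pred = X[split:] @ coef
mae = np.mean(np.abs(pred - y[split:]))
print(f"hold-out mean absolute error: {mae:.3f} p.u.")
```

Even this naive model beats a zero forecast on the hold-out window because solar output is strongly diurnal; the operational versions of such forecasters add weather features and retrain continuously, which is why elastic cloud compute is a natural fit.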
Modeling for the future with cloud compute agility
The US is fast approaching an inflection point at which a grid based on an ever-increasing share of distributed energy resources cannot be modeled, or safely and efficiently run, within the economic and technical limitations of on-premises data center resources. In a paper published in Renewable and Sustainable Energy Reviews, the authors provide a detailed analysis of why current systems and rules that take days to run analyses are unsuitable for a grid led by variable renewable energy (VRE). “In long-term energy models, which are usually used to define the composition of and pathways to a future energy system, the temporal variability is often underrepresented. A too coarse time-step can give poor estimation of the operation of the system, leading to unfavorable investments, overestimation of the VRE share and an underestimation of the costs.” A cloud-based implementation of these modeling tools can provide the agile compute services required to deliver dynamic modeling results in a timeframe that is both useful and cost effective.
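The overestimation effect the authors describe can be shown with a toy calculation (the numbers below are illustrative, not from the cited paper): comparing a day’s solar energy against a day’s demand in aggregate ignores that solar arrives in a midday peak, so a daily-resolution model credits solar with serving demand it cannot actually reach without storage.

```python
import numpy as np

# Illustrative inputs: one day of hourly solar output (per-unit,
# daytime bell curve) and a flat demand of 0.3 p.u., no storage.
hours = np.arange(24)
solar = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)
demand = np.full(24, 0.3)

# Hourly model: solar serves demand hour by hour; surplus beyond
# demand in any hour is curtailed (cannot be shifted in time).
served_hourly = np.minimum(solar, demand).sum()
share_hourly = served_hourly / demand.sum()

# Coarse model: compare daily totals, implicitly assuming solar
# energy from any hour can serve any other hour's demand.
share_daily = min(solar.sum(), demand.sum()) / demand.sum()

print(f"hourly-resolution VRE share: {share_hourly:.0%}")
print(f"daily-resolution VRE share:  {share_daily:.0%}")
```

The coarse model reports a 100% VRE share while the hourly model reports under half, which is exactly the kind of error that makes fine time-steps, and the compute to run them quickly, essential.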
Renewables pass key tipping point
Data from the US Energy Information Administration showed that last year the US consumed more renewable energy than coal for the first time ever. Parts of Europe and Australia are already there. The UK has generated more energy from renewables than from fossil fuels in each of the last three quarters. APN Partners like Reactive Technologies are quantifying the impact of these changes on grid inertia, a key factor in grid stability. All aspects of the grid, including forecasting, availability, and dispatch of these resources, will need a compute environment that is secure, resilient, infinitely scalable, and available on demand. Twelve years ago, the US Department of Energy published a paper that called out the need for active grid management (AGM) using real-time distribution analysis. Grid complexity has grown since then, and today’s AGM will need the scalability and extensibility of the cloud to perform. Visit the AWS Power & Utilities page for more information.