AWS for Industries

Near-real-time power plant optimization for energy trading and dispatching using Amazon Web Services

The declared goal of German-based energy company EWE AG is the systematic use of modern, cloud-based IT solutions in all business areas. Highly available, high-performance, and flexibly scalable IT solutions are essential to maintain competitive advantages in the current and upcoming energy markets. In particular, the short-term power-trading market is associated with considerable challenges. For example, the price information in the order book of the continuous intraday market changes many times per second for various trading products. The volatility of these products can be immense because of the highly fluctuating, weather-dependent power-production profiles and the uncertainty on the electricity market, among other factors. Successful marketing of flexibilities, especially in the fast intraday market, requires extremely active position management that must gather and process complex datasets from numerous sources. This process is becoming an increasing challenge for power traders. Systems for decision support, automation, and trading algorithms are therefore crucial to ensure smooth and profitable asset management.

The fully automated marketing and control of power plants from the cloud is unprecedented in the much-cited “digitalization of the energy industry.” The solution implemented by EWE AG, Volue Trading Solutions (Volue ASA), and Amazon Web Services (AWS) has now been in use for several years without active intervention by power traders, dispatchers, or risk controllers. It has been implemented for the marketing of renewable energies (wind, solar, and biogas) as well as thermal power plant capacities. The procedure is described in more detail below, using as an example an industrial power plant that converts gases from chemical processes into electricity.

The following sections will dive into microservice architecture, trading flow, and optimization.

Architecture of the microservice cloud-based approach

An overview of the microservice architecture and the main components for the fully automated flexibility marketing for an industrial power plant is provided in the simplified process diagram and the subsequent explanations (see figures 1 and 2).

Figure 1. Infrastructure and AWS components

The central infrastructure component is Kubernetes for scheduling containers (for example, “Trading Services” in figure 1). We use Amazon Elastic Kubernetes Service (Amazon EKS), which automatically manages the availability and scalability of Kubernetes in the cloud. Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones, facilitating an intelligent distribution of traffic within the cloud infrastructure.

Furthermore, Amazon OpenSearch Service—an open-source, distributed search and analytics suite—is in charge of near-real-time search, monitoring, and analysis of log data received from the different components and trading services. In conjunction with Fluentd/Fluent Bit for streamlining the logs, Amazon OpenSearch Service and Amazon Managed Grafana—a fully managed service for Grafana, a popular open-source analytics platform that helps you to query, visualize, and alert on your metrics, logs, and traces—facilitate monitoring and alerting as well as near-real-time dashboards of all present containers, including the trading algorithm.

Because availability and reliability are key for the underlying use case, Amazon CloudWatch Synthetics permanently supervises all provided endpoints and APIs within the infrastructure. These endpoints are published through NGINX, which serves as a webserver, and through the web framework Flask in the underlying services (typically implemented in Python at EWE AG). Each endpoint is secured by the authentication functionalities of Amazon Cognito, which provides an identity store that scales to millions of users, supports social and enterprise identity federation, and offers advanced security features. Besides these endpoints, all services are decoupled and communicate through a RabbitMQ message broker. RabbitMQ implements the Advanced Message Queuing Protocol (AMQP).
This messaging approach is used, for example, when the trading algorithm performs a trade on the energy market. In this use case, the adjusted schedule needs to be transferred to the power plant immediately. The trade information is stored in the queue by the publisher to prevent data loss and is processed as soon as the consumer, in this case the power plant, is ready.
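A minimal sketch of this publish step follows; the queue name, the trade schema, and the broker address are assumptions for illustration (EWE AG’s actual message format is not public), and pika is a common Python AMQP client:

```python
import json


def encode_trade(trade: dict) -> bytes:
    """Serialize a trade event for the message queue.

    The field names here are illustrative, not EWE AG's actual schema.
    """
    required = {"product", "quantity_mw", "price_eur_mwh", "delivery_start"}
    missing = required - trade.keys()
    if missing:
        raise ValueError(f"trade is missing fields: {sorted(missing)}")
    return json.dumps(trade).encode("utf-8")


def publish_trade(trade: dict, queue: str = "schedule-updates") -> None:
    """Publish a trade to RabbitMQ.

    A durable queue plus persistent messages prevent data loss if the
    consumer (the VPP adapter) is temporarily unavailable.
    """
    import pika  # third-party AMQP client: pip install pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=queue,
        body=encode_trade(trade),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()
```

The consumer side (the VPP adapter) would acknowledge each message only after the new schedule has been accepted, so an unprocessed trade stays in the queue.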

Trading services of the algorithmic trading framework

The following architecture (figure 2) shows the containerized services (the yellow rectangles) that are needed for optimizing and dispatching the power plant.

Figure 2. Relevant trading services for asset-based algorithmic trading

These services fulfill the criteria of reusability for other trading algorithms. Hence, the described architecture serves as a holistic trading platform and framework.

(1) The general goal of this asset-based trading algorithm is to automatically trade flexibilities of the power plant on the intraday market (EPEX SPOT and Nord Pool). For this purpose, flexibility data from the power plant is mandatory (such as power output based on the predicted temperature, allowed ramps, efficiencies, and cost information). The power plant under consideration is connected to virtual power plant (VPP) software. The incoming data is sent by the VPP service through RabbitMQ and is interpreted by the two consumers “vpp-adapter-flexibility-data” and “vpp-adapter-master-data.”

(2) Through a subsequent representational state transfer (REST) API call, the service “intraday-flexibility-api” stores the processed data in Amazon DocumentDB (with MongoDB compatibility), a fully managed native JSON document database that makes it easy and cost effective to operate critical document workloads at virtually any scale. This database is split into a hot collection (for very fast data access) and a cold collection (for archiving purposes). In addition to this POST request, the service “intraday-flexibility-api” exposes GET requests for providing optimization data.

(3) The container “asset-optimization” consumes this optimization data. The service runs every few minutes to reoptimize the plant’s schedule. It is the core component of the whole process and contains the optimization model itself. The container collects the following:

  • the flexibility data from the “intraday-flexibility-api” service
  • the latest trades from the “intraday-deal-management” service
  • the current order-book data from the “intraday-continuous-power” service

(4) After a completed optimization run, “asset-optimization” sends possible orders through RabbitMQ to the “intraday-risk-control” service, which in turn checks whether the orders are within the predefined risk management limits. The risk module then actively forwards the accepted orders to the “intraday-continuous-power” adapter using the message bus component.
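The limit check itself can be sketched as a pure function. The order schema and limit fields below are illustrative assumptions, not EWE AG’s actual risk parameters; the stop-switch status from the position management tool is included because it is also evaluated in this service:

```python
from dataclasses import dataclass


@dataclass
class RiskLimits:
    """Illustrative limits; the real risk parameters are confidential."""
    max_quantity_mw: float
    min_price_eur_mwh: float
    max_price_eur_mwh: float


def accept_order(order: dict, limits: RiskLimits, stop_switch_on: bool) -> bool:
    """Return True if an order may be forwarded to the market adapter.

    Checks mirror the text: quantity and price must stay within the
    predefined limits, and the algorithm's stop switch must be off.
    """
    if stop_switch_on:
        return False  # trading for this algorithm is halted or hibernated
    if abs(order["quantity_mw"]) > limits.max_quantity_mw:
        return False
    return limits.min_price_eur_mwh <= order["price_eur_mwh"] <= limits.max_price_eur_mwh
```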

(5) The “intraday-continuous-power” adapter uses the powerful order-implementation service Volue Algo Trader provided by Volue ASA to implement access to the intraday market. Volue Algo Trader fully automates intraday trading workflows by realizing programmatic market access to EPEX SPOT and Nord Pool. At the core, this service is a robust and powerful client-server solution combined with high-performance runtime algorithms. Its use brings significant relief to intraday trading desks, reduces trading costs, and systematically capitalizes on trading opportunities in highly volatile and complex markets.

Within the “intraday-continuous-power” adapter service, a permanent WebSocket connection to the implementation service is established for the reverse direction. When an order becomes a trade, the “intraday-continuous-power” service is notified through WebSocket and the trade information is forwarded to the “intraday-deal-management” service.
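A sketch of such a listener follows; the message format and stream URL are assumptions (Volue Algo Trader’s actual wire protocol is not public), and the websockets package is a common Python client:

```python
import json
from typing import Optional


def handle_market_message(raw: str) -> Optional[dict]:
    """Turn a raw WebSocket message into a trade event for deal management.

    Message fields are assumptions; the real wire format is not public.
    """
    message = json.loads(raw)
    if message.get("type") != "trade":
        return None  # for example, order acknowledgements are ignored here
    return {
        "trade_id": message["id"],
        "quantity_mw": message["quantity"],
        "price_eur_mwh": message["price"],
    }


async def listen(url: str = "wss://algo-trader.example/stream") -> None:
    """Keep a permanent connection open and forward completed trades."""
    import websockets  # third-party client: pip install websockets

    async with websockets.connect(url) as ws:
        async for raw in ws:
            trade = handle_market_message(raw)
            if trade is not None:
                # In the real service this is published to RabbitMQ for
                # the "intraday-deal-management" service.
                print("forward to intraday-deal-management:", trade)
```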

(6a) The “intraday-deal-management” service represents the core component of our central position management tool for all short-term trading activities and serves as a database for trades. It provides trade data that must be considered in the next optimization run.

(6b) Additionally, traders can actively monitor positions, orders, and trades through this position management tool. This tool also includes a stop-switch functionality for each trading algorithm to stop or hibernate trading activities. The status of the stop switch is considered in the “intraday-risk-control” service.

(7) In order to close the cycle, when a new trade is made, the adjusted schedule needs to be sent directly to the power plant’s instrumentation and control components, which are fully connected to the already named VPP. Therefore, a trade must be forwarded through RabbitMQ to the “vpp-adapter-backchannel” service. This service transforms a trade into a suitable format for the VPP while calculating the plant’s new schedule. In any case, this process is very time sensitive because trades can still be performed 5 minutes before physical fulfillment, and the power plant needs to adjust its power output accordingly (including possible ramps). Therefore, the connectivity to the VPP and the power plant is tested continuously through the container “vpp-adapter-heartbeat-backchannel.” If there is an error in the chain, traders will be notified automatically with a description and a recommendation for further interactions.
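The ramp handling can be illustrated with a small, purely illustrative helper that lifts neighbouring quarter hours just enough to keep step changes within the allowed ramp (real plant data and the VPP schedule format differ from this sketch):

```python
def apply_trade(schedule, trade_qh_index, quantity_mw, max_ramp_mw):
    """Adjust a quarter-hourly schedule after a trade, respecting ramp limits.

    schedule: list of MW setpoints, one per quarter hour.
    A sale raises output in the traded quarter hour; neighbouring quarter
    hours are lifted just enough so that no step between adjacent quarter
    hours exceeds max_ramp_mw. All values are illustrative.
    """
    new = list(schedule)
    new[trade_qh_index] += quantity_mw
    # Walk backwards and lift earlier quarter hours until the ramp holds.
    for i in range(trade_qh_index - 1, -1, -1):
        if new[i + 1] - new[i] > max_ramp_mw:
            new[i] = new[i + 1] - max_ramp_mw
        else:
            break
    # Walk forwards and lift later quarter hours the same way.
    for i in range(trade_qh_index + 1, len(new)):
        if new[i - 1] - new[i] > max_ramp_mw:
            new[i] = new[i - 1] - max_ramp_mw
        else:
            break
    return new
```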

(8) For back-office processes and scheduling purposes, all trades and derived information (for example, schedules and profits) must be captured in our energy trading and risk management (ETRM) system. The container “asset-optimization-booking” is responsible for collecting and calculating this information before it is pushed through a POST request to the “deal-booking” service and then to the ETRM system.

Optimization model for asset-based algorithmic trading

In the optimization process, many different technical constraints as well as market conditions must be considered to properly map the reality of the plant. The goal is to maximize the plant’s profit by adjusting the plant’s schedule with respect to its technical limits. This balance can be achieved by either increasing or decreasing the power plant’s output (that is, selling versus buying back on the market). Therefore, continuous, nonnegative variables are used for the resulting power output, sell quantity, and buy quantity. Each type of variable is limited by certain technical constraints, such as minimal and maximal power outputs, and by market constraints, such as the maximal buyable and sellable quantities. Furthermore, all variables are time indexed across the available trading horizon of the intraday market. This time indexing means that each variable exists for each quarter hour, because quarter-hourly products are the finest tradable granularity on the intraday market. In summary, a very large number of continuous, nonnegative variables is needed to describe the plant.

Additionally, the efficiencies and thus the cost structure of incremental adjustments to the plant’s schedule are mapped through linear stepwise functions. In other words, if the resulting power plant schedule lies within a certain power interval, the cost and efficiency associated with this interval must be used in the optimization. The number of intervals is not limited, but five intervals have proven to be practical. In the mathematical model, binary variables (for each interval and again for each time step) ensure that the right interval and thus the right costs will be considered. The overall costs and potential profits given by the market prices in the order book across the time axis must be observed in the objective function of the optimization problem. These factors determine the plant’s optimal schedule.
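To make the stepwise-cost idea concrete, the sketch below evaluates a single quarter hour by exhaustive search over a toy cost curve. All numbers are illustrative; the production model instead encodes the interval choice with binary variables and solves all quarter hours jointly as a MILP:

```python
def stepwise_cost(power_mw, intervals):
    """Generation cost for one quarter hour under a stepwise cost curve.

    intervals: list of (upper_bound_mw, cost_eur_per_mwh), sorted by bound.
    The whole output is priced at the cost of the interval it falls into;
    the numbers used here are illustrative, not real plant economics.
    """
    for upper_bound, cost in intervals:
        if power_mw <= upper_bound:
            return power_mw * cost * 0.25  # one quarter hour = 0.25 h
    raise ValueError("power above plant maximum")


def best_schedule_step(price_eur_mwh, intervals, p_min, p_max, step=1.0):
    """Toy search for one quarter hour: maximize revenue minus stepwise cost.

    This exhaustive scan only illustrates the trade-off that one
    binary-interval choice encodes in the real MILP.
    """
    best_p, best_profit = p_min, float("-inf")
    p = p_min
    while p <= p_max + 1e-9:
        profit = p * price_eur_mwh * 0.25 - stepwise_cost(p, intervals)
        if profit > best_profit:
            best_p, best_profit = p, profit
        p += step
    return best_p, best_profit
```

With a market price of 40 EUR/MWh and a cost curve of 30 EUR/MWh up to 50 MW but 60 EUR/MWh above, the search settles at 50 MW: producing more would push the whole output into the expensive interval.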

In total, the resulting mathematical model is a typical mixed-integer linear optimization problem (MILP). The solution space is very large, and finding the optimal solution for the given parameters is complex. Although not formally proven for this variant, the problem is presumably NP-hard, somewhat comparable to the well-known knapsack problem. (For example, Garey and Johnson, 1979, report extensively on the hardness of optimization problems.) This means it is unlikely that a polynomial-time algorithm exists that can solve any instance of the given problem.

In academic literature, this approach belongs to the category of “unit commitment” problems. (For a general research overview, see Abujarad, Mustafa, and Jamian, 2017; for a tight MILP formulation, see Morales-España, Latorre, and Ramos, 2013.) To be more precise, maximizing the profit of a single power plant is known as the “self-commitment” problem (for example, see Wood, Wollenberg, and Sheblé, 2013). We have slightly adjusted this general approach and tailored it to the use case under consideration.

The container “asset-optimization” (see figure 2) implements the optimization model in a Python service. The model is solved using a standard MILP solver. Due to the rather small problem sizes and the powerful AWS Cloud infrastructure, an optimal solution (with an allowed gap of 1 percent) can be calculated within milliseconds on a standard machine (even though the problem tends to be NP-hard). Because market prices change very quickly, the optimization is run every 5 minutes to reoptimize the plant’s schedule.

Conclusion

In large trading houses in the energy sector, it is not uncommon for several traders to be entrusted with the dispatching of power plants. This ties up resources and potential. Instead of the repetitive work of dispatching, these traders could develop additional revenue-generating strategies—for example, algorithmic, cloud-based optimizations. The basis for these optimizations is provided by the secure, reliable, and flexibly scalable infrastructure of AWS, which offers support for modern IT architectures. EWE AG and AWS see great future opportunities in automated trading. The described approach should therefore be seen as a blueprint for possible applications. Further use cases from the area of “asset-based algorithmic trading” (for example, automated wind dispatching) have already been implemented using the described reusable infrastructure or have been offered to third parties as a software-as-a-service product.

References

Abujarad, Saleh, M. W. Mustafa, and J. J. Jamian. “Recent Approaches of Unit Commitment in the Presence of Intermittent Renewable Energy Resources: A Review.” Renewable and Sustainable Energy Reviews 70, no. C (2017): 215–23. https://doi.org/10.1016/j.rser.2016.11.246.

Garey, Michael R., and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman and Company, 1979.

Morales-España, Germán, Jesus M. Latorre, and Andres Ramos. “Tight and Compact MILP Formulation for the Thermal Unit Commitment Problem.” IEEE Transactions on Power Systems 28, no. 4 (November 2013): 4897–4908. https://doi.org/10.1109/TPWRS.2013.2251373.

Wood, Allen J., Bruce F. Wollenberg, and Gerald B. Sheblé. Power Generation, Operation, and Control. 3rd ed. New York: Wiley, 2013.

Dr. Alexander Franz

Dr. Alexander Franz is a former energy trader and now works as a technical manager, heading the Trading IT Software Development department at EWE AG. His major goals at EWE AG are automation, digitalization, and “cloudification.” Moreover, he is interested in mathematical optimization, big data, and algorithmic trading and enjoys sharing his experience and knowledge.

Peter Stomberg

Peter Stomberg is an industry veteran with multiple years of experience in different management roles. Currently, he is heading the Trading IT department at EWE AG and is responsible for the company’s cloud transformation. He enjoys connecting people and technology to encourage thriving business success.

Sascha Janssen

Sascha Janssen is a Senior Solutions Architect at AWS, helping Power & Utility customers to become a digital utility. He enjoys connecting “things,” building serverless solutions, and using data to deliver deeper insights.