Their importance will only increase as the long-term trends of decarbonisation, digitalisation, and the shift towards distributed generation drive complexity and volatility. A systematic approach that harnesses modern analytics and machine learning to optimise forecast performance is necessary to remain competitive.
Forecasts play a crucial role in the energy sector as a key driver of decision-making in trading and risk management. The same holds for long-term valuation and strategic planning, as energy companies are highly exposed to external factors including commodity prices, macroeconomic variables, policy, and regulation. These factors typically explain more than two-thirds of the enterprise value of an energy company.
In energy trading, fundamental factors such as weather, flows, and balances, as well as technical indicators such as volatility, skewness, market positioning, and the hedging activities of other participants, are all in the mix of variables that drive opportunity and risk. Many of these factors are highly variable and uncertain themselves. Complexity is also increasing as renewable and distributed generation technologies penetrate the market, intensifying generation intermittency and price volatility while shifting value away from larger wholesale markets into multiple smaller, more fragmented pockets. In this environment, predicting future movements, recognising emerging patterns, and responding to them rapidly are imperative. This has been driving energy trading organisations to invest in proprietary solutions driven by data science. Thankfully, the availability of granular digital data has also been improving, and many vendors now offer specialist data products and forecasts which can be tailored to a user’s requirements.
However, there are still challenges. One is forecast variance. As forecasters have multiplied in number, so have their views, and forecasts often differ substantially across vendors. Similarly, forecasts can shift abruptly and without much explanation. In our applications, we have observed that even for well-established markets the highest quartile of forecasts for a single variable can be more than twice the average, while the lower end can be 60–70 percent below the average at any point. In the presence of such variance, it can be difficult to determine what the forecasts are really signalling. For niche markets and variables, on the other hand, small-sample bias is the more common problem, as only a handful of views are available. Here the challenge is extracting the best information from a small set that matters to the respective user.
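The kind of cross-vendor spread described above is easy to monitor systematically. As a minimal sketch (the vendor forecast values below are purely illustrative, and the article does not prescribe a particular dispersion metric), the upper and lower quartiles of the vendor panel can be compared against the panel mean:

```python
import statistics

def forecast_dispersion(forecasts):
    """Summarise cross-vendor spread for one variable and horizon.

    `forecasts` is a list of point forecasts, one per vendor.
    Ratios far from 1.0 indicate a wide spread of views.
    """
    mean = statistics.fmean(forecasts)
    q1, _, q3 = statistics.quantiles(forecasts, n=4)  # quartiles
    return {
        "mean": mean,
        "upper_quartile_vs_mean": q3 / mean,
        "lower_quartile_vs_mean": q1 / mean,
    }

# Illustrative vendor forecasts for a single variable:
spread = forecast_dispersion([40, 55, 70, 95, 180, 210])
```

A ratio well above 1 for the upper quartile, together with a lower quartile well below the mean, is the situation the text describes, where it becomes hard to say what the panel is really signalling.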
Addressing these challenges requires a systematic approach to acquiring, evaluating, and tuning forecasts for maximum financial performance, ideally as part of a wider strategy towards the optimal use of information across the organisation. Implementation can be done in five interdependent steps, summarised in Figure 1 below.
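The evaluation and tuning steps can take many forms; the article does not specify one. A minimal sketch, assuming hypothetical vendor names and error histories, is to weight each vendor inversely to its historical mean absolute error and blend the panel accordingly:

```python
def inverse_error_weights(historical_errors):
    """Weight each vendor inversely to its historical mean absolute error.

    `historical_errors` maps a vendor name to a list of past errors
    (forecast minus actual); names and values are illustrative.
    """
    mae = {v: sum(abs(e) for e in errs) / len(errs)
           for v, errs in historical_errors.items()}
    inv = {v: 1.0 / m for v, m in mae.items()}
    total = sum(inv.values())
    return {v: w / total for v, w in inv.items()}

def combine(forecasts, weights):
    """Blend vendor point forecasts with the given weights."""
    return sum(forecasts[v] * w for v, w in weights.items())

weights = inverse_error_weights({
    "vendor_a": [2.0, -1.0, 3.0],   # MAE = 2.0
    "vendor_b": [0.5, -0.5, 1.0],   # MAE ~ 0.67
})
blended = combine({"vendor_a": 110.0, "vendor_b": 100.0}, weights)
```

The design choice here is simplicity: inverse-error weighting rewards historically accurate vendors without requiring any model fitting, and the weights update naturally as new actuals arrive.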
Every organisation is different, both in terms of its needs and capabilities. As a result, the implementation of a process like this and the associated challenges will vary. In some cases, it can be difficult and expensive, especially if forecasting is not centrally coordinated. However, the impact on performance and the subsequent commercial gains are likely to be substantial.
In our team, we have achieved substantial improvements in forecast KPIs by implementing systems that employ machine learning to identify factors behind past mistakes and correct them in a systematic and adaptive manner. Beyond performance improvement, these systems helped us discover strengths in forecasting certain indicators that we were not aware of, and revealed that some forecast KPIs have distinct maturity curves. The latter helped optimise the time spent on quality control before publication, as well as providing guidance to users on when, and how much, to rely on a forecast following its publication.
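The article does not describe the models behind this error correction. As a deliberately minimal stand-in for the idea, one can fit a simple model of past forecast error against an explanatory factor and subtract the predicted systematic error from new forecasts; the temperature values and errors below are hypothetical:

```python
def fit_error_model(features, errors):
    """Fit error ~ a + b * feature by ordinary least squares (one feature)."""
    n = len(features)
    mx = sum(features) / n
    my = sum(errors) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(features, errors))
         / sum((x - mx) ** 2 for x in features))
    a = my - b * mx
    return a, b

def correct(forecast, feature, model):
    """Subtract the predicted systematic error from a raw forecast."""
    a, b = model
    return forecast - (a + b * feature)

# Hypothetical history: the raw forecast over-predicts on warm days.
temps = [10.0, 15.0, 20.0, 25.0, 30.0]
errors = [0.0, 1.0, 2.0, 3.0, 4.0]     # forecast minus actual
model = fit_error_model(temps, errors)
adjusted = correct(100.0, 25.0, model)  # raw forecast of 100 at 25 degrees
```

In practice such a corrector would use richer models and many features, and would be refit adaptively as new actuals arrive, but the structure — learn from past mistakes, then adjust — is the same.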
Even if such benefits are not important immediately, organisations that fundamentally rely on forecasts for decision-making may have no choice but to adopt such an approach in the longer term to remain competitive. A systematic approach can help achieve this by enabling the integration of AI and machine learning into selecting forecasts and improving their performance and the quality of information they convey. A key driver of long-term success will be early adoption of these systems, to ‘train’ them on data and accumulate knowledge: the real value to the organisation will be in what is learned, rather than in the algorithms, which are already well developed and available as open source.
ROBERT J. PRILL, Director, Data Science and AI, Computational Biology and Data Sciences, Ferring Research Institute, Inc. and YONG YUE, Senior Director, Computational Biology and Data Sciences, Ferring Research Institute, Inc.