Day 2 of our final EUCP meeting began with a look at work on improving decadal forecasting, largely carried out by Work Package 1. We first heard from Carlos Delgado-Torres about his work on the quality assessment of multi-model decadal forecasts using CMIP6, which has involved assessing hundreds of historical and future climate simulations. The forecasts show good skill for temperature, but less for rainfall. The long-term multi-model ensemble performs better than around half of the shorter forecasts, but does not reach the level of the best individual forecast. The results demonstrated that initialising the models with historical observed climate data adds value, as does a larger ensemble of climate simulations.
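To give a flavour of what such a quality assessment involves, here is a minimal sketch in Python, using synthetic data as a stand-in for real hindcasts and observations (this is not the project's verification code). It computes the anomaly correlation coefficient, a standard skill measure, and shows how skill can respond to ensemble size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for verification data: observed temperature
# anomalies for 20 forecast start dates, and a 10-member hindcast
# ensemble whose members track the observations with some error.
n_starts, n_members = 20, 10
obs = rng.normal(size=n_starts)
hindcast = obs + rng.normal(scale=0.8, size=(n_members, n_starts))

# Anomaly correlation coefficient (ACC) of the ensemble-mean
# hindcast against observations.
acc_full = np.corrcoef(hindcast.mean(axis=0), obs)[0, 1]

# Averaging fewer members leaves more noise in the forecast, so a
# small sub-ensemble typically scores lower.
acc_small = np.corrcoef(hindcast[:3].mean(axis=0), obs)[0, 1]

print(f"ACC, 10-member ensemble: {acc_full:.2f}")
print(f"ACC, 3-member ensemble:  {acc_small:.2f}")
```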
Panos Athanasiadis then gave us a very useful list of recommendations for the future development of decadal prediction systems, which predict out to five years ahead. Increasing the number of climate variables covered by the predictions, the model resolution and the ensemble size all have proven benefits for prediction skill. It is important to represent specific events such as volcanic eruptions, while accounting for Greenland ice sheet melting helps to address errors in simulated sea-surface temperatures. Reducing the known biases in the models will also be important for the future of decadal forecasting.
Delegates then heard from Doug Smith on the need for improved understanding of decadal predictability. Decadal predictions have become operational in recent years through the WMO Lead Centre for Annual-to-Decadal Climate Prediction, which is supported by EUCP and released a landmark new forecast a few days after the meeting. Doug described various ways in which confidence in these predictions can be improved, which should lead to their wider use in strategic decision-making by climate information users. Understanding the climate drivers involved, and the biases that remain, will be key to this.
We then heard about the applications of these decadal predictions from Julia Lockwood, who has been working on this as part of the C3S_34c project with the Copernicus Climate Change Service. This project has engaged users of climate information across the insurance, water management, agriculture and energy industries to build case studies of how they might use these predictions. All of these users found examples of how decadal predictions can supply them with useful information, such as 5-year forecasts of insured losses from hurricanes in the USA. Julia had several recommendations for how these predictions can be effectively put in the hands of users, including using large ensembles to demonstrate skill and improve users' confidence.
The next session of the meeting covered the activities of Work Package 5 on merging climate predictions and projections across timescales. Gabi Hegerl described her work towards using consistent observational constraints in both short-term predictions and longer-term projections. The team have found that these constraints are not independent: the order in which they are applied matters, and they may be interdependent or related, which must be accounted for when combining them. The effect of a constraint may also depend on the model used or the season the simulation covers. Openness across studies about which constraints are used, and in which contexts, will help develop research in this area, while working with users will help determine which type of constraint best suits their applications.
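As a toy illustration of what an observational constraint does, the sketch below (Python with made-up numbers; the actual EUCP methods are more sophisticated and varied) weights the members of a projection ensemble by how well each reproduces a hypothetical observed warming trend:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # ensemble members

# Made-up ensemble: each member's simulated historical warming trend
# (degrees C per decade) and its projected warming by mid-century,
# with projections loosely tied to the historical trends.
hist_trend = rng.normal(0.2, 0.05, size=n)
proj_warming = 1.5 + 5.0 * (hist_trend - 0.2) + rng.normal(0, 0.1, size=n)

# Hypothetical observed trend and its uncertainty.
obs_trend, obs_err = 0.18, 0.03

# One common form of constraint: Gaussian weights reflecting how
# consistent each member is with the observation.
weights = np.exp(-0.5 * ((hist_trend - obs_trend) / obs_err) ** 2)
weights /= weights.sum()

print(f"Unconstrained projection: {proj_warming.mean():.2f} C")
print(f"Constrained projection:   {np.sum(weights * proj_warming):.2f} C")
```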
Daniel Befort then told us about his assessment of concatenating climate predictions and projections: simply placing the longer projection after the short-term prediction. This has the advantage of being relatively easy to do, but could it introduce errors? Problems arise if there is too large a jump between the predictions and projections at the point where they join, which can be a particular issue given the relatively large variance seen in climate projections. The team assessed the scale of this problem, which is particularly acute over the North Atlantic, Greenland and northern Europe, and developed a calibration method to assist the concatenation process, reducing the inconsistency everywhere except the North Atlantic.
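Here is a rough sketch of the concatenation problem and one possible fix, using synthetic ensembles in Python (the shift-and-rescale step is a simple stand-in, not the calibration method the team developed):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ensembles (members x years): a 5-year initialised
# prediction, then a 30-year projection with a deliberate offset
# and a larger spread, mimicking the jump at the join.
pred = 0.02 * np.arange(5) + rng.normal(0, 0.10, (10, 5))
proj = 0.4 + 0.02 * np.arange(5, 35) + rng.normal(0, 0.25, (10, 30))

# Naive concatenation keeps the discontinuity where the two meet.
naive = np.concatenate([pred.mean(axis=0), proj.mean(axis=0)])

# Simple calibration: rescale the projection spread towards the
# prediction's, then shift so the ensemble means agree at the join.
proj_cal = (proj - proj.mean()) * (pred.std() / proj.std()) + proj.mean()
proj_cal += pred[:, -1].mean() - proj_cal[:, 0].mean()
calibrated = np.concatenate([pred.mean(axis=0), proj_cal.mean(axis=0)])

print(f"Jump at join, naive:      {naive[5] - naive[4]:+.2f}")
print(f"Jump at join, calibrated: {calibrated[5] - calibrated[4]:+.2f}")
```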
The final talk of the day was given by Markus Donat, who described an alternative method of combining decadal predictions and climate projections: using the predictions to constrain the longer projections, by selecting those projections that best match the predictions over the first part of their run. Studies have tested this method at both regional and global scales and found it effective at reducing the variability of the long-term projections, sometimes yielding better skill than the short predictions themselves. The method still requires refinement and testing with different models and ensemble sizes, but it shows great promise for carrying the value of short predictions beyond ten years into the future.
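The following sketch shows the selection idea on synthetic data (Python; the member trends, noise levels and the keep-the-10-closest-members rule are all illustrative choices, not those of the studies presented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_members, n_years, n_keep = 40, 30, 10

# Synthetic projection ensemble: each member follows its own warming
# trend plus interannual noise (members x years).
years = np.arange(1, n_years + 1)
trends = rng.normal(0.03, 0.02, size=(n_members, 1))
projections = trends * years + rng.normal(0, 0.05, (n_members, n_years))

# A decadal prediction covering the first 5 years (here, a noisy
# realisation of a 0.03-per-year trend, standing in for a real system).
prediction = 0.03 * years[:5] + rng.normal(0, 0.02, 5)

# Constrain: rank members by RMSE against the prediction over the
# overlap period and keep only the best-matching subset.
rmse = np.sqrt(((projections[:, :5] - prediction) ** 2).mean(axis=1))
constrained = projections[np.argsort(rmse)[:n_keep]]

# Selecting on the overlap narrows the long-term spread.
print(f"Year-{n_years} spread, full ensemble:        "
      f"{projections[:, -1].std():.2f}")
print(f"Year-{n_years} spread, constrained ensemble: "
      f"{constrained[:, -1].std():.2f}")
```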
The second day of the meeting ended with a panel discussion covering the last two sessions, and a question was posed to all the panellists: why do we need seamless predictions? The responses covered many different interpretations of this question, with many panellists contrasting what seamless predictions might mean to users, and what users need from them, with their use in scientific research. Users may not mind exactly how a set of climate information has been generated, only that it is robust, reliable and actionable. From a scientific standpoint, defining what we mean by 'seamless' will be useful for user interaction, while a better understanding of the drivers underlying what we see in these simulations is needed to give users confidence in the information. These simulations seem well placed to meet a growing need from users who increasingly acknowledge that climate change is happening now and are eager to use climate information in their future planning.
See also Part 1 and Part 3 of this report for more on our Final Meeting.