Pure AI-based weather forecasting models – Where are we and where should we go?

Presented by: Imme Ebert-Uphoff and Jacob Radford

Hosted by: CIRA, CSU; Machine Learning Group

Date: October 17, 2023 1:30 pm
Location: CIRA Commons

Over the past 18 months, purely AI-driven global weather forecasting models have emerged that demonstrate increasingly impressive skill. These models are typically trained on ERA5 reanalysis data and are completely data-driven; most of them do not include a single physical equation. Many are orders of magnitude faster than numerical weather prediction (NWP) models and can run on modest computational resources, enabling repeatable, on-demand forecasts competitive with NWP. The low computational cost also enables very large ensembles, which better represent the tails of the forecast distribution and, if well calibrated, allow better forecasting of rare and extreme events.
These models are still at the proof-of-concept stage, but new models emerge roughly monthly with rapidly increasing abilities, raising the question of whether AI models might soon compete with NWP models for selected forecasting tasks. While these models have not yet undergone the vetting necessary for transition into forecast operations, we can begin to lay the foundation toward this goal. This includes not just bulk verification, but also familiarizing forecasters with the output, strengths, and weaknesses of AI models and soliciting feedback from forecasters on where they envision AI models benefiting forecast processes.

To foster this communication, a group of scientists from CIRA and NOAA-GSL has started to visualize the output of these models and is ramping up activities to evaluate them. At CIRA, thanks to Jacob Radford and Robert DeMaria, we now run several AI models locally and display 7-day global forecasts on a CIRA webpage. We are also setting up a multi-year archive of forecasts for scientists to dig into. This presentation aims to bring everyone up to speed on these recent activities and to solicit feedback (and potential collaboration) regarding additional evaluation criteria and methods.