Imprecise, not inaccurate, weather forecasting

Matthew Chantry¹ and Milan Klöwer²

¹ ECMWF
² University of Oxford


Two themes have been central to Tim’s career: striving for increased spatial resolution in weather and climate models, and using stochasticity to represent model uncertainty. But if model uncertainty is represented stochastically, how much numerical precision do the calculations in weather and climate models actually need? Could precision be reduced to accelerate simulations? Working under Tim’s guidance, a large group of researchers explored whether numerical precisions lower than standard double precision could free up computational resources to be reinvested in higher resolution.

To date, the largest success of this work has been the adoption of single-precision floating-point numbers for the IFS forecast model at ECMWF. It began in Tim’s group, where OpenIFS was used to test the sensitivity of the forecast model to reduced precision. The success of this study led ECMWF to refine the approach for the operational model. With vanishingly few exceptions, the model was ported to single precision, yielding computational savings of 40%. Using the freed resources, the ensemble forecasting system received an upgrade in vertical resolution in the same release cycle. The upshot: a reduction in numerical precision can indeed increase forecast accuracy, a vindication of Tim’s approach.
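To illustrate the kind of precision-sensitivity test this work relied on, the sketch below compares single- and double-precision integrations of the Lorenz-63 system (which reappears later in this article) as a toy stand-in for a forecast model. The time step, number of steps and initial conditions are illustrative assumptions, not the OpenIFS experiments themselves.

```python
import numpy as np

def lorenz63_step(state, dt, dtype):
    """One forward-Euler step of the Lorenz-63 system at a given precision."""
    sigma, rho, beta = dtype(10.0), dtype(28.0), dtype(8.0 / 3.0)
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz], dtype=dtype)

def run(dtype, nsteps=500, dt=0.01):
    """Integrate from a fixed initial condition, all arithmetic in `dtype`."""
    state = np.array([1.0, 1.0, 1.0], dtype=dtype)
    dt = dtype(dt)
    for _ in range(nsteps):
        state = lorenz63_step(state, dt, dtype)
    return state

double = run(np.float64)
single = run(np.float32)
print("float64:   ", double)
print("float32:   ", single)
# The two trajectories still agree closely here; in a chaotic system any
# rounding difference eventually grows, which is why such tests are judged
# by forecast skill rather than bitwise reproducibility.
print("difference:", double - single.astype(np.float64))
```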

The success of single-precision IFS motivated the question: how low can we go? Could 16-bit half-precision floats, a number format popular on GPU hardware, have a use in weather and climate computing? Half precision offers only about three significant decimal digits and limits the largest representable number to 65,504. For rapid testing, and to better understand the impact of reduced precision, work by Tim and colleagues led to the development of the Fortran reduced precision emulator. Using this emulator, half precision was explored in the OpenIFS model, the SPEEDY model, land surface schemes, climate simulations, radiative transfer models, the Lorenz system, data assimilation, superparameterizations and more.
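The idea behind such an emulator can be sketched in a few lines of Python (a toy analogue, not the Fortran emulator itself): arithmetic is still carried out in high precision, but results are rounded through float16 so that the effect of half precision can be measured without rewriting the model.

```python
import numpy as np

def emulate_half(x):
    """Round a value through IEEE half precision while keeping float64 storage.

    This mimics what a reduced-precision emulator does: the calculation
    proceeds in high precision, but every result is degraded to the ~3
    significant decimal digits and limited range of float16.
    """
    return np.float64(np.float16(x))

# Properties of half precision mentioned above
print(np.finfo(np.float16).max)   # 65504.0, the largest representable number
print(emulate_half(1.2345678))    # ~1.234, only ~3 significant decimal digits survive
print(emulate_half(70000.0))      # inf, overflow beyond 65,504

# Emulated half precision inside an otherwise double-precision calculation
a, b = 0.1, 0.2
exact = a + b
rounded = emulate_half(emulate_half(a) + emulate_half(b))
print(exact, rounded, exact - rounded)
```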

Perhaps the most successful application of half precision has been in the Legendre transforms of IFS. A key component of any spectral model, they are computationally expensive, especially at high resolution. Because Legendre modes represent different length scales, higher precision can be used for the larger, more certain synoptic scales and lower precision for the smaller, more uncertain scales. Such scale-selective precision allowed half precision for most of the Legendre transform calculations and linked precision to Tim’s earlier work on predictability and stochastic perturbations.
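A toy illustration of scale-selective precision is given below, using a one-dimensional Fourier transform as a stand-in for the Legendre transforms of a spectral model; the wavenumber cutoff and the synthetic field are illustrative assumptions.

```python
import numpy as np

def scale_selective_round(field, cutoff):
    """Keep large scales at full precision, round small scales through float16.

    Wavenumbers below `cutoff` (the larger, more certain scales) are left
    untouched; higher wavenumbers are rounded through half precision. A 1D
    FFT stands in for the spectral transforms of a real model.
    """
    coeffs = np.fft.rfft(field)
    small_scales = np.arange(coeffs.size) >= cutoff
    # complex float16 does not exist, so round real and imaginary parts separately
    rounded = (coeffs.real.astype(np.float16).astype(np.float64)
               + 1j * coeffs.imag.astype(np.float16).astype(np.float64))
    coeffs = np.where(small_scales, rounded, coeffs)
    return np.fft.irfft(coeffs, n=field.size)

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
field = np.sin(3 * x) + 0.1 * rng.standard_normal(x.size)  # large-scale wave + small-scale noise

approx = scale_selective_round(field, cutoff=10)
print("max error from rounding the small scales:", np.max(np.abs(field - approx)))
```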

In Tim’s view, a natural place for reduced precision was the parameterized physics of weather and climate models, which carries higher uncertainty. To explore this, Tim and colleagues used neural networks to create emulators, imprecise representations of a physical parameterization scheme. What is more, neural networks can easily be evaluated at reduced numerical precision, including half precision for performance. For the parameterization of non-orographic gravity wave drag, Tim and colleagues found that such neural networks can be used within IFS with no degradation of forecast accuracy. Work continues to realise the benefits of this approach and to target operationalisation.
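As a sketch of why neural-network emulators pair naturally with low precision, the toy multilayer perceptron below is evaluated with float32 and float16 arithmetic; the architecture, layer sizes and random weights are purely illustrative, not the gravity wave drag emulator used in IFS.

```python
import numpy as np

def mlp_forward(x, weights, biases, dtype):
    """Forward pass of a small fully connected network at a given precision."""
    a = x.astype(dtype)
    for i, (W, b) in enumerate(zip(weights, biases)):
        a = a @ W.astype(dtype) + b.astype(dtype)
        if i < len(weights) - 1:           # tanh activations on hidden layers
            a = np.tanh(a)
    return a

rng = np.random.default_rng(0)
sizes = [10, 64, 64, 5]                    # inputs -> two hidden layers -> outputs
weights = [0.1 * rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.standard_normal(10)
y32 = mlp_forward(x, weights, biases, np.float32)
y16 = mlp_forward(x, weights, biases, np.float16)

# The half-precision output differs only slightly from float32, while the matrix
# multiplications, which dominate the cost, could run on fast half-precision hardware.
print(y32)
print("max float16 vs float32 difference:", np.abs(y32 - y16.astype(np.float32)).max())
```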

At higher and higher resolution, weather and climate models produce increasingly large amounts of data that have to be compressed for efficient storage, distribution and analysis. ECMWF’s data archive will approach or exceed 1 exabyte within the next decade. But data storage can be drastically reduced by limiting output precision to only those bits that contain information. Using information theory, Tim’s group introduced the concept of the bitwise real information content and proposed it as a guide to the precision needed for data output, with the potential for large storage savings through lossy data compression.
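The storage saving comes from discarding mantissa bits that carry no real information before applying a standard lossless compressor. The sketch below is a simplified Python version of such bit rounding for float32 data (round-half-up rather than the round-to-nearest-even used in practice, and without handling NaN or infinity); keeping seven mantissa bits is an illustrative assumption, as the informative number of bits depends on the variable.

```python
import zlib
import numpy as np

def bitround(a, keepbits):
    """Zero all but `keepbits` mantissa bits of float32 data.

    Simplified rounding on the binary representation; real implementations use
    round-to-nearest-even and handle NaN/Inf. The trailing zero bits make the
    array far more compressible by a lossless compressor such as zlib.
    """
    bits = np.ascontiguousarray(a, dtype=np.float32).view(np.uint32)
    drop = 23 - keepbits                            # float32 has 23 mantissa bits
    half = np.uint32(1 << (drop - 1))               # half of the last kept bit
    mask = np.uint32((0xFFFFFFFF >> drop) << drop)  # keeps sign, exponent, kept bits
    return ((bits + half) & mask).view(np.float32)

rng = np.random.default_rng(0)
data = (300 + 10 * rng.standard_normal(100_000)).astype(np.float32)  # e.g. temperature in K

ratio_full = data.nbytes / len(zlib.compress(data.tobytes()))
ratio_round = data.nbytes / len(zlib.compress(bitround(data, keepbits=7).tobytes()))
print("compression ratio at full precision:", round(ratio_full, 2))
print("compression ratio with 7 kept bits: ", round(ratio_round, 2))
```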

While floating-point numbers are the de facto standard for representing real numbers in binary, Tim realised early on that it was not clear whether they are the best format for climate computations. Under his guidance, the alternative posit number format was first tested in idealised models. Posits were indeed found to have an advantage over floats and could replace them should support become available on future computing hardware.

Quantum computers perhaps offer a different route to accelerated weather and climate computing. This subject combined Tim’s first love, quantum physics, with his ambition to advance weather forecasting. Tim’s group examined the challenges of using quantum computers for Earth system modelling. Advantageously, the Earth system could be represented with a relatively small number of qubits, the quantum equivalent of the conventional bit. Disadvantageously, only a small amount of information can be retrieved before the quantum state collapses on measurement and the simulation has to be rerun. Quantum computers are therefore unlikely to be used in the near future for many areas of climate computing.

Tim made the world of weather and climate forecasting reassess the long-held belief that high-precision computations are a necessity for accurate predictions. Ripples from his work are still propagating through the field, with many forecasting centres experimenting with reduced precision. This has led, and will continue to lead, to more accurate forecasts, an increasingly vital tool in a changing climate.

Thank you, Tim, for your leadership and mentorship on this body of work.
