Improving Weather Forecasts by Reducing Precision

Weather forecasting relies on supercomputers to solve the mathematical equations that describe atmospheric flow. The accuracy of the forecasts is constrained by the available computing power. Processor speeds have not increased much in recent years, and speed-ups are achieved by running many processes in parallel. Energy costs have risen rapidly: a supercomputer may consume something like 10 megawatts, with an annual power bill running to millions of euro [TM210 or search for “thatsmaths” at irishtimes.com].

The characteristic butterfly pattern for solutions of Lorenz’s equations [Image credit: source unknown].

Early computer programs for weather prediction, written in the Fortran language, stored numbers in single precision. Each number had 32 binary digits — or bits — corresponding to a decimal number with about seven significant digits. Later models moved to double precision, each number having 64 bits and about 15 accurate decimal digits. This is now common practice.
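The difference between the two formats is easy to see in practice. Here is a minimal sketch using NumPy, whose `float32` and `float64` types are the IEEE single- and double-precision formats described above:

```python
import numpy as np

# np.finfo reports how many decimal digits each format holds reliably:
# about 7 for 32-bit (single) and 15 for 64-bit (double) precision.
for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: {info.bits} bits, ~{info.precision} decimal digits")

# A constant given to 15 significant digits is rounded when stored in 32 bits:
pi_single = np.float32(3.14159265358979)
pi_double = np.float64(3.14159265358979)
print(pi_single)   # prints 3.1415927 -- only about 7 digits survive
print(pi_double)
```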

It seems self-evident that higher numerical precision would result in greater forecast accuracy, but this is not necessarily the case. The observations that provide the starting data have only a few digits of precision. We may know the temperature to one tenth of a degree, or the wind speed to within one metre per second. Representing these values with several digits beyond the decimal point may be futile. It could be likened to giving somebody’s height to the nearest millimetre or, in double precision, the nearest micron, which is meaningless.

Computational resources

Of course, higher precision does reduce errors during the millions of computations needed for a forecast. But higher precision also implies greater storage requirements, larger data transmission volumes and longer processing times. Are these costs justified, or can limited computational resources be used in more efficient ways?

At the European Centre for Medium-Range Weather Forecasts (ECMWF), researchers have been seeking ways to reduce the computational cost of forecasting. They have found that 64-bit precision is not necessary: with 32-bit numbers, forecasts of the same quality are obtained much faster. The saving of about 40% in computing time can be used in other ways, such as enhancing spatial resolution or increasing ensemble size. Thus, a reduction in numerical precision can result in improved forecast accuracy.
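The storage side of that saving is immediate: halving the word length halves the memory for every model field. A small illustration (the grid dimensions here are invented, purely for the sake of the arithmetic):

```python
import numpy as np

# A hypothetical model state: one variable on a 500 x 500 horizontal grid
# with 50 vertical levels (illustrative numbers, not a real model's grid).
shape = (50, 500, 500)
double = np.zeros(shape, dtype=np.float64)   # 8 bytes per number
single = np.zeros(shape, dtype=np.float32)   # 4 bytes per number
print(double.nbytes // 2**20, "MiB in double precision")
print(single.nbytes // 2**20, "MiB in single precision")   # exactly half
```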

Atmospheric Chaos

Atmospheric flow is chaotic: this means that a small change in the starting values can lead to a wildly different forecast. To allow for chaos, meteorologists run their models multiple times with slightly different starting values. The spread of these “ensembles” gives a measure of the confidence that can be placed in a forecast, and probabilities of different scenarios can be given. Researchers at ECMWF were surprised that reducing numerical precision of the computations had little influence on the quality of the ensemble forecasts.
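This sensitivity can be seen directly in Lorenz’s own three-variable equations. The sketch below perturbs one starting value by one part in a hundred million and integrates both copies with a crude forward-Euler scheme (far simpler than anything a forecast model would use, but enough to show the divergence):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz's 1963 equations."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # the "butterfly": a tiny perturbation
for _ in range(5000):                # march both copies forward to t = 50
    a = lorenz_step(a)
    b = lorenz_step(b)
print(np.max(np.abs(a - b)))         # the initial 1e-8 difference has grown enormously
```

Running many such perturbed copies side by side is precisely what an ensemble forecast does; the spread among them measures how predictable the flow currently is.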

For climate simulations, which run over decades or centuries, it is expected that, by using reduced number lengths, substantial savings will be possible. Some sensitive operations like matrix inversion may require double precision but the bulk of the calculations can be done with 32 bits. Current work is also testing half-precision (16-bit numbers) for non-sensitive components of the models.
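NumPy exposes the same IEEE half-precision format, so the coarseness of 16-bit numbers is easy to demonstrate:

```python
import numpy as np

# Half precision: 16 bits buy only about 3 decimal digits,
# and the largest representable value is 65504.
info = np.finfo(np.float16)
print(info.bits, "bits, ~", info.precision, "decimal digits, max", info.max)

# Even 0.1 cannot be stored exactly; the nearest half-precision value is
# 0.0999755859375 -- an error already in the fourth decimal place.
print(float(np.float16(0.1)))
```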

Chaos was well described by meteorologist Ed Lorenz, who asked whether a tiny disturbance could cause a storm on a far-away continent. The idea is encapsulated in a limerick:

Lorenz demonstrated with skill
The chaos of heatwave and chill:
Tornados in Texas
Are triggered by flexes
Of butterflies’ wings in Brazil.

Sources

Vaňa, Filip, Peter Düben, Simon Lang and Tim Palmer, 2017: Single precision in weather forecasting models: An evaluation with the IFS. Mon. Wea. Rev., 145, 495.
