Thursday, 21 January 2016

Joint winners of the RAPID Challenge

Ten scientists ventured to predict the AMOC in the RAPID challenge competition. The preliminary analysis of the data has been completed, and it is time to announce the 'best prediction'. But how do we quantify the best prediction? And do any of these predictions have useful forecast skill?

Competitors were asked to estimate the mean AMOC in each of six quarters, starting with April to June 2014 and finishing with July to September 2015. The ten entries and the AMOC calculated from a preliminary analysis of the new data are plotted below.

The thick black line shows the provisional RAPID data, and the vertical bars are +/- 1 Sv. The thick dashed line shows the mean of all the entries. Coloured lines show the individual entries; the two best estimates are highlighted with thick lines, green for the MPI prediction and grey for the Met Office prediction.

During this period the preliminary analysis gives a mean AMOC of 15.05 Sv with a standard deviation of 1.04 Sv. This standard deviation is less than half that of the preceding three years, indicating that the AMOC has been relatively stable during the last 18 months.
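These summary statistics are straightforward to reproduce. A minimal sketch, using hypothetical quarterly-mean values rather than the actual RAPID series:

```python
# Mean and standard deviation of quarterly-mean AMOC values (Sv).
# The numbers below are hypothetical, not the real RAPID data.
import statistics

amoc_quarters = [14.2, 15.9, 14.8, 16.3, 14.5, 14.6]  # hypothetical, Sv

mean_amoc = statistics.mean(amoc_quarters)
# Population standard deviation; use statistics.stdev for the sample estimate.
std_amoc = statistics.pstdev(amoc_quarters)

print(f"mean = {mean_amoc:.2f} Sv, std = {std_amoc:.2f} Sv")
```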

How skilled were the predictions?

Common measures of skill compare the forecast errors with the variability of the data, so an accuracy of 1 Sv seems a good benchmark. Another useful benchmark is a prediction based on persistence, i.e. using the last measured value as the forecast for every quarter. The absolute error of persistence was below 1 Sv for the first three quarters but rose after that.
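The persistence benchmark can be sketched in a few lines. All numbers here are hypothetical, chosen only to illustrate the calculation:

```python
# Persistence benchmark: forecast every quarter with the last measured value,
# then check each quarter against the 1 Sv accuracy benchmark.
# All values are hypothetical, not the real RAPID observations.
last_measured = 16.0                                # Sv, hypothetical
observed = [15.6, 15.2, 15.4, 14.3, 14.8, 15.0]     # hypothetical quarterly means, Sv

abs_errors = [abs(obs - last_measured) for obs in observed]
skilful = [err < 1.0 for err in abs_errors]         # True where error < 1 Sv

print(skilful)  # → [True, True, True, False, False, False]
```

With these illustrative numbers, persistence passes the benchmark for the first three quarters and fails thereafter, matching the pattern described above.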

Four of the ten entries had an error of less than 1 Sv for the first quarter, and each of these was also better than persistence. These results suggest a significant amount of skill when predicting three months ahead, and the mean of all the predictions was almost spot on, with an error of just 0.05 Sv. In the second quarter things were very different: only one prediction bettered persistence.

None of the competition entries had absolute errors of less than 1 Sv for all of the first three quarters. Over the full six quarters the root mean square error (RMSE) of the entries ranged from 1.4 Sv to 3.7 Sv.
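For reference, the RMSE used to rank the entries is computed as follows. This is a minimal sketch with hypothetical predictions and observations in place of the real ones:

```python
# Root mean square error of one entry over the six quarters.
# Both series below are hypothetical, not the actual competition data.
import math

observed  = [15.6, 15.2, 15.4, 14.3, 14.8, 15.0]   # hypothetical, Sv
predicted = [15.1, 16.0, 13.9, 15.2, 16.3, 13.8]   # hypothetical entry, Sv

def rmse(pred, obs):
    """Root mean square error over all quarters."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

print(f"RMSE = {rmse(predicted, observed):.2f} Sv")
```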

Joint winners

The lowest RMSE was achieved by the prediction of Leon Hermanson from the UK Met Office. The only prediction to have skill in both of the first two quarters was the entry by Daniela Matei and Helmuth Haak from the Max Planck Institute. These two entries are declared joint winners and will each receive a Discovery mug. Congratulations!

Clear need for observations

None of these hindcasts showed skill beyond six months, and a prediction based on persistence (perhaps by chance) beat all of them over the full period. Thus the need for observations is clear, and the RAPID team will next go to sea in spring 2017 to retrieve the next measurements of the AMOC.

The small print:

The results are only provisional: full calibration and quality control of the data have not yet been completed. The calculation of the transport in the Western Boundary Wedge is also incomplete, and we have used the average value from preceding years for this component.

Written by David Smeed, posted by Val
