To download the blog post as a pdf file, click here.
Abstract
Jan-Erik Solheim, Kjell Stordahl and Ole Humlum (hereafter SSH) published two articles in 2011 and 2012 about the relationship between the mean temperature in a solar cycle and the length of the previous solar cycle [1, 2]. For the northern hemisphere, they found a negative correlation between those two variables. A long solar cycle is followed by one with a low temperature, and a short solar cycle is followed by one with a high temperature. SSH named this the Previous Solar Cycle Length Model. For simplicity, in this note I refer to it as the Solar Cycle Model or just the model. For the same reason, I usually omit the word mean when referring to the mean temperature in a solar cycle.
SSH claim that their model describes a cause-effect relationship, i.e. that it has predictive power. Solar cycle 24 had just started when they wrote their articles. SSH predicted a significant temperature decrease in solar cycle 24. That solar cycle has just ended, and now it is possible to check if their prediction came true. It did not.
The temperatures fitted well with the Solar Cycle Model until the mid-1970s, but not later. The mean temperatures during the last solar cycles have been much higher than predicted by the model.
1 Introduction
When a solar cycle has ended, its length is known, and the Solar Cycle Model claims that it can predict the mean temperature in the next solar cycle. Solar cycle 23 ended in November 2008 after having lasted for an unusually long time, well over 12 years. SSH therefore wrote in [1]: 'We predict an annual mean temperature decrease for Svalbard of 3.5 ± 2°C from solar cycle 23 to solar cycle 24.' Their next article [2] concentrated on the North Atlantic region including Norway and Iceland. They wrote that the Model 'provides a tool to predict an average temperature decrease of at least 1.0°C from solar cycle 23 to solar cycle 24 for the stations and areas analyzed.'
Back in 2012 I doubted that the length of a solar cycle controls the temperatures in the next solar cycle. I therefore programmed the Solar Cycle Model myself and I downloaded the same temperature and solar cycle data that SSH had used in their analysis. I found the same negative correlation between the mean temperature in a solar cycle and the length of the previous solar cycle as SSH did. This correlation was strong till the mid-1970s, but not thereafter.
In their two articles, SSH show the results when their model predicts the temperature in solar cycle 24. They do this in 16 figures, each consisting of four plots. In a short chapter (3.5 in [2]) they mention that, as a test of the model, they used it to predict the temperature in solar cycle 23 based on the temperatures measured in the solar cycles up to and including cycle 22. But they did not show the differences between the measurements and the predictions, neither graphically nor numerically, for solar cycle 23 or for any earlier cycle. In the blog post you are reading now, I do this for all eight temperature series that I analyze. In this way I can see how well or badly the predictions for the earlier solar cycles match the temperatures that were measured in those cycles, and I show the differences between predictions and measurements graphically in the figures that follow. For solar cycle 23 the measured temperatures are higher than the model predicts; this applies to all eight temperature series. Appendix C shows that for six of them the temperature was above the upper limit of the 95 percent confidence interval around the prediction. In this way I found that the model predicted the temperature well for the solar cycles up to and including the one that ended in the mid-1970s, but not for the cycles after that.
In 2012 I published my analysis on Skeptical Science and in the blog post Solar Cycle Model fails after mid-1970s. I showed that the temperatures in the three solar cycles after the mid-1970s were much higher than predicted by the model, and that the same was true, to an even greater extent, for the temperatures measured so far in the then-ongoing solar cycle 24.
In 2014 I updated my analysis with more temperature data for the ongoing solar cycle 24. It showed the same as my analysis two years earlier: the temperatures measured so far in solar cycle 24 were much higher than predicted by the model. I published the results on a Norwegian discussion forum, and I got feedback from Jan-Erik Solheim, the lead author of the two articles. The only argument of his that I could not counter was that we have to wait until solar cycle 24 has ended before drawing conclusions about the prediction for that cycle.
2 Solar Cycle 24 has ended
Some months ago a panel co-chaired by NOAA and NASA decided that Solar cycle 24 ended in November 2019 and that solar cycle 25 started in December 2019, see Hello Solar Cycle 25. They expect Solar Cycle 25 to have the same strength as cycle 24, and they have 'high confidence that Solar Cycle 25 will break the trend of weakening solar activity seen over the past four cycles. “We predict the decline in solar cycle amplitude, seen from cycles 21 through 24, has come to an end,” said Lisa Upton, Ph.D., panel co-chair and solar physicist with Space Systems Research Corp. “There is no indication we are approaching a Maunder-type minimum in solar activity.”'
The Norwegian organization Klimarealistene still argues that the ongoing climate change is mainly caused by changes in solar activity, i.e. not by human activities. The three authors of [1, 2] are all members of Klimarealistene's Scientific Advisory Board. I have not seen them admit that their Solar Cycle Model totally failed in its predictions for solar cycle 24. Some months ago the lead author, Jan-Erik Solheim, wrote on Klimarealistene's web site (Klimanytt 288) that solar cycle 25 has started, without mentioning his failed predictions for the solar cycle that had just ended. On the contrary, he wrote about the connection between solar activity and the climate, about the Little Ice Age caused by low solar activity, and that it will be exciting to see whether low solar activity in this century will cause a colder climate.
Because there is no sign that SSH will report how their Solar Cycle Model failed in its predictions for solar cycle 24, I will do so. I have repeated my analysis with updated temperatures and updated information about the solar cycles. The results are shown in the rest of this blog post.
3 The Solar Cycle Model run with different temperature series
SSH ran their Solar Cycle Model with temperature series downloaded in 2011, when solar cycle 24 had just started. This chapter describes the results when I ran the model with temperature series downloaded in November 2020. I first ran the model with temperature series for the areas where SSH claimed that the model has predictive power. Finally, I ran it with the average of four temperature series with global coverage. As you will see, all model runs show that the predictions for solar cycle 24 were totally wrong.
The previous blog post, Local and regional temperature series, describes the temperature series that I now use and from where they are downloaded.
The figures in this chapter show the measured mean temperature in the solar cycles as blue circles and the predictions of these temperatures as red stars. This is done for all solar cycles that have ended, i.e. up to and including solar cycle 24. The temperatures are shown as anomalies relative to the reference period from January 1881 to December 1910. The horizontal x value of the blue circles and the red stars is placed at the middle of the solar cycle they represent.
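As an aside, the sketch below shows one way such cycle-mean anomalies can be computed from a monthly temperature series. It is a minimal Python sketch, not the code behind the figures; it assumes the monthly data are in a pandas Series, and the variable names are my own.

```python
# Sketch (not the original analysis code): mean anomaly per solar cycle
# from a monthly temperature series.
import pandas as pd

def cycle_mean_anomalies(monthly, cycle_bounds,
                         ref_start="1881-01", ref_end="1910-12"):
    """monthly      : pandas Series of monthly temperatures with a DatetimeIndex.
       cycle_bounds : dict {cycle number: (first month, last month)},
                      e.g. {24: ("2008-12", "2019-11")}.
       Returns a dict {cycle number: mean anomaly relative to 1881-1910}."""
    baseline = monthly.loc[ref_start:ref_end].mean()
    anomalies = monthly - baseline
    return {n: anomalies.loc[start:end].mean()
            for n, (start, end) in cycle_bounds.items()}
```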
SSH applied the Durbin-Watson statistical test to check if there was 'too much' autocorrelation in the data. I apply the same test, and I will comment on whether the test was OK, almost OK or not OK. SSH also checked whether the calculation of the regression line was statistically significant, i.e. had a p-value lower than 0.05. I do the same, and in the text I will comment on whether these tests were OK or not. The numerical test values are given in Table 3 in Appendix C.
In addition to what is shown in the figures, I let the model use the temperatures shown with the blue circles to predict the temperature for the ongoing solar cycle 25.
The following subchapters show the results when the model is run with different temperature series.
3.1 Norway and Svalbard
The BEST temperature series Norway covers the land areas in Norway and Svalbard.
Figure 1: The blue circles show the mean temperature in solar cycles 10 to 24. The red stars show the model's predictions for solar cycles 15 to 24.
Each red star in Figure 1 is the result of one run with the Solar Cycle Model. The model applied all temperatures up to and including solar cycle 14 to predict the temperature in cycle 15, all temperatures up to and including solar cycle 15 to predict the temperature in cycle 16, and so on. Ten model runs are done to generate the figure.
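To make this procedure concrete, here is a minimal sketch of such a hindcast loop in Python. It is my own reconstruction under stated assumptions, not SSH's code; it assumes the cycle lengths and cycle mean temperatures are available as dictionaries keyed by cycle number.

```python
# Sketch of the hindcast loop behind the red stars (my reconstruction,
# not SSH's code). lengths[n] is the length in years of solar cycle n,
# temps[n] is the mean temperature anomaly in solar cycle n.
import numpy as np
from scipy import stats

def hindcast(lengths, temps, first_predicted):
    """Predict the temperature in each cycle from the length of the
    previous cycle, using only data from cycles before the target."""
    predictions = {}
    for target in sorted(temps):
        if target < first_predicted:
            continue
        train = [n for n in sorted(temps) if n < target]
        x = np.array([lengths[n - 1] for n in train])  # previous cycle length
        y = np.array([temps[n] for n in train])        # cycle mean temperature
        fit = stats.linregress(x, y)
        predictions[target] = fit.intercept + fit.slope * lengths[target - 1]
    return predictions
```

For Figure 1, each red star corresponds to one regression fit inside this loop, with first_predicted equal to 15.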
Figure 1 shows that the model predicted the temperatures well up to and including solar cycle 21, which ended in August 1986. Thereafter the temperatures have been higher than predicted by the model. Solar cycle 22 was 0.47°C warmer than predicted; there is only a 2.24 percent probability of such a high (or even higher) temperature, provided that the model is correct. Solar cycle 23 was 0.81°C warmer than predicted; there is only a 0.39 percent probability of such a high temperature, provided that the model is correct. The model performs even worse for solar cycle 24, which was 2.1°C warmer than predicted. The probability of such a high temperature, provided that the model is correct, is practically zero.
The statistical test values were OK when the model predicted the temperature in solar cycle 24. But because the temperature in solar cycle 24 was so much higher than predicted, they were far from OK when the model predicted the temperature in solar cycle 25.
3.2 Iceland
The BEST temperature series Iceland covers the land areas in Iceland.
Figure 2: The blue circles show the average temperature in solar cycles 10 to 24. The red stars show the model's predictions for solar cycles 15 to 24.
Figure 2 shows that the model predicted the temperatures well up to and including solar cycle 22, which ended in July 1996. Thereafter the temperatures have been higher than predicted. Solar cycle 23 was 0.31°C warmer than predicted; there is an 11.13 percent probability of such a high temperature, provided that the model is correct. The model performs worse for solar cycle 24, which was 1.98°C warmer than predicted. The probability of such a high temperature, provided that the model is correct, is practically zero.
The statistical test values were OK when the model predicted the temperature in the solar cycles up to and including number 24. But because the temperature in solar cycle 24 was so much higher than predicted, they were far from OK when the model predicted the temperature in solar cycle 25.
3.3 Near Longyearbyen
The BEST near Longyearbyen temperature series is for a location 30 kilometers south-southwest of Longyearbyen. It is far away from the fjord at which Longyearbyen is located, and is therefore less influenced by the lack of sea ice in recent years than Longyearbyen and Svalbard Lufthavn are. The previous blog post showed that the eKlima temperature series for Svalbard Lufthavn has much more warming in recent years than the BEST series has. I use the BEST series as the basis for Figure 3; it is considerably longer than the eKlima series.
Figure 3: The blue circles show the average temperature in solar cycles 10 to 24. The red stars show the model's predictions for solar cycles 15 to 24.
Figure 3 shows that the model predicted the temperatures well up to and including solar cycle 22, which ended in July 1996. Thereafter the temperatures have been higher than predicted. Solar cycle 23 was 0.80°C warmer than predicted; there is a 1.87 percent probability of such a high temperature, provided that the model is correct. The model performs worse for solar cycle 24, which was 3.33°C warmer than predicted. The probability of such a high temperature, provided that the model is correct, is practically zero.
The statistical test values were OK when the model predicted the temperature in solar cycles 18 to 24, both limits included. But because the temperature in solar cycle 24 was so much higher than predicted, they were far from OK when the model predicted the temperature in solar cycle 25.
3.4 Svalbard Lufthavn
The eKlima homogenized temperature series for Svalbard Lufthavn Longyearbyen starts in September 1898. It covers four fewer solar cycles than the BEST near Longyearbyen series does. In addition, eKlima and BEST apply different methods for adjustment and homogenization. I run the Solar Cycle Model with both of them to check the robustness of my analysis. The previous subchapter shows the results when the model is run with the BEST series; this one shows the results when it is run with the eKlima series.
Figure 4: The blue circles show the average temperature in solar cycles 14 to 24. The red stars show the model's predictions for solar cycles 19 to 24.
Figure 4 shows the same pattern as the previous figure, which showed the results when the model was run with the BEST near Longyearbyen temperatures. The model predicted the temperatures well up to and including solar cycle 22. Thereafter the temperatures have been higher than predicted. Solar cycle 23 was 1.34°C warmer than predicted; there is a 3.87 percent probability of such a high temperature, provided that the model is correct. The model performs worse for solar cycle 24, which was 5.67°C warmer than predicted. The probability of such a high temperature, provided that the model is correct, is practically zero.
The statistical test values were OK when the model predicted the temperature in solar cycles 20 to 24, both limits included. But because the temperature in solar cycle 24 was so much higher than predicted, they were far from OK when the model predicted the temperature in solar cycle 25.
3.5 Vardø
I use the eKlima temperature series for Vardø radio station because it is a long series.
Figure 5: The blue circles show the average temperature in solar cycles 10 to 24. The red stars show the model's predictions for solar cycles 15 to 24.
Figure 5 shows that the model predicted the temperatures satisfactorily up to and including solar cycle 21, which ended in August 1986. Thereafter the temperatures have been higher than predicted. Solar cycle 23 was 0.69°C warmer than predicted; there is a 2.22 percent probability of such a high temperature, provided that the model is correct. The model performs worse for solar cycle 24, which was 2.30°C warmer than predicted. The probability of such a high temperature, provided that the model is correct, is practically zero.
The statistical test values were OK when the model predicted the temperature in solar cycles 18 to 24, both limits included. But because the temperature in solar cycle 24 was so much higher than predicted, they were far from OK when the model predicted the temperature in solar cycle 25.
3.6 Dombås
The homogenized (reconstructed) eKlima temperature series for Dombås starts in August 1864.
Figure 6: The blue circles show the average temperature in solar cycles 11 to 24. The red stars show the model's predictions for solar cycles 16 to 24.
Figure 6 shows that the model predicted the temperatures satisfactorily for solar cycles 17 to 21, both limits included. Thereafter the temperatures have been higher than predicted. Solar cycle 23 was 0.86°C warmer than predicted; there is a 0.98 percent probability of such a high temperature, provided that the model is correct. The model performs worse for solar cycle 24, which was 1.85°C warmer than predicted; there is a 0.05 percent probability of such a high temperature, provided that the model is correct.
The statistical test values were OK when the model predicted the temperature in the solar cycles 22 and 23. They were almost OK when the model predicted the temperature in solar cycle 24. But because the temperature in solar cycle 24 was so much higher than predicted, they were far from OK when the model predicted the temperature in solar cycle 25.
3.7 Northern Hemisphere Land only
SSH used the HadCRUT3 NH temperature series for the northern hemisphere. I did the same in both 2012 and in 2014. We all concluded that there was autocorrelation in the temperatures when the Solar Cycle Model used this series to predict the temperature in solar cycle 24.
The Met Office no longer supports the HadCRUT3 temperature series. In their articles, SSH concentrated primarily on the land areas in the northern hemisphere. I therefore use the BEST NH land only temperature series for the northern hemisphere as a replacement for HadCRUT3 NH.
Figure 7: The blue circles show the average temperature in solar cycles 10 to 24. The red stars show the model's predictions for solar cycles 15 to 24.
Figure 7 is similar to the corresponding figure made in 2012 with the HadCRUT3 NH temperature series, except for the temperature in solar cycle 24, which was not fully available in 2012.
The model predicted the temperatures rather well up to and including solar cycle 20. Thereafter the temperatures in the solar cycles are much higher than predicted by the model. Solar cycle 20 ended in February 1976, which is why I say that the Solar Cycle Model has failed since the mid-1970s.
Solar cycle 23 was 0.76°C warmer than predicted; there is a 0.35 percent probability of such a high temperature, provided that the model is correct. The model performs worse for solar cycle 24, which was 1.80°C warmer than predicted; there is a 0.01 percent probability of such a high temperature, provided that the model is correct.
The statistical tests were OK when the model predicted the temperature in solar cycles 17 to 21, both limits included. Thereafter these tests went from being OK to clearly not being OK. Because the temperature in solar cycle 24 was so much higher than predicted, the statistical tests failed completely when the model predicted the temperature in solar cycle 25.
3.8 Global coverage
A blog post published some months ago, Global warming is accelerating, showed that four temperature series with global coverage largely agree on the temperature development since 1850. The series were NASA GISTEMP, NOAA Global, Berkeley BEST and HadCRUT4 kriging. Figure 8 shows the results when the Solar Cycle Model runs with the average of these four series.
Figure 8: The blue circles show the average temperature in solar cycles 10 to 24. The red stars show the model's predictions for solar cycles 15 to 24.
Figure 8 shows the same as Figure 7; the solar cycle model has failed since the mid-1970s.
Solar cycle 23 was 0.47°C warmer than predicted; there is a 1.42 percent probability of such a high temperature, provided that the model is correct. The model performs worse for solar cycle 24, which was 1.13°C warmer than predicted; there is a 0.02 percent probability of such a high temperature, provided that the model is correct.
The temperature series that form the basis for Figure 8 include the sea surface temperatures. The thermal inertia of the ocean may be the reason why the misses in the last solar cycles are smaller in Figure 8 than in Figure 7.
4 An illustration of how the Solar Cycle Model works
This chapter illustrates how the model works and how temperature data for a new solar cycle may totally change the correlation that is supposed to control the temperature in the next solar cycle.
The next two figures show how the model predicts the temperatures in solar cycle 24 and in the ongoing cycle 25. They show how the miss in the prediction for cycle 24 causes the model to reevaluate the correlation between solar cycle length and the temperature in the next solar cycle.
Figure 9: The Solar Cycle Model predicts the temperature in solar cycle 24 based on the temperatures in and the length of the solar cycles up to and including solar cycle 23.
The explanation in this paragraph applies to both Figure 9 and Figure 10. The vertical axis y is the average temperature in the solar cycles. The horizontal axis x is the length of the previous solar cycle. The model uses the x and y values of the blue circles when it predicts the temperature for the next solar cycle. The blue regression line is a best fit to the blue circles; it shows the correlation that the Solar Cycle Model uses to predict the temperature in the next solar cycle. The red star is the prediction for the next solar cycle. The blue star is the temperature measured in the next solar cycle. It does not contribute when the model calculates the regression line.
Figure 9 shows that there is a rather strong negative correlation between solar cycle length and the temperature in the next solar cycle when data for solar cycles up to and including solar cycle 23 is used.
Figure 10: The Solar Cycle Model predicts the temperature in solar cycle 25 based on the temperatures in and the length of the solar cycles up to and including solar cycle 24.
Figure 10 shows a rather weak negative correlation between solar cycle length and the temperature in the next solar cycle. Comparing Figures 9 and 10, we see that the slope of the regression line is more than halved when data for just one more solar cycle, number 24, is added.
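How sensitive such a small-sample regression is to a single added point can be illustrated with made-up numbers. The sketch below uses purely illustrative values, not the real temperature data: it fits the regression with and without a final "cycle" that follows a long previous cycle but is warm, and prints how much the fitted slope weakens.

```python
# Purely illustrative numbers (not real temperatures): one added point
# with a long previous cycle and a high temperature drags the fitted
# slope strongly toward zero.
import numpy as np
from scipy import stats

prev_length = np.array([11.0, 12.5, 10.5, 12.0, 10.0, 11.5, 10.2, 12.3, 9.8, 12.4])
cycle_temp  = np.array([ 0.6, -0.4,  0.8, -0.2,  1.0,  0.1,  0.9, -0.3, 1.1,  2.5])

without_last = stats.linregress(prev_length[:-1], cycle_temp[:-1])
with_last    = stats.linregress(prev_length, cycle_temp)
print(f"slope without the last cycle: {without_last.slope:+.2f} °C per year")
print(f"slope with the last cycle:    {with_last.slope:+.2f} °C per year")
```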
When I do similar model runs with the eKlima temperature series for Svalbard Lufthavn, the correlation changes sign when adding the data for solar cycle 24. Based on the data up to and including solar cycle 23 the model says that a long solar cycle will be followed by a solar cycle with low temperatures. The temperature in solar cycle 24 was much higher than predicted. Therefore, when data for that solar cycle is used in the calculation of the regression line, the model says that a long solar cycle is followed by a solar cycle with high temperatures.
A robust and correct model does not behave like this.
5 Conclusion
I have now used other temperature series than I did in 2012. Solar cycle 24 ended in November 2019, so the series now fully cover that cycle. The results are still about the same, and the conclusion that I wrote in 2012 is still valid, so I simply repeat it.
The temperatures fit well with the Solar Cycle Model until the mid-1970s. Since then, however, temperatures have been much higher than predicted by the model. The Solar Cycle Model therefore cannot be used to predict future temperatures.
If there is a real, physical reason why the temperatures fitted so well with the Solar Cycle Model until the mid-1970s, that reason must be a solar radiative forcing. If so, that forcing is presumably still present after the mid-1970s, but it can no longer be dominant. Another forcing, or several, must have become dominant. My analysis says nothing about what the new dominant forcing(s) may be, but it is natural to think of the human-induced forcings, which many scientists claim became dominant in the 1970s.
Skeie et al. [3] show that the sum of the various human-induced forcings first became positive in the 1970s. Before 1970, the sum was small and its sign varied. After 1970, the sum increased steadily up to a substantial positive value in 2010. This is the probable explanation for the increasing gap between the Solar Cycle Model's predictions and the observed mean temperatures after the mid-1970s.
Appendix A. Overview of the solar cycles
A new solar cycle starts when the number of sunspots is at its minimum. NASA explains how this is decided in the article How Scientists Around the World Track the Solar Cycle: 'Around the world, observers conduct daily sunspot censuses. They draw the Sun at the same time each day, using the same tools for consistency. Together, their observations make up the international sunspot number, a complex task run by SILSO [World Data Center for the Sunspot Index and Long-term Solar Observations]. Some 80 stations around the world contribute their data'.

On their web page Cycles Min/Max, SILSO states the start month of each solar cycle up to and including solar cycle 24. On September 15, 2020, they wrote: 'December 2019 confirmed as starting point of the new solar activity cycle [25]'. This is the basis for Table 1, which I used when I programmed the Solar Cycle Model.
Table 1. Solar Cycle (SC) start, end and duration.
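The durations in Table 1 follow directly from the start months of consecutive cycles. The minimal Python sketch below uses only the cycle boundaries mentioned in this blog post; the full table is based on the SILSO start months for all cycles.

```python
# Sketch: cycle length = start of cycle n to start of cycle n+1.
# Only the start months mentioned in this blog post are listed here.
from datetime import date

cycle_start = {
    21: date(1976, 3, 1),   # cycle 20 ended February 1976
    22: date(1986, 9, 1),   # cycle 21 ended August 1986
    23: date(1996, 8, 1),   # cycle 22 ended July 1996
    24: date(2008, 12, 1),  # cycle 23 ended November 2008
    25: date(2019, 12, 1),  # confirmed start of cycle 25
}

def length_years(n):
    """Length of solar cycle n in years."""
    a, b = cycle_start[n], cycle_start[n + 1]
    return ((b.year - a.year) * 12 + (b.month - a.month)) / 12

for n in (21, 22, 23, 24):
    print(f"Solar cycle {n}: {length_years(n):.2f} years")
```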
Appendix B. Mathematics used in the analysis
B.1 Uncertainty in predictions
The results from a regression analysis can be used to predict future measurements. There are uncertainties associated with such predictions. The uncertainty may be expressed as a 95% confidence interval around the prediction: there is a 95% probability that the next measurement will fall inside this interval. When I calculate the confidence interval for the prediction, I take into account both the uncertainty of the next measurement and the uncertainty of the estimate itself. I have explained this in more detail in an earlier blog post; see equation (3.3) in Confidence intervals around temperature trend lines, with reference to chapter 8.3.11 in Statistical Analysis in Climate Research, written by Hans von Storch and Francis W. Zwiers in 2001.
The blog post you are reading now expresses the uncertainty as the probability of measuring a temperature as high as, or higher than, the one measured, provided that the model is correct. To do that, I reorganize the aforementioned equation (3.3) to calculate the t-value of the measurement. The t-value is the difference between the measurement and the prediction divided by the uncertainty of the prediction. Thereafter I apply the cumulative distribution function of the t distribution to calculate the probability of getting a t-value as big as, or greater than, the one I just calculated.
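A minimal sketch of this calculation is shown below. It uses the standard prediction-error variance for simple linear regression, which is my reading of equation (3.3); the function and variable names are my own, not taken from the original analysis.

```python
# Sketch: probability of a measurement at least as warm as the one
# observed, given the regression model (my own implementation).
import numpy as np
from scipy import stats

def exceedance_probability(x, y, x_new, y_measured):
    """x: previous cycle lengths, y: cycle mean temperatures (training data).
       x_new: length of the cycle preceding the predicted one.
       y_measured: temperature actually measured in the predicted cycle."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    fit = stats.linregress(x, y)
    y_pred = fit.intercept + fit.slope * x_new
    resid = y - (fit.intercept + fit.slope * x)
    s2 = np.sum(resid ** 2) / (n - 2)              # residual variance
    se_pred = np.sqrt(s2 * (1 + 1 / n
                            + (x_new - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)))
    t_value = (y_measured - y_pred) / se_pred      # the t-value described above
    return stats.t.sf(t_value, df=n - 2)           # upper-tail probability
```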
B.2 The Durbin-Watson test for autocorrelation
SSH place great emphasis on the Durbin-Watson statistical test, so I have also used it in my analysis.
After a regression analysis, the measurements will usually deviate more or less from the regression curve. The deviations, the vertical distances between the measurements and the curve, are called residuals. The Durbin-Watson test calculates a value that is used to determine whether there is statistically significant autocorrelation in the residuals. Equation (1) shows how this value d is calculated; d is always between 0 and 4.

d = Σ_{i=2..N} (e_i − e_{i−1})² / Σ_{i=1..N} e_i²   (1)

In Equation (1), N is the number of measurements and e_i is the residual of measurement number i.
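Computed directly from the residuals, equation (1) is only a few lines of code. The snippet below is a minimal sketch; ready-made implementations also exist, for example in statsmodels.

```python
# Sketch: the Durbin-Watson test value d from equation (1).
import numpy as np

def durbin_watson(residuals):
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
```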
Positive autocorrelation means that consecutive residuals tend to have the same sign, causing d to be substantially less than 2. Negative autocorrelation means that consecutive residuals tend to have the opposite sign, causing d to be substantially larger than 2.
A positive autocorrelation is statistically significant when d is smaller than a critical value dc. The critical value depends on the significance level and on the number of observations, but dc also depends on the data. Therefore, statistical tables indicate a lower value dL and an upper value dU for dc. When d is smaller than dL, there is statistically significant positive autocorrelation. When d is greater than dU and less than (4-dU), there is no statistically significant autocorrelation, neither positive nor negative. When d is between dL and dU, we cannot be certain whether the positive autocorrelation is statistically significant or not. The same holds, mirrored around 2, for values of d greater than 2: when d is greater than (4-dL), there is statistically significant negative autocorrelation.
Table 2. Critical values for the Durbin-Watson test value d. The values are for α equal to 0.05 and when d is calculated using the residuals of a linear regression analysis.
Statistically significant autocorrelation is an indication that the regression curve is an incomplete model for the measurements. Autocorrelations are usually positive. We can compensate for positive autocorrelation when we calculate the uncertainties by setting the number of independent measurements Neff to less than the actual number of measurements N. This approach is often used when calculating uncertainties associated with trends based on monthly temperatures. For the Solar Cycle Model, on average, 11 years pass between each observation (mean temperature), and the number of observations is therefore only between 10 and 15. It feels wrong to reduce the number of independent measurements, and I have not done so. SSH touch on this in [1, 2], but they also opt not to reduce Neff. If we had reduced Neff, the 95% confidence interval would have broadened, making the prediction look both better and worse: better, because measurements would be more likely to fall within the 95% confidence interval around the prediction; worse, because the greater uncertainty of the predictions would make them less likely to be useful.
The Durbin-Watson test is well described in Wikipedia, with links to other sources.
Appendix C. Statistics for the temperature series.
The Durbin-Watson test checks if there is autocorrelation in the residuals after a regression analysis. SSH use this test in [1, 2]. I use the same α (0.05) when testing for significance. The test classifies the result in one of three categories. They are, with my interpretation in parentheses: no significant autocorrelation (passing the test), possibly significant autocorrelation (almost passing the test), or significant autocorrelation (failing the test). In Table 3, the Durbin-Watson test value d is written in black when there is no significant autocorrelation, in red when there is possibly significant autocorrelation, and in bold red when there is significant autocorrelation.
The slope of the calculated trend line between the observed mean temperatures and the lengths of the previous solar cycles is statistically significantly different from zero when the p-value is less than 0.05. The p-value in Table 3 is written in black when it is less than 0.05 and in bold red when it is greater than that.
For each solar cycle and each temperature series, Table 3 shows four values. They are explained in the bullet list below. In the explanations, 'solar cycle' means the solar cycle number shown in the column heading.
- pth [%] is the probability of measuring a temperature as high as, or higher than, the one measured, provided that the model is correct. The calculation is based on the temperature measured in the solar cycle and the model's prediction of that temperature. The prediction is based on data up to, but not including, the solar cycle.
The three next values are statistical results when the model predicts the temperature in the next solar cycle. The prediction is based on data up to and including the solar cycle.
- dw: The Durbin-Watson test value. See the explanation in the first paragraph of this appendix.
- slope [°C per year of solar cycle length]: how sensitive the prediction of the temperature in the next solar cycle is to the length of the previous solar cycle.
- p-value: The probability that random noise in the temperature measurements could result in a slope as large as, or larger than, the slope calculated. A value less than 0.05 is interpreted as statistically significant and written in black. A value larger than 0.05 is written in bold red.
Table 3. Solar Cycle Model Statistics. The column headings 20 to 24 are solar cycle numbers. See the explanation in the text.
The probability of measuring a temperature as high as, or higher than, the one measured in solar cycle 24, provided that the model is correct, is for all practical purposes equal to zero. Likewise, the two statistical test values dw and p-value show that the model is unsuitable for predicting the temperature in solar cycle 25. The last column in Table 3 shows that this applies to all temperature series.
The next-to-last column in Table 3 shows that the model also failed to predict the temperature in solar cycle 23. This applies to all temperature series. Solar cycle 23 ended in November 2008.
The first column in Table 3, the one for solar cycle 20 which ended in February 1976, shows that the model performed well with all the temperature series until then. The temperatures predicted for that cycle matched well with the temperatures that were measured, and the statistical tests for the predictions of the temperatures in solar cycle 21 were all OK. But the next columns show that the model started to perform worse thereafter. This is not surprising, as explained in the Conclusion earlier in the blog post.
References
1. Jan-Erik Solheim, Kjell Stordahl and Ole Humlum.
2. Jan-Erik Solheim, Kjell Stordahl and Ole Humlum.
3. Skeie et al.
Comment: This is excellent. I am hugely impressed by the work and effort you have put into this. I think you should aim to publish it in one of the journals that SSH published in almost ten years ago. In any case, you will never get SSH to realize or admit that their predictions were wrong.

Reply: Thank you very much for the encouraging comment.

I have considered the possibility of trying to get a so-called Commentary into the journal in which SSH published article [2]. But I think it is too late to do so, and that I, as an unknown pensioner, would have little chance of getting it published. If I were going to do it, I should have done it in 2012, when I first worked on SSH's solar cycle model. Already then it was clear that, based on data up to and including solar cycle 23, it missed badly in its predictions for solar cycle 24, and that the temperatures measured up to 2012 were much higher than the model predicted based on data up to and including solar cycle 24.

I am happy to discuss further follow-up with you at hapetja (at) online (dot) no.