Regression model input

In the following, we describe the data required to set up a regression model of the net land sink. For numerical accuracy and to take into account the seasonality of plant growth, the local temperature sensitivity of the land carbon flux and the ocean carbon sink model should be assessed at least at monthly resolution. Therefore, the ocean and land inputs to equation (2) are first calculated monthly and then averaged to annual resolution for the regression analysis.

Rödenbeck et al.19 derived the local temperature sensitivity (\(\hat{\gamma }\)) of interannual carbon flux from global CO2 inversions. An extension of the classical carbon flux inversion in which the time-dependent land carbon exchange is optimized, the ‘NEE-T inversion’ imposes a linear relationship between the prevailing temperature anomalies and the interannual carbon flux in each land grid cell. In the inversion, the mean seasonal cycle, the trend of the seasonal cycle and the long-term trend of the carbon flux are calculated separately. \(\hat{\gamma }\) is resolved seasonally (with a four-week decorrelation scale) but repeated every year, giving 52/4 = 13 independent degrees of freedom in time. The resulting \(\hat{\gamma }\) field from the CarboScope inversion ‘sEXTocNEET_v2021’ is provided on a spatial grid of 2.5° × 2° and at daily resolution, which we average to monthly values. To obtain the final weighting factors \(\gamma \left(i,\mathrm{mon}\right)\) used in our regression modelling, we normalize the sensitivity \(\hat{\gamma }\) in each grid cell \(i\) out of \(n\) total cells by:

$$\gamma \left(i,\mathrm{mon}\right)=\frac{1}{12}\frac{\hat{\gamma}\left(i,\mathrm{mon}\right)}{ \mathop{\sum }\limits_{k=1}^{12}\mathop{\sum }\limits_{l=1}^{n}\hat{\gamma }\left(l,k\right)} $$

(5)
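As a minimal illustration of this normalization (a sketch with placeholder values, not the authors' code), equation (5) can be applied to a monthly sensitivity array as follows:

```python
# Sketch of the eq. (5) normalization; gamma_hat holds placeholder values for
# a hypothetical grid of n = 1,000 cells and 12 calendar months.
import numpy as np

def normalize_sensitivity(gamma_hat):
    """gamma_hat: array of shape (12, n) with the monthly-mean sensitivities."""
    total = gamma_hat.sum()               # sum over all 12 months and n grid cells
    return gamma_hat / (12.0 * total)     # gamma(i, mon) = (1/12) * gamma_hat / total

gamma_hat = np.abs(np.random.default_rng(0).normal(size=(12, 1000)))
gamma = normalize_sensitivity(gamma_hat)
```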

Monthly temperature fields were downloaded from Berkeley Earth30 on a 1° × 1° latitude–longitude grid. The data are re-gridded to match the resolution of the Jena CarboScope inversion results and subsequently detrended in each grid cell by subtracting an exponential moving average of past temperatures with a time constant of 2.5 years. This avoids the causality problems associated with other filters, such as a centred running mean, which effectively incorporates information about the future, and it allows the model to be updated easily because the filter produces valid output at the leading edge of the available data. Subtracting the exponential moving average is essentially the same as assuming that temperature controls carbon fluxes into or out of carbon pools with a turnover time of 2.5 years, analogous to the approach of Rafelski et al.22.
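A minimal sketch of this causal filter (assuming a plain one-dimensional monthly series; not the authors' code):

```python
# Causal detrending of a monthly temperature series: subtract an exponential
# moving average of past values with a 2.5-year (30-month) time constant.
import numpy as np

def detrend_ema(temps, tau_months=30.0):
    """temps: 1-D array of monthly temperatures; returns anomalies w.r.t. a causal EMA."""
    alpha = 1.0 - np.exp(-1.0 / tau_months)   # monthly smoothing factor
    ema = np.empty(len(temps))
    ema[0] = temps[0]
    for t in range(1, len(temps)):
        ema[t] = alpha * temps[t] + (1.0 - alpha) * ema[t - 1]
    return np.asarray(temps, dtype=float) - ema
```

Because only past values enter the average, the anomalies remain valid at the most recent end of the record.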

For the main configuration of our net land sink model, the ‘weighted-T’ model, Γ is calculated as \(\varGamma =\mathop{\sum}\nolimits_{i=1}^{n}{\gamma}_{i}{T}_{i}\), the sum over all \(n\) grid cells worldwide of the gridded temperature sensitivity weights \({\gamma}_{i}\) and the local temperature anomalies \({T}_{i}\). For alternative configurations of the model, we also calculate the global mean land temperature and the Niño 3.4 index from the re-gridded temperature fields and use either of these in place of Γ.
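The weighted sum can be illustrated with the following sketch (hypothetical array names and placeholder data, not the authors' code); because the weights already carry the 1/12 factor from equation (5), summing over the 12 months corresponds to the monthly-to-annual averaging described above:

```python
# Sketch of the 'weighted-T' predictor: gamma holds the eq. (5) weights
# (12 months x n cells) and temp_anom the detrended anomalies (years x 12 x n).
import numpy as np

rng = np.random.default_rng(0)
gamma = np.abs(rng.normal(size=(12, 1000)))
gamma /= 12.0 * gamma.sum()                       # eq. (5) normalization
temp_anom = rng.normal(size=(61, 12, 1000))       # placeholder anomalies, 1960-2020

Gamma_annual = np.einsum('mi,ymi->y', gamma, temp_anom)  # sum over months and cells
```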

De-seasonalized CO2 observations from the South Pole (SPO) and Mauna Loa (MLO, Hawaii) are available at monthly resolution from the Scripps CO2 program after 1958 (ref. 39). Their monthly averages have been merged with a spline fit to the Law Dome ice core CO2 records, available from the Scripps CO2 website, to extend the time series back to 1800 (refs. 39,40).

The atmospheric growth rate of CO2 (AGR) is calculated as the 12-month centred difference. For example, the AGR for May 2018 is calculated as the difference in CO2 concentration between November 2018 and November 2017.
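A minimal sketch of this centred difference (assuming a pandas Series with a regular monthly index; not the authors' code):

```python
# AGR at month t = CO2(t + 6 months) - CO2(t - 6 months)
import pandas as pd

def agr_centred(co2: pd.Series) -> pd.Series:
    """co2: monthly CO2 concentrations; returns the 12-month centred difference."""
    return co2.shift(-6) - co2.shift(6)
```

With this convention, the value assigned to May 2018 is the CO2 concentration in November 2018 minus that in November 2017, as in the example above.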

Monthly ocean carbon uptake is calculated using the pulse-response ocean model published by Joos et al.33, which only requires a history of atmospheric CO2 concentration as input. The model fully accounts for the non-linearity of carbon dissolution chemistry through a higher-order parametrization and represents carbon exchange with the deep ocean via circulation through a simple pulse-response function. The model has been shown to effectively emulate the behaviour of three-dimensional ocean models for the twenty-first century at a fraction of the computational cost. The model is initialized from an equilibrium state in 1800 with the parameter values for the HILDA (HS + LS) model configuration given in Table 2 and section A2.2 of Joos et al.33. The model output has been rescaled to match the integrated ocean sink of 176.03 GtC between 1800 and 2021 reported by the Global Carbon Project (GCP)5.
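To show the structure of such a pulse-response scheme, here is a deliberately simplified and hypothetical sketch: the surface carbon perturbation is the convolution of past air–sea fluxes with a mixed-layer impulse-response function, and the flux follows the air–sea pCO2 difference. The response function, gas-exchange coefficient and linear 'chemistry' below are placeholders, not the HILDA parameters of Joos et al.33.

```python
import numpy as np

def ocean_uptake(co2_atm, co2_preind=278.0, k_gas=0.2, dt=1.0 / 12.0):
    """co2_atm: monthly atmospheric CO2 (ppm); returns monthly uptake (arbitrary units)."""
    n = len(co2_atm)
    t = np.arange(n) * dt
    # placeholder impulse-response: fast mixed-layer plus slower deep-ocean exchange
    irf = 0.7 * np.exp(-t / 1.0) + 0.3 * np.exp(-t / 20.0)
    flux = np.zeros(n)
    carbon_pert = np.zeros(n)                        # surface-ocean carbon perturbation
    for i in range(n):
        # crude linear 'chemistry': surface pCO2 rises with the carbon perturbation
        pco2_surf = co2_preind + 10.0 * carbon_pert[i]
        flux[i] = k_gas * (co2_atm[i] - pco2_surf)   # uptake when atmosphere > surface
        if i + 1 < n:
            # perturbation = convolution of all past fluxes with the response function
            carbon_pert[i + 1] = np.sum(flux[: i + 1] * irf[i::-1]) * dt
    return flux
```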

We use annual emissions from fossil fuel combustion and industrial processes as reported by the GCP5. To reduce the number of terms in the budget, the effect of the cement carbonation sink has been grouped with the fossil emissions.

Carbon Monitor37 began reporting monthly fossil emissions in 2019, which we use to highlight the effects of the COVID-19 pandemic in Fig. 2. To remove seasonal-cycle effects from the data and to create a time series consistent with the emissions reported by the GCP5, we first divided the post-2020 Carbon Monitor emissions by their value in the corresponding month of 2019 and then multiplied that ratio by the GCP annual emissions for 2019.
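A minimal sketch of this scaling (hypothetical variable names and a monthly PeriodIndex are assumed; not the authors' code):

```python
import pandas as pd

def scale_carbon_monitor(cm_monthly: pd.Series, gcp_2019_annual: float) -> pd.Series:
    """cm_monthly: Carbon Monitor emissions indexed by a monthly PeriodIndex."""
    ref_2019 = cm_monthly[cm_monthly.index.year == 2019]
    scaled = cm_monthly[cm_monthly.index.year >= 2020].astype(float).copy()
    for period in scaled.index:
        # ratio to the same calendar month of 2019 ...
        scaled[period] /= ref_2019[ref_2019.index.month == period.month].iloc[0]
        # ... rescaled by the GCP annual emissions for 2019
        scaled[period] *= gcp_2019_annual
    return scaled
```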

Regression model set-up and uncertainties

First, a regression model is fitted to all input data between 1960 and 2020 to obtain the best estimates of the coefficients a, b and c and the R2 value of the model. Then, the input data are split into M = 12 non-overlapping blocks of five years to determine the uncertainty of the model parameters and to find the generalization error, which measures how well the model performs on data it was not trained on. Grouping into blocks addresses potential concerns about autocorrelation in the data on timescales of less than five years. M − 1 blocks, the ‘training data’, are used to fit the regression model and obtain parameter estimates, and the remaining block, the ‘validation data’, is used to evaluate model performance in terms of the mean squared error (MSE). The partitioning process is iterated using each block once as validation data and can be thought of as a combination of the statistical methods ‘jackknifing’ and ‘cross-validation’.
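A compact sketch of this block partitioning and cross-validation loop (assuming ordinary least squares on annual predictor and target arrays; not the authors' code):

```python
import numpy as np

def block_cross_validation(X, y, n_blocks=12):
    """X: (n_years, n_predictors) array, y: (n_years,) target, years in order."""
    blocks = np.array_split(np.arange(len(y)), n_blocks)   # 12 five-year blocks
    params, mses = [], []
    for val_idx in blocks:
        train_idx = np.setdiff1d(np.arange(len(y)), val_idx)
        A = np.column_stack([np.ones(len(train_idx)), X[train_idx]])
        coef, *_ = np.linalg.lstsq(A, y[train_idx], rcond=None)   # fit on M-1 blocks
        pred = np.column_stack([np.ones(len(val_idx)), X[val_idx]]) @ coef
        params.append(coef)                                       # sub-model parameters
        mses.append(np.mean((y[val_idx] - pred) ** 2))            # validation MSE
    return np.array(params), np.array(mses)
```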

Jackknifing produces a suite of M = 12 submodels that are fitted to different choices of training data and used to determine the uncertainties of the regression model parameters, including the coefficients a, b and c and the R2 value. Taking into account the overlap in training data between the different submodels, we report the uncertainty \({\delta }_{x}\) of a parameter \(x\) based on equation (6):

$${\delta }_{x}=\sqrt{\frac{M-1}{M}{\sum }_{i=1}^{M}{\left({x}_{i}- \bar{x}\right)}^{2}}$$

(6)

where \(\bar{x}\) is the mean of the parameter \(x\) over all submodels \(i\).
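Equation (6) translates directly into code (a sketch, assuming the M submodel estimates are collected in an array):

```python
import numpy as np

def jackknife_uncertainty(x):
    """x: array of the M parameter estimates from the submodels; returns delta_x."""
    M = len(x)
    return np.sqrt((M - 1) / M * np.sum((x - np.mean(x)) ** 2))
```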

Cross-validation generates M estimates of the generalization error, that is, the \({\mathrm{MSE}}_{j}\) obtained from the different validation periods \(j=1,\ldots ,M\) (Extended Data Fig. 1). We take the average \(\overline{\mathrm{MSE}}\) and the RMSE (\(\mathrm{RMSE}=\sqrt{\overline{\mathrm{MSE}}}\)) across all submodels as the best estimate of the generalization error. For the decadal RMSE values, the regression residuals \({\varepsilon }_{j}\) calculated during the validation periods are smoothed with a ten-year moving average before the MSE is computed, including at the edges, where this introduces a slight bias.

The \(\overline{\mathrm{MSE}}\) and RMSE of the GCP5 budget imbalance are calculated in the same way: the GCP data are divided into five-year blocks and the MSE is determined for each independent five-year block.

To allow a quantitative comparison of the different models and a statistical comparison with the GCP, confidence bounds must be placed on the generalization error. Cross-validation generates a distribution of MSE values (\({\mathrm{MSE}}_{j}\)) obtained for the different choices of five-year validation blocks \(j\). Although the validation blocks are independent, the training data of the different submodels overlap strongly. Therefore, the variance of the \({\mathrm{MSE}}_{j}\) values captures only part of the uncertainty of the generalization error. To estimate the true generalization error, we use a heuristic method that relies on rescaling the variance of the \({\mathrm{MSE}}_{j}\) cross-validation values38:

$$\mathrm{var}\left(\overline{\mathrm{MSE}}\right)\approx \left(\frac{1}{M}+\frac{{n}_{2}}{{n}_{1}}\right)\mathrm{var}\left({\mathrm{MSE}}_{j}\right)$$

(7)

where \({n}_{2}\) is the length of the validation period and \({n}_{1}\) is the length of the training period of the model. Using this formula, we obtain MSE values of 0.25 ± 0.08, 0.36 ± 0.12, 0.55 ± 0.31 and 0.58 ± 0.17 (1σ) for the Weighted-T model, the Tropical-T model, the ENSO model and the GCP, respectively. We use the delta method (which propagates the uncertainty through a first-order Taylor expansion of the square root) to estimate the uncertainties of the corresponding RMSE values.
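As a sketch of this error estimate (assuming the MSE values come from the cross-validation loop above; not the authors' code), equation (7) and the delta-method step can be written as:

```python
import numpy as np

def generalization_error(mse_j, n_train, n_val):
    """mse_j: array of M cross-validation MSEs; n_train, n_val: period lengths in years."""
    mse_mean = np.mean(mse_j)
    var_mse = (1.0 / len(mse_j) + n_val / n_train) * np.var(mse_j, ddof=1)   # eq. (7)
    rmse = np.sqrt(mse_mean)
    rmse_sigma = np.sqrt(var_mse) / (2.0 * rmse)   # delta method: d sqrt(m)/dm = 1/(2 sqrt(m))
    return mse_mean, np.sqrt(var_mse), rmse, rmse_sigma
```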

We perform two sensitivity tests to examine the roles of volcanic activity and of the ocean model in our regression analysis. We repeat the regression analysis including stratospheric aerosol optical depth34 as an additional predictor to better capture the impact of volcanic eruptions on land carbon exchange. The results and regression parameters are presented in Extended Data Table 1. Although the Weighted-T model does not benefit from the additional predictor and we see only a small improvement in the performance of the ENSO and Tropical-T models, the Weighted-T model still outperforms both. When the period of the Mount Pinatubo eruption (1991–1993) is excluded from the regression analysis, the performance of all models increases (Extended Data Table 3), but the results remain qualitatively the same. We also repeat the regression analysis using the mean of the GCP ocean sink estimates instead of the Joos et al. pulse-response model. This leads to a slight improvement in the RMSE of all model configurations but does not change our conclusions (Extended Data Table 2).

To support our claim that the GCP budget imbalance and the residuals of our regression model are comparable quantities, we correct the GCP budget imbalance with a fit of the imbalance against atmospheric CO2 concentration. After cross-validation, this yields an RMSE of 0.8 ± 0.11, which is higher than the value calculated without the additional fitting correction. This suggests that atmospheric CO2 has no predictive power for the GCP budget imbalance on record and that our results are not an artefact of additional tuning.

Finally, we repeat the regression analysis after replacing the CO2 term with a logarithmic relation, that is, \(b\times {\rm{ln}}({\rm{CO}}_{2})\), which does not lead to any significant change in the performance of the models.

GCP emission verification

Following refs. 3,4, we benchmark our ability to verify reported fossil fuel and land-use change emissions against the GCP global carbon budget5, using its residual imbalance as a metric of uncertainty. The budget imbalance is temporally autocorrelated and is best represented by an AR(1) process with an AR coefficient of 0.35, as determined with the ARIMA function of the Python statsmodels package. In a Monte Carlo simulation (n = 200,000), the GCP budget imbalance is modelled as this AR(1) process, yielding a cumulative uncertainty of 4.4 GtC (95% CI) after five years, or 8.8% assuming continued emissions of 10 GtC yr−1.
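A minimal sketch of this Monte Carlo estimate (the innovation standard deviation below is a placeholder, not the value fitted to the GCP imbalance):

```python
import numpy as np

def ar1_cumulative_ci(phi=0.35, sigma=0.6, years=5, n_sim=200_000, seed=0):
    """Simulate an AR(1) budget imbalance and return the 95% CI of its cumulative sum."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_sim)
    cumulative = np.zeros(n_sim)
    for _ in range(years):
        x = phi * x + rng.normal(0.0, sigma, n_sim)   # annual imbalance draw (GtC yr-1)
        cumulative += x
    return np.percentile(cumulative, [2.5, 97.5])     # cumulative 95% interval (GtC)
```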
