In this panel, we show histograms of the observed temperature data, for the training set and the test set.
Temperature ranges where data in the training set are scarce will lead to higher uncertainty in the estimate of the exposure-response function.
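As a minimal sketch of such a comparison (assuming hypothetical temperature vectors temp_train and temp_test; the app's actual data objects may be named differently), overlaid histograms can be drawn in base R:

# Hypothetical daily temperature vectors for the training and test sets
set.seed(1)
temp_train <- rnorm(3 * 365, mean = 12, sd = 8)
temp_test  <- rnorm(365, mean = 13, sd = 8)

# Common breaks so the two histograms are directly comparable
breaks <- seq(floor(min(temp_train, temp_test)),
              ceiling(max(temp_train, temp_test)), by = 1)

hist(temp_train, breaks = breaks, col = rgb(0, 0, 1, 0.4),
     main = "Observed temperature", xlab = "Temperature (°C)")
hist(temp_test, breaks = breaks, col = rgb(1, 0, 0, 0.4), add = TRUE)
legend("topright", legend = c("training", "test"),
       fill = c(rgb(0, 0, 1, 0.4), rgb(1, 0, 0, 0.4)))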
In this panel, we show the time series of daily observed temperature per year, for the time period of interest.
Temperature values above a given threshold (default 25 °C) are shown in red. You can change the default threshold value in the input field.
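A minimal sketch of this highlighting in base R, assuming a hypothetical data frame df with columns date and temp and a user-chosen threshold:

threshold <- 25  # default threshold in °C; adjustable in the input field

# Hypothetical daily temperature series
df <- data.frame(date = seq(as.Date("2020-01-01"), by = "day", length.out = 365))
df$temp <- 12 + 10 * sin(2 * pi * as.numeric(format(df$date, "%j")) / 365) +
  rnorm(nrow(df), sd = 2)

# Points above the threshold are drawn in red
plot(df$date, df$temp, pch = 16, cex = 0.5,
     col = ifelse(df$temp > threshold, "red", "black"),
     xlab = "Date", ylab = "Temperature (°C)")
abline(h = threshold, lty = 2)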
The table shows an overview of the user inputs for the data and the regression model. You can save these settings along with the model results.
This panel displays the output of the summary() function in R, applied to the fitted model.
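For illustration only, the kind of output shown here can be reproduced for any fitted regression object; the sketch below uses a generic Poisson glm() on simulated data, not the app's actual model specification:

# Illustrative fit only; the app's actual model formula may differ
set.seed(1)
d <- data.frame(temp = rnorm(100, 15, 5), count = rpois(100, 10))
fit <- glm(count ~ temp, family = poisson, data = d)
summary(fit)  # coefficient table, standard errors, deviance, AIC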
In the plot above, we used the regression coefficients estimated for the temperature indicator to build the exposure-response function.
In the panels below, we show the regression coefficients for the other predictors in the model, in particular the autoregressive terms (left) and the effects of day of the week and holidays (right).
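As a hedged sketch of how such an exposure-response curve can be traced from estimated coefficients, assuming a natural spline basis for temperature (the app's actual basis and model may differ):

library(splines)

# Illustrative data and fit; not the app's actual model
set.seed(1)
d <- data.frame(temp = runif(500, -5, 35))
d$count <- rpois(500, exp(2 + 0.002 * (d$temp - 15)^2))
fit <- glm(count ~ ns(temp, df = 4), family = poisson, data = d)

# Predict over a fine temperature grid to trace the exposure-response curve
grid <- data.frame(temp = seq(-5, 35, by = 0.5))
pred <- predict(fit, newdata = grid, type = "link", se.fit = TRUE)
plot(grid$temp, exp(pred$fit), type = "l",
     xlab = "Temperature (°C)", ylab = "Relative response")
lines(grid$temp, exp(pred$fit + 1.96 * pred$se.fit), lty = 2)
lines(grid$temp, exp(pred$fit - 1.96 * pred$se.fit), lty = 2)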
In this panel, we show evaluation metrics for the model fit on the training set and 1-step ahead predictions on the test set.
The evaluation metrics are the root mean square error (RMSE), the mean absolute error (MAE), and the Akaike information criterion (AIC).
We show results for the reference model (with temperature as a predictor), for a model without temperature, and for two naive models based on last observation carried forward (LOCF).
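A minimal sketch of the RMSE, MAE and LOCF computations, assuming hypothetical vectors obs and pred of observed and predicted daily values (AIC is taken directly from the fitted model object, e.g. AIC(fit)):

# Hypothetical observed and predicted daily counts
set.seed(1)
obs  <- rpois(100, 10)
pred <- obs + rnorm(100, sd = 2)

rmse <- sqrt(mean((obs - pred)^2))  # root mean square error
mae  <- mean(abs(obs - pred))       # mean absolute error

# Naive LOCF forecast: carry the last observation forward one step
locf <- c(NA, obs[-length(obs)])
rmse_locf <- sqrt(mean((obs - locf)^2, na.rm = TRUE))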
This panel shows the fit of the model on the training set.
Green represents the fitted values, while black indicates the data. Dots indicate daily values; lines indicate a weekly rolling mean as a guide for the eye.
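The weekly rolling mean can be computed, for example, as a centered 7-day moving average; a sketch in base R with a hypothetical daily series y:

set.seed(1)
y <- rpois(365, 10)  # hypothetical daily values

# Centered 7-day rolling mean via a moving-average filter
roll7 <- stats::filter(y, rep(1 / 7, 7), sides = 2)

plot(seq_along(y), y, pch = 16, cex = 0.4,
     xlab = "Day", ylab = "Count")
lines(seq_along(y), roll7, lwd = 2)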
In the left panel, we show examples of model forecasts at three time points in the test set.
Dots represent the raw data. The black line indicates the 1-step ahead prediction and is shown as a reference.
The blue line indicates forecasts for the next 7 days (continuous line) or 21 days (dashed line).
Shaded areas indicate prediction intervals (50% and 90% PI). The prediction interval for step h assumes a point prediction at step h-1, without propagating uncertainty from one day to the next.
A method for propagating uncertainty in the prediction interval is implemented in Section 3.
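A hedged sketch of this iterative scheme for a simple autoregressive model (the app's model includes further predictors): each point prediction is plugged back in as if it were observed, so the interval at step h reflects only one-step-ahead error.

# Illustrative AR(1)-type model on a simulated series;
# the app's model also includes temperature and calendar terms
set.seed(1)
y <- as.numeric(arima.sim(list(ar = 0.7), n = 200)) + 10
d <- data.frame(y = y[-1], y_lag = y[-length(y)])
fit <- lm(y ~ y_lag, data = d)

h <- 7                          # forecast horizon in days
last <- tail(y, 1)
fc <- matrix(NA, h, 3, dimnames = list(NULL, c("fit", "lwr", "upr")))
for (i in seq_len(h)) {
  # The interval at each step conditions on the previous *point*
  # prediction, so one-step uncertainty is not propagated
  p <- predict(fit, newdata = data.frame(y_lag = last),
               interval = "prediction", level = 0.90)
  fc[i, ] <- p
  last <- p[1, "fit"]           # plug the point prediction back in
}
fc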
In the right panel, we show forecast errors (computed as the root mean square error, RMSE) for different forecast horizons.
In the plot on the left, we show forecast errors (computed as the root mean square error, RMSE) for different forecast horizons (first, second, or third week in the future), for the reference model and for a model that omits temperature as a predictor. We consider here all possible forecasting windows in the test set, each with a given starting date.
On the right, we consider a subset of forecasting windows: those where the average temperature over the 7-day period of the forecasting window is above a given threshold value.
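A minimal sketch of both computations, assuming a hypothetical matrix err of forecast errors (one row per forecasting window, one column per day ahead) and a hypothetical vector win_mean_temp of window-average temperatures:

# Hypothetical forecast errors: one row per forecasting window
# (each with its own start date), one column per day ahead (up to 21)
set.seed(1)
err <- matrix(rnorm(50 * 21), nrow = 50, ncol = 21)

# Left panel: RMSE per weekly horizon (days 1-7, 8-14, 15-21)
weeks <- list(week1 = 1:7, week2 = 8:14, week3 = 15:21)
sapply(weeks, function(idx) sqrt(mean(err[, idx]^2)))

# Right panel: restrict to windows whose mean temperature over the
# 7-day period exceeds the threshold
win_mean_temp <- rnorm(50, mean = 18, sd = 6)  # hypothetical window means
hot <- win_mean_temp > 25
sapply(weeks, function(idx) sqrt(mean(err[hot, idx]^2)))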
In this panel, we show forecast errors on the test set after cross-validation across years.
For each year in the data, we train the model on all other years
and test it on the held-out year. We then compute forecast errors on the held-out data,
analogously to what is done in the 'Forecast errors' tab.
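A hedged sketch of this leave-one-year-out loop on simulated data, with a simple Poisson model standing in for the app's actual model:

# Hypothetical daily data with a 'year' column; the app's actual model
# also includes autoregressive and calendar terms
set.seed(1)
df <- data.frame(year = rep(2015:2019, each = 365),
                 temp = rnorm(5 * 365, 15, 8))
df$count <- rpois(nrow(df), exp(2 + 0.02 * df$temp))

cv <- sapply(unique(df$year), function(yr) {
  train <- subset(df, year != yr)
  test  <- subset(df, year == yr)
  fit  <- glm(count ~ temp, family = poisson, data = train)
  pred <- predict(fit, newdata = test, type = "response")
  sqrt(mean((test$count - pred)^2))  # RMSE on the held-out year
})
data.frame(year = unique(df$year), rmse = cv)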