Probabilistic forecasting with machine learning

If you like Skforecast, help us by giving it a star on GitHub! ⭐️


Joaquín Amat Rodrigo, Javier Escobar Ortiz
April, 2022 (last update November 2024)

Introduction

When trying to anticipate future values, most forecasting models aim to predict the single most likely value. This is called point forecasting. Although knowing the expected value of a time series in advance is useful in almost every business case, this kind of prediction does not provide any information about the model's confidence or the uncertainty of its predictions.

Probabilistic forecasting, as opposed to point forecasting, is a family of techniques that allow for predicting the expected distribution of the outcome instead of a single future value. This type of forecasting provides much richer information, since it allows for creating prediction intervals: the range of likely values within which the true value may fall. More formally, a prediction interval defines the interval within which the true value of the response variable is expected to be found with a given probability.

Estimating prediction intervals in forecasting is challenging, since many well-established methods for regression and one-step-ahead forecasts are not directly applicable when predicting multiple steps ahead. Additionally, there is a trade-off between two key metrics: coverage and interval width. Ideally, we want to achieve a certain level of coverage (e.g. 80%) while keeping the prediction intervals as narrow as possible and the model's prediction error as low as possible.

There are multiple ways to estimate prediction intervals, most of which require that the residuals (errors) of the model follow a normal distribution. When this assumption cannot be made, two commonly used alternatives are bootstrapping and quantile regression. The following sections illustrate how skforecast estimates prediction intervals for multi-step forecasting with both approaches.

Warning

As Rob J Hyndman explains in his blog, in real-world problems, almost all prediction intervals are too narrow. For example, nominal 95% intervals may only provide coverage between 71% and 87%. This is a well-known phenomenon and arises because they do not account for all sources of uncertainty. With forecasting models, there are at least four sources of uncertainty:
  • The random error term
  • The parameter estimates
  • The choice of model for the historical data
  • The continuation of the historical data generating process into the future
When producing prediction intervals for time series models, generally only the first of these sources is taken into account. Therefore, it is advisable to use test data to validate the empirical coverage of the interval and not only rely on the expected one.

✎ Note

Conformal prediction is a relatively new framework that allows for the creation of confidence measures for predictions made by machine learning models. This method is on the roadmap of skforecast, but not yet available.

Prediction intervals using bootstrapped residuals

The one-step-ahead forecast error is defined as $e_t = y_t - \hat{y}_{t|t-1}$. Assuming that future errors will behave like past errors, different predictions can be simulated by sampling from the collection of errors previously observed (i.e., the residuals) and adding them to the predictions.

By repeating this process, a collection of slightly different predictions (possible future paths) is created, representing the expected variance of the forecasting process.

Finally, prediction intervals can be computed by calculating the $α/2$ and $1 − α/2$ percentiles of the simulated data at each forecasting horizon.


The main advantage of this strategy is that it only requires a single model to estimate any interval. The drawback is that running hundreds or thousands of bootstrapping iterations is computationally expensive and not always feasible.
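The following snippet is a minimal sketch of this idea using plain NumPy (not skforecast's internal implementation): it perturbs a set of illustrative point forecasts with errors sampled from a pool of past residuals and computes the 10th and 90th percentiles of the simulated paths. For simplicity, the sampled errors are added directly to the point forecast; in a recursive forecaster each simulated path would also feed back into the lags of subsequent steps.

# Minimal sketch of bootstrapped prediction intervals (illustrative values)
# ==============================================================================
import numpy as np

rng = np.random.default_rng(123)
point_preds = np.array([75.9, 22.9, 11.1, 7.5, 6.2])  # point forecast, 5 steps ahead
residuals   = rng.normal(loc=0, scale=20, size=500)   # pool of past errors

n_boot = 250
boot_preds = np.empty((n_boot, len(point_preds)))
for i in range(n_boot):
    sampled_errors = rng.choice(residuals, size=len(point_preds), replace=True)
    boot_preds[i, :] = point_preds + sampled_errors    # one simulated future path

# 80% interval: 10th and 90th percentiles of the simulated paths at each step
lower_bound = np.percentile(boot_preds, 10, axis=0)
upper_bound = np.percentile(boot_preds, 90, axis=0)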

Libraries

In [1]:
# Data processing
# ==============================================================================
import numpy as np
import pandas as pd
from skforecast.datasets import fetch_dataset

# Plots
# ==============================================================================
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
import plotly.graph_objects as go
import plotly.io as pio
import plotly.offline as poff
pio.templates.default = "seaborn"
pio.renderers.default = 'notebook' 
poff.init_notebook_mode(connected=True)
plt.style.use('seaborn-v0_8-darkgrid')
from skforecast.plot import plot_residuals
from skforecast.plot import plot_prediction_distribution
from pprint import pprint

# Modelling and Forecasting
# ==============================================================================
import skforecast
import sklearn
import lightgbm
from lightgbm import LGBMRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from skforecast.recursive import ForecasterRecursive
from skforecast.direct import ForecasterDirect
from skforecast.model_selection import TimeSeriesFold
from skforecast.model_selection import bayesian_search_forecaster
from skforecast.model_selection import backtesting_forecaster
from sklearn.metrics import mean_pinball_loss
from scipy.stats import norm

# Configuration
# ==============================================================================
import warnings
warnings.filterwarnings('once')

color = '\033[1m\033[38;5;208m'
print(f"{color}Version skforecast: {skforecast.__version__}")
print(f"{color}Version scikit-learn: {sklearn.__version__}")
print(f"{color}Version lightgbm: {lightgbm.__version__}")
print(f"{color}Version pandas: {pd.__version__}")
print(f"{color}Version numpy: {np.__version__}")
Version skforecast: 0.14.0
Version scikit-learn: 1.5.1
Version lightgbm: 4.4.0
Version pandas: 2.2.3
Version numpy: 2.0.2

Data

In [2]:
# Data download
# ==============================================================================
data = fetch_dataset(name='bike_sharing_extended_features')
data.head(2)
bike_sharing_extended_features
------------------------------
Hourly usage of the bike share system in the city of Washington D.C. during the
years 2011 and 2012. In addition to the number of users per hour, the dataset
was enriched by introducing supplementary features. Addition includes calendar-
based variables (day of the week, hour of the day, month, etc.), indicators for
sunlight, incorporation of rolling temperature averages, and the creation of
polynomial features generated from variable pairs. All cyclic variables are
encoded using sine and cosine functions to ensure accurate representation.
Fanaee-T,Hadi. (2013). Bike Sharing Dataset. UCI Machine Learning Repository.
https://doi.org/10.24432/C5W894.
Shape of the dataset: (17352, 90)
Out[2]:
users weather month_sin month_cos week_of_year_sin week_of_year_cos week_day_sin week_day_cos hour_day_sin hour_day_cos ... temp_roll_mean_1_day temp_roll_mean_7_day temp_roll_max_1_day temp_roll_min_1_day temp_roll_max_7_day temp_roll_min_7_day holiday_previous_day holiday_next_day temp holiday
date_time
2011-01-08 00:00:00 25.0 mist 0.5 0.866025 0.120537 0.992709 -0.781832 0.62349 0.258819 0.965926 ... 8.063334 10.127976 9.02 6.56 18.86 4.92 0.0 0.0 7.38 0.0
2011-01-08 01:00:00 16.0 mist 0.5 0.866025 0.120537 0.992709 -0.781832 0.62349 0.500000 0.866025 ... 8.029166 10.113334 9.02 6.56 18.86 4.92 0.0 0.0 7.38 0.0

2 rows × 90 columns

In [3]:
# One hot encoding of categorical variables
# ==============================================================================
encoder = ColumnTransformer(
              [('one_hot_encoder', OneHotEncoder(sparse_output=False), ['weather'])],
              remainder='passthrough',
              verbose_feature_names_out=False
          ).set_output(transform="pandas")
data = encoder.fit_transform(data)
In [4]:
# Select exogenous features
# ==============================================================================
exog_features = [
    'weather_clear', 'weather_mist', 'weather_rain', 'month_sin', 'month_cos',
    'week_of_year_sin', 'week_of_year_cos', 'week_day_sin', 'week_day_cos',
    'hour_day_sin', 'hour_day_cos', 'sunrise_hour_sin', 'sunrise_hour_cos',
    'sunset_hour_sin', 'sunset_hour_cos', 'temp'
]
data = data[['users'] + exog_features]

To facilitate the training of the models, the search for optimal hyperparameters and the evaluation of their predictive accuracy, the data are divided into three separate sets: training, validation and test.

In [5]:
# Split train-validation-test
# ==============================================================================
data = data.loc['2011-05-30 23:59:00':, :]
end_train = '2012-08-30 23:59:00'
end_validation = '2012-11-15 23:59:00'
data_train = data.loc[: end_train, :]
data_val   = data.loc[end_train:end_validation, :]
data_test  = data.loc[end_validation:, :]

print(f"Dates train      : {data_train.index.min()} --- {data_train.index.max()}  (n={len(data_train)})")
print(f"Dates validacion : {data_val.index.min()} --- {data_val.index.max()}  (n={len(data_val)})")
print(f"Dates test       : {data_test.index.min()} --- {data_test.index.max()}  (n={len(data_test)})")
Dates train      : 2011-05-31 00:00:00 --- 2012-08-30 23:00:00  (n=10992)
Dates validation : 2012-08-31 00:00:00 --- 2012-11-15 23:00:00  (n=1848)
Dates test       : 2012-11-16 00:00:00 --- 2012-12-30 23:00:00  (n=1080)

Graphic exploration

Graphical exploration of time series can be an effective way of identifying trends, patterns, and seasonal variations. This, in turn, helps to guide the selection of the most appropriate forecasting model.

In [6]:
# Interactive plot of time series
# ==============================================================================
fig = go.Figure()
fig.add_trace(go.Scatter(x=data_train.index, y=data_train['users'], mode='lines', name='Train'))
fig.add_trace(go.Scatter(x=data_val.index, y=data_val['users'], mode='lines', name='Validation'))
fig.add_trace(go.Scatter(x=data_test.index, y=data_test['users'], mode='lines', name='Test'))
fig.update_layout(
    title  = 'Number of users',
    xaxis_title="Time",
    yaxis_title="Users",
    width=800,
    height=400,
    margin=dict(l=20, r=20, t=35, b=20),
    legend=dict(
        orientation="h",
        yanchor="top",
        y=1,
        xanchor="left",
        x=0.001
    )
)
fig.show()

Auto-correlation plots are a useful tool for identifying the order of an autoregressive model. The autocorrelation function (ACF) is a measure of the correlation between the time series and a lagged version of itself. The partial autocorrelation function (PACF) is a measure of the correlation between the time series and a lagged version of itself, controlling for the values of the time series at all shorter lags. These plots are useful for identifying the lags to be included in the autoregressive model.

In [7]:
# Autocorrelation plot
# ==============================================================================
fig, ax = plt.subplots(figsize=(6, 2))
plot_acf(data.users, ax=ax, lags=24 * 3)
plt.show()
In [8]:
# Partial autocorrelation plot
# ==============================================================================
fig, ax = plt.subplots(figsize=(6, 2))
plot_pacf(data.users, ax=ax, lags=24 * 3)
plt.show()

The autocorrelation plot demonstrates a strong correlation between the number of users in one hour and prior hours, as well as between users in one hour and the corresponding hour in preceding days. This observed correlation suggests that autoregressive models may be effective in this scenario.

In-sample residuals

A recursive-multi-step forecaster is trained and its hyperparameters optimized. Then, prediction intervals based on bootstrapped residuals are estimated.

In [9]:
# Create forecaster and hyperparameters search
# ==============================================================================
# Forecaster
forecaster = ForecasterRecursive(
                 regressor = LGBMRegressor(random_state=15926, verbose=-1),
                 lags      = 7
             )

# Lags used as predictors
lags_grid = [24, 48, (1, 2, 3, 23, 24, 25, 47, 48, 49, 71, 72, 73, 364*24, 365*24)]

# Folds
cv = TimeSeriesFold(
        steps              = 24,
        initial_train_size = len(data[:end_train]),
        refit              = False,
     )

# Regressor hyperparameters search space
def search_space(trial):
    search_space  = {
        'lags'            : trial.suggest_categorical('lags', lags_grid),
        'n_estimators'    : trial.suggest_int('n_estimators', 200, 800, step=100),
        'max_depth'       : trial.suggest_int('max_depth', 3, 8, step=1),
        'min_data_in_leaf': trial.suggest_int('min_data_in_leaf', 25, 500),
        'learning_rate'   : trial.suggest_float('learning_rate', 0.01, 0.5),
        'feature_fraction': trial.suggest_float('feature_fraction', 0.5, 0.8, step=0.1),
        'max_bin'         : trial.suggest_int('max_bin', 50, 100, step=25),
        'reg_alpha'       : trial.suggest_float('reg_alpha', 0, 1, step=0.1),
        'reg_lambda'      : trial.suggest_float('reg_lambda', 0, 1, step=0.1)
    }

    return search_space

results_search, frozen_trial = bayesian_search_forecaster(
                                   forecaster    = forecaster,
                                   y             = data.loc[:end_validation, 'users'],
                                   exog          = data.loc[:end_validation, exog_features],
                                   cv            = cv,
                                   metric        = 'mean_absolute_error',
                                   search_space  = search_space,
                                   n_trials      = 20,
                                   random_state  = 123,
                                   return_best   = True,
                                   n_jobs        = 'auto',
                                   verbose       = False,
                                   show_progress = True
                               )

best_params = results_search['params'].iloc[0]
best_lags   = results_search['lags'].iloc[0]
`Forecaster` refitted using the best-found lags and parameters, and the whole data set: 
  Lags: [   1    2    3   23   24   25   47   48   49   71   72   73 8736 8760] 
  Parameters: {'n_estimators': 300, 'max_depth': 5, 'min_data_in_leaf': 25, 'learning_rate': 0.10794173876361574, 'feature_fraction': 0.6, 'max_bin': 100, 'reg_alpha': 1.0, 'reg_lambda': 1.0}
  Backtesting metric: 55.26098214742841

Once the best hyperparameters have been selected, the backtesting_forecaster() function is used to generate the prediction intervals for the entire test set.

  • The interval argument indicates the desired coverage probability of the prediction intervals. In this case, interval is set to [10, 90], which means that the prediction intervals are calculated for the 10th and 90th percentiles, resulting in a theoretical coverage probability of 80%.

  • The n_boot argument is used to specify the number of bootstrap samples to be used in estimating the prediction intervals. The larger the number of samples, the more accurate the prediction intervals will be, but the longer the calculation will take.

By default, intervals are calculated using in-sample residuals (residuals from the training set). However, this can result in intervals that are too narrow (overly optimistic).

In [10]:
# Backtesting with prediction intervals in test data using in-sample residuals
# ==============================================================================
cv = TimeSeriesFold(
        steps              = 24,
        initial_train_size = len(data.loc[:end_validation]),
        refit              = False,
    )
metric, predictions = backtesting_forecaster(
                          forecaster              = forecaster,
                          y                       = data['users'],
                          exog                    = data[exog_features],
                          cv                      = cv,
                          metric                  = 'mean_absolute_error',
                          interval                = [10, 90],
                          n_boot                  = 250,
                          use_in_sample_residuals = True,
                          use_binned_residuals    = False,
                          n_jobs                  = 'auto',
                          verbose                 = False,
                          show_progress           = True
                      )
display(metric)
predictions.head(5)
mean_absolute_error
0 50.462723
Out[10]:
pred lower_bound upper_bound
2012-11-16 00:00:00 75.957987 52.697966 100.885160
2012-11-16 01:00:00 22.895924 0.953096 56.633349
2012-11-16 02:00:00 11.105227 -12.096016 41.032015
2012-11-16 03:00:00 7.465197 -19.899672 34.476395
2012-11-16 04:00:00 6.216360 -15.973349 39.723453
In [11]:
# Function to plot predicted intervals
# ======================================================================================
def plot_predicted_intervals(
    predictions: pd.DataFrame,
    y_true: pd.DataFrame,
    target_variable: str,
    initial_x_zoom: list=None,
    title: str=None,
    xaxis_title: str=None,
    yaxis_title: str=None,
):
    """
    Plot predicted intervals vs real values

    Parameters
    ----------
    predictions : pandas DataFrame
        Predicted values and intervals.
    y_true : pandas DataFrame
        Real values of target variable.
    target_variable : str
        Name of target variable.
    initial_x_zoom : list, default `None`
        Initial zoom of x-axis, by default None.
    title : str, default `None`
        Title of the plot, by default None.
    xaxis_title : str, default `None`
        Title of x-axis, by default None.
    yaxis_title : str, default `None`
        Title of y-axis, by default None.
    
    """

    fig = go.Figure([
        go.Scatter(name='Prediction', x=predictions.index, y=predictions['pred'], mode='lines'),
        go.Scatter(name='Real value', x=y_true.index, y=y_true[target_variable], mode='lines'),
        go.Scatter(
            name='Upper Bound', x=predictions.index, y=predictions['upper_bound'],
            mode='lines', marker=dict(color="#444"), line=dict(width=0), showlegend=False
        ),
        go.Scatter(
            name='Lower Bound', x=predictions.index, y=predictions['lower_bound'],
            marker=dict(color="#444"), line=dict(width=0), mode='lines',
            fillcolor='rgba(68, 68, 68, 0.3)', fill='tonexty', showlegend=False
        )
    ])
    fig.update_layout(
        title=title,
        xaxis_title=xaxis_title,
        yaxis_title=yaxis_title,
        width=800,
        height=400,
        margin=dict(l=20, r=20, t=35, b=20),
        hovermode="x",
        xaxis=dict(range=initial_x_zoom),
        legend=dict(orientation="h", yanchor="top", y=1.1, xanchor="left", x=0.001)
    )
    fig.show()


def empirical_coverage(y, lower_bound, upper_bound):
    """
    Calculate coverage of a given interval
    """
    return np.mean(np.logical_and(y >= lower_bound, y <= upper_bound))
In [12]:
# Plot intervals (with zoom ['2012-12-01', '2012-12-20'])
# ==============================================================================
plot_predicted_intervals(
    predictions     = predictions,
    y_true          = data_test,
    target_variable = "users",
    initial_x_zoom  = ['2012-12-01', '2012-12-20'],
    title           = "Real value vs predicted in test data",
    xaxis_title     = "Date time",
    yaxis_title     = "users",
)

# Predicted interval coverage (on test data)
# ==============================================================================
coverage = empirical_coverage(
                y           = data.loc[end_validation:, 'users'],
                lower_bound = predictions["lower_bound"], 
                upper_bound = predictions["upper_bound"]
            )
print(f"Predicted interval coverage: {round(100 * coverage, 2)} %")

# Area of the interval
# ==============================================================================
area = (predictions["upper_bound"] - predictions["lower_bound"]).sum()
print(f"Area of the interval: {round(area, 2)}")
Predicted interval coverage: 58.15 %
Area of the interval: 82767.54

The prediction intervals exhibit overconfidence as they tend to be excessively narrow, resulting in a true coverage that falls below the nominal coverage. This phenomenon arises from the tendency of in-sample residuals to often overestimate the predictive capacity of the model.

Out-of-sample residuals (non-conditioned on predicted values)

The set_out_sample_residuals() method is used to specify out-of-sample residuals computed on a validation set through backtesting. Once the new residuals have been added to the forecaster, set use_in_sample_residuals to False to use them.

In [13]:
# Backtesting on validation data to obtain out-sample residuals
# ==============================================================================
cv = TimeSeriesFold(
        steps              = 24,
        initial_train_size = len(data.loc[:end_train]),
        refit              = False,
    )
_, predictions_val = backtesting_forecaster(
                         forecaster    = forecaster,
                         y             = data.loc[:end_validation, 'users'],
                         exog          = data.loc[:end_validation, exog_features],
                         cv            = cv,
                         metric        = 'mean_absolute_error',
                         n_jobs        = 'auto',
                         verbose       = False,
                         show_progress = True
                     )
In [14]:
# Out-sample residuals distribution
# ==============================================================================
residuals = data.loc[predictions_val.index, 'users'] - predictions_val['pred']
print(pd.Series(np.where(residuals < 0, 'negative', 'positive')).value_counts())
plt.rcParams.update({'font.size': 8})
_ = plot_residuals(residuals=residuals, figsize=(7, 4))
positive    1029
negative     819
Name: count, dtype: int64
In [15]:
# Store out-sample residuals in the forecaster
# ==============================================================================
forecaster.set_out_sample_residuals(
    y_true = data.loc[predictions_val.index, 'users'],
    y_pred = predictions_val['pred']
)    
In [16]:
# Backtesting with prediction intervals in test data using out-sample residuals
# ==============================================================================
cv = TimeSeriesFold(
        steps              = 24,
        initial_train_size = len(data.loc[:end_validation]),
        refit              = False,
    )
metric, predictions = backtesting_forecaster(
                          forecaster              = forecaster,
                          y                       = data['users'],
                          exog                    = data[exog_features],
                          cv                      = cv,
                          metric                  = 'mean_absolute_error',
                          interval                = [10, 90],
                          n_boot                  = 250,
                          use_in_sample_residuals = False, # Use out-sample residuals
                          use_binned_residuals    = False,
                          n_jobs                  = 'auto',
                          verbose                 = False,
                          show_progress           = True
                      )
predictions.head(5)
Out[16]:
pred lower_bound upper_bound
2012-11-16 00:00:00 75.957987 3.867451 179.864239
2012-11-16 01:00:00 22.895924 -17.751696 175.045007
2012-11-16 02:00:00 11.105227 -40.826370 168.447203
2012-11-16 03:00:00 7.465197 -28.237689 193.401750
2012-11-16 04:00:00 6.216360 -40.160433 184.578289
In [17]:
# Plot intervals (with zoom ['2012-12-01', '2012-12-20'])
# ==============================================================================
plot_predicted_intervals(
    predictions     = predictions,
    y_true          = data_test,
    target_variable = "users",
    initial_x_zoom  = ['2012-12-01', '2012-12-20'],
    title           = "Real value vs predicted in test data",
    xaxis_title     = "Date time",
    yaxis_title     = "users",
)

# Predicted interval coverage (on test data)
# ==============================================================================
coverage = empirical_coverage(
                y           = data.loc[end_validation:, 'users'],
                lower_bound = predictions["lower_bound"], 
                upper_bound = predictions["upper_bound"]
            )
print(f"Predicted interval coverage: {round(100*coverage, 2)} %")

# Area of the interval
# ==============================================================================
area = (predictions["upper_bound"] - predictions["lower_bound"]).sum()
print(f"Area of the interval: {round(area, 2)}")
Predicted interval coverage: 83.43 %
Area of the interval: 255879.67

The prediction intervals derived from the out-of-sample residuals are considerably wider than those based on the in-sample residuals, resulting in an empirical coverage closer to the nominal coverage. Looking at the plot, it's clear that the intervals are particularly wide at low predicted values, suggesting that the model struggles to accurately capture the uncertainty in its predictions at these lower values.

Out-of-sample residuals (conditioned on predicted values)

The bootstrapping process assumes that the residuals are independently distributed so that they can be used independently of the predicted value. In reality, this is rarely true; in most cases, the magnitude of the residuals is correlated with the magnitude of the predicted value. In this case, for example, one would hardly expect the error to be the same when the predicted number of users is close to zero as when it is in the hundreds.

To account for the dependence between the residuals and the predicted values, skforecast allows the residuals to be partitioned into K bins, where each bin is associated with a range of predicted values. Using this strategy, the bootstrapping process samples the residuals from different bins depending on the predicted value, which can improve the coverage of the interval while adjusting its width if necessary, allowing the model to better distribute the uncertainty of its predictions.

To enable the forecaster to bin the out-of-sample residuals, the predicted values are passed to the set_out_sample_residuals() method in addition to the residuals. Internally, skforecast uses a QuantileBinner class to bin data into quantile-based bins using numpy.percentile. This class is similar to KBinsDiscretizer but faster for binning data into quantile-based bins. Bin intervals are defined following the convention: bins[i-1] <= x < bins[i]. The binning process can be adjusted using the argument binner_kwargs of the Forecaster object.
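As a rough illustration of the binning idea (a sketch with synthetic data, not the actual QuantileBinner internals), residuals can be assigned to quantile-based bins of the predicted value with numpy.percentile and numpy.digitize; during bootstrapping, each new prediction then draws its errors only from the bin it falls into:

# Sketch of binning residuals by predicted value (synthetic, illustrative data)
# ==============================================================================
import numpy as np

rng = np.random.default_rng(123)
y_pred    = rng.uniform(0, 500, size=1000)    # predicted values
residuals = rng.normal(0, 0.1 * y_pred)       # errors whose spread grows with the prediction

n_bins = 5
# Bin edges at the quantiles of the predicted values (bins[i-1] <= x < bins[i])
bin_edges = np.percentile(y_pred, np.linspace(0, 100, n_bins + 1))
bin_id = np.digitize(y_pred, bin_edges[1:-1])          # bin index of each prediction
residuals_by_bin = {b: residuals[bin_id == b] for b in range(n_bins)}

# When bootstrapping, sample errors from the bin of the new predicted value
new_pred = 420.0
b = int(np.digitize(new_pred, bin_edges[1:-1]))
sampled_errors = rng.choice(residuals_by_bin[b], size=250, replace=True)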

In [18]:
# Create and train forecaster
# ==============================================================================
forecaster = ForecasterRecursive(
                 regressor     = LGBMRegressor(random_state=15926, verbose=-1, **best_params),
                 lags          = best_lags,
                 binner_kwargs = {'n_bins': 15}   
             )

forecaster.fit(
    y     = data.loc[:end_validation, 'users'],
    exog  = data.loc[:end_validation, exog_features]
)

During the training process, the forecaster uses its in-sample predictions to define the bins in which the residuals are stored, according to the predicted value they are associated with. Although not used in this example, the in-sample residuals are also divided into bins and stored in the in_sample_residuals_by_bin_ attribute.

In [19]:
# Intervals of the residual bins
# ==============================================================================
pprint(forecaster.binner_intervals_)
{0.0: (-1.741708189819649, 10.317040266602518),
 1.0: (10.317040266602518, 23.397712684935296),
 2.0: (23.397712684935296, 42.81143589390928),
 3.0: (42.81143589390928, 93.70601153260277),
 4.0: (93.70601153260277, 146.00472540188062),
 5.0: (146.00472540188062, 185.98114181351224),
 6.0: (185.98114181351224, 227.94516357708295),
 7.0: (227.94516357708295, 263.17131420814536),
 8.0: (263.17131420814536, 298.7107366143719),
 9.0: (298.7107366143719, 339.1431337029709),
 10.0: (339.1431337029709, 399.04627830804475),
 11.0: (399.04627830804475, 472.88850088361374),
 12.0: (472.88850088361374, 553.7082345948482),
 13.0: (553.7082345948482, 682.5049823044178),
 14.0: (682.5049823044178, 994.5807380527069)}

Next, the out-of-sample residuals are saved within the forecaster. To manage memory efficiently, a maximum of 10,000//n_bins residuals are stored for each bin.

In [20]:
# Store out-sample residuals in the forecaster
# ==============================================================================
forecaster.set_out_sample_residuals(
    y_true = data.loc[predictions_val.index, 'users'],
    y_pred = predictions_val['pred']
)
In [21]:
# Number of residuals by bin
# ==============================================================================
for k, v in forecaster.out_sample_residuals_by_bin_.items():
    print(f" Bin {k}: n={len(v)}")
 Bin 0: n=148
 Bin 1: n=118
 Bin 2: n=107
 Bin 3: n=117
 Bin 4: n=121
 Bin 5: n=133
 Bin 6: n=116
 Bin 7: n=106
 Bin 8: n=161
 Bin 9: n=138
 Bin 10: n=126
 Bin 11: n=125
 Bin 12: n=134
 Bin 13: n=118
 Bin 14: n=80
In [22]:
# Distribution of the residual by bin
# ==============================================================================
out_sample_residuals_by_bin_df = pd.DataFrame(
    dict([(k, pd.Series(v)) for k, v in forecaster.out_sample_residuals_by_bin_.items()])
)
fig, ax = plt.subplots(figsize=(6, 3))
out_sample_residuals_by_bin_df.boxplot(ax=ax)
ax.set_title("Distribution of residuals by bin")
ax.set_xlabel("Bin")
ax.set_ylabel("Residuals");

The box plots illustrate how both the spread and magnitude of residuals vary with the predicted values. For instance, in bin 0, residuals remain within an absolute value of 100, whereas in bins above 5, they frequently exceed this threshold.

Finally, the prediction intervals for the test data are estimated using the backtesting process, with out-of-sample residuals conditioned on the predicted values.

In [23]:
# Backtesting with prediction intervals in test data using out-sample residuals
# ==============================================================================
cv = TimeSeriesFold(
        steps              = 24,
        initial_train_size = len(data.loc[:end_validation]),
        refit              = False,
    )

metric, predictions = backtesting_forecaster(
                          forecaster              = forecaster,
                          y                       = data['users'],
                          exog                    = data[exog_features],
                          cv                      = cv,
                          metric                  = 'mean_absolute_error',
                          interval                = [10, 90],
                          n_boot                  = 250,
                          use_in_sample_residuals = False, # Use out-sample residuals
                          use_binned_residuals    = True,  # Use binned residuals
                          n_jobs                  = 'auto',
                          verbose                 = False,
                          show_progress           = True
                      )
predictions.head(5)
Out[23]:
pred lower_bound upper_bound
2012-11-16 00:00:00 75.957987 42.852151 153.661841
2012-11-16 01:00:00 22.895924 13.017612 90.814613
2012-11-16 02:00:00 11.105227 5.301970 43.867037
2012-11-16 03:00:00 7.465197 4.394094 23.024220
2012-11-16 04:00:00 6.216360 3.496328 19.235150
In [24]:
# Plot intervals (with zoom ['2012-12-01', '2012-12-20'])
# ==============================================================================
plot_predicted_intervals(
    predictions     = predictions,
    y_true          = data_test,
    target_variable = "users",
    initial_x_zoom  = ['2012-12-01', '2012-12-20'],
    title           = "Real value vs predicted in test data",
    xaxis_title     = "Date time",
    yaxis_title     = "users",
)

# Predicted interval coverage (on test data)
# ==============================================================================
coverage = empirical_coverage(
                y           = data.loc[end_validation:, 'users'],
                lower_bound = predictions["lower_bound"], 
                upper_bound = predictions["upper_bound"]
            )
print(f"Predicted interval coverage: {round(100*coverage, 2)} %")

# Area of the interval
# ==============================================================================
area = (predictions["upper_bound"] - predictions["lower_bound"]).sum()
print(f"Area of the interval: {round(area, 2)}")
Predicted interval coverage: 78.89 %
Area of the interval: 209530.0

When using out-of-sample residuals conditioned on the predicted value, the interval has a coverage close to the expected value (80%) while reducing its width. The model is able to better distribute the uncertainty in its predictions.

Predict bootstrapping, quantile and distribution

The previous sections have demonstrated the use of the backtesting process to estimate the prediction interval over a given period of time. The goal is to mimic the behavior of the model in production by running predictions at regular intervals, incrementally updating the input data.

Alternatively, it is possible to run a single prediction that forecasts N steps ahead without going through the entire backtesting process. In such cases, skforecast provides four different methods: predict_bootstrapping, predict_interval, predict_quantiles and predict_dist.

Predict Bootstrapping

The predict_bootstrapping method performs the n_boot bootstrapping iterations that generate the alternative prediction paths. These are the underlying values used to compute the intervals, quantiles, and distributions.

In [25]:
# Fit forecaster
# ==============================================================================
forecaster.fit(
    y     = data.loc[:end_validation, 'users'],
    exog  = data.loc[:end_validation, exog_features]
)
In [26]:
# Predict 25 different forecasting sequences of 7 steps each using bootstrapping
# ==============================================================================
boot_predictions = forecaster.predict_bootstrapping(
                       exog   = data_test[exog_features],
                       steps  = 7,
                       n_boot = 25
                   )
boot_predictions
Out[26]:
pred_boot_0 pred_boot_1 pred_boot_2 pred_boot_3 pred_boot_4 pred_boot_5 pred_boot_6 pred_boot_7 pred_boot_8 pred_boot_9 ... pred_boot_15 pred_boot_16 pred_boot_17 pred_boot_18 pred_boot_19 pred_boot_20 pred_boot_21 pred_boot_22 pred_boot_23 pred_boot_24
2012-11-16 00:00:00 75.072526 67.917014 90.497835 72.887786 69.451738 65.636227 67.752751 74.731177 87.611129 82.144507 ... 90.132866 136.833471 70.875736 66.419248 66.151284 74.751401 48.791706 68.656787 83.055140 68.128650
2012-11-16 01:00:00 33.217798 39.088561 53.430776 8.530052 31.098442 19.103678 25.434839 39.863960 53.959646 -0.065547 ... 77.283496 103.139985 16.806962 24.998564 16.564876 8.105737 25.639537 47.999683 21.557125 21.669114
2012-11-16 02:00:00 30.117753 8.819270 21.098243 6.527203 -2.097282 11.071403 8.829349 40.070996 9.417611 -24.525254 ... 40.248554 47.861372 41.019650 -5.893293 3.011746 -13.445515 45.709909 -11.076396 24.972022 20.706668
2012-11-16 03:00:00 -2.370557 2.628568 8.867311 -9.985030 -1.431195 24.763869 2.948100 -0.293142 8.415971 100.438948 ... 1.438511 4.828710 21.165506 7.745074 -24.641198 4.783921 60.887626 17.057142 -9.878423 -22.821821
2012-11-16 04:00:00 33.421322 -5.012262 18.803097 25.102996 32.054605 14.578793 52.905294 -41.144396 16.813604 107.710606 ... -28.544466 1.939137 -11.529134 -11.649420 43.438586 -0.654120 23.865921 7.612053 5.411233 -1.574533
2012-11-16 05:00:00 81.259318 58.335735 23.256711 51.950788 51.408044 53.162599 62.032153 52.793819 42.212314 106.453046 ... 44.910476 -2.765791 15.125106 -17.912916 96.705426 30.322670 36.136752 34.202833 38.794849 35.524510
2012-11-16 06:00:00 195.059101 195.716369 125.295581 197.557367 171.269419 170.727804 175.102145 160.966413 157.567075 155.195939 ... 152.996113 123.790988 98.159621 100.287313 192.879471 125.427235 89.445302 129.648327 178.383805 136.410872

7 rows × 25 columns

A ridge plot is a useful way to visualize the uncertainty of a forecasting model. This plot estimates a kernel density for each step by using the bootstrapped predictions.

In [27]:
# Ridge plot of bootstrapping predictions
# ==============================================================================
_ = plot_prediction_distribution(boot_predictions, figsize=(7, 4))

Predict Interval

In most cases, the user is interested in a specific interval rather than the entire bootstrapping simulation matrix. To address this need, skforecast provides the predict_interval method. This method internally uses predict_bootstrapping to obtain the bootstrapping matrix and estimates the upper and lower quantiles for each step, thus providing the user with the desired prediction intervals.
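As an illustration of this relationship, an analogous 80% interval can be derived manually from the bootstrapping matrix obtained above; the values differ slightly from those returned by predict_interval below because a different set of bootstrap iterations is drawn.

# Manual 80% interval from the bootstrapping matrix (illustrative)
# ==============================================================================
interval_from_boot = pd.DataFrame({
    'lower_bound': boot_predictions.quantile(q=0.10, axis=1),
    'upper_bound': boot_predictions.quantile(q=0.90, axis=1)
})
interval_from_boot.head(3)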

In [28]:
# Predict intervals for next 7 steps, quantiles 10th and 90th
# ==============================================================================
predictions = forecaster.predict_interval(
                  exog     = data_test[exog_features],
                  steps    = 7,
                  interval = [10, 90],
                  n_boot   = 150
              )
predictions
Out[28]:
pred lower_bound upper_bound
2012-11-16 00:00:00 75.957987 55.871660 106.509594
2012-11-16 01:00:00 22.895924 2.862862 56.508504
2012-11-16 02:00:00 11.105227 -13.861631 41.666618
2012-11-16 03:00:00 7.465197 -14.729066 40.250606
2012-11-16 04:00:00 6.216360 -16.066013 43.949188
2012-11-16 05:00:00 31.850292 10.522605 66.098993
2012-11-16 06:00:00 145.218401 96.256977 178.714044

Predict Quantile

This method operates identically to predict_interval, with the added feature of enabling users to define a specific list of quantiles for estimation at each step. It's important to remember that these quantiles should be specified within the range of 0 to 1.

In [29]:
# Predict quantiles for next 7 steps, quantiles 5th, 25th, 75th and 95th
# ==============================================================================
predictions = forecaster.predict_quantiles(
                  exog      = data_test[exog_features],
                  steps     = 7,
                  n_boot    = 150,
                  quantiles = [0.05, 0.25, 0.75, 0.95],
              )
predictions
Out[29]:
q_0.05 q_0.25 q_0.75 q_0.95
2012-11-16 00:00:00 48.288075 67.642808 90.037329 118.281854
2012-11-16 01:00:00 -3.616814 16.409867 37.915283 64.523189
2012-11-16 02:00:00 -20.478988 -2.408681 22.836308 58.893143
2012-11-16 03:00:00 -28.116105 -2.513015 23.698522 51.932897
2012-11-16 04:00:00 -27.432799 -4.066136 22.940893 51.227532
2012-11-16 05:00:00 -4.006249 21.569188 49.501503 76.194038
2012-11-16 06:00:00 83.860754 114.212367 156.516781 189.210857

Predict Distribution

The intervals estimated so far are distribution-free, which means that no assumptions are made about a particular distribution. The predict_dist method in skforecast allows fitting a parametric distribution to the bootstrapped prediction samples obtained with predict_bootstrapping. This is useful when there is reason to believe that the forecast errors follow a particular distribution, such as the normal distribution or Student's t-distribution. The predict_dist method allows the user to specify any continuous distribution from the scipy.stats module.
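Conceptually, this amounts to fitting the chosen scipy.stats distribution to the bootstrapped samples of each step. A rough sketch of the idea, reusing the bootstrapping matrix obtained earlier (not the exact internal implementation), could look as follows:

# Sketch: fit a normal distribution to the bootstrapped predictions of each step
# ==============================================================================
from scipy.stats import norm

dist_params = boot_predictions.apply(
    lambda row: pd.Series(norm.fit(row), index=['loc', 'scale']),
    axis=1
)
dist_params.head(3)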

In [30]:
# Predict the parameters of a normal distribution for the next 7 steps
# ==============================================================================
predictions = forecaster.predict_dist(
                  exog         = data_test[exog_features],
                  steps        = 7,
                  n_boot       = 150,
                  distribution = norm
              )
predictions
Out[30]:
loc scale
2012-11-16 00:00:00 79.436270 21.590643
2012-11-16 01:00:00 27.275039 26.421942
2012-11-16 02:00:00 12.963605 25.253451
2012-11-16 03:00:00 9.804574 25.410589
2012-11-16 04:00:00 10.522987 25.929452
2012-11-16 05:00:00 34.254031 25.946783
2012-11-16 06:00:00 135.548288 36.410460

Prediction intervals using quantile regression models

As opposed to ordinary linear regression, which estimates the conditional mean of the response variable given certain values of the predictor variables, quantile regression aims at estimating the conditional quantiles of the response variable. For a continuous distribution function, the $\alpha$-quantile $Q_{\alpha}(x)$ is defined such that the probability of $Y$ being smaller than $Q_{\alpha}(x)$ is, for a given $X=x$, equal to $\alpha$. For example, 36% of the population values are lower than the quantile $Q=0.36$. The best-known quantile is the 50% quantile, more commonly called the median.

By combining the predictions of two quantile regressors, it is possible to build an interval. Each model estimates one of the limits of the interval. For example, the models obtained for $Q = 0.1$ and $Q = 0.9$ produce an 80% prediction interval (90% - 10% = 80%).

Several machine learning algorithms are capable of modeling quantiles, among them LightGBM (used below with objective='quantile') and scikit-learn's gradient boosting regressors (with loss='quantile').

Just as the squared-error loss function is used to train models that predict the mean value, a specific loss function is needed to train models that predict quantiles. The most common metric used for quantile regression is called the quantile loss or pinball loss:

$$\text{pinball}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \alpha \max(y_i - \hat{y}_i, 0) + (1 - \alpha) \max(\hat{y}_i - y_i, 0)$$

where $\alpha$ is the target quantile, $y$ the real value and $\hat{y}$ the quantile prediction.

It can be seen that the loss differs depending on the evaluated quantile: the higher the quantile, the more the loss function penalizes underestimates and the less it penalizes overestimates. As with MSE and MAE, the goal is to minimize its value (the lower the loss, the better).
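A quick numerical check with scikit-learn's mean_pinball_loss (illustrative values) shows this asymmetry: for a high quantile such as alpha = 0.9, underestimating the true value by 10 units costs nine times more than overestimating it by the same amount.

# Asymmetry of the pinball loss for a high quantile (alpha = 0.9)
# ==============================================================================
from sklearn.metrics import mean_pinball_loss

y_true = [100, 100, 100]
print(mean_pinball_loss(y_true, [90, 90, 90], alpha=0.9))    # underestimate by 10 -> 9.0
print(mean_pinball_loss(y_true, [110, 110, 110], alpha=0.9)) # overestimate by 10 -> 1.0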

Two disadvantages of quantile regression, compared to the bootstrap approach to prediction intervals, are that each quantile needs its own regressor and that quantile regression is not available for all types of regression models. However, once the models are trained, inference is much faster since no iterative process is needed.

This type of prediction interval can be easily estimated using a quantile regressor inside a Forecaster object.

Warning

Forecasters of type ForecasterDirect are slower than ForecasterRecursive because they require training one model per step. Although they can achieve better performance, their scalability is an important limitation when many steps need to be predicted. To limit the time required to run the following examples, the data is aggregated from hourly to daily frequency and only 7 steps ahead (one week) are predicted.

Data

In [31]:
# Data download
# ==============================================================================
data = fetch_dataset(name='bike_sharing_extended_features', verbose=False)
In [32]:
# Aggregate data to daily frequency
# ==============================================================================
data = (
    data
    .resample(rule="D", closed="left", label="right")
    .agg({"users": "sum"})
)
In [33]:
# Split train-validation-test
# ==============================================================================
end_train = '2012-05-31 23:59:00'
end_validation = '2012-09-15 23:59:00'
data_train = data.loc[: end_train, :]
data_val   = data.loc[end_train:end_validation, :]
data_test  = data.loc[end_validation:, :]
print(f"Dates train      : {data_train.index.min()} --- {data_train.index.max()}  (n={len(data_train)})")
print(f"Dates validacion : {data_val.index.min()} --- {data_val.index.max()}  (n={len(data_val)})")
print(f"Dates test       : {data_test.index.min()} --- {data_test.index.max()}  (n={len(data_test)})")
Dates train      : 2011-01-09 00:00:00 --- 2012-05-31 00:00:00  (n=509)
Dates validation : 2012-06-01 00:00:00 --- 2012-09-15 00:00:00  (n=107)
Dates test       : 2012-09-16 00:00:00 --- 2012-12-31 00:00:00  (n=107)

Quantile regression models

An 80% prediction interval is estimated for 7-steps-ahead predictions using quantile regression. A LightGBM gradient boosting model is trained in this example; however, the reader may use any other model by simply replacing the definition of the regressor.

In [34]:
# Create forecasters: one for each limit of the interval
# ==============================================================================
# The forecasters obtained for alpha=0.1 and alpha=0.9 produce an 80% prediction
# interval (90% - 10% = 80%).

# Forecaster for quantile 10%
forecaster_q10 = ForecasterDirect(
                     regressor = LGBMRegressor(
                                     objective    = 'quantile',
                                     metric       = 'quantile',
                                     alpha        = 0.1,
                                     random_state = 15926,
                                     verbose      = -1
                                     
                                 ),
                     lags  = 7,
                     steps = 7
                 )
# Forecaster for quantile 90%
forecaster_q90 = ForecasterDirect(
                     regressor = LGBMRegressor(
                                     objective    = 'quantile',
                                     metric       = 'quantile',
                                     alpha        = 0.9,
                                     random_state = 15926,
                                     verbose      = -1
                                     
                                 ),
                     lags  = 7,
                     steps = 7
                 )

When validating a quantile regression model, a custom metric must be provided depending on the quantile being estimated.

In [35]:
# Loss function for each quantile (pinball_loss)
# ==============================================================================
def mean_pinball_loss_q10(y_true, y_pred):
    """
    Pinball loss for quantile 10.
    """
    return mean_pinball_loss(y_true, y_pred, alpha=0.1)


def mean_pinball_loss_q90(y_true, y_pred):
    """
    Pinball loss for quantile 90.
    """
    return mean_pinball_loss(y_true, y_pred, alpha=0.9)
In [36]:
# Bayesian search of hyper-parameters and lags for each quantile forecaster
# ==============================================================================
def search_space(trial):
    search_space  = {
        'n_estimators'  : trial.suggest_int('n_estimators', 100, 500, step=50),
        'max_depth'     : trial.suggest_int('max_depth', 3, 10, step=1),
        'learning_rate' : trial.suggest_float('learning_rate', 0.01, 0.1)
    }

    return search_space

cv = TimeSeriesFold(
        steps              = 7,
        initial_train_size = len(data[:end_train]),
        refit              = False,
    )

results_grid_q10 = bayesian_search_forecaster(
                       forecaster     = forecaster_q10,
                       y              = data.loc[:end_validation, 'users'],
                       cv             = cv,
                       metric         = mean_pinball_loss_q10,
                       search_space   = search_space,
                       n_trials       = 10,
                       random_state   = 123,
                       return_best    = True,
                       n_jobs         = 'auto',
                       verbose        = False,
                       show_progress  = True
                   )

results_grid_q90 = bayesian_search_forecaster(
                       forecaster    = forecaster_q90,
                       y             = data.loc[:end_validation, 'users'],
                       cv            = cv,
                       metric        = mean_pinball_loss_q90,
                       search_space  = search_space,
                       n_trials      = 10,
                       random_state  = 123,
                       return_best   = True,
                       n_jobs        = 'auto',
                       verbose       = False,
                       show_progress = True
                   )
`Forecaster` refitted using the best-found lags and parameters, and the whole data set: 
  Lags: [1 2 3 4 5 6 7] 
  Parameters: {'n_estimators': 250, 'max_depth': 3, 'learning_rate': 0.04582398297973883}
  Backtesting metric: 222.8916557556898
`Forecaster` refitted using the best-found lags and parameters, and the whole data set: 
  Lags: [1 2 3 4 5 6 7] 
  Parameters: {'n_estimators': 400, 'max_depth': 4, 'learning_rate': 0.02579065805327433}
  Backtesting metric: 164.5817884709446

Once the best hyperparameters have been found for each forecaster, a backtesting process is applied to the test data.

In [37]:
# Backtesting on test data
# ==============================================================================
cv = TimeSeriesFold(
        steps              = 7,
        initial_train_size = len(data.loc[:end_validation]),
        refit              = False,
    )
metric_q10, predictions_q10 = backtesting_forecaster(
                                  forecaster          = forecaster_q10,
                                  y                   = data['users'],
                                  cv                  = cv,
                                  metric              = mean_pinball_loss_q10,
                                  n_jobs              = 'auto',
                                  verbose             = False,
                                  show_progress       = True
                              )

metric_q90, predictions_q90 = backtesting_forecaster(
                                  forecaster          = forecaster_q90,
                                  y                   = data['users'],
                                  cv                  = cv,
                                  metric              = mean_pinball_loss_q90,
                                  n_jobs              = 'auto',
                                  verbose             = False,
                                  show_progress       = True
                              )
In [38]:
# Plot
# ==============================================================================
fig = go.Figure([
    go.Scatter(name='Real value', x=data_test.index, y=data_test['users'], mode='lines'),
    go.Scatter(
        name='Upper Bound', x=predictions_q90.index, y=predictions_q90['pred'],
        mode='lines', marker=dict(color="#444"), line=dict(width=0), showlegend=False
    ),
    go.Scatter(
        name='Lower Bound', x=predictions_q10.index, y=predictions_q10['pred'],
        marker=dict(color="#444"), line=dict(width=0), mode='lines',
        fillcolor='rgba(68, 68, 68, 0.3)', fill='tonexty', showlegend=False
    )
])
fig.update_layout(
    title="Real value vs predicted in test data",
    xaxis_title="Date time",
    yaxis_title="users",
    width=800,
    height=400,
    margin=dict(l=20, r=20, t=35, b=20),
    hovermode="x",
    legend=dict(orientation="h", yanchor="top", y=1.1, xanchor="left", x=0.001)
)
fig.show()
In [39]:
# Predicted interval coverage (on test data)
# ==============================================================================
coverage = empirical_coverage(
                y           = data.loc[end_validation:, 'users'],
                lower_bound = predictions_q10["pred"], 
                upper_bound = predictions_q90["pred"]
            )
print(f"Predicted interval coverage: {round(100 * coverage, 2)} %")

# Area of the interval
# ==============================================================================
area = (predictions_q90["pred"] - predictions_q10["pred"]).sum()
print(f"Area of the interval: {round(area, 2)}")
Predicted interval coverage: 49.53 %
Area of the interval: 207151.73

In this use case, the quantile forecasting strategy does not achieve empirical coverage close to the expected coverage (80 percent).

Session information

In [40]:
import session_info
session_info.show(html=False)
-----
lightgbm            4.4.0
matplotlib          3.9.2
numpy               2.0.2
optuna              3.6.1
pandas              2.2.3
plotly              5.24.1
scipy               1.14.1
session_info        1.0.0
skforecast          0.14.0
sklearn             1.5.1
statsmodels         0.14.3
-----
IPython             8.27.0
jupyter_client      8.6.3
jupyter_core        5.7.2
notebook            6.4.12
-----
Python 3.12.5 | packaged by Anaconda, Inc. | (main, Sep 12 2024, 18:27:27) [GCC 11.2.0]
Linux-5.15.0-1072-aws-x86_64-with-glibc2.31
-----
Session information updated at 2024-11-14 22:31

Citation

How to cite this document

If you use this document or any part of it, please acknowledge the source, thank you!

Probabilistic forecasting with machine learning by Joaquín Amat Rodrigo and Javier Escobar Ortiz, available under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0 DEED) at https://cienciadedatos.net/documentos/py42-probabilistic-forecasting.html

How to cite skforecast

If you use skforecast for a publication, we would appreciate it if you cite the published software.

Zenodo:

Amat Rodrigo, Joaquin, & Escobar Ortiz, Javier. (2024). skforecast (v0.14.0). Zenodo. https://doi.org/10.5281/zenodo.8382788

APA:

Amat Rodrigo, J., & Escobar Ortiz, J. (2024). skforecast (Version 0.14.0) [Computer software]. https://doi.org/10.5281/zenodo.8382788

BibTeX:

@software{skforecast, author = {Amat Rodrigo, Joaquin and Escobar Ortiz, Javier}, title = {skforecast}, version = {0.14.0}, month = {11}, year = {2024}, license = {BSD-3-Clause}, url = {https://skforecast.org/}, doi = {10.5281/zenodo.8382788} }


Did you like the article? Your support is important

Website maintenance has a high cost; your contribution will help me to continue generating free educational content. Many thanks! 😊


This work by Joaquín Amat Rodrigo and Javier Escobar Ortiz is licensed under an Attribution-NonCommercial-ShareAlike 4.0 International license.

Allowed:

  • Share: copy and redistribute the material in any medium or format.

  • Adapt: remix, transform, and build upon the material.

Under the following terms:

  • Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

  • NonCommercial: You may not use the material for commercial purposes.

  • ShareAlike: If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.