If you like Skforecast, help us by giving it a star on GitHub! ⭐️
In machine learning, stacking is an ensemble technique that combines multiple models to reduce their biases and improve predictive performance. More specifically, the predictions of each model (base models) are stacked and used as input to a final model (meta model) to compute the prediction.
Stacking is effective because it leverages the strengths of different algorithms and attempts to mitigate their individual weaknesses. By combining several models, it can capture complex patterns in the data and improve prediction accuracy.
However, stacking can be computationally expensive and requires careful tuning to avoid overfitting. To this end, it is highly recommended to train the final estimator through cross-validation. In addition, obtaining diverse and well-performing base models is critical to the success of the stacking technique.
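As a rough sketch of the internal mechanics, the out-of-fold predictions of the base models become the input features of the meta model. The following toy example (an illustration only, using a synthetic dataset from make_regression and a GradientBoostingRegressor as second base learner; the actual forecasting pipeline is built later with scikit-learn's StackingRegressor) reproduces this two-level scheme by hand:
# Conceptual sketch of stacking (toy data, illustration only)
# ==============================================================================
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

X, y = make_regression(n_samples=200, n_features=5, random_state=123)

# Out-of-fold predictions of each base model: cross-validation prevents the
# meta model from being trained on leaked in-sample predictions
pred_ridge = cross_val_predict(Ridge(), X, y, cv=5)
pred_gbm   = cross_val_predict(GradientBoostingRegressor(random_state=123), X, y, cv=5)

# The stacked predictions are the input features of the meta model
X_meta     = np.column_stack([pred_ridge, pred_gbm])
meta_model = Ridge().fit(X_meta, y)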
The following example shows how to use scikit-learn and skforecast to create a forecasting model that combines several individual regressors to achieve better results.
Libraries used in this document.
# Data processing
# ==============================================================================
import numpy as np
import pandas as pd
# Plots
# ==============================================================================
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.io as pio
pio.templates.default = "seaborn"
plt.style.use('seaborn-v0_8-darkgrid')
# Modelling and Forecasting
# ==============================================================================
from lightgbm import LGBMRegressor
from sklearn.linear_model import Ridge
from sklearn.ensemble import StackingRegressor
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from skforecast.ForecasterAutoreg import ForecasterAutoreg
from skforecast.model_selection import grid_search_forecaster
from skforecast.model_selection import backtesting_forecaster
from skforecast.datasets import fetch_dataset
# Warnings configuration
# ==============================================================================
import warnings
warnings.filterwarnings('ignore')
The data in this document represent monthly fuel consumption in Spain from 1969-01-01 to 2022-08-01. The goal is to create a model capable of forecasting consumption over the next 12 months.
# Downloading data
# ==============================================================================
data = fetch_dataset(name = 'fuel_consumption')
data = data.loc[:"2019-01-01", ['Gasolinas']]
data = data.rename(columns = {'Gasolinas':'consumption'})
data.index.name = 'date'
data['consumption'] = data['consumption']/100000
data.head(3)
In addition to the past values of the series (lags), an additional variable indicating the month of the year is added. This variable is included in the model to capture the seasonality of the series.
# Calendar features
# ==============================================================================
data['month_of_year'] = data.index.month
data.head(3)
To facilitate the training of the models, the search for optimal hyperparameters, and the evaluation of their predictive accuracy, the data are divided into three separate sets: training, validation, and test.
# Split train-validation-test
# ==============================================================================
end_train = '2007-12-01 23:59:00'
end_validation = '2012-12-01 23:59:00'
data_train = data.loc[: end_train, :]
data_val = data.loc[end_train:end_validation, :]
data_test = data.loc[end_validation:, :]
print(f"Dates train : {data_train.index.min()} --- {data_train.index.max()} (n={len(data_train)})")
print(f"Dates validacion : {data_val.index.min()} --- {data_val.index.max()} (n={len(data_val)})")
print(f"Dates test : {data_test.index.min()} --- {data_test.index.max()} (n={len(data_test)})")
# Interactive plot of time series
# ==============================================================================
data.loc[:end_train, 'partition'] = 'train'
data.loc[end_train:end_validation, 'partition'] = 'validation'
data.loc[end_validation:, 'partition'] = 'test'
fig = px.line(
    data_frame = data.reset_index(),
    x          = 'date',
    y          = 'consumption',
    color      = 'partition',
    title      = 'Fuel consumption',
    width      = 700,
    height     = 350,
)
fig.update_layout(
    width  = 700,
    height = 350,
    margin = dict(l=20, r=20, t=35, b=20),
    legend = dict(
        orientation = "h",
        yanchor     = "top",
        y           = 1,
        xanchor     = "left",
        x           = 0.001
    )
)
fig.show()
data = data.drop(columns='partition')
First, two individual models, a gradient boosting model (LGBMRegressor) and a ridge linear regression model, are trained separately and their performance is evaluated on the test set.
# Create forecaster
# ==============================================================================
params_lgbm = {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 500}
forecaster = ForecasterAutoreg(
    regressor = LGBMRegressor(random_state=123, **params_lgbm),
    lags      = 12
)
# Backtesting on test data
# ==============================================================================
metric, predictions = backtesting_forecaster(
    forecaster         = forecaster,
    y                  = data['consumption'],
    exog               = data['month_of_year'],
    initial_train_size = len(data.loc[:end_validation]),
    fixed_train_size   = False,
    steps              = 12,
    refit              = False,
    metric             = 'mean_squared_error',
    n_jobs             = 'auto',
    verbose            = False
)
print(f"Backtest error: {metric:.2f}")
# Create forecaster
# ==============================================================================
params_ridge = {'alpha': 0.001}
forecaster = ForecasterAutoreg(
    regressor     = Ridge(random_state=123, **params_ridge),
    lags          = 12,
    transformer_y = StandardScaler()
)
# Backtesting on test data
# ==============================================================================
metric, predictions = backtesting_forecaster(
    forecaster         = forecaster,
    y                  = data['consumption'],
    exog               = data['month_of_year'],
    initial_train_size = len(data.loc[:end_validation]),
    fixed_train_size   = False,
    steps              = 12,
    refit              = False,
    metric             = 'mean_squared_error',
    n_jobs             = 'auto',
    verbose            = False
)
print(f"Backtest error: {metric:.2f}")
With scikit-learn it is very easy to combine multiple regressors thanks to its StackingRegressor class.
The estimators parameter is the list of base learners that are stacked together in parallel on the input data. It should be given as a list of (name, estimator) tuples. The final_estimator (meta model) uses the predictions of these base estimators as input.
# Create stacking regressor
# ==============================================================================
estimators = [
    ('ridge', Ridge(random_state=123, **params_ridge)),
    ('lgbm', LGBMRegressor(random_state=123, **params_lgbm)),
]
stacking_regressor = StackingRegressor(
    estimators      = estimators,
    final_estimator = Ridge(),
    cv              = KFold(n_splits=10, shuffle=False)
)
stacking_regressor
# Create forecaster
# ==============================================================================
forecaster = ForecasterAutoreg(
    regressor = stacking_regressor,
    lags      = 12
)
# Backtesting on test data
# ==============================================================================
metric, predictions = backtesting_forecaster(
    forecaster         = forecaster,
    y                  = data['consumption'],
    exog               = data['month_of_year'],
    initial_train_size = len(data.loc[:end_validation]),
    fixed_train_size   = False,
    steps              = 12,
    refit              = False,
    metric             = 'mean_squared_error',
    n_jobs             = 'auto',
    verbose            = False
)
print(f"Backtest error: {metric:.2f}")
The results obtained by stacking the two models - the linear model and the gradient boosting model - are better than the results obtained by each model separately.
When using StackingRegressor, the hyperparameters of the individual regressors must be prefixed with the name of the regressor followed by two underscores. For example, the hyperparameter alpha of the Ridge regressor must be specified as ridge__alpha. The hyperparameters of the final estimator must be specified with the final_estimator__ prefix.
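The full list of tunable parameter names, with their prefixes, can be inspected with the standard scikit-learn get_params() method:
# Available hyperparameter names of the stacking regressor
# ==============================================================================
list(stacking_regressor.get_params().keys())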
# Grid search of hyperparameters and lags
# ==============================================================================
param_grid = {
    'ridge__alpha': [0.001, 0.01, 0.1, 1, 10],
    'lgbm__n_estimators': [100, 500],
    'lgbm__max_depth': [3, 5, 10],
    'lgbm__learning_rate': [0.01, 0.1]
}

# Lags used as predictors
lags_grid = [24]

results_grid = grid_search_forecaster(
    forecaster         = forecaster,
    y                  = data.loc[:end_validation, 'consumption'],
    exog               = data.loc[:end_validation, 'month_of_year'],
    param_grid         = param_grid,
    lags_grid          = lags_grid,
    steps              = 12,
    refit              = False,
    metric             = 'mean_squared_error',
    initial_train_size = len(data.loc[:end_train]),
    fixed_train_size   = False,
    return_best        = True,
    n_jobs             = 'auto',
    verbose            = False
)
results_grid.head()
Once the best combination of lags and hyperparameters has been determined (and, since return_best = True, the forecaster has already been retrained with it), the test error is computed through backtesting.
# Backtesting the best model on test data
# ==============================================================================
metric, predictions = backtesting_forecaster(
    forecaster         = forecaster,
    y                  = data['consumption'],
    exog               = data['month_of_year'],
    initial_train_size = len(data.loc[:end_validation]),
    fixed_train_size   = False,
    steps              = 12,
    refit              = False,
    metric             = 'mean_squared_error',
    n_jobs             = 'auto',
    verbose            = False
)
print(f"Backtest error: {metric:.2f}")
When a StackingRegressor is used as the regressor of a forecaster, the forecaster's get_feature_importances method will not work. This is because objects of type StackingRegressor have neither a feature_importances_ nor a coef_ attribute. Instead, each of the regressors that form the stacking ensemble has to be inspected individually.
# Feature importances for each regressor in the stacking
# ==============================================================================
if forecaster.regressor.__class__.__name__ == 'StackingRegressor':
    importancia_pred = []
    for regressor in forecaster.regressor.estimators_:
        try:
            # Linear models expose their coefficients in `coef_`
            importancia = pd.DataFrame(
                data = {
                    'feature': forecaster.regressor.feature_names_in_,
                    f'importance_{type(regressor).__name__}': regressor.coef_,
                    f'importance_abs_{type(regressor).__name__}': np.abs(regressor.coef_)
                }
            ).set_index('feature')
        except AttributeError:
            # Tree-based models expose `feature_importances_` instead
            importancia = pd.DataFrame(
                data = {
                    'feature': forecaster.regressor.feature_names_in_,
                    f'importance_{type(regressor).__name__}': regressor.feature_importances_,
                    f'importance_abs_{type(regressor).__name__}': np.abs(regressor.feature_importances_)
                }
            ).set_index('feature')
        importancia_pred.append(importancia)
    importancia_pred = pd.concat(importancia_pred, axis=1)
else:
    importancia_pred = forecaster.get_feature_importances()
    importancia_pred['importance_abs'] = importancia_pred['importance'].abs()
    importancia_pred = importancia_pred.sort_values(by='importance_abs', ascending=False)

importancia_pred.head(5)
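The contribution of each base model to the final prediction can also be examined through the coefficients of the fitted meta model (a Ridge regression here). A minimal sketch, assuming the default passthrough=False so that the meta model sees exactly one input column per base model:
# Weights assigned by the meta model to the predictions of each base model
# ==============================================================================
for (name, _), coef in zip(forecaster.regressor.estimators,
                           forecaster.regressor.final_estimator_.coef_):
    print(f"{name}: {coef:.3f}")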
# Session information
# ==============================================================================
import session_info
session_info.show(html=False)
How to cite this document
If you use this document or any part of it, please acknowledge the source. Thank you!
Stacking ensemble of machine learning models to improve forecasting by Joaquín Amat Rodrigo and Javier Escobar Ortiz, available under an Attribution 4.0 International (CC BY 4.0) license at https://www.cienciadedatos.net/documentos/py52-stacking-ensemble-models-forecasting.html
How to cite skforecast
If you use skforecast for a scientific publication, we would appreciate it if you cite the published software.
Zenodo:
Amat Rodrigo, Joaquin, & Escobar Ortiz, Javier. (2023). skforecast (v0.11.0). Zenodo. https://doi.org/10.5281/zenodo.8382787
APA:
Amat Rodrigo, J., & Escobar Ortiz, J. (2023). skforecast (Version 0.11.0) [Computer software]. https://doi.org/10.5281/zenodo.8382787
BibTeX:
@software{skforecast,
  author  = {Amat Rodrigo, Joaquin and Escobar Ortiz, Javier},
  title   = {skforecast},
  version = {0.11.0},
  month   = {9},
  year    = {2023},
  license = {BSD-3-Clause},
  url     = {https://skforecast.org/},
  doi     = {10.5281/zenodo.8382787}
}
Did you like the article? Your support is important
Maintaining a website has a high cost; your contribution will help us continue generating free educational content. Many thanks! 😊
This work by Joaquín Amat Rodrigo and Javier Escobar Ortiz is licensed under a Creative Commons Attribution 4.0 International License.