Global forecasting models: multiple time series forecasting with skforecast

If you like skforecast, help us by giving it a star on GitHub! ⭐️

Global Forecasting Models: Modeling Multiple Time Series with Machine Learning

Joaquín Amat Rodrigo, Javier Escobar Ortiz
October 2022 (last update November 2024)

Introduction

In Single-Series Modeling (Local Forecasting Model), each time series is analyzed individually and modeled as a combination of its own lags and, optionally, exogenous variables. This approach provides detailed insights specific to each series, but it does not scale well to a large number of time series. In contrast, Multi-Series Modeling (Global Forecasting Model) builds a single predictive model that learns from multiple time series simultaneously. It attempts to capture the common dynamics that influence the series as a whole, thereby reducing the noise of individual series. This approach is computationally efficient, easy to maintain, and can generalize more robustly across time series, albeit potentially at the cost of some individual insights. Two strategies of global forecasting models can be distinguished: independent multi-series and dependent multi-series (multivariate).

Advantages of multi-series:

  • It is easier to maintain and monitor a single model than several.

  • Since all time series are combined during training, the model has a higher learning capacity even if the series are short.

  • By combining multiple time series, the model can learn more generalizable patterns.

Limitations of multi-series:

  • If the series do not follow the same internal dynamics, the model may learn a pattern that does not represent any of them.

  • The series may mask each other, so the model may not predict all of them with the same performance.

  • It is more computationally demanding (time and resources) to train and backtest a big model than several small ones.

Dependent multi-series forecasting (Multivariate forecasting)

In dependent multi-series forecasting (multivariate time series), all series are modeled together in a single model, considering that each time series depends not only on its past values but also on the past values of the other series. The forecaster is expected not only to learn the information of each series separately but also to relate them. An example is the measurements made by all the sensors (flow, temperature, pressure...) installed on an industrial machine such as a compressor.
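
As an illustration of this strategy (not part of the case studies developed in this document), the sketch below shows how a dependent multi-series model could be set up in skforecast with the ForecasterDirectMultiVariate class, which predicts a single target series from the lags of all series. The sensor names and data are hypothetical, so treat the snippet as a minimal sketch rather than a worked example.

# Dependent multi-series (multivariate) sketch with hypothetical sensor data
# ==============================================================================
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor
from skforecast.direct import ForecasterDirectMultiVariate

rng = np.random.default_rng(123)
sensors = pd.DataFrame(
    {
        'flow'        : rng.normal(100, 10, 500),
        'temperature' : rng.normal(60, 5, 500),
        'pressure'    : rng.normal(30, 2, 500),
    },
    index = pd.date_range('2022-01-01', periods=500, freq='h')
)

forecaster = ForecasterDirectMultiVariate(
                 regressor = HistGradientBoostingRegressor(random_state=123),
                 level     = 'pressure',  # series to be predicted
                 steps     = 24,          # maximum forecast horizon
                 lags      = 48           # lags of every series used as predictors
             )
forecaster.fit(series=sensors)
predictions = forecaster.predict(steps=24)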

Multiple time series with equal length

In this first example, multiple time series with the same length are used. The goal is to compare the forecasting results of a global model with those of an individual model for each series when forecasting the next 7 days of sales for 50 different items in a store using the 5 years of available history. The data has been obtained from the Store Item Demand Forecasting Challenge. This dataset contains 913,000 sales transactions from 01/01/2013 to 31/12/2017 for 50 products (SKU) in 10 stores.

Libraries

In [56]:
# Data manipulation
# ==============================================================================
import numpy as np
import pandas as pd

# Plots
# ==============================================================================
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm

# Modelling and Forecasting
# ==============================================================================
import sklearn
import skforecast
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import RFECV
from skforecast.recursive import ForecasterRecursiveMultiSeries
from skforecast.recursive import ForecasterRecursive
from skforecast.model_selection import TimeSeriesFold, OneStepAheadFold
from skforecast.model_selection import backtesting_forecaster
from skforecast.model_selection import bayesian_search_forecaster
from skforecast.model_selection import backtesting_forecaster_multiseries
from skforecast.model_selection import bayesian_search_forecaster_multiseries
from skforecast.feature_selection import select_features_multiseries
from skforecast.preprocessing import RollingFeatures
from skforecast.exceptions import OneStepAheadValidationWarning
from skforecast.plot import set_dark_theme
from skforecast.preprocessing import series_long_to_dict
from skforecast.preprocessing import exog_long_to_dict

# Warnings configuration
# ==============================================================================
import warnings

color = '\033[1m\033[38;5;208m'
print(f"{color}Version skforecast: {skforecast.__version__}")
print(f"{color}Version scikit-learn: {sklearn.__version__}")
print(f"{color}Version pandas: {pd.__version__}")
print(f"{color}Version numpy: {np.__version__}")
Version skforecast: 0.14.0
Version scikit-learn: 1.5.1
Version pandas: 2.2.3
Version numpy: 2.0.2

Data

In [57]:
# Data loading
# ======================================================================================
data = pd.read_csv('./train_stores_kaggle.csv')
display(data)
print(f"Shape: {data.shape}")
date store item sales
0 2013-01-01 1 1 13
1 2013-01-02 1 1 11
2 2013-01-03 1 1 14
3 2013-01-04 1 1 13
4 2013-01-05 1 1 10
... ... ... ... ...
912995 2017-12-27 10 50 63
912996 2017-12-28 10 50 59
912997 2017-12-29 10 50 74
912998 2017-12-30 10 50 62
912999 2017-12-31 10 50 82

913000 rows × 4 columns

Shape: (913000, 4)
In [58]:
# Data preprocessing
# ======================================================================================
selected_store = 2
selected_items = data.item.unique()
data = data[(data['store'] == selected_store) & (data['item'].isin(selected_items))].copy()
data['date'] = pd.to_datetime(data['date'], format='%Y-%m-%d')
data = pd.pivot_table(
           data    = data,
           values  = 'sales',
           index   = 'date',
           columns = 'item'
       )
data.columns.name = None
data.columns = [f"item_{col}" for col in data.columns]
data = data.asfreq('1D')
data = data.sort_index()
data.head(4)
Out[58]:
item_1 item_2 item_3 item_4 item_5 item_6 item_7 item_8 item_9 item_10 ... item_41 item_42 item_43 item_44 item_45 item_46 item_47 item_48 item_49 item_50
date
2013-01-01 12.0 41.0 19.0 21.0 4.0 34.0 39.0 49.0 28.0 51.0 ... 11.0 25.0 36.0 12.0 45.0 43.0 12.0 45.0 29.0 43.0
2013-01-02 16.0 33.0 32.0 14.0 6.0 40.0 47.0 42.0 21.0 56.0 ... 19.0 21.0 35.0 25.0 50.0 52.0 13.0 37.0 25.0 57.0
2013-01-03 16.0 46.0 26.0 12.0 12.0 41.0 43.0 46.0 29.0 46.0 ... 23.0 20.0 52.0 18.0 56.0 30.0 5.0 45.0 30.0 45.0
2013-01-04 20.0 50.0 34.0 17.0 16.0 41.0 44.0 55.0 32.0 56.0 ... 15.0 28.0 50.0 24.0 57.0 46.0 19.0 32.0 20.0 45.0

4 rows × 50 columns

The dataset is divided into 3 partitions: one for training, one for validation, and one for testing.

In [59]:
# Split data into train-validation-test
# ======================================================================================
end_train = '2016-05-31 23:59:00'
end_val = '2017-05-31 23:59:00'
data_train = data.loc[:end_train, :].copy()
data_val   = data.loc[end_train:end_val, :].copy()
data_test  = data.loc[end_val:, :].copy()
print(f"Train dates      : {data_train.index.min()} --- {data_train.index.max()}  (n={len(data_train)})")
print(f"Validation dates : {data_val.index.min()} --- {data_val.index.max()}  (n={len(data_val)})")
print(f"Test dates       : {data_test.index.min()} --- {data_test.index.max()}  (n={len(data_test)})")
Train dates      : 2013-01-01 00:00:00 --- 2016-05-31 00:00:00  (n=1247)
Validation dates : 2016-06-01 00:00:00 --- 2017-05-31 00:00:00  (n=365)
Test dates       : 2017-06-01 00:00:00 --- 2017-12-31 00:00:00  (n=214)

Four of the series are plotted to understand their trends and patterns. The reader is strongly encouraged to plot several more to gain an in-depth understanding of the series.

In [60]:
# Plot time series
# ======================================================================================
set_dark_theme()
fig, axs = plt.subplots(4, 1, figsize=(7, 5), sharex=True)
data.iloc[:, :4].plot(
    legend   = True,
    subplots = True, 
    title    = 'Sales of store 2',
    ax       = axs, 
)
for ax in axs:
    ax.axvline(pd.to_datetime(end_train) , color='white', linestyle='--', linewidth=1.5)
    ax.axvline(pd.to_datetime(end_val) , color='white', linestyle='--', linewidth=1.5)
fig.tight_layout()
plt.show()

Individual Forecaster for each product

A separate Gradient Boosting Machine (GBM) model is trained for each item, using sales over the last 14 days, as well as the average, maximum and minimum sales over the last 7 days as predictive features. The performance of the model over the next 7 days is evaluated using backtesting with the Mean Absolute Error (MAE) as the evaluation metric. Finally, the performance of these individual models is compared with that of a global model trained on all series.

In [61]:
# Train and backtest a model for each item: ForecasterRecursive
# ======================================================================================
items = []
mae_values = []
predictions = {}

for i, item in enumerate(tqdm(data.columns)):
    # Define forecaster
    window_features = RollingFeatures(stats=['mean', 'min', 'max'], window_sizes=7)
    forecaster = ForecasterRecursive(
                     regressor       = HistGradientBoostingRegressor(random_state=8523),
                     lags            = 14,
                     window_features = window_features
                 )
    # Backtesting forecaster
    cv = TimeSeriesFold(
            steps              = 7,
            initial_train_size = len(data_train) + len(data_val),
            refit              = False,
         )
    metric, preds = backtesting_forecaster(
                        forecaster    = forecaster,
                        y             = data[item],
                        cv            = cv,
                        metric        = 'mean_absolute_error',
                        verbose       = False,
                        show_progress = False
                    )
    items.append(item)
    mae_values.append(metric.at[0, 'mean_absolute_error'])
    predictions[item] = preds

# Results
uni_series_mae = pd.Series(
                     data  = mae_values,
                     index = items,
                     name  = 'uni_series_mae'
                 )
uni_series_mae.head()
Out[61]:
item_1    6.004406
item_2    9.994352
item_3    8.652751
item_4    5.528955
item_5    5.096925
Name: uni_series_mae, dtype: float64

Global model

A global model is trained on all series simultaneously and evaluated using the same backtesting process as the individual models.

In [33]:
# Train and backtest a model for all items: ForecasterRecursiveMultiSeries
# ======================================================================================
items = list(data.columns)

# Define forecaster
window_features = RollingFeatures(stats=['mean', 'min', 'max'], window_sizes=7)
forecaster_ms = ForecasterRecursiveMultiSeries(
                    regressor          = HistGradientBoostingRegressor(random_state=8523),
                    lags               = 14,
                    encoding           = 'ordinal',
                    transformer_series = StandardScaler(),
                    window_features    = window_features,
                )
# Backtesting forecaster for all items
cv = TimeSeriesFold(
        steps              = 7,
        initial_train_size = len(data_train) + len(data_val),
        refit              = False,
     )
multi_series_mae, predictions_ms = backtesting_forecaster_multiseries(
                                       forecaster            = forecaster_ms,
                                       series                = data,
                                       levels                = items,
                                       cv                    = cv,
                                       metric                = 'mean_absolute_error',
                                       add_aggregated_metric = False,
                                       verbose               = False,
                                       show_progress         = True  
                                   )  
# Results
display(multi_series_mae.head(3))
print('')
display(predictions_ms.head(3))
levels mean_absolute_error
0 item_1 5.521236
1 item_2 9.245126
2 item_3 7.301948

item_1 item_2 item_3 item_4 item_5 item_6 item_7 item_8 item_9 item_10 ... item_41 item_42 item_43 item_44 item_45 item_46 item_47 item_48 item_49 item_50
2017-06-01 35.106861 90.367217 60.613740 34.796786 30.344390 94.850880 92.222349 125.069804 85.972618 121.856981 ... 36.761204 58.301875 79.983903 43.559158 127.045573 98.774219 37.078906 77.610719 49.004847 109.749311
2017-06-02 38.663113 101.583964 63.187115 35.903646 31.293313 100.855224 97.763251 134.804295 92.259393 121.838499 ... 40.470627 59.746983 85.748729 50.961104 137.746852 102.729668 42.124969 83.469452 53.310940 116.044636
2017-06-03 38.264879 103.619330 65.602129 40.237811 33.283988 107.100882 101.806627 144.945584 94.686035 133.521963 ... 41.592844 64.585156 90.099446 51.959348 141.979401 107.639900 41.524414 89.363027 46.842094 117.288006

3 rows × 50 columns

Comparison

The mean absolute error (MAE) for each item is calculated using the individual and global models. The results are compared to determine which model performs better.

In [34]:
# Difference of backtesting metric for each item
# ======================================================================================
multi_series_mae = multi_series_mae.set_index('levels')
multi_series_mae.columns = ['multi_series_mae']
results = pd.concat((uni_series_mae, multi_series_mae), axis = 1)
results['improvement'] = results.eval('uni_series_mae - multi_series_mae')
results['improvement_(%)'] = 100 * results.eval('(uni_series_mae - multi_series_mae) / uni_series_mae')
results = results.round(2)
results.style.bar(subset=['improvement_(%)'], align='mid', color=['#d65f5f', '#5fba7d'])
Out[34]:
  uni_series_mae multi_series_mae improvement improvement_(%)
item_1 6.000000 5.520000 0.480000 8.050000
item_2 9.990000 9.250000 0.750000 7.500000
item_3 8.650000 7.300000 1.350000 15.610000
item_4 5.530000 5.030000 0.500000 8.960000
item_5 5.100000 4.660000 0.440000 8.630000
item_6 10.830000 9.750000 1.080000 9.960000
item_7 10.580000 9.830000 0.750000 7.120000
item_8 11.810000 10.420000 1.400000 11.830000
item_9 9.420000 8.690000 0.730000 7.760000
item_10 11.640000 10.330000 1.300000 11.210000
item_11 11.520000 10.430000 1.090000 9.450000
item_12 11.960000 10.890000 1.080000 8.990000
item_13 12.130000 11.360000 0.760000 6.300000
item_14 10.350000 9.560000 0.790000 7.610000
item_15 12.460000 11.640000 0.820000 6.620000
item_16 6.000000 5.920000 0.090000 1.480000
item_17 7.460000 7.190000 0.270000 3.560000
item_18 12.690000 12.120000 0.570000 4.490000
item_19 7.720000 7.370000 0.360000 4.600000
item_20 8.250000 7.830000 0.420000 5.080000
item_21 8.580000 8.060000 0.520000 6.080000
item_22 11.790000 10.710000 1.080000 9.150000
item_23 7.490000 6.770000 0.720000 9.600000
item_24 10.470000 9.860000 0.610000 5.850000
item_25 12.950000 11.650000 1.310000 10.100000
item_26 9.230000 8.590000 0.640000 6.930000
item_27 5.480000 5.170000 0.310000 5.680000
item_28 12.590000 12.080000 0.510000 4.020000
item_29 10.980000 10.180000 0.810000 7.330000
item_30 8.430000 7.890000 0.530000 6.290000
item_31 10.530000 10.070000 0.460000 4.420000
item_32 9.710000 9.160000 0.540000 5.580000
item_33 9.740000 9.320000 0.420000 4.270000
item_34 6.340000 5.930000 0.410000 6.450000
item_35 11.200000 10.180000 1.020000 9.120000
item_36 12.000000 10.620000 1.380000 11.520000
item_37 6.510000 6.110000 0.400000 6.150000
item_38 11.620000 11.130000 0.500000 4.270000
item_39 8.340000 7.370000 0.970000 11.680000
item_40 7.100000 6.630000 0.470000 6.650000
item_41 5.670000 5.220000 0.450000 7.950000
item_42 7.440000 6.940000 0.500000 6.680000
item_43 8.620000 8.570000 0.050000 0.540000
item_44 6.980000 6.410000 0.570000 8.190000
item_45 12.720000 11.790000 0.930000 7.310000
item_46 10.350000 9.890000 0.460000 4.450000
item_47 5.500000 4.980000 0.520000 9.500000
item_48 9.270000 8.190000 1.090000 11.740000
item_49 6.300000 5.960000 0.340000 5.350000
item_50 11.860000 10.610000 1.250000 10.510000
In [35]:
# Average improvement for all items
# ======================================================================================
results[['improvement', 'improvement_(%)']].agg(['mean', 'min', 'max'])
Out[35]:
improvement improvement_(%)
mean 0.696 7.3634
min 0.050 0.5400
max 1.400 15.6100
In [36]:
# Number of series with positive and negative improvement
# ======================================================================================
pd.Series(np.where(results['improvement_(%)'] < 0, 'negative', 'positive')).value_counts()
Out[36]:
positive    50
Name: count, dtype: int64

The global model achieves an average improvement of 7.4% compared to using an individual model for each series. For all series, the prediction error evaluated by backtesting is lower when the global model is used. This use case demonstrates that a multi-series model can have advantages over multiple individual models when forecasting time series that follow similar dynamics. In addition to the potential improvements in forecasting, it is also important to consider the benefit of having only one model to maintain and the speed of training and prediction.

⚠ Warning

This comparison was made without optimizing the model hyperparameters. See the Hyperparameter tuning and lags selection section to verify that the conclusions hold when the models are tuned with the best combination of hyperparameters and lags.

Time Series with different lengths and different exogenous variables

When faced with a multi-series forecasting problem, it is common for the series to have varying lengths due to differences in the starting times of data recording. To address this scenario, ForecasterRecursiveMultiSeries allows the simultaneous modeling of time series of different lengths, each with its own exogenous variables.

  • When the modeled series have different lengths, they must be stored in a Python dictionary. The keys of the dictionary are the names of the series and the values are the series themselves. All series must be of type pandas.Series, have a datetime index and have the same frequency.

Series values Allowed
[NaN, NaN, NaN, NaN, 4, 5, 6, 7, 8, 9] ✔️
[0, 1, 2, 3, 4, 5, 6, 7, 8, NaN] ✔️
[0, 1, 2, 3, 4, NaN, 6, 7, 8, 9] ✔️
[NaN, NaN, 2, 3, 4, NaN, 6, 7, 8, 9] ✔️



  • When different exogenous variables are used for each series, or if the exogenous variables are the same but have different values for each series, they must also be stored in a dictionary. The keys of the dictionary are the names of the series and the values are the exogenous variables themselves. All exogenous variables must be of type pandas.DataFrame or pandas.Series. A minimal illustration of both dictionaries is shown below.
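
The following sketch shows what such dictionaries look like. The series names, values, and exogenous variables are made up for illustration only; in the next section the same structures are built from real data with series_long_to_dict and exog_long_to_dict.

# Series and exogenous variables stored as dictionaries (illustrative sketch)
# ==============================================================================
import numpy as np
import pandas as pd

index_a = pd.date_range('2016-01-01', periods=365, freq='D')
index_b = pd.date_range('2016-09-03', periods=120, freq='D')  # shorter series, starts later

series_dict = {
    'series_A': pd.Series(np.random.normal(100, 10, len(index_a)), index=index_a, name='series_A'),
    'series_B': pd.Series(np.random.normal(50, 5, len(index_b)), index=index_b, name='series_B'),
}

exog_dict = {
    'series_A': pd.DataFrame({'day_of_week': index_a.dayofweek}, index=index_a),
    'series_B': pd.DataFrame(
                    {'day_of_week': index_b.dayofweek, 'month': index_b.month},
                    index=index_b
                ),  # each series can have its own set of exogenous variables
}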

Data

The data for this example is stored in "long format" in a single DataFrame. The series_id column identifies the series to which each observation belongs. The timestamp column contains the date of the observation, and the value column contains the value of the series at that date. Each time series is of a different length. The exogenous variables are stored in a separate DataFrame, also in "long format". The column series_id identifies the series to which each observation belongs. The column timestamp contains the date of the observation, and the remaining columns contain the values of the exogenous variables at that date.

In [37]:
# Load time series of multiple lengths and exogenous variables
# ==============================================================================
series = pd.read_csv(
    'https://raw.githubusercontent.com/skforecast/skforecast-datasets/main/data/demo_multi_series.csv'
)
exog = pd.read_csv(
    'https://raw.githubusercontent.com/skforecast/skforecast-datasets/main/data/demo_multi_series_exog.csv'
)
series['timestamp'] = pd.to_datetime(series['timestamp'])
exog['timestamp'] = pd.to_datetime(exog['timestamp'])
display(series.head())
print("")
display(exog.head())
series_id timestamp value
0 id_1000 2016-01-01 1012.500694
1 id_1000 2016-01-02 1158.500099
2 id_1000 2016-01-03 983.000099
3 id_1000 2016-01-04 1675.750496
4 id_1000 2016-01-05 1586.250694

series_id timestamp sin_day_of_week cos_day_of_week air_temperature wind_speed
0 id_1000 2016-01-01 -0.433884 -0.900969 6.416639 4.040115
1 id_1000 2016-01-02 -0.974928 -0.222521 6.366474 4.530395
2 id_1000 2016-01-03 -0.781831 0.623490 6.555272 3.273064
3 id_1000 2016-01-04 0.000000 1.000000 6.704778 4.865404
4 id_1000 2016-01-05 0.781831 0.623490 2.392998 5.228913

When series have different lengths, the data must be transformed into a dictionary. The keys of the dictionary are the names of the series and the values are the series themselves. To do this, the series_long_to_dict function is used, which takes the DataFrame in "long format" and returns a dict of series.

Similarly, when the exogenous variables are different (values or variables) for each series, the data must be transformed into a dictionary. The keys of the dictionary are the names of the series and the values are the exogenous variables themselves. The exog_long_to_dict function is used, which takes the DataFrame in "long format" and returns a dict of exogenous variables.

In [38]:
# Transform series and exog to dictionaries
# ==============================================================================
series_dict = series_long_to_dict(
    data      = series,
    series_id = 'series_id',
    index     = 'timestamp',
    values    = 'value',
    freq      = 'D'
)

exog_dict = exog_long_to_dict(
    data      = exog,
    series_id = 'series_id',
    index     = 'timestamp',
    freq      = 'D'
)
/home/ubuntu/anaconda3/envs/skforecast_14_py12/lib/python3.12/site-packages/skforecast/preprocessing/preprocessing.py:299: MissingValuesWarning: Series 'id_1003' is incomplete. NaNs have been introduced after setting the frequency. 
 You can suppress this warning using: warnings.simplefilter('ignore', category=MissingValuesWarning)
  warnings.warn(

Some exogenous variables are omitted for series 'id_1000' and 'id_1003' to illustrate that different exogenous variables can be used for each series.

In [39]:
# Drop some exogenous variables for series 'id_1000' and 'id_1003'
# ==============================================================================
exog_dict['id_1000'] = exog_dict['id_1000'].drop(columns=['air_temperature', 'wind_speed'])
exog_dict['id_1003'] = exog_dict['id_1003'].drop(columns=['cos_day_of_week'])
In [40]:
# Partition data in train and test
# ==============================================================================
end_train = '2016-07-31 23:59:00'
series_dict_train = {k: v.loc[: end_train,] for k, v in series_dict.items()}
exog_dict_train   = {k: v.loc[: end_train,] for k, v in exog_dict.items()}
series_dict_test  = {k: v.loc[end_train:,] for k, v in series_dict.items()}
exog_dict_test    = {k: v.loc[end_train:,] for k, v in exog_dict.items()}
In [41]:
# Plot series
# ==============================================================================
set_dark_theme()
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
fig, axs = plt.subplots(5, 1, figsize=(8, 4), sharex=True)
for i, s in enumerate(series_dict.values()):
    axs[i].plot(s, label=s.name, color=colors[i])
    axs[i].legend(loc='upper right', fontsize=8)
    axs[i].tick_params(axis='both', labelsize=8)
    axs[i].axvline(pd.to_datetime(end_train), color='white', linestyle='--', linewidth=1) # End train
In [42]:
# Description of each series
# ==============================================================================
for k in series_dict.keys():
    print(f"{k}:")
    try:
        print(
            f"\tTrain: len={len(series_dict_train[k])}, {series_dict_train[k].index[0]}"
            f" --- {series_dict_train[k].index[-1]} "
            f" (missing={series_dict_train[k].isnull().sum()})"
        )
    except Exception:
        print(f"\tTrain: len=0")
    try:
        print(
            f"\tTest : len={len(series_dict_test[k])}, {series_dict_test[k].index[0]}"
            f" --- {series_dict_test[k].index[-1]} "
            f" (missing={series_dict_test[k].isnull().sum()})"
        )
    except Exception:
        print(f"\tTest : len=0")
id_1000:
	Train: len=213, 2016-01-01 00:00:00 --- 2016-07-31 00:00:00  (missing=0)
	Test : len=153, 2016-08-01 00:00:00 --- 2016-12-31 00:00:00  (missing=0)
id_1001:
	Train: len=30, 2016-07-02 00:00:00 --- 2016-07-31 00:00:00  (missing=0)
	Test : len=153, 2016-08-01 00:00:00 --- 2016-12-31 00:00:00  (missing=0)
id_1002:
	Train: len=183, 2016-01-01 00:00:00 --- 2016-07-01 00:00:00  (missing=0)
	Test : len=0
id_1003:
	Train: len=213, 2016-01-01 00:00:00 --- 2016-07-31 00:00:00  (missing=73)
	Test : len=153, 2016-08-01 00:00:00 --- 2016-12-31 00:00:00  (missing=73)
id_1004:
	Train: len=91, 2016-05-02 00:00:00 --- 2016-07-31 00:00:00  (missing=0)
	Test : len=31, 2016-08-01 00:00:00 --- 2016-08-31 00:00:00  (missing=0)
In [43]:
# Exogenous variables for each series
# ==============================================================================
for k in series_dict.keys():
    print(f"{k}:")
    try:
        print(f"\t{exog_dict[k].columns.to_list()}")
    except Exception:
        print(f"\tNo exogenous variables")
id_1000:
	['sin_day_of_week', 'cos_day_of_week']
id_1001:
	['sin_day_of_week', 'cos_day_of_week', 'air_temperature', 'wind_speed']
id_1002:
	['sin_day_of_week', 'cos_day_of_week', 'air_temperature', 'wind_speed']
id_1003:
	['sin_day_of_week', 'air_temperature', 'wind_speed']
id_1004:
	['sin_day_of_week', 'cos_day_of_week', 'air_temperature', 'wind_speed']

Global model

In [44]:
# Fit forecaster
# ==============================================================================
regressor = HistGradientBoostingRegressor(random_state=123, max_depth=5)
window_features = RollingFeatures(stats=['mean', 'min', 'max'], window_sizes=7)
forecaster = ForecasterRecursiveMultiSeries(
                regressor          = regressor,
                lags               = 14,
                window_features    = window_features,
                encoding           = "ordinal",
                dropna_from_series = False
            )

forecaster.fit(series=series_dict_train, exog=exog_dict_train, suppress_warnings=True)
forecaster
Out[44]:

ForecasterRecursiveMultiSeries

General Information
  • Regressor: HistGradientBoostingRegressor(max_depth=5, random_state=123)
  • Lags: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14]
  • Window features: ['roll_mean_7', 'roll_min_7', 'roll_max_7']
  • Window size: 14
  • Series encoding: ordinal
  • Exogenous included: True
  • Weight function included: False
  • Series weights: None
  • Differentiation order: None
  • Creation date: 2024-11-06 14:48:15
  • Last fit date: 2024-11-06 14:48:16
  • Skforecast version: 0.14.0
  • Python version: 3.12.5
  • Forecaster id: None
Exogenous Variables
    sin_day_of_week, cos_day_of_week, air_temperature, wind_speed
Data Transformations
  • Transformer for series: None
  • Transformer for exog: None
Training Information
  • Series names (levels): id_1000, id_1001, id_1002, id_1003, id_1004
  • Training range: 'id_1000': ['2016-01-01', '2016-07-31'], 'id_1001': ['2016-07-02', '2016-07-31'], 'id_1002': ['2016-01-01', '2016-07-01'], 'id_1003': ['2016-01-01', '2016-07-31'], 'id_1004': ['2016-05-02', '2016-07-31']
  • Training index type: DatetimeIndex
  • Training index frequency: D
Regressor Parameters
    {'categorical_features': 'warn', 'early_stopping': 'auto', 'interaction_cst': None, 'l2_regularization': 0.0, 'learning_rate': 0.1, 'loss': 'squared_error', 'max_bins': 255, 'max_depth': 5, 'max_features': 1.0, 'max_iter': 100, 'max_leaf_nodes': 31, 'min_samples_leaf': 20, 'monotonic_cst': None, 'n_iter_no_change': 10, 'quantile': None, 'random_state': 123, 'scoring': 'loss', 'tol': 1e-07, 'validation_fraction': 0.1, 'verbose': 0, 'warm_start': False}
Fit Kwargs
    {}

🛈 API Reference    🗎 User Guide

Only series whose last window of data ends at the same datetime index can be predicted together. If levels = None, series that do not reach the maximum index are excluded from prediction. In this example, series 'id_1002' is excluded.

In [45]:
# Predict
# ==============================================================================
predictions = forecaster.predict(steps=5, exog=exog_dict_test, suppress_warnings=True)
predictions
Out[45]:
id_1000 id_1001 id_1003 id_1004
2016-08-01 1433.494674 3068.244797 2748.768695 7763.964965
2016-08-02 1465.937652 3468.972018 2022.956989 8734.459604
2016-08-03 1407.568704 3475.785941 1860.174602 9111.776904
2016-08-04 1355.034624 3356.315154 1823.007406 8815.493044
2016-08-05 1298.820257 3325.735999 1815.224166 8664.891475

Backtesting

When series have different lengths, the backtesting process only returns predictions for the date-times that are present in the series.

In [46]:
# Backtesting
# ==============================================================================
cv = TimeSeriesFold(
        steps              = 24,
        initial_train_size = len(series_dict_train["id_1000"]),
        refit              = False,
     )

metrics_levels, backtest_predictions = backtesting_forecaster_multiseries(
    forecaster            = forecaster,
    series                = series_dict,
    exog                  = exog_dict,
    cv                    = cv,
    metric                = "mean_absolute_error",
    add_aggregated_metric = False,
    n_jobs                = "auto",
    verbose               = True,
    show_progress         = True,
    suppress_warnings     = True
)

display(metrics_levels)
print("")
display(backtest_predictions)
Information of folds
--------------------
Number of observations used for initial training: 213
Number of observations used for backtesting: 153
    Number of folds: 7
    Number skipped folds: 0 
    Number of steps per fold: 24
    Number of steps to exclude between last observed data (last window) and predictions (gap): 0
    Last fold only includes 9 observations.

Fold: 0
    Training:   2016-01-01 00:00:00 -- 2016-07-31 00:00:00  (n=213)
    Validation: 2016-08-01 00:00:00 -- 2016-08-24 00:00:00  (n=24)
Fold: 1
    Training:   No training in this fold
    Validation: 2016-08-25 00:00:00 -- 2016-09-17 00:00:00  (n=24)
Fold: 2
    Training:   No training in this fold
    Validation: 2016-09-18 00:00:00 -- 2016-10-11 00:00:00  (n=24)
Fold: 3
    Training:   No training in this fold
    Validation: 2016-10-12 00:00:00 -- 2016-11-04 00:00:00  (n=24)
Fold: 4
    Training:   No training in this fold
    Validation: 2016-11-05 00:00:00 -- 2016-11-28 00:00:00  (n=24)
Fold: 5
    Training:   No training in this fold
    Validation: 2016-11-29 00:00:00 -- 2016-12-22 00:00:00  (n=24)
Fold: 6
    Training:   No training in this fold
    Validation: 2016-12-23 00:00:00 -- 2016-12-31 00:00:00  (n=9)

levels mean_absolute_error
0 id_1000 164.959423
1 id_1001 1055.559754
2 id_1002 NaN
3 id_1003 235.663130
4 id_1004 968.459237

id_1000 id_1001 id_1003 id_1004
2016-08-01 1433.494674 3068.244797 2748.768695 7763.964965
2016-08-02 1465.937652 3468.972018 2022.956989 8734.459604
2016-08-03 1407.568704 3475.785941 1860.174602 9111.776904
2016-08-04 1355.034624 3356.315154 1823.007406 8815.493044
2016-08-05 1298.820257 3325.735999 1815.224166 8664.891475
... ... ... ... ...
2016-12-27 1712.239821 1083.974921 2069.136527 NaN
2016-12-28 1622.091030 1015.958815 1984.614144 NaN
2016-12-29 1555.862159 1085.108004 1909.150202 NaN
2016-12-30 1513.901694 1114.592910 1965.060657 NaN
2016-12-31 1459.122750 1001.655092 1969.768680 NaN

153 rows × 4 columns

In [47]:
# Plot backtesting predictions
# ==============================================================================
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
fig, axs = plt.subplots(5, 1, figsize=(8, 4), sharex=True)
for i, s in enumerate(series_dict.keys()):
    axs[i].plot(series_dict[s], label=series_dict[s].name, color=colors[i])
    axs[i].axvline(pd.to_datetime(end_train), color='white', linestyle='--', linewidth=1)
    try:
        axs[i].plot(backtest_predictions[s], label='prediction', color="white")
    except Exception:
        pass
    axs[i].legend(loc='upper left', fontsize=8)
    axs[i].tick_params(axis='both', labelsize=8)

By allowing the modeling of time series of different lengths and with different exogenous variables, the ForecasterRecursiveMultiSeries class provides a flexible and powerful tool for using all available information to train the forecasting models.

Hyperparameter tuning and lags selection

In the first example of this document, the comparison between forecasters was made without optimizing the hyperparameters of the regressors. To make a fair comparison, a Bayesian search strategy is used to select the best configuration for each forecaster. See more information on hyperparameter tuning and lags selection.

🖉 Note

Hyperparameter tuning for several models can be computationally expensive. To speed up the process, each candidate configuration is evaluated using *one-step-ahead* validation instead of backtesting. For more details on the advantages and limitations of this approach, see [One-step-ahead validation](https://skforecast.org/latest/user_guides/hyperparameter-tuning-and-lags-selection.html#one-step-ahead-validation).
In [ ]:
# Hyperparameter search for each single series model
# ======================================================================================
items = []
mae_values  = []

def search_space(trial):
    search_space  = {
        'lags'          : trial.suggest_categorical('lags', [7, 14]),
        'max_iter'      : trial.suggest_int('max_iter', 100, 500),
        'max_depth'     : trial.suggest_int('max_depth', 5, 10),
        'learning_rate' : trial.suggest_float('learning_rate', 0.01, 0.1)
    } 

    return search_space

for item in tqdm(data.columns):

    window_features = RollingFeatures(stats=['mean', 'min', 'max'], window_sizes=7)
    forecaster = ForecasterRecursive(
                    regressor       = HistGradientBoostingRegressor(random_state=123),
                    lags            = 14,
                    window_features = window_features
                )
    
    cv_search = OneStepAheadFold(initial_train_size = len(data_train))
    
    warnings.simplefilter('ignore', category=OneStepAheadValidationWarning)
    results_bayesian = bayesian_search_forecaster(
                        forecaster    = forecaster,
                        y             = data.loc[:end_val, item],
                        cv            = cv_search,
                        search_space  = search_space,
                        n_trials      = 20,
                        metric        = 'mean_absolute_error',
                        return_best   = True,
                        verbose       = False,
                        show_progress = False 
                    )
    
    cv_backtesting = TimeSeriesFold(
                        steps              = 7,
                        initial_train_size = len(data_train) + len(data_val),
                        refit              = False,
                        )
    metric, preds = backtesting_forecaster(
                        forecaster    = forecaster,
                        y             = data[item],
                        cv            = cv_backtesting,
                        metric        = 'mean_absolute_error',
                        verbose       = False,
                        show_progress = False
                    )

    items.append(item)
    mae_values.append(metric.at[0, 'mean_absolute_error'])

uni_series_mae = pd.Series(
                    data  = mae_values,
                    index = items,
                    name  = 'uni_series_mae'
                )
In [67]:
# Hyperparameter search for the multi-series model and backtesting for each item
# ======================================================================================
def search_space(trial):
    search_space  = {
        'lags'          : trial.suggest_categorical('lags', [7, 14]),
        'max_iter'      : trial.suggest_int('max_iter', 100, 500),
        'max_depth'     : trial.suggest_int('max_depth', 5, 10),
        'learning_rate' : trial.suggest_float('learning_rate', 0.01, 0.1)
    } 

    return search_space

window_features = RollingFeatures(stats=['mean', 'min', 'max'], window_sizes=7)
forecaster_ms = ForecasterRecursiveMultiSeries(
                    regressor          = HistGradientBoostingRegressor(random_state=123),
                    lags               = 14,
                    window_features    = window_features,
                    transformer_series = StandardScaler(),
                    encoding           = 'ordinal'
                )

warnings.simplefilter('ignore', category=OneStepAheadValidationWarning)
results_bayesian_ms = bayesian_search_forecaster_multiseries(
                        forecaster    = forecaster_ms,
                        series        = data.loc[:end_val, :],
                        levels        = None, # If None, all levels (series) are selected
                        cv            = cv_search,
                        search_space  = search_space,
                        n_trials      = 20,
                        metric        = 'mean_absolute_error',
                        return_best   = True,
                        verbose       = False,
                        show_progress = False 
                    )      

multi_series_mae, predictions_ms = backtesting_forecaster_multiseries(
                                       forecaster            = forecaster_ms,
                                       series                = data,
                                       levels                = None, # If None, all levels (series) are selected
                                       cv                    = cv_backtesting,
                                       metric                = 'mean_absolute_error',
                                       add_aggregated_metric = False,
                                       verbose               = False
                                   )
`Forecaster` refitted using the best-found lags and parameters, and the whole data set: 
  Lags: [ 1  2  3  4  5  6  7  8  9 10 11 12 13 14] 
  Parameters: {'max_iter': 283, 'max_depth': 9, 'learning_rate': 0.03516685613423155}
  Backtesting metric: 8.048688486618888
  Levels: ['item_1', 'item_2', 'item_3', 'item_4', 'item_5', 'item_6', 'item_7', 'item_8', 'item_9', 'item_10', 'item_11', 'item_12', 'item_13', 'item_14', 'item_15', 'item_16', 'item_17', 'item_18', 'item_19', 'item_20', 'item_21', 'item_22', 'item_23', 'item_24', 'item_25', 'item_26', 'item_27', 'item_28', 'item_29', 'item_30', 'item_31', 'item_32', 'item_33', 'item_34', 'item_35', 'item_36', 'item_37', 'item_38', 'item_39', 'item_40', 'item_41', 'item_42', 'item_43', 'item_44', 'item_45', 'item_46', 'item_47', 'item_48', 'item_49', 'item_50']

In [68]:
# Difference in backtesting metric for each item
# ======================================================================================
multi_series_mae = multi_series_mae.set_index('levels')
multi_series_mae.columns = ['multi_series_mae']
results = pd.concat((uni_series_mae, multi_series_mae), axis = 1)
results['improvement'] = results.eval('uni_series_mae - multi_series_mae')
results['improvement_(%)'] = 100 * results.eval('(uni_series_mae - multi_series_mae) / uni_series_mae')
results = results.round(2)

# Average improvement for all items
# ======================================================================================
results[['improvement', 'improvement_(%)']].agg(['mean', 'min', 'max'])
Out[68]:
improvement improvement_(%)
mean 0.716 7.5988
min 0.070 1.2100
max 1.490 15.5600
In [69]:
# Number of series with positive and negative improvement
# ======================================================================================
pd.Series(np.where(results['improvement_(%)'] < 0, 'negative', 'positive')).value_counts()
Out[69]:
positive    50
Name: count, dtype: int64

After identifying the combination of lags and hyperparameters that achieves the best predictive performance for each forecaster, the single-series models improve their predictive ability. Even so, the multi-series model still provides better results for all of the items.

Feature Selection

Feature selection is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Feature selection techniques are used for several reasons: to simplify models to make them easier to interpret, to reduce training time, to avoid the curse of dimensionality, to improve generalization by reducing overfitting (formally, variance reduction), and others.

Skforecast is compatible with the feature selection methods implemented in the scikit-learn library. There are several methods for feature selection, but the most common are:

  • Recursive feature elimination (RFE)

  • Sequential Feature Selection (SFS)

  • Feature selection based on threshold (SelectFromModel)

💡 Tip

Feature selection is a powerful tool for improving the performance of machine learning models. However, it is computationally expensive and can be time-consuming. Since the goal is to find the best subset of features, not the best model, it is not necessary to use the entire data set or a highly complex model. Instead, it is recommended to use a small subset of the data and a simple model. Once the best subset of features has been identified, the model can then be trained using the entire dataset and a more complex configuration. The sketch below illustrates this approach.
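
The sketch below shows how a scikit-learn selector can be combined with the select_features_multiseries function imported at the beginning of this document. It reuses the store sales data from the first case study. The selector uses RandomForestRegressor instead of HistGradientBoostingRegressor because RFECV requires an estimator that exposes feature_importances_. The exact arguments and the three returned selections (lags, window features, exogenous variables) are assumptions based on the skforecast 0.14 API, not the authors' exact workflow.

# Feature selection with RFECV and select_features_multiseries (illustrative sketch)
# ======================================================================================
from sklearn.ensemble import HistGradientBoostingRegressor, RandomForestRegressor
from sklearn.feature_selection import RFECV
from skforecast.recursive import ForecasterRecursiveMultiSeries
from skforecast.preprocessing import RollingFeatures
from skforecast.feature_selection import select_features_multiseries

# Forecaster whose lags, window features and exogenous variables will be screened
forecaster = ForecasterRecursiveMultiSeries(
                 regressor       = HistGradientBoostingRegressor(random_state=123),
                 lags            = 14,
                 window_features = RollingFeatures(stats=['mean', 'min', 'max'], window_sizes=7),
                 encoding        = 'ordinal'
             )

# A small, simple selector keeps the search cheap (see tip above)
selector = RFECV(
               estimator = RandomForestRegressor(n_estimators=20, max_depth=5, random_state=123),
               step      = 1,
               cv        = 3
           )

# Assumed to return the selected lags, window features and exogenous variables
selected_lags, selected_window_features, selected_exog = select_features_multiseries(
    forecaster = forecaster,
    selector   = selector,
    series     = data.loc[:end_val, :],  # store sales data from the first case study
    exog       = None,
    subsample  = 0.5,
    verbose    = True
)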

Weights in multi-series

The weights are used to control the influence that each observation has on the training of the model. ForecasterRecursiveMultiSeries accepts two types of weights:

  • series_weights controls the relative importance of each series. If a series has twice as much weight as the others, the observations of that series influence the training twice as much. The higher the weight of a series relative to the others, the more the model will focus on trying to learn that series.

  • weight_func controls the relative importance of each observation according to its index value. For example, a function that assigns a lower weight to certain dates.

If both types of weights are specified, they are multiplied to create the final weights. The resulting sample_weight cannot contain negative values.

Learn more about weights in multi-series forecasting and weighted time series forecasting with skforecast.

In this example, item_1 has a higher relative importance among the series (it weighs 3 times more than the rest), and observations between '2013-12-01' and '2014-01-31' are considered non-representative, so a weight of 0 is applied to them.

In [72]:
# Weights in ForecasterRecursiveMultiSeries
# ======================================================================================
# Weights for each series
series_weights = {'item_1': 3.0} # Series not present in the dict will have weight 1

# Weights for each index
def custom_weights(index):
    """
    Return 0 if index is between '2013-12-01' and '2014-01-31', 1 otherwise.
    """
    weights = np.where(
                  (index >= '2013-12-01') & (index <= '2014-01-31'),
                   0,
                   1
              )
    
    return weights

forecaster = ForecasterRecursiveMultiSeries(
                 regressor          = HistGradientBoostingRegressor(random_state=123),
                 lags               = 14,
                 transformer_series = StandardScaler(),
                 encoding           = 'ordinal',
                 transformer_exog   = None,
                 weight_func        = custom_weights, 
                 series_weights     = series_weights
             )

forecaster.fit(series=data)
forecaster.predict(steps=7).head(3)
/home/ubuntu/anaconda3/envs/skforecast_14_py12/lib/python3.12/site-packages/skforecast/recursive/_forecaster_recursive_multiseries.py:1383: IgnoredArgumentWarning: {'item_45', 'item_38', 'item_4', 'item_32', 'item_37', 'item_2', 'item_33', 'item_47', 'item_36', 'item_22', 'item_50', 'item_6', 'item_35', 'item_43', 'item_17', 'item_41', 'item_3', 'item_26', 'item_16', 'item_40', 'item_25', 'item_30', 'item_13', 'item_27', 'item_46', 'item_14', 'item_20', 'item_10', 'item_42', 'item_49', 'item_48', 'item_19', 'item_31', 'item_11', 'item_24', 'item_5', 'item_12', 'item_18', 'item_34', 'item_21', 'item_9', 'item_39', 'item_8', 'item_7', 'item_28', 'item_44', 'item_29', 'item_23', 'item_15'} not present in `series_weights`. A weight of 1 is given to all their samples. 
 You can suppress this warning using: warnings.simplefilter('ignore', category=IgnoredArgumentWarning)
  warnings.warn(
Out[72]:
item_1 item_10 item_11 item_12 item_13 item_14 item_15 item_16 item_17 item_18 ... item_46 item_47 item_48 item_49 item_5 item_50 item_6 item_7 item_8 item_9
2018-01-01 20.583876 61.580282 62.323468 62.581761 79.205140 56.450206 85.846570 22.718565 32.502652 79.377183 ... 53.421299 22.260726 47.144927 28.682841 22.372861 63.199237 53.271716 57.232493 68.655821 41.978625
2018-01-02 22.501538 71.421524 75.349079 75.283254 89.691063 59.121550 90.118244 24.848422 36.265055 87.467801 ... 62.983381 26.943935 51.961949 34.340793 19.502197 63.945234 57.812538 61.734525 82.100161 49.326013
2018-01-03 20.907183 73.185299 76.323151 75.547258 83.861397 63.508950 95.531382 25.331899 33.468072 90.110389 ... 62.890533 25.834892 52.942216 32.730445 18.498621 71.040223 61.123112 63.709698 84.395482 50.310361

3 rows × 50 columns

🖉 Note

A dictionary can be passed to `weight_func` to apply a different function to each series. If a series is not present in the dictionary, it will have weight 1, as sketched below.
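
A minimal sketch of this option, reusing the store sales data and the objects imported above. The function names and date ranges are illustrative, not part of the original example.

# Different weight function per series (illustrative sketch)
# ======================================================================================
def weights_item_1(index):
    """Ignore observations of item_1 between '2013-12-01' and '2014-01-31'."""
    return np.where((index >= '2013-12-01') & (index <= '2014-01-31'), 0, 1)

def weights_item_2(index):
    """Give the most recent year of item_2 twice the weight of older observations."""
    return np.where(index >= '2017-01-01', 2, 1)

forecaster = ForecasterRecursiveMultiSeries(
                 regressor   = HistGradientBoostingRegressor(random_state=123),
                 lags        = 14,
                 encoding    = 'ordinal',
                 weight_func = {'item_1': weights_item_1, 'item_2': weights_item_2}
             )
forecaster.fit(series=data)  # remaining series get a weight of 1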

Conclusions

This use case shows that a multi-series model may have advantages over multiple individual models when forecasting time series that follow similar dynamics. Beyond the potential improvements in forecasting, it is also important to take into consideration the benefit of having only one model to maintain.

Session information

In [73]:
import session_info
session_info.show(html=False)
-----
matplotlib          3.9.2
numpy               2.0.2
pandas              2.2.3
session_info        1.0.0
skforecast          0.14.0
sklearn             1.5.1
tqdm                4.66.5
-----
IPython             8.27.0
jupyter_client      8.6.3
jupyter_core        5.7.2
notebook            6.4.12
-----
Python 3.12.5 | packaged by Anaconda, Inc. | (main, Sep 12 2024, 18:27:27) [GCC 11.2.0]
Linux-5.15.0-1071-aws-x86_64-with-glibc2.31
-----
Session information updated at 2024-11-06 15:29

Citation

How to cite this document

If you use this document or any part of it, please acknowledge the source. Thank you!

Global Forecasting Models: Modeling Multiple Time Series with Machine Learning by Joaquín Amat Rodrigo and Javier Escobar Ortiz, available under a CC BY-NC-SA 4.0 at https://www.cienciadedatos.net/documentos/py44-multi-series-forecasting-skforecast.html

How to cite skforecast

If you use skforecast for a scientific publication, we would appreciate it if you cite the published software.

Zenodo:

Amat Rodrigo, Joaquin, & Escobar Ortiz, Javier. (2024). skforecast (v0.14.0). Zenodo. https://doi.org/10.5281/zenodo.8382788

APA:

Amat Rodrigo, J., & Escobar Ortiz, J. (2024). skforecast (Version 0.14.0) [Computer software]. https://doi.org/10.5281/zenodo.8382788

BibTeX:

@software{skforecast, author = {Amat Rodrigo, Joaquin and Escobar Ortiz, Javier}, title = {skforecast}, version = {0.14.0}, month = {11}, year = {2024}, license = {BSD-3-Clause}, url = {https://skforecast.org/}, doi = {10.5281/zenodo.8382788} }


Did you like the article? Your support is important

Website maintenance has a high cost, and your contribution will help me continue generating free educational content. Many thanks! 😊


Creative Commons Licence
This work by Joaquín Amat Rodrigo and Javier Escobar Ortiz is licensed under an Attribution-NonCommercial-ShareAlike 4.0 International license.

Allowed:

  • Share: copy and redistribute the material in any medium or format.

  • Adapt: remix, transform, and build upon the material.

Under the following terms:

  • Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

  • NonCommercial: You may not use the material for commercial purposes.

  • ShareAlike: If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.