Out-Of-Sample Forecasting Techniques

Out-of-sample forecasting involves using a model to predict future values of a time series that were not used to train the model. This technique is crucial for evaluating the model’s ability to generalize to new data and make accurate predictions.

Cross-Validation

Cross-validation is a popular method for out-of-sample forecasting. It involves dividing the data into multiple folds, training the model on one or more folds, and evaluating its performance on the remaining folds. This process is repeated multiple times to obtain a more robust estimate of the model’s performance.

  • K-fold cross-validation: The data is divided into k folds; the model is trained on k-1 folds and evaluated on the remaining fold. This process is repeated k times, with each fold used as the testing set once. Because standard k-fold shuffles observations, it can leak future information into the training set, so it is rarely appropriate for time series.  
  • Time series cross-validation: This method is designed for ordered data. The model is trained only on observations up to a given point in time and evaluated on the observations that follow; the cut-off is then rolled forward for the next fold.
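A minimal sketch of time series cross-validation using scikit-learn's TimeSeriesSplit; the 12-point series and the fold count are illustrative:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Hypothetical series of 12 observations (e.g. monthly values)
series = np.arange(12)

# Each fold trains on an expanding history and tests on the block that
# follows it, so the model never sees the future during training
splits = list(TimeSeriesSplit(n_splits=3).split(series))
for train_idx, test_idx in splits:
    print(f"train={train_idx.tolist()} test={test_idx.tolist()}")
```

Note that, unlike k-fold, every training window here ends strictly before its test window begins.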

Holdout Validation

Holdout validation is a simpler method that splits the data into a training set and a testing set. The model is trained on the training set and evaluated on the testing set. For time series, the split should be chronological rather than random: train on the earlier portion and test on the later portion, so the model is never trained on observations from the future.
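A chronological holdout split can be sketched as follows; the helper name and the 20% test fraction are illustrative choices:

```python
import numpy as np

def holdout_split(series, test_fraction=0.2):
    """Split a series chronologically: the final test_fraction of points
    becomes the test set, preserving temporal order."""
    cut = int(len(series) * (1 - test_fraction))
    return series[:cut], series[cut:]

# Hypothetical series of 10 observations
series = np.arange(10)
train, test = holdout_split(series)
print(train.tolist(), test.tolist())
```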

Rolling Window Forecasting

Rolling window forecasting involves using a fixed-size window of data to train the model and then sliding the window forward to make predictions. This method can be useful for capturing time-varying patterns in the data.
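A minimal sketch of the idea, using a simple moving-average forecaster as a stand-in for any fitted model; the function name, window size, and data are illustrative:

```python
import numpy as np

def rolling_window_forecast(series, window, model=np.mean):
    """One-step-ahead forecasts: at each step, 'fit' the model on the
    most recent `window` points and predict the next point."""
    preds = []
    for t in range(window, len(series)):
        # The window slides forward by one observation each step
        preds.append(model(series[t - window:t]))
    return np.array(preds)

series = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(rolling_window_forecast(series, window=3))
```

In practice `model` would be replaced by refitting an actual forecasting model (such as ARIMA) on each window.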

Forecasting Accuracy Metrics

Several metrics can be used to evaluate the accuracy of out-of-sample forecasts, including:

  • Mean Absolute Error (MAE): Measures the average absolute difference between the predicted values and the actual values.
  • Mean Squared Error (MSE): Measures the average squared difference between the predicted values and the actual values.  
  • Root Mean Squared Error (RMSE): The square root of the MSE, which is often preferred because it is in the same units as the original data.  
  • Mean Absolute Percentage Error (MAPE): Measures the average absolute percentage difference between the predicted values and the actual values. Note that it is undefined when any actual value is zero.
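The four metrics above can be computed directly with NumPy; the helper name and the example values are illustrative:

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """Compute MAE, MSE, RMSE, and MAPE for a set of forecasts."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    # MAPE assumes no actual value is zero
    mape = np.mean(np.abs(err / actual)) * 100
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape}

metrics = forecast_metrics([100, 200, 300], [110, 190, 310])
print(metrics)
```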

By using out-of-sample forecasting techniques and evaluating the accuracy of the forecasts, you can assess the reliability and generalizability of your time series model.
