Previously we discussed how to measure the accuracy of point forecasts and the performance of prediction intervals in different cases. Now we look into the question of how to tell the difference between competing forecasting approaches. Let’s imagine a situation where we have four forecasting methods applied to 100 time series, with accuracy measured in terms of RMSSE: […]
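The excerpt above is cut off, but since it compares methods by RMSSE, a minimal sketch of that measure may be useful (the function, its name and the toy data below are my own illustration, not code from the post):

```python
import numpy as np

def rmsse(train, actual, forecast):
    """Root Mean Squared Scaled Error: mean squared forecast error,
    scaled by the in-sample mean squared error of the naive forecast."""
    scale = np.mean(np.diff(train) ** 2)
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2) / scale)

# toy example: scoring the naive method on one short series
train = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
actual = np.array([13.0, 14.0])
naive_fc = np.full(2, train[-1])  # naive forecast: repeat the last observation
score = rmsse(train, actual, naive_fc)  # equals 1.0 for this toy data
```

In a comparison like the one in the post, this score would be computed for each of the competing methods on each of the series, giving a matrix of errors to analyse.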

# Forecasting method vs forecasting model: what’s the difference?

If you work in the field of statistics, analytics, data science or forecasting, then you have probably already noticed that some of the instruments used in your field are called “methods”, while others are called “models”. The issue here is that the people using these terms usually know the distinction between them, […]

# Forecasting for the sake of forecasting

You have probably already noticed that we are in the middle of the COVID-19 pandemic these days (breaking news: the UK has just announced a lockdown due to the virus). The amount of news, memes and noise on the topic coming from around the world is astonishing! What is also astonishing is the number of posts on […]

# M-competitions, from M4 to M5: reservations and expectations

Some of you might have noticed that the guidelines for the M5 competition have finally been released. Those of you who have previously visited this blog might know about my scepticism towards the M4 competition. So, I’ve decided to write this small post, outlining my reservations about the M4 and my thoughts and expectations about the […]

# Multiplicative State-Space Models for Intermittent Time Series, 2019

More than two years ago I published on this website a working paper entitled “Multiplicative State-Space Models for Intermittent Time Series”, written by John Boylan and me. This was an early version of the paper, which we submitted to the International Journal of Forecasting on 31st January 2017. More than two years later (on 11th July […]

# What about all those zeroes? Measuring performance of models on intermittent demand

In one of the previous posts, we discussed how to measure the accuracy of forecasting methods on continuous data. All these MAE, RMSE, MASE, RMSSE, rMAE, rRMSE and other measures can give you information about the mean or median performance of forecasting methods. We have also discussed how to measure the performance […]
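The teaser mentions mean versus median performance, and on intermittent data this distinction bites: median-oriented measures such as MAE can favour a flat zero forecast. A toy demonstration of this effect (the data-generating process and numbers are my own, not from the post):

```python
import numpy as np

# toy intermittent series: roughly 70% of observations are zero
rng = np.random.default_rng(42)
demand = rng.binomial(1, 0.3, size=1000) * rng.poisson(5, size=1000)

mean_fc = demand.mean()  # forecast aimed at the mean demand
zero_fc = 0.0            # flat forecast of "no demand at all"

def mae(f):
    return np.mean(np.abs(demand - f))

def rmse(f):
    return np.sqrt(np.mean((demand - f) ** 2))

# MAE (minimised by the median, which is zero here) rewards the
# zero forecast, while RMSE (minimised by the mean) prefers mean_fc.
```

Running the two measures on the two forecasts shows them disagreeing about which forecast is “better”, which is why measuring performance on intermittent demand needs special care.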

# How confident are you? Assessing the uncertainty in forecasting

Introduction Some people think that the main idea of forecasting is to predict the future as accurately as possible. I have bad news for them. The main idea of forecasting is to decrease the uncertainty. Think about it: any event that we want to predict has some systematic component \(\mu_t\), which could potentially be captured […]

# Are you sure you’re precise? Measuring accuracy of point forecasts

Two years ago I wrote a post, “Naughty APEs and the quest for the holy grail”, where I discussed why percentage-based error measures (such as MPE, MAPE and sMAPE) are not good for the task of forecasting performance evaluation. However, it seems to me that I did not explain the topic to the full […]
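The original post spells the argument out in detail; as a quick reminder of one well-known problem with APE-based measures, here is a toy illustration (my own numbers, not taken from the post):

```python
# Absolute percentage error of a forecast f for an actual value a
def ape(a, f):
    return abs(a - f) / abs(a) * 100

# The same absolute miss of 50 units is penalised differently
# depending on which side of the actual value it falls:
over = ape(100, 150)   # forecast above the actual -> 50.0%
under = ape(150, 100)  # forecast below the actual -> ~33.3%
# So MAPE systematically rewards under-forecasting, and it breaks
# down entirely whenever the actual value is zero.
```

This asymmetry is one of the reasons the post argues against MPE, MAPE and sMAPE for evaluating forecasting performance.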

# useR!2019 in Toulouse, France

Salut mes amis! Today I presented my smooth package at the useR!2019 conference in Toulouse, France. This is a nice conference, focused on specific solutions to specific problems. Here, people tend to present functions from their packages (rather than the underlying models, as, for example, at the ISF). On the one hand, this has its own limitations, but on […]

# International Symposium on Forecasting 2019

The ISF2019 took place in Thessaloniki, Greece. This time I presented a spin-off of my research on intermittent demand in retail, entitled “What about those sweet melons? Using mixture models for demand forecasting in retail”. The idea is quite simple: use mixture distribution regressions (e.g. with logistic and log-normal distributions) in order to […]