Since the previous post on “The Creation of ADAM”, I have had difficulty finding time to code anything, but I still managed to fix some bugs, implement a couple of features and make changes important enough to call the next version of the smooth package “3.1.0”. Here is what’s new:

- A new algorithm for ARIMA order selection in ADAM via the `auto.adam()` function. The algorithm is explained in the draft of my online textbook. It is more efficient than the previous one (originally implemented for `auto.ssarima()`) in terms of both computational time and forecast accuracy. We will see how it performs on a dataset in the R experiment below.
- We no longer depend on the `forecast` package; we now use the `forecast()` method from the `greybox` package. Hopefully, this will not lead to conflicts between the packages, but if it does, please let me know, and I will fix them. The main motivation for this move was the number of packages `forecast` relies on, which makes R download half of CRAN whenever you install `forecast` for the first time. This is a bit irritating and complicates the testing process. Furthermore, Rob Hyndman’s band of programmers now focuses on tidyverts packages, and `forecast` is not included in that infrastructure, so inevitably it will become obsolete, and we would all need to move away from it anyway.
- The `ves()` and `viss()` functions have been moved from the `smooth` package to a new one called `legion`. The package is in the development stage and will be released closer to the end of March 2021. It will focus on multivariate models for time series analysis and forecasting, such as Vector Exponential Smoothing, Vector ARIMA etc.
- ADAM now also supports the Gamma distribution via the respective parameter in the `adam()` function. There is some information about that in the draft of the online textbook on ADAM.
- Finally, the `msdecompose()` function (for multiple seasonal decomposition) now has a built-in mechanism for missing data interpolation based on polynomials and Fourier series. The same mechanism is now used in `adam()`.
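The new features above can be tried with a few calls. This is a minimal, hedged sketch assuming the smooth v3.1.0 API; `AirPassengers` is used purely as an illustrative dataset, not the one from the experiment below:

```r
library(smooth)

# New ARIMA order selection in ADAM: provide the maximum orders to test
# and set select=TRUE so that the algorithm picks the appropriate ones.
adamARIMA <- auto.adam(AirPassengers, model="NNN",
                       orders=list(ar=c(3,3), i=c(2,1), ma=c(3,3), select=TRUE))

# Gamma distribution via the distribution parameter of adam():
adamGamma <- adam(AirPassengers, model="MMM", distribution="dgamma")

# msdecompose() now interpolates missing values internally,
# so a series with gaps can be decomposed directly:
y <- AirPassengers
y[c(10, 25, 50)] <- NA
yDecomposed <- msdecompose(y, lags=c(12), type="multiplicative")
```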

### A small competition in R

I will not repeat the code from the previous post: you can copy and paste it into your R script and run it to replicate the experiment described there. Instead, I will provide the results of the same experiment with the functions from smooth v3.1.0. Here are the two final tables with error measures:

Means:

| Model | MASE | RMSSE | Coverage | Range | sMIS | Time |
|---|---|---|---|---|---|---|
| ADAM-ETS(ZZZ) | 2.415 | 2.098 | 0.888 | 1.398 | 2.437 | 0.654 |
| ADAM-ETS(ZXZ) | **2.250** | **1.961** | **0.895** | 1.225 | **2.092** | 0.497 |
| ADAM-ARIMA | 2.326 | 2.007 | 0.841 | 0.848 | 3.101 | 3.029 |
| ADAM-ARIMA-old | 2.551 | 2.203 | 0.862 | 0.968 | 3.098 | 5.990 |
| ETS(ZXZ) | 2.279 | 1.977 | 0.862 | 1.372 | 2.490 | 1.128 |
| ETSHyndman | 2.263 | 1.970 | 0.882 | 1.200 | 2.258 | **0.404** |
| AutoSSARIMA | 2.482 | 2.134 | 0.801 | **0.780** | 3.335 | 1.700 |
| AutoARIMA | 2.303 | 1.989 | 0.834 | 0.805 | 3.013 | 1.385 |

Medians:

| Model | MASE | RMSSE | Range | sMIS | Time |
|---|---|---|---|---|---|
| ADAM-ETS(ZZZ) | 1.362 | 1.215 | 0.671 | 0.917 | 0.396 |
| ADAM-ETS(ZXZ) | 1.327 | 1.184 | 0.675 | 0.909 | 0.310 |
| ADAM-ARIMA | 1.324 | 1.187 | 0.630 | 0.917 | 2.818 |
| ADAM-ARIMA-old | 1.476 | 1.300 | 0.769 | 1.006 | 3.525 |
| ETS(ZXZ) | 1.335 | 1.198 | 0.616 | 0.931 | 0.551 |
| ETSHyndman | 1.323 | **1.181** | 0.653 | 0.925 | **0.164** |
| AutoSSARIMA | 1.419 | 1.271 | **0.577** | 0.988 | 0.909 |
| AutoARIMA | **1.310** | 1.182 | 0.609 | **0.881** | 0.322 |

In the tables above, the models that perform best in terms of the selected error measures are marked in **boldface**; the new ADAM ARIMA is in dark red, while the old one is in grey. We can see that while the new ADAM ARIMA does not beat the benchmark models, such as ETS(Z,X,Z) or even `auto.arima()`, it does much better than the previous version of ADAM ARIMA in terms of accuracy and computational time, and does not fail as badly. The new algorithm is definitely better than the one implemented in the `auto.ssarima()` function from `smooth`. When it comes to prediction intervals, ADAM ARIMA outperforms both `auto.arima()` from the `forecast` package and `auto.ssarima()` from `smooth` in terms of coverage, getting closer to the nominal 95% confidence level.
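As a hedged illustration of the interval comparison above (assuming the smooth v3.1.0 API, with `AirPassengers` as a stand-in dataset), prediction intervals for an ADAM model can be produced via the `forecast()` method, which now comes from greybox:

```r
library(smooth)

# Fit an ADAM ETS model with a 12-observation holdout
adamModel <- adam(AirPassengers, model="ZXZ", h=12, holdout=TRUE)

# Produce point forecasts with 95% prediction intervals and plot them
adamForecast <- forecast(adamModel, h=12, interval="prediction", level=0.95)
plot(adamForecast)
```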

I am still not totally satisfied with ADAM ARIMA in the case of optimised initials (it seems to work well with `initial="backcasting"`) and will continue working in this direction, but at least now it works better than in smooth v3.0.1. I also plan to improve the computational speed of ADAM with factor variables, to work on developing classes for predicted explanatory variables, and then to make all `smooth` functions agnostic of the classes of data. If you have any suggestions, please file an issue on GitHub.
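For those who want to compare the two initialisation approaches mentioned above themselves, here is a hedged sketch, assuming the `initial` parameter as documented in smooth (with `AirPassengers` purely as an example series):

```r
library(smooth)

# Backcasting initialisation, which currently seems to work well:
adamBack <- adam(AirPassengers, model="NNN", orders=list(ar=2, i=1, ma=2),
                 initial="backcasting")

# Optimised initials, which are still being improved:
adamOpt <- adam(AirPassengers, model="NNN", orders=list(ar=2, i=1, ma=2),
                initial="optimal")
```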

Till next time!