When you follow academics on social media, you typically see many success stories: this person published a paper in Management Science; another one published in EJOR; your colleague from a different university created a great package; and there is an academic ten years younger than you who has already published ten more papers than you. Living in a stream of such success stories while not being able to do as much can be mentally challenging and demotivating, especially at the start of a career, when academics are expected to do a lot but have not yet had the chance. This is a well-known phenomenon in psychology (“compare and despair”), and it can lead to anxiety and depression. People tend to tell their success stories while hiding the difficulties they had to face, making everything sound easier than it was. This distorts reality: it might seem that everyone around you is amazingly successful and never faces the challenges that you do.
But in real life, academia is difficult, publishing is hard, and getting recognition is not straightforward. So, to balance things out, starting from this post I will be telling the stories behind my papers: what the original idea was, what the reviewers said, and what we ended up with. Some of the stories are nasty and ugly, some are funny, but in general the academic reality is no valley of rose petals. What I want these stories to convey is that the ridiculous situations you sometimes face when submitting papers are not unique and have nothing to do with you or your personality. It is a lottery: sometimes you win, sometimes you lose, and there is only so much you can do about that. Hopefully, this will motivate early career researchers and give an idea of how to treat others when you are asked to act as a reviewer.
The story of the paper
The paper evolved from PIC restrictions on the specific VETS(M,N,M) model to PIC restrictions on any pure additive or pure multiplicative VETS model. One of the reviewers was highly dissatisfied with the proposed idea and pushed us towards writing a different paper, proposing to drop the PIC restrictions altogether. In the end, we were lucky: the Associate Editor accepted the paper despite the reviewer’s rejection recommendation.
Back in 2018, John Boylan approached me and asked me to join a project that he and his colleague, Huijing Chen, were working on. They were studying the performance of models with a variety of restrictions on parameters in the case of seasonal data, building on the research of Ouwehand et al. (2007). The idea was to accurately capture seasonal patterns shared across several products exhibiting similar characteristics. This would allow forecasting similar products in one go, even with the small sample sizes where conventional approaches typically fail. My contribution was to implement those restrictions in a multivariate framework based on the vector exponential smoothing of de Silva et al. (2010). Over 2018–2019, we met regularly and developed a taxonomy of restrictions for one specific model, VETS(M,N,M) – vector exponential smoothing without trend and with multiplicative error and seasonal components. Based on the proposed framework, a forecaster could, for example, use the same smoothing parameters for different time series or the same set of initial seasonal indices across them, thus reducing the number of parameters to estimate and increasing the efficiency of the estimates. We decided to focus on the VETS(M,N,M) model rather than an additive one because it is more reasonable to expect that, for example, the sales of several similar products in January will increase by x% rather than by x units. To simplify our lives, we formulated the model similarly to Akram et al. (2009) – by applying a pure additive model to the data in logarithms. It took us almost two years to develop the theory, write the necessary R functions (now available in the legion package for R), conduct the experiments and write the paper, which was then submitted to EJOR at the beginning of August 2020.
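The idea of sharing parameters and initial seasonal indices across similar series can be illustrated with a minimal sketch. The actual implementation is in the legion package for R; the snippet below is a simplified Python illustration under my own assumptions (all function and variable names are hypothetical, not from the paper or the package). It fits an additive level-plus-seasonal exponential smoothing to the logarithms of three similar series, using one common pair of smoothing parameters and one common set of initial seasonal indices:

```python
import numpy as np

def ets_ana_sse(y, alpha, gamma, s_init, m):
    # Sum of squared one-step-ahead errors for an additive level + seasonal
    # exponential smoothing (a rough stand-in for ETS(A,N,A) on logged data).
    level = np.mean(y[:m]) - np.mean(s_init)  # crude initial level
    s = list(s_init)
    sse = 0.0
    for yt in y:
        forecast = level + s[0]
        e = yt - forecast
        sse += e ** 2
        level, s = level + alpha * e, s[1:] + [s[0] + gamma * e]
    return sse

# Simulate three similar monthly series sharing one (log-)seasonal pattern
rng = np.random.default_rng(42)
m, years = 12, 5
pattern = 0.2 * np.sin(2 * np.pi * np.arange(m) / m)
logged = [
    rng.uniform(4, 5) + np.tile(pattern, years) + rng.normal(0, 0.05, years * m)
    for _ in range(3)
]  # working in logs makes the multiplicative structure additive

# One common set of initial seasonal indices, averaged across all series
stacked = np.array(logged)
deviations = stacked - stacked.mean(axis=1, keepdims=True)
s_init = deviations.mean(axis=0).reshape(years, m).mean(axis=0)

# One common (alpha, gamma) pair chosen by minimising the joint SSE
best = min(
    ((a, g, sum(ets_ana_sse(y, a, g, list(s_init), m) for y in logged))
     for a in np.linspace(0.05, 0.95, 19)
     for g in np.linspace(0.0, 0.3, 7)),
    key=lambda tup: tup[2],
)
print(f"shared alpha={best[0]:.2f}, gamma={best[1]:.2f}, joint SSE={best[2]:.3f}")
```

With shared smoothing parameters and seasonal indices, the three series are described by two smoothing parameters and twelve seasonal values in total, instead of three independent sets – the kind of efficiency gain the PIC restrictions formalise.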
In November 2020, we received the comments of three reviewers. They had several major concerns, the main one being that we applied our approach to one specific type of model, VETS(M,N,M); they wanted us to extend it to other types of components. So, we took our time to develop an extended taxonomy, allowing both pure additive and pure multiplicative VETS models and restricting the level, trend and seasonal components depending on the preferences of a forecaster. We rewrote the paper, I created a new R function (vets(), also in the legion package), and we updated the experiments in the paper. After working on the paper for more than half a year, we resubmitted it at the beginning of June 2021.
In August 2021, we received the second round of comments. One of the reviewers was satisfied with the progress and recommended acceptance, another had only minor comments, but the third one still had major reservations about the paper:
- They claimed that the suggested restrictions are excessive and could be dropped. For example, they said that imposing commonality on the initial components of the model is not needed because initial values are not even discussed in the Hyndman & Athanasopoulos textbook. Had we agreed with the reviewer, we would have needed to remove the essence of the paper;
- They also claimed that applying the model to the data in logarithms, in contrast with the conventional multiplicative ETS models, does not take care of time-varying heteroscedasticity. This is incorrect: taking logarithms is one of the textbook methods for resolving heteroscedasticity. Besides, the conventional multiplicative ETS and a model applied to the data in logarithms behave similarly (see, for example, Akram et al. (2009) or Section 6.1 of the ADAM monograph).
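The point about logarithms is easy to verify numerically. The Python snippet below (my own illustration, not from the paper) simulates a series with multiplicative errors, whose spread grows with the level, and shows that after taking logs the noise becomes roughly homoscedastic:

```python
import numpy as np

# A series with multiplicative errors: the noise is a fixed percentage
# of the level, so the absolute spread grows as the level grows.
rng = np.random.default_rng(0)
level = np.linspace(10, 100, 200)           # the level increases tenfold
y = level * (1 + rng.normal(0, 0.1, 200))   # multiplicative 10% noise

# Residual spread around the level in the low- and high-level halves:
spread_low = np.std(y[:100] - level[:100])
spread_high = np.std(y[100:] - level[100:])
print(spread_low, spread_high)              # the second is noticeably larger

# After taking logs, the error log(1 + eps) is approximately additive
# with constant variance, regardless of the level:
ly = np.log(y)
lspread_low = np.std(ly[:100] - np.log(level[:100]))
lspread_high = np.std(ly[100:] - np.log(level[100:]))
print(lspread_low, lspread_high)            # now roughly equal
```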
There were other technical comments related to the model formulation; we agreed with those and prepared a thorough response to the reviewer, trying to clarify why the restrictions are important and why we considered the proposed approach sensible.
The Associate Editor also pointed out that the paper was “OR-light” and that we needed to make it more relevant to the EJOR audience. We rewrote the introduction and added some comments in the experimental part to bring it closer to OR. We then resubmitted the paper in October 2021.
In mid-December 2021, we received the next round of comments. Now two out of three reviewers recommended acceptance, while the third one still recommended a “major revision”:
- They did not like that our framework does not cover the mixed models (for example, with multiplicative error and additive trend). The reviewer wanted us to develop a taxonomy analogous to that of ETS, covering all 30 possible models (for details, see, for example, Section 4.1 of the ADAM monograph);
- They thought that the main (and only) contribution of the paper was the introduction of six VETS models applied to the data in logarithms.
In the new version of the paper and the responses to the reviewer, we tried to clarify the original contribution, emphasising that we were focusing on restrictions of VETS components and parameters rather than on new model forms.
In addition, the Associate Editor asked us to make the paper even more relevant to OR, so we updated the experiment to align it with specific managerial decisions. We resubmitted the new version of the paper on 28th February 2022.
At the end of April 2022, we received the final comments. The third reviewer recommended rejection of the paper with the comment: “After 4 rounds of revisions the authors have decided to change the main contribution of the paper from proposing a Taxonomy of VETS models, to a new taxonomy of Parameters, Initial States and Components (PIC). I do not see how this is a significant enough contribution – I would have thought and was hoping they would go the other way and expand the set of VETS models into a full taxonomy.”
However, the Associate Editor disagreed and accepted the paper, given the positive feedback from the first two reviewers. So, in the end, this is a success story, but the paper was accepted only because the AE overruled the recommendation of the third reviewer. If it were not for him, I am sure the paper would have been rejected. Our team is immensely grateful for the Associate Editor’s support.
As a final word, I should say that while we disagreed with the third reviewer on the main issues, we know that he is a smart academic with his own strong view on how things should be done. The main problem in our case was that he wanted us to write a different paper and, as a result, was dissatisfied when we disagreed.
Lessons to learn from this
For us (from the author’s perspective), the main issue was that we were distracted from the core idea of the paper, carried away by the comments and the resulting changes. The important lesson is to keep the main idea in mind and make it clear from the moment of submission.
From the reviewer’s perspective: try to understand the main contribution and do not force the authors to write your paper. If the authors do something differently, it does not necessarily mean that they are wrong. Every academic has their own view on a subject, which is why we share ideas, work together and collaborate. Finally, ideas do not belong to anyone; be open-minded to different views on the same subject. Your view is important, but it is not the only one.
If you want to read the paper itself, it is available here.