AR-Net Future Iteration Method Produces Autocorrelation Issues, Historic Metrics Decline #1519
Replies: 2 comments
duplicate of #1546
We use NP for a number of prediction tasks, and I've been focusing on improving the accuracy of our forecasts. One problem I keep running into is that the shift method the NP team uses to iteratively produce future predictions is undocumented, undiscussed, and unexamined. Can someone from the NP team explain it to me?
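For context on what I mean by an iterative shift method: my working assumption (not NP's actual implementation, which is exactly what I'm asking about) is a recursive scheme where each one-step-ahead prediction is fed back in as an input for the next step, so yhat30 is built on 29 earlier yhats rather than on observed data. A minimal sketch of that scheme with a plain least-squares AR(p) model:

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model: y[t] = c + sum_i w[i] * y[t-i]."""
    X = np.column_stack([y[p - i - 1 : len(y) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef  # [c, w1, ..., wp]

def recursive_forecast(y, coef, horizon):
    """Iterate one-step-ahead predictions, feeding each yhat back as a lag.

    After the first step, the model's inputs are its own predictions, so
    per-step errors compound as the horizon grows.
    """
    p = len(coef) - 1
    hist = list(y[-p:])
    out = []
    for _ in range(horizon):
        lags = hist[::-1][:p]            # most recent value first
        yhat = coef[0] + np.dot(coef[1:], lags)
        out.append(yhat)
        hist.append(yhat)                # prediction replaces a real observation
    return np.array(out)
```

If NP does something like this, it would explain why later yhats inherit (and amplify) the errors of earlier ones.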
For context, here is the result of the training fit: [plot not shown]
Here is the result of the 30-day forecast fit: [plot not shown]
As I progress forward in my future predictions, performance on the historical data suffers: the historic fits become less and less accurate, and every metric degrades. I first noticed this in the uncertainty measurements. I have been using CQR, and my miscoverage rate increases roughly linearly with the yhat step.
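For anyone wanting to reproduce the miscoverage check: this is a minimal sketch of how I compute the empirical miscoverage rate per forecast step, assuming the actuals and the CQR interval bounds have been aligned into `(n_windows, horizon)` arrays. The function name and array layout are my own, not part of NP's API:

```python
import numpy as np

def miscoverage_by_horizon(y_true, lo, hi):
    """Fraction of actuals falling outside [lo, hi], per forecast step.

    y_true, lo, hi: arrays of shape (n_windows, horizon), where column k
    holds the yhat(k+1) values across rolling evaluation windows.
    Returns one miscoverage rate per horizon step.
    """
    outside = (y_true < lo) | (y_true > hi)
    return outside.mean(axis=0)
```

With well-calibrated intervals the returned rates should be flat across columns; in my runs they climb almost linearly from yhat1 to yhat30.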
So I began to examine my results and found it is no fluke: each successive yhat performs worse on its own historic data. I then looked at the residuals, and that's where I found an indicator of what is happening: whatever operation is being performed on the data is introducing autocorrelation issues, and very significant ones.
In one particular dataset (daily data, 1.5 years of history, 30-day forward prediction window), yhat1 had: [statistics not shown]
However, when I looked at yhat30, it had: [statistics not shown]
For yhat1, the autocorrelation falls within acceptable parameters. For yhat30, the autocorrelation is very pronounced. Durbin-Watson says we've introduced positive autocorrelation errors, and Ljung-Box likewise flags significant issues, on its own historic prediction data.
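For reference, both statistics are easy to compute directly from the residuals of each yhat column; here is a minimal numpy sketch (function names are mine, and the Ljung-Box helper returns only the Q statistic, to be compared against a chi-squared distribution with `lags` degrees of freedom):

```python
import numpy as np

def durbin_watson(resid):
    """DW statistic: ~2 means no lag-1 autocorrelation; <2 positive; >2 negative."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def ljung_box_q(resid, lags):
    """Ljung-Box Q = n(n+2) * sum_k acf(k)^2 / (n-k), for k = 1..lags."""
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r ** 2)
    acf = np.array([np.sum(r[k:] * r[:-k]) / denom for k in range(1, lags + 1)])
    return n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, lags + 1)))
```

Running these on the yhat1 residuals versus the yhat30 residuals makes the degradation obvious: DW drifts well below 2 and Q explodes as the horizon grows.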
TLDR: Whatever is being done to produce these future predictions introduces autocorrelation issues, does not maintain consistency across AR-lag numbers, and seems to produce errors that grow linearly with the horizon on the same historical data. These are significant results.