params/metrics: cache remote files #9932
Conversation
@skshetry hey, could you please summarize this at a high level so that I know what to expect - what exactly does it solve (caching? just the ability to use DVC-tracked metrics/params?). btw, do we need tests for this as well?
This only saves the metrics/params files from the remote to the cache so they can be reused on successive invocations, similar to what we did for plots. Using DVC-tracked metrics/params was already supported when we introduced top-level metrics/params, but it was broken in certain scenarios; that should be fixed by #9909. This PR is a temporary solution, so consider it an optimization rather than a feature. So, no test is required. It uses […]. A better solution would be to use […].
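For illustration only (an editor's sketch, not DVC's actual code; `fetch_with_cache` and its parameters are hypothetical), the caching pattern described above boils down to: derive a stable key for the remote file, download on a cache miss, and return the cached copy on every later call.

```python
import hashlib
import os
import shutil


def fetch_with_cache(remote_path: str, cache_dir: str, download) -> str:
    """Return a local path for remote_path, downloading at most once.

    `download(remote_path, dest)` is whatever actually talks to the
    remote; everything else here is plain local-cache bookkeeping.
    """
    # Hypothetical cache key: DVC keys objects by content hash, which we
    # don't have before downloading, so this sketch keys by remote path.
    key = hashlib.sha256(remote_path.encode()).hexdigest()
    cached = os.path.join(cache_dir, key)
    if not os.path.exists(cached):  # cache miss: hit the remote once
        os.makedirs(cache_dir, exist_ok=True)
        tmp = cached + ".tmp"
        download(remote_path, tmp)
        shutil.move(tmp, cached)  # publish into the cache
    return cached  # cache hit on all later invocations
```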
perfect, thanks. Does this all mean I can also remove […]?
yep, my overall take is that we should be testing expected behavior (e.g. that things don't break during refactoring) as well, not only bug fixes. Benchmarks are a good option I guess, but they're even harder to maintain and to add in that quantity. It's an interesting question what the best way is to put all these behavioral assumptions into tests.
in cases like these I feel functional tests are probably enough ...
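As a sketch of what such a functional test could assert (hypothetical pytest test, reusing the `fetch_with_cache` helper sketched above): the observable behavior is that repeated reads hit the remote exactly once.

```python
def test_remote_file_is_downloaded_once(tmp_path):
    calls = []

    def fake_download(path, dest):
        calls.append(path)  # record every remote fetch
        with open(dest, "w") as f:
            f.write("metric: 1")

    cache = str(tmp_path / "cache")
    first = fetch_with_cache("s3://bucket/metrics.yaml", cache, fake_download)
    second = fetch_with_cache("s3://bucket/metrics.yaml", cache, fake_download)

    assert first == second                        # same cached path both times
    assert calls == ["s3://bucket/metrics.yaml"]  # exactly one remote fetch
```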
yes, you should not need […]
I think the push for keeping […]. I'd much rather test dependencies and components with well-defined responsibilities and boundaries than test at the high level. Also, adding tests at the high level is not possible at all, given there are 6 dependents here just for […]. We have similar issues in […]. These kinds of tests only give a false sense of security, and lead to CI wait times of >30 minutes.
👍
to make it clear, I was not suggesting a very specific approach, it was off the top of my head. Please consider this a retro-like discussion. I'm more concerned here with the "high" level. If, let's say, something breaks the next time we are refactoring this - it'll be a p0 for us. How can we prevent that from happening? My intuition is that there should be a safeguard of some form to make refactoring safe. Can it be a parametrized set of tests (see the sketch after this comment) - maybe; can it be a benchmark - probably; does it mean we need to split things - I don't know, I don't have enough context for this. But the fact that we do not have "asserts" (in a broad sense) on the behavior that is important to us does bother me.
if we have tests like this, we should of course be reconsidering them
I would personally always prioritize fewer p0/p1 issues for end users over CI time
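One possible shape for such a safeguard (again an editor's sketch, not an existing DVC test, and again leaning on the hypothetical `fetch_with_cache` from above): a parametrized test that pins the expected behavior across the scenarios that matter, so a refactor that breaks any one of them fails loudly.

```python
import pytest


@pytest.mark.parametrize(
    "remote_path",
    [
        "s3://bucket/params.yaml",
        "s3://bucket/metrics.json",
        "s3://bucket/nested/dir/metrics.yaml",
    ],
)
def test_cached_read_matches_remote(tmp_path, remote_path):
    def fake_download(path, dest):
        with open(dest, "w") as f:
            f.write(f"source: {path}")  # pretend remote content

    cache = str(tmp_path / "cache")
    local = fetch_with_cache(remote_path, cache, fake_download)

    # The behavioral "assert": what we read locally is what the remote had.
    with open(local) as f:
        assert f.read() == f"source: {remote_path}"
```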
See iterative/dvc.org#4851 (comment).