Clear old jobs while loading the jobs from schedule #405
Conversation
Thanks @sandip-mane! Maybe we should update some tests too? I think the related tests are still using the non-bang version.
```diff
 out = Sidekiq::Cron::Job.load_from_hash! @jobs_hash
 assert_equal out.size, 0, "should have no errors"
 assert_equal Sidekiq::Cron::Job.all.size, 2, "Should have 2 jobs after load"

-out_2 = Sidekiq::Cron::Job.load_from_hash @jobs_hash
+out_2 = Sidekiq::Cron::Job.load_from_hash! @jobs_hash
```
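For context, here is a minimal in-memory sketch of the semantics under discussion, assuming the non-bang `load_from_hash` only adds or updates the given jobs while the bang variant first destroys all existing jobs. The `JobStore` class below is a hypothetical stand-in for the real Redis-backed `Sidekiq::Cron::Job`, not the gem's actual implementation:

```ruby
# Hypothetical in-memory stand-in for the Redis-backed job store,
# illustrating the bang vs. non-bang loading behavior discussed above.
class JobStore
  def initialize
    @jobs = {}
  end

  def all
    @jobs.keys
  end

  # Non-bang: adds/updates the given jobs, leaves others untouched.
  def load_from_hash(hash)
    hash.each { |name, args| @jobs[name] = args }
  end

  # Bang: destroys everything first, so only the given jobs survive.
  def load_from_hash!(hash)
    @jobs.clear
    load_from_hash(hash)
  end
end

store = JobStore.new
store.load_from_hash("dynamic_job" => { "cron" => "* * * * *" })
store.load_from_hash("yaml_job" => { "cron" => "0 * * * *" })
store.all # => ["dynamic_job", "yaml_job"]

store.load_from_hash!("yaml_job" => { "cron" => "0 * * * *" })
store.all # => ["yaml_job"]  (the dynamically created job is gone)
```

This is exactly the failure mode reported later in the thread: if the schedule loader uses the bang variant on boot, any job created programmatically outside the YAML schedule is wiped on every restart.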
I was referring to the ScheduleLoader tests: https://github.com/sidekiq-cron/sidekiq-cron/blob/master/test/unit/schedule_loader_test.rb. Not sure if those should be updated too 🤔.
Yes, it seemed like the right thing to do. I have updated the tests.
Codecov Report
```
@@          Coverage Diff           @@
##             master     #405   +/- ##
=========================================
  Coverage          ?   93.36%
=========================================
  Files             ?       10
  Lines             ?      482
  Branches          ?        0
=========================================
  Hits              ?      450
  Misses            ?       32
  Partials          ?        0
```
This should have been marked as a breaking change. I used to create some cron jobs programmatically, and they are now gone on every restart.
This PR introduced a massive issue for us: it destroys existing jobs when we deploy.
We only load some of our schedules from a YAML file; because of this change, the jobs that are dynamically created and stored in Redis are cleared.
@markets shall we revert this change? The issue with dynamic cron jobs will continue to occur on every deploy IMO. I think the best way forward would be to document the usage of dynamic cron jobs. Thoughts?
@rchasman are all jobs from
hmm 🤔 maybe we should revert this change... It seemed like a good idea, but it breaks dynamic jobs, which is a worse scenario than stale jobs lingering in the schedule (those can at least be cleaned up manually). Another idea (probably the ideal solution?) is to only clean up the jobs defined in the YAML, but right now we don't have a mechanism (an attribute or an extra data structure) to distinguish dynamic jobs from static ones (defined in the YAML). So yes, maybe it's better to revert it and document it?
My honest opinion: I feel this should be the plan:
Agree with the plan! Please feel free to send PRs to move this forward; unfortunately, I have very limited time at this point to work on these changes.
hey @sandip-mane 👋🏼 I already merged the revert, but before pushing a new release I think it would be better to have the "dynamic crons" thing (points 2 and 3 of the plan). This way, we'll publish a much more solid release. If we release only the revert, the original issues may arise again... What do you think?
Sure sure, I will try to send PRs for the other 2 issues by this weekend (working on a release towards a deadline).
Patch for #396