Server middleware removes payload hash key before expiration #26

Closed
tjhosford opened this issue Dec 12, 2013 · 3 comments

@tjhosford

Hi,

I'm using this gem to throttle duplicate jobs queued within a 24 hour window.

Unfortunately, the server-side middleware is not letting me achieve this. Once the job is processed by the server middleware, the payload_hash key is removed, whereas I expect it to just expire after my TTL. To get around this I've put in a hack to set the "unique_unlock_order" option to -1, so that the key is never deleted.
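Roughly, the workaround described above looks something like this in a worker (a minimal sketch; apart from `unique_unlock_order: -1`, which is the hack mentioned here, the option names and worker details are assumptions and may differ by gem version):

```ruby
class ThrottledWorker
  include Sidekiq::Worker

  # unique: true enables the gem's uniqueness lock; unique_unlock_order: -1 is
  # the workaround so the server middleware never deletes the payload hash key,
  # leaving it to disappear only when its Redis TTL expires.
  sidekiq_options unique: true, unique_unlock_order: -1

  def perform(*args)
    # job body: anything that should run at most once per 24-hour window
  end
end
```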

I'm a bit confused: if the purpose of the gem is to ensure unique jobs, why would the key ever be removed rather than just letting it expire on its own?

I'm also a bit unclear on the use case for the server-side piece entirely, so maybe you could provide an example?

@mhenrixon
Owner

I'll check that out; it sounds like there is room for improvement. I'm not sure what you mean by the server-side piece, however. Could you elaborate?

@mhenrixon
Owner

To me it sounds like you are after something like https://github.com/tobiassvn/sidetiq, so I am closing this issue. The payload is supposed to be cleared when the job runs successfully, and it is only ever kept for the maximum time configured for the payload. As I see it, there is no need to keep the payload after the job has completed; that is a task for a scheduler.

mhenrixon added a commit that referenced this issue Jan 27, 2014
@tjhosford
Author

@mhenrixon Sorry I never responded to this... I never received a GitHub email and, to be honest, forgot about the comment.

I think the point here is that this gem just ensures no two identical jobs are queued during a given window, whereas it sounds like it ensures that no two identical jobs will be processed during the same window.

Consider the following (somewhat contrived) example: I want to send one email (a background job) to a user when their account receives a new notification, but only once within a 24-hour period so as not to bombard them with emails. The argument to the job is just the user_id of the user to email.

With the current implementation, after the first email job is processed, the uniqueness check no longer happens, so the user will keep getting emails as new notifications are created (assuming the queue empties quickly).
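To make the example concrete, a minimal sketch of such a worker (the worker name, NotificationMailer, and the uniqueness option shown are illustrative assumptions, not the gem's documented API):

```ruby
class NewNotificationEmailWorker
  include Sidekiq::Worker

  # The uniqueness key is derived from the job arguments, here just user_id,
  # so a second perform_async for the same user inside the window should be
  # dropped. With the behaviour described above, the key is deleted as soon as
  # the first job finishes, so later enqueues go through and the user can
  # receive several emails within 24 hours.
  sidekiq_options unique: true

  def perform(user_id)
    NotificationMailer.new_notification(user_id).deliver
  end
end

# Called every time a new notification is created for the user:
NewNotificationEmailWorker.perform_async(user.id)
```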

I saw you updated the docs, which may help a bit, but in any case I hope that illustrates where the confusion came from.
