Prevent multiple QueueEvents handlers from processing same message #2221
Comments
QueueEvents is not designed to process jobs. You use the Worker instance for this. It actually scales nicely horizontally, as you can add as many workers as you see fit.
Sure, I'm talking about the event messages, not jobs.
A message is the same as a job. The worker receives a "job/message" and does something with it.
Events are useful for debugging, updating progress, etc. |
Okay, so again: we're not supposed to do any transactional work inside BullMQ event listeners, because they are not guaranteed to be delivered. Is that correct? What do you recommend instead of event listeners for performing rollback work?
As I mentioned before, you use the worker for that. The worker gets jobs/messages and you process them in the processor function. This is all architected so that you can enable concurrent processing per worker, delivery guarantees, and so on.
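The distinction being drawn here can be sketched with plain in-memory stand-ins (illustrative only, not BullMQ internals): workers *compete* for jobs, so each job is handed to exactly one worker no matter how many you add, while QueueEvents is *broadcast*, so every listener instance sees every event.

```typescript
// Worker model: each pop hands out a distinct job.
// BullMQ does the equivalent atomically in Redis; `shift` stands in here.
function popJob(queue: string[]): string | undefined {
  return queue.shift();
}

// QueueEvents model: every registered listener receives every event.
function broadcast(listeners: Array<(e: string) => void>, event: string): void {
  for (const listener of listeners) listener(event);
}
```

With the pop model, adding a second worker just splits the queue between them; with the broadcast model, adding a second listener doubles the deliveries, which is exactly the duplication described later in this thread.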
Going back to the original topic, how can I horizontally scale a QueueEvents listener?
Are event delivery guarantees not something that BullMQ can offer for event listeners? I think it's possible to write an event emitter that will redeliver forever until a consumer acknowledges it (example below) -- is there a technical limitation that prevents BullMQ from emulating this behavior with redis? e.g.
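One possible sketch of such an acknowledge-or-redeliver emitter (illustrative names, not BullMQ API; a real implementation would redeliver on a timer and persist pending messages somewhere durable such as Redis, rather than keeping them in process memory):

```typescript
// A message stays "pending" until some handler calls ack(); until then,
// every redelivery pass hands it to the handlers again (at-least-once).
type Handler = (msg: string, ack: () => void) => void;

class AckEmitter {
  private handlers: Handler[] = [];
  private pending = new Map<string, string>(); // id -> payload

  on(handler: Handler): void {
    this.handlers.push(handler);
  }

  emit(id: string, payload: string): void {
    this.pending.set(id, payload);
    this.deliver(id);
  }

  // In a real system this would run on an interval / after a visibility timeout.
  redeliverPending(): void {
    for (const id of this.pending.keys()) this.deliver(id);
  }

  private deliver(id: string): void {
    const payload = this.pending.get(id);
    if (payload === undefined) return;
    for (const h of this.handlers) {
      h(payload, () => this.pending.delete(id)); // ack removes the message
      if (!this.pending.has(id)) break; // stop once acknowledged
    }
  }
}
```

The trade-off is that handlers must be idempotent, since a crash after processing but before acking means the message is delivered again.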
Why would you like to do something like that? Maybe it is better that you explain your use case, so I can tell you if it is possible to achieve with BullMQ or not.
@manast I will send you an email at the commercial support address in your docs that describes our system in more detail. (We are BullMQ Pro subscribers.) Thank you.
Great. I will close this issue then.
@manast I have a use case where I'd like to prevent multiple QueueEvents handlers from processing the same message. I'm working with a system that makes use of flows, and we'd like to listen to the events those flows emit. If these QueueEvents handlers are scaled horizontally, we get events that are processed multiple times. How can we use this class while ensuring that each event is only consumed once? Or maybe there is a better way to architect this solution.
@godinja I think the most robust solution is to update the database status from within the job processor itself. When the job starts processing you can update the database as you would from the active event, and so on. QueueEvents is mostly useful for updating UIs or for debugging purposes.
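This suggestion can be sketched as follows. `db` is an in-memory stand-in for a real datastore and `processor` is a hypothetical wrapper; the actual BullMQ Worker wiring (which needs a live Redis connection) is shown only as a comment.

```typescript
const db = new Map<string, string>();

// Track job status from inside the processor instead of from QueueEvents.
async function processor(jobId: string, work: () => Promise<void>): Promise<void> {
  db.set(jobId, "active"); // what a listener would do on the 'active' event
  try {
    await work();
    db.set(jobId, "completed"); // 'completed' event equivalent
  } catch (err) {
    db.set(jobId, "failed"); // 'failed' event equivalent; do rollback work here
    throw err; // rethrow so BullMQ still marks the job as failed
  }
}

// With BullMQ this would run inside the Worker, e.g.:
// new Worker("myQueue", async (job) => processor(job.id!, () => doWork(job)));
```

Because the status updates happen in the same process that holds the job, they inherit the worker's delivery guarantees: each job is processed (and its status written) by exactly one worker.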
The problem with this, at least in our use case, is that the processor for the parent job will never execute if a child fails, since we enable the option that fails the parent when a child fails. Would you ever consider implementing some sort of event-lock functionality that can be conditionally enabled by any of the event emitters?
What do you mean by "event lock"? Btw, even if you were getting the same event several times because you happen to have several instances of QueueEvents running, what does it matter, other than being a bit more inefficient?
Is your feature request related to a problem? Please describe.
AFAICT, there's currently no way of instructing the QueueEvents listener to lock incoming messages, such that only one QueueEvents object will process any given message.
That makes it impossible to scale the QueueEvents horizontally, and creates a bottleneck in any system that relies on BullMQ.
Describe the solution you'd like
The QueueEvents object should lock or lease the message in a way that allows re-delivery if the node fails to handle it. In the happy path, only one listener should handle any given message.
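A minimal sketch of the requested lock-or-lease behavior (not an existing BullMQ feature). `tryClaim` is shown against an in-memory store for illustration; in a real deployment the claim would live in shared storage, e.g. Redis `SET key value NX PX ttl`, so that the lease is visible to all nodes and expires if the claiming node dies, allowing re-delivery.

```typescript
const claims = new Map<string, number>(); // eventId -> lease expiry timestamp

// Returns true if this caller won the claim for eventId.
// `now` is injectable for testing; defaults to wall-clock time.
function tryClaim(eventId: string, ttlMs: number, now: number = Date.now()): boolean {
  const expiry = claims.get(eventId);
  if (expiry !== undefined && expiry > now) return false; // another node holds it
  claims.set(eventId, now + ttlMs); // a lease, not a permanent lock
  return true;
}

// Inside each horizontally scaled QueueEvents handler (hypothetical wiring):
// queueEvents.on("completed", ({ jobId }) => {
//   if (!tryClaim(`completed:${jobId}`, 30_000)) return; // another node won
//   handleCompleted(jobId);
// });
```

In the happy path only one listener's claim succeeds; if that node crashes before finishing, the lease expires and a later delivery can be claimed by another node.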
Describe alternatives you've considered
Additional context
We're using BullMQ Pro with the batches functionality.