Merge branch 'master' into feat/better-worker-concurrency
manast authored Nov 13, 2023
2 parents 073d741 + 526f6e8 commit b587695
Showing 57 changed files with 2,826 additions and 1,503 deletions.
52 changes: 51 additions & 1 deletion .github/workflows/test.yml
@@ -58,7 +58,7 @@ jobs:
exit 1
fi
  compatibility:
  node-redis:
    runs-on: ubuntu-latest

    name: testing node@${{ matrix.node-version }}, redis@${{ matrix.redis-version }}
@@ -87,6 +87,56 @@ jobs:
      - run: yarn build
      - run: yarn test

  node-dragonflydb:
    runs-on: ubuntu-latest

    env:
      node-version: lts/*

    name: testing node@lts/*, dragonflydb@latest

    services:
      dragonflydb:
        image: docker.dragonflydb.io/dragonflydb/dragonfly
        env:
          DFLY_cluster_mode: emulated
          DFLY_lock_on_hashtags: true
        ports:
          - 6379:6379

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3 # v3
      - name: Use Node.js ${{ env.node-version }}
        uses: actions/setup-node@v3 # v3
        with:
          node-version: lts/*
          cache: 'yarn'
      - run: yarn install --ignore-engines --frozen-lockfile --non-interactive
      - run: yarn build
      - run: BULLMQ_TEST_PREFIX={b} yarn test

  node-upstash:
    runs-on: ubuntu-latest
    continue-on-error: true

    env:
      node-version: lts/*
      REDIS_HOST: ${{ secrets.REDIS_HOST }}

    name: testing node@lts/*, upstash@latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3 # v3
      - name: Use Node.js ${{ env.node-version }}
        uses: actions/setup-node@v3 # v3
        with:
          node-version: lts/*
          cache: 'yarn'
      - run: yarn install --ignore-engines --frozen-lockfile --non-interactive
      - run: yarn build
      - run: yarn test

  python:
    runs-on: ubuntu-latest

1 change: 1 addition & 0 deletions .mocharc.js
@@ -3,4 +3,5 @@ module.exports = {
  file: ['./mocha.setup.ts'],
  spec: ['./tests/test_*.ts'],
  timeout: 4000,
  'trace-warnings': true,
};
40 changes: 29 additions & 11 deletions README.md
@@ -38,7 +38,7 @@ You can find tutorials and news in this blog: https://blog.taskforce.sh/

# Official FrontEnd

[<img src="http://taskforce.sh/assets/logo_square.png" width="200" alt="Taskforce.sh, Inc" style="padding: 200px"/>](https://taskforce.sh)
[<img src="http://taskforce.sh/assets/logo_square.png" width="150" alt="Taskforce.sh, Inc" style="padding: 200px"/>](https://taskforce.sh)

Supercharge your queues with a professional front end:

@@ -49,6 +49,32 @@ Supercharge your queues with a professional front end:

Sign up at [Taskforce.sh](https://taskforce.sh)

# 🚀 Sponsors 🚀

<table cellspacing="0" cellpadding="0" border="0">
<tr>
<td>
<a href="https://www.dragonflydb.io/">
<img src="https://raw.githubusercontent.com/dragonflydb/dragonfly/main/.github/images/logo-full.svg" width=550 alt="Dragonfly" />
</a>
</td>
<td>
Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings some important advantages over Redis™, such as massively
better performance by utilizing all available CPU cores, and faster, more memory-efficient data structures. Read more <a href="https://www.dragonflydb.io/docs/integrations/bullmq">here</a> on how to use it with BullMQ.
</td>
</tr>
<tr>
<td>
<a href="https://dashboard.memetria.com/new?utm_campaign=BULLMQ">
<img src="https://www.memetria.com/images/logo/memetria-logo.svg" width=350 alt="Memetria for Redis" />
</a>
</td>
<td>
If you need high quality production Redis instances for your BullMQ project, please consider subscribing to <a href="https://dashboard.memetria.com/new?utm_campaign=BULLMQ">Memetria for Redis</a>, a leader in Redis hosting that works perfectly with BullMQ. Use the promo code "BULLMQ" when signing up to help us sponsor the development of BullMQ!
</td>
</tr>
</table>
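For context, here is a minimal sketch of pointing a BullMQ queue at a local Dragonfly instance, assuming Dragonfly runs with `cluster_mode=emulated` and `lock_on_hashtags=true` as in the CI job above; the queue name and hash-tagged prefix are illustrative:

```typescript
import { Queue } from 'bullmq';

// Dragonfly speaks the Redis protocol on the default port. The hash-tagged
// prefix ({...}) groups all of this queue's keys together, which plays well
// with Dragonfly's lock_on_hashtags setting.
const queue = new Queue('paint', {
  connection: { host: 'localhost', port: 6379 },
  prefix: '{bullmq}',
});
```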

# Used by

Some notable organizations using BullMQ:
@@ -115,8 +141,8 @@ Some notable organizations using BullMQ:
<td valign="center">
<a href="https://www.nocodb.com">
<img
src="https://www.nocodb.com/brand/logo-text.png"
width="150"
src="https://github.com/nocodb/nocodb/raw/develop/packages/nc-gui/assets/img/icons/512x512.png"
width="50"
alt="NoCodeDB"
/>
</a>
@@ -200,14 +226,6 @@ Since there are a few job queue solutions, here is a table comparing them:
| UI ||||| ||
| Optimized for | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs | Messages | Jobs |

# 🚀 Sponsor 🚀

<a href="https://dashboard.memetria.com/new?utm_campaign=BULLMQ">
<img src="https://www.memetria.com/images/logo/memetria-logo.svg" width=300 alt="Memetria for Redis" />
</a>

If you need high quality production Redis instances for your BullMQ project, please consider subscribing to [Memetria for Redis](https://dashboard.memetria.com/new?utm_campaign=BULLMQ), leaders in Redis hosting that works perfectly with BullMQ. Use the promo code "BULLMQ" when signing up to help us sponsor the development of BullMQ!

## Contributing

Fork the repo, make some changes, submit a pull-request! Here is the [contributing](contributing.md) doc that has more details.
4 changes: 2 additions & 2 deletions docs/gitbook/bull-3.x-migration/compatibility-class.md
@@ -1,6 +1,6 @@
# Compatibility class

The Queue3 class is targeted to simplify migration of projects based on Bull 3. Though it does not offer 100% API and functional compatibility, upgrading to BullMQ with this class should be easier for users familiar with Bull 3.
The `Queue3` class is targeted to simplify migration of projects based on Bull 3. Though it does not offer 100% API and functional compatibility, upgrading to BullMQ with this class should be easier for users familiar with Bull 3.

Differences in interface include

@@ -11,7 +11,7 @@ Differences in interface include
* name property is mandatory in `add()` method
* concurrency is moved from `process()` argument to queue options

Functional differences generally include only absence of named processors feature and minor changes in local and global events set. The mandatory `name` property in `add()` method can contain any string and getting saved to Redis as is. When job is in progress, you can read this value as `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.
Functional differences generally include only absence of named processors feature and minor changes in local and global events set. The mandatory `name` property in `add()` method can contain any string and gets saved to Redis as is. When a job is in progress, you can read this value as `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.

The all-in-one example:

4 changes: 2 additions & 2 deletions docs/gitbook/bull/patterns/manually-fetching-jobs.md
@@ -1,6 +1,6 @@
# Manually fetching jobs

If you want to manually fetch the jobs from the queue instead of letting the automatic processor taking care of it, this pattern is for your.
If you want to manually fetch the jobs from the queue instead of letting the automatic processor take care of it, this pattern is for you.

Manually transitioning states for jobs can be done with a few simple methods.

@@ -53,4 +53,4 @@ if (nextJobdata) {

**Note**

By default the lock duration for a job that has been returned by `getNextJob` or `moveToCompleted` is 30 seconds, if it takes more time than that the job will be automatically marked as stalled and depending on the max stalled options be moved back to the wait state or marked as failed. In order to avoid this you must use `job.extendLock(duration)` in order to give you some more time before the lock expires. The recommended is to extend the lock when half the lock time has passsed.
By default, the lock duration for a job that has been returned by `getNextJob` or `moveToCompleted` is 30 seconds. If processing takes longer than that, the job will automatically be marked as stalled and, depending on the max stalled options, either be moved back to the wait state or marked as failed. To avoid this, you must call `job.extendLock(duration)` to give yourself more time before the lock expires. It is recommended to extend the lock when half of the lock time has passed.
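A minimal sketch of this flow, assuming a Bull queue named `manual` and a hypothetical `doSomeWork` helper; the lock is extended at half the default 30-second lock duration, following the recommendation above:

```typescript
import Queue from 'bull';

const queue = new Queue('manual');

// Hypothetical processing function, used only for illustration.
const doSomeWork = async (data: any) => data;

async function processNextJob() {
  const job = await queue.getNextJob();
  if (!job) return;

  // Keep the lock alive while we work (extend at half the 30s default).
  const lockTimer = setInterval(() => job.extendLock(30000), 15000);

  try {
    const result = await doSomeWork(job.data);
    await job.moveToCompleted(result, true);
  } catch (err) {
    await job.moveToFailed({ message: (err as Error).message }, true);
  } finally {
    clearInterval(lockTimer);
  }
}
```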
2 changes: 1 addition & 1 deletion docs/gitbook/bull/patterns/message-queue.md
@@ -1,6 +1,6 @@
# Message queue

Bull can also be used for persistent message queues. This is a quite useful feature in some use cases. For example, you can have two servers that need to communicate with each other. By using a queue the servers do not need to be online at the same time, so this creates a very robust communication channel. You can treat `add` as _send_ and `process` as _receive_:
Bull can also be used for persistent message queues. This is a quite useful feature in some use cases. For example, you can have two servers that need to communicate with each other. By using a queue, the servers do not need to be online at the same time, so this creates a very robust communication channel. You can treat `add` as _send_ and `process` as _receive_:

Server A:

2 changes: 1 addition & 1 deletion docs/gitbook/bull/patterns/persistent-connections.md
@@ -1,6 +1,6 @@
# Persistent connections

A crucial feature for a subsystem in a microservice architecture is that it should automatically handle disconnections from other services and keep this connections alive for as long as the service is running.
A crucial feature for a subsystem in a microservice architecture is that it should automatically handle disconnections from other services and keep these connections alive for as long as the service is running.

For example, if your service has a connection to a database, and the connection to said database breaks, you would like that service to handle this disconnection as gracefully as possible and as soon as the database is back online continue to work without human intervention.

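As a sketch of what this can mean for the Redis connection itself, a queue can be created with ioredis-style options whose `retryStrategy` keeps reconnecting; the queue name and timing values below are illustrative:

```typescript
import Queue from 'bull';

// Keep retrying the connection forever, backing off up to 2 seconds
// between attempts, so the queue recovers without human intervention.
const queue = new Queue('resilient', {
  redis: {
    retryStrategy: (times: number) => Math.min(times * 50, 2000),
  },
});
```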
3 changes: 0 additions & 3 deletions docs/gitbook/bull/patterns/redis-cluster.md
@@ -12,6 +12,3 @@ const queue = new Queue('cluster', {

If you use several queues in the same cluster, you should use different prefixes so that the queues are evenly placed in the cluster nodes.

###

\
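For instance, a sketch of two queues pinned to different cluster slots via hash-tagged prefixes; the names and prefixes are illustrative:

```typescript
import Queue from 'bull';

// The hash tags ({...}) keep each queue's keys in a single cluster slot,
// while the different prefixes spread the queues across cluster nodes.
const queueA = new Queue('emails', { prefix: '{emails}' });
const queueB = new Queue('reports', { prefix: '{reports}' });
```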
4 changes: 0 additions & 4 deletions docs/gitbook/bull/patterns/returning-job-completions.md
@@ -2,9 +2,5 @@

A common pattern is where you have a cluster of queue processors that just process jobs as fast as they can, and some other services that need to take the result of these processors and do something with it, maybe storing results in a database.

\
The most robust and scalable way to accomplish this is by combining the standard job queue with the message queue pattern: a service sends jobs to the cluster just by opening a job queue and adding jobs to it, and the cluster will start processing as fast as it can. Every time a job gets completed in the cluster, a message is sent to a results message queue with the result data, and this queue is listened to by some other service that stores the results in a database.



\
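A rough sketch of this combination, assuming a `work` queue processed by the cluster, a `results` queue consumed by a storage service, and a hypothetical `saveToDatabase` helper:

```typescript
import Queue from 'bull';

const workQueue = new Queue('work');
const resultsQueue = new Queue('results');

// Hypothetical persistence helper, used only for illustration.
const saveToDatabase = async (jobId: string | number, result: unknown) => { /* ... */ };

// In the processing cluster: forward every completed job's result as a message.
workQueue.on('completed', async (job, result) => {
  await resultsQueue.add({ jobId: job.id, result });
});

// In the storage service: "receive" the results and persist them.
resultsQueue.process(async job => {
  await saveToDatabase(job.data.jobId, job.data.result);
});
```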
1 change: 0 additions & 1 deletion docs/gitbook/bull/patterns/reusing-redis-connections.md
@@ -35,5 +35,4 @@ const opts = {

const queueFoo = new Queue("foobar", opts);
const queueQux = new Queue("quxbaz", opts);

```
2 changes: 1 addition & 1 deletion docs/gitbook/bullmq-pro/install.md
@@ -37,5 +37,5 @@ If you use docker you must make sure that you also add the _**.npmrc**_ file abo
```docker
WORKDIR /app
ADD .npmrc /app/.npmr
ADD .npmrc /app/.npmrc
```
2 changes: 1 addition & 1 deletion docs/gitbook/bullmq-pro/observables/cancelation.md
@@ -1,6 +1,6 @@
# Cancellation

As mentioned, Observables allows for clean cancellation. Currently we support a TTL value that defines the maximum processing time before the job is finally cancelled:
As mentioned, Observables allow for clean cancellation. Currently we support a TTL value that defines the maximum processing time before the job is finally cancelled:

```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
44 changes: 44 additions & 0 deletions docs/gitbook/changelog.md
@@ -1,3 +1,47 @@
## [4.13.2](https://github.com/taskforcesh/bullmq/compare/v4.13.1...v4.13.2) (2023-11-09)


### Bug Fixes

* **backoff:** fix builtin backoff type ([#2265](https://github.com/taskforcesh/bullmq/issues/2265)) [python] ([76959eb](https://github.com/taskforcesh/bullmq/commit/76959eb9d9495eb1b6d2d31fab93c8951b5d3b93))
* **job:** set delay value on current job instance when it is retried ([#2266](https://github.com/taskforcesh/bullmq/issues/2266)) (python) ([76e075f](https://github.com/taskforcesh/bullmq/commit/76e075f54d5745b6cec3cb11305bf3110d963eae))

## [4.13.1](https://github.com/taskforcesh/bullmq/compare/v4.13.0...v4.13.1) (2023-11-08)


### Bug Fixes

* **connection:** better handling of attached listeners ([02474ad](https://github.com/taskforcesh/bullmq/commit/02474ad59a7b340d7bb2a7415ae7a88e14200398))
* **connection:** move redis instance check to queue base ([13a339a](https://github.com/taskforcesh/bullmq/commit/13a339a730f46ff22acdd4a046e0d9c4b7d88679))

# [4.13.0](https://github.com/taskforcesh/bullmq/compare/v4.12.10...v4.13.0) (2023-11-05)


### Features

* **queue:** improve clean to work iteratively ([#2260](https://github.com/taskforcesh/bullmq/issues/2260)) ([0cfa66f](https://github.com/taskforcesh/bullmq/commit/0cfa66fd0fa0dba9b3941f183cf6f06d8a4f281d))

## [4.12.10](https://github.com/taskforcesh/bullmq/compare/v4.12.9...v4.12.10) (2023-11-05)


### Bug Fixes

* update delay job property when moving to delayed set ([#2261](https://github.com/taskforcesh/bullmq/issues/2261)) ([69ece08](https://github.com/taskforcesh/bullmq/commit/69ece08babd7716c14c38c3dd50630b44c7c1897))

## [4.12.9](https://github.com/taskforcesh/bullmq/compare/v4.12.8...v4.12.9) (2023-11-05)


### Bug Fixes

* **add-job:** trim events when waiting-children event is published ([#2262](https://github.com/taskforcesh/bullmq/issues/2262)) (python) ([198bf05](https://github.com/taskforcesh/bullmq/commit/198bf05fa5a4e1ce50081296033a2e0f26ece498))

## [4.12.8](https://github.com/taskforcesh/bullmq/compare/v4.12.7...v4.12.8) (2023-11-03)


### Bug Fixes

* **worker:** keep extending locks while closing workers ([#2259](https://github.com/taskforcesh/bullmq/issues/2259)) ([c4d12ea](https://github.com/taskforcesh/bullmq/commit/c4d12ea3a9837ffd7f58e2134796137c4181c3de))

## [4.12.7](https://github.com/taskforcesh/bullmq/compare/v4.12.6...v4.12.7) (2023-10-29)


22 changes: 19 additions & 3 deletions docs/gitbook/guide/going-to-production.md
@@ -62,12 +62,28 @@ queue.on("error", (err) => {

### Gracefully shut-down workers

Since your workers will run on servers, it is unavoidable that these servers will need to be restarted from time to time. As your workers may be processing jobs when the server is about to restart, it is important to properly close the workers to minimize the risk of stalled jobs. If a worker is killed without waiting for their jobs to complete, these jobs will be marked as stalled and processed automatically when new workers come online (with a waiting time of about 30 seconds by default). However it is better to avoid having stalled jobs, and as mentioned this can be done by closing the workers when the server is going to be restarted. In NodeJS you can accomplish this by listening to the SIGINT signal like this:
Since your workers will run on servers, it is unavoidable that these servers will need to be restarted from time to time. As your workers may be processing jobs when the server is about to restart, it is important to properly close the workers to minimize the risk of stalled jobs. If a worker is killed without waiting for its jobs to complete, these jobs will be marked as stalled and processed automatically when new workers come online (with a waiting time of about 30 seconds by default). However, it is better to avoid having stalled jobs, and as mentioned, this can be done by closing the workers when the server is going to be restarted.

In a Node.js server, it is considered good practice to listen for both `SIGINT` and `SIGTERM` signals to close gracefully. Here's why:

* `SIGINT` is typically sent when a user types Ctrl+C in the terminal to interrupt a process. Your server should listen to this signal during development or when it's running in the foreground, so you can shut it down properly when this key combination is pressed.
* `SIGTERM` is the signal that is usually sent to a process to request its termination. Unlike `SIGKILL`, this signal can be caught by the process (which can then clean up resources and exit gracefully). This is the signal that system daemons, orchestration tools like Kubernetes, or process managers like PM2 typically use to stop a service.

Here is an example of how you can accomplish this:

```typescript
const gracefulShutdown = async (signal) => {
  console.log(`Received ${signal}, closing server...`);
  await worker.close();
  // Other asynchronous closings
  process.exit(0);
};

process.on('SIGINT', () => gracefulShutdown('SIGINT'));
process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
```

Keep in mind that the code above does not guarantee that jobs will never end up stalled, as a job may take longer than the grace period allowed before the server restarts.
4 changes: 4 additions & 0 deletions docs/gitbook/guide/jobs/job-ids.md
@@ -22,6 +22,10 @@ await myQueue.add(
);
```

{% hint style="danger" %}
Custom job ids must not contain the **:** separator, as it would be interpreted as two different values; we also follow the Redis naming convention here. If you need a separator, use a different character, for example **-** or **\_**.
{% endhint %}
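For example, a short sketch of a custom id that uses **-** as separator; the queue name, payload, and id are illustrative:

```typescript
import { Queue } from 'bullmq';

const myQueue = new Queue('Paint');

// Use '-' (not ':') when composing a custom job id.
await myQueue.add('wall', { color: 'pink' }, { jobId: 'customer-1234' });
```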

## Read more:

- 💡 [Duplicated Event Reference](https://api.docs.bullmq.io/interfaces/v4.QueueEventsListener.html#duplicated)
14 changes: 14 additions & 0 deletions docs/gitbook/guide/jobs/repeatable.md
@@ -52,6 +52,20 @@ There are some important considerations regarding repeatable jobs:
- If there are no workers running, repeatable jobs will not accumulate next time a worker is online.
- Repeatable jobs can be removed using the [removeRepeatable](https://api.docs.bullmq.io/classes/v4.Queue.html#removeRepeatable) method or [removeRepeatableByKey](https://api.docs.bullmq.io/classes/v4.Queue.html#removeRepeatableByKey), as shown below.

```typescript
import { Queue } from 'bullmq';

const repeat = { pattern: '*/1 * * * * *' };

const myQueue = new Queue('Paint');

const job1 = await myQueue.add('red', { foo: 'bar' }, { repeat });
const job2 = await myQueue.add('blue', { foo: 'baz' }, { repeat });

const isRemoved1 = await myQueue.removeRepeatableByKey(job1.repeatJobKey);
const isRemoved2 = await myQueue.removeRepeatable('blue', repeat);
```

All repeatable jobs have a repeatable job key that holds some metadata of the repeatable job itself. It is possible to retrieve all the current repeatable jobs in the queue calling [getRepeatableJobs](https://api.docs.bullmq.io/classes/v4.Queue.html#getRepeatableJobs):

```typescript
16 changes: 15 additions & 1 deletion docs/gitbook/guide/retrying-failing-jobs.md
@@ -1,6 +1,6 @@
# Retrying failing jobs

As your queues processes jobs, it is inevitable that over time some of these jobs will fail. In BullMQ, a job is considered failed in the following scenarios:
As your queues process jobs, it is inevitable that over time some of these jobs will fail. In BullMQ, a job is considered failed in the following scenarios:

- The processor function defined in your [Worker](https://docs.bullmq.io/guide/workers) has thrown an exception.
- The job has become [stalled](https://docs.bullmq.io/guide/jobs/stalled) and it has consumed the "max stalled count" setting.
@@ -19,6 +19,10 @@ Often it is desirable to automatically retry failed jobs so that we do not give

BullMQ supports retries of failed jobs using back-off functions. It is possible to use the **built-in** backoff functions or provide **custom** ones. If you do not specify a back-off function, the jobs will be retried without delay as soon as they fail.

{% hint style="info" %}
Retried jobs will respect their priority when they are moved back to waiting state.
{% endhint %}

#### Built-in backoff strategies

The current built-in backoff functions are "exponential" and "fixed".
@@ -81,6 +85,12 @@ const worker = new Worker('foo', async job => doSomeProcessing(), {
});
```

{% hint style="info" %}
If your backoffStrategy returns 0, jobs will be moved to the end of the waiting list (priority 0) or moved back to the prioritized state (priority > 0).

If your backoffStrategy returns -1, jobs won't be retried; instead, they will be moved to the failed state.
{% endhint %}
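As an illustrative sketch of these return values (not the exact strategy used elsewhere in this guide), a custom strategy could stop retrying after a few attempts; the connection details and processing function are placeholders:

```typescript
import { Worker } from 'bullmq';

// Placeholder connection and processor, used only for illustration.
const connection = { host: 'localhost', port: 6379 };
const doSomeProcessing = async () => { /* ... */ };

const worker = new Worker('foo', async job => doSomeProcessing(), {
  connection,
  settings: {
    backoffStrategy: (attemptsMade = 0) => {
      if (attemptsMade > 3) return -1; // give up: the job is moved to failed
      if (attemptsMade === 1) return 0; // retry immediately
      return attemptsMade * 1000; // otherwise back off linearly
    },
  },
});
```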

You can then use your custom strategy when adding jobs:

```typescript
Expand Down Expand Up @@ -128,3 +138,7 @@ const worker = new Worker('foo', async job => doSomeProcessing(), {
},
});
```

## Read more:

- 💡 [Stop Retrying Jobs](../patterns/stop-retrying-jobs.md)