diff --git a/docs/gitbook/bull-3.x-migration/compatibility-class.md b/docs/gitbook/bull-3.x-migration/compatibility-class.md
index a283ab7452..c25a5b803a 100644
--- a/docs/gitbook/bull-3.x-migration/compatibility-class.md
+++ b/docs/gitbook/bull-3.x-migration/compatibility-class.md
@@ -1,17 +1,17 @@
# Compatibility class
-The Queue3 class is targeted to simplify migration of projects based on Bull 3. Though it does not offer 100% API and functional compatibility, upgrading to BullMQ with this class should be easier for users familiar with Bull 3.
+The `Queue3` class is targeted to simplify migration of projects based on Bull 3. Though it does not offer 100% API and functional compatibility, upgrading to BullMQ with this class should be easier for users familiar with Bull 3.
Differences in interface include
* fixed order of `add()` and `process()` method arguments
* class instantiation requires use of the `new` operator
-* interfaces for Queue and Job options and Job class do not have wrappers and used directly
+* interfaces for Queue and Job options and Job class do not have wrappers and are used directly
* there's no `done` argument expected in `process()` callback anymore; now the callback must always return a `Promise` object
* name property is mandatory in `add()` method
* concurrency is moved from `process()` argument to queue options
-Functional differences generally include only absence of named processors feature and minor changes in local and global events set. The mandatory `name` property in `add()` method can contain any string and getting saved to Redis as is. When job is in progress, you can read this value as `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.
+Functional differences generally include only the absence of the named processors feature and minor changes in the local and global events set. The mandatory `name` property in the `add()` method can contain any string and gets saved to Redis as is. When a job is in progress, you can read this value using `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.
The all-in-one example:
diff --git a/docs/gitbook/bull/patterns/custom-backoff-strategy.md b/docs/gitbook/bull/patterns/custom-backoff-strategy.md
index e3401d15a8..3ebbbb396e 100644
--- a/docs/gitbook/bull/patterns/custom-backoff-strategy.md
+++ b/docs/gitbook/bull/patterns/custom-backoff-strategy.md
@@ -112,5 +112,3 @@ myQueue.add({ msg: 'Specific Error' }, {
}
});
```
-
-\
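A custom strategy of this kind can be sketched as a pure function. The shape below is a hypothetical exponential variant, assuming Bull's convention of passing the retry attempt number (and the error) and expecting a delay in milliseconds:

```typescript
// Hypothetical custom backoff strategy: exponential delay with a 1 s base,
// capped at 30 s. The attempt number starts at 1 for the first retry.
const exponentialCapped = (attemptsMade: number, err?: Error): number =>
  Math.min(1000 * 2 ** (attemptsMade - 1), 30000);

// First retry waits 1 s, second 2 s, and the delay never exceeds 30 s.
```

Keeping the strategy pure like this makes it easy to unit test independently of the queue.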
diff --git a/docs/gitbook/bull/patterns/manually-fetching-jobs.md b/docs/gitbook/bull/patterns/manually-fetching-jobs.md
index 9dad3f4afa..87a25b802e 100644
--- a/docs/gitbook/bull/patterns/manually-fetching-jobs.md
+++ b/docs/gitbook/bull/patterns/manually-fetching-jobs.md
@@ -1,6 +1,6 @@
# Manually fetching jobs
-If you want to manually fetch the jobs from the queue instead of letting the automatic processor taking care of it, this pattern is for your.
+If you want to manually fetch jobs from the queue instead of letting the automatic processor take care of it, this pattern is for you.
Manually transitioning states for jobs can be done with a few simple methods.
@@ -53,4 +53,4 @@ if (nextJobdata) {
**Note**
-By default the lock duration for a job that has been returned by `getNextJob` or `moveToCompleted` is 30 seconds, if it takes more time than that the job will be automatically marked as stalled and depending on the max stalled options be moved back to the wait state or marked as failed. In order to avoid this you must use `job.extendLock(duration)` in order to give you some more time before the lock expires. The recommended is to extend the lock when half the lock time has passsed.
+By default, the lock duration for a job that has been returned by `getNextJob` or `moveToCompleted` is 30 seconds. If it takes more time than that, the job will be automatically marked as stalled and, depending on the max stalled options, be moved back to the wait state or marked as failed. To avoid this, call `job.extendLock(duration)` to give yourself more time before the lock expires. It is recommended to extend the lock when half the lock time has passed.
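The half-lock renewal rule can be sketched like this. The `job` shape here is hypothetical, reduced to just the `extendLock(duration)` method mentioned in the text:

```typescript
// Sketch: renew the lock at half its duration so it is refreshed well
// before the 30-second expiry described above.
const LOCK_DURATION = 30000;

// Renew at half the lock duration.
const renewalInterval = (lockDuration: number): number => lockDuration / 2;

async function workWithLock(
  job: { extendLock: (duration: number) => Promise<void> },
  work: () => Promise<void>,
): Promise<void> {
  const timer = setInterval(() => {
    job.extendLock(LOCK_DURATION).catch(() => {
      // Lock lost: the job may already be picked up by another worker.
    });
  }, renewalInterval(LOCK_DURATION));
  try {
    await work();
  } finally {
    clearInterval(timer); // always stop renewing once the work is done
  }
}
```

The `finally` block ensures the renewal timer is cleared even when the work throws.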
diff --git a/docs/gitbook/bull/patterns/message-queue.md b/docs/gitbook/bull/patterns/message-queue.md
index 8ac38257d4..14e7547104 100644
--- a/docs/gitbook/bull/patterns/message-queue.md
+++ b/docs/gitbook/bull/patterns/message-queue.md
@@ -1,6 +1,6 @@
# Message queue
-Bull can also be used for persistent message queues. This is a quite useful feature in some use cases. For example, you can have two servers that need to communicate with each other. By using a queue the servers do not need to be online at the same time, so this creates a very robust communication channel. You can treat `add` as _send_ and `process` as _receive_:
+Bull can also be used for persistent message queues. This is a quite useful feature in some use cases. For example, you can have two servers that need to communicate with each other. By using a queue, the servers do not need to be online at the same time, so this creates a very robust communication channel. You can treat `add` as _send_ and `process` as _receive_:
Server A:
diff --git a/docs/gitbook/bull/patterns/persistent-connections.md b/docs/gitbook/bull/patterns/persistent-connections.md
index 0ce113bd59..bc37c9fedb 100644
--- a/docs/gitbook/bull/patterns/persistent-connections.md
+++ b/docs/gitbook/bull/patterns/persistent-connections.md
@@ -1,18 +1,18 @@
# Persistent connections
-A crucial feature for a subsystem in a microservice architecture is that it should automatically handle disconnections from other services and keep this connections alive for as long as the service is running.
+A crucial feature for a subsystem in a microservice architecture is that it should automatically handle disconnections from other services and keep these connections alive for as long as the service is running.
For example, if your service has a connection to a database, and the connection to said database breaks, you would like that service to handle this disconnection as gracefully as possible and as soon as the database is back online continue to work without human intervention.
-Since Bull relies on **ioredis** for accessing Redis, the default is auto-reconnect forever, this behaviour can be customized but most likely the default is the best setting currently: [https://github.com/luin/ioredis#auto-reconnect](https://github.com/luin/ioredis#auto-reconnect)
+Since Bull relies on **ioredis** for accessing Redis, the default is auto-reconnect forever. This behaviour can be customized but most likely the default is the best setting currently: [https://github.com/luin/ioredis#auto-reconnect](https://github.com/luin/ioredis#auto-reconnect)
-In the context of Bull, we have normally two different cases that are handled differently.
+In the context of Bull, we normally have two different cases that are handled differently.
### Workers
A worker is consuming jobs from the queue as fast as it can. If it loses the connection to Redis we want the worker to "wait" until Redis is available again. For this to work we need to understand an important setting in our Redis options (which are handled by ioredis):
-#### maxRetriesPerRequest
+#### `maxRetriesPerRequest`
This setting tells the ioredis client how many times to try a command that fails before throwing an error. So even though Redis is not reachable or offline, the command will be retried until this situation changes or the maximum number of attempts is reached.
@@ -22,11 +22,11 @@ This guarantees that the workers will keep processing forever as long as there i
### Queue
-A simple Queue instance used for managing the queue such as adding jobs, pausing, using getters, etc. has usually different requirements as the worker.
+A simple Queue instance used for managing the queue such as adding jobs, pausing, using getters, etc. usually has different requirements from the worker.
For example, say that you are adding jobs to a queue as the result of a call to an HTTP endpoint. The caller of this endpoint cannot wait forever if the connection to Redis happens to be down when this call is made.
-Therefore the **maxRetriesPerRequest** setting should either be left at its default (which currently is 20) or set it to another value, maybe 1 so that the user gets an error quickly and can retry later.
+Therefore, the `maxRetriesPerRequest` setting should either be left at its default (which currently is 20) or set to another value, maybe 1, so that the user gets an error quickly and can retry later.
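One possible split between the two cases can be sketched with plain ioredis option objects (the values are illustrative, not a recommendation):

```typescript
// Illustrative ioredis option objects for the two cases discussed above.
// Workers: retry commands indefinitely so processing resumes on its own
// when Redis comes back online.
const workerRedisOpts = {
  maxRetriesPerRequest: null as number | null, // retry forever
};

// Queue used from an HTTP handler: fail fast instead, so the caller gets
// an error quickly and can retry later.
const queueRedisOpts = {
  maxRetriesPerRequest: 1,
};
```

Keeping two distinct connection configurations makes the different availability requirements of workers and queue clients explicit.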
diff --git a/docs/gitbook/bull/patterns/redis-cluster.md b/docs/gitbook/bull/patterns/redis-cluster.md
index 7f4244ed8a..10b96dfecf 100644
--- a/docs/gitbook/bull/patterns/redis-cluster.md
+++ b/docs/gitbook/bull/patterns/redis-cluster.md
@@ -11,7 +11,3 @@ const queue = new Queue('cluster', {
```
If you use several queues in the same cluster, you should use different prefixes so that the queues are evenly placed in the cluster nodes.
-
-###
-
-\
diff --git a/docs/gitbook/bull/patterns/returning-job-completions.md b/docs/gitbook/bull/patterns/returning-job-completions.md
index c1d38d7414..84008d522b 100644
--- a/docs/gitbook/bull/patterns/returning-job-completions.md
+++ b/docs/gitbook/bull/patterns/returning-job-completions.md
@@ -2,9 +2,4 @@
A common pattern is where you have a cluster of queue processors that just process jobs as fast as they can, and some other services that need to take the result of these processors and do something with it, maybe storing results in a database.
-\
The most robust and scalable way to accomplish this is by combining the standard job queue with the message queue pattern: a service sends jobs to the cluster just by opening a job queue and adding jobs to it, and the cluster will start processing as fast as it can. Everytime a job gets completed in the cluster a message is sent to a results message queue with the result data, and this queue is listened by some other service that stores the results in a database.
-
-
-
-\
diff --git a/docs/gitbook/bull/patterns/reusing-redis-connections.md b/docs/gitbook/bull/patterns/reusing-redis-connections.md
index cc32b63e39..c8bf3a48fb 100644
--- a/docs/gitbook/bull/patterns/reusing-redis-connections.md
+++ b/docs/gitbook/bull/patterns/reusing-redis-connections.md
@@ -35,5 +35,4 @@ const opts = {
const queueFoo = new Queue("foobar", opts);
const queueQux = new Queue("quxbaz", opts);
-
```
diff --git a/docs/gitbook/bullmq-pro/batches.md b/docs/gitbook/bullmq-pro/batches.md
index 10c8afae9b..cce961c93a 100644
--- a/docs/gitbook/bullmq-pro/batches.md
+++ b/docs/gitbook/bullmq-pro/batches.md
@@ -4,11 +4,11 @@ description: Processing jobs in batches
# Batches
-It is possible to configure the workers so that instead of processing one job at a time they can process up to a number of jobs (a so-called batch) in one go.
+It is possible to configure workers so that instead of processing one job at a time they can process up to a number of jobs (a so-called _batch_) in one go.
Workers using batches have slightly different semantics and behavior than normal workers, so read carefully the following examples to avoid pitfalls.
-In order to enable batches you must pass the batch option with a size representing the maximum amount of jobs per batch:
+In order to enable batches you must pass the `batches` option with a size representing the maximum number of jobs per batch:
```typescript
const worker = new WorkerPro(
@@ -26,7 +26,7 @@ const worker = new WorkerPro(
```
{% hint style="info" %}
-There is no maximum limit for the size of the batches, however, keep in mind that there is an overhead proportional to the size of the batch so really large batches could create performance issues. A typical value would be something between 10 and 50 jobs per batch.
+There is no maximum limit for the size of the batches; however, keep in mind that there is an overhead proportional to the size of the batch, so really large batches could create performance issues. A typical value would be something between 10 and 50 jobs per batch.
{% endhint %}
### Failing jobs
@@ -54,18 +54,18 @@ const worker = new WorkerPro(
);
```
-Only the jobs that are `setAsFailed` will fail, the rest will be moved to complete when the processor for the batch job completes.
+Only the jobs that are marked with `setAsFailed` will fail; the rest will be moved to _complete_ when the processor for the batch job completes.
### Handling events
Batches are handled by wrapping all the jobs in a batch into a dummy job that keeps all the jobs in an internal array. This approach simplifies the mechanics of running batches, however, it also affects things like how events are handled. For instance, if you need to listen for individual jobs that have completed or failed you must use global events, as the event handler on the worker instance will only report on the events produced by the wrapper batch job, and not the jobs themselves.
-It is possible, however, to call the getBatch function in order to retrieve all the jobs that belong to a given batch.
+It is possible, however, to call the `getBatch` function in order to retrieve all the jobs that belong to a given batch.
```typescript
worker.on('completed', job => {
const batch = job.getBatch();
- e;
+ // ...
});
```
@@ -82,7 +82,7 @@ queueEvents.on('completed', (jobId, err) => {
### Limitations
-Currently, all worker options can be used with the batches, however, there are some unsupported features that may be implemented in the future:
+Currently, all worker options can be used with batches; however, there are some unsupported features that may be implemented in the future:
- [Dynamic rate limit](https://docs.bullmq.io/guide/rate-limiting#manual-rate-limit)
- [Manually processing jobs](https://docs.bullmq.io/patterns/manually-fetching-jobs)
diff --git a/docs/gitbook/bullmq-pro/groups/README.md b/docs/gitbook/bullmq-pro/groups/README.md
index 32adb2032b..72fb20577b 100644
--- a/docs/gitbook/bullmq-pro/groups/README.md
+++ b/docs/gitbook/bullmq-pro/groups/README.md
@@ -1,6 +1,6 @@
# Groups
-Groups allows you to use only one queue yet distribute the jobs among groups so that the jobs are processed one by one relative to the group they belong to.
+Groups allow you to use a single queue while distributing the jobs among groups so that the jobs are processed one by one relative to the group they belong to.
For example, imagine that you have 1 queue for processing video transcoding for all your users, you may have thousands of users in your application. You need to offload the transcoding operation since it is lengthy and CPU consuming. If you have many users that want to transcode many files, then in a non-grouped queue one user could fill the queue with jobs and the rest of the users will need to wait for that user to complete all its jobs before their jobs get processed.
@@ -18,9 +18,9 @@ If you only use grouped jobs in a queue, the waiting jobs list will not grow, in
There is no hard limit on the amount of groups that you can have, nor do they have any impact on performance. When a group is empty, the group itself does not consume any resources in Redis.
{% endhint %}
-Another way to see groups is like "virtual" queues. So instead of having one queue per "user" you have a "virtual" queue per user so that all users get their jobs processed in a more predictable way.
+Another way to see groups is like "virtual" queues. So instead of having one queue per "user", you have a "virtual" queue per user so that all users get their jobs processed in a more predictable way.
-In order to use the group functionality just use the group property in the job options when adding a job:
+In order to use the group functionality, use the `group` property in the job options when adding a job:
```typescript
import { QueuePro } from '@taskforcesh/bullmq-pro';
@@ -48,7 +48,7 @@ const job2 = await queue.add(
);
```
-In order to process the jobs, just use a pro worker as you normally do with standard workers:
+In order to process the jobs, use a pro worker as you normally do with standard workers:
```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
diff --git a/docs/gitbook/bullmq-pro/groups/concurrency.md b/docs/gitbook/bullmq-pro/groups/concurrency.md
index 52d3c5496a..ed5435761e 100644
--- a/docs/gitbook/bullmq-pro/groups/concurrency.md
+++ b/docs/gitbook/bullmq-pro/groups/concurrency.md
@@ -1,10 +1,10 @@
# Concurrency
-By default, there is no limit on the number of jobs that the workers can run in parallel for every group. Even using a rate limit, that would only limit the processing speed, but still you could have an unbounded number of jobs processed simultaneously in every group.
+By default, there is no limit on the number of jobs that workers can run in parallel for every group. Even using a rate limit, that would only limit the processing speed, but still you could have an unbounded number of jobs processed simultaneously in every group.
-It is possible to constraint how many jobs are allowed to be processed concurrently per group, so for example, if you choose 3 as max concurrency factor, the workers will never work on more than 3 jobs at the same time for any given group. This limits only the group, you could have any number of concurrent jobs as long as they are not from the same group.
+It is possible to constrain how many jobs are allowed to be processed concurrently per group. For example, if you choose 3 as max concurrency factor, the workers will never work on more than 3 jobs at the same time for any given group. This limits only the group; you could have any number of concurrent jobs as long as they are not from the same group.
-You enable the concurrency setting like this:
+The concurrency factor is configured as follows:
```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
@@ -13,7 +13,7 @@ const worker = new WorkerPro('myQueue', processFn, {
group: {
concurrency: 3 // Limit to max 3 parallel jobs per group
},
- concurrency: 100
+ concurrency: 100,
connection
});
```
diff --git a/docs/gitbook/bullmq-pro/groups/max-group-size.md b/docs/gitbook/bullmq-pro/groups/max-group-size.md
index f924ebfa85..bae9c26d0f 100644
--- a/docs/gitbook/bullmq-pro/groups/max-group-size.md
+++ b/docs/gitbook/bullmq-pro/groups/max-group-size.md
@@ -2,9 +2,9 @@
It is possible to set a maximum group size. This can be useful if you want to keep the number of jobs within some limits and you can afford to discard new jobs.
-When a group has reached the defined max size, adding new jobs to that group will result in an exception being thrown, that you can catch and ignore if you do not care about it.
+When a group has reached the defined max size, adding new jobs to that group will result in an exception being thrown that you can catch and ignore if you do not care about it.
-You can use the "maxSize" option when adding jobs to a group like this:
+You can use the `maxSize` option when adding jobs to a group like this:
```typescript
import { QueuePro, GroupMaxSizeExceededError } from '@taskforcesh/bullmq-pro';
@@ -25,11 +25,8 @@ try {
throw err;
}
}
-
```
-
-
{% hint style="info" %}
-The maxSize option is not yet available for "addBulk".
+The `maxSize` option is not yet available for `addBulk`.
{% endhint %}
diff --git a/docs/gitbook/bullmq-pro/groups/pausing-groups.md b/docs/gitbook/bullmq-pro/groups/pausing-groups.md
index bd325ff677..29d9b1cf7a 100644
--- a/docs/gitbook/bullmq-pro/groups/pausing-groups.md
+++ b/docs/gitbook/bullmq-pro/groups/pausing-groups.md
@@ -2,28 +2,28 @@
BullMQ Pro supports pausing groups globally. A group is paused when no workers will pick up any jobs that belongs to the paused group. When you pause a group, the workers that are currently busy processing a job from that group, will continue working on that job until it completes (or failed), and then will just keep idling until the group has been resumed.
-Pausing a group is performed by calling the _**pauseGroup**_ method on a [queue](https://api.bullmq.pro/classes/v6.Queue.html#pauseGroup) instance:
+Pausing a group is performed by calling the `pauseGroup` method on a [`Queue`](https://api.bullmq.pro/classes/v6.Queue.html#pauseGroup) instance:
```typescript
await myQueue.pauseGroup('groupId');
```
{% hint style="info" %}
-Even if the groupId does not exist at that time, the groupId will be added in our paused list as a group could be ephemeral
+Even if the `groupId` does not exist at that time, it will be added to the paused list, as a group could be ephemeral
{% endhint %}
{% hint style="warning" %}
-It will return false if the group is already paused.
+`pauseGroup` will return `false` if the group is already paused.
{% endhint %}
-Resuming a group is performed by calling the _**resumeGroup**_ method on a [queue](https://api.bullmq.pro/classes/v6.Queue.html#resumeGroup) instance:
+Resuming a group is performed by calling the `resumeGroup` method on a [`Queue`](https://api.bullmq.pro/classes/v6.Queue.html#resumeGroup) instance:
```typescript
await myQueue.resumeGroup('groupId');
```
{% hint style="warning" %}
-It will return false if the group does not exist or when the group is already resumed.
+`resumeGroup` will return `false` if the group does not exist or when the group is already resumed.
{% endhint %}
## Read more:
diff --git a/docs/gitbook/bullmq-pro/groups/prioritized.md b/docs/gitbook/bullmq-pro/groups/prioritized.md
index 7c4c518a64..d7b85f70d8 100644
--- a/docs/gitbook/bullmq-pro/groups/prioritized.md
+++ b/docs/gitbook/bullmq-pro/groups/prioritized.md
@@ -1,6 +1,6 @@
# Prioritized intra-groups
-BullMQ Pro supports priorities per group. A job is prioritized in a group when group and priority options are provided together.
+BullMQ Pro supports priorities per group. A job is prioritized in a group when group and priority options are provided _together_.
```typescript
await myQueue.add(
diff --git a/docs/gitbook/bullmq-pro/groups/rate-limiting.md b/docs/gitbook/bullmq-pro/groups/rate-limiting.md
index 7a7945b457..e821fbfb7f 100644
--- a/docs/gitbook/bullmq-pro/groups/rate-limiting.md
+++ b/docs/gitbook/bullmq-pro/groups/rate-limiting.md
@@ -2,7 +2,7 @@
A useful feature when using groups is to be able to rate limit the groups independently of each other, so you can evenly process the jobs belonging to many groups and still limit how many jobs per group are allowed to be processed by unit of time.
-The way the rate limiting works is that when the jobs for a given group exceed the maximum amount of jobs per unit of time that particular group gets rate limited. The jobs that belongs to this particular group will not be processed until the rate limit expires.
+The way the rate limiting works is that when the jobs for a given group exceed the maximum amount of jobs per unit of time, that particular group gets rate limited. The jobs that belong to this particular group will not be processed until the rate limit expires.
For example "group 2" is rate limited in the following chart:
@@ -19,7 +19,7 @@ const worker = new WorkerPro('myQueue', processFn, {
group: {
limit: {
max: 100, // Limit to 100 jobs per second per group
- duration 1000,
+ duration: 1000,
}
},
connection
@@ -28,9 +28,9 @@ const worker = new WorkerPro('myQueue', processFn, {
### Manual rate-limit
-Sometimes is useful to rate-limit a group manually instead of based on some static options. For example, if you have an API that returns 429 (Too many requests), and you want to rate-limit the group based on that response.
+Sometimes it is useful to rate-limit a group manually instead of based on some static options. For example, if you have an API that returns `429 Too Many Requests` and you want to rate-limit the group based on that response.
-For this purpose, you can use the worker method **rateLimitGroup** like this:
+For this purpose, you can use the worker method `rateLimitGroup` like this:
```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
diff --git a/docs/gitbook/bullmq-pro/install.md b/docs/gitbook/bullmq-pro/install.md
index 4250082d93..aa10668b1b 100644
--- a/docs/gitbook/bullmq-pro/install.md
+++ b/docs/gitbook/bullmq-pro/install.md
@@ -2,7 +2,7 @@
In order to install BullMQ Pro you need to use a NPM token from [taskforce.sh](https://taskforce.sh).
-With the token at hand just update or create a ._**npmrc**_ file in your app repository with the following contents:
+With the token at hand, just update or create a `.npmrc` file in your app repository with the following contents:
```
@taskforcesh:registry=https://npm.taskforce.sh/
@@ -10,9 +10,9 @@ With the token at hand just update or create a ._**npmrc**_ file in your app rep
always-auth=true
```
-"NPM\_\_TASKFORCESH\_\_TOKEN" is an environment variable pointing to your token.
+where `NPM__TASKFORCESH__TOKEN` is an environment variable pointing to your token.
-Then just install the @taskforcesh/bullmq-pro package as you would install any other package, with npm, yarn or pnpm:
+Then just install the `@taskforcesh/bullmq-pro` package as you would install any other package, with `npm`, `yarn` or `pnpm`:
```
yarn add @taskforcesh/bullmq-pro
@@ -32,10 +32,10 @@ const worker = new WorkerPro('myQueue', async job => {
### Using Docker
-If you use docker you must make sure that you also add the _**.npmrc**_ file above in your **Dockerfile**:
+If you use Docker, you must make sure that you also add the `.npmrc` file above in your `Dockerfile`:
```docker
WORKDIR /app
-ADD .npmrc /app/.npmr
+ADD .npmrc /app/.npmrc
```
diff --git a/docs/gitbook/bullmq-pro/nestjs/README.md b/docs/gitbook/bullmq-pro/nestjs/README.md
index 53c66f5623..ea52ef3d12 100644
--- a/docs/gitbook/bullmq-pro/nestjs/README.md
+++ b/docs/gitbook/bullmq-pro/nestjs/README.md
@@ -5,10 +5,10 @@ yarn add @taskforcesh/nestjs-bullmq-pro
```
{% hint style="info" %}
-BullMQ-Pro needs a token, please review [install](https://docs.bullmq.io/bullmq-pro/install) section.
+BullMQ-Pro needs a token, as explained in the [install](https://docs.bullmq.io/bullmq-pro/install) section.
{% endhint %}
-Once the installation process is complete, we can import the **BullModule** into the root **AppModule**.
+Once the installation process is complete, we can import the `BullModule` into the root `AppModule`.
```typescript
import { Module } from '@nestjs/common';
@@ -27,7 +27,7 @@ import { BullModule } from '@taskforcesh/nestjs-bullmq-pro';
export class AppModule {}
```
-To register a queue, import the **BullModule.registerQueue()** dynamic module, as follows:
+To register a queue, import the `BullModule.registerQueue()` dynamic module, as follows:
```typescript
BullModule.registerQueue({
@@ -35,7 +35,7 @@ BullModule.registerQueue({
});
```
-To register a flow producer, import the **BullModule.registerFlowProducer()** dynamic module, as follows:
+To register a flow producer, import the `BullModule.registerFlowProducer()` dynamic module, as follows:
```typescript
BullModule.registerFlowProducer({
@@ -45,7 +45,7 @@ BullModule.registerFlowProducer({
# Processor
-To register a processor, you may need to use **Processor** decorator:
+To register a processor, you may need to use the `Processor` decorator:
```typescript
import {
diff --git a/docs/gitbook/bullmq-pro/nestjs/producers.md b/docs/gitbook/bullmq-pro/nestjs/producers.md
index 17b80ea48d..508fc910a9 100644
--- a/docs/gitbook/bullmq-pro/nestjs/producers.md
+++ b/docs/gitbook/bullmq-pro/nestjs/producers.md
@@ -14,10 +14,10 @@ export class AudioService {
```
{% hint style="info" %}
-The **@InjectQueue()** decorator identifies the queue by its name, as provided in the **registerQueue()**.
+The `@InjectQueue()` decorator identifies the queue by its name, as provided in the `registerQueue()` method.
{% endhint %}
-Now, add a job by calling the queue's add() method.
+Now, add a job by calling the queue's `add()` method.
```typescript
const job = await this.audioQueue.add({
@@ -43,10 +43,10 @@ export class FlowService {
```
{% hint style="info" %}
-The **@InjectFlowProducer()** decorator identifies the flow producer by its name, as provided in the **registerFlowProducer()**.
+The `@InjectFlowProducer()` decorator identifies the flow producer by its name, as provided in the `registerFlowProducer()` method.
{% endhint %}
-Now, add a flow by calling the flow producer's add() method.
+Now, add a flow by calling the flow producer's `add()` method.
```typescript
const job = await this.fooFlowProducer.add({
diff --git a/docs/gitbook/bullmq-pro/nestjs/queue-events-listeners.md b/docs/gitbook/bullmq-pro/nestjs/queue-events-listeners.md
index f82c831494..9182d3140c 100644
--- a/docs/gitbook/bullmq-pro/nestjs/queue-events-listeners.md
+++ b/docs/gitbook/bullmq-pro/nestjs/queue-events-listeners.md
@@ -1,6 +1,6 @@
# Queue Events Listeners
-To register a QueueEvents instance, you need to use **QueueEventsListener** decorator:
+To register a `QueueEvents` instance, you need to use the `QueueEventsListener` decorator:
```typescript
import {
diff --git a/docs/gitbook/bullmq-pro/observables/README.md b/docs/gitbook/bullmq-pro/observables/README.md
index f1d3bfd460..6a04311647 100644
--- a/docs/gitbook/bullmq-pro/observables/README.md
+++ b/docs/gitbook/bullmq-pro/observables/README.md
@@ -1,14 +1,14 @@
# Observables
-Instead of returning regular promises, your workers can also return an Observable, this allows for some more advanced uses cases:
+Instead of returning regular promises, your workers can also return an `Observable`, which allows for some more advanced use cases:
* It makes it possible to cleanly cancel a running job.
-* You can define a "Time to live" (TTL) so that jobs that take too long time will be automatically canceled.
-* Since the last value returned by the observable is persisted, you could retry a job and continue where you left of, for example, if the job implements a state machine or similar.
+* You can define a "Time to live" (TTL) so that jobs that take too long will be automatically canceled.
+* Since the last value returned by the observable is persisted, you could retry a job and continue where you left off, for example, if the job implements a state machine or similar.
-If you are new to Observables you may want to read this [introduction](https://www.learnrxjs.io/learn-rxjs/concepts/rxjs-primer). The two biggest advantages that Observables have over Promises are that they can emit more than 1 value and that they are cancelable.
+If you are new to `Observables` you may want to read this [introduction](https://www.learnrxjs.io/learn-rxjs/concepts/rxjs-primer). The two biggest advantages that `Observables` have over `Promises` are that they can emit more than 1 value and that they are _cancelable_.
-Let's see a silly example of a worker making use of Observables:
+Let's see a silly example of a worker making use of `Observables`:
```typescript
import { WorkerPro } from "@taskforcesh/bullmq-pro"
@@ -32,12 +32,11 @@ const processor = async () => {
};
const worker = new WorkerPro(queueName, processor, { connection });
-
```
-In the example above, the observable will emit 4 values, the first 3 directly and then a 4th after 500 ms. Also note that the "subscriber" returns a "unsubscribe" function. This is the function that will be called if the Observable is cancelled, so this is where you would do the necessary clean up.
+In the example above, the observable will emit 4 values, the first 3 directly and then a 4th after 500 ms. Also note that the `subscriber` returns an `unsubscribe` function. This is the function that will be called if the `Observable` is cancelled, so this is where you would do the necessary cleanup.
-You may be asking whats the use of returning several values for a worker. One case that comes to mind is if you have a larger processor and you want to make sure that if the process crashes you can continue from the latest value. You could do this with a simple switch-case on the return value, something like this:
+You may be wondering why a worker would return several values. One case that comes to mind is if you have a larger processor and you want to make sure that, if the process crashes, you can continue from the latest value. You could do this with a `switch` statement on the return value, something like this:
```typescript
import { WorkerPro } from "@taskforcesh/bullmq-pro"
diff --git a/docs/gitbook/bullmq-pro/observables/cancelation.md b/docs/gitbook/bullmq-pro/observables/cancelation.md
index 6bb5b0994a..0d47bc331a 100644
--- a/docs/gitbook/bullmq-pro/observables/cancelation.md
+++ b/docs/gitbook/bullmq-pro/observables/cancelation.md
@@ -1,6 +1,6 @@
# Cancellation
-As mentioned, Observables allows for clean cancellation. Currently we support a TTL value that defines the maximum processing time before the job is finally cancelled:
+As mentioned, `Observables` allow for clean cancellation. Currently we support a TTL value that defines the maximum processing time before the job is finally cancelled:
```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
@@ -11,7 +11,7 @@ const worker = new WorkerPro(queueName, processor, {
});
```
-This parameter allows to provide ttl values per job name too:
+This parameter also allows you to provide `ttl` values per job name:
```typescript
const worker = new WorkerPro(queueName, processor, {
diff --git a/docs/gitbook/guide/architecture.md b/docs/gitbook/guide/architecture.md
index a7b9fea3c8..74e16814fb 100644
--- a/docs/gitbook/guide/architecture.md
+++ b/docs/gitbook/guide/architecture.md
@@ -6,20 +6,23 @@ description: >-
# Architecture
-In order to use the full potential of Bull queues, it is important to understand the lifecycle of a job. From the moment a producer calls the [`add`](https://api.docs.bullmq.io/classes/v4.Queue.html#add) method on a queue instance, a job enters a lifecycle where it will be in different states, until its completion or failure (although technically a failed job could be retried and get a new lifecycle).
+In order to use the full potential of Bull queues, it is important to understand the lifecycle of a job. From the moment a producer calls the [`add`](https://api.docs.bullmq.io/classes/v4.Queue.html#add) method on a `Queue` instance, a job enters a lifecycle where it will be in different states, until its completion or failure (although technically a failed job could be retried and get a new lifecycle).
-When a job is added to a queue it can be in one of three states, it can either be in the **“wait”** status, which is, in fact, a waiting list, where all jobs must enter before they can be processed, or it can be in a **“prioritized“** status: a prioritized status implies that a job with higher priority will be processed first, or it can be in a **“delayed”** status: a delayed status implies that the job is waiting for some timeout or to be promoted for being processed, however, a delayed job will not be processed directly, instead it will be placed at the beginning of the waiting list or at prioritized set and processed as soon as a worker is idle.
+When a job is added to a queue it can be in one of three states:
+- **“wait”**: a waiting list, where all jobs must enter before they can be processed.
+- **“prioritized”**: implies that a job with higher priority will be processed first.
+- **“delayed”**: implies that the job is waiting for some timeout or to be promoted for being processed. These jobs are not processed directly, but instead are placed at the beginning of the waiting list, or in a prioritized set, and processed as soon as a worker is idle.
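As a quick sketch of how these initial states come about, the state a job first lands in depends on the options passed to `add` (the option values below are illustrative):

```typescript
// No special options: the job goes straight to the "wait" list.
const waitingOpts = {};

// A priority greater than 0 sends the job to the "prioritized" set.
const prioritizedOpts = { priority: 1 };

// A delay (in milliseconds) keeps the job in the "delayed" set
// until the timeout has passed.
const delayedOpts = { delay: 5000 };

// e.g. await myQueue.add('paint', { color: 'red' }, delayedOpts);
```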
{% hint style="warning" %}
-Note that priorities go from 0 to 2^21, where 0 is the highest priority, this follows a similar standard as processed in Unix (https://en.wikipedia.org/wiki/Nice_(Unix), where a higher number means less priority).
+Note that priorities go from `0` to `2^21`, where `0` is the highest priority. This follows a similar standard to process priorities in Unix ([nice](https://en.wikipedia.org/wiki/Nice_(Unix))), where a higher number means lower priority.
{% endhint %}
-The next state for a job is the **“active”** state. The active state is represented by a set, and are jobs that are currently being processed, i.e. they are running in the `process` function explained in the previous chapter. A job can be in the active state for an unlimited amount of time until the process is completed or an exception is thrown so that the job will end in either the **“completed”** or the **“failed”** status.
+The next state for a job is the **“active”** state. The active state is represented by a set and contains the jobs that are currently being processed (i.e. they are running in the `process` function explained in the previous chapter). A job can be in the active state for an unlimited amount of time, until the process is completed or an exception is thrown, so that the job will end in either the **“completed”** or the **“failed”** status.
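The transition out of the active state can be sketched with a plain processor function: a resolved return value moves the job to **“completed”**, while a thrown error moves it to **“failed”** (the `JobLike` shape and the `valid` field here are illustrative, not part of the BullMQ API):

```typescript
type JobLike = { data: { valid: boolean } };

// A processor's outcome decides the job's final state:
// resolving means "completed", throwing means "failed".
const processor = async (job: JobLike) => {
  if (!job.data.valid) {
    throw new Error('invalid payload'); // job would end up "failed"
  }
  return { ok: true }; // job would end up "completed"
};
```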
Another way to add a job is by the [`add`](https://api.docs.bullmq.io/classes/v4.FlowProducer.html#add) method on a flow producer instance.
-When a job is added by a flow producer, it can be in one of three states, it can either be in the **“wait”** or **“prioritized“** or **“delayed“** status, when there aren't children, or it can be in a **“waiting-children”** status: a waiting-children status implies that the job is waiting for all its children to be completed, however, a waiting-children job will not be processed directly, instead it will be placed at the waiting list or at delayed set (if delay is provided) or at prioritized set (if delay is 0 and priority is greater than 0) as soon as the last child is marked as completed.
+When a job is added by a flow producer, it can be in one of four states: when it has no children, it can be in the **“wait”**, **“prioritized”**, or **“delayed”** status; otherwise, it will be in the **“waiting-children”** status, which implies that the job is waiting for all its children to be completed. A waiting-children job will not be processed directly; instead, as soon as the last child is marked as completed, it will be placed in the waiting list, in the delayed set (if `delay` is provided), or in the prioritized set (if `delay` is 0 and `priority` is greater than 0).
diff --git a/docs/gitbook/guide/connections.md b/docs/gitbook/guide/connections.md
index 2b8a142db8..2101213df3 100644
--- a/docs/gitbook/guide/connections.md
+++ b/docs/gitbook/guide/connections.md
@@ -32,13 +32,13 @@ const myQueue = new Queue('myqueue', { connection });
const myWorker = new Worker('myqueue', async (job)=>{}, { connection });
```
-Note that in the second example, even though the ioredis instance is being reused, the worker will create a duplicated connection that it needs internally to make blocking connections. Please read on the [ioredis](https://github.com/luin/ioredis/blob/master/API.md) documentation on how to properly create an instance of `IORedis.`
+Note that in the second example, even though the ioredis instance is being reused, the worker will create a duplicated connection that it needs internally to make blocking connections. Consult the [ioredis](https://github.com/luin/ioredis/blob/master/API.md) documentation to learn how to properly create an instance of `IORedis`.
{% hint style="danger" %}
-When using ioredis connections, be carefull not to use the "keyPrefix" option in [ioredis](https://luin.github.io/ioredis/interfaces/CommonRedisOptions.html#keyPrefix) as this option is not compatible with BullMQ that provides its own key prefixing mechanism.
+When using ioredis connections, be careful not to use the "keyPrefix" option in [ioredis](https://luin.github.io/ioredis/interfaces/CommonRedisOptions.html#keyPrefix) as this option is not compatible with BullMQ, which provides its own key prefixing mechanism.
{% endhint %}
-If you can afford many connections, by all means just use them. Redis connections have quite low overhead, so you should not need to care about reusing connections unless your service provider is imposing you hard limitations.
+If you can afford many connections, by all means just use them. Redis connections have quite low overhead, so you should not need to care about reusing connections unless your service provider imposes hard limitations.
{% hint style="danger" %}
Make sure that your redis instance has the setting
diff --git a/docs/gitbook/guide/events.md b/docs/gitbook/guide/events.md
index 7276675c08..503ada05de 100644
--- a/docs/gitbook/guide/events.md
+++ b/docs/gitbook/guide/events.md
@@ -1,6 +1,6 @@
# Events
-All classes in BullMQ emit useful events that inform on the lifecycles of the jobs that are running in the queue. Every class is an EventEmitter and emits different events.
+All classes in BullMQ emit useful events that inform on the lifecycles of the jobs that are running in the queue. Every class is an `EventEmitter` and emits different events.
Some examples:
@@ -32,7 +32,7 @@ myWorker.on('failed', (job: Job) => {
});
```
-The events above are local for the workers that actually completed the jobs, however, in many situations you want to listen to all the events emitted by all the workers in one single place. For this you can use the [QueueEvents](../api/bullmq.queueevents.md) class:
+The events above are local for the workers that actually completed the jobs. However, in many situations you want to listen to all the events emitted by all the workers in one single place. For this you can use the [`QueueEvents`](../api/bullmq.queueevents.md) class:
```typescript
import { QueueEvents } from 'bullmq';
@@ -51,7 +51,7 @@ queueEvents.on(
);
```
-The QueueEvents class is implemented using [Redis streams](https://redis.io/topics/streams-intro). This has some nice properties, for example, it provides guarantees that the events are delivered and not lost during disconnections such as it would be the case with standard pub-sub.
+The `QueueEvents` class is implemented using [Redis streams](https://redis.io/topics/streams-intro). This has some nice properties; for example, it provides guarantees that the events are delivered and not lost during disconnections, as they would be with standard pub-sub.
{% hint style="danger" %}
The event stream is auto-trimmed so that its size does not grow too much, by default it is \~10.000 events, but this can be configured with the `streams.events.maxLen` option.
@@ -59,7 +59,7 @@ The event stream is auto-trimmed so that its size does not grow too much, by def
### Manual trim events
-In case you need to trim your events manually, you can use **trimEvents** method:
+In case you need to trim your events manually, you can use the **`trimEvents`** method:
{% tabs %}
{% tab title="TypeScript" %}
@@ -69,7 +69,7 @@ import { Queue } from 'bullmq';
const queue = new Queue('paint');
-await queue.trimEvents(10); // left 10 events
+await queue.trimEvents(10); // leaves 10 events
```
{% endtab %}
@@ -81,7 +81,7 @@ from bullmq import Queue
queue = Queue('paint')
-await queue.trimEvents(10) # left 10 events
+await queue.trimEvents(10) # leaves 10 events
```
{% endtab %}
diff --git a/docs/gitbook/guide/flows/README.md b/docs/gitbook/guide/flows/README.md
index 5649b94e95..6e98591f1f 100644
--- a/docs/gitbook/guide/flows/README.md
+++ b/docs/gitbook/guide/flows/README.md
@@ -4,15 +4,15 @@
Flows are a brand new feature in BullMQ, and although is implemented on a stable foundation there could be some unknown issues.
{% endhint %}
-BullMQ supports parent - child relationships between jobs. The basic idea is that a parent job will not be moved to the wait status, i.e. can be picked up by a worker, until all its children jobs have been processed successfully. Apart from that, a parent or a child job are no different from regular jobs.
+BullMQ supports parent-child relationships between jobs. The basic idea is that a parent job will not be moved to the wait status (i.e. where it could be picked up by a worker) until all its children jobs have been processed successfully. Apart from that, parent and child jobs are no different from regular jobs.
This functionality enables the creation of flows where jobs are the node of trees of arbitrary depth.
{% hint style="warning" %}
-Flows are added to a queue using the "_FlowProducer_" class.
+Flows are added to a queue using the `FlowProducer` class.
{% endhint %}
-In order to create "flows" you must use the [FlowProducer](https://api.docs.bullmq.io/classes/v4.FlowProducer.html) class. The method [_**add**_](https://api.docs.bullmq.io/classes/v4.FlowProducer.html#add) accepts an object with the following interface:
+In order to create "flows" you must use the [`FlowProducer`](https://api.docs.bullmq.io/classes/v4.FlowProducer.html) class. The [_**`add`**_](https://api.docs.bullmq.io/classes/v4.FlowProducer.html#add) method accepts an object with the following interface:
```typescript
interface FlowJob {
@@ -45,7 +45,7 @@ const flow = await flowProducer.add({
});
```
-The above code will add atomically 4 jobs, one to the "renovate" queue and 3 to the "steps" queue. When the 3 jobs in the "steps" queue are completed, the parent job in the "renovate" queue will be processed as a regular job.
+The above code will atomically add 4 jobs: one to the "renovate" queue, and 3 to the "steps" queue. When the 3 jobs in the "steps" queue are completed, the parent job in the "renovate" queue will be processed as a regular job.
The above call will return instances for all the jobs added to the queue.
@@ -69,7 +69,7 @@ const stepsQueue = new Worker('steps', async job => {
});
```
-We can implement a parent worker that sums the costs of the children's jobs using the "_getChildrenValues_" method. This method returns an object with job keys as keys and the result of that given job as a value:
+We can implement a parent worker that sums the costs of the children's jobs using the `getChildrenValues` method. This method returns an object with _job keys_ as keys and the _result of that given job_ as a value:
```typescript
import { Worker } from 'bullmq';
@@ -86,7 +86,7 @@ const stepsQueue = new Worker('renovate', async job => {
});
```
-It is possible to add as deep job hierarchies as needed, see the following example where jobs are depending on each other, this allows serial execution of jobs:
+It is possible to add as deep job hierarchies as needed. See the following example where jobs are depending on each other, allowing serial execution of jobs:
```typescript
import { FlowProducer } from 'bullmq';
@@ -111,28 +111,28 @@ const chain = await flowProducer.add({
In this case one job will be processed after the previous one has been completed.
{% hint style="info" %}
-The order of processing would be: 'chassis', 'wheels' and finally 'engine'.
+The order of processing would be: `chassis`, `wheels` and finally `engine`.
{% endhint %}
## Getters
-There are some special getters that can be used in order to get jobs related to a flow. First, we have a method in the Job class to get all the dependencies for a given job:
+There are some special getters that can be used in order to get jobs related to a flow. First, we have a method in the `Job` class to get all the dependencies for a given job:
```typescript
const dependencies = await job.getDependencies();
```
-it will return all the **direct** **dependencies**, i.e. the children of a given job.
+It will return all the **direct dependencies** (i.e. the children of a given job).
-The Job class also provides another method that we presented above to get all the values produced by the children of a given job:
+The `Job` class also provides another method that we presented above to get all the values produced by the children of a given job:
```typescript
const values = await job.getChildrenValues();
```
-Also, a new property is available in the Job class, _**parentKey,**_ with a fully qualified key for the job parent.
+Also, a new property is available in the `Job` class, _**`parentKey`**_, with a fully qualified key for the job parent.
-Finally, there is also a new state where a job can be in, "waiting-children", for parent jobs that have not yet had their children completed:
+Finally, there is also a new state that a job can be in, "waiting-children", for parent jobs that have not yet had their children completed:
```typescript
const state = await job.getState();
@@ -141,7 +141,7 @@ const state = await job.getState();
## Provide options
-When adding a flow it is also possible to provide an extra options object "**queueOptions"**, where you can add your specific options for every queue that is used in the flow. These options would affect each one of the jobs that are added to the flow using the FlowProducer.
+When adding a flow it is also possible to provide an extra **`queueOptions`** object, in which you can add your specific options for every queue that is used in the flow. These options would affect each one of the jobs that are added to the flow using the `FlowProducer`.
```typescript
import { FlowProducer } from 'bullmq';
@@ -188,7 +188,7 @@ When removing a job that is part of the flow there are several important conside
3. Since a job can be both a parent and a child in a large flow, both 1 and 2 will occur if removing such a job.
4. If any of the jobs that would be removed happen to be locked, none of the jobs will be removed, and an exception will be thrown.
-Apart from the considerations above, removing a job can simply be done by either using the Job or the Queue class:
+Apart from the considerations above, removing a job can simply be done by either using the `Job` or the `Queue` class:
```typescript
await job.remove();
diff --git a/docs/gitbook/guide/flows/adding-bulks.md b/docs/gitbook/guide/flows/adding-bulks.md
index 3f897fd234..dc40606e43 100644
--- a/docs/gitbook/guide/flows/adding-bulks.md
+++ b/docs/gitbook/guide/flows/adding-bulks.md
@@ -1,6 +1,6 @@
# Adding flows in bulk
-Sometimes it is necessary to add a complete bulk of flows atomically. For example, there could be a requirement that all the flows must be created or none of them. Also, adding a bulk of flows can be faster since it reduces the number of roundtrips to Redis:
+Sometimes it is necessary to atomically add flows in bulk. For example, there could be a requirement that all the flows must be created or none of them. Also, adding flows in bulk can be faster since it reduces the number of roundtrips to Redis:
```typescript
import { FlowProducer } from 'bullmq';
diff --git a/docs/gitbook/guide/flows/fail-parent.md b/docs/gitbook/guide/flows/fail-parent.md
index 17bc4cd46b..c6edf5577f 100644
--- a/docs/gitbook/guide/flows/fail-parent.md
+++ b/docs/gitbook/guide/flows/fail-parent.md
@@ -1,8 +1,8 @@
# Fail Parent
-In some situations, you need to fail a job when one of its children fail.
+In some situations, you may need to fail a job when _one of its children_ fails.
-The pattern to solve this requirement consists on using **failParentOnFailure** option.
+The pattern to solve this requirement consists of using the **`failParentOnFailure`** option.
```typescript
const flow = new FlowProducer({ connection });
@@ -41,7 +41,7 @@ const originalTree = await flow.add({
```
{% hint style="info" %}
-As soon as a _child_ with this option fails, the parent job will be moved to failed state. This option will be validated recursively, so a grandparent could be failed and so on.
+As soon as a _child_ with this option fails, the parent job will be moved to the failed state. This option will be validated recursively, so a grandparent could be failed and so on.
{% endhint %}
## Read more:
diff --git a/docs/gitbook/guide/flows/get-flow-tree.md b/docs/gitbook/guide/flows/get-flow-tree.md
index c310be27b1..9e22169d11 100644
--- a/docs/gitbook/guide/flows/get-flow-tree.md
+++ b/docs/gitbook/guide/flows/get-flow-tree.md
@@ -1,8 +1,8 @@
# Get Flow Tree
-In some situations, you need to get a job and all of its children, grandchildren and so on.
+In some situations, you need to get a job and all of its children, grandchildren, and so on.
-The pattern to solve this requirement consists on using [getFlow](https://api.docs.bullmq.io/classes/v4.FlowProducer.html#getFlow) method.
+The pattern to solve this requirement consists of using the [`getFlow`](https://api.docs.bullmq.io/classes/v4.FlowProducer.html#getFlow) method.
```typescript
const flow = new FlowProducer({ connection });
@@ -48,10 +48,10 @@ const { children, job } = tree;
```
{% hint style="info" %}
-Each _child_ may have a job property and in case they have children as well, they would have children property
+Each _child_ may have a `job` property and, in case it has children as well, a `children` property.
{% endhint %}
-You would also may need a way to limit that information if you have many children for one of the job nodes.
+You may also need a way to limit that information if you have many children for one of the job nodes.
```typescript
const limitedTree = await flow.getFlow({
diff --git a/docs/gitbook/guide/going-to-production.md b/docs/gitbook/guide/going-to-production.md
index 7d58e6e44a..cddabe634f 100644
--- a/docs/gitbook/guide/going-to-production.md
+++ b/docs/gitbook/guide/going-to-production.md
@@ -4,13 +4,13 @@ In this chapter, we will offer crucial considerations and tips to help you achie
### Persistence
-Since BullMQ is based on Redis, persistence needs to be configured manually. Many hosting solutions do not offer persistence by default, instead, it needs to be configured per instance. We recommend enabling Append-only-file, which provides a robust and fast solution, usually, 1 second per write is enough for most applications: [https://redis.io/docs/management/persistence/#aof-advantages](https://redis.io/docs/management/persistence/#aof-advantages).
+Since BullMQ is based on Redis, persistence needs to be configured manually. Many hosting solutions do not offer persistence by default; instead, it needs to be configured per instance. We recommend enabling [AOF (_Append Only File_)](https://redis.io/docs/management/persistence/#aof-advantages), which provides a robust and fast solution. Usually, 1 second per write is enough for most applications.
Even though persistence is very fast, it will have some effect on performance, so please make the proper benchmarks to know that it is not impacting your solution in a way that is not acceptable to you.
### Max memory policy
-Redis is used quite often as a cache, meaning that it will remove keys according to some defined policy when it reaches several levels of memory consumption. BullMQ on the other hand cannot work properly if Redis evicts keys arbitrarily. Therefore is very important to configure the `maxmemory` setting to `noeviction`. This is the **only** setting that guarantees the correct behavior of the queues.
+Redis is used quite often as a cache, meaning that it will remove keys according to some defined policy when it reaches certain levels of memory consumption. BullMQ, on the other hand, cannot work properly if Redis evicts keys arbitrarily. **Therefore it is very important to configure the `maxmemory` setting to `noeviction`.** This is the **only** setting that guarantees the correct behavior of the queues.
### Automatic reconnections
@@ -18,15 +18,15 @@ In a production setting, one of the things that are crucial for system robustnes
In order to understand how to properly handle disconnections it is important to understand some options provided by [IORedis](https://www.npmjs.com/package/ioredis#Auto-reconnect). The ones interesting for us are:
-* retryStrategy
-* maxRetriesPerRequest
-* enableOfflineQueue
+* `retryStrategy`
+* `maxRetriesPerRequest`
+* `enableOfflineQueue`
-It is also important to understand the difference in behavior that is often desired for Queue and Worker classes. Normally the operations performed using the Queue class should [fail quickly](../patterns/failing-fast-when-redis-is-down.md) if there is a temporal disconnection, whereas for Workers we want to wait indefinitely without raising any exception.
+It is also important to understand the difference in behavior that is often desired for `Queue` and `Worker` classes. Normally the operations performed using the `Queue` class should [fail quickly](../patterns/failing-fast-when-redis-is-down.md) if there is a temporal disconnection, whereas for `Worker`s we want to wait indefinitely without raising any exception.
-#### retryStrategy
+#### `retryStrategy`
-This option is used to determine the function used to perform retries. The retries will keep forever until the reconnection has been accomplished. For IORedis connections created inside BullMQ we use the following strategy:
+This option determines the function used to perform retries. The retries will continue indefinitely until the reconnection has been accomplished. For IORedis connections created inside BullMQ we use the following strategy:
```ts
retryStrategy: function (times: number) {
@@ -34,19 +34,19 @@ This option is used to determine the function used to perform retries. The retri
}
```
-In other words, it will retry using exponential backoff, with a minimum 1-second retry time and max of 20 seconds. This retryStrategy can easily be overridden by passing a custom one defining custom IORedis options.
+In other words, it will retry using exponential backoff, with a minimum retry time of 1 second and a maximum of 20 seconds. This `retryStrategy` can easily be overridden by passing a custom one in custom IORedis options.
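A quick way to see the clamping behavior is to evaluate the strategy standalone (this reproduces the formula described above, outside of any connection options):

```typescript
// Same formula as the default strategy: exp(times), clamped to [1s, 20s].
const retryStrategy = (times: number) =>
  Math.max(Math.min(Math.exp(times), 20 * 1000), 1000);

// Early attempts are clamped up to the 1-second minimum,
// later attempts are capped at the 20-second maximum.
console.log(retryStrategy(1)); // 1000
console.log(retryStrategy(15)); // 20000
```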
-#### maxRetriesPerRequest
+#### `maxRetriesPerRequest`
-This option sets a limit on the number of times a retry on a failed request will be performed. For Workers, it is important to set this option to **null**. Otherwise, the exceptions raised by Redis when calling certain commands could break the worker functionality. When instantiating a Worker this option will always be set to null by default, but it could be overridden, either if passing an existing IORedis instance or by passing a different value for this option when instantiating the Worker. In both cases BullMQ will output a warning, please make sure to address this warning as it can have several unintended consequences.
+This option sets a limit on the number of times a retry on a failed request will be performed. For `Worker`s, it is important to set this option to **`null`**. Otherwise, the exceptions raised by Redis when calling certain commands could break the worker functionality. When instantiating a `Worker` this option will always be set to `null` by default, but it could be overridden, either by passing an existing IORedis instance or by passing a different value for this option when instantiating the `Worker`. In both cases BullMQ will output a warning; please make sure to address this warning as it can have several unintended consequences.
-#### enableOfflineQueue
+#### `enableOfflineQueue`
-IORedis provides a small offline queue that is used to queue commands while the connection is offline. You will probably want to disable this queue for the Queue instance, but leave it as is for Worker instances. That will make the Queue calls [fail quickly](../patterns/failing-fast-when-redis-is-down.md) while leaving the Workers to wait as needed until the connection has been re-established.
+IORedis provides a small offline queue that is used to queue commands while the connection is offline. You will probably want to disable this queue for the `Queue` instance, but leave it as is for `Worker` instances. That will make the `Queue` calls [fail quickly](../patterns/failing-fast-when-redis-is-down.md) while leaving the `Worker`s to wait as needed until the connection has been re-established.
### Log errors
-It is really useful to attach a handler for the error event which will be triggered when there are connection issues, this will be helpful when debugging your queues and prevent "unhandled errors".
+It is really useful to attach a handler for the error event, which will be triggered when there are connection issues. This helps when debugging your queues and prevents "unhandled errors".
```typescript
worker.on("error", (err) => {
@@ -62,7 +62,7 @@ queue.on("error", (err) => {
### Gracefully shut-down workers
-Since your workers will run on servers, it is unavoidable that these servers will need to be restarted from time to time. As your workers may be processing jobs when the server is about to restart, it is important to properly close the workers to minimize the risk of stalled jobs. If a worker is killed without waiting for their jobs to complete, these jobs will be marked as stalled and processed automatically when new workers come online (with a waiting time of about 30 seconds by default). However it is better to avoid having stalled jobs, and as mentioned this can be done by closing the workers when the server is going to be restarted. In NodeJS you can accomplish this by listening to the SIGINT signal like this:
+Since your workers will run on servers, it is unavoidable that these servers will need to be restarted from time to time. As your workers may be processing jobs when the server is about to restart, it is important to properly close the workers to minimize the risk of stalled jobs. If a worker is killed without waiting for their jobs to complete, these jobs will be marked as stalled and processed automatically when new workers come online (with a waiting time of about 30 seconds by default). However it is better to avoid having stalled jobs, and as mentioned this can be done by closing the workers when the server is going to be restarted. In NodeJS you can accomplish this by listening to the `SIGINT` signal like this:
```typescript
process.on("SIGINT", async () => {
@@ -74,15 +74,15 @@ Keep in mind that the code above does not guarantee that the jobs will never end
### Auto-job removal
-By default, all jobs processed by BullMQ will be either completed or failed and kept forever. This behavior is not usually the most desired, so you would like to configure a maximum number of jobs to keep. The most common configuration is to keep a handful of completed jobs, just to have some visibility of the latest completed, whereas you can keep either all of the failed jobs or a very large number in case you want to manually retry them or perform a deeper debugging study on the reason why the jobs failed.
+By default, all jobs processed by BullMQ will be either _completed_ or _failed_, and kept forever. This behavior is not usually the most desired, so you will likely want to configure a maximum number of jobs to keep. The most common configuration is to keep a handful of completed jobs, just to have some visibility of the latest completed ones, whereas you can keep either all of the failed jobs or a very large number of them, in case you want to manually retry them or perform a deeper debugging study on the reason why the jobs failed.
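As a sketch of such a configuration, BullMQ's `removeOnComplete` and `removeOnFail` job options accept a maximum count of jobs to keep (the numbers below are illustrative):

```typescript
// Keep only the latest 100 completed jobs, but up to 5000 failed ones
// for later inspection or manual retries.
const jobOpts = {
  removeOnComplete: { count: 100 },
  removeOnFail: { count: 5000 },
};

// e.g. await myQueue.add('paint', { color: 'red' }, jobOpts);
```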
You can read more about how to configure auto removal [here](https://docs.bullmq.io/guide/queues/auto-removal-of-jobs).
### Protecting data
-Another important point to think about when deploying for production is the fact that the data field of the jobs is stored in clear text. The best is to avoid storing sensitive data in the job altogether., but if this is not possible, then it is highly recommended to encrypt the part of the data that is sensible before it is added to the queue.
+Another important point to think about when deploying to production is the fact that the data field of the jobs is stored in clear text. **It is best to avoid storing sensitive data in the job altogether.** However, if this is not possible, then it is highly recommended to encrypt the part of the data that is sensitive before it is added to the queue.
-Please do not take security lightly as it should be a major concern today, and the risks of losing data and economic damage to your business are real and very serious.
+**Please do not take security lightly as it should be a major concern today, and the risks of losing data and economic damage to your business are real and very serious.**
### Unhandled exceptions and rejections
@@ -98,8 +98,4 @@ process.on("unhandledRejection", (reason, promise) => {
// Handle the error safely
logger.error({ promise, reason }, "Unhandled Rejection at: Promise");
});
-
```
-
-
-
diff --git a/docs/gitbook/guide/jobs/README.md b/docs/gitbook/guide/jobs/README.md
index a0dd1a1ed5..cf6419e414 100644
--- a/docs/gitbook/guide/jobs/README.md
+++ b/docs/gitbook/guide/jobs/README.md
@@ -1,6 +1,6 @@
# Jobs
-Queues can hold different types of jobs which determine how and when they are processed. In this section, we will describe them in detail.
+Queues can hold different types of jobs, which determine how and when they are processed. In this section, we will describe them in detail.
An important thing to consider is that you can mix the different job types in the same queue, so you can add FIFO jobs, and at any moment add a LIFO or a delayed job.
diff --git a/docs/gitbook/guide/jobs/delayed.md b/docs/gitbook/guide/jobs/delayed.md
index 5d32a73300..137c289603 100644
--- a/docs/gitbook/guide/jobs/delayed.md
+++ b/docs/gitbook/guide/jobs/delayed.md
@@ -1,10 +1,10 @@
# Delayed
-Delayed jobs are a special type of job that instead of being processed as fast as possible is placed on a special "delayed set" where it will wait until the delay time has passed and then it is processed as a regular job.
+Delayed jobs are a special type of job that is placed into a special "delayed set", instead of being processed as fast as possible. After the delay time has passed, the job is processed as a regular job.
-In order to add delayed jobs to the queue, simply use the "delay" option with the amount of time in milliseconds that you want to delay the job with.
+In order to add delayed jobs to the queue, use the `delay` option with the amount of time (in milliseconds) that you want to delay the job by.
-Note that it is not guaranteed that the job will be processed at the exact delayed time specified, as it depends on how busy the workers are when the time has passed and how many other delayed jobs are scheduled at that exact time. In practice, however, the delay time is quite accurate in most cases.
+Note that it is not guaranteed that the job will be processed at the _exact_ delayed time specified, as it depends on how busy the workers are when the time has passed, and how many other delayed jobs are scheduled at that exact time. In practice, however, the delay time is quite accurate in most cases.
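Since `delay` is expressed in milliseconds relative to now, a job meant to run at a specific future date needs its delay computed first. A minimal sketch (the `delayUntil` helper is hypothetical, not part of BullMQ):

```typescript
// Hypothetical helper: milliseconds to wait so that a job runs at `target`.
// Clamped to 0 so a date already in the past runs as soon as possible.
function delayUntil(target: Date, now: Date = new Date()): number {
  return Math.max(0, target.getTime() - now.getTime());
}

// The result can then be passed as the `delay` job option, e.g.:
// await myQueue.add('house', { color: 'white' }, { delay: delayUntil(someDate) });
```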
This is an example of how to add delayed jobs to a queue:
@@ -28,7 +28,7 @@ await myQueue.add('house', { color: 'white' }, { delay });
## Change delay
-If you want to change the delay after inserting a delayed job, just use **changeDelay** method. For example, let's say you want to change the delay from 2000 to 4000 milliseconds:
+If you want to change the delay _after_ inserting a delayed job, use the **`changeDelay`** method. For example, let's say you want to change the delay from 2000 to 4000 milliseconds:
```typescript
const job = await Job.create(queue, 'test', { foo: 'bar' }, { delay: 2000 });
@@ -37,7 +37,7 @@ await job.changeDelay(4000);
```
{% hint style="warning" %}
-Take in count that your job must be into delayed state when you change the delay.
+Only jobs currently in the **delayed** state can have their delay changed.
{% endhint %}
## Read more:
diff --git a/docs/gitbook/guide/jobs/fifo.md b/docs/gitbook/guide/jobs/fifo.md
index 8059858eee..ee1ebf9298 100644
--- a/docs/gitbook/guide/jobs/fifo.md
+++ b/docs/gitbook/guide/jobs/fifo.md
@@ -4,7 +4,7 @@ description: First-In, First-Out
# FIFO
-The first type of job we are going to describe is the FIFO (First-In, First-Out) type. This is the standard type when adding jobs to a queue. The jobs are processed in the same order as they are inserted into the queue.
+The first type of job we are going to describe is the FIFO (_First-In, First-Out_) type. This is the standard type when adding jobs to a queue. The jobs are processed in the same order as they are inserted into the queue.
This order is preserved independently on the number of processors you have; however, if you have more than one worker or a concurrency factor larger than 1, even though the workers will start the jobs in order, they may be completed in a slightly different order, since some jobs may take more time to complete than others.
@@ -31,7 +31,7 @@ In the example above all completed jobs will be removed automatically and the la
## Default job options
-Quite often, you will want to provide the same job options to all the jobs that you add to the Queue. In this case, you can use the "defaultJobOptions" option when instantiating the Queue class:
+Quite often, you will want to provide the same job options to all the jobs that you add to the queue. In this case, you can use the `defaultJobOptions` option when instantiating the `Queue` class:
```typescript
const queue = new Queue('Paint', { defaultJobOptions: {
diff --git a/docs/gitbook/guide/jobs/getters.md b/docs/gitbook/guide/jobs/getters.md
index 7cadcb996c..a4cba3ebd2 100644
--- a/docs/gitbook/guide/jobs/getters.md
+++ b/docs/gitbook/guide/jobs/getters.md
@@ -38,7 +38,16 @@ counts = await myQueue.getJobCounts('wait', 'completed', 'failed')
{% endtab %}
{% endtabs %}
-The available status are: _completed, failed, delayed, active, wait, waiting-children, prioritized, _paused_ and _repeat._
+The available statuses are:
+- _completed_,
+- _failed_,
+- _delayed_,
+- _active_,
+- _wait_,
+- _waiting-children_,
+- _prioritized_,
+- _paused_, and
+- _repeat_.
#### Get Jobs
diff --git a/docs/gitbook/guide/jobs/job-data.md b/docs/gitbook/guide/jobs/job-data.md
index 4f5739694c..38eae8f0f7 100644
--- a/docs/gitbook/guide/jobs/job-data.md
+++ b/docs/gitbook/guide/jobs/job-data.md
@@ -1,6 +1,6 @@
# Job Data
-Every job can have its own custom data. The data is stored in the **data** attribute of the job:
+Every job can have its own custom data. The data is stored in the **`data`** attribute of the job:
{% tabs %}
{% tab title="TypeScript" %}
@@ -34,7 +34,7 @@ job.data # { color: 'red' }
## Update data
-If you want to change the data after inserting a job, just use the **updateData** method. For example:
+If you want to change the data after inserting a job, just use the **`updateData`** method. For example:
{% tabs %}
{% tab title="TypeScript" %}
diff --git a/docs/gitbook/guide/jobs/job-ids.md b/docs/gitbook/guide/jobs/job-ids.md
index d283036253..425149b4e6 100644
--- a/docs/gitbook/guide/jobs/job-ids.md
+++ b/docs/gitbook/guide/jobs/job-ids.md
@@ -1,16 +1,16 @@
# Job Ids
-All jobs in BullMQ need to have a unique job id. These id is used to store construct a key where the data is stored in Redis and as a pointer to the job as it is being moving around the different states it can be during its lifetime.
+All jobs in BullMQ need to have a unique job id. This id is used to construct a key to store the data in Redis, and as a pointer to the job as it is moved between the different states it can be in during its lifetime.
-By default job ids are generated automatically as an increasing counter, however it is also possible to specify a custom id.
+By default, job ids are generated automatically as an increasing counter; however, it is also possible to specify a _custom id_.
The main reason to be able to specify a custom id is in cases when you want to avoid duplicated jobs. Since ids must be unique, if you add a job with an existing id then that job will just be ignored and not added to the queue at all.
{% hint style="danger" %}
-Jobs that are removed from the queue, either manually or when using settings such as removeOnComplete/Failed will not be considered as duplicates meaning that you can add the same job id many times over as long as the previous job has already been removed from the queue.
+Jobs that are removed from the queue (either manually, or when using settings such as `removeOnComplete`/`removeOnFailed`) will **not** be considered as duplicates, meaning that you can add the same job id many times over as long as the previous job has already been removed from the queue.
{% endhint %}
-In order to specify a custom job id just use the jobId option when adding jobs to the queue:
+In order to specify a custom job id, use the `jobId` option when adding jobs to the queue:
```typescript
await myQueue.add(
diff --git a/docs/gitbook/guide/jobs/lifo.md b/docs/gitbook/guide/jobs/lifo.md
index 42a9673cfe..2ded46d8b5 100644
--- a/docs/gitbook/guide/jobs/lifo.md
+++ b/docs/gitbook/guide/jobs/lifo.md
@@ -4,7 +4,7 @@ description: 'Last-in, First Out'
# LIFO
-In some cases, it is useful to process the jobs in a LIFO \(Last-in, First-Out\) fashion. This means that the newest jobs added to the queue will be processed before the older ones.
+In some cases, it is useful to process jobs in a LIFO \(_Last-in, First-Out_\) fashion. This means that the newest jobs added to the queue will be processed **before** the older ones.
```typescript
import { Queue } from 'bullmq';
diff --git a/docs/gitbook/guide/jobs/prioritized.md b/docs/gitbook/guide/jobs/prioritized.md
index 31a70fcdca..335eaffa1e 100644
--- a/docs/gitbook/guide/jobs/prioritized.md
+++ b/docs/gitbook/guide/jobs/prioritized.md
@@ -1,14 +1,14 @@
# Prioritized
-Jobs can also include a priority option. Using priorities, job's processing order will be affected by the specified priority instead of following a FIFO or LIFO pattern.
+Jobs can also include a `priority` option. Using priorities, job processing order will be affected by the specified `priority` instead of following a FIFO or LIFO pattern.
{% hint style="warning" %}
-Adding prioritized jobs is a slower operation than the other types of jobs, with a complexity O(log(n)) relative to the number of jobs in prioritized set in the Queue.
+Adding prioritized jobs is a slower operation than the other types of jobs, with a complexity `O(log(n))` relative to the number of jobs in the prioritized set in the queue.
{% endhint %}
-Note that the priorities go from 1 to 2 097 152, whereas a lower number is always a higher priority than higher numbers.
+Note that the priorities go from `1` to `2 097 152`, where a lower number always means a **higher** priority.
-Jobs without a priority assigned will get the least priority.
+Jobs without a `priority` assigned will get the least priority.
```typescript
import { Queue } from 'bullmq';
@@ -23,11 +23,11 @@ await myQueue.add('wall', { color: 'blue' }, { priority: 7 });
// finally pink.
```
-If several jobs are added with the same priority value, then the jobs within that priority will be processed in FIFO (First in first out) fashion.
+If several jobs are added with the same priority value, then the jobs within that priority will be processed in [FIFO (_First in, first out_)](fifo.md) fashion.
## Change priority
-If you want to change the priority after inserting a job, just use the **changePriority** method. For example, let's say that you want to change the priority from 16 to 1:
+If you want to change the `priority` after inserting a job, use the **`changePriority`** method. For example, let's say that you want to change the `priority` from `16` to `1`:
```typescript
const job = await Job.create(queue, 'test2', { foo: 'bar' }, { priority: 16 });
@@ -37,7 +37,7 @@ await job.changePriority({
});
```
-or if you want to use lifo option:
+or if you want to use the [LIFO (_Last In, First Out_)](lifo.md) option:
```typescript
const job = await Job.create(queue, 'test2', { foo: 'bar' }, { priority: 16 });
@@ -49,7 +49,7 @@ await job.changePriority({
## Get Prioritized jobs
-As prioritized is a new state. You must use **getJobs** or **getPrioritized** method as:
+As prioritized is a new state, you must use the **`getJobs`** or **`getPrioritized`** methods:
```typescript
const jobs = await queue.getJobs(['prioritized']);
diff --git a/docs/gitbook/guide/jobs/removing-job.md b/docs/gitbook/guide/jobs/removing-job.md
index 4e36f87295..390a3dcf6f 100644
--- a/docs/gitbook/guide/jobs/removing-job.md
+++ b/docs/gitbook/guide/jobs/removing-job.md
@@ -28,7 +28,7 @@ await job.remove()
{% endtab %}
{% endtabs %}
-{% hint style="info" %}
+{% hint style="warning" %}
Locked jobs (in active state) can not be removed. An error will be thrown.
{% endhint %}
@@ -40,15 +40,15 @@ There are 2 possible cases:
2. There are pending dependencies; in this case the parent is kept in waiting-children status.
{% hint style="info" %}
-Take in consideration that processed values will be kept in processed hset from the parent if this child is in **completed** state at the time when it's removed.
+Take into consideration that processed values will be kept in the parent's processed `hset` if this child is in **completed** state at the time when it's removed.
{% endhint %}
## Having pending dependencies
-We may try to remove all its pending descendents first.
+We may try to remove all its pending descendants first.
{% hint style="warning" %}
-In case one of the children is locked, it will stop the deletion process.
+If any of the children are locked, the deletion process will be stopped.
{% endhint %}
### Read more:
diff --git a/docs/gitbook/guide/jobs/repeatable.md b/docs/gitbook/guide/jobs/repeatable.md
index 1dfbf8de16..f6e37914cd 100644
--- a/docs/gitbook/guide/jobs/repeatable.md
+++ b/docs/gitbook/guide/jobs/repeatable.md
@@ -7,14 +7,12 @@ The Repeatable Job configuration is not a job, so it will not show up in methods
Every time a repeatable job is picked up for processing, the next repeatable job is added to the queue with a proper delay. Repeatable jobs are thus nothing more than delayed jobs that are added to the queue according to some settings.
{% hint style="info" %}
-Repeatable jobs are just delayed jobs, therefore you also need a QueueScheduler instance to schedule the jobs accordingly.
-{% endhint %}
+As Repeatable jobs are just delayed jobs, prior to BullMQ 2.0 you also needed a `QueueScheduler` instance to schedule the jobs accordingly.
-{% hint style="danger" %}
-From BullMQ 2.0 and onwards, the QueueScheduler is not needed anymore.
+However, from BullMQ 2.0 onwards, the `QueueScheduler` is not needed anymore.
{% endhint %}
-There are two ways to specify a repeatable's job repetition pattern, either with a cron expression (using [cron-parser](https://www.npmjs.com/package/cron-parser)'s "unix cron w/ optional seconds" format), or specifying a fix amount of milliseconds between repetitions.
+There are two ways to specify a repeatable's job repetition pattern, either with a cron expression (using [cron-parser](https://www.npmjs.com/package/cron-parser)'s "unix cron w/ optional seconds" format), or specifying a fixed amount of milliseconds between repetitions.
```typescript
import { Queue, QueueScheduler } from 'bullmq';
@@ -50,9 +48,9 @@ There are some important considerations regarding repeatable jobs:
- Bull is smart enough not to add the same repeatable job if the repeat options are the same.
- If there are no workers running, repeatable jobs will not accumulate next time a worker is online.
-- repeatable jobs can be removed using the [removeRepeatable](https://api.docs.bullmq.io/classes/v4.Queue.html#removeRepeatable) method or [removeRepeatableByKey](https://api.docs.bullmq.io/classes/v4.Queue.html#removeRepeatableByKey).
+- Repeatable jobs can be removed using the [`removeRepeatable`](https://api.docs.bullmq.io/classes/v4.Queue.html#removeRepeatable) or [`removeRepeatableByKey`](https://api.docs.bullmq.io/classes/v4.Queue.html#removeRepeatableByKey) methods.
-All repeatable jobs have a repeatable job key that holds some metadata of the repeatable job itself. It is possible to retrieve all the current repeatable jobs in the queue calling [getRepeatableJobs](https://api.docs.bullmq.io/classes/v4.Queue.html#getRepeatableJobs):
+All repeatable jobs have a repeatable job key that holds some metadata of the repeatable job itself. It is possible to retrieve all the current repeatable jobs in the queue by calling [`getRepeatableJobs`](https://api.docs.bullmq.io/classes/v4.Queue.html#getRepeatableJobs):
```typescript
import { Queue } from 'bullmq';
@@ -62,7 +60,7 @@ const myQueue = new Queue('Paint');
const repeatableJobs = await myQueue.getRepeatableJobs();
```
-Since repeatable jobs are delayed jobs, and the repetition is achieved by generating a new delayed job precisely before the current job starts processing. The jobs require unique ids which avoid duplicates, which implies that the standard jobId option does not work the same as with regular jobs. With repeatable jobs the jobId is used to generate the unique ids, for instance if you have 2 repeatable jobs with the same name and options you could use the jobId to have 2 different repeatable jobs:
+The standard `jobId` option does not work the same as with regular jobs. Because repeatable jobs are _delayed_ jobs, and the repetition is achieved by generating a new delayed job precisely before the current job starts processing, the jobs require unique ids to avoid being considered duplicates. Therefore, with repeatable jobs, the `jobId` option is used to _generate_ the unique ids (rather than itself being the unique id). For instance, if you have two repeatable jobs with the same name and options, you could use distinct `jobId`s to differentiate them:
```typescript
import { Queue, QueueScheduler } from 'bullmq';
@@ -98,7 +96,7 @@ await myQueue.add(
## Slow repeatable jobs
-It is worth to mention the case where the repeatable frequency is greater than the time it takes to process a job.
+It is worth mentioning the case where the repeatable frequency is greater than the time it takes to process a job.
For instance, let's say that you have a job that is repeated every second, but the process of the job itself takes 5 seconds. As explained above, repeatable jobs are just delayed jobs, so this means that the next repeatable job will be added as soon as the next job is starting to be processed.
@@ -168,11 +166,11 @@ const worker = new Worker(
```
{% hint style="warning" %}
-As you may notice, repeat strategy setting should be provided in Queue and Worker classes. The reason we need in both places is because the first time we add the job to the Queue we need to calculate when is the next iteration, but after that the Worker takes over and we use the worker settings.
+As you may notice, the repeat strategy setting should be provided in both the `Queue` and `Worker` classes. The reason we need it in **both** places is that the first time we add the job to the `Queue` we need to calculate the next iteration, but after that the `Worker` takes over and we use the worker settings.
{% endhint %}
{% hint style="info" %}
-Repeat strategy function receives an optional jobName parameter as the 3rd one.
+The repeat strategy function receives an optional `jobName` third parameter.
{% endhint %}
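A repeat strategy is essentially a function mapping the current timestamp (plus the job's repeat options and the optional `jobName`) to the timestamp of the next run. A minimal sketch of such a function (the `everyStrategy` name and the `RepeatOptions` shape here are illustrative, not the library's own types):

```typescript
interface RepeatOptions {
  every?: number; // interval in milliseconds (illustrative shape)
}

// Illustrative repeat strategy: align the next run to the next `every`
// boundary after the current timestamp. Note the optional third
// `jobName` parameter mentioned above.
function everyStrategy(
  millis: number,
  opts: RepeatOptions,
  jobName?: string,
): number {
  const every = opts.every ?? 1000;
  return (Math.floor(millis / every) + 1) * every;
}
```

The same function would then be passed as the repeat strategy setting in both the `Queue` and `Worker` options, as explained above.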
## Read more:
diff --git a/docs/gitbook/guide/jobs/stalled.md b/docs/gitbook/guide/jobs/stalled.md
index 2c2922a45d..afa09cd002 100644
--- a/docs/gitbook/guide/jobs/stalled.md
+++ b/docs/gitbook/guide/jobs/stalled.md
@@ -1,20 +1,20 @@
# Stalled
{% hint style="danger" %}
-From BullMQ 2.0 and onwards, the QueueScheduler is not needed anymore. For manually fetching jobs check this [pattern](https://docs.bullmq.io/patterns/manually-fetching-jobs#checking-for-stalled-jobs)
+From BullMQ 2.0 and onwards, the `QueueScheduler` is not needed anymore. For manually fetching jobs, check this [pattern](https://docs.bullmq.io/patterns/manually-fetching-jobs#checking-for-stalled-jobs).
{% endhint %}
-When a job is in an active state, i.e., it is being processed by a worker, it needs to continuously update the queue to notify that the worker is still working on the job. This mechanism prevents a worker that crashes or enters an endless loop from keeping a job in an active state forever.
+When a job is in an active state (i.e. it is being processed by a worker), it needs to continuously update the queue to notify that the worker is still working on the job. This mechanism prevents a worker that crashes or enters an endless loop from keeping a job in an active state forever.
When a worker is not able to notify the queue that it is still working on a given job, that job is moved back to the waiting list, or to the failed set. We then say that the job has stalled and the queue will emit the 'stalled' event.
{% hint style="info" %}
-There is not a 'stalled' state, only a 'stalled' event emitted when a job is automatically moved from active to waiting state.
+There is not a 'stalled' state, only a 'stalled' event emitted when a job is automatically moved from _active_ to _waiting_ state.
{% endhint %}
-If a job stalls more than a predefined limit (see the maxStalledCount option [https://api.docs.bullmq.io/interfaces/v4.WorkerOptions.html#maxStalledCount](https://api.docs.bullmq.io/interfaces/v4.WorkerOptions.html#maxStalledCount)), the job will be failed permanently with the error "_job stalled more than allowable limit_". The default is 1, as stalled jobs should be a rare occurrence, but you can increase this number if needed.
+If a job stalls more than a predefined limit (see the [`maxStalledCount` option](https://api.docs.bullmq.io/interfaces/v4.WorkerOptions.html#maxStalledCount)), the job will be failed permanently with the error "_job stalled more than allowable limit_". The default is 1, as stalled jobs should be a rare occurrence, but you can increase this number if needed.
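Both knobs live in the worker options. A sketch of a more tolerant configuration (the connection details are placeholders, assuming a local Redis; values are illustrative):

```typescript
// Worker options sketch (illustrative values):
const workerOpts = {
  connection: { host: 'localhost', port: 6379 },
  // Fail the job permanently only after it has stalled 3 times
  // (the default is 1):
  maxStalledCount: 3,
  // How often (in ms) the stalled-job check runs (default is 30000):
  stalledInterval: 30000,
};

// Usage sketch: new Worker('Paint', processor, workerOpts)
```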
-In order to avoid stalled jobs, make sure that your worker does not keep Node.js event loop too busy, the default max stalled check duration is 30 seconds, so as long as you do not perform CPU operations exceeding that value you should not get stalled jobs.
+In order to avoid stalled jobs, make sure that your worker does not keep the Node.js event loop too busy. The default max stalled check duration is 30 seconds, so as long as you do not perform CPU operations exceeding that value you should not get stalled jobs.
Another way to reduce the chance of stalled jobs is using so-called "sandboxed" processors. In this case, the workers will spawn new separate Node.js processes, running separately from the main process.
diff --git a/docs/gitbook/guide/metrics/metrics.md b/docs/gitbook/guide/metrics/metrics.md
index 9126fafec3..0657d25f65 100644
--- a/docs/gitbook/guide/metrics/metrics.md
+++ b/docs/gitbook/guide/metrics/metrics.md
@@ -1,9 +1,9 @@
# Metrics
BullMQ provides a simple metrics gathering functionality that allows you to track the performance of your queues.
-The workers can count the number of jobs they have processed per minute and store this data in a list to be consumed later.
+Workers can count the number of jobs they have processed per minute and store this data in a list to be consumed later.
-You enable it on the worker settings by specifying how many data points you want to keep, we recommend 2 weeks of metrics data which should take a very small amount of space, just around 120Kb of RAM per queue.
+You enable it on the worker settings by specifying how many data points you want to keep. We recommend 2 weeks of metrics data, which should take a very small amount of space: just around 120Kb of RAM per queue.
```typescript
import { Worker, MetricsTime } from 'bullmq';
@@ -20,7 +20,7 @@ const myWorker = new Worker('Paint', {
You need to use the same setting on all your workers to get consistent metrics.
{% endhint %}
-In order to get the metrics you just use the `getMetrics` method on the Queue class. You can choose to get the metrics for the completed or failed jobs:
+In order to get the metrics, use the `getMetrics` method on the `Queue` class. You can choose to get the metrics for the _completed_ or _failed_ jobs:
```typescript
import { Queue } from 'bullmq';
@@ -43,5 +43,5 @@ const metrics = await queue.getMetrics('completed');
*/
```
-Note that the `getMetrics` method also accepts a start and end argument (0 and -1 by default), that you can
+Note that the `getMetrics` method also accepts a `start` and `end` argument (`0` and `-1` by default), which you can
use if you want to implement pagination.
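Assuming `start` and `end` are zero-based inclusive indices into the metrics list (with `-1` meaning the last element, as the defaults suggest), pagination is just index arithmetic. A small sketch (the `metricsRange` helper is hypothetical):

```typescript
// Hypothetical helper: translate a page number into the inclusive
// (start, end) index pair accepted by queue.getMetrics(type, start, end).
function metricsRange(page: number, pageSize: number): [number, number] {
  const start = page * pageSize;
  return [start, start + pageSize - 1];
}

// e.g. the third page of 50 data points:
// const metrics = await queue.getMetrics('completed', ...metricsRange(2, 50));
```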
diff --git a/docs/gitbook/guide/migration-to-newer-versions.md b/docs/gitbook/guide/migration-to-newer-versions.md
index e28db0c7b7..0b8ca2a175 100644
--- a/docs/gitbook/guide/migration-to-newer-versions.md
+++ b/docs/gitbook/guide/migration-to-newer-versions.md
@@ -10,19 +10,19 @@ The strategies needed vary depending on the nature of the upgrade. Regardless, w
## General advice
-When upgrading BullMQ, always consult the Changelog. It helps determine the extent of the changes and flags crucial considerations before upgrading.
+When upgrading BullMQ, always consult the _Changelog_. It helps determine the extent of the changes and flags crucial considerations before upgrading.
Avoid making large leaps between versions. If you're currently on version 1.3.7, for instance, a jump to version 4.2.6 may not be advisable. Upgrade incrementally whenever possible. Start with as many bugfix releases as possible, then proceed to new features, and finally, the major releases that encompass breaking changes.
## Bugfix upgrade
-Bugfix releases increase only the micro version number according to SemVer—for instance, an upgrade from 3.14.4 to 3.14.7. Bugfix upgrades require no special strategies; simply update your instances to the latest version without changing your code or deployment. While it's not critical that all instances run on the same version, we recommend it for consistency.
+Bugfix releases increase only the micro version number according to [SemVer (_Semantic Versioning_)](https://semver.org/) (for instance, an upgrade from 3.14.4 to 3.14.7). Bugfix upgrades require no special strategies; simply update your instances to the latest version without changing your code or deployment. While it's not critical that all instances run on the same version, we recommend it for consistency.
## New feature upgrade
-Following the SemVer specification, new features result in an increase in the minor version number, like going from 3.14.7 to 3.20.5. Generally, you can treat feature upgrades like bugfix upgrades—update all your instances to the latest version.
+Following the SemVer specification, new features result in an increase in the minor version number (like going from 3.14.7 to 3.20.5). Generally, you can treat feature upgrades like bugfix upgrades — update all your instances to the latest version.
-However, if you're also upgrading your code to utilize a new feature, ensure it's backward compatible with the older BullMQ version. Otherwise, an older Worker might stop functioning if a new Queue adds jobs leveraging a feature the older Worker doesn't understand.
+However, if you're also upgrading your code to utilize a new feature, ensure it's backward compatible with the older BullMQ version. Otherwise, an older `Worker` might stop functioning if a new `Queue` adds jobs leveraging a feature the older `Worker` doesn't understand.
The strategy here is to first upgrade all your instances to the version featuring the new functionality. After confirming all instances run the new version, proceed to deploy your code depending on those new features.
@@ -32,13 +32,15 @@ Occasionally, unavoidable changes incompatible with previous versions are made.
### API breaking changes
-API breaking changes could involve altered method parameters, removals, or different operational methods. These changes are usually straightforward to apply—you can run your BullMQ-dependent unit tests and address issues based on these changes. If you're using TypeScript, compilation errors will likely surface. Always read the [changelog](../changelog.md) for essential information about these changes.
+API breaking changes could involve altered method parameters, removals, or different operational methods. These changes are usually straightforward to apply — you can run your BullMQ-dependent unit tests and address issues based on these changes. If you're using TypeScript, compilation errors will likely surface. Always read the [changelog](../changelog.md) for essential information about these changes.
### Data structure breaking changes
-Data structure changes, which alter the queue's underlying structure, are more challenging. They can be either **additive** (introducing new data structures that older BullMQ versions don't understand) or **destructive** (changing or eradicating older data structures).
+Data structure changes, which alter the queue's underlying structure, are more challenging. They can be either:
+- **additive** (introducing new data structures that older BullMQ versions don't understand), or
+- **destructive** (changing or eradicating older data structures).
-For additive changes, you could simply upgrade all instances to the new version—they should apply the change and continue working without issues, akin to a [new feature upgrade](migration-to-newer-versions.md#new-features-upgrade).
+For additive changes, you could simply upgrade all instances to the new version — they should apply the change and continue working without issues, akin to a [new feature upgrade](migration-to-newer-versions.md#new-features-upgrade).
Destructive changes are the most demanding, as these fundamental alterations may make older versions unworkable, making rollback impossible if the upgrade fails. The [changelog](../changelog.md) will provide crucial information to guide you through this type of upgrade.
@@ -48,7 +50,7 @@ For the most demanding upgrades, you might find these strategies useful:
### Pause/Upgrade/Unpause
-Since BullMQ supports global pause, one possible strategy, if suitable for your business case, is to pause the queue(s), wait until all current queued jobs have been processed, then perform the upgrade. Once all instances running BullMQ have been upgraded, you can unpause and let new jobs be processed by the new workers. Be aware this strategy is less useful if breaking changes affect Queue instances. Always consult the changelog for this type of information.
+Since BullMQ supports global pause, one possible strategy, if suitable for your business case, is to pause the queue(s), wait until all current queued jobs have been processed, then perform the upgrade. Once all instances running BullMQ have been upgraded, you can unpause and let new jobs be processed by the new workers. Be aware this strategy is less useful if breaking changes affect `Queue` instances. Always consult the changelog for this type of information.
### Use new queues altogether
diff --git a/docs/gitbook/guide/nestjs/README.md b/docs/gitbook/guide/nestjs/README.md
index a98109b082..a7bea3d9d6 100644
--- a/docs/gitbook/guide/nestjs/README.md
+++ b/docs/gitbook/guide/nestjs/README.md
@@ -4,7 +4,7 @@ There is a compatible module to be used in [NestJs](https://github.com/nestjs/ne
npm i @nestjs/bullmq
```
-Once the installation process is complete, we can import the **BullModule** into the root **AppModule**.
+Once the installation process is complete, we can import the **`BullModule`** into the root **`AppModule`**.
```typescript
import { Module } from '@nestjs/common';
@@ -23,7 +23,7 @@ import { BullModule } from '@nestjs/bullmq';
export class AppModule {}
```
-To register a queue, import the **BullModule.registerQueue()** dynamic module, as follows:
+To register a queue, import the **`BullModule.registerQueue()`** dynamic module, as follows:
```typescript
BullModule.registerQueue({
@@ -31,7 +31,7 @@ BullModule.registerQueue({
});
```
-To register a flow producer, import the **BullModule.registerFlowProducer()** dynamic module, as follows:
+To register a flow producer, import the **`BullModule.registerFlowProducer()`** dynamic module, as follows:
```typescript
BullModule.registerFlowProducer({
@@ -41,7 +41,7 @@ BullModule.registerFlowProducer({
# Processor
-To register a processor, you may need to use **Processor** decorator:
+To register a processor, you may need to use the **`Processor`** decorator:
```typescript
import { Processor, WorkerHost, OnWorkerEvent } from '@nestjs/bullmq';
diff --git a/docs/gitbook/guide/nestjs/producers.md b/docs/gitbook/guide/nestjs/producers.md
index 6bbcdfa163..daaa84bd5b 100644
--- a/docs/gitbook/guide/nestjs/producers.md
+++ b/docs/gitbook/guide/nestjs/producers.md
@@ -14,7 +14,7 @@ export class AudioService {
```
{% hint style="info" %}
-The **@InjectQueue()** decorator identifies the queue by its name, as provided in the **registerQueue()**.
+The **`@InjectQueue()`** decorator identifies the queue by its name, as provided in the **`registerQueue()`**.
{% endhint %}
Now, add a job by calling the queue's add() method.
@@ -43,10 +43,10 @@ export class FlowService {
```
{% hint style="info" %}
-The **@InjectFlowProducer()** decorator identifies the flow producer by its name, as provided in the **registerFlowProducer()**.
+The **`@InjectFlowProducer()`** decorator identifies the flow producer by its `name`, as provided in the **`registerFlowProducer()`**.
{% endhint %}
-Now, add a flow by calling the flow producer's add() method.
+Now, add a flow by calling the flow producer's `add()` method.
```typescript
const job = await this.fooFlowProducer.add({
diff --git a/docs/gitbook/guide/nestjs/queue-events-listeners.md b/docs/gitbook/guide/nestjs/queue-events-listeners.md
index bd28f56606..592fff76da 100644
--- a/docs/gitbook/guide/nestjs/queue-events-listeners.md
+++ b/docs/gitbook/guide/nestjs/queue-events-listeners.md
@@ -1,6 +1,6 @@
# Queue Events Listeners
-To register a QueueEvents instance, you need to use **QueueEventsListener** decorator:
+To register a QueueEvents instance, you need to use the **`QueueEventsListener`** decorator:
```typescript
import {
diff --git a/docs/gitbook/guide/queues/README.md b/docs/gitbook/guide/queues/README.md
index 385e3f08fd..e9eec0a73e 100644
--- a/docs/gitbook/guide/queues/README.md
+++ b/docs/gitbook/guide/queues/README.md
@@ -2,7 +2,7 @@
A Queue is nothing more than a list of jobs waiting to be processed. The jobs can be small, message like, so that the queue can be used as a message broker, or they can be larger long running jobs.
-Queues are controlled with the Queue class. As all classes in BullMQ this is a lightweight class with a handful of methods that gives you control over the queue:
+Queues are controlled with the `Queue` class. As all classes in BullMQ, this is a lightweight class with a handful of methods that gives you control over the queue:
```typescript
const queue = new Queue('Cars');
@@ -12,7 +12,7 @@ const queue = new Queue('Cars');
See [Connections](../connections.md) for details on how to pass Redis details to use by the queue.
{% endhint %}
-When you instance a Queue, BullMQ will just _upsert_ a small "meta-key", so if the queue existed before it will just pick it up and you can continue adding jobs to it.
+When you instantiate a Queue, BullMQ will just _upsert_ a small "meta-key", so if the queue existed before it will just pick it up and you can continue adding jobs to it.
The most important method is probably the [_**add**_](https://api.docs.bullmq.io/classes/v4.Queue.html#add) method. This method allows you to add jobs to the queue in different fashions:
@@ -31,11 +31,9 @@ await queue.add('paint', { color: 'blue' }, { delay: 5000 });
The job will now wait **at** **least** 5 seconds before it is processed.
{% hint style="danger" %}
-In order for delay jobs to work you need to have at least one _QueueScheduler_ somewhere in your infrastructure. Read more [here](../queuescheduler.md).
-{% endhint %}
+Prior to BullMQ 2.0, in order for delayed jobs to work you needed to have at least one `QueueScheduler` somewhere in your infrastructure. Read more [here](../queuescheduler.md).
-{% hint style="danger" %}
-From BullMQ 2.0 and onwards, the QueueScheduler is not needed anymore.
+From BullMQ 2.0 and onwards, the `QueueScheduler` is not needed anymore.
{% endhint %}
There are many other options available such as priorities, backoff settings, lifo behaviour, remove-on-complete policies, etc. Please check the remaining of this guide for more information regarding these options.
diff --git a/docs/gitbook/guide/queues/adding-bulks.md b/docs/gitbook/guide/queues/adding-bulks.md
index dd4ce72844..495f22db60 100644
--- a/docs/gitbook/guide/queues/adding-bulks.md
+++ b/docs/gitbook/guide/queues/adding-bulks.md
@@ -1,6 +1,6 @@
# Adding jobs in bulk
-Sometimes it is necessary to add a complete bulk of jobs atomically. For example, there could be a requirement that all the jobs must be placed in the queue or none of them. Also, adding a bulk of jobs can be faster since it reduces the number of roundtrips to Redis:
+Sometimes it is necessary to add many jobs atomically. For example, there could be a requirement that all the jobs must be placed in the queue or none of them. Also, adding jobs in bulk can be faster since it reduces the number of roundtrips to Redis:
```typescript
import { Queue } from 'bullmq';
diff --git a/docs/gitbook/guide/queues/auto-removal-of-jobs.md b/docs/gitbook/guide/queues/auto-removal-of-jobs.md
index 6d67432d88..bfac359e00 100644
--- a/docs/gitbook/guide/queues/auto-removal-of-jobs.md
+++ b/docs/gitbook/guide/queues/auto-removal-of-jobs.md
@@ -2,11 +2,11 @@
By default, when your queue jobs are completed (or failed), they are stored in two special sets, the "completed" and the "failed" set. This is useful so that you can examine the results of your jobs, particularly in the early stages of development. However, as the solution reaches a production-grade level, we usually need to restrict the number of finished jobs to be kept, so that we do not fill Redis with data that is not particularly useful.
-BullMQ supports different strategies for auto-removing finalized jobs. These strategies are configured on the Job's options "[removeOnComplete](https://api.docs.bullmq.io/interfaces/BaseJobOptions.html#removeOnComplete)" and "[removeOnFail](https://api.docs.bullmq.io/interfaces/BaseJobOptions.html#removeOnFail)".
+BullMQ supports different strategies for auto-removing finalized jobs. These strategies are configured on the Job's options [`removeOnComplete`](https://api.docs.bullmq.io/interfaces/BaseJobOptions.html#removeOnComplete) and [`removeOnFail`](https://api.docs.bullmq.io/interfaces/BaseJobOptions.html#removeOnFail).
### Remove all finalized jobs
-The simplest option is to set removeOnComplete/Fail to "true", in this case, all jobs will be removed automatically as soon as they are finalized:
+The simplest option is to set `removeOnComplete`/`removeOnFail` to `true`. In this case, all jobs will be removed automatically as soon as they are finalized:
```typescript
await myQueue.add(
@@ -28,7 +28,7 @@ await myQueue.add(
);
```
-Or if you want to set it for all your jobs for an specific worker:
+Or if you want to set it for all your jobs for a specific worker:
```typescript
new Worker('test', async job => {}, {
@@ -40,7 +40,7 @@ new Worker('test', async job => {}, {
### Keep jobs based on their age
-Another possibility is to keep jobs up to a certain age. The "removeOn" option accepts a "[KeepJobs](https://api.docs.bullmq.io/interfaces/v4.KeepJobs.html)" object, that includes an "age" and a "count" fields. The age is used to specify how old jobs to keep (in seconds), and the count can be used to limit the total amount to keep. The count option is useful in cases we get an unexpected amount of jobs in a very short time, in this case we may just want to limit to a certain amount to avoid running out of memory.
+Another possibility is to keep jobs up to a certain age. The `removeOn` option accepts a [`KeepJobs`](https://api.docs.bullmq.io/interfaces/v4.KeepJobs.html) object that includes `age` and `count` fields. The `age` specifies the maximum age (in seconds) of jobs to keep, and the `count` limits the total amount to keep. The `count` option is useful when we get an unexpected amount of jobs in a very short time; in that case we may want to limit to a certain amount to avoid running out of memory.
```typescript
await myQueue.add(
@@ -62,7 +62,7 @@ await myQueue.add(
The auto removal of jobs works lazily. This means that jobs are not removed unless a new job completes or fails, since that is when the auto-removal takes place.
{% endhint %}
-Or if you want to set it for all your jobs for an specific worker:
+Or if you want to set it for all your jobs for a specific worker:
```typescript
new Worker('test', async job => {}, {
diff --git a/docs/gitbook/guide/queuescheduler.md b/docs/gitbook/guide/queuescheduler.md
index b2d6d3aff9..fc1e9e60dc 100644
--- a/docs/gitbook/guide/queuescheduler.md
+++ b/docs/gitbook/guide/queuescheduler.md
@@ -1,10 +1,10 @@
# QueueScheduler
{% hint style="danger" %}
-The QueueScheduler is deprecated from BullMQ 2.0 and onwards. The information below is only relevant for older versions.
+The `QueueScheduler` is deprecated from BullMQ 2.0 and onwards. The information below is only relevant for older versions.
{% endhint %}
-The QueueScheduler is a helper class used to manage stalled and delayed jobs for a given Queue.
+The `QueueScheduler` is a helper class used to manage stalled and delayed jobs for a given Queue.
```typescript
import { QueueScheduler } from 'bullmq';
@@ -15,16 +15,16 @@ const queueScheduler = new QueueScheduler('test');
await queueScheduler.close();
```
-This class automatically moves delayed jobs back to the waiting queue when it is the right time to process them. It also automatically checks for stalled jobs, i.e., detects jobs that are active but where the worker has either crashed or stopped working properly. [Stalled jobs](jobs/stalled.md) are moved back or failed depending on the settings selected when instantiating the class.
+This class automatically moves delayed jobs back to the waiting queue when it is the right time to process them. It also automatically checks for stalled jobs (i.e. detects jobs that are active but where the worker has either crashed or stopped working properly). [Stalled jobs](jobs/stalled.md) are moved back or failed depending on the settings selected when instantiating the class.
{% hint style="info" %}
-You need at least one QueueScheduler running somewhere for a given queue if you require functionality such as delayed jobs, retries with backoff and rate limiting.
+You need at least one `QueueScheduler` running somewhere for a given queue if you require functionality such as delayed jobs, retries with backoff and rate limiting.
{% endhint %}
The reason for having this functionality in a separate class instead of in the workers (as in Bull 3.x) is because whereas you may want to have a large number of workers for parallel processing, for the scheduler you probably only want a couple of instances for each queue that requires delayed or stalled checks. One will be enough but you can have more just for redundancy.
{% hint style="warning" %}
-It is ok to have as many QueueScheduler instances as you want, just keep in mind that every instance will perform some bookkeeping so it may create some noticeable CPU and IO usage in your Redis instances.
+It is ok to have as many `QueueScheduler` instances as you want, just keep in mind that every instance will perform some bookkeeping so it may create some noticeable CPU and IO usage in your Redis instances.
{% endhint %}
## Read more:
diff --git a/docs/gitbook/guide/rate-limiting.md b/docs/gitbook/guide/rate-limiting.md
index 20b03d20f1..49442ee3b2 100644
--- a/docs/gitbook/guide/rate-limiting.md
+++ b/docs/gitbook/guide/rate-limiting.md
@@ -1,6 +1,6 @@
# Rate limiting
-BullMQ provides rate limiting for the queues. It is possible to configure the workers so that they obey a given rate limiting option:
+BullMQ provides queue rate limiting. It is possible to configure workers so that they obey a given rate limiting option:
```typescript
import { Worker, QueueScheduler } from 'bullmq';
@@ -16,11 +16,11 @@ const scheduler = new QueueScheduler('painter');
```
{% hint style="warning" %}
-Jobs that get rate limited will actually stay in waiting state.
+Jobs that get rate limited will actually stay in the waiting state.
{% endhint %}
{% hint style="danger" %}
-From BullMQ 2.0 and onwards, the QueueScheduler is not needed anymore.
+From BullMQ 2.0 and onwards, the `QueueScheduler` is not needed anymore.
{% endhint %}
{% hint style="info" %}
@@ -60,9 +60,9 @@ await queue.add('rate limited paint', { customerId: 'my-customer-id' });
### Manual rate-limit
-Sometimes is useful to rate-limit a queue manually instead of based on some static options. For example, if you have an API that returns 429 (Too many requests), and you want to rate-limit the queue based on that response.
+Sometimes it is useful to rate-limit a queue manually instead of based on some static options. For example, you may have an API that returns `429 Too Many Requests`, and you want to rate-limit the queue based on that response.
-For this purpose, you can use the worker method **rateLimit** like this:
+For this purpose, you can use the worker method **`rateLimit`** like this:
```typescript
import { Worker } from 'bullmq';
@@ -89,7 +89,7 @@ const worker = new Worker(
Sometimes is useful to know if our queue is rate limited.
-For this purpose, you can use the **getRateLimitTtl** method like this:
+For this purpose, you can use the **`getRateLimitTtl`** method like this:
```typescript
import { Queue } from 'bullmq';
diff --git a/docs/gitbook/guide/retrying-failing-jobs.md b/docs/gitbook/guide/retrying-failing-jobs.md
index fc4ff481de..20702b21a9 100644
--- a/docs/gitbook/guide/retrying-failing-jobs.md
+++ b/docs/gitbook/guide/retrying-failing-jobs.md
@@ -1,21 +1,21 @@
# Retrying failing jobs
-As your queues processes jobs, it is inevitable that over time some of these jobs will fail. In BullMQ, a job is considered failed in the following scenarios:
+As your queues process jobs, it is inevitable that over time some of these jobs will fail. In BullMQ, a job is considered failed in the following scenarios:
-- The processor function defined in your [Worker](https://docs.bullmq.io/guide/workers) has thrown an exception.
-- The job has become [stalled](https://docs.bullmq.io/guide/jobs/stalled) and it has consumed the "max stalled count" setting.
+- The processor function defined in your [`Worker`](https://docs.bullmq.io/guide/workers) has thrown an exception.
+- The job has become [_stalled_](https://docs.bullmq.io/guide/jobs/stalled) and it has consumed the "max stalled count" setting.
{% hint style="danger" %}
-The exceptions thrown in a processor must be an [Error](https://nodejs.org/api/errors.html#class-error) object for BullMQ to work correctly.
+The exceptions thrown in a processor must be an [`Error`](https://nodejs.org/api/errors.html#class-error) object for BullMQ to work correctly.
-In general, as a best practice, it is better to always throw Error objects. There is even an eslint rule if you want to enforce it: https://eslint.org/docs/latest/rules/no-throw-literal
+In general, as a best practice, it is better to always throw `Error` objects. There is even an [ESLint rule](https://eslint.org/docs/latest/rules/no-throw-literal) if you want to enforce it.
{% endhint %}
## Retrying failing jobs
-When a processor throws an exception, the worker will catch it and move the job to the failed set. Depending on your [Queue settings](https://docs.bullmq.io/guide/queues/auto-removal-of-jobs), the job may stay in the failed set forever, or it could be automatically removed.
+When a processor throws an exception, the worker will catch it and move the job to the failed set. Depending on your [Queue settings](https://docs.bullmq.io/guide/queues/auto-removal-of-jobs), the job may stay in the failed set forever, or it could be automatically removed.
-Often it is desirable to automatically retry failed jobs so that we do not give up until a certain amount of retries have failed. In order to activate automatic job retries you should use the [attempts](https://api.docs.bullmq.io/interfaces/v4.BaseJobOptions.html#attempts) setting with a value larger than 1 (see the examples below).
+Often it is desirable to automatically retry failed jobs so that we do not give up until a certain number of retries have failed. In order to activate automatic job retries you should use the [`attempts`](https://api.docs.bullmq.io/interfaces/v4.BaseJobOptions.html#attempts) setting with a value larger than 1 (see the examples below).
BullMQ supports retries of failed jobs using back-off functions. It is possible to use the **built-in** backoff functions or provide **custom** ones. If you do not specify a back-off function, the jobs will be retried without delay as soon as they fail.
@@ -23,7 +23,7 @@ BullMQ supports retries of failed jobs using back-off functions. It is possible
The current built-in backoff functions are "exponential" and "fixed".
-With exponential backoff, it will retry after `2 ^ (attempts - 1) * delay` milliseconds. For example, with a delay of 3000 milliseconds, for the 7th attempt, it will retry 2^6 \* 3000 milliseconds = 3.2 minutes after the previous attempt.
+With exponential backoff, it will retry after `2 ^ (attempts - 1) * delay` milliseconds. For example, with a delay of 3000 milliseconds, for the 7th attempt, it will retry `2^6 * 3000` milliseconds = 3.2 minutes after the previous attempt.
With a fixed backoff, it will retry after `delay` milliseconds, so with a delay of 3000 milliseconds, it will retry _every_ attempt 3000 milliseconds after the previous attempt.
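To make the exponential formula concrete, here is a small sketch; the queue name, job payload, and helper function below are illustrative assumptions, not part of this guide:

```typescript
// Retry up to 5 times with exponential backoff starting at 3000 ms
// (hypothetical queue and payload):
//
//   await myQueue.add(
//     'paint',
//     { color: 'red' },
//     { attempts: 5, backoff: { type: 'exponential', delay: 3000 } },
//   );

// The exponential formula above, expressed as a plain function:
function exponentialDelay(attemptsMade: number, delay: number): number {
  return Math.pow(2, attemptsMade - 1) * delay;
}

// For the 7th attempt with delay = 3000: 2^6 * 3000 = 192000 ms (~3.2 minutes).
```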
diff --git a/docs/gitbook/guide/returning-job-data.md b/docs/gitbook/guide/returning-job-data.md
index 975c9280ab..a034e50842 100644
--- a/docs/gitbook/guide/returning-job-data.md
+++ b/docs/gitbook/guide/returning-job-data.md
@@ -1,6 +1,6 @@
# Returning job data
-When a worker is done processing, sometimes it is convenient to return some data. This data can then be accessed for example by listening to the "completed" event. This return data is available at the job's "returnvalue" property.
+When a worker is done processing, sometimes it is convenient to return some data. This data can then be accessed for example by listening to the `completed` event. This return data is available at the job's `returnvalue` property.
Imagine a simple worker that performs some async processing:
@@ -14,7 +14,7 @@ const myWorker = new Worker('AsyncProc', async job => {
```
{% hint style="info" %}
-Note, in the example above we could just return directly doSomeAsyncProcessing, we just use a temporal variable to make the example more explicit.
+Note that in the example above we could return the result of `doSomeAsyncProcessing` directly; we just use a temporary variable to make the example more explicit.
{% endhint %}
We can now listen to the completed event in order to get the result value:
@@ -32,8 +32,8 @@ queueEvents.on('completed', async ({ jobId: string }) => {
});
```
-If you want to store the result of the processing function it is still much more robust to do it in the process function itself, that will guarantee that if the job is completed the return value would be stored as well. Storing data on the completed event on the other hand could fail and still the job would complete without detecting the error.
+If you want to store the result of the processing function, it is still much more robust to do it in the process function itself, as that guarantees that if the job completes the return value is stored as well. Storing data in the completed event handler, on the other hand, could fail while the job still completes without the error being detected.
## Using a "results" Queue
-Another common practice to send jobs results robustly is to have a special "results" queue where the results are sent to. The worker for this "results" queue can reliably do something with the data such as storing it in a database. This approach is useful for designing robust micro-service architectures, where data is sent between services using queues. Even if the service that processes the result is down at the time the results queue receives the data, the result will still be processed as soon as the service come up online again.
+Another common practice to send job results robustly is to have a special "results" queue to which results are sent. The worker for this "results" queue can reliably do something with the data such as storing it in a database. This approach is useful for designing robust micro-service architectures, where data is sent between services using queues. Even if the service that processes the result is down at the time the results queue receives the data, the result will still be processed as soon as the service comes back online.
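The "results" queue pattern could be sketched as follows. The queue names, the payload shape, and the `doSomeAsyncProcessing`/`saveToDatabase` helpers are assumptions for illustration (and it assumes a reachable Redis instance), not BullMQ conventions:

```typescript
import { Queue, Worker } from 'bullmq';

// Hypothetical helpers, stubbed for the sketch:
async function doSomeAsyncProcessing(data: unknown) {
  return data;
}
async function saveToDatabase(data: unknown) {
  /* persist somewhere */
}

const resultsQueue = new Queue('AsyncProc:results');

// The processing worker forwards its return value to the results queue.
const worker = new Worker('AsyncProc', async job => {
  const result = await doSomeAsyncProcessing(job.data);
  await resultsQueue.add('result', { sourceJobId: job.id, result });
  return result;
});

// A second worker reliably consumes the results, e.g. storing them in a
// database. If it is down, results simply wait in the queue.
const resultsWorker = new Worker('AsyncProc:results', async job => {
  await saveToDatabase(job.data);
});
```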
diff --git a/docs/gitbook/guide/workers.md b/docs/gitbook/guide/workers.md
index 3fe6cb141c..b61df4b6b2 100644
--- a/docs/gitbook/guide/workers.md
+++ b/docs/gitbook/guide/workers.md
@@ -68,7 +68,7 @@ Due to the nature of NodeJS, which is \(in general\) single threaded and consist
When a job reaches a worker and starts to be processed, BullMQ will place a lock on this job to protect the job from being modified by any other client or worker. At the same time, the worker needs to periodically notify BullMQ that it is still working on the job.
{% hint style="info" %}
-This period is configured with the "stalledInterval" setting, which normally you should not need to modify.
+This period is configured with the `stalledInterval` setting, which normally you should not need to modify.
{% endhint %}
However if the CPU is very busy due to the process being very CPU intensive, the worker may not have time to renew the lock and tell the queue that it is still working on the job, then the job will likely be marked as Stalled.
diff --git a/docs/gitbook/guide/workers/README.md b/docs/gitbook/guide/workers/README.md
index 84d9ad387f..5918431a89 100644
--- a/docs/gitbook/guide/workers/README.md
+++ b/docs/gitbook/guide/workers/README.md
@@ -1,12 +1,12 @@
# Workers
-Workers are the actual instances that perform some job based on the jobs that are added in the queue. A worker is equivalent to a "message" receiver in a traditional message queue. The worker duty is to complete the job, if it succeeds, the job will be moved to the "completed" status. If the worker throws an exception during its processing, the job will automatically be moved to the "failed" status.
+Workers are the actual instances that perform some job based on the jobs that are added to the queue. A worker is equivalent to a "message" receiver in a traditional message queue. The worker's duty is to complete the job. If it succeeds, the job will be moved to the "completed" status. If the worker throws an exception during its processing, the job will automatically be moved to the "failed" status.
{% hint style="info" %}
Failed jobs can be automatically retried, see [Retrying failing jobs](../retrying-failing-jobs.md)
{% endhint %}
-A worker is instantiated with the Worker class, and the work itself will be performed in the process function. Process functions are meant to be asynchronous so either use the "async" keyword or return a promise.
+A worker is instantiated with the `Worker` class, and the work itself will be performed in the _process function_. Process functions are meant to be asynchronous, so either use the `async` keyword or return a promise.
```typescript
import { Worker, Job } from 'bullmq';
@@ -27,7 +27,7 @@ const worker = new Worker(queueName, async (job: Job) => {
When a worker instance is created, it launches the processor immediately
{% endhint %}
-In order to decide when your processor should start its execution, pass autorun as false as part of worker options:
+In order to decide when your processor should start its execution, pass `autorun: false` as part of worker options:
```typescript
import { Worker, Job } from 'bullmq';
@@ -50,7 +50,7 @@ const worker = new Worker(
worker.run();
```
-Note that a processor can optionally return a value. This value can be retrieved either by getting the job and accessing the "returnvalue" property or by listening to the "completed" event:
+Note that a processor can optionally return a value. This value can be retrieved either by getting the job and accessing the `returnvalue` property or by listening to the `completed` event:
```typescript
worker.on('completed', (job: Job, returnvalue: any) => {
@@ -60,7 +60,7 @@ worker.on('completed', (job: Job, returnvalue: any) => {
#### Progress
-Inside the worker process function it is also possible to emit progress events. Calling "job.progress" you can specify a number or an object if you have more complex needs. The "progress" event can be listened in the same way as the "completed" event:
+Inside the worker process function it is also possible to emit progress events. Calling `job.progress` you can specify a number or an object if you have more complex needs. The `progress` event can be listened for in the same way as the `completed` event:
```typescript
worker.on('progress', (job: Job, progress: number | object) => {
@@ -68,7 +68,7 @@ worker.on('progress', (job: Job, progress: number | object) => {
});
```
-Finally, when the process fails with an exception it is possible to listen for the "failed" event too:
+Finally, when the process fails with an exception it is possible to listen for the `failed` event too:
```typescript
worker.on('failed', (job: Job, error: Error) => {
@@ -96,7 +96,7 @@ queueEvents.on('progress', ({jobId: string, data: number | object}) => {
});
```
-Finally, you should attach an error listener to your worker to avoid NodeJS raising an unhandled exception when an error occurs, something like this:
+Finally, you should attach an error listener to your worker to avoid NodeJS raising an unhandled exception when an error occurs. For example:
```typescript
worker.on('error', err => {
@@ -106,7 +106,7 @@ worker.on('error', err => {
```
{% hint style="danger" %}
-If the error handler is missing, your worker may stop processing jobs when an error is emitted!. More info [here](https://nodejs.org/api/events.html#events\_error\_events).
+If the error handler is missing, your worker may stop processing jobs when an error is emitted! Find more info [here](https://nodejs.org/api/events.html#events\_error\_events).
{% endhint %}
## Typescript typings
diff --git a/docs/gitbook/guide/workers/concurrency.md b/docs/gitbook/guide/workers/concurrency.md
index a325b7a2f7..25c01899f9 100644
--- a/docs/gitbook/guide/workers/concurrency.md
+++ b/docs/gitbook/guide/workers/concurrency.md
@@ -34,7 +34,7 @@ worker.concurrency = 5;
The other way to achieve concurrency is to provide multiple workers. This is the recommended way to setup bull anyway since besides providing concurrency it also provides higher availability for your workers. You can easily launch a fleet of workers running in many different machines in order to execute the jobs in parallel in a predictable and robust way.
{% hint style="info" %}
-It is not possible to achieve a global concurrency of 1 job at once if you use more than one worker.
+It is not possible to achieve a global concurrency of at most 1 job at a time if you use more than one worker.
{% endhint %}
-You still can \(and it is a perfectly good practice\), choose a high concurrency factor for every worker, so that the resources of every machine where the worker is running are used more efficiently.
+You can still \(and it is a perfectly good practice to\) choose a high concurrency factor for every worker, so that the resources of every machine where the worker is running are used more efficiently.
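Combining both approaches might look like the sketch below; the queue name and concurrency value are illustrative, and a running Redis instance is assumed:

```typescript
import { Worker } from 'bullmq';

// Run this same program on several machines for redundancy and parallelism;
// each worker additionally processes up to 50 jobs concurrently, making
// better use of each machine's resources.
const worker = new Worker(
  'painter',
  async job => {
    // process the job
  },
  { concurrency: 50 },
);
```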
diff --git a/docs/gitbook/guide/workers/graceful-shutdown.md b/docs/gitbook/guide/workers/graceful-shutdown.md
index 1c02731f89..c3e1606763 100644
--- a/docs/gitbook/guide/workers/graceful-shutdown.md
+++ b/docs/gitbook/guide/workers/graceful-shutdown.md
@@ -1,20 +1,18 @@
# Graceful shutdown
-BullMQ supports graceful shutdowns of the workers. This is important so that we can minimize stalled jobs when a worker for some reason must be shutdown. But note that even in the event of an "ungraceful shutdown", the stalled mechanism in BullMQ allows for new workers to pick up stalled jobs and continue working on them.
+BullMQ supports graceful shutdowns of workers. This is important so that we can minimize stalled jobs when a worker for some reason must be shut down. But note that even in the event of an "ungraceful shutdown", the stalled mechanism in BullMQ allows for new workers to pick up stalled jobs and continue working on them.
{% hint style="danger" %}
-In order for stalled jobs to be picked up by other workers you need to have a [QueueScheduler](https://docs.bullmq.io/guide/queuescheduler) class running in the system.
-{% endhint %}
+Prior to BullMQ 2.0, in order for stalled jobs to be picked up by other workers you needed to have a [`QueueScheduler`](https://docs.bullmq.io/guide/queuescheduler) class running in the system.
-{% hint style="danger" %}
-From BullMQ 2.0 and onwards, the QueueScheduler is not needed anymore, so the information above is only valid for older versions.
+From BullMQ 2.0 and onwards, the `QueueScheduler` is not needed anymore, so the information above is only valid for older versions.
{% endhint %}
-In order to perform a shutdown just call the _**close**_ method:
+In order to perform a shutdown just call the _**`close`**_ method:
```typescript
await worker.close();
```
-The above call will mark the worker as _closing_ so it will not pick up new jobs, at the same time it will wait for all the current jobs to be processed \(or failed\). This call will not timeout by itself, so you should make sure that your jobs finalize in a timely manner. If this call fails for some reason or it is not able to complete, the pending jobs will be marked as stalled and processed by other workers \(if correct stalled options are configured on the [QueueScheduler](https://api.docs.bullmq.io/interfaces/v1.QueueSchedulerOptions.html)\).
+The above call will mark the worker as _closing_ so it will not pick up new jobs, and at the same time it will wait for all the current jobs to be processed \(or failed\). This call will not timeout by itself, so you should make sure that your jobs finalize in a timely manner. If this call fails for some reason or it is not able to complete, the pending jobs will be marked as stalled and processed by other workers \(if correct stalled options are configured on the [`QueueScheduler`](https://api.docs.bullmq.io/interfaces/v1.QueueSchedulerOptions.html)\).
diff --git a/docs/gitbook/guide/workers/pausing-queues.md b/docs/gitbook/guide/workers/pausing-queues.md
index 49e6ee9a84..031dfdf77a 100644
--- a/docs/gitbook/guide/workers/pausing-queues.md
+++ b/docs/gitbook/guide/workers/pausing-queues.md
@@ -1,20 +1,20 @@
# Pausing queues
-BullMQ supports pausing queues globally or locally. A queue is paused globally when no workers will pick up any jobs from the queue. When you pause a queue, the workers that are currently busy processing a job, will continue working on that job until it completes (or failed), and then will just keep idling until the queue has been unpaused.
+BullMQ supports pausing queues _globally_ or _locally_. When a queue is paused _globally_ no workers will pick up any jobs from the queue. When you pause a queue, the workers that are currently busy processing a job will continue working on that job until it completes (or fails), and then will keep idling until the queue is unpaused.
-Pausing a queue is performed by calling the _**pause**_ method on a [queue](https://api.docs.bullmq.io/classes/v4.Queue.html) instance:
+Pausing a queue is performed by calling the _**`pause`**_ method on a [queue](https://api.docs.bullmq.io/classes/v4.Queue.html) instance:
```typescript
await myQueue.pause();
```
-It is also possible to pause a given worker instance, this is what we call pause locally. This pause works in a similar way as the global pause in the sense that the worker will conclude processing the jobs it has already started but will not process any new ones:
+It is also possible to _locally_ pause a given worker instance. This pause works in a similar way as the global pause in the sense that the worker will conclude processing the jobs it has already started but will not process any new ones:
```typescript
await myWorker.pause();
```
-The call above will wait for all the jobs currently being processed by this worker, if you do not want to wait for current jobs to complete before the call completes you can pass "true" to just pause the worker ignoring any running jobs:
+The call above will wait for all the jobs currently being processed by this worker to complete (or fail). If you do not want to wait for current jobs to complete before the call completes you can pass `true` to pause the worker **ignoring any running jobs**:
```typescript
await myWorker.pause(true);
diff --git a/docs/gitbook/guide/workers/sandboxed-processors.md b/docs/gitbook/guide/workers/sandboxed-processors.md
index c9e84f83c6..a9c904027f 100644
--- a/docs/gitbook/guide/workers/sandboxed-processors.md
+++ b/docs/gitbook/guide/workers/sandboxed-processors.md
@@ -4,13 +4,13 @@ description: Running jobs in isolated processes
# Sandboxed processors
-It is also possible to define workers to run on a separate process, we call these processors for sandboxed because they run isolated from the rest of the code.
+It is also possible to define workers to run on a separate process. We call these processors _sandboxed_, because they run isolated from the rest of the code.
-When your workers perform CPU-heavy operations, they will inevitably keep the NodeJS event loop busy, which prevents BullMQ from doing some job bookkeeping such as extending the job locks, which ultimately leads to "stalled" jobs.
+When your workers perform CPU-heavy operations, they will inevitably keep the NodeJS event loop busy, which prevents BullMQ from doing job bookkeeping such as extending job locks, ultimately leading to "stalled" jobs.
-Since these workers run the processor in a different process than the bookkeeping code, they will not result in stalled jobs as easily as standard workers. Make sure that you keep your concurrency factor within sane numbers for this not to happen
+Since _sandboxed_ workers run the processor in a different process than the bookkeeping code, they will not result in stalled jobs as easily as standard workers. Make sure that you keep your concurrency factor within sane numbers for this not to happen.
-In order to use a sandboxed processor just define the processor in a separate file:
+In order to use a sandboxed processor, define the processor in a separate file:
```typescript
import { SandboxedJob } from 'bullmq';
@@ -20,7 +20,7 @@ module.exports = async (job: SandboxedJob) => {
};
```
-and refer to it in the worker constructor:
+and pass its path to the worker constructor:
```typescript
import { Worker } from 'bullmq'
@@ -29,13 +29,13 @@ const processorFile = path.join(__dirname, 'my_procesor.js');
worker = new Worker(queueName, processorFile);
```
-If you are looking for a tutorial with code examples on how to use sandboxed processors using typescript you can find one [here](https://blog.taskforce.sh/using-typescript-with-bullmq/).
+A tutorial with code examples on how to use sandboxed processors with TypeScript can be found [here](https://blog.taskforce.sh/using-typescript-with-bullmq/).
### Worker Threads
-The default mechanism for launching sandboxed workers is using Node's spawn process library. From BullMQ version v3.13.0, it is also possible to launch the workers using Node's new Worker Threads library. These threads are supposed to be less resource-demanding than the previous approach, however, they are still not as lightweight as we could expect since Nodes runtime needs to be duplicated by every thread.
+The default mechanism for launching sandboxed workers is Node's spawn process library. From BullMQ v3.13.0, it is also possible to launch workers using Node's Worker Threads library. These threads are supposed to be less resource-demanding than the previous approach; however, they are still not as lightweight as one might expect, since Node's runtime needs to be duplicated by every thread.
-In order to enable worker threads support just use the "`useWorkerThreads`" option when defining an external processor file:
+In order to enable worker threads support, use the `useWorkerThreads` option when defining an external processor file:
```typescript
import { Worker } from 'bullmq'
diff --git a/docs/gitbook/guide/workers/stalled-jobs.md b/docs/gitbook/guide/workers/stalled-jobs.md
index 45b714d5ac..ea1a22dd09 100644
--- a/docs/gitbook/guide/workers/stalled-jobs.md
+++ b/docs/gitbook/guide/workers/stalled-jobs.md
@@ -5,12 +5,12 @@ Due to the nature of NodeJS, which is \(in general\) single threaded and consist
When a job reaches a worker and starts to be processed, BullMQ will place a lock on this job to protect the job from being modified by any other client or worker. At the same time, the worker needs to periodically notify BullMQ that it is still working on the job.
{% hint style="info" %}
-This period is configured with the "stalledInterval" setting, which normally you should not need to modify.
+This period is configured with the `stalledInterval` setting, which normally you should not need to modify.
{% endhint %}
-However if the CPU is very busy due to the process being very CPU intensive, the worker may not have time to renew the lock and tell the queue that it is still working on the job, then the job will likely be marked as Stalled.
+However, if the CPU is very busy (due to the process being very CPU intensive), the worker may not have time to renew the lock and tell the queue that it is still working on the job, which is likely to result in the job being marked as _stalled_.
-A stalled job is moved back to the waiting status and will be processed again by another worker, or if it has reached its maximum number of stalls moved to the failed set.
+A stalled job is moved back to the waiting status and will be processed again by another worker, or if it has reached its maximum number of stalls, it will be moved to the _failed_ set.
-Therefore it is very important to make sure the workers return the control to NodeJS event loop often enough to avoid this kind of problems.
+Therefore, it is very important to make sure the workers return control to the NodeJS event loop often enough to avoid this kind of problem.
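One common remedy is to split CPU-heavy work into chunks and yield back to the event loop between them, giving the lock-renewal timers a chance to fire. A minimal sketch (the `processInChunks` helper below is hypothetical, not part of BullMQ):

```typescript
// Hypothetical helper: process a large array in chunks, yielding to the
// event loop between chunks so BullMQ's bookkeeping timers can run.
async function processInChunks<T, R>(
  items: T[],
  chunkSize: number,
  fn: (item: T) => R,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(fn(item));
    }
    // Yield control back to the event loop before the next chunk.
    await new Promise(resolve => setImmediate(resolve));
  }
  return results;
}
```

A processor using such a helper can crunch large datasets while still renewing its lock on time.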
diff --git a/docs/gitbook/patterns/adding-bulks.md b/docs/gitbook/patterns/adding-bulks.md
index 3d0d70fd8f..c49d1ff7bc 100644
--- a/docs/gitbook/patterns/adding-bulks.md
+++ b/docs/gitbook/patterns/adding-bulks.md
@@ -1,8 +1,8 @@
# Adding jobs in bulk accross different queues
-Sometimes it is necessary to add a complete bulk of jobs from different queues atomically. For example, there could be a requirement that all the jobs must be created or none of them. Also, adding a bulk of jobs can be faster since it reduces the number of roundtrips to Redis:
+Sometimes it is necessary to atomically add jobs to different queues in bulk. For example, there could be a requirement that all the jobs must be created or none of them. Also, adding jobs in bulk can be faster, since it reduces the number of roundtrips to Redis:
-You may be thinking on [queue.addBulk](https://api.docs.bullmq.io/classes/v4.Queue.html#addBulk), but this method only adds jobs from a single queue. Another option is [flowProducer.addBulk](https://api.docs.bullmq.io/classes/v4.FlowProducer.html#addBulk), so let's see an example:
+You may be thinking of [`queue.addBulk`](https://api.docs.bullmq.io/classes/v4.Queue.html#addBulk), but this method only adds jobs to a single queue. Another option is [`flowProducer.addBulk`](https://api.docs.bullmq.io/classes/v4.FlowProducer.html#addBulk), so let's see an example:
```typescript
import { FlowProducer } from 'bullmq';
diff --git a/docs/gitbook/patterns/failing-fast-when-redis-is-down.md b/docs/gitbook/patterns/failing-fast-when-redis-is-down.md
index 44554addce..63a1777fbd 100644
--- a/docs/gitbook/patterns/failing-fast-when-redis-is-down.md
+++ b/docs/gitbook/patterns/failing-fast-when-redis-is-down.md
@@ -1,8 +1,8 @@
# Failing fast when Redis is down
-By design, BullMQ will reconnect automatically and if you add new jobs to a queue while the queue instance is disconnected from Redis, the add command will not fail, instead the call will keep waiting for a reconnection to occur until it can complete.
+By design, BullMQ reconnects to Redis automatically. If jobs are added to a queue while the queue instance is disconnected from Redis, the `add` command will not fail; instead, the call will keep waiting for a reconnection to occur until it can complete.
-This behavior is not always desirable; for example, if you have implemented a REST API that results in a call to "add", you do not want to keep the HTTP call busy while the "add" is waiting for the queue to reconnect to Redis. In this case, you can just pass the option "enableOfflineQueue: false", so that "ioredis" do not queue the commands and instead throws an exception:
+This behavior is not always desirable; for example, if you have implemented a REST API that results in a call to `add`, you do not want to keep the HTTP call busy while `add` is waiting for the queue to reconnect to Redis. In this case, you can pass the option `enableOfflineQueue: false`, so that `ioredis` does not queue the commands and instead throws an exception:
```typescript
const myQueue = new Queue("transcoding", {
@@ -21,7 +21,7 @@ app.post("/jobs", async (req, res) => {
})
```
-In this way, the caller can catch this temporal error and act upon it, maybe doing some retries or giving up depending on its requirements.
+Using this approach, the caller can catch the exception and act upon it depending on its requirements (for example, retrying the call or giving up).
{% hint style="danger" %}
Currently, there is a limitation in that the Redis instance must at least be online while the queue is being instantiated.
diff --git a/docs/gitbook/patterns/flows.md b/docs/gitbook/patterns/flows.md
index 4de5203865..b749eb0a5c 100644
--- a/docs/gitbook/patterns/flows.md
+++ b/docs/gitbook/patterns/flows.md
@@ -4,8 +4,8 @@
The following pattern, although still useful, has been mostly super-seeded by the new [Flows](../guide/flows/) functionality
{% endhint %}
-In some situations, you need to execute a flow of actions that each and one of them could fail, it could be database updates, calls to external services, or any other kind of asynchronous call.
+In some situations, you may need to execute a flow of several actions, any of which could fail. For example, you may need to update a database, make calls to external services, or any other kind of asynchronous call.
-Sometimes it may not be possible to create an [idempotent job](idempotent-jobs.md) that can execute all these actions again in the case one of them failed for any reason, instead we want to be able to only re-execute the action that failed and continue executing the rest of the actions that have not yet been executed.
+Sometimes it may not be possible to create an [idempotent job](idempotent-jobs.md) that can execute all these actions again in the case one of them failed for any reason. Instead, we may want to be able to only re-execute the action that failed and continue executing the rest of the actions that have not yet been executed.
-The pattern to solve this issue consists on dividing the flow of actions into one queue for every action. When the first action completes, it places the next action as a job in its correspondent queue.
+The pattern to solve this issue consists of dividing the flow of actions into one queue for every action. When the first action completes, it places the next action as a job in its corresponding queue.
diff --git a/docs/gitbook/patterns/idempotent-jobs.md b/docs/gitbook/patterns/idempotent-jobs.md
index bcf4908b24..41037604b5 100644
--- a/docs/gitbook/patterns/idempotent-jobs.md
+++ b/docs/gitbook/patterns/idempotent-jobs.md
@@ -1,8 +1,8 @@
# Idempotent jobs
-In order to take advantage from [the ability to retry failed jobs](../guide/retrying-failing-jobs.md), your jobs should be designed with failure in mind.
+In order to take advantage of [the ability to retry failed jobs](../guide/retrying-failing-jobs.md), your jobs should be designed with failure in mind.
-This means that it should not make a difference to the final state of the system if a job can be finished in the first attempt or if it fails and needs to be retried later. This is called _Idempotence_.
+This means that it should not make a difference to the final state of the system if a job successfully completes on its first attempt, or if it fails initially and succeeds when retried. This is called _Idempotence_.
To achieve this behaviour, your jobs should be as atomic and simple as possible. Performing many different actions \(such as database updates, API calls, ...\) at once makes it hard to keep track of the process flow and, if needed, rollback partial progress when an exception occurs.
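As an illustration, consider an upsert-style processor (a hypothetical, in-memory sketch; the `store` and the processor's shape are not BullMQ APIs): it is idempotent because re-running it leaves the system in the same final state.

```typescript
// Hypothetical in-memory store standing in for a database.
const store = new Map<string, number>();

// Idempotent processor: an upsert keyed by the job id means that running
// the job once, or failing halfway and retrying, produces the same state.
async function processor(job: { id: string; data: { amount: number } }) {
  store.set(job.id, job.data.amount);
}
```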
diff --git a/docs/gitbook/patterns/manually-fetching-jobs.md b/docs/gitbook/patterns/manually-fetching-jobs.md
index 7a066fd9e8..857c5e7a77 100644
--- a/docs/gitbook/patterns/manually-fetching-jobs.md
+++ b/docs/gitbook/patterns/manually-fetching-jobs.md
@@ -23,19 +23,20 @@ if (succeeded) {
await worker.close();
```
-There is an important consideration regarding job "locks" when processing manually. Locks avoid other workers to fetch the same job that is being processed by a given worker. The ownership of the lock is determined by the "token" that is sent when getting the job.
+There is an important consideration regarding job "locks" when processing manually. Locks prevent workers from fetching a job that is already being processed by another worker. The ownership of the lock is determined by the "token" that is sent when getting the job.
{% hint style="info" %}
the lock duration setting is called "visibility window" in other queue systems.
{% endhint %}
-Normally a job gets locked as soon as it is fetched from the queue with a max duration of "lockDuration" worker option. The default is 30 seconds but can be changed to any value easily, for example, to change it to 60 seconds:
+Normally a job gets locked as soon as it is fetched from the queue with a max duration of the specified `lockDuration` worker option. The default is 30 seconds but can be changed to any value easily. For example, to change it to 60 seconds:
```typescript
const worker = new Worker('my-queue', null, { lockDuration: 60000 });
```
-When using standard worker processors the lock is renewed automatically after half lock duration time has passed, however, this mechanism does not exist when processing jobs manually, so you need to make sure to process the job faster than the lockDuration to avoid the "QueueScheduler" to move the job back to the waiting list of the queue or you can extend the lock for the job manually:
+When using standard worker processors, the lock is renewed automatically after half the lock duration time has passed. However, this mechanism does not exist when processing jobs manually, so to avoid the job being moved back to the waiting list of the queue,
+you need to make sure to process the job faster than the `lockDuration`, or manually extend the lock:
```typescript
const job = (await worker.getNextJob(token)) as Job;
@@ -46,21 +47,21 @@ await job.extendLock(token, 30000);
### Choosing a token
-A token represents ownership, that a given worker is currently working on a given job. If the worker dies unexpectedly, the job could be picked up by another worker when the lock expires. A good approach for generating tokes for jobs is simply to generate a UUID for every new job, but it all depends on your specific use case.
+A token represents ownership: it indicates that a given worker is currently working on a given job. If the worker dies unexpectedly, the job could be picked up by another worker when the lock expires. A good approach for generating tokens for jobs is simply to generate a UUID for every new job, but it all depends on your specific use case.
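For example, you could mint a token per fetch with Node's built-in `crypto.randomUUID` (a sketch; the commented-out fetch call assumes a `worker` as in the example above):

```typescript
import { randomUUID } from 'crypto';

// A fresh token per fetch; the same token must later be passed to
// job.extendLock(), job.moveToCompleted(), etc.
const token = randomUUID();
// const job = (await worker.getNextJob(token)) as Job;
```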
## Checking for stalled jobs
-When processing jobs manually you may also want to start the stalled jobs checker. This checker is needed to move jobs that may stall (they have lost their locks) back to the wait status (or failed if they have exhausted the maximum number of [stalled attempts](https://api.docs.bullmq.io/interfaces/v4.WorkerOptions.html#maxStalledCount), which is 1 by default).
+When processing jobs manually you may also want to start the stalled jobs checker. This checker is needed to move stalled jobs (whose lock has expired) back to the _wait_ status (or _failed_ if they have exhausted the maximum number of [stalled attempts](https://api.docs.bullmq.io/interfaces/v4.WorkerOptions.html#maxStalledCount), which is 1 by default).
```typescript
await worker.startStalledCheckTimer()
```
-The checker will run periodically (based on the [stalledInterval](https://api.docs.bullmq.io/interfaces/v4.WorkerOptions.html#stalledInterval) option) until the worker is closed.
+The checker will run periodically (based on the [`stalledInterval`](https://api.docs.bullmq.io/interfaces/v4.WorkerOptions.html#stalledInterval) option) until the worker is closed.
## Looping through jobs
-In many cases, you will have an "infinite" loop that processes jobs one by one like the following example. Note that the third parameter in "job.moveToCompleted/moveToFailed" is not used, signalling that the next job should be returned automatically.
+In many cases, you will have an "infinite" loop that processes jobs one by one like the following example. Note that the third parameter in `job.moveToCompleted`/`job.moveToFailed` is not used, signalling that the next job should be returned automatically.
```typescript
const worker = new Worker('my-queue');
diff --git a/docs/gitbook/patterns/process-step-jobs.md b/docs/gitbook/patterns/process-step-jobs.md
index 6dc0e0c682..b956a9fa6f 100644
--- a/docs/gitbook/patterns/process-step-jobs.md
+++ b/docs/gitbook/patterns/process-step-jobs.md
@@ -1,6 +1,6 @@
# Process Step jobs
-Sometimes, it is useful to break processor function into small pieces that will be processed depending on the previous executed step, we could handle this kind of logic by using switch blocks:
+Sometimes, it is useful to break processor functions into small pieces that will be processed depending on the previous executed step. One way to handle this kind of logic is by using switch statements:
{% tabs %}
{% tab title="TypeScript" %}
@@ -78,13 +78,13 @@ worker = Worker("queueName", process, {"connection": connection})
{% endtab %}
{% endtabs %}
-As you can see, we should save the step value; in this case, we are saving it into the job's data. So even in the case of an error, it would be retried in the last step that was saved (in case we use a backoff strategy).
+By saving the next step value every time we complete the previous step (here, saving it in the job's data), we can ensure that if the job errors and retries, it does so starting from the correct step.
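The idea can be simulated without Redis (names here are illustrative, not BullMQ APIs; in a real processor you would persist the step in the job's data so a retried job resumes from the last saved step):

```typescript
enum Step {
  Initial,
  Second,
  Finish,
}

// Fake job object standing in for a real BullMQ job.
const job = { data: { step: Step.Initial } };

async function runSteps(job: { data: { step: Step } }): Promise<void> {
  while (job.data.step !== Step.Finish) {
    switch (job.data.step) {
      case Step.Initial:
        // ...first unit of work...
        job.data.step = Step.Second; // save progress before continuing
        break;
      case Step.Second:
        // ...second unit of work...
        job.data.step = Step.Finish;
        break;
    }
  }
}
```

If an error is thrown between steps, the saved step ensures the retry skips work that has already completed.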
## Delaying
-There are situations when it is valuable to delay a job when it is being processed.
+There are situations when it is useful to delay a job when it is being processed.
-This can be handled using the `moveToDelayed` method. However, it is important to note that when a job is being processed by a worker, the worker keeps a lock on this job with a certain token value. For the `moveToDelayed` method to work, we need to pass said token so that it can unlock without error. Finally, we need to exit from the processor by throwing a special error `DelayedError` that will signal the worker that the job has been delayed so that it does not try to complete (or fail the job) instead.
+This can be handled using the `moveToDelayed` method. However, it is important to note that when a job is being processed by a worker, the worker keeps a lock on this job with a certain token value. For the `moveToDelayed` method to work, we need to pass said token so that it can unlock without error. Finally, we need to exit from the processor by throwing a special error (`DelayedError`) that will signal to the worker that the job has been delayed so that it does not try to complete (or fail the job) instead.
```typescript
import { DelayedError, Worker } from 'bullmq';
@@ -131,7 +131,7 @@ const worker = new Worker(
A common use case is to add children at runtime and then wait for the children to complete.
-This can be handled using the `moveToWaitingChildren` method. However, it is important to note that when a job is being processed by a worker, the worker keeps a lock on this job with a certain token value. For the `moveToWaitingChildren` method to work, we need to pass said token so that it can unlock without error. Finally, we need to exit from the processor by throwing a special error `WaitingChildrenError` that will signal the worker that the job has been moved to waiting-children so that it does not try to complete (or fail the job) instead.
+This can be handled using the `moveToWaitingChildren` method. However, it is important to note that when a job is being processed by a worker, the worker keeps a lock on this job with a certain token value. For the `moveToWaitingChildren` method to work, we need to pass said token so that it can unlock without error. Finally, we need to exit from the processor by throwing a special error (`WaitingChildrenError`) that will signal to the worker that the job has been moved to _waiting-children_, so that it does not try to complete (or fail) the job instead.
{% tabs %}
{% tab title="TypeScript" %}
@@ -278,7 +278,7 @@ Bullmq-Pro: this pattern could be handled by using observables; in that case, we
Another use case is to add flows at runtime and then wait for the children to complete.
-For example, we can add children dynamically in the processor function of a worker. This can be handled in this way:
+For example, we can add children dynamically in the worker's processor function:
```typescript
import { FlowProducer, WaitingChildrenError, Worker } from 'bullmq';
diff --git a/docs/gitbook/patterns/stop-retrying-jobs.md b/docs/gitbook/patterns/stop-retrying-jobs.md
index c64248347f..74889c06d1 100644
--- a/docs/gitbook/patterns/stop-retrying-jobs.md
+++ b/docs/gitbook/patterns/stop-retrying-jobs.md
@@ -1,16 +1,14 @@
# Stop retrying jobs
-When a processor throws an exception that is considered unrecoverable, you should use the `UnrecoverableError` class. In this case, BullMQ will just move the job to the failed set without performing any retries overriding any attempts settings used when adding the job to the queue.
+When a processor throws an exception that is considered unrecoverable, you should use the `UnrecoverableError` class. In this case, BullMQ will just move the job to the failed set without performing any retries, overriding any `attempts` settings used when adding the job to the queue.
```typescript
import { Worker, UnrecoverableError } from 'bullmq';
-const worker = new Worker('foo', async job => {doSomeProcessing();
-throw new UnrecoverableError('Unrecoverable');
-}, {
- connection
- },
-});
+const worker = new Worker('foo', async job => {
+ doSomeProcessing();
+ throw new UnrecoverableError('Unrecoverable');
+}, { connection });
await queue.add(
'test-retry',
@@ -24,7 +22,7 @@ await queue.add(
## Fail job when manual rate-limit
-When we set our queue as rate limited and it's being reprocessed, attempts check is ignored as this case is not considered as a real Error, but in case you want to consider the max attempt as an error you can do the following:
+When a job is rate limited using `Worker.RateLimitError` and tried again, the `attempts` check is ignored, as rate limiting is not considered a real error. However, if you want to manually check the attempts and avoid retrying the job, you can do the following:
```typescript
import { Worker, UnrecoverableError } from 'bullmq';
diff --git a/docs/gitbook/patterns/throttle-jobs.md b/docs/gitbook/patterns/throttle-jobs.md
index cf214af707..05ee0e46ad 100644
--- a/docs/gitbook/patterns/throttle-jobs.md
+++ b/docs/gitbook/patterns/throttle-jobs.md
@@ -1,8 +1,12 @@
# Throttle jobs
-Sometimes, you want to update data in reactions to a sequence of events instead at each event. You can enforce `jobId` to be unique with `JobsOptions.jobId?: string`. That overrides the job ID - by default, the job ID is a unique integer, but you can use this setting to override it. If you use this option, it is up to you to ensure the jobId is unique. If you attempt to add a job with an id that already exists, it will not be added.
+Sometimes, you may want to enqueue a job in reaction to a frequently occurring event, without running that job for _every_ event. For example, you may want to send an email to a user when they update their profile, but you don't want to send an email for every single update if they make many changes in rapid succession. This is sometimes called "debouncing".
-Hint: Be careful if using removeOnComplete/removeOnFailed options, since a removed job will not count as existing and a new job with the same job ID would indeed be added to the queue.
+You can achieve this by setting an identical `jobId` (using `JobsOptions.jobId?: string` to override the default unique integer) so **"identical" jobs are considered duplicates and not added to the queue**. If you use this option, it is up to you to ensure the `jobId` is unique.
+
+{% hint style="warning" %}
+Hint: Be careful if using `removeOnComplete`/`removeOnFailed` options, since a removed job will not count as existing and a new job with the same job ID could be added to the queue without being detected as a duplicate.
+{% endhint %}
example:
diff --git a/docs/gitbook/python/introduction.md b/docs/gitbook/python/introduction.md
index b366727b90..49e2a8a665 100644
--- a/docs/gitbook/python/introduction.md
+++ b/docs/gitbook/python/introduction.md
@@ -37,7 +37,7 @@ await queue.close()
```
-In order to consume the jobs from the queue you need to use the Worker class, providing a "processor" function that will consume the jobs. As soon as the worker is instantiated it will start consuming jobs:
+In order to consume the jobs from the queue you need to use the `Worker` class, providing a "processor" function that will consume the jobs. As soon as the worker is instantiated it will start consuming jobs:
```python
from bullmq import Worker
@@ -60,6 +60,4 @@ async def main():
if __name__ == "__main__":
asyncio.run(main())
-
```
-