
Tweak documentation (grammar, format, etc.)
This makes minor changes to the documentation, mostly consisting of
small grammar and spelling fixes, as well as formatting, and some
rewording.
sambostock committed Aug 29, 2023
1 parent e640e5a commit 3930ac5
Showing 68 changed files with 296 additions and 307 deletions.
6 changes: 3 additions & 3 deletions docs/gitbook/bull-3.x-migration/compatibility-class.md
@@ -1,17 +1,17 @@
# Compatibility class

- The Queue3 class is targeted to simplify migration of projects based on Bull 3. Though it does not offer 100% API and functional compatibility, upgrading to BullMQ with this class should be easier for users familiar with Bull 3.
+ The `Queue3` class is targeted to simplify migration of projects based on Bull 3. Though it does not offer 100% API and functional compatibility, upgrading to BullMQ with this class should be easier for users familiar with Bull 3.

Differences in interface include

* fixed order of `add()` and `process()` method arguments
* class instantiation requires use of the `new` operator
- * interfaces for Queue and Job options and Job class do not have wrappers and used directly
+ * interfaces for Queue and Job options and Job class do not have wrappers and are used directly
* there's no `done` argument expected in `process()` callback anymore; now the callback must always return a `Promise` object
* name property is mandatory in `add()` method
* concurrency is moved from `process()` argument to queue options

- Functional differences generally include only absence of named processors feature and minor changes in local and global events set. The mandatory `name` property in `add()` method can contain any string and getting saved to Redis as is. When job is in progress, you can read this value as `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.
+ Functional differences generally include only the absence of the named processors feature and minor changes in the set of local and global events. The mandatory `name` property in the `add()` method can contain any string and gets saved to Redis as is. When a job is in progress, you can read this value using `job.name` \(`job.data` and `job.id` are available as usual\). See the \[link\] for details.

The all-in-one example:

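(The all-in-one example is collapsed in this view. As a placeholder, here is a minimal sketch of the differences listed above; it assumes `Queue3` is importable from the package and is not the document's own example.)

```typescript
import { Queue3 } from 'bullmq';

// Instantiation requires the `new` operator; concurrency now lives in
// the queue options rather than in `process()`.
const queue = new Queue3('my-queue');

// There is no `done` callback: the processor must return a Promise.
queue.process(async job => {
  console.log(job.name, job.data); // `name` is always present
  return { ok: true };
});

// The `name` argument is mandatory and is saved to Redis as is.
await queue.add('my-job', { foo: 'bar' });
```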
2 changes: 0 additions & 2 deletions docs/gitbook/bull/patterns/custom-backoff-strategy.md
@@ -112,5 +112,3 @@
```typescript
myQueue.add({ msg: 'Specific Error' }, {
  // … (retry options collapsed in this view; see the sketch below)
});
```

- \
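(The strategy definition itself is collapsed above. In Bull, a custom strategy is registered under `settings.backoffStrategies` and selected per job by name; a minimal sketch, with an assumed strategy name of `jitter`:)

```typescript
import Queue from 'bull';

const myQueue = new Queue('custom-backoff', {
  settings: {
    backoffStrategies: {
      // Wait ~5 seconds plus a random jitter before each retry.
      jitter: (attemptsMade: number, err: Error) =>
        5000 + Math.round(Math.random() * 500),
    },
  },
});

await myQueue.add(
  { msg: 'Hello' },
  { attempts: 3, backoff: { type: 'jitter' } },
);
```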
4 changes: 2 additions & 2 deletions docs/gitbook/bull/patterns/manually-fetching-jobs.md
@@ -1,6 +1,6 @@
# Manually fetching jobs

- If you want to manually fetch the jobs from the queue instead of letting the automatic processor taking care of it, this pattern is for your.
+ If you want to manually fetch the jobs from the queue instead of letting the automatic processor take care of it, this pattern is for you.

Manually transitioning states for jobs can be done with a few simple methods.

@@ -53,4 +53,4 @@ if (nextJobdata) {

**Note**

- By default the lock duration for a job that has been returned by `getNextJob` or `moveToCompleted` is 30 seconds, if it takes more time than that the job will be automatically marked as stalled and depending on the max stalled options be moved back to the wait state or marked as failed. In order to avoid this you must use `job.extendLock(duration)` in order to give you some more time before the lock expires. The recommended is to extend the lock when half the lock time has passsed.
+ By default, the lock duration for a job that has been returned by `getNextJob` or `moveToCompleted` is 30 seconds. If it takes more time than that, the job will be automatically marked as stalled and, depending on the max stalled options, be moved back to the wait state or marked as failed. To avoid this, use `job.extendLock(duration)` to give yourself more time before the lock expires. It is recommended to extend the lock when half the lock time has passed.
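(A sketch of that advice, renewing the lock halfway through its duration; `doWork` is a hypothetical helper standing in for the actual processing:)

```typescript
import Queue from 'bull';

const queue = new Queue('manual');

const job = await queue.getNextJob();
if (job) {
  // Renew the default 30-second lock once half of it has elapsed.
  const renewal = setInterval(() => job.extendLock(30000), 15000);
  try {
    const result = await doWork(job); // hypothetical work function
    await job.moveToCompleted(result, true);
  } catch (err) {
    await job.moveToFailed({ message: (err as Error).message }, true);
  } finally {
    clearInterval(renewal);
  }
}
```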
2 changes: 1 addition & 1 deletion docs/gitbook/bull/patterns/message-queue.md
@@ -1,6 +1,6 @@
# Message queue

- Bull can also be used for persistent message queues. This is a quite useful feature in some use cases. For example, you can have two servers that need to communicate with each other. By using a queue the servers do not need to be online at the same time, so this creates a very robust communication channel. You can treat `add` as _send_ and `process` as _receive_:
+ Bull can also be used for persistent message queues. This is a quite useful feature in some use cases. For example, you can have two servers that need to communicate with each other. By using a queue, the servers do not need to be online at the same time, so this creates a very robust communication channel. You can treat `add` as _send_ and `process` as _receive_:

Server A:

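(The Server A and Server B snippets are collapsed in this view; a minimal sketch of the pattern, with an assumed queue name:)

```typescript
import Queue from 'bull';

// Server A: `add` acts as "send".
const sendQueue = new Queue('server-b-inbox');
await sendQueue.add({ msg: 'Hello' });

// Server B: `process` acts as "receive".
const receiveQueue = new Queue('server-b-inbox');
receiveQueue.process(async job => {
  console.log('Received message', job.data.msg);
});
```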
12 changes: 6 additions & 6 deletions docs/gitbook/bull/patterns/persistent-connections.md
@@ -1,18 +1,18 @@
# Persistent connections

- A crucial feature for a subsystem in a microservice architecture is that it should automatically handle disconnections from other services and keep this connections alive for as long as the service is running.
+ A crucial feature for a subsystem in a microservice architecture is that it should automatically handle disconnections from other services and keep these connections alive for as long as the service is running.

For example, if your service has a connection to a database, and the connection to said database breaks, you would like that service to handle this disconnection as gracefully as possible and as soon as the database is back online continue to work without human intervention.

- Since Bull relies on **ioredis** for accessing Redis, the default is auto-reconnect forever, this behaviour can be customized but most likely the default is the best setting currently: [https://github.com/luin/ioredis#auto-reconnect](https://github.com/luin/ioredis#auto-reconnect)
+ Since Bull relies on **ioredis** for accessing Redis, the default is auto-reconnect forever. This behaviour can be customized but most likely the default is the best setting currently: [https://github.com/luin/ioredis#auto-reconnect](https://github.com/luin/ioredis#auto-reconnect)

- In the context of Bull, we have normally two different cases that are handled differently.
+ In the context of Bull, we normally have two different cases that are handled differently.

### Workers

A worker is consuming jobs from the queue as fast as it can. If it loses the connection to Redis we want the worker to "wait" until Redis is available again. For this to work we need to understand an important setting in our Redis options (which are handled by ioredis):

- #### maxRetriesPerRequest
+ #### `maxRetriesPerRequest`

This setting tells the ioredis client how many times to try a command that fails before throwing an error. So even if Redis is not reachable or offline, the command will be retried until this situation changes or the maximum number of attempts is reached.

@@ -22,11 +22,11 @@ This guarantees that the workers will keep processing forever as long as there i

### Queue

- A simple Queue instance used for managing the queue such as adding jobs, pausing, using getters, etc. has usually different requirements as the worker.
+ A simple Queue instance used for managing the queue such as adding jobs, pausing, using getters, etc. usually has different requirements from the worker.

For example, say that you are adding jobs to a queue as the result of a call to an HTTP endpoint. The caller of this endpoint cannot wait forever if the connection to Redis happens to be down when this call is made.

- Therefore the **maxRetriesPerRequest** setting should either be left at its default (which currently is 20) or set it to another value, maybe 1 so that the user gets an error quickly and can retry later.
+ Therefore the `maxRetriesPerRequest` setting should either be left at its default (which currently is 20) or set to another value, perhaps 1, so that the user gets an error quickly and can retry later.
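(A sketch of the two profiles described above, assuming Bull passes the `redis` options object straight through to ioredis:)

```typescript
import Queue from 'bull';

// Worker side: retry blocking commands indefinitely so that processing
// resumes on its own once Redis is reachable again.
const workerQueue = new Queue('tasks', {
  redis: { maxRetriesPerRequest: null },
});

// Producer side (e.g. behind an HTTP endpoint): fail fast so that the
// caller gets an error quickly and can retry later.
const producerQueue = new Queue('tasks', {
  redis: { maxRetriesPerRequest: 1 },
});
```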



4 changes: 0 additions & 4 deletions docs/gitbook/bull/patterns/redis-cluster.md
@@ -11,7 +11,3 @@
```typescript
const queue = new Queue('cluster', {
  // … (options collapsed in this view)
});
```

If you use several queues in the same cluster, you should use different prefixes so that the queues are evenly placed in the cluster nodes.

- ###

- \
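(For example, using Bull's standard `prefix` queue option; the braces make each prefix a Redis Cluster hash tag, keeping each queue's keys in one slot while different queues land on different nodes:)

```typescript
import Queue from 'bull';

const videoQueue = new Queue('video', { prefix: '{video}' });
const audioQueue = new Queue('audio', { prefix: '{audio}' });
```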
5 changes: 0 additions & 5 deletions docs/gitbook/bull/patterns/returning-job-completions.md
@@ -2,9 +2,4 @@

A common pattern is where you have a cluster of queue processors that just process jobs as fast as they can, and some other services that need to take the result of these processors and do something with it, maybe storing results in a database.

- \
The most robust and scalable way to accomplish this is by combining the standard job queue with the message queue pattern: a service sends jobs to the cluster just by opening a job queue and adding jobs to it, and the cluster will start processing as fast as it can. Every time a job gets completed in the cluster, a message is sent to a results message queue with the result data, and this queue is listened to by some other service that stores the results in a database.



- \
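(A sketch of the combined pattern; `transcode` and `saveToDatabase` are hypothetical helpers:)

```typescript
import Queue from 'bull';

const jobQueue = new Queue('work');
const resultQueue = new Queue('results');

// The processing cluster completes jobs and forwards each result.
jobQueue.process(async job => {
  const result = await transcode(job.data); // hypothetical work function
  await resultQueue.add(result);
  return result;
});

// A separate service listens on the results queue and stores the data.
resultQueue.process(async job => {
  await saveToDatabase(job.data); // hypothetical persistence helper
});
```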
1 change: 0 additions & 1 deletion docs/gitbook/bull/patterns/reusing-redis-connections.md
@@ -35,5 +35,4 @@
```typescript
const opts = {
  // … (body collapsed in this view; see the sketch after this block)
};

const queueFoo = new Queue("foobar", opts);
const queueQux = new Queue("quxbaz", opts);

```
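(The `opts` body is collapsed above; Bull's connection-reuse pattern is built around the `createClient` option, roughly as follows:)

```typescript
import Redis from 'ioredis';

const client = new Redis();
const subscriber = new Redis();

const opts = {
  createClient: (type: string) => {
    switch (type) {
      case 'client':
        return client; // shared connection for regular commands
      case 'subscriber':
        return subscriber; // shared connection for pub/sub
      default:
        return new Redis(); // 'bclient' connections block, so create fresh ones
    }
  },
};
```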
14 changes: 7 additions & 7 deletions docs/gitbook/bullmq-pro/batches.md
@@ -4,11 +4,11 @@ description: Processing jobs in batches

# Batches

- It is possible to configure the workers so that instead of processing one job at a time they can process up to a number of jobs (a so-called batch) in one go.
+ It is possible to configure workers so that instead of processing one job at a time they can process up to a given number of jobs (a so-called _batch_) in one go.

Workers using batches have slightly different semantics and behavior than normal workers, so read the following examples carefully to avoid pitfalls.

- In order to enable batches you must pass the batch option with a size representing the maximum amount of jobs per batch:
+ In order to enable batches you must pass the `batch` option with a size representing the maximum number of jobs per batch:

```typescript
const worker = new WorkerPro(
  // … (processor and batch options collapsed in this view)
);
```

{% hint style="info" %}
- There is no maximum limit for the size of the batches, however, keep in mind that there is an overhead proportional to the size of the batch so really large batches could create performance issues. A typical value would be something between 10 and 50 jobs per batch.
+ There is no maximum limit for the size of the batches; however, keep in mind that there is an overhead proportional to the size of the batch, so really large batches could create performance issues. A typical value would be something between 10 and 50 jobs per batch.
{% endhint %}

### Failing jobs
@@ -54,18 +54,18 @@
```typescript
const worker = new WorkerPro(
  // … (processor collapsed in this view; see the sketch below)
);
```

- Only the jobs that are `setAsFailed` will fail, the rest will be moved to complete when the processor for the batch job completes.
+ Only the jobs that are `setAsFailed` will fail; the rest will be moved to _complete_ when the processor for the batch job completes.
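(The processor above is collapsed in this view. A sketch of failing individual jobs inside a batch, following this page's convention of an ambient `connection` object; the `shouldFail` flag is a made-up condition:)

```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';

const worker = new WorkerPro(
  'My Queue',
  async job => {
    for (const batchedJob of job.getBatch()) {
      if (batchedJob.data.shouldFail) {
        // Only this job is failed; the others complete with the batch.
        batchedJob.setAsFailed(new Error('failed reason'));
      }
    }
  },
  { connection, batch: { size: 10 } },
);
```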

### Handling events

Batches are handled by wrapping all the jobs in a batch into a dummy job that keeps all the jobs in an internal array. This approach simplifies the mechanics of running batches; however, it also affects things like how events are handled. For instance, if you need to listen for individual jobs that have completed or failed you must use global events, as the event handler on the worker instance will only report on the events produced by the wrapper batch job, and not the jobs themselves.

- It is possible, however, to call the getBatch function in order to retrieve all the jobs that belong to a given batch.
+ It is possible, however, to call the `getBatch` function in order to retrieve all the jobs that belong to a given batch.

```typescript
worker.on('completed', job => {
  const batch = job.getBatch();
- e;
+ // ...
});
```

@@ -82,7 +82,7 @@ queueEvents.on('completed', (jobId, err) => {

### Limitations

- Currently, all worker options can be used with the batches, however, there are some unsupported features that may be implemented in the future:
+ Currently, all worker options can be used with batches; however, there are some unsupported features that may be implemented in the future:

- [Dynamic rate limit](https://docs.bullmq.io/guide/rate-limiting#manual-rate-limit)
- [Manually processing jobs](https://docs.bullmq.io/patterns/manually-fetching-jobs)
8 changes: 4 additions & 4 deletions docs/gitbook/bullmq-pro/groups/README.md
@@ -1,6 +1,6 @@
# Groups

- Groups allows you to use only one queue yet distribute the jobs among groups so that the jobs are processed one by one relative to the group they belong to.
+ Groups allow you to use a single queue while distributing the jobs among groups so that the jobs are processed one by one relative to the group they belong to.

For example, imagine that you have 1 queue for processing video transcoding for all your users; you may have thousands of users in your application. You need to offload the transcoding operation since it is lengthy and CPU-consuming. If you have many users that want to transcode many files, then in a non-grouped queue one user could fill the queue with jobs and the rest of the users will need to wait for that user to complete all its jobs before their jobs get processed.

@@ -18,9 +18,9 @@ If you only use grouped jobs in a queue, the waiting jobs list will not grow, in
There is no hard limit on the number of groups that you can have, nor do they have any impact on performance. When a group is empty, the group itself does not consume any resources in Redis.
{% endhint %}

- Another way to see groups is like "virtual" queues. So instead of having one queue per "user" you have a "virtual" queue per user so that all users get their jobs processed in a more predictable way.
+ Another way to see groups is like "virtual" queues. So instead of having one queue per "user", you have a "virtual" queue per user so that all users get their jobs processed in a more predictable way.

- In order to use the group functionality just use the group property in the job options when adding a job:
+ In order to use the group functionality, use the group property in the job options when adding a job:

```typescript
import { QueuePro } from '@taskforcesh/bullmq-pro';
// … (queue setup collapsed in this view)
const job2 = await queue.add(
  // … (arguments collapsed; see the sketch after this block)
);
```
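(The example above is collapsed in this view; a sketch of adding jobs to two different groups, assuming an ambient `connection` object:)

```typescript
import { QueuePro } from '@taskforcesh/bullmq-pro';

const queue = new QueuePro('myQueue', { connection });

// Jobs in the same group are processed one by one relative to that group.
const job1 = await queue.add(
  'paint',
  { foo: 'bar' },
  { group: { id: 'groupId1' } },
);

const job2 = await queue.add(
  'paint',
  { foo: 'baz' },
  { group: { id: 'groupId2' } },
);
```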

- In order to process the jobs, just use a pro worker as you normally do with standard workers:
+ In order to process the jobs, use a pro worker as you normally do with standard workers:

```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
// … (worker setup collapsed in this view)
```
8 changes: 4 additions & 4 deletions docs/gitbook/bullmq-pro/groups/concurrency.md
@@ -1,10 +1,10 @@
# Concurrency

- By default, there is no limit on the number of jobs that the workers can run in parallel for every group. Even using a rate limit, that would only limit the processing speed, but still you could have an unbounded number of jobs processed simultaneously in every group.
+ By default, there is no limit on the number of jobs that workers can run in parallel for every group. Even using a rate limit, that would only limit the processing speed, but still you could have an unbounded number of jobs processed simultaneously in every group.

- It is possible to constraint how many jobs are allowed to be processed concurrently per group, so for example, if you choose 3 as max concurrency factor, the workers will never work on more than 3 jobs at the same time for any given group. This limits only the group, you could have any number of concurrent jobs as long as they are not from the same group.
+ It is possible to constrain how many jobs are allowed to be processed concurrently per group. For example, if you choose 3 as max concurrency factor, the workers will never work on more than 3 jobs at the same time for any given group. This limits only the group; you could have any number of concurrent jobs as long as they are not from the same group.

- You enable the concurrency setting like this:
+ The concurrency factor is configured as follows:

```typescript
import { WorkerPro } from '@taskforcesh/bullmq-pro';
// … (setup collapsed in this view)
const worker = new WorkerPro('myQueue', processFn, {
  group: {
    concurrency: 3 // Limit to max 3 parallel jobs per group
  },
- concurrency: 100
+ concurrency: 100,
  connection
});
```
9 changes: 3 additions & 6 deletions docs/gitbook/bullmq-pro/groups/max-group-size.md
@@ -2,9 +2,9 @@

It is possible to set a maximum group size. This can be useful if you want to keep the number of jobs within some limits and you can afford to discard new jobs.

- When a group has reached the defined max size, adding new jobs to that group will result in an exception being thrown, that you can catch and ignore if you do not care about it.
+ When a group has reached the defined max size, adding new jobs to that group will result in an exception being thrown that you can catch and ignore if you do not care about it.

- You can use the "maxSize" option when adding jobs to a group like this:
+ You can use the `maxSize` option when adding jobs to a group like this:

```typescript
import { QueuePro, GroupMaxSizeExceededError } from '@taskforcesh/bullmq-pro';
// … (example body collapsed in this view; see the sketch below)

```



{% hint style="info" %}
- The maxSize option is not yet available for "addBulk".
+ The `maxSize` option is not yet available for `addBulk`.
{% endhint %}
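(Since the example above is collapsed, here is a sketch of catching the exception, assuming `maxSize` is passed inside the `group` job option and an ambient `connection` object:)

```typescript
import { QueuePro, GroupMaxSizeExceededError } from '@taskforcesh/bullmq-pro';

const queue = new QueuePro('myQueue', { connection });

try {
  await queue.add(
    'paint',
    { foo: 'bar' },
    { group: { id: 'groupId1', maxSize: 100 } },
  );
} catch (err) {
  if (err instanceof GroupMaxSizeExceededError) {
    // The group is full; ignore the error if discarding jobs is acceptable.
  } else {
    throw err;
  }
}
```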
10 changes: 5 additions & 5 deletions docs/gitbook/bullmq-pro/groups/pausing-groups.md
@@ -2,28 +2,28 @@

BullMQ Pro supports pausing groups globally. A group is paused when no workers will pick up any jobs that belong to the paused group. When you pause a group, the workers that are currently busy processing a job from that group will continue working on that job until it completes (or fails), and then will just keep idling until the group has been resumed.

- Pausing a group is performed by calling the _**pauseGroup**_ method on a [queue](https://api.bullmq.pro/classes/v6.Queue.html#pauseGroup) instance:
+ Pausing a group is performed by calling the `pauseGroup` method on a [`Queue`](https://api.bullmq.pro/classes/v6.Queue.html#pauseGroup) instance:

```typescript
await myQueue.pauseGroup('groupId');
```

{% hint style="info" %}
- Even if the groupId does not exist at that time, the groupId will be added in our paused list as a group could be ephemeral
+ Even if the `groupId` does not exist at that time, it will be added to our paused list, as a group could be ephemeral.
{% endhint %}

{% hint style="warning" %}
- It will return false if the group is already paused.
+ `pauseGroup` will return `false` if the group is already paused.
{% endhint %}

- Resuming a group is performed by calling the _**resumeGroup**_ method on a [queue](https://api.bullmq.pro/classes/v6.Queue.html#resumeGroup) instance:
+ Resuming a group is performed by calling the `resumeGroup` method on a [`Queue`](https://api.bullmq.pro/classes/v6.Queue.html#resumeGroup) instance:

```typescript
await myQueue.resumeGroup('groupId');
```

{% hint style="warning" %}
- It will return false if the group does not exist or when the group is already resumed.
+ `resumeGroup` will return `false` if the group does not exist or when the group is already resumed.
{% endhint %}

## Read more:
2 changes: 1 addition & 1 deletion docs/gitbook/bullmq-pro/groups/prioritized.md
@@ -1,6 +1,6 @@
# Prioritized intra-groups

- BullMQ Pro supports priorities per group. A job is prioritized in a group when group and priority options are provided together.
+ BullMQ Pro supports priorities per group. A job is prioritized in a group when group and priority options are provided _together_.

```typescript
await myQueue.add(
  // … (arguments collapsed in this view)
);
```

