docs(guide): add queue global concurrency section (#2667)
roggervalf authored Jul 21, 2024
1 parent 1bf1258 commit c905d62
Showing 7 changed files with 44 additions and 39 deletions.
1 change: 1 addition & 0 deletions docs/gitbook/SUMMARY.md
@@ -16,6 +16,7 @@
* [Queues](guide/queues/README.md)
* [Auto-removal of jobs](guide/queues/auto-removal-of-jobs.md)
* [Adding jobs in bulk](guide/queues/adding-bulks.md)
* [Global Concurrency](guide/queues/global-concurrency.md)
* [Removing Jobs](guide/queues/removing-jobs.md)
* [Workers](guide/workers/README.md)
* [Auto-removal of jobs](guide/workers/auto-removal-of-jobs.md)
16 changes: 16 additions & 0 deletions docs/gitbook/bullmq-pro/groups/prioritized.md
@@ -19,6 +19,22 @@ await myQueue.add(
The priorities go from 0 to 2097151, where a higher number means lower priority (as in Unix [processes](https://en.wikipedia.org/wiki/Nice\_\(Unix\))). Thus, jobs without any explicit priority will have the highest priority.
{% endhint %}

## Get Counts per Priority for Group

If you want to get the `count` of jobs in `prioritized` status (priorities higher than 0) or in `waiting` status (priority 0) for a specific group, use the **`getCountsPerPriorityForGroup`** method. For example, let's say that you want to get the counts for priorities `1` and `0`:

```typescript
const counts = await queue.getCountsPerPriorityForGroup('groupId', [1, 0]);
/*
{
'1': 11,
'0': 10
}
*/
```

## Read more:

* 💡 [Add Job API Reference](https://api.bullmq.pro/classes/v7.Queue.html#add)
* 💡 [Get Counts per Priority for Group API Reference](https://api.bullmq.pro/classes/v7.Queue.html#getCountsPerPriorityForGroup)

2 changes: 1 addition & 1 deletion docs/gitbook/changelog.md
@@ -109,7 +109,7 @@

### Bug Fixes

* extendlock,createbulk use pipeline no multi command ([a053d9b](https://github.com/taskforcesh/bullmq/commit/a053d9b87e9799b151e2563b499dbff309b9d2e5))
* extendlock, createbulk use pipeline no multi command ([#2584](https://github.com/taskforcesh/bullmq/pull/2584)) ([a053d9b](https://github.com/taskforcesh/bullmq/commit/a053d9b87e9799b151e2563b499dbff309b9d2e5))

## [5.7.12](https://github.com/taskforcesh/bullmq/compare/v5.7.11...v5.7.12) (2024-05-24)

3 changes: 0 additions & 3 deletions docs/gitbook/guide/parallelism-and-concurrency.md
@@ -37,6 +37,3 @@ The concurrency factor will just take advantage of NodeJS's event loop so that t
If the jobs are very CPU intensive and make no IO calls, then there is no point in having a large concurrency factor, as it will just add overhead. Still, since BullMQ itself also performs IO operations (when updating Redis and fetching new jobs), there is a chance that a small concurrency factor may even improve the throughput of CPU-intensive jobs.

Secondly, you can run as many workers as you want. Every worker will run in parallel if it has a CPU at its disposal. You can run several workers on a given machine if it has more than one core, and you can also run workers on entirely different machines. The jobs running on different workers execute in parallel, so even if the jobs are CPU-intensive you will be able to increase the throughput, which will normally scale linearly with the number of workers.
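
For IO-bound jobs, a single worker with a high concurrency factor can keep many jobs in flight on one event loop. The following is a minimal sketch, assuming a local Redis instance and Node 18+; the queue name, URL and `concurrency` value are placeholders chosen for illustration:

```typescript
import { Worker } from 'bullmq';

const worker = new Worker(
  'myQueueName',
  async job => {
    // Mostly waiting on IO, so many jobs can make progress concurrently
    // on a single event loop. Node 18+ provides a global fetch.
    const response = await fetch(`https://example.com/items/${job.data.itemId}`);
    return response.json();
  },
  {
    connection: { host: 'localhost', port: 6379 },
    concurrency: 50, // process up to 50 jobs in parallel in this worker
  },
);
```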



24 changes: 24 additions & 0 deletions docs/gitbook/guide/queues/global-concurrency.md
@@ -0,0 +1,24 @@
# Global Concurrency

The global concurrency factor is a queue option that determines how many jobs are allowed to be processed in parallel across all your worker instances.

```typescript
import { Queue } from 'bullmq';

// Assuming a queue instance; the queue name is just a placeholder.
const queue = new Queue('myQueueName');

await queue.setGlobalConcurrency(4);
```

And to retrieve this value:

```typescript
const globalConcurrency = await queue.getGlobalConcurrency();
```

{% hint style="info" %}
Note that the concurrency level you choose for your workers does not override the global one; it only sets the maximum number of jobs a given worker can process in parallel, and the total across all workers will never exceed the global limit.
{% endhint %}
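
For example, the interplay between the global limit and per-worker concurrency might look like the following sketch (assuming a local Redis instance; the queue name and the numbers are illustrative only):

```typescript
import { Job, Queue, Worker } from 'bullmq';

const connection = { host: 'localhost', port: 6379 };
const queue = new Queue('myQueueName', { connection });

// Never process more than 4 jobs of this queue at the same time,
// regardless of how many workers are running.
await queue.setGlobalConcurrency(4);

const processor = async (job: Job) => {
  // process job.data here
};

// Each worker can take up to 3 jobs in parallel, but together they will
// still never exceed the global limit of 4.
const workerA = new Worker('myQueueName', processor, { connection, concurrency: 3 });
const workerB = new Worker('myQueueName', processor, { connection, concurrency: 3 });
```

Here the two workers could process up to 6 jobs on their own, but the global limit caps the total at 4 jobs in flight across both of them.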

## Read more:

- 💡 [Set Global Concurrency API Reference](https://api.docs.bullmq.io/classes/v5.Queue.html#setGlobalConcurrency)
- 💡 [Get Global Concurrency API Reference](https://api.docs.bullmq.io/classes/v5.Queue.html#getGlobalConcurrency)
24 changes: 2 additions & 22 deletions docs/gitbook/guide/workers/concurrency.md
@@ -1,26 +1,6 @@
# Concurrency

There are basically two ways to achieve concurrency with BullMQ. You can run a worker with a concurrency factor larger than 1 \(which is the default value\), or you can run several workers in different node processes.

#### Global Concurrency factor

The global concurrency factor is a queue option that determines how many jobs are allowed to be processed in parallel across all your worker instances.

```typescript
import { Queue } from 'bullmq';

await queue.setGlobalConcurrency(4);
```

And in order to get this value:

```typescript
const globalConcurrency = await queue.getGlobalConcurrency();
```

{% hint style="info" %}
Note that if you choose a concurrency level in your workers, it will not override the global one, it will just be the maximum jobs a given worker can process in parallel but never more than the global one.
{% endhint %}
There are basically two ways to achieve concurrency with BullMQ using Worker instances. You can run a worker with a concurrency factor larger than 1 \(the default value is 1\), or you can run several workers in different node processes.

#### Local Concurrency factor

@@ -54,7 +34,7 @@ worker.concurrency = 5;
The other way to achieve concurrency is to provide multiple workers. This is the recommended way to set up BullMQ anyway, since besides providing concurrency it also gives your workers higher availability. You can easily launch a fleet of workers running on many different machines in order to execute the jobs in parallel in a predictable and robust way.

{% hint style="info" %}
It is not possible to achieve a global concurrency of at most 1 job at a time if you use more than one worker.
If you need to achieve a global concurrency of at most 1 job at a time, refer to [Global concurrency](../queues/global-concurrency).
{% endhint %}

You can still \(and it is a perfectly good practice to\) choose a high concurrency factor for every worker, so that the resources of every machine where the worker is running are used more efficiently.
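
As a sketch of this pattern, the same worker script can simply be started in several processes or on several different machines, and every instance will pull jobs from the same queue (the queue name, connection details and processor below are placeholders):

```typescript
// worker.ts — launch this file as many times as you need, e.g. one process
// per core or one per machine.
import { Worker } from 'bullmq';

const worker = new Worker(
  'myQueueName',
  async job => {
    // process job.data here
    return 'done';
  },
  {
    connection: { host: 'localhost', port: 6379 },
    concurrency: 10, // each process can additionally use a local concurrency factor
  },
);

worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed`, err);
});
```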
13 changes: 0 additions & 13 deletions docs/gitbook/python/changelog.md
@@ -12,12 +12,9 @@
## v2.8.0 (2024-07-10)
### Feature
* **queue:** Add getCountsPerPriority method [python] ([#2607](https://github.com/taskforcesh/bullmq/issues/2607)) ([`02b8338`](https://github.com/taskforcesh/bullmq/commit/02b83380334879cc2434043141566f2a375db958))
* **queue:** Add getCountsPerPriority method ([#2595](https://github.com/taskforcesh/bullmq/issues/2595)) ([`77971f4`](https://github.com/taskforcesh/bullmq/commit/77971f42b9fc425ad66e0b581e800ea429fc254e))

### Fix
* **parent:** Consider re-adding child that is in completed state using same jobIds (#2627) (python) fixes #2554 ([`00cd017`](https://github.com/taskforcesh/bullmq/commit/00cd0174539fbe1cc4628b9b6e1a7eb87a5ef705))
* **queue-getters:** Consider passing maxJobs when calling getRateLimitTtl (#2631) fixes #2628 ([`9f6609a`](https://github.com/taskforcesh/bullmq/commit/9f6609ab1856c473b2d5cf0710068ce2751d708e))
* **job:** Consider changing priority to 0 ([#2599](https://github.com/taskforcesh/bullmq/issues/2599)) ([`4dba122`](https://github.com/taskforcesh/bullmq/commit/4dba122174ab5173315fca7fdbb7454761514a53))
* **priority:** Consider paused state when calling getCountsPerPriority (python) ([#2609](https://github.com/taskforcesh/bullmq/issues/2609)) ([`6e99250`](https://github.com/taskforcesh/bullmq/commit/6e992504b2a7a2fa76f1d04ad53d1512e98add7f))
* **priority:** Use module instead of bit.band to keep order (python) ([#2597](https://github.com/taskforcesh/bullmq/issues/2597)) ([`9ece15b`](https://github.com/taskforcesh/bullmq/commit/9ece15b17420fe0bee948a5307e870915e3bce87))

@@ -27,19 +24,9 @@

## v2.7.7 (2024-06-04)
### Fix
* **worker:** Properly cancel blocking command during disconnections ([`2cf12b3`](https://github.com/taskforcesh/bullmq/commit/2cf12b3622b0517f645971ece8acdcf673bede97))
* Extendlock,createbulk use pipeline no multi command ([`a053d9b`](https://github.com/taskforcesh/bullmq/commit/a053d9b87e9799b151e2563b499dbff309b9d2e5))
* **repeat:** Throw error when endDate is pointing to the past ([#2574](https://github.com/taskforcesh/bullmq/issues/2574)) ([`5bd7990`](https://github.com/taskforcesh/bullmq/commit/5bd79900ea3ace8ec6aa00525aff81a345f8e18e))
* **retry-job:** Throw error when job is not in active state ([#2576](https://github.com/taskforcesh/bullmq/issues/2576)) ([`ca207f5`](https://github.com/taskforcesh/bullmq/commit/ca207f593d0ed455ecc59d9e0ef389a9a50d9634))
* **sandboxed:** Ensure DelayedError is checked in Sandboxed processors (#2567) fixes #2566 ([`8158fa1`](https://github.com/taskforcesh/bullmq/commit/8158fa114f57619b31f101bc8d0688a09c6218bb))
* **job:** Validate job existence when adding a log ([#2562](https://github.com/taskforcesh/bullmq/issues/2562)) ([`f87e3fe`](https://github.com/taskforcesh/bullmq/commit/f87e3fe029e48d8964722da762326e531c2256ee))

### Documentation
* Correct typo in `maxmemory-policy` reference ([`c19c839`](https://github.com/taskforcesh/bullmq/commit/c19c83979a50fd5e188bb97d0511481e460bdfc9))
* **aws-elasticache:** Fix image not displayed correctly(#2564) ([`2dd3709`](https://github.com/taskforcesh/bullmq/commit/2dd3709fe3b638f2ff13851fc9ff4dc81c4bfe94))
* Fix typo ([#2563](https://github.com/taskforcesh/bullmq/issues/2563)) ([`be68695`](https://github.com/taskforcesh/bullmq/commit/be68695028fac0581b1561be7c6705188d9cdbb7))
* **pro:** Add local group concurrency section ([#2551](https://github.com/taskforcesh/bullmq/issues/2551)) ([`cce0774`](https://github.com/taskforcesh/bullmq/commit/cce0774cffcee591407eee4d4530daa37aab3eca))

### Performance
* **job:** Set processedBy using hmset (#2592) (python) ([`238680b`](https://github.com/taskforcesh/bullmq/commit/238680b84593690a73d542dbe1120611c3508b47))

