
Commit

doc: update docstring and readme.md
leavers committed Dec 27, 2023
1 parent f07f30e commit 68e9236
Showing 3 changed files with 142 additions and 3 deletions.
103 changes: 102 additions & 1 deletion README.md
@@ -1 +1,102 @@
# flexexecutor
# Flexexecutor

![Testing](https://github.com/leavers/flexexecutor/workflows/Test%20Suite/badge.svg?event=push&branch=main)

Flexexecutor provides executors that can automatically scale the number of workers up
and down.

## Overview

Flexexecutor implements several subclasses of `concurrent.futures.Executor`.
Like the built-in `ThreadPoolExecutor` and `ProcessPoolExecutor`,
they create multiple workers to execute tasks concurrently.
Additionally, they can shut down idle workers to save resources. These subclasses are:

- `ThreadPoolExecutor` - thread concurrency; it shares the name of the built-in class
  and can be used as a drop-in replacement (see the sketch after this list).
- `AsyncPoolExecutor` - coroutine concurrency. Note that in flexexecutor's
implementation, all coroutines are executed in a dedicated worker thread.
- `ProcessPoolExecutor` - flexexecutor directly imports the built-in implementation as it
already has scaling capabilities.
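
Since flexexecutor's `ThreadPoolExecutor` keeps the interface of the built-in class,
switching over is usually just an import change. A minimal sketch of that idea (the
scaling-related parameters are covered in the Usage section below):

```python
# Built-in version:
# from concurrent.futures import ThreadPoolExecutor
# Drop-in replacement that also scales idle workers down:
from flexexecutor import ThreadPoolExecutor

with ThreadPoolExecutor() as executor:
    future = executor.submit(pow, 2, 10)
    print(future.result())  # prints 1024
```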

## Features

- Supports various concurrency modes: threads, processes and coroutines.
- Automatically shuts down idle workers to save resources.
- Single-file design keeps the code clean and easy for hackers to take away directly
  and extend with more features.

## Installation

Flexexecutor is available on [PyPI](https://pypi.org/project/flexexecutor/):

```shell
pip install flexexecutor
```

## Usage

### ThreadPoolExecutor

```python
from flexexecutor import ThreadPoolExecutor


def task(i):
    import time

    print(f"task {i} started")
    time.sleep(1)


if __name__ == "__main__":
    with ThreadPoolExecutor(
        # 1024 is the default value of max_workers. Since workers are closed when
        # they have been idle for a while, you can set it to a big value to get
        # better short-term performance.
        max_workers=1024,
        # Timeout for idle workers.
        idle_timeout=60.0,
        # These parameters are given for compatibility with the built-in
        # `ThreadPoolExecutor`, I don't use them very often, do you?
        thread_name_prefix="Task",
        initializer=None,
        initargs=(),
    ) as executor:
        for i in range(1024):
            executor.submit(task, i)
```
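
Because this class subclasses the standard executor, the usual helpers from
`concurrent.futures` should keep working. A small sketch, with a made-up `square`
task, for collecting results as they finish:

```python
from concurrent.futures import as_completed

from flexexecutor import ThreadPoolExecutor


def square(x):  # stand-in task used only for this sketch
    return x * x


with ThreadPoolExecutor(max_workers=8) as executor:
    futures = [executor.submit(square, i) for i in range(5)]
    for future in as_completed(futures):
        print(future.result())
```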

### AsyncPoolExecutor

```python
from flexexecutor import AsyncPoolExecutor


async def task(i):
    import asyncio

    print(f"task {i} started")
    await asyncio.sleep(1)


if __name__ == "__main__":
    # AsyncPoolExecutor behaves just like ThreadPoolExecutor except that it only
    # accepts coroutine functions.
    with AsyncPoolExecutor(
        # The default value of max_workers is huge; if you don't like it, set it
        # smaller.
        max_workers=1024,
        # Idle timeout for the worker thread.
        idle_timeout=60.0,
        # These parameters are given for compatibility with the built-in
        # `ThreadPoolExecutor`, I don't use them very often, do you?
        thread_name_prefix="Task",
        initializer=None,
        initargs=(),
    ) as executor:
        for i in range(1024):
            executor.submit(task, i)
```
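
As with any `Executor` subclass, `submit` is expected to return a regular
`concurrent.futures.Future`, so coroutine results can presumably be collected the
same way. A hedged sketch with a made-up `compute` coroutine:

```python
import asyncio

from flexexecutor import AsyncPoolExecutor


async def compute(x):  # made-up coroutine used only for this sketch
    await asyncio.sleep(0.1)
    return x * x


if __name__ == "__main__":
    with AsyncPoolExecutor(max_workers=16) as executor:
        # submit() is assumed to return a standard concurrent.futures.Future.
        futures = [executor.submit(compute, i) for i in range(4)]
        print([f.result() for f in futures])  # expected: [0, 1, 4, 9]
```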

### ProcessPoolExecutor

`ProcessPoolExecutor` in flexexecutor is exactly the built-in
`concurrent.futures.ProcessPoolExecutor`; it is simply re-exported for convenience.
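
For completeness, a minimal sketch using the familiar built-in interface, assuming
the class is re-exported from the flexexecutor module:

```python
from flexexecutor import ProcessPoolExecutor


def cube(x):  # stand-in task used only for this sketch
    return x**3


if __name__ == "__main__":
    # The __main__ guard matters for process pools on spawn-based platforms.
    with ProcessPoolExecutor(max_workers=4) as executor:
        print(list(executor.map(cube, range(5))))  # [0, 1, 8, 27, 64]
```
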
40 changes: 39 additions & 1 deletion flexexecutor.py
@@ -88,12 +88,31 @@ def _worker(executor_ref, work_queue, initializer, initargs, idle_timeout):
class ThreadPoolExecutor(_ThreadPoolExecutor):
    def __init__(
        self,
        max_workers=None,
        max_workers=1024,
        thread_name_prefix="",
        initializer=None,
        initargs=(),
        idle_timeout=60.0,
    ):
"""Initializes a new ThreadPoolExecutor instance.
:type max_workers: int, optional
:param max_workers: The maximum number of workers to create. Defaults to 1024.
:type thread_name_prefix: str, optional
:param thread_name_prefix: An optional name prefix to give our threads.
:type initializer: callable, optional
:param initializer: A callable used to initialize worker threads.
:type initargs: tuple, optional
:parm initargs: A tuple of arguments to pass to the initializer.
:type idle_timeout: float, optional
:param idle_timeout: The maximum amount of time (in seconds) that a worker
thread can remain idle before it is terminated. If set to None or negative
value, workers will never be terminated. Defaults to 60 seconds.
"""
if max_workers is None:
max_workers = 1024
super().__init__(max_workers, thread_name_prefix, initializer, initargs)
@@ -281,6 +300,25 @@ def __init__(
        initargs=(),
        idle_timeout=60.0,
    ):
        """Initializes a new AsyncPoolExecutor instance.

        :type max_workers: int, optional
        :param max_workers: The maximum number of workers to create. Defaults to
            262144.
        :type thread_name_prefix: str, optional
        :param thread_name_prefix: An optional name prefix to give our threads.
        :type initializer: callable, optional
        :param initializer: A callable used to initialize worker threads.
        :type initargs: tuple, optional
        :param initargs: A tuple of arguments to pass to the initializer.
        :type idle_timeout: float, optional
        :param idle_timeout: The maximum amount of time (in seconds) that a worker
            thread can remain idle before it is terminated. If set to None or a
            negative value, workers will never be terminated. Defaults to 60 seconds.
        """
        if max_workers is None:
            max_workers = 262144
        if not thread_name_prefix:
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -28,7 +28,7 @@ dynamic = ["version"]

[project.urls]
homepage = "https://github.com/leavers/flexexecutor"
# documentation = "URL of documentation"
documentation = "https://github.com/leavers/flexexecutor/blob/main/README.md"
# changelog = "URL of changelog"

[project.optional-dependencies]
