AC - enabled changing the outcome name to work with other plugins (#48)
* AC - added the changes needed to be able to modify the outcome of the report

* AC - removed unnecessary import and adjusted to match the style of the ini config
andrew-cleveland authored Jan 10, 2025
1 parent bb465ff commit 9d11366
Showing 3 changed files with 56 additions and 31 deletions.
69 changes: 41 additions & 28 deletions README.md
@@ -1,17 +1,19 @@
![Tests](https://github.com/str0zzapreti/pytest-retry/actions/workflows/tests.yaml/badge.svg)

# pytest-retry

pytest-retry is a plugin for Pytest which adds the ability to retry flaky tests,
thereby improving the consistency of the test suite results.

## Requirements

pytest-retry is designed for the latest versions of Python and Pytest. Python 3.9+
and pytest 7.0.0+ are required.

## Installation

Use pip to install pytest-retry:

```bash
$ pip install pytest-retry
```
@@ -23,7 +25,7 @@ There are two main ways to use pytest-retry:
### 1. Global settings

Once installed, pytest-retry adds new command line and ini config options for pytest.
Run Pytest with the command line argument --retries in order to retry every test in
the event of a failure. The following example will retry each failed test up to two
times before proceeding to the next test:
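
```bash
$ python -m pytest --retries 2
```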

@@ -41,13 +43,15 @@ $ python -m pytest --retries 2 --retry-delay 5
```

#### Advanced Options:

There are two custom hooks provided for the purpose of setting global exception
filters for your entire Pytest suite. `pytest_set_filtered_exceptions`
and `pytest_set_excluded_exceptions`. You can define either of them in your
conftest.py file and return a list of exception types. Note: these hooks are
mutually exclusive and cannot both be defined at the same time.

Example:

```py
def pytest_set_excluded_exceptions():
"""
@@ -57,37 +61,46 @@ def pytest_set_excluded_exceptions():
```
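
The filtered variant is defined the same way. A minimal sketch, with illustrative
exception types (only tests failing with one of the returned types would be retried):

```py
def pytest_set_filtered_exceptions():
    """
    Only tests that fail with one of these exception types will be retried.
    """
    return [ConnectionError, TimeoutError]
```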

There is a command line option to specify the test timing method, which can either
be `overwrite` (default) or `cumulative`. With cumulative timing, the duration of
each test attempt is summed for the reported overall test duration. The default
behavior simply reports the timing of the final attempt.

```bash
$ python -m pytest --retries 2 --cumulative-timing 1 --retry-outcome rerun
```

If you're not sure which to use, stick with the default `overwrite` method. This
generally plays nicer with time-based test splitting algorithms and will result in
more even splits.

There is an option to define the outcome recorded in the report for retried tests.
This matters when combining pytest-retry with tools that expect report outcomes to
match a fixed set of values; pytest-html, for example, expects retried tests to be
reported as "rerun". You can set the outcome either by command line argument or in
the config files described below.
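
For example, a sketch of combining the two plugins (the `--html` flag belongs to
pytest-html, which is assumed to be installed):

```bash
$ python -m pytest --retries 2 --retry-outcome rerun --html=report.html
```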

Instead of command line arguments, you can set any of these config options in your
pytest.ini, tox.ini, or pyproject.toml file. Any command line arguments will take
precedence over options specified in one of these config files. Here are some
sample configs that you can copy into your project to get started:

_pyproject.toml_

```toml
[tool.pytest.ini_options]
retries = 2
retry_delay = 0.5
cumulative_timing = false
retry_outcome = "rerun"
```

_config.ini/tox.ini_

```ini
[pytest]
retries = 2
retry_delay = 0.5
cumulative_timing = false
retry_outcome = rerun
```
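
With either file in place, command line arguments still take precedence; the
following run performs five retries despite `retries = 2` in the config:

```bash
$ python -m pytest --retries 5
```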

### 2. Pytest flaky mark
@@ -115,7 +128,7 @@ def test_unreliable_service():

If you want to control filtered or excluded exceptions per-test, the flaky mark
provides the `only_on` and `exclude` arguments which both take a list of exception
types, including any custom types you may have defined for your project. Note that
only one of these arguments may be used at a time.
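
A minimal sketch of each (the exception types and retry counts are illustrative):

```py
import pytest

@pytest.mark.flaky(retries=2, only_on=[ConnectionError])
def test_retried_only_on_connection_errors():
    ...

@pytest.mark.flaky(retries=2, exclude=[AssertionError])
def test_retried_on_anything_but_assertion_errors():
    ...
```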

A test with a list of `only_on` exceptions will only be retried if it fails with
@@ -147,7 +160,7 @@ def test_only_flaky_on_some_systems():
```

Finally, there is a flaky mark argument for the test timing method, which can either
be `overwrite` (default) or `cumulative`. See **Command Line** > **Advanced Options**
for more information.

```py
@@ -161,24 +174,24 @@ specified when running Pytest.

### Things to consider

- **Currently, failing test fixtures are not retried.** In the future, flaky test setup
may be retried, although given the undesirability of flaky tests in general, flaky setup
should be avoided at all costs. Any failures during teardown will halt further
attempts so that they can be addressed immediately. Make sure your teardowns always
work reliably regardless of the number of retries when using this plugin.

- When a flaky test is retried, the plugin runs teardown steps for the test as if it
had passed. This is to ensure that any partial state created by the test is cleaned up
before the next attempt so that subsequent attempts do not conflict with one another.
Class and module fixtures are included in this teardown with the assumption that false
test failures should be a rare occurrence and the performance hit from re-running
these potentially expensive fixtures is worth it to ensure clean initial test state.
With feedback, the option to not re-run class and module fixtures may be added, but
in general, these types of fixtures should be avoided for known flaky tests.

- Flaky tests are not sustainable. This plugin is designed as an easy short-term
solution while a permanent fix is implemented. Use the reports generated by this plugin
to identify issues with the tests or testing environment and resolve them.

## Reporting

@@ -187,7 +200,7 @@ update the reports as required. When a test is retried at least once, an R is printed
to the live test output and the counter of retried tests is incremented by 1. After
the test session has completed, an additional report is generated below the standard
output which lists all of the tests which were retried, along with the exceptions
that occurred during each failed attempt.

```
plugins: retry-1.1.0
@@ -217,6 +230,6 @@ retried and failed. Skipped, xfailed, and xpassed tests are never retried.
Three pytest stash keys are available to import from the pytest_retry plugin:
`attempts_key`, `outcome_key`, and `duration_key`. These keys are used by the plugin
to store the number of attempts each item has undergone, whether the test passed or
failed, and the total duration from setup to teardown, respectively. (If any stage of
setup, call, or teardown fails, a test is considered failed overall). These stash keys
can be used to retrieve these reports for use in your own hooks or plugins.
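
For instance, a minimal conftest.py sketch (the hook choice and printed format are
illustrative assumptions; only the key names come from the plugin):

```py
import pytest

from pytest_retry import attempts_key, duration_key, outcome_key


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    # Let the test (and any retries) finish before reading the stash
    yield
    attempts = item.stash.get(attempts_key, 0)
    if attempts > 1:
        outcome = item.stash.get(outcome_key, None)
        duration = item.stash.get(duration_key, None)
        print(f"{item.name}: attempts={attempts}, outcome={outcome}, duration={duration}")
```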
3 changes: 3 additions & 0 deletions pytest_retry/configs.py
@@ -4,6 +4,7 @@
RETRIES = "RETRIES"
RETRY_DELAY = "RETRY_DELAY"
CUMULATIVE_TIMING = "CUMULATIVE_TIMING"
RETRY_OUTCOME = "RETRY_OUTCOME"


class UnknownDefaultError(Exception):
@@ -15,6 +16,7 @@ class _Defaults:
        RETRIES: 1,  # A flaky mark with 0 args should default to 1 retry.
        RETRY_DELAY: 0,
        CUMULATIVE_TIMING: False,
        RETRY_OUTCOME: "retried",  # The string to use for retry outcomes
    }

    def __init__(self) -> None:
@@ -41,6 +43,7 @@ def load_ini(self, config: pytest.Config) -> None:
        self._opts[RETRIES] = int(config.getini(RETRIES.lower()))
        self._opts[RETRY_DELAY] = float(config.getini(RETRY_DELAY.lower()))
        self._opts[CUMULATIVE_TIMING] = config.getini(CUMULATIVE_TIMING.lower())
        self._opts[RETRY_OUTCOME] = config.getini(RETRY_OUTCOME.lower())

    def configure(self, config: pytest.Config) -> None:
        if config.getini("retries"):
15 changes: 12 additions & 3 deletions pytest_retry/retry_plugin.py
@@ -226,7 +226,7 @@ def pytest_runtest_makereport(

    # If teardown passes, send report that the test is being retried
    if attempts == 1:
        original_report.outcome = Defaults.RETRY_OUTCOME  # type: ignore
        hook.pytest_runtest_logreport(report=original_report)
        original_report.outcome = "failed"
    retry_manager.log_attempt(attempt=attempts, name=item.name, exc=call.excinfo, result=RETRY)
@@ -277,8 +277,8 @@ def pytest_terminal_summary(terminalreporter: TerminalReporter) -> None:
def pytest_report_teststatus(
    report: pytest.TestReport,
) -> Optional[tuple[str, str, tuple[str, dict]]]:
    if report.outcome == Defaults.RETRY_OUTCOME:
        return Defaults.RETRY_OUTCOME, "R", ("RETRY", {"yellow": True})
    return None


@@ -317,6 +317,7 @@ def pytest_configure(config: pytest.Config) -> None:
RETRIES_HELP_TEXT = "number of times to retry failed tests. Defaults to 0."
DELAY_HELP_TEXT = "configure a delay (in seconds) between retries."
TIMING_HELP_TEXT = "if True, retry duration will be included in overall reported test duration"
RETRY_HELP_TEXT = "configure the outcome of retried tests. Defaults to 'retried'"


def pytest_addoption(parser: pytest.Parser) -> None:
@@ -344,9 +345,17 @@ def pytest_addoption(parser: pytest.Parser) -> None:
        type=bool,
        help=TIMING_HELP_TEXT,
    )
    group.addoption(
        "--retry-outcome",
        action="store",
        dest="retry_outcome",
        type=str,
        help=RETRY_HELP_TEXT,
    )
    parser.addini("retries", RETRIES_HELP_TEXT, default=0, type="string")
    parser.addini("retry_delay", DELAY_HELP_TEXT, default=0, type="string")
    parser.addini("cumulative_timing", TIMING_HELP_TEXT, default=False, type="bool")
    parser.addini("retry_outcome", RETRY_HELP_TEXT, default="retried")


def pytest_addhooks(pluginmanager: pytest.PytestPluginManager) -> None:
