
Unable to use random in pytest.mark.parametrize with xdist and randomly #75

Closed
p-himik opened this issue Sep 15, 2017 · 9 comments


p-himik commented Sep 15, 2017

Packages:

pytest-3.2.2
xdist-1.20.0
randomly-1.2.1

Example code:

import pytest
import random


def gen_param():
    a = random.random()
    b = random.random()
    c = a + b
    return a, b, c



@pytest.mark.parametrize('a,b,c', [gen_param() for _ in range(10)])
def test_sum(a, b, c):
    assert a + b == c

Example result:

Different tests were collected between gw1 and gw0. The difference is:
--- gw1

+++ gw0

@@ -1,10 +1,10 @@

-test_it.py::test_sum[0.21119735007187512-0.03478699051186407-0.2459843405837392]
-test_it.py::test_sum[0.19989965451085068-0.21530345609429247-0.41520311060514314]
-test_it.py::test_sum[0.5682066547612487-0.7243829926261657-1.2925896473874143]
-test_it.py::test_sum[0.5138857769400398-0.9866435513079722-1.500529328248012]
-test_it.py::test_sum[0.32391650283278506-0.39646296915151646-0.7203794719843015]
-test_it.py::test_sum[0.9573539653252039-0.46631807929040026-1.4236720446156041]
-test_it.py::test_sum[0.18758435224247982-0.4081118220534776-0.5956961742959574]
-test_it.py::test_sum[0.8300722136940875-0.24370118062201607-1.0737733943161034]
-test_it.py::test_sum[0.45416992471686735-0.5539633757267955-1.0081333004436628]
-test_it.py::test_sum[0.6404127883887936-0.07517291369462298-0.7155857020834165]
+test_it.py::test_sum[0.4235467615256703-0.6336556280381637-1.0572023895638338]
+test_it.py::test_sum[0.08598091323183876-0.9197414141632071-1.0057223273950457]
+test_it.py::test_sum[0.6499835837722387-0.08942031974171283-0.7394039035139516]
+test_it.py::test_sum[0.5982265644051936-0.4014341639946195-0.9996607283998131]
+test_it.py::test_sum[0.6108773740309141-0.39536962117174335-1.0062469952026576]
+test_it.py::test_sum[0.13520942528376823-0.36746285760417974-0.502672282887948]
+test_it.py::test_sum[0.8469134601088156-0.34936702626625926-1.196280486375075]
+test_it.py::test_sum[0.5828050759610505-0.028386017512678552-0.611191093473729]
+test_it.py::test_sum[0.1425962119341786-0.5579729193825124-0.700569131316691]
+test_it.py::test_sum[0.6183292075112786-0.5376259380555282-1.1559551455668067]

From what I could gather, it's possible to fix it simply by adding

def pytest_configure(config):
    _reseed(config)

to pytest_randomly.py. But I've never written a pytest plugin and have only read excerpts from the documentation, so I may be wrong.
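
For illustration, here is a minimal sketch of the same idea as a user-side workaround in a local conftest.py (not part of pytest-randomly; the seed value 1234 is just a placeholder). Since pytest_configure runs before collection in every xdist worker, each worker should then generate identical parametrize values when the test module is imported:

# conftest.py - hedged sketch of the workaround idea described above;
# it may interact with pytest-randomly's own reseeding.
import random

def pytest_configure(config):
    # Seed before collection so all xdist workers produce the same
    # import-time random values. Any fixed value works here.
    random.seed(1234)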

@adamchainz
Member

Ah, I see, this is because you're randomizing things at import time, outside of pytest's cycle. I'm not sure your suggestion will work in all cases, because it's probably possible for tests to be imported before the configure step even occurs 🤔

Your use case also seems a bit weird. If you want random data, why not randomize it during the test? And if you want parametrized tests, why not fix the parameters?


p-himik commented Sep 15, 2017

It probably won't cover all cases, yes, but I've never seen a case where some tests are imported from other files that are not gathered directly by pytest.

Randomizing data during the test doesn't show the parameters in pytest's output, and I don't want to put excessive logging into each test case that uses random data. Also, I think that generating input data shouldn't be a test's concern where that's avoidable.
Fixed parameters make tests biased when there are many possible combinations: I can easily miss corner cases that randomized input could catch, even if it probably wouldn't catch them right away.

@adamchainz
Member

The point of pytest-randomly is that you can rerun with the same seed and reproduce the failing case. You don't need to log the randomly selected values up front; if a test fails, just rerun with the same seed and e.g. --pdb to dive into it. I initially built this plugin for use with Factory Boy, where the amount of random data is huge - many Django models with many fields - so up-front logging is impossible.

Also, by randomizing your test cases in this way you're preventing some pytest features from working, such as --last-failed and --failed-first (https://docs.pytest.org/en/2.9.1/cache.html).

I seriously recommend just doing:

import random

def test_foo():
    # generate the random data inside the test body instead of at import time
    a = random.random()
    b = random.random()
    c = a + b
    assert a + b == c

That said, I will consider adding the reseed in pytest_configure, since it will help in other cases too, e.g. other plugins that do random value generation at startup.


p-himik commented Sep 15, 2017

Could you please clarify regarding --last-failed and the like? By "randomizing your test cases in this way", do you mean your way? Because the examples at the provided link use random inside pytest.mark.parametrize, and --lf works.
If that's indeed the case, that's more points towards my approach. :)

@adamchainz
Member

They don't use random, they use range - every import of the test file contains the same test items. In your case, every import yields different test items, so pytest sees them as new tests, and can't 'rerun' them.
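
For illustration (this is not from the thread, just a sketch of the distinction being made): seeding a module-level Random instance keeps random-looking parameters while making every import of the file produce identical test items; the seed 1234 is an arbitrary placeholder.

import random

import pytest

# A fixed-seed RNG local to this module: every import generates the exact same
# parameter tuples, so collection matches across xdist workers and --lf can
# match the cached test IDs on a rerun.
_rng = random.Random(1234)


def gen_param():
    a = _rng.random()
    b = _rng.random()
    return a, b, a + b


@pytest.mark.parametrize('a,b,c', [gen_param() for _ in range(10)])
def test_sum(a, b, c):
    assert a + b == c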


p-himik commented Sep 15, 2017

Oh, I'm blind, sorry.

@adamchainz
Member

Closing; as I said, I don't think pytest-randomly should be catering for this kind of randomness.

@OfirAharon

There is real value in running with random values.
Let's say you run on a machine with long runtimes and you can't run 3000 iterations, but you want to cover as many permutations as possible over snaps, for regression purposes. Random values can give your tests flexibility and coverage over time.
As for reproducing failures, you just use the log, see what failed, and debug whatever you want.

I've just run into this issue myself.

@adamchainz
Member

@OfirAharon can your tests not be altered to randomize within the tests, rather than at import time, as above? Please open a new issue if not, and explain with code samples. This issue is nearly 2 years old.
