Unexpected order of tests using parameterized fixtures #2846
Comments
It definitely seems like a bug, thanks for the report. Our parametrization code has some long-standing bugs; it would be nice for us to focus on that part of the code when we get the chance. |
Have you got any pointers on where to start looking? |
I would start debugging here: lines 731 to 732 in 5631a86, and here: lines 166 to 192 in 5631a86. |
Hi, from preliminary research it seems like test order is determined by the reordering code in `_pytest/fixtures.py`. If the comments in that code hold true, the desired order of tests that use parametrized fixtures with different scopes is simply grouped by scope, from the highest (session) down to the lowest (function). I am asking because the current solution is not easy to grasp and seems somewhat overly complicated if we just want to have tests ordered by their scope. @nicoddemus Any clarification on the subject would be more than welcome. |
@kchomski-reef thanks for the interest. You are half right: the purpose is to reorder test items within a scope to minimize fixture setup/teardown when those fixtures are parametrized. For example, consider this session fixture:

```python
@pytest.fixture(scope='session', params=[1, 2])
def session_fix(request):
    # let's pretend this actually does some heavyweight setup/teardown
    return request.param
```

Now consider a test file along these lines (a minimal sketch, assuming two tests `test_1` and `test_2` that both request `session_fix`):
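```python
def test_1(session_fix):
    pass

def test_2(session_fix):
    pass
```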
If we don't do any reordering, the tests would execute in this order:
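(using the hypothetical test IDs from the sketch above)

```
test_1[1]
test_1[2]
test_2[1]
test_2[2]
```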
IOW, this would cause `session_fix` to be created and destroyed on every parameter switch: four setups/teardowns instead of the two that are actually needed. A key fact to understand all this, and one that is not clear in the documentation, is that at any given time only a single instance of `session_fix` is active; pytest never keeps the instances for both parameters alive at once. The reorder algorithm thus tries to reorder the tests to minimize fixture setup/teardown:
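(again with the hypothetical test IDs)

```
test_1[1]
test_2[1]
test_1[2]
test_2[2]
```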
This order now allows us to create `session_fix` with param `1`, run both tests against it, then tear it down and create it once with param `2` for the remaining tests. The same is valid for the other scopes besides function. Let me know if something in my explanation is not clear. |
From looking at the code for the first time, you could try this hunch of mine: the source of the non-determinism you saw could be the iteration over plain dicts, whose order is not guaranteed (line 206 in 5631a86). Alternatively, you could replace this `{}` (which becomes the argkeys above, AFAICT) with an `OrderedDict` (line 169 in 5631a86). |
OK, I just put my money where my mouth is and tested this myself - it doesn't work. 😞 |
@nicoddemus Thank you for such a detailed explanation. |
Sounds good, thanks! |
@nicoddemus: After reading your explanation, looking more into the code, and checking OP's examples again, there doesn't seem to be a bug at all. Taking into account what you wrote (the reordering exists to minimize fixture setup/teardown within a scope, and only a single instance of a parametrized fixture is active at any given time), the 2nd example posted by OP seems to be perfectly valid: the order seems random at first, but on closer look it can be seen that the order of tests is optimal to avoid unnecessary fixture setups and teardowns:
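For example (illustrative only; the exact IDs depend on the fixtures involved), with two module-scoped fixtures `f1` with params `(1, 2)` and `f2` with params `('a', 'b')`, an optimal order snakes through the parameters so the active `f2` instance is reused across the `f1` switch:

```
test[1-a]
test[1-b]
test[2-b]
test[2-a]
```

This needs `f2` to be set up only three times (`a`, `b`, `a`), whereas the "natural" product order `a, b, a, b` would set it up four times.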
If we would like to have the same order for the module-scoped fixtures as we get with function scope, then it's not optimal, because some fixtures would have to be torn down and set up again each time a parameter value recurs. |
Ha, awesome reasoning! Didn't see the forest for the trees! |
@kchomski-reef indeed your reasoning seems correct to me, thanks for taking a look into this. @rbierbasz-gog what do you think? |
@kchomski-reef thanks for the great insight. I haven't checked the code yet. One way to get a predictable order is to parametrize the test explicitly with the product of the fixtures' params:

```python
import itertools

import pytest

f1_params = (1, 2, 3)

@pytest.fixture(scope="module", params=f1_params)
def f1(request):
    pass

f2_params = ('a', 'b', 'c')

@pytest.fixture(scope="module", params=f2_params)
def f2(request):
    pass

@pytest.mark.parametrize('f1,f2', itertools.product(f1_params, f2_params), indirect=True)
def test(f1, f2):
    pass
```

But it breaks fixtures' scopes (#570). The only working solution for me right now is moving one parametrized fixture to a higher scope:

```python
import pytest

@pytest.fixture(scope="session", params=(1, 2, 3))
def f1_meta(request):
    return request.param

@pytest.fixture(scope="module")
def f1(f1_meta):
    pass

@pytest.fixture(scope="module", params=('a', 'b', 'c'))
def f2(request):
    pass

def test(f1, f2):
    pass
```

But it seems hacky and it doesn't scale. |
@rbierbasz-gog I totally agree. I came to the same conclusion while investigating this issue. It would be really nice to have a way to indicate the importance/weight of a fixture and reorder tests based on that importance, something like an additional keyword argument with a default value. @nicoddemus do you think this is a possible feature candidate? |
I think we can cook up a hook that allows users to completely customize fixture ordering, we just have to think of a good interface for that. Anybody up for opening an issue with an initial proposal for discussion? |
Setting aside a new hook for the moment, the existing `pytest_collection_modifyitems` hook (placed in `conftest.py`) can be used to restore the generation order:

```python
def pytest_collection_modifyitems(session, config, items):
    items.sort(key=lambda x: x._genid)
```

It's still hacky (accessing a private attribute of the Item object) 😄 |
In any case I think this issue can be closed. |
Closing then.
That invitation still stands though! We would need a few use cases to come up with a hook that is useful for those cases. |
@nicoddemus @kchomski-reef, all, I seem to have a use case where I want to control how tests get created with parametrized fixtures. The main reason I need this feature is that my test results depend on previous test results: at times I need to save a previous test's result and later compare against it, so I want to sequence the tests in a predictable order, as originally described in this issue, even when the fixture scopes are at class/module level. OP's 1st example results: test_order.py::test[1-a] PASSED … |
Below is a slightly more concrete scenario showing why my tests need to run in a particular sequence:

```python
import pytest

@pytest.fixture(scope="module", autouse=True)
def listener(request):
    # the Listener object is capable of listening (an audio-capture helper
    # class, assumed to exist elsewhere in the suite)
    listener_obj = Listener()
    request.module.listener = listener_obj
    return listener_obj

@pytest.fixture(scope="module", params=[110, 220], autouse=True)
def setInputVoltage(request):
    # set input voltage
    return request.param

@pytest.fixture(scope="class", params=[100, 200, 500], autouse=True)
def setInputFreq(request):
    # set input frequency
    return request.param

@pytest.fixture(scope="class", params=[10, 20, 30, 40])
def setVolume(request):
    # set volume
    return request.param

@pytest.fixture(scope="class")
def listenAudio(request, listener):
    # measures the output of all channels as a list
    listener.listen()
    def fin():
        listener.store()
    request.addfinalizer(fin)

@pytest.fixture(scope="function", params=[1, 2, 3])
def varChannel(request):
    # declared as a fixture to create multiple tests, one per channel
    return request.param

def test_amplifierIsOn():
    pass

@pytest.mark.usefixtures("listenAudio", "setVolume")
class Test_AudioLevel:
    def test_outputFreq(self, setInputFreq):
        pass  # assert on the measured output frequency here

    def test_volumeLevel(self, setVolume, varChannel, listener):
        if listener.storedData:
            assert listener.currentdata[varChannel] > listener.storedData[varChannel]
```
|
Totally agree with @rbierbasz-gog. The equal-weighted cost of setup/teardown for all fixtures is not a pragmatic assumption; users would like to customize the ordering if they really care about performance. Could we have one more parameter, e.g. priority (between 0 and 100), for users to customize the cost of setting up/tearing down a fixture? The expected order would then keep the most expensive (highest-priority) fixtures active for as long as possible; see the sketch after this comment.
|
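A sketch of what such an API could look like. Note that the `priority` keyword (and these fixture names) are hypothetical; `pytest.fixture()` accepts no such argument today:

```python
import pytest

# NOTE: `priority` is a hypothetical keyword illustrating the proposal;
# pytest.fixture() does not accept it.
@pytest.fixture(scope="module", params=[110, 220], priority=100)
def voltage(request):
    # expensive to switch: tests should be grouped so this changes as rarely as possible
    return request.param

@pytest.fixture(scope="module", params=[100, 200], priority=10)
def frequency(request):
    # cheap to switch: free to change between consecutive tests
    return request.param

def test(voltage, frequency):
    pass
```

Under such weights the reorderer would run all `frequency` values for `voltage=110` first, then all of them again for `voltage=220`.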
@satishmovvar and @gavincyi perhaps we should create a new issue? |
Good point, better to open a new issue rather than getting stuck in a closed ticket. #3393 created. |
Consider the following code:
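(a minimal sketch consistent with the test IDs below: two parametrized fixtures used by one test)

```python
import pytest

# hypothetical reconstruction of the reported setup
@pytest.fixture(params=[1, 2])
def f1(request):
    return request.param

@pytest.fixture(params=['a', 'b'])
def f2(request):
    return request.param

def test(f1, f2):
    pass
```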
The order of the tests meets my expectations:
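With the sketch above, something like:

```
test[1-a]
test[1-b]
test[2-a]
test[2-b]
```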
But when I changed the fixtures' scope from `function` to `module`, it seems to get random. The same happens for the `class` and `session` scopes. It leads to fixtures being set up and torn down more times than necessary.