silently skip a test #3730
See also #3844. |
[This is copied from a comment in #3844, which is now closed for unrelated reasons; I would suggest the syntax …] This would be extremely useful to me in several scenarios. Pytest makes it easy (especially through parametrization and parametrized fixtures) to test a cartesian product of parameter combinations. It's slightly less easy (not least because fixtures can't be reused as parameters) to reduce that cartesian product where necessary. Often, certain combinations simply do not make sense (for what's being tested), and currently one can only really skip or xfail them. But skips and xfails get output to the log (and for good reason: they should command attention and eventually be fixed), so it's a simple consideration that one does not want to pollute the results by skipping invalid parameter combinations. For this task, something like an "ignore" outcome would be the perfect tool. |
at a work project we have the concept of "uncollect" for tests, which can take a look at fixture values to conditionally remove tests from the collection at modifyitems time |
What's so bad about "lying" if it's in the interest of reporting/logging, and directly requested by the user?
"At work" sounds like "not in pytest (yet)". Can you elaborate how that works? |
@h-vetinari The lies have a complexity cost: a collect-time "ignore" doesn't have to mess around with the rest of the reporting, it can just take the test out cleanly, whereas an outcome-level ignore needs a new special case in all report/outcome handling, and in some cases can't be handled correctly to begin with. For example, what's the correct way to report an ignore triggered in the teardown of a failed test? It's simply insane. As for the work project, it pretty much just runs at modifyitems time and partitions based on a marker and conditionals that take the params. |
@h-vetinari |
@blueyed Is there a way to add a hook that modifies the collection directly at the test itself, without changing global behaviour? |
@RonnyPfannschmidt
You're right that skips only make sense at the beginning of a test - I would not have used it anywhere else in a test (e.g. after something that can fail), but I can see the problem from an API design perspective. This is then getting closer again to the question I just asked @blueyed, of having a per-test post-collection (or rather pre-execution) hook to uncollect some tests. The essential part is that I need to be able to inspect the actual value of the parametrized fixtures per test, to be able to decide whether to ignore or not.
This sounds great (if the params are the fixtures), but I'd need this on a per-test basis (maybe as a decorator that takes a function of the same signature as the test?). |
@h-vetinari An extracted solution of what I did at work would have 2 components: a) a hook to determine the namespace/kwargs for marker conditionals; … CC @psav |
@RonnyPfannschmidt @h-vetinari This has been stale for a while, but I've just come across the same issue - how do you 'silently unselect' a test? Are there any new solutions or propositions? If you have a large highly-dimensional parametrize-grid, this is needed quite often so you don't run (or even collect) the tests whose parameters don't make sense. |
@aldanor |
The only way to completely "unselect" is to not generate; the next best thing is to deselect at collect time. |
As @h-vetinari pointed out, sometimes "not generate" is not really an option, e.g. you have a highly-dimensional grid of parameters and you need to unselect just a few that don't make sense, and you don't want them to show up as "skipped", because they weren't really skipped. So, as noted above, perhaps a dedicated way to unselect them is needed. |
@aldanor

```python
@pytest.mark.parametrize(
    'test_name',
    [parameters for parameters in existing_parameter_generation() if make_sense(parameters)],
)
```

and that way not generate them. |
@Tadaboody's suggestion is on point I believe. You can always preprocess the parameter list yourself and deselect the parameters as appropriate. @aldanor @h-vetinari @notestaff |
Not always, as I've mentioned above. For instance:

```python
@pytest.mark.parametrize('x1', list(range(10)), ids='x1={}'.format)
@pytest.mark.parametrize('x2', list(range(10)), ids='x2={}'.format)
@pytest.mark.parametrize('x3', list(range(10)), ids='x3={}'.format)
@pytest.mark.parametrize('x4', list(range(10)), ids='x4={}'.format)
def test_foo(x1, x2, x3, x4):
    # we want to silently ignore some of these 10k points
    ...
```

Yes, you could argue that you could rewrite the above using a single list comprehension, but then you have to rewrite the ID formatting, the whole thing becomes more ugly and less flexible to extend, and your parameter generation is now mixed up with deselection logic.
Even if in the example above it would have sort of worked as a hack, it fails completely once you have reusable parametrization:

```python
@pytest.fixture(params=list(range(10)), ids='x1={}'.format)
def x1(request):
    return request.param

@pytest.fixture(params=list(range(10)), ids='x2={}'.format)
def x2(request):
    return request.param

@pytest.fixture(params=list(range(10)), ids='x3={}'.format)
def x3(request):
    return request.param

@pytest.fixture(params=list(range(10)), ids='x4={}'.format)
def x4(request):
    return request.param

def test_123(x1, x2, x3):
    # we want to deselect some of these 1k points
    ...

def test_234(x2, x3, x4):
    # we want to ignore some of these 1k points too
    ...
```
|
@aldanor thanks for the example. What do you suggest the API for skipping tests be? |
How about something like

```python
@pytest.fixture.unselectif(function_of_x1_to_x3)
def test_123(x1, x2, x3):
    # we want to deselect some of these 1k points
    ...
```

resp.

```python
@pytest.fixture.unselectif(function_of_x1_to_x4)
@pytest.mark.parametrize('x1', list(range(10)), ids='x1={}'.format)
@pytest.mark.parametrize('x2', list(range(10)), ids='x2={}'.format)
@pytest.mark.parametrize('x3', list(range(10)), ids='x3={}'.format)
@pytest.mark.parametrize('x4', list(range(10)), ids='x4={}'.format)
def test_foo(x1, x2, x3, x4):
    # we want to silently ignore some of these 10k points
    ...
```

Note: the name is just an example, and obviously completely up for bikeshedding. After pressing "comment" I immediately thought it should rather be … |
IIUC how pytest works, once you've entered the test function body, it's already too late. So there's not a whole lot of choices really; it probably has to be something like

```python
@pytest.mark.select(lambda x: x[0] > x[1])
def test_123(x1, x2, x3):
    ...
```
|
@h-vetinari @aldanor it seems what you want is already possible with a custom hook and mark:

```python
# contents of conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "uncollect_if(*, func): function to unselect tests from parametrization"
    )

def pytest_collection_modifyitems(config, items):
    removed = []
    kept = []
    for item in items:
        m = item.get_closest_marker('uncollect_if')
        if m:
            func = m.kwargs['func']
            if func(**item.callspec.params):
                removed.append(item)
                continue
        kept.append(item)
    if removed:
        config.hook.pytest_deselected(items=removed)
        items[:] = kept
```

```python
# contents of test_foo.py
import pytest

def uncollect_if(x, y, z):
    if x > 5:
        return True

@pytest.mark.uncollect_if(func=uncollect_if)
@pytest.mark.parametrize('x', range(10))
@pytest.mark.parametrize('y', range(10, 100, 10))
@pytest.mark.parametrize('z', range(1000, 1500, 100))
def test_foo(x, y, z, tmpdir):
    pass
```

This produces a run where the matching parametrizations are reported as deselected rather than skipped.
This can also easily be turned into a plugin. |
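For what it's worth, a minimal sketch of what "turned into a plugin" could look like, assuming the two hooks above are moved from conftest.py into their own module; the project and module names (pytest-uncollect-if / pytest_uncollect_if) are invented here for illustration, not an existing package:

```python
# setup.py -- hypothetical packaging of the hooks shown above as an installable plugin.
from setuptools import setup

setup(
    name="pytest-uncollect-if",          # invented name
    version="0.1.0",
    py_modules=["pytest_uncollect_if"],  # module containing the two hooks above
    # the "pytest11" entry point group is how pytest discovers installed plugins
    entry_points={"pytest11": ["uncollect_if = pytest_uncollect_if"]},
    install_requires=["pytest"],
)
```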
@nicoddemus |
A built-in feature is not out of the question, but I'm personally not very keen on adding this to the core myself, as it is a very uncommon use case and a plugin does the job nicely. Note also that if a project for some reason doesn't want to add a plugin as a dependency (and a test dependency at that, so I think it should not have as much friction as adding a new dependency to the project itself), it can always copy that code into its conftest.py. We can definitely, though, add the example above to the official documentation as an example of customization. I would be happy to review/merge a PR to that effect. 😁 |
@nicoddemus: It would be convenient if the metafunc.parametrize function simply did not generate the test when given an empty argvalues list. Currently, though, if metafunc.parametrize(..., argvalues=[], ...) is called in the pytest_generate_tests(metafunc) hook, it will mark the test as ignored. |
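For illustration, a minimal sketch of the situation being described (supported_modes() is a hypothetical helper, not something from this thread): when the argvalues list ends up empty, pytest still collects the test and applies a mark to it (skip by default; configurable via the empty_parameter_set_mark ini option) rather than silently dropping it.

```python
# conftest.py -- sketch of an empty parameter set produced in pytest_generate_tests.
def supported_modes():
    # imagine this is computed at collection time and happens to be empty here
    return []

def pytest_generate_tests(metafunc):
    if "mode" in metafunc.fixturenames:
        # with an empty list, the test is still collected and reported
        # (as skipped by default) rather than being silently omitted
        metafunc.parametrize("mode", supported_modes())
```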
Hi @SaturnIC, Not sure, users might generate an empty parameter set without realizing it (a bug in a function which produces the parameters for example), which would then make pytest simply not report anything regarding that test, as if it didn't exist; this will probably generate some confusion until the user can figure out the problem. I think adding a mark like we do today is a good trade-off: at least the user will have some feedback about the skipped test, and see the reason in the summary. |
We don't mark a test ignored, we mark it skip or xfail with a given reason. |
Hi @nicoddemus, thanks for the fast reply. Is there another good reason why an empty argvalues list should mark the test as skipped (thanks @RonnyPfannschmidt) instead of not running it at all? |
Offhand I am not aware of any good reason to ignore instead of skip/xfail. |
Some good reasons (I'm biased, I'll admit) have come up in this very thread. In short: having lots of parametrisation, but not polluting the (skip) log with tests that represent impossible parameter combinations. |
No need to apologise, I should have been more specific. There are good reasons to deselect impossible combinations; this should be done as a deselect at modifyitems time. The skip/xfail for an actually empty matrix still seems absolutely necessary for parametrization. The missing capability of fixtures at modifyitems time makes this unnecessarily hard. |
No need to apologise. :) Would just be happy to see this resolved eventually, but I understand that it's a gnarly problem. |
Another approach, one that might also give some additional functionality, might be: …
|
@xeor That proposal looks alien to me, and without a description of the semantics there's a lot of room for misunderstanding. |
@RonnyPfannschmidt Why though? The empty matrix implies there is no test, thus also nothing to ignore?
It's typically a user error at parameterization, thus a required indication |
Isn't a skipped test a bad kind of warning, given that those two are very different things? Warnings could be sent out using the Python logger. If one uses the same test harness for different test runs, … |
What some of us do deliberately, many more of us do accidentally; silent omissions are a much more severe error class than non-silent ignores. The correct structural way to mark a parameter set as correct-to-ignore is to generate a one-element matrix with the indicative markers. |
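A minimal sketch of one way to read "a one-element matrix with the indicative markers" (the values and reason below are invented for illustration): rather than producing an empty or silently reduced parameter set, keep a single explicitly marked entry so the omission stays visible in the report.

```python
import pytest

@pytest.mark.parametrize(
    "x",
    [
        0,
        1,
        # keep the "impossible" value visible, but explicitly marked
        pytest.param(2, marks=pytest.mark.skip(reason="combination not applicable")),
    ],
)
def test_foo(x):
    assert x >= 0
```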
@RonnyPfannschmidt Thanks for the feedback. Just let me clarify the proposal in case it looked weird (I'm not good at naming things). The gist was to: …
It might not fit in at all, though, but it seems like a good idea to support something like this in my case. |
I would prefer to see this implemented as a callable parameter to parametrize, taking the node and eventually fixtures of a scope available at collect time. |
Hi, I think I am looking at the same problem right now. I described it in more detail here: https://stackoverflow.com/questions/63063722/how-to-create-a-parametrized-fixture-that-is-dependent-on-value-of-another-param. What I'd like to achieve is stacking parametrized fixtures as described at the SO link (I can reproduce it here if the link is an issue), and I think it would be a solution for other people here as well, since the proposal to ignore/deselect test cases looks like an XY problem to me, where the real solution would be to not generate the invalid test case combinations in the first place. |
@nicoddemus Thanks for the solution. I too want a simple way to deselect a test based on a fixture value or parametrized argument value(s) without adding to the "test skipped" list, and this solution's API is definitely adequate. I understand that this might be a slightly uncommon use case, but I support adding something like this to the core because I strongly agree with @notestaff that there is value in differentiating tests that are SKIPPED vs tests that are intended to NEVER BE RUN. Solely relying on @pytest.mark.skip() pollutes the differentiation between these two and makes knowing the state of my test set harder.

I personally use @pytest.mark.skip() primarily to instruct pytest "don't run this test now", mostly in development as I build out functionality, and almost always temporarily. In contrast, as people have mentioned, there are clearly scenarios where some combinations of fixtures and/or parametrized arguments are never intended to run. Sure, you could do this by putting conditions on the parameters, but that can hinder readability: sometimes code to remove a few items from a group is much clearer than the code to not add them to the group in the first place. Plus, the "don't add them to the group" argument doesn't cleanly solve the scenario where I want to deselect/ignore a test based on the value of a fixture, as far as I can tell.

If all the tests I want to run are being run, I want to see an all-green message; that way the presence of "X tests skipped" tells me whether something that should be tested is currently being skipped. As someone with an embedded background, the "X tests skipped" message feels like a compiler warning to me, and please forgive those of us who don't like living with projects that feel as if they have "compiler warnings". :)

I understand @RonnyPfannschmidt's concern with silent skips and think designating them as 'deselected' (1) gives good visibility, (2) differentiates them from skipped tests, and (3) does not attract excessive attention to tests that should never run. Perhaps another solution is adding another optional parameter to skip, say 'ignore', to give the differentiation that some of us would like to see in the result summary:

def test_function(): …

Results (1.39s): … |
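To make that distinction concrete, here is a small runnable sketch (the test names, parameters and validity condition are invented, not taken from the comment above) of the status quo being described: a temporarily disabled test and parameter combinations that are never meant to run both land in the same "skipped" bucket of the summary.

```python
import pytest

@pytest.mark.skip(reason="disabled while feature X is being built")  # temporary skip
def test_feature_x():
    assert True

@pytest.mark.parametrize("x", range(5))
@pytest.mark.parametrize("y", range(5))
def test_combination(x, y):
    if x > y:
        # never a meaningful combination -- yet it is reported just like the skip above
        pytest.skip("x > y is not a valid combination")
    assert x <= y
```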
This way worked for me; I was able to ignore some parameters using pytest_generate_tests(metafunc). |
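A minimal sketch of that kind of approach, assuming hypothetical argument names x/y and a combination_is_valid() helper (none of which come from the comment above): by filtering inside pytest_generate_tests, the unwanted combinations are never generated, so nothing shows up as skipped or deselected.

```python
# conftest.py
def combination_is_valid(x, y):
    # hypothetical rule for which combinations make sense
    return x <= y

def pytest_generate_tests(metafunc):
    if {"x", "y"} <= set(metafunc.fixturenames):
        combos = [
            (x, y)
            for x in range(10)
            for y in range(10)
            if combination_is_valid(x, y)
        ]
        metafunc.parametrize("x,y", combos)
```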
+1 for this feature, but it seems this thread hasn't been active for a while... |
Until the feature is implemented: the following code successfully uncollects and hides the tests you don't want.

```python
@pytest.hookimpl(hookwrapper=True)
def pytest_collectreport(report):
    report.result = filter(should_hide, report.result)
    yield

@pytest.hookimpl(hookwrapper=True)
def pytest_collection_modifyitems(config, items):
    yield
    items[:] = filter(should_hide, items)
```

I used it with the following filtering function (this is the predicate passed as should_hide above; despite the name, it returns True for tests that should be kept, and fixtures_name_to_type is a mapping, defined elsewhere in my code, from fixture name to the expected parameter type):

```python
def is_not_skipped(test) -> bool:
    if hasattr(test, 'callspec') and hasattr(test.callspec, 'params'):
        params = test.callspec.params
        for fixture_name, fixture_type in fixtures_name_to_type.items():
            if fixture_name in params:
                return isinstance(params[fixture_name], fixture_type)
    return True
```
|
I've just published a plugin that implements a … marker for this. The reason I used the … Edit: actually I went ahead and did the deselect_if version too - pytest-deselect-if.
Sometimes a test should always be skipped, e.g. because we generate tests by first generating all possible combinations of parameters and then calling pytest.skip inside the test function for combinations that don't make sense. That's different from tests that are skipped in a particular test run, but that might be run in another test run (e.g. on different hardware or when a particular feature is added). Could you add a way to mark tests that should always be skipped, and have them reported separately from tests that are sometimes skipped? When a report says "400 tests skipped" it looks like something is wrong, but maybe 390 of these should always be skipped, and just 10 should be tested later when conditions permit. Also, the "% of tests done" status message becomes distorted when always-skipped tests are included.