How to stop Pants test from marking skipped file as failed? #12179
Check out pytest-dev/pytest#2393. It looks like Pytest doesn't have an option for this, but there's a plugin, or the Pytest maintainers recommend adding a dummy test function to work around it 🤷♂️ Does that work? Pants can't do anything magical here - we don't know which tests are skipped or not, we only shell out to Pytest.
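(For reference, the dummy-test workaround mentioned above is just a placeholder so pytest always collects at least one item; the file and function names here are illustrative.)

```python
# test_foo_linux.py -- hypothetical file whose real tests only run on Linux.
def test_dummy():
    """Placeholder so pytest always collects at least one test here."""
```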
Thanks for the quick reply @Eric-Arellano! I currently use a dummy function, literally called "test_dummy" right now. This is strange though, because if you shell out to pytest, then wouldn't my command-line pytest run fail the same way?
Yes, but only because you would be running Pytest in a single swoop over your whole repo, whereas Pants runs per-file for fine-grained caching and parallelism. Really, Pants is ~running `pytest path/to/your_test.py` once per test file.
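(To make the difference concrete, here is a toy sketch - assumed file names, and not how Pants is actually implemented - of per-file pytest invocations and the exit codes they produce.)

```python
import subprocess
import sys

# Hypothetical test files; foo_linux_test.py skips itself at module level.
for path in ["foo_linux_test.py", "other_test.py"]:
    result = subprocess.run([sys.executable, "-m", "pytest", path])
    # A file that collects zero tests yields exit code 5, which a per-file
    # runner will see as a failure; a repo-wide `pytest` run aggregates
    # everything into one exit code and hides it.
    print(f"{path}: exit code {result.returncode}")
```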
Shows a skipped test, but no failures:

```
(.venv) me@computer foo % pytest foo_linux_test.py
========================= test session starts =========================
platform darwin -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /Users/me/scratch/foo
plugins: asyncio-0.15.1, icdiff-0.5
collected 0 items / 1 skipped

========================= 1 skipped in 0.06s ==========================
```
Hmm, let me dig deeper - maybe there are other shenanigans going on.
Okay, so - confirmed that there is a discrepancy here, because I tried two ways of skipping a module: an older way that apparently I was still using in one test, vs. a "cleaner" way. In both cases, plain pytest shows skipped tests and no errors, but Pants marks one a success and the other a failure. So I've worked around my problem - it's just strange that Pants doesn't treat them the same.

Pants failure:

```python
import sys

import pytest

if not sys.platform.startswith("linux"):
    pytest.skip(
        "Skipping Linux-only tests (Platform/OS commands only really testable in Linux)",
        allow_module_level=True,
    )
```

Pants success:

```python
import sys

import pytest

pytestmark = pytest.mark.skipif(
    not sys.platform.startswith("linux"),
    reason="Skipping Linux-only tests (Platform/OS commands only really testable in Linux)",
)
```
Hm, I wonder if this is from something like the Pytest version being different? We're not doing anything special. Also, to double check, are you running `pytest` or `pants test` with any extra arguments?
I'm not writing anything other than `pytest` or `pants test`. So, since there is a workaround, and my pytest.mark usage is now arguably better, this isn't really an open ticket for me anymore - I'll leave it to you whether to close it or keep it going. It's strange, and I can help with any testing, but it doesn't seem to be remotely a blocker.
Sorry, just re-read your comment. It makes sense that Pants is failing the initial skip type, as pytest returns exit code 5 for it, whereas the second mechanism returns 0. I confirmed that at the command line, and also in pytest_runner.py. So I guess this is just a straight pytest issue.
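(As an aside, for anyone who wants pytest itself to return 0 in this situation - the idea behind the plugin linked above - a minimal conftest.py sketch along these lines should work; a sketch, not an official recipe.)

```python
# conftest.py
import pytest

def pytest_sessionfinish(session, exitstatus):
    # A module-level skip leaves zero collected tests, so pytest exits
    # with code 5 (NO_TESTS_COLLECTED); rewrite that to 0 so tools that
    # shell out to pytest treat the run as a pass.
    if exitstatus == pytest.ExitCode.NO_TESTS_COLLECTED:
        session.exitstatus = pytest.ExitCode.OK
```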
Okay, great. Thanks for following up with that!
For some more context, https://docs.pytest.org/en/6.2.x/usage.html documents pytest's exit codes. So it appears that pytest reports "0 items collected / 1 skipped" differently depending on how the test was skipped.
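(pytest also exposes those codes programmatically as the pytest.ExitCode enum, available since pytest 5.0, which makes the distinction easy to check.)

```python
import pytest

# Prints the documented exit codes:
# 0 OK, 1 TESTS_FAILED, 2 INTERRUPTED, 3 INTERNAL_ERROR,
# 4 USAGE_ERROR, 5 NO_TESTS_COLLECTED
for code in pytest.ExitCode:
    print(int(code), code.name)
```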
`pytest -m` still fails in Pants? I'll have to use uv then.
Using Pants 2.5.0 (or any version I can recall), running `./pants test ::` will mark empty test files (e.g. a file that pytest runs over, but that contains no tests) as failed. This conflicts with the default pytest behaviour (if I run `pytest` from the command line, it all passes; if I run `pants test`, it fails), but failing the test is preferable to me, because usually an empty test file is an oversight on my part (or I'm using it to force coverage reports or something).

Anyway, that's all fine. However, I was wondering if there is a flag or option to disable this behaviour in the event of skipped tests? For example, I have some tests that only run on Linux, so at the module level of those tests I have something like `pytest.mark.skipif(not sys.platform.startswith("linux"))`. Again, this works fine running pytest natively; however, `./pants test ::` gives me this: