How to stop Pants test from marking skipped file as failed? #12179

Closed · sureshjoshi opened this issue Jun 7, 2021 · 12 comments

@sureshjoshi (Member)

Using Pants 2.5.0 (or any version I can recall), running `./pants test ::` will mark empty test files (e.g. a file that pytest runs over but that contains no tests) as failed.

While this conflicts with the default pytest behaviour (if I run pytest from the command line, everything passes; if I run `pants test`, it fails), failing the test is preferable to me, because an empty test file is usually an oversight on my part (or I'm using it to force coverage reports or something).

Anyway, that's all fine. However, I was wondering if there is a flag or option to disable this behaviour in the case of skipped tests.

For example, I have some tests that only run on Linux, so at the module level of these tests I have something like `pytest.mark.skipif(not sys.platform.startswith("linux"))`.

Again, this works fine when running pytest natively; however, `./pants test ::` gives me this:

```
10:41:54.18 [WARN] Completed: test - services/foo/foo_linux_test.py:../tests failed (exit code 5).
============================= test session starts ==============================
platform darwin -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /Users/me/.cache/pants/named_caches/pex_root/venvs/63d35bc492431ad20f713d867860e8ed23975ce6/8528cd9c5089bdf96ee11e32c908dc688186b509/bin/python3.9
cachedir: .pytest_cache
rootdir: /private/var/folders/18/q1r7phps28nc9rx5j_0t3jmm0000gp/T/process-executionLGcwkR
plugins: asyncio-0.15.1, cov-2.11.1, icdiff-0.5
collecting ... collected 0 items / 1 skipped
```
@Eric-Arellano (Contributor)

Check out pytest-dev/pytest#2393. It looks like Pytest doesn't have an option for this, but there's a plugin. Or the Pytest maintainers recommend adding a dummy test function to work around it 🤷‍♂️

Does that work? Pants can't do anything magical here: we don't know which tests are skipped or not; we only shell out to Pytest.
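
For reference, a minimal sketch of that dummy-test workaround for an otherwise-empty test file (the file and test names here are hypothetical):

```python
# foo_test.py (hypothetical name). With no collectable tests, pytest exits
# nonzero ("no tests collected") and Pants reports the file as failed; a
# placeholder test keeps at least one item in the collection so the run exits 0.
def test_dummy():
    """Placeholder so pytest collects at least one test."""
```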

@sureshjoshi (Member, Author)

Thanks for the quick reply @Eric-Arellano!

I currently use a dummy function - literally called "test_dummy" right now.

This is strange though, because if you shell out to pytest, then wouldn't my `pants test ::` match the results of pytest? My native pytest does not return failures.

@Eric-Arellano (Contributor)

> This is strange though, because if you shell out to pytest, then wouldn't my `pants test ::` match the results of pytest? My native pytest does not return failures.

Yes, but only because you would be running Pytest in a single swoop over your whole repo, whereas Pants runs per-file for fine-grained caching and parallelism. Really, Pants is ~running `pytest f1.py; pytest f2.py; pytest f3.py`, and so on. If you did that, Pytest and Pants should behave the same way.
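
To make that concrete, here is a rough approximation (not Pants' actual implementation) of that per-file pattern, with hypothetical file names:

```python
# Run pytest once per test file, roughly how Pants partitions test runs;
# any file whose run exits nonzero shows up as a failed target.
import subprocess
import sys

for test_file in ["f1.py", "f2.py", "f3.py"]:  # hypothetical file names
    proc = subprocess.run([sys.executable, "-m", "pytest", test_file])
    print(f"{test_file} -> exit code {proc.returncode}")
```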

@sureshjoshi (Member, Author)

This shows a skipped test, but no failures:

```
(.venv) me@computer foo % pytest foo_linux_test.py
============================= test session starts ==============================
platform darwin -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /Users/me/scratch/foo
plugins: asyncio-0.15.1, icdiff-0.5
collected 0 items / 1 skipped

============================= 1 skipped in 0.06s ===============================
```

@sureshjoshi (Member, Author)

Hmm, let me dig deeper - maybe there are other shenanigans going on

@sureshjoshi (Member, Author)

Okay, so, confirmed that there does seem to be a discrepancy here, because I tried two ways of skipping a module: an older way that I was apparently still using in one test, versus a "cleaner" way. In both cases, native pytest shows skipped tests and no errors.

So I've worked around my problem, but it's strange that Pants doesn't treat them the same. However, I think using `pytestmark` is the preferred option anyway.

Pants marks one a success, the other a failure.

Pants failure:

```python
import sys

import pytest

if not sys.platform.startswith("linux"):
    pytest.skip(
        "Skipping Linux-only tests (Platform/OS commands only really testable in Linux)",
        allow_module_level=True,
    )
```

Pants success:

```python
import sys

import pytest

pytestmark = pytest.mark.skipif(
    not sys.platform.startswith("linux"),
    reason="Skipping Linux-only tests (Platform/OS commands only really testable in Linux)",
)
```

@Eric-Arellano (Contributor)

Hm, I wonder if this is from something like the Pytest version being different? We're not doing anything special.

Also, to double-check, are you running `echo $?` afterwards to get the exit code? IIRC Pytest uses a special code if a test was skipped, as opposed to the file being empty.
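
For reference, the exit code can also be checked in-process, since `pytest.main()` returns the same code the CLI would exit with. A minimal sketch, reusing the file name from the run above:

```python
import pytest

# Equivalent to `pytest foo_linux_test.py; echo $?`.
code = pytest.main(["foo_linux_test.py"])
print(int(code))  # e.g. 5 when nothing was collected
```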

@sureshjoshi (Member, Author)

I'm not typing anything other than `pytest` or `pants test`.

So, since there's a workaround, and my `pytest.mark` usage is now arguably better, this isn't really an open ticket for me anymore. I'll leave it to you whether you want to close it or keep it going.

It's strange, and I can help with any testing, but it doesn't seem to be remotely a blocker.

@sureshjoshi (Member, Author)

Sorry, just re-read your comment:

It makes sense that Pants fails the initial skip type, since pytest returns exit code 5 for it, whereas with the second mechanism it returns 0.

I confirmed that at the command line, and also in pytest_runner.py.

So, I guess this is just a straight pytest issue.

@Eric-Arellano (Contributor)

Okay, great. Thanks for following up with that!

@sureshjoshi (Member, Author)

For some more context:

https://docs.pytest.org/en/6.2.x/usage.html

Possible exit codes

Running pytest can result in six different exit codes:

- Exit code 0: All tests were collected and passed successfully
- Exit code 1: Tests were collected and run but some of the tests failed
- Exit code 2: Test execution was interrupted by the user
- Exit code 3: Internal error happened while executing tests
- Exit code 4: pytest command line usage error
- Exit code 5: No tests were collected

So it appears that pytest reports "0 items collected / 1 skipped" with a different exit code depending on how the test was skipped.
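
These codes are also exposed programmatically. A quick check, assuming pytest >= 5.0 (where the `pytest.ExitCode` enum was introduced):

```python
import pytest

# The enum members mirror the documented CLI exit codes.
assert pytest.ExitCode.OK == 0
assert pytest.ExitCode.TESTS_FAILED == 1
assert pytest.ExitCode.NO_TESTS_COLLECTED == 5
```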

@MrMegaMango

Does `pytest -m` still fail in Pants? I'll have to use uv then.
