Add pytest.mark.skip shortcut (Issue #607) #1040

Closed · wants to merge 35 commits

Commits (35)
beaa8e5
Fixes #653 use deprecated_call as context_manager
chiller Sep 21, 2015
9f77a85
removed mutation of global state, changed filter addition in Warnings…
chiller Sep 21, 2015
d8fbb0b
start features branch
hpk42 Sep 22, 2015
97f7815
also change pytest version to target 2.9.0
hpk42 Sep 22, 2015
8a4517f
re-add 2.8.x changelog so that MASTER can be merged into features wrt
hpk42 Sep 22, 2015
79587d4
Merge branch 'master' into features
hpk42 Sep 22, 2015
48df2f6
Merge branch 'master' into features
hpk42 Sep 22, 2015
7c088d1
remove nonsense line
hpk42 Sep 22, 2015
36924b5
Merge branch '653-deprecated-context-manager' of https://github.com/c…
RonnyPfannschmidt Sep 22, 2015
4867554
Merge branch 'master' into features
hpk42 Sep 23, 2015
0c05ca1
Merge branch 'master' into features
hpk42 Sep 26, 2015
cb58eaa
Merge remote-tracking branch 'upstream/master' into features
hpk42 Sep 29, 2015
b71add2
Add MarkEvaluator for skip
MichaelAquilina Sep 21, 2015
4e94135
Remove incorrect use of pytest.mark.skip
MichaelAquilina Sep 21, 2015
f144666
Work towards test coverage of mark.skip
MichaelAquilina Sep 21, 2015
ad0b8e3
Fix case where skip is assigned to as an attribute directly
MichaelAquilina Sep 21, 2015
61b8443
Update docs with new skip marker
MichaelAquilina Sep 21, 2015
5ec08d3
Delete trailing whitespace
MichaelAquilina Sep 21, 2015
dc7153e
Spelling and grammar fixes
MichaelAquilina Sep 21, 2015
771aef9
Add a test_skip_class test
MichaelAquilina Sep 21, 2015
abc27f5
Update skipping.rst with correct version marker
MichaelAquilina Sep 23, 2015
d162894
Update skippings tests for better coverage
MichaelAquilina Sep 27, 2015
04545f8
classes inherit from object
MichaelAquilina Oct 1, 2015
eee2413
Fix failing test
MichaelAquilina Oct 1, 2015
1b5aa28
Check no reason displayed if none specified
MichaelAquilina Oct 1, 2015
9e57954
First argument in pytest.mark.skip is a reason
MichaelAquilina Oct 1, 2015
213dbe7
newlines
MichaelAquilina Oct 1, 2015
25d74a5
Dont explicitly inherit from object
MichaelAquilina Oct 3, 2015
5ff9a0f
Remove redundant comments
MichaelAquilina Oct 3, 2015
fc0bd94
Test that "unconditional skip" is the default reason if none given
MichaelAquilina Oct 3, 2015
122980e
Add myself to AUTHORS
MichaelAquilina Oct 3, 2015
00d0c74
Update reason in test to prevent confusing with test_no_reason
MichaelAquilina Oct 3, 2015
df874db
Update default reason to "unconditional skip"
MichaelAquilina Oct 3, 2015
7504429
Add unconditional skip entry to CHANGELOG
MichaelAquilina Oct 3, 2015
8984177
TestXFail also shouldnt explicitly inherit from object
MichaelAquilina Oct 3, 2015
2 changes: 2 additions & 0 deletions AUTHORS
@@ -28,6 +28,7 @@ Dave Hunt
David Mohr
Edison Gustavo Muenz
Eduardo Schettino
Endre Galaczi
Elizaveta Shashkova
Eric Hunsberger
Eric Siegerman
@@ -51,6 +52,7 @@ Marc Schlaich
Mark Abramowitz
Markus Unterwaditzer
Martijn Faassen
Michael Aquilina
Michael Droettboom
Nicolas Delaby
Pieter Mulder
23 changes: 15 additions & 8 deletions CHANGELOG
@@ -1,3 +1,9 @@
2.9.0.dev
---------

* Add unconditional skip mechanism (`pytest.mark.skip`)

-2.8.1.dev
+2.8.2.dev
---------

@@ -25,9 +31,9 @@
"pytest-xdist" plugin, with test reports being assigned to the wrong tests.
Thanks Daniel Grunwald for the report and Bruno Oliveira for the PR.

-- (experimental) adapt more SEMVER style versioning and change meaning of
-  master branch in git repo: "master" branch now keeps the bugfixes, changes
-  aimed for micro releases. "features" branch will only be be released
+- (experimental) adapt more SEMVER style versioning and change meaning of
+  master branch in git repo: "master" branch now keeps the bugfixes, changes
+  aimed for micro releases. "features" branch will only be be released
with minor or major pytest releases.

- Fix issue #766 by removing documentation references to distutils.
@@ -42,6 +48,7 @@

- Fix issue #411: Add __eq__ method to assertion comparison example.
Thanks Ben Webb.
- Fix issue #653: deprecated_call can be used as context manager.

- fix issue 877: properly handle assertion explanations with non-ascii repr
Thanks Mathieu Agopian for the report and Ronny Pfannschmidt for the PR.
@@ -52,7 +59,7 @@
-----------------------------

- new ``--lf`` and ``-ff`` options to run only the last failing tests or
"failing tests first" from the last run. This functionality is provided
"failing tests first" from the last run. This functionality is provided
  through porting the formerly external pytest-cache plugin into pytest core.
  BACKWARD INCOMPAT: if you used pytest-cache's functionality to persist
  data between test runs be aware that we don't serialize sets anymore.

Contributor Author: Just as an FYI to why these are here, my editor automatically trims out extra whitespace at the end of lines.

Member: No worries, thanks.
@@ -158,17 +165,17 @@

- fix issue735: assertion failures on debug versions of Python 3.4+

-- new option ``--import-mode`` to allow to change test module importing
-  behaviour to append to sys.path instead of prepending. This better allows
-  to run test modules against installated versions of a package even if the
+- new option ``--import-mode`` to allow to change test module importing
+  behaviour to append to sys.path instead of prepending. This better allows
+  to run test modules against installated versions of a package even if the
package under test has the same import root. In this example::

testing/__init__.py
testing/test_pkg_under_test.py
pkg_under_test/

the tests will run against the installed version
-  of pkg_under_test when ``--import-mode=append`` is used whereas
+  of pkg_under_test when ``--import-mode=append`` is used whereas
by default they would always pick up the local version. Thanks Holger Krekel.

- pytester: add method ``TmpTestdir.delete_loaded_modules()``, and call it
2 changes: 1 addition & 1 deletion _pytest/__init__.py
@@ -1,2 +1,2 @@
#
-__version__ = '2.8.2.dev1'
+__version__ = '2.9.0.dev1'
12 changes: 10 additions & 2 deletions _pytest/recwarn.py
@@ -28,9 +28,17 @@ def pytest_namespace():
            'warns': warns}


-def deprecated_call(func, *args, **kwargs):
+def deprecated_call(func=None, *args, **kwargs):
    """Assert that ``func(*args, **kwargs)`` triggers a DeprecationWarning.

    This function can be used as a context manager::

        >>> with deprecated_call():
        ...     myobject.deprecated_method()
    """
    if not func:
        return WarningsChecker(expected_warning=DeprecationWarning)

    wrec = WarningsRecorder()
    with wrec:
        warnings.simplefilter('always')  # ensure all warnings are triggered
Expand Down Expand Up @@ -150,8 +158,8 @@ def showwarning(message, category, filename, lineno,
        self._module.showwarning = showwarning

        # allow the same warning to be raised more than once
-       self._module.simplefilter('always', append=True)
+       self._module.simplefilter('always')
        return self

    def __exit__(self, *exc_info):
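
A note on the simplefilter change above: simplefilter prepends its filter by default, so the 'always' rule outranks previously installed filters, whereas an appended rule loses to any earlier 'ignore' and the recorder can silently miss warnings. A stdlib-only illustration of the difference (my sketch, not code from the PR):

    import warnings

    warnings.filterwarnings('ignore', category=DeprecationWarning)

    with warnings.catch_warnings(record=True) as caught:
        # appended rule sits behind the earlier 'ignore' filter, which wins
        warnings.simplefilter('always', append=True)
        warnings.warn("hi", DeprecationWarning)
    print(len(caught))  # 0 -- the warning was swallowed

    with warnings.catch_warnings(record=True) as caught:
        # the default (prepend) puts 'always' in front, so it is recorded
        warnings.simplefilter('always')
        warnings.warn("hi", DeprecationWarning)
    print(len(caught))  # 1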
30 changes: 26 additions & 4 deletions _pytest/skipping.py
@@ -5,13 +5,16 @@

import py
import pytest
from _pytest.mark import MarkInfo


def pytest_addoption(parser):
    group = parser.getgroup("general")
    group.addoption('--runxfail',
                    action="store_true", dest="runxfail", default=False,
                    help="run tests even if they are marked xfail")


def pytest_configure(config):
    if config.option.runxfail:
        old = pytest.xfail
@@ -38,18 +41,22 @@ def nop(*args, **kwargs):
"See http://pytest.org/latest/skipping.html"
)


def pytest_namespace():
    return dict(xfail=xfail)


class XFailed(pytest.fail.Exception):
    """ raised from an explicit call to pytest.xfail() """


def xfail(reason=""):
    """ xfail an executing test or setup functions with the given reason."""
    __tracebackhide__ = True
    raise XFailed(reason)
xfail.Exception = XFailed


class MarkEvaluator:
    def __init__(self, item, name):
        self.item = item
@@ -147,10 +154,25 @@ def getexplanation(self):

@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
-    evalskip = MarkEvaluator(item, 'skipif')
-    if evalskip.istrue():
-        item._evalskip = evalskip
-        pytest.skip(evalskip.getexplanation())
+    # Check if skip or skipif are specified as pytest marks
+
+    skipif_info = item.keywords.get('skipif')
+    if isinstance(skipif_info, MarkInfo):
+        eval_skipif = MarkEvaluator(item, 'skipif')
+        if eval_skipif.istrue():
+            item._evalskip = eval_skipif
+            pytest.skip(eval_skipif.getexplanation())
+
+    skip_info = item.keywords.get('skip')
+    if isinstance(skip_info, MarkInfo):
+        item._evalskip = True
+        if 'reason' in skip_info.kwargs:
+            pytest.skip(skip_info.kwargs['reason'])
+        elif skip_info.args:
+            pytest.skip(skip_info.args[0])
+        else:
+            pytest.skip("unconditional skip")

    item._evalxfail = MarkEvaluator(item, 'xfail')
    check_xfail_no_run(item)

Contributor Author: I'm not very happy with the way args are handled in this and was hoping for some feedback. For example, what if the user passes more than 1 arg? Should we raise an exception? What if an invalid kwarg is passed?

Member: I'm not sure, I also think the argument handling for marks in general is a little clunky. I'd say this looks good enough, but we should probably look at a way to improve argument handling for marks in general in the future.
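
On the thread above: nothing validates extra positional args or unknown kwargs, so they are silently ignored. A hypothetical stricter extraction helper (a sketch, not part of this PR) could look like:

    def skip_reason(skip_info):
        # Hypothetical validation sketch; the PR itself silently ignores
        # extra positional args and unknown kwargs.
        if len(skip_info.args) > 1:
            raise ValueError("pytest.mark.skip takes at most one "
                             "positional argument (the reason)")
        unknown = set(skip_info.kwargs) - set(['reason'])
        if unknown:
            raise ValueError("unknown kwargs for pytest.mark.skip: %s"
                             % ', '.join(sorted(unknown)))
        if 'reason' in skip_info.kwargs:
            return skip_info.kwargs['reason']
        if skip_info.args:
            return skip_info.args[0]
        return "unconditional skip"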
6 changes: 6 additions & 0 deletions doc/en/recwarn.rst
@@ -114,3 +114,9 @@ command ``warnings.simplefilter('always')``::
        warnings.warn("deprecated", DeprecationWarning)
        assert len(recwarn) == 1
        assert recwarn.pop(DeprecationWarning)

You can also use it as a contextmanager::

    def test_global():
        with pytest.deprecated_call():
            myobject.deprecated_method()
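
The snippet above relies on an undefined myobject. A self-contained equivalent (hypothetical deprecated_method, stdlib warnings only), including the failure mode that the tests below exercise:

    import warnings
    import pytest

    def deprecated_method():
        warnings.warn("deprecated", DeprecationWarning)

    def test_deprecated():
        # passes: the body raises a DeprecationWarning
        with pytest.deprecated_call():
            deprecated_method()

    def test_silence_fails():
        # deprecated_call fails the test ("DID NOT WARN") if nothing warns
        with pytest.raises(pytest.fail.Exception):
            with pytest.deprecated_call():
                pass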
16 changes: 13 additions & 3 deletions doc/en/skipping.rst
@@ -29,8 +29,18 @@ corresponding to the "short" letters shown in the test progress::
Marking a test function to be skipped
-------------------------------------------

.. versionadded:: 2.9

The simplest way to skip a test function is to mark it with the `skip` decorator
which may be passed an optional `reason`:

    @pytest.mark.skip(reason="no way of currently testing this")
    def test_the_unknown():
        ...

.. versionadded:: 2.0, 2.4

If you wish to skip something conditionally then you can use `skipif` instead.
Here is an example of marking a test function to be skipped
when run on a Python3.3 interpreter::

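The skipif example itself is collapsed in this view; its usual shape is sketched below, with the condition and reason illustrative rather than quoted from this page:

    import sys
    import pytest

    @pytest.mark.skipif(sys.version_info >= (3, 3),
                        reason="does not run on python3.3")
    def test_function():
        ...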
@@ -168,12 +178,12 @@ Running it with the report-on-xfail option gives this output::
platform linux -- Python 3.4.3, pytest-2.8.1, py-1.4.30, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR/example, inifile:
collected 7 items

xfail_demo.py xxxxxxx
======= short test summary info ========
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
-  reason: [NOTRUN]
+  reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4
Expand All @@ -183,7 +193,7 @@ Running it with the report-on-xfail option gives this output::
XFAIL xfail_demo.py::test_hello6
reason: reason
XFAIL xfail_demo.py::test_hello7

======= 7 xfailed in 0.12 seconds ========

.. _`skip/xfail with parametrize`:
1 change: 0 additions & 1 deletion testing/test_capture.py
@@ -556,7 +556,6 @@ def test_a():
            import subprocess
            subprocess.call([sys.executable, __file__])

-       @pytest.mark.skip
        def test_foo():
            import os;os.write(1, b'\xc3')

11 changes: 11 additions & 0 deletions testing/test_recwarn.py
@@ -79,6 +79,7 @@ def dep_explicit(i):
filename="hello", lineno=3)

class TestDeprecatedCall(object):

def test_deprecated_call_raises(self):
excinfo = pytest.raises(AssertionError,
"pytest.deprecated_call(dep, 3)")
@@ -111,6 +112,16 @@ def test_deprecated_explicit_call(self):
        pytest.deprecated_call(dep_explicit, 0)
        pytest.deprecated_call(dep_explicit, 0)

    def test_deprecated_call_as_context_manager_no_warning(self):
        with pytest.raises(pytest.fail.Exception) as ex:
            with pytest.deprecated_call():
                dep(1)
        assert str(ex.value) == "DID NOT WARN"

    def test_deprecated_call_as_context_manager(self):
        with pytest.deprecated_call():
            dep(0)

    def test_deprecated_call_pending(self):
        f = lambda: py.std.warnings.warn(PendingDeprecationWarning("hi"))
        pytest.deprecated_call(f)
85 changes: 85 additions & 0 deletions testing/test_skipping.py
@@ -4,6 +4,7 @@
from _pytest.skipping import MarkEvaluator, folded_skips, pytest_runtest_setup
from _pytest.runner import runtestprotocol


class TestEvaluator:
    def test_no_marker(self, testdir):
        item = testdir.getitem("def test_func(): pass")
@@ -382,6 +383,90 @@ def test_func():
        ])


class TestSkip:
    def test_skip_class(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.mark.skip
            class TestSomething(object):
                def test_foo(self):
                    pass
                def test_bar(self):
                    pass

            def test_baz():
                pass
        """)
        rec = testdir.inline_run()
        rec.assertoutcome(skipped=2, passed=1)

    def test_skips_on_false_string(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.mark.skip('False')
            def test_foo():
                pass
        """)
        rec = testdir.inline_run()
        rec.assertoutcome(skipped=1)

Contributor Author: What would you expect this arg to be? Would it be assumed to be a reason? Should it be ignored? I guess reason makes the most sense...

Member: Yep, I would expect it to be the reason for the skip: an optional informative message. If not given, we can use a default message like I used in my example ("explicit skip"). 😄

Member: Also check the reason here, please.

Contributor Author: How would I check the reason here if it's not specified, though? Is matching the string "skipped instance" good enough?

Member: I assume the reason in this case was "False", right? I mean, the only possible parameter for skip is a reason, unlike skipif, which receives (condition, reason). So I meant that you should also use fnmatch_lines to ensure the expected message is being displayed.


    def test_arg_as_reason(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.mark.skip('testing stuff')
            def test_bar():
                pass
        """)
        result = testdir.runpytest('-rs')
        result.stdout.fnmatch_lines([
            "*testing stuff*",
            "*1 skipped*",
        ])

    def test_skip_no_reason(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.mark.skip
            def test_foo():
                pass
        """)
        result = testdir.runpytest('-rs')
        result.stdout.fnmatch_lines([
            "*unconditional skip*",
            "*1 skipped*",
        ])

Contributor Author: I was actually referring to this test @nicoddemus, GitHub moved our comments when I updated the code.

Member: Oh! Well, checking the Skipped instance is enough, except that I think we should issue a better message. 😄

    def test_skip_with_reason(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.mark.skip(reason="for lolz")
            def test_bar():
                pass
        """)
        result = testdir.runpytest('-rs')
        result.stdout.fnmatch_lines([
            "*for lolz*",
            "*1 skipped*",
        ])

    def test_only_skips_marked_test(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.mark.skip
            def test_foo():
                pass
            @pytest.mark.skip(reason="nothing in particular")
            def test_bar():
                pass
            def test_baz():
                assert True
        """)
        result = testdir.runpytest('-rs')
        result.stdout.fnmatch_lines([
            "*nothing in particular*",
            "*1 passed*2 skipped*",
        ])
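
These tests cover function- and class-level marks. Since the implementation looks the mark up via item.keywords, a module-level pytestmark should flow through the same path; a hedged sketch (not a test from this PR):

    import pytest

    # module-level mark: applies to every test collected from this module
    pytestmark = pytest.mark.skip(reason="module disabled")

    def test_anything():
        assert False  # skipped before it can fail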

class TestSkipif:
    def test_skipif_conditional(self, testdir):
        item = testdir.getitem("""