
add example scripts for issue #519 (#2207)

Merged: 5 commits merged from RonnyPfannschmidt:fix/519 into pytest-dev:features on Jun 15, 2018

Conversation

RonnyPfannschmidt
Member

@RonnyPfannschmidt commented Jan 18, 2017

This PR simply adds a script that demonstrates issue #519.

As far as I can tell by now, it shows that the issue is no longer present.

This is also a starting point for adding verbatim acceptance tests to the folder tree, in order to execute py.test directly on them (for direct feedback and better debugging).

Opinions from @nicoddemus and @The-Compiler appreciated.

@coveralls

Coverage Status

Coverage remained the same at 92.819% when pulling 6d318dd on RonnyPfannschmidt:fix/519 into d15724f on pytest-dev:master.

@coveralls

coveralls commented Jan 18, 2017

Coverage Status

Coverage increased (+0.05%) to 92.711% when pulling 3ac2ae3 on RonnyPfannschmidt:fix/519 into 3dcdaab on pytest-dev:features.

@nicoddemus
Member

Hmm, seems like an interesting idea; it certainly makes it easy to contribute new tests and facilitates debugging, as you mention.

Should those acceptance tests always pass? I feel that would be too limiting; plenty of times we want to assert a different outcome (a failure or a skip).

Perhaps an additional file which describes the expected outcome of the test? It could even be implemented easily in terms of what we already have; consider this test:

def test_1():
    pass

def test_2():
    assert 0

We could associate the expected outcome with that test file (using the same filename, perhaps):

- passed: 1
- failed: 1

Or:

- fnmatch_lines:
  - '*1 passed, 1 failed*'

And so on.
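
As a minimal sketch of how this could be built on top of what we already have (the .expected filename convention, the key/value format, and the helper name are only assumptions for this brainstorm, not an agreed design):

# Hypothetical helper: run an example script and compare it against a
# sidecar "<name>.expected" file containing lines such as "- passed: 1".
from pathlib import Path


def check_example(testdir, example: Path):
    expectations = {}
    for line in example.with_suffix(".expected").read_text().splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition(":")
        expectations[key.strip().lstrip("- ")] = int(value)

    # Copy the verbatim script into the isolated test dir and run it.
    testdir.makepyfile(example.read_text())
    result = testdir.runpytest()

    # e.g. expectations == {"passed": 1, "failed": 1}
    result.assert_outcomes(**expectations)

The fnmatch variant could be supported in the same way, by passing the sidecar lines to result.stdout.fnmatch_lines() instead.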

There's a risk of creating an entire meta-language for what is already relatively simple in Python, because some tests require more complex logic.

Just some quick brainstorming; I'd like to hear if you have any other ideas.

@RonnyPfannschmidt
Member Author

Whether they are expected to fail should be indicated in the filename. We should use them from tests in the real test folders, or be able to run them directly.

Their folder should also be in collect_ignore, so we only run them explicitly.
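
For reference, a minimal sketch of what that exclusion could look like in a conftest.py (the example_scripts folder name is just a placeholder):

# testing/conftest.py (names are illustrative)
# Paths listed in collect_ignore are skipped during normal collection,
# so the verbatim scripts don't run as part of the regular suite.
collect_ignore = ["example_scripts"]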

@Lothiraldan
Contributor

I'm writing a pytest plugin that prints JSON for each test report, and I'm very interested in an examples directory! I was thinking of having at least these kinds of tests (a rough sketch follows the list):

  • Passing function/class tests
  • Failing function/class tests
  • Test cases that randomly fail
  • Tests with fixtures
  • Tests with xfail
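
A rough sketch of what such example files might contain (the file name and contents are placeholders, not proposed files):

# example_scripts/test_report_shapes.py (illustrative only)
import random

import pytest


def test_passes():
    assert True


def test_fails():
    assert 0


@pytest.mark.xfail(reason="demonstrates an expected failure")
def test_expected_failure():
    assert 0


@pytest.fixture
def answer():
    return 42


def test_with_fixture(answer):
    assert answer == 42


def test_random_failure():
    # Fails roughly half of the time, useful for exercising report output.
    assert random.random() < 0.5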

About the running of acceptance tests: I'm contributing to the Mercurial project, where we have an internal testing tool that lets us write tests like this:

Test pytest output:

  $ pytest $TESTDIR/test.py

We can then run the test-runner with the -i option so we can accept the differences:

+  ============================= test session starts ==============================
+  platform linux2 -- Python 2.7.13, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
+  rootdir: /home/lothiraldan/project/mercurial/evolve, inifile:
+  plugins: sugar-0.8.0
+  collected 1 item
+  
+  ../../../../home/lothiraldan/project/mercurial/evolve/tests/test.py F
+  
+  =================================== FAILURES ===================================
+  _________________________________ test_answer __________________________________
+  
+      def test_answer():
+  >       assert inc(3) == 5
+  E       assert 4 == 5
+  E        +  where 4 = inc(3)
+  
+  /home/lothiraldan/project/mercurial/evolve/tests/test.py:5: AssertionError
+  =========================== 1 failed in 0.01 seconds ===========================
+  [1]
Accept this change? [n] y
.
# Ran 1 tests, 0 skipped, 0 failed.

The test file then becomes:

Test pytest output:

  $ pytest $TESTDIR/test.py
  ============================= test session starts ==============================
  platform linux2 -- Python 2.7.13, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
  rootdir: /home/lothiraldan/project/mercurial/evolve, inifile:
  plugins: sugar-0.8.0
  collected 1 item
  
  ../../../../home/lothiraldan/project/mercurial/evolve/tests/test.py F
  
  =================================== FAILURES ===================================
  _________________________________ test_answer __________________________________
  
      def test_answer():
  >       assert inc(3) == 5
  E       assert 4 == 5
  E        +  where 4 = inc(3)
  
  /home/lothiraldan/project/mercurial/evolve/tests/test.py:5: AssertionError
  =========================== 1 failed in 0.02 seconds ===========================
  [1]

Any backward-incompatible change will change the output and make the test runner mark the test file as failed.

We use this test runner extensively for testing Mercurial, and there has been some interest in extracting it from Mercurial into an independent project. I think it could be a good fit for pytest acceptance tests; what do you think?

@RonnyPfannschmidt
Member Author

@Lothiraldan I am fairly interested in something like the Mercurial acceptance-test framework.

Until now I just didn't have the time to reimplement it, and there is the licensing issue with the GPL.

@Lothiraldan mentioned this pull request on Aug 7, 2017
@nicoddemus
Member

@RonnyPfannschmidt is there still interest in this?

@RonnyPfannschmidt
Member Author

@nicoddemus yes, but I think I need to start it in a different manner.

@RonnyPfannschmidt changed the title from "[wip] add example scripts for issue #519" to "add example scripts for issue #519" on Apr 26, 2018
@RonnyPfannschmidt
Member Author

@nicoddemus I'd like to put the scripts in simply to have them there and do something with them

@nicoddemus
Member

I'd like to put the scripts in simply to have them there and do something with them

Can you elaborate on what you mean by "something"?

If you are thinking of only having a set of verbatim test files that should pass, I think this would be a nice start. If one needs something more elaborate (to check for a specific message, or that a certain test fails instead), then we can always fall back to using testdir as we do now.
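
As a rough sketch of that simple starting point (the example_scripts directory, the parametrized test, and the use of the testdir fixture from the pytester plugin are assumptions for illustration, not part of this PR):

# testing/test_example_scripts.py (illustrative only)
from pathlib import Path

import pytest

EXAMPLE_DIR = Path(__file__).parent / "example_scripts"  # assumed layout


@pytest.mark.parametrize(
    "script",
    sorted(EXAMPLE_DIR.glob("test_*.py")),
    ids=lambda path: path.name,
)
def test_example_script_passes(testdir, script):
    # Each verbatim script is copied into an isolated directory and run;
    # anything more elaborate can still fall back to fnmatch_lines as now.
    testdir.makepyfile(script.read_text())
    result = testdir.runpytest()
    assert result.ret == 0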

@RonnyPfannschmidt
Member Author

@nicoddemus mainly I want to start adding a set of test files that demonstrate more elaborate/strange issues. In the future I'd then like to use those in actual pytest tests via the normal mechanisms, but right now I want to have them available in the repo for direct usage.

@nicoddemus
Member

@RonnyPfannschmidt got it.

The test is currently failing, so it seems it still needs some work, as we can't have failing tests in the suite.

@RonnyPfannschmidt
Member Author

@nicoddemus the test is a known failure and shouldn't be executed at all

@RonnyPfannschmidt changed the base branch from master to features on June 15, 2018, 16:05
@nicoddemus
Member

LGTM; please just add a README.md to the folder explaining why it is there and the future plans for it.

After that feel free to merge it! 👍
