Tests should run with or without the data from vcr's fixtures #123
Obviously we should fix the failing tests. The second part of the issue is that the fixtures are masking problems with the tests and plugins. I thought about adding an option to the Makefile to clear out the fixtures, but the more I thought about it... should they be committed at all? If we don't commit the output from vcr (and we add it to .gitignore instead), TravisCI will test with real APIs each run.
The fixtures are there to avoid testing with real APIs on each run; ideally a test run would not hit the network at all. (Right now I think a couple of the tests do, and I've been too lazy to fix them.)
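For context, here is a minimal sketch of the record/replay cycle that vcr (vcrpy) provides; the URL and cassette path are illustrative assumptions, not paths from this project:

```python
# A minimal sketch of vcr's record/replay behavior.
# The URL and cassette path below are hypothetical.
import vcr
import requests

def fetch_message():
    # On the first run the cassette file doesn't exist, so the real
    # HTTP request goes out and the response is recorded to the YAML
    # file. On later runs vcr replays the recorded response, so the
    # test never touches the network.
    with vcr.use_cassette('test/fixtures/example.yaml'):
        return requests.get('https://api.example.com/message').text
```

This is also why deleting the fixtures (as in the reproduction steps below) forces the tests back onto the live APIs.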
Having each test run NOT hit the real APIs is awesome and a great feature for this project. I didn't realize until now that some of the tests were written to check specifically against the output that the fixture happened to save. Take the commit plugin test, for example.
I feel like that's selling the tests short! There's nothing wrong with the plugin, the API, or the test... the test should pass. If it doesn't, that means we can never use our plugin tests to check against changes to external APIs. Issues like #122 may stay hidden forever. Could a good balance be...
If yes, I'll make the necessary modifications to the current tests.
Let's take the commit plugin test as an example; if we don't use a fixture to specify the return value of the commit API, what can we test? Just that the website returned a 200? How do we know that the plugin worked correctly?
Great discussion question! I'd do this:

```diff
diff --git a/test/test_plugins/test_commit.py b/test/test_plugins/test_commit.py
index 9e44733..a5e725f 100644
--- a/test/test_plugins/test_commit.py
+++ b/test/test_plugins/test_commit.py
@@ -2,6 +2,7 @@
 import os
 import sys
+import six
 import vcr
 
 DIR = os.path.dirname(os.path.realpath(__file__))
@@ -12,4 +13,5 @@ from commit import on_message
 def test_commit():
     with vcr.use_cassette('test/fixtures/commit.yaml'):
         ret = on_message({"text": u"!commit"}, None)
-        assert 'stuff' in ret
+        assert isinstance(ret, six.string_types), ret
+        assert len(ret) > 0
```

That code answers the questions you raised.
To ask the question differently, what is the expected path to find issues like #122? How do we know when an API has changed?
OK, that's a good question, and one I'd thought about but not deeply enough yet. I think the right answer is to have a separate "network" test suite. So, if you run `make test-network`, the same tests run but ignore the vcr fixtures and hit the real APIs. I'm currently not convinced that travis should run the network tests, just due to the fact that they'll be much slower than the non-network tests. How does this solution strike you?
That solution sounds great! I'll work on it when I get the chance.
Referenced commit: Tests should run with or without the data from vcr's fixtures

Normally, when tests are run, we use vcr fixtures for any external API requests that our test calls make. This is great! It lets us run our tests quickly and deterministically, without worrying about the slowness or availability of external services. However, this means our tests can't alert us to external APIs changing. If an API changes, our vcr fixtures would continue to provide us with old API data, giving us false positives that our plugins were working.

`make test-network` lets us run the same tests but ignores the vcr fixtures. This means that tests will hit the external network. It uses an environment variable, set by the Makefile, to alert our test utils whether or not to use vcr for a given run. Closes llimllib#123
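A hedged sketch of what such a test util might look like; the `TEST_NETWORK` variable name and the `use_cassette_or_network` helper are assumptions for illustration, not the project's actual names:

```python
# Hypothetical test helper: replay vcr cassettes by default, but skip
# them entirely when the Makefile sets TEST_NETWORK (for example,
# `make test-network` could invoke the test runner with TEST_NETWORK=1).
import os
from contextlib import contextmanager

import vcr

@contextmanager
def use_cassette_or_network(path):
    if os.environ.get('TEST_NETWORK'):
        # No cassette: requests from the test hit the live API, so a
        # changed or broken external API makes the test fail loudly.
        yield
    else:
        # Normal run: replay the recorded fixture for speed and
        # determinism.
        with vcr.use_cassette(path):
            yield
```

A test would then write `with use_cassette_or_network('test/fixtures/commit.yaml'):` instead of calling `vcr.use_cassette` directly, and both `make test` and `make test-network` could run the same test bodies.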
vcr seems like a great tool for rapid and offline testing! However, it seems to be masking changes to the underlying APIs that the plugins depend on. If I clear out its fixtures, many tests start failing for me. Here are my test results: (my stockphoto.py isn't failing because of #122)
Steps to reproduce:
1. `make test` ✅ tests pass!
2. `rm test/fixtures/*.yaml`
3. `make test` ❌ observe many failing tests