Waiting without pytest and without starting the server #251

Open

palotasb opened this issue May 22, 2023 · 6 comments

@palotasb

Hi Zsolt!

As someone just getting started with this library, I ran into this issue:

import unittest
import requests
from pytest_httpserver import HTTPServer

class TestMyStuff(unittest.TestCase):
    def test_1(self):
        server = HTTPServer()
        server.expect_oneshot_request("/foobar").respond_with_json({"foo": "bar"})
        with server.wait(stop_on_nohandler=True, raise_assertions=True, timeout=1.0) as waiting:
              requests.get(server.url_for("/foobar"))  # raises requests.exceptions.ConnectionError
        assert waiting.result

I figured out the issue was with my code: the server needs to be started, and since my project doesn't use pytest, that has to be done manually with the context manager.

          server.expect_oneshot_request("/foobar").respond_with_json({"foo": "bar"})
-         with server.wait(stop_on_nohandler=True, raise_assertions=True, timeout=1.0) as waiting:
-               requests.get(server.url_for("/foobar"))  # raises requests.exceptions.ConnectionError
+         with server:
+             with server.wait(stop_on_nohandler=True, raise_assertions=True, timeout=1.0) as waiting:
+                   requests.get(server.url_for("/foobar"))  # OK now

or more briefly:

          server.expect_oneshot_request("/foobar").respond_with_json({"foo": "bar"})
-         with server.wait(stop_on_nohandler=True, raise_assertions=True, timeout=1.0) as waiting:
+         with server, server.wait(stop_on_nohandler=True, raise_assertions=True, timeout=1.0) as waiting:
-               requests.get(server.url_for("/foobar"))  # raises requests.exceptions.ConnectionError
+               requests.get(server.url_for("/foobar"))  # OK now

I first thought I'd just let you know that I got confused about this part of the API.

Then I was wondering, what would you think about the .wait() context manager also doing the work of the HTTPServer context manager if the server isn't started yet? That would make the original code work, but I'm not sure about potential downsides. Maybe there are situations where you want to start .wait()-ing before you start a server?

@csernazs
Owner

Hi Boldizsár,

At first sight I think doing server start/stop from the wait() method would be a layering violation, as wait() operates on how requests are handled, while starting and stopping manages the server itself.

In pytest we also don't stop the server after each test, as that causes some time penalty (0.1 sec or so); instead of stopping/starting, we clear the server state. Because of this, it would be complicated to implement the stopping logic in the wait() method.

I think there are still a few options with the unittest code you have:

  1. you can run the unittest code with pytest (although I have no experience with how the fixture works in that case)
  2. you can put the server start into setUp and the stop into tearDown, so it will be started and stopped around each test. You can store the httpserver instance in an attribute in that case.
  3. you can put the server start and stop into setUpClass and tearDownClass (which are classmethods), so the server will be started once for the class and will survive the tests. In that case you need to clean up the server state between the tests by calling server.clear(), which ensures there's no cross-talk (it removes the handlers, etc). A sketch of this follows below the list.
  4. if there's no reason to use the wait method, you don't have to. If the code you test finishes on its own, you can still examine the log attribute of the server object for the requests (and raise an error if there were none).
  5. you can create the httpserver as a module-level variable, use it for the whole process, and let the test process do the cleanup (this is not the nicest one, but it would still work). In that case you would just need to call the clear() method described in item 3 between the tests.
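
A rough sketch of option 3 (untested, just to show the wiring, using only the start(), stop(), clear() and expect_request() calls that already appear in this thread):

import unittest

import requests
from pytest_httpserver import HTTPServer


class TestMyStuff(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # one server instance for the whole test class
        cls.server = HTTPServer()
        cls.server.start()

    @classmethod
    def tearDownClass(cls):
        cls.server.stop()

    def tearDown(self):
        # remove handlers and recorded requests so tests don't cross-talk
        self.server.clear()

    def test_foo(self):
        self.server.expect_request("/foo").respond_with_data("OK")
        self.assertEqual(requests.get(self.server.url_for("/foo")).text, "OK")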

What do you think, would any of these work for you?

@palotasb
Author

put the server start into setUp and the stop into tearDown, so it will be started and stopped around each test. You can store the httpserver instance in an attribute in that case.

This is what I actually have currently, but maybe I'll switch to the classmethods and .clear()-ing to speed things up a bit. I agree that making the server a module-level object wouldn't be very nice, so I think I'll avoid that.

I hadn't considered not wait()-ing at all and looking at the .log instead, but that might make the test methods even simpler, so I'll give it a shot. Actually, we have a few places where our client code is supposed to make no requests at all, so we might need to inspect the log anyway.
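
For the no-requests case, a minimal check could just assert on the log length (assuming a self.server attribute set up in setUp as above; run_silent_client() is a hypothetical stand-in for our client code):

    def test_client_makes_no_requests(self):
        # run_silent_client() is a hypothetical placeholder for the client code under test,
        # which is expected not to call the server at all
        run_silent_client(self.server.url_for("/"))
        self.assertEqual(len(self.server.log), 0)  # no requests were recorded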

I think I mostly wanted to .wait() because of the stop_on_nohandler=True, raise_assertions=True options, i.e., turning unexpected requests into exceptions.

We do run the unittest test code via pytest (and tox) but as far as I know we can't use pytest test fixtures in this case.

@csernazs
Owner

I found that there are also setUpModule and tearDownModule, for module-level setup and teardown.
This would allow you to define more classes in the module, all sharing the httpserver.

Example:

import unittest
from pytest_httpserver import HTTPServer
import requests

httpserver = HTTPServer()


def setUpModule():
    httpserver.start()


def tearDownModule():
    httpserver.stop()


class TestMyStuff(unittest.TestCase):
    def test_foo(self):
        httpserver.expect_request("/foo").respond_with_data("OK")
        self.assertEqual(requests.get(httpserver.url_for("/foo")).text, "OK")

    def test_bar(self):
        # no cross-talk
        self.assertNotEqual(requests.get(httpserver.url_for("/foo")).text, "OK")

    def tearDown(self):
        httpserver.clear()

There's some limited support for fixtures: https://docs.pytest.org/en/7.1.x/how-to/unittest.html#mixing-pytest-fixtures-into-unittest-testcase-subclasses-using-marks
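
A rough sketch of that approach, following the pattern from the linked page (it only works when the tests are run through pytest; the httpserver argument is the fixture provided by the plugin):

import unittest

import pytest
import requests


class TestWithFixture(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def _inject_httpserver(self, httpserver):
        # expose the plugin-provided httpserver fixture as an attribute
        self.httpserver = httpserver

    def test_foo(self):
        self.httpserver.expect_request("/foo").respond_with_data("OK")
        self.assertEqual(requests.get(self.httpserver.url_for("/foo")).text, "OK")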

But I think using setUp and tearDown (and the module- and class-level variants) is the idiomatic way to use the unittest module.

@palotasb
Author

For some reason, with this code and no server.wait(), tox/pytest/unittest hangs indefinitely after the tests have completed (the process doesn't quit).

# test_pytest_httpserver.py
# python -m unittest test_pytest_httpserver

import unittest
import requests
from pytest_httpserver import HTTPServer

class TestMyStuff(unittest.TestCase):
    def setUp(self):
        self.server = HTTPServer()
        self.server.start()

    def tearDown(self):
        self.server.check()
        self.server.stop()

    def test_the_test_server_setup(self):
        self.server.expect_oneshot_request("/foobar").respond_with_json({"foo": "bar"})
        requests.get(self.server.url_for("/UNEXPECTED"))
        self.assertEqual(len(self.server.log), 1)
$ python -m unittest test_pytest_httpserver
127.0.0.1 - - [23/May/2023 11:10:53] "GET /UNEXPECTED HTTP/1.1" 500 -
F
======================================================================
FAIL: test_the_test_server_setup (test_pytest_httpserver.TestMyStuff)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/example/test_pytest_httpserver.py", line 11, in tearDown
    self.server.check()
  File "/Users/example/venv/venv-pytest-httpserver/lib/python3.8/site-packages/pytest_httpserver/httpserver.py", line 776, in check
    self.check_assertions()
  File "/Users/example/venv/venv-pytest-httpserver/lib/python3.8/site-packages/pytest_httpserver/httpserver.py", line 795, in check_assertions
    raise AssertionError(assertion)
AssertionError: No handler found for request <Request 'http://localhost:53222/UNEXPECTED' [GET]>.
Ordered matchers:
    none

Oneshot matchers:
    <RequestMatcher uri='/foobar' method='__ALL' query_string=None headers={} data=None json=<UNDEFINED>>

Persistent matchers:
    none

----------------------------------------------------------------------
Ran 1 test in 0.017s

FAILED (failures=1)
^CException ignored in: <module 'threading' from '/opt/homebrew/Caskroom/miniconda/base/lib/python3.8/threading.py'>
Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.8/threading.py", line 1388, in _shutdown
    lock.acquire()
KeyboardInterrupt:

Is there something I'm doing wrong here? Note that after the line FAILED (failures=1) the process hung and I hit ^C to quit.

@palotasb
Author

Now I remember better: this is why I started using .wait() in the first place, to avoid the process hanging in some test cases (even failing ones). Maybe I can wrap each test case in a .wait() automatically by creating a waiter in setUp() and finishing it in tearDown().

@palotasb
Author

palotasb commented May 23, 2023

This is the current best setup I have:

# test_pytest_httpserver.py
# python -m unittest test_pytest_httpserver

import contextlib
import unittest
import requests
from pytest_httpserver import HTTPServer

class TestMyStuff(unittest.TestCase):
    def setUp(self):
        self.server = HTTPServer()
        self._exit_stack = contextlib.ExitStack()
        self.server.start()
        self._exit_stack.callback(self.server.stop)
        self._exit_stack.callback(self.server.check)
        waiter = self.server.wait()
        self._exit_stack.enter_context(waiter)

    def tearDown(self):
        self._exit_stack.close()

    def test_the_test_server_setup(self):
        self.server.expect_oneshot_request("/foobar").respond_with_json({"foo": "bar"})
        response = requests.get(self.server.url_for("/foobar"))
        response.raise_for_status()
        self.assertDictEqual(response.json(), {"foo": "bar"})

    @unittest.expectedFailure
    def test_the_test_server_setup_for_failed_test_cases(self):
        self.server.expect_oneshot_request("/foobar").respond_with_json({"foo": "bar"})
        requests.get(self.server.url_for("/UNEXPECTED"))

        # We need this call, otherwise the @unittest.expectedFailure won't have any effect
        # on the exception coming from the tearDown() after this test case
        self._exit_stack.close()

The actual test case code is very minimal: only server.expect_...(...).respond_...(...) and requests.request(...) calls. If I expect one-shot requests and they don't happen, the test case still fails without needing to assert on the length of the server log. The test case doesn't hang.
