Acceptability of requiring external 'mock' package for python2 testing? #2056
Comments
@billsacks, would it be possible to just include it with cime for now (like @jedwards4b did for six)?
Add unit tests of compare_two's handling of pass/fail in comparison

Previously, I had figured that it was sufficient to ensure that _component_compare_test was actually being called, figuring that the tests of _component_compare_test belong elsewhere. But, since this is such a critical aspect of the system_test_compare_two infrastructure, I'm adding some tests covering _component_compare_test here.

These new tests cover the logic related to in-test comparisons in system_tests_compare_two and system_tests_common. However, they will NOT cover the logic in hist_utils: I'm stubbing out the actual call into hist_utils, under the assumption that this is - or should be - covered by other tests. But I'm not sure if hist_utils is actually covered sufficiently by unit tests. If it isn't, it should be.

This also required some minor refactoring of system_tests_common in order to allow stubbing out the call into hist_utils. (This refactoring would not have been needed if we allowed use of the mock module: see #2056.)

Test suite: scripts_regression_tests on yellowstone. Passes other than the issues documented in #2057 and #2059. Also ensured that comparison failures are still reported correctly by running a REP test that forces a comparison failure (due to missing cprnc).
Test baseline: n/a
Test namelist changes: none
Test status: bit for bit
Fixes: Partially addresses #1640
User interface changes?: none
Update gh-pages html (Y/N)?: N
Code review:
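The refactoring mentioned in that commit message is the kind of hand-rolled stubbing that mock would make unnecessary. Below is a rough sketch of the pattern, assuming the comparison call can be isolated behind a small overridable method; all class and method names here are hypothetical, not CIME's actual ones.

```python
# Hypothetical sketch: isolate the hist_utils call behind an overridable hook
# so a test double can force the comparison result without the mock package.
class SystemTestsCommonSketch(object):
    def _compare_hook(self, suffix1, suffix2):
        """In real code this would call into hist_utils; names here are made up."""
        raise NotImplementedError("real comparison not wired up in this sketch")

    def do_comparison(self, suffix1, suffix2):
        # Code under test: turn the comparison result into a PASS/FAIL status.
        success = self._compare_hook(suffix1, suffix2)
        return "PASS" if success else "FAIL"


class FakeSystemTest(SystemTestsCommonSketch):
    """Hand-rolled test double: forces the comparison result, no mock needed."""
    def __init__(self, forced_result):
        self._forced_result = forced_result

    def _compare_hook(self, suffix1, suffix2):
        return self._forced_result
```

A unit test can then instantiate `FakeSystemTest(forced_result=False)` and assert that `do_comparison` reports "FAIL", without ever touching hist_utils.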
@jgfouca it looks like that could be possible, but I'm not sure it's a good idea to go too far down that path: six is designed for that use case, but I think things are going to get messy if we start bringing in too many packages this way. That said, I'm open to the idea if others prefer it. I'm not positive this is possible, though: there's a bunch of other stuff in the repo https://github.com/testing-cabal/mock and, while it looks like the only essential thing may be https://github.com/testing-cabal/mock/blob/master/mock/mock.py, I can't tell for sure, because I'm not very familiar with what gets pulled in with python packages.
@billsacks and @jgfouca everything needed in the mock package is contained in the https://github.com/testing-cabal/mock/blob/master/mock directory. However, it could be pulled into CIME similar to how six was.
I would caution that doing that is pretty atypical for python dependency management, and will require manual updates of the package.
Thanks @jhkennedy
That was my worry, too. A big part of me would prefer to just not use mock until we have the go-ahead to bring in some limited number of dependencies outside the standard library.
We probably won't ever need to update the package if the current version works for us. I think this situation is also somewhat mitigated by the fact that python3 has mock in the standard library, so this (snapshotting mock into cime) isn't a permanent solution.
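For illustration, an import fallback along these lines is what that would allow; `CIME.externals.mock` is a hypothetical location for a snapshotted copy, not an existing module.

```python
# Hypothetical import fallback: prefer the standard library, then a
# user-installed package, then (if it existed) a copy snapshotted into CIME,
# similar to how six is handled.
try:
    from unittest import mock              # python >= 3.3: in the standard library
except ImportError:
    try:
        import mock                        # python 2 with 'pip install mock'
    except ImportError:
        from CIME.externals import mock    # hypothetical vendored copy
```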
Okay, good points @jgfouca. I'll consider this approach the next time I want to use the functionality of mock. I'll go ahead and close this issue for now.
I repeatedly find myself wanting to stub/mock out some function when writing unit tests for the cime python code. For example, if I'm testing method `foo` which calls `bar`, where `bar` is a really ugly mess in a unit testing context, I'd like to replace `bar` with a stub version for these unit tests (e.g., forcing `bar` to simply return a `True` or `False` value as needed in this test).

A specific example is: I'd like to add some unit tests of the system_test_compare_two infrastructure that verify that the COMPARE result is set correctly, depending on the result of `hist_utils.py: compare_test`. Actually having `compare_test` compare netcdf files gets us outside of the unit testing realm (that's the job of unit tests of the `hist_utils.py` module). So for these tests, I'd like to use a stub version of `compare_test` which I can force to return whatever value I want, to make sure that the code under test (`system_tests_compare_two` and `system_tests_common`) responds appropriately.

The best way I know of doing this is with the `mock` package. This is part of the standard library starting in python 3.3 (https://docs.python.org/3.3/library/unittest.mock.html), but needs to be installed separately (e.g., `pip install mock`: https://pypi.python.org/pypi/mock) for earlier versions of python, including python 2.7.

Do people feel it's acceptable to add a dependency on this mock package? This would require installation of mock by anyone wanting to run scripts_regression_tests with python2 (or one system-wide installation), but this would not add any dependencies for a standard user of cime.
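To make the request concrete, here is a minimal, self-contained sketch of the pattern; `compare_test` and `run_comparison` below are hypothetical stand-ins for `hist_utils.compare_test` and the compare_two logic, not the actual CIME functions or signatures.

```python
# Minimal sketch (hypothetical names) of stubbing out a comparison with mock.patch,
# so the unit test exercises only the pass/fail handling in the code under test.
import unittest

try:
    from unittest import mock   # python >= 3.3
except ImportError:
    import mock                 # python 2: external 'mock' package


def compare_test(case1, case2):
    """Stand-in for the expensive netcdf comparison we do NOT want in a unit test."""
    raise RuntimeError("real comparison should not run here")


def run_comparison(case1, case2):
    """Stand-in for the code under test: turns the comparison result into PASS/FAIL."""
    success, _comments = compare_test(case1, case2)
    return "PASS" if success else "FAIL"


class TestCompareHandling(unittest.TestCase):
    def test_failure_is_reported(self):
        # Force the stubbed comparison to report failure and check that the
        # code under test turns that into a FAIL status.
        with mock.patch(__name__ + ".compare_test",
                        return_value=(False, "fields differ")):
            self.assertEqual(run_comparison("case1", "case2"), "FAIL")


if __name__ == "__main__":
    unittest.main()
```

The same test can be repeated with `return_value=(True, "")` to check the PASS path, all without ever running a real netcdf comparison.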