Verify post-test cleanup for model mapping unit tests; fix a few errors #12714
This PR fixes a few errors in the Galaxy (main) and Tool Shed (TS) model mapping tests, and adds a check that verifies such errors do not happen again.
Model mapping tests (`test/unit/shed_unit/model/test_mapping` and `test/unit/data/model/test_mapping`) verify the correctness of the mapping specified in the model classes (more details in the "Tests" section of the description of #12064). They use an in-memory sqlite database as a test double. The database is created once for each module (there are 2 modules: one for Galaxy, one for TS). Each test receives an empty database (with the tables created, but empty); it persists some objects, then cleans up by removing anything it has added to the database.

However, it appears that in a few cases the cleanup was not handled properly (discovered thanks to a discussion with @mvdbeek in #12666 (review)). This PR fixes those errors and adds a fixture that auto-runs before each test and any unscoped fixtures (except `session` and `model`, on which it depends) and verifies that all model tables in the database are empty. Thus, it ensures that a test is not affected by data left over from a previous test run.
Limitations and tradeoffs
- One exception is the `HistoryAudit` and `JobStateHistory` models. These items are persisted indirectly (one is created by a database trigger, the other by a `Job` upon Job instantiation). As a result, a test that ends up adding either one to the database has no way of knowing which `HistoryAudit` and/or `JobStateHistory` rows have been created (cleanup is handled by primary key: a test adding a Foo row with pkey=n should delete only that Foo row with pkey=n on exit, not all Foo rows; see the sketch after this list). However, these two classes do not cause any test overlap issues, so I think this is a non-issue.
- The verification executes `SELECT count(*) FROM Foo` (once per mapped model `Foo`, before each test), i.e. `N` times in total, where `N = |models under test| * |tests| = 153 * 443 = 67,779`. This increases the runtime roughly 3.5x (on my desktop, 4 sec. becomes 14 sec.). It's annoying, but negligible, I think. EDIT: the added test execution time is sufficiently annoying (even on my relatively fast desktop) that I'm wondering whether this assertion is too high a price to pay: it only asserts that the tests are set up correctly. If anyone feels the same, we could (a) remove `autouse=True` (so it won't run by default) and add a comment that it can be enabled for test debugging if needed; or (b) remove it completely and leave only the fixes.
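To make the cleanup-by-primary-key convention concrete, here is a minimal sketch of a delete-by-primary-key helper. The `persisted` helper is hypothetical (the mapping tests may structure their cleanup differently), and it assumes SQLAlchemy 1.4+ for `Session.get`.

```python
from contextlib import contextmanager


@contextmanager
def persisted(session, obj):
    """Hypothetical helper illustrating delete-by-primary-key cleanup;
    not the exact code used by the mapping tests."""
    session.add(obj)
    session.flush()  # assigns the primary key
    obj_id = obj.id
    try:
        yield obj
    finally:
        # Delete only the row this test created (identified by its primary key),
        # leaving any other rows in the same table untouched.
        row = session.get(type(obj), obj_id)
        if row is not None:
            session.delete(row)
        session.flush()
```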