There are several common tests that we rewrite for several different sagas, e.g.:

- test that the saga runs correctly
- test that the saga unwinds correctly from each possible error injection point
- test that the saga runs correctly if an arbitrary node is repeated
- test that the saga unwinds correctly if arbitrary undo nodes are repeated
An important lesson from #3265 and #3894 is that these tests, especially the second kind, are hard to write correctly: it's easy to get the saga execution boilerplate wrong so that the test passes but not all the undo actions are properly tested. It's also easy to forget to regenerate saga parameters that may have been changed by a prior attempt to execute the saga.
Instead of rewriting these tests by hand each time we write a new saga, we should write a set of templated harnesses that let the caller specify how to produce a DAG for each execution of the saga and what (if any) actions should be taken after each execution. Then the harness does the work of executing the saga, injecting retries/undo steps, checking for the correct errors, and the like, calling the caller-supplied closures on either side to ensure the correct parameters are generated for each test run.
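A minimal, self-contained sketch of what such a harness could look like. The type and function names here are illustrative, not the real steno/omicron API: the harness injects an error at each node in turn and centrally verifies that the failure landed at the injection point and that every earlier action was undone in reverse order.

```rust
struct SagaResult {
    failed_at: Option<usize>,
    undone: Vec<usize>,
}

/// Toy saga executor: "runs" `num_nodes` actions in order, failing at
/// `inject_at` and unwinding the completed actions in reverse order.
fn execute_with_injected_error(num_nodes: usize, inject_at: usize) -> SagaResult {
    let mut done = Vec::new();
    for node in 0..num_nodes {
        if node == inject_at {
            return SagaResult {
                failed_at: Some(node),
                undone: done.into_iter().rev().collect(),
            };
        }
        done.push(node);
    }
    SagaResult { failed_at: None, undone: Vec::new() }
}

/// The harness: one execution per possible injection point, with the error
/// location and unwind order checked in one place rather than per-test.
fn run_undo_harness(num_nodes: usize) {
    for inject_at in 0..num_nodes {
        let result = execute_with_injected_error(num_nodes, inject_at);
        assert_eq!(result.failed_at, Some(inject_at), "failed at wrong node");
        let expected: Vec<usize> = (0..inject_at).rev().collect();
        assert_eq!(result.undone, expected, "not all actions were undone");
    }
}

fn main() {
    run_undo_harness(4);
}
```

Because the error-location and unwind checks live in the harness, a test can only pass if every undo action before the injection point actually ran, which is exactly the coverage gap this issue describes.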
Several of our saga undo tests have previously been found to have
defects in which the test verifies that all attempts to execute a saga
fail but doesn't verify that saga executions fail at the expected
failure point. This loses test coverage, since the tests don't actually
execute all of the undo actions of the saga under test.
To try to prevent this problem, add a set of saga test helpers that
provide common logic for writing undo tests. The scaffold functions
handle the business of constructing a DAG, deciding where to inject an
error, and verifying that errors occur in the right places. The callers
provide a saga type and factory functions that run before and after each
saga execution to set up state, provide saga parameters, check
invariants, and clean up after each test iteration.
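A sketch of the caller-side shape this implies; the scaffold and parameter names are hypothetical. The key point is that the `before` closure regenerates saga parameters on every iteration, so state mutated by a prior attempt cannot leak into the next one:

```rust
use std::collections::HashSet;

struct SagaParams {
    disk_name: String,
}

/// Toy scaffold: one iteration per injection point, calling `before` to
/// build fresh parameters and `after` to check invariants and clean up.
fn undo_scaffold<B, A>(num_nodes: usize, mut before: B, mut after: A)
where
    B: FnMut(usize) -> SagaParams,
    A: FnMut(&SagaParams),
{
    for inject_at in 0..num_nodes {
        let params = before(inject_at);
        // ... execute the saga with an error injected at `inject_at` ...
        after(&params);
    }
}

fn main() {
    let mut names = HashSet::new();
    undo_scaffold(
        3,
        |i| SagaParams { disk_name: format!("disk-{i}") }, // fresh every run
        |p| {
            // Invariant check: each iteration really used fresh parameters.
            assert!(names.insert(p.disk_name.clone()));
        },
    );
    assert_eq!(names.len(), 3);
}
```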
Convert existing tests of these types to use the new scaffolds. This
revealed that the no-pantry version of the disk snapshot test has a
similar bug: the saga requires the disk being snapshotted to be attached
to an instance, but the test wasn't creating an instance, so the saga
never got past its "look up the instance that owns the disk" step. Fix
this issue.
Add an additional scaffold that repeats nodes in a successful saga. This
case is less fragile than the undo case, but this gets rid of some
copy-pasted code.
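The repeat-node scaffold can be sketched the same way (again with illustrative names, not the real API): the saga is executed once per node with that node run twice, and because saga actions must be idempotent, the final state should match a normal run.

```rust
/// An idempotent toy action: records `name` at most once.
fn record(state: &mut Vec<String>, name: &str) {
    if !state.iter().any(|s| s == name) {
        state.push(name.to_string());
    }
}

/// Runs every node in order, executing the node at `repeat` twice if given.
fn execute(nodes: &[fn(&mut Vec<String>)], repeat: Option<usize>) -> Vec<String> {
    let mut state = Vec::new();
    for (i, action) in nodes.iter().enumerate() {
        action(&mut state);
        if repeat == Some(i) {
            action(&mut state); // the injected repeat
        }
    }
    state
}

/// The scaffold: repeat each node in turn and compare against a normal run.
fn run_repeat_scaffold(nodes: &[fn(&mut Vec<String>)]) {
    let baseline = execute(nodes, None);
    for repeat_at in 0..nodes.len() {
        assert_eq!(
            execute(nodes, Some(repeat_at)),
            baseline,
            "node {repeat_at} is not idempotent"
        );
    }
}

fn main() {
    let nodes: &[fn(&mut Vec<String>)] = &[
        |s| record(s, "lookup"),
        |s| record(s, "allocate"),
        |s| record(s, "finish"),
    ];
    run_repeat_scaffold(nodes);
}
```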
Finally, coalesce some common instance operations (start, stop, delete,
simulate) that were used by multiple saga tests into the test helpers.
This is a test-only change, verified both by running `cargo test` and
confirming there are no errors, and by introducing bugs into some sagas
and verifying that tests using the scaffolds catch those bugs.
Fixes #3896.