testing: extend regression tests to item analyzers #519
Conversation
Ooh I'm really looking forward to this :)
@@ -0,0 +1,42 @@
analysis:
I'm worried about the _1 and _2 config files drifting apart, leading to hard-to-understand failures in e2e tests.
One way to avoid it would be to derive them programmatically from some common template. But I think it would be relatively verbose to do it, and potentially awkward to maintain.
A more lightweight option is to add a step to the test itself that makes sure the two configs are both talking to the same cache, both expecting mainnet (vs testnet), etc. Something like this:
#!/bin/bash
set -euo pipefail
# Our e2e regression test runs the analyzer twice in close succession. This is a slightly hacky but
# simple way to ensure block analyzers run first, and non-block analyzers perform EVM queries always
# at the same height, thereby hitting the offline response cache.
#
# This script compares the key parameters of the two config files used in the two runs. If any of
# those parameters differ, it shows the diff and exits with an error.
# Elements of the config files that we'll compare
important_attrs='{"cache": .analysis.source.cache.cache_dir, "chain_name": .analysis.source.chain_name, "db": .analysis.storage.endpoint}'
# A YAML-to-JSON converter whose only dependencies are python and PyYAML, both likely preinstalled
alias yaml2json="python -c 'import sys,yaml,json; print(json.dumps(yaml.safe_load(str(sys.stdin.read()))))'"
# Compare
cat tests/e2e_regression/e2e_config_1.yml | yaml2json | jq "$important_attrs" > /tmp/e2e_config_1.summary
cat tests/e2e_regression/e2e_config_2.yml | yaml2json | jq "$important_attrs" > /tmp/e2e_config_2.summary
diff /tmp/e2e_config_1.summary /tmp/e2e_config_2.summary || { echo "The two config files for e2e tests differ in key parameters! See diff above."; exit 1; }
Then in the Makefile, at the beginning of the regression test target, we just need to call it:
@ensure_consistent_config.sh
The script is untested apart from the jq part. Please rethink `important_attrs` with a critical eye.
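For a quick sanity check of the jq part, the `important_attrs` filter from the script above can be applied to a hand-written JSON blob shaped like the analyzer config (the sample values here are made up, not taken from the real config files):

```shell
#!/bin/bash
set -euo pipefail
# The filter below is copied from the suggested script; the input JSON is a
# made-up sample shaped like the analyzer config, not a real config file.
important_attrs='{"cache": .analysis.source.cache.cache_dir, "chain_name": .analysis.source.chain_name, "db": .analysis.storage.endpoint}'
echo '{"analysis": {"source": {"cache": {"cache_dir": "/tmp/rpc-cache"}, "chain_name": "mainnet"}, "storage": {"endpoint": "postgres://localhost/nexus"}}}' \
  | jq "$important_attrs"
```

If the filter prints `null` for any key, the corresponding path in `important_attrs` no longer matches the config layout, which is exactly the kind of drift the check is meant to catch.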
got it, thanks for the example! Added with a couple of minor tweaks:
- added `shopt -s expand_aliases` to make the alias work in a script
- changed `python` -> `python3` to be clearer
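A minimal demonstration of why that shopt is needed (a toy alias, not project code): bash ignores alias definitions in non-interactive scripts unless `expand_aliases` is set.

```shell
#!/bin/bash
set -euo pipefail
# Toy example, not project code: without the shopt below, bash would not
# expand the alias in a non-interactive script and 'greet' would fail
# with "command not found".
shopt -s expand_aliases
alias greet='echo hello'
greet
```

Note that the alias must be defined on a line before its first use; bash expands aliases as each line is parsed, so a same-line definition would not take effect.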
Task
From #410 (comment):
Our non-block-based analyzers also have a fair bit of complexity in them, and it would be good to test them for e2e regressions also; currently, we only test the block-based analyzers.
We can probably fit all analyzers except the git-based `metadata_registry` into our existing regression framework. I think what we want to do is have a way to run all the block analyzers to the end, then run all the `evm_tokens_*` and `evm_token_balances_*` analyzers and the `aggregate_stats` analyzer. (That way, they'll query the node at a predictable height, so they can always hit the cache.) The first part is easy; for the second part, we don't have a way yet of telling the analyzers "exit as soon as there is no more work for you"; add it in the scope of this work.

The second part would ideally happen _after_, because it will be cleaner to make this change once in a framework, rather than copy-pasted across 5 analyzers.
This PR
Tests the functionality of the `evm_tokens`, `evm_token_balances`, and `evm_contracts` analyzers.