Automated VCR re-recording #75
Conversation
Nice, LGTM 👍
This pull request is not mergeable. Please rebase and repush.
LGTM pending green tests
@cben looking e.g. at the ContainerReplicator.count failure:

(byebug) pp ContainerReplicator.pluck(:ems_ref, :name)
[["7b36741b-dd83-11e7-8319-169d0f7fea7d", "docker-registry-1"],
["7beb37af-dd83-11e7-8319-169d0f7fea7d", "router-1"],
["bf9b8058-dd84-11e7-8319-169d0f7fea7d", "my-replicationcontroller-2"],
["ea539a31-dd83-11e7-8319-169d0f7fea7d", "hawkular-cassandra-1"],
["e518dc66-dd83-11e7-8319-169d0f7fea7d", "hawkular-metrics"],
["e61f1832-dd83-11e7-8319-169d0f7fea7d", "heapster"]] this looks correct, but spec thinks we deleted 2, so there should be 3 total. Is it possible the env. is either not deterministic, or there are more changes happening? because
would pass, it's just something/somebody created a plenty more ContainerReplicator records in the meantime :-) I expect that is the reason why other counts do not match. |
- compare together to get all differences printed together
- express counts after deletions as subtractions
Prevents ContainerProject.count "expected: 10 got: 11" failure.
Seems to avoid an error that I saw in most runs with 3.7.0 (minishift):

Error from server (Forbidden): persistentvolumeclaims "my-persistentvolumeclaim-1" is forbidden: status unknown for quota: my-resource-quota-1

I guess if the quota is created last, it's not really enforced, but we don't care.
Env created with minishift v1.9.0:

minishift start --vm-driver virtualbox --openshift-version v3.6.1 --metrics --memory 5G

Then waited for metrics to come up, as confirmed by:

oc get pods -n openshift-infra
curl --insecure "https://$(oc get route -n openshift-infra hawkular-metrics --template '{{.spec.host}}')"
PTAL. Still getting a small count difference between recordings, but it's reasonably stable and I want to move on. Added text files which will make "what changed between recordings" much easier to track with future changes...
expect(ContainerProject.count).to eq(object_counts['ContainerProject'])
expect(ContainerProject.active.count).to eq(object_counts['ContainerProject'] - 1)
pending("why graph refresh DELETES 1 image from DB? why old refresh archives 2?")
👍
token = 'theToken'
# env vars for easier VCR recording, see test_objects_record.sh
hostname = ENV["API_HOST"] || "host.example.com"
token = ENV["API_TOKEN"] || "theToken"
We have some automation around cluster creation and there these are named:
OPENSHIFT_MASTER_HOST
OPENSHIFT_MANAGEMENT_ADMIN_TOKEN
It could prove useful to have the same names, up to you.
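A minimal sketch of what honoring both naming schemes could look like in the script. The fallback logic itself is an assumption, not something this PR implements; OPENSHIFT_MASTER_HOST and OPENSHIFT_MANAGEMENT_ADMIN_TOKEN are the names suggested in this comment:

```bash
# Sketch only: prefer the names already used by the spec, fall back to the
# cluster-automation names suggested in this review comment.
export API_HOST="${API_HOST:-$OPENSHIFT_MASTER_HOST}"
export API_TOKEN="${API_TOKEN:-$OPENSHIFT_MANAGEMENT_ADMIN_TOKEN}"
```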
#!/bin/bash

set -e # abort on error
Since this isn't using functions, can we have section headers for readability? E.g.:
#
# Ensure API Server
#
Added an echo; I'm using these as both headings in code and headings in script output.
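Roughly the pattern being described, with made-up section names (these are illustrative, not the actual headings from test_objects_record.sh):

```bash
echo '== Ensure API server =='    # reads as a heading in the source and in the script output
# ...commands for this section...

echo '== Create test objects =='
# ...commands for this section...
```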
echo 'or have minishift in $PATH and already running, e.g.'
echo ' minishift addons enable manageiq'
echo ' minishift start --vm-driver virtualbox --openshift-version v3.7.0'
exit 1
You could add a link to this page explaining deployment options:
http://manageiq.org/docs/guides/providers/openshift
if which minishift && minishift status | grep -i 'openshift:.*running'; then
  export API_HOST="$(minishift ip)"
  eval $(minishift oc-env --shell bash) # Ensure oc in PATH
  oc login -u system:admin # With minishift, we know we can just do this
Where do we log in to a non-minishift cluster?
Or do we assume we are already logged in?
I'd say running on the OpenShift master is a good assumption (API_HOST=localhost).
For non-minishift BYO, the script expects you to already be logged in.
Running on the OpenShift master is inconvenient: you need manageiq checked out and working (at least the test DB) to run this script, and the script leaves updated files you'll want to git commit.
I should document how to oc adm policy add-cluster-role-to-user cluster-admin you, for remote login.
I've never actually tested the script in BYO mode. Let me try...
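A rough sketch of that remote-login flow. The user name "developer" and the master host are placeholders; only the add-cluster-role-to-user command itself comes from the comment above, the rest is assumed:

```bash
# On a session that is already logged in as system:admin (e.g. on the master):
oc adm policy add-cluster-role-to-user cluster-admin developer   # "developer" = your user

# Then, from the workstation where manageiq-providers-openshift is checked out:
oc login https://master.example.com:8443 -u developer   # host is a placeholder
export API_HOST=master.example.com
export API_TOKEN="$(oc whoami -t)"                       # token the spec can pick up
```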
VCR_DIR="$(dirname "$(realpath "$0")")"

cd "$(realpath "$0" | sed 's:\(manageiq-providers-openshift/\).*$:\1:')" # repo root
cd "$(git rev-parse --show-toplevel)"
  oc start-build my-build-config-$ind
done

while out="$(oc get build --all-namespaces)"; echo "$out"; [ "$(echo "$out" | egrep --count --word 'Complete|Failed|Error')" -ne 3 ]; do
Please upcase all the variables
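For reference, the upcased version of the loop above would look roughly like this (untested sketch, behavior unchanged):

```bash
while OUT="$(oc get build --all-namespaces)"; echo "$OUT"; [ "$(echo "$OUT" | egrep --count --word 'Complete|Failed|Error')" -ne 3 ]; do
  sleep 3
done
```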
Looks great, and it takes us a long way.
Right. However, it's very close to what we'd need for an actual frequently-run integration test, if we want that.
Checked commits cben/manageiq-providers-openshift@d80e778~...466a85c with ruby 2.3.3, rubocop 0.47.1, haml-lint 0.20.0, and yamllint 1.10.0
Oops, I was pushing to a different branch, so you didn't see the changes I was claiming, sorry :-(
I believe all comments are addressed (see recent commits). @miq-bot add-label gaprindashvili/yes
  sleep 3
done

for ind in 0 1 2; do
Upcase var please
done
rm -v "$VCR_DIR"/refresher_after_deletions.{yml,txt} || true
describe_vcr > "$VCR_DIR"/refresher_after_deletions.txt
env RECORD_VCR=after_deletions bundle exec rspec "$SPEC" || echo "^^ FAILURES ARE POSSIBLE, YOU'LL HAVE TO EDIT THE SPEC"
Are FAILURES expected due to the status issue above? Or something else?
Due to recordings not being 100% reproducible. It's quite stable now, but usually at least something is still off by one. "An ideal world is left as an exercise to the reader."
Additionally, if you switch OpenShift versions, or change the template deliberately, you'll get certain differences.
In any case, this script should not abort on test failure: it should let you get both recordings, and then you can edit and run the spec again and again...
Better wording welcome.
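The "don't abort" behavior falls out of the || echo ... already visible in the diff above: under set -e, a command whose failure is handled by || is not fatal. A standalone illustration, with false standing in for the rspec run:

```bash
#!/bin/bash
set -e   # abort on unhandled errors

# A bare failing command would abort the script at this point, but with
# `|| echo` the failure is handled and the script keeps going, so both
# VCR recordings still get written.
false || echo "^^ spec failed, continuing so you can fix counts and re-run"

echo "still running"
```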
ping @moolitayer anything else needed here?
LGTM 👍
Awesome 💪
Automated VCR re-recording (cherry picked from commit 6e8842e)
Gaprindashvili backport details:
Something I've long wanted: a script for hands-off VCR re-recording 🎉
Can use minishift, or (TODO: untested) bring your own OpenShift by setting the OPENSHIFT_MASTER_HOST env var.
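A hedged sketch of the two ways to run it. The script path and the master host are assumptions, and the exact env var name differs between the description (OPENSHIFT_MASTER_HOST) and the earlier diff snippets (API_HOST):

```bash
# Minishift mode: the script detects a running minishift, logs in, and re-records.
./test_objects_record.sh          # path relative to the script's directory is assumed

# Bring-your-own OpenShift (untested, per the description): log in first,
# then point the script at your master. Host below is a placeholder.
oc login https://master.example.com:8443
OPENSHIFT_MASTER_HOST=master.example.com ./test_objects_record.sh
```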
Steps I took for new VCR recording:
(... --metrics for v3.7.0 doesn't work yet.)

Issues I didn't fix
We already had a not-fully-explained off-by-1 there.
I would like to merge this as a pending test and debug it later (will open an issue).
However, it's mostly stable.
This may be a problem if somebody passes a long-lived env.
cc @yaacov @enoodle @Ladas
Need this (gaprindashvili/yes) for improving the quotas test:
https://bugzilla.redhat.com/show_bug.cgi?id=1504560