
scripts/ami-copy-regions: Try waiting on all AMIs #341

Merged

Conversation

cgwalters
Member

  • Unbuffer stdout so stderr isn't confusingly first
  • Try a --dry-run first so that we verify we have permissions
    (this may be the issue?)
  • Finally and perhaps most importantly, wait for every AMI at
    least once. This way if the first one is slow, we'll move
    on to the next.
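
For reference, the dry-run permission check and unbuffered output described above amount to roughly the following. This is a minimal sketch assuming the script drives EC2 through boto3 and runs on Python 3.7+; the function name and all region/AMI parameters are illustrative, not taken from the script itself.

```python
import sys
import boto3
from botocore.exceptions import ClientError

# Line-buffer stdout so progress messages interleave sanely with stderr
# (requires Python 3.7+ for TextIOWrapper.reconfigure).
sys.stdout.reconfigure(line_buffering=True)

def check_copy_permission(region, source_region, source_image_id, name):
    """Issue a DryRun copy first; a DryRunOperation error means we're allowed."""
    ec2 = boto3.client('ec2', region_name=region)
    try:
        ec2.copy_image(Name=name, SourceImageId=source_image_id,
                       SourceRegion=source_region, DryRun=True)
    except ClientError as e:
        if e.response['Error']['Code'] == 'DryRunOperation':
            return  # the real copy would have been permitted
        raise  # e.g. UnauthorizedOperation surfaces before we start copying
```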

@openshift-ci-robot openshift-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Oct 2, 2018
@cgwalters
Member Author

(Tested lightly locally)

@cgwalters
Member Author

/hold

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 2, 2018
@cgwalters
Member Author

/hold cancel

Now tested better

@openshift-ci-robot openshift-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 2, 2018
# this could take a long time, but in practice EC2 parallelizes
# so this way we avoid failing if the first image or two happens
# to take too long.
for _ in range(1):
Contributor


Maybe this is a really dumb question but what's the point of this line?

Member Author


The comment above was attempting to explain that.

Member Author


Oh except I meant range(2) here...fixed.

Member Author


This by the way is why we have code review 😄

Contributor


Yeah, I was slightly confused and thought you were trying to do something magical with a single loop that I wasn't aware of 😄

The copying seems to have added ~14 minutes to the job run, which I guess is normal given that one upload in the cloud job takes about the same time.
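
For context, the corrected outer loop discussed above likely amounts to something like the sketch below: two passes over the copied AMIs, so every image gets waited on at least once before any single slow one can fail the run. This is a sketch assuming boto3 waiters; `copies`, the waiter configuration, and the print format are hypothetical, not the script's actual code.

```python
import boto3
from botocore.exceptions import WaiterError

def wait_for_amis(copies):
    """Wait on every copied AMI at least once.

    `copies` is a set of (region, ami_id) tuples.  EC2 performs the copies in
    parallel, so if the first image is slow we still make progress on the
    others; a second pass then picks up anything that wasn't ready yet.
    """
    pending = set(copies)
    for attempt in range(2):  # two passes: each AMI gets at least one full wait
        for region, ami_id in sorted(pending):
            waiter = boto3.client('ec2', region_name=region).get_waiter('image_available')
            try:
                waiter.wait(ImageIds=[ami_id],
                            WaiterConfig={'Delay': 15, 'MaxAttempts': 40})
                pending.discard((region, ami_id))
                print(f"{region}: {ami_id} available", flush=True)
            except WaiterError:
                if attempt == 1:
                    raise  # still not available after the second pass
```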

 - Unbuffer stdout so stderr isn't confusingly first
 - Use a tuple instead of dict so that we can add the AMIs
   to a set
 - Try a --dry-run first so that we verify we have permissions
   (this may be the issue?)
 - Finally and perhaps most importantly, wait for every AMI at
   least once.  This way if the first one is slow, we'll move
   on to the next.
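
The tuple-versus-dict bullet is about hashability: a dict can't be a set member, while an immutable tuple can, which makes deduplicating the copy targets trivial. A tiny illustration (the region and AMI IDs are placeholders):

```python
# Tuples are hashable, so (region, ami_id) pairs deduplicate naturally in a set:
copies = {("us-west-1", "ami-0abc"), ("us-west-1", "ami-0abc")}
assert copies == {("us-west-1", "ami-0abc")}

# A dict, by contrast, is unhashable and can't go into a set:
# {("region", "us-west-1")}         # fine (tuple)
# {{"region": "us-west-1"}}         # TypeError: unhashable type: 'dict'
```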
@yuqi-zhang
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Oct 2, 2018
@openshift-merge-robot openshift-merge-robot merged commit 2349764 into openshift:master Oct 2, 2018