Add zuul-ci #250
Conversation
Build failed.
Build failed.
About the /run/media issue, you can try adding to the first comment of this change:
Build succeeded.
Build failed.
Build failed.
Build failed.
Force-pushed from 73c5c50 to ca6b9d8
Build failed.
Force-pushed from 52ab49b to fb34595
Build failed.
Build failed.
Build failed.
Build succeeded.
Build succeeded.
Build failed.
Force-pushed from 31fc35d to 7616924
Build failed.
Build failed.
Build failed.
Build succeeded.
This looks good.
Some quick comments:
Could you squash the "update" and "restructure" commits into some of the original commits? There doesn't seem to be any reason to keep them separate. Or did I miss something?
test/system/000-test.bats is empty. Should it be removed?
One thing that I couldn't help thinking is that the test suite involves a lot of shell scripting, which is odd since we are trying to move /usr/bin/toolbox
away from being a shell script. It's not surprising that the Bash Automated Testing System involves Bash, but still. :) Anyway, let's not delay this anymore. The tests aren't that complicated, so we can move them to a different framework in future, if needed.
I updated the PR. Let me know if there's anything else you want me to adjust. I agree that it's a bit of a contradiction that I introduced BATS even though we want to move away from shell/Bash. At the time I started the PR it looked like the optimal solution because it's quite easy to get started with. If we find a better solution in the future, I'm not against replacing it.
Build succeeded.
These tests are written using BATS (Bash Automated Testing System). I used a very helpful helpers.bash script from the libpod project (Thank you!) that I tweaked slightly. containers#68
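The tweaked helpers.bash from libpod isn't reproduced here, but the general pattern such BATS helper scripts follow can be sketched in plain shell. The function names below are illustrative, not the actual libpod helpers:

```shell
# Hypothetical sketch of the style of helper that helpers.bash provides:
# run a command, capture its status and output, and assert on them.

run_cmd() {
    output=$("$@" 2>&1)
    status=$?
}

assert_success() {
    if [ "$status" -ne 0 ]; then
        echo "expected success, but got status $status: $output" >&2
        return 1
    fi
}

run_cmd echo "hello from the test"
assert_success
echo "$output"
```

BATS itself provides a built-in `run` with the same status/output capture, which is why such helpers compose naturally with it.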
This adds several .yaml files that specify jobs (those in the playbooks folder) and one that serves as the main config (.zuul.yaml). Tests and builds are currently executed on every change in PRs (i.e., check and gating) and periodically (according to the documentation, this pipeline should be run at least once a day). There are 4 tests in total: 1. 'ninja test' - does the same thing that Travis did 2. Fedora 30 - runs the system tests with current Podman and Toolbox on Fedora 30 3. Fedora 31 - the same but for Fedora 31 4. Fedora Rawhide - the same but for Fedora Rawhide containers#68
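A rough sketch of what a .zuul.yaml with jobs attached to those pipelines could look like; the job names, playbook paths, and node labels here are hypothetical, not the exact ones in this PR:

```yaml
# Illustrative sketch only; names and labels are assumptions.
- job:
    name: unit-test
    description: Run 'ninja test', as Travis used to.
    run: playbooks/unit-test.yaml

- job:
    name: system-test-fedora-30
    description: Run the BATS system tests on Fedora 30.
    run: playbooks/system-test.yaml
    nodeset:
      nodes:
        - name: fedora-30
          label: cloud-fedora-30

- project:
    check:
      jobs:
        - unit-test
        - system-test-fedora-30
    gate:
      jobs:
        - unit-test
    periodic:
      jobs:
        - system-test-fedora-30
```

The project stanza is what wires the jobs into the check, gate, and periodic pipelines mentioned above.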
Travis was running 'ninja test' and that's now covered by Zuul. containers#68
Thanks for cleaning up the commits, @HarryMichal
    chdir: '{{ zuul.project.src_dir }}'

- name: Run system tests
  command: bats ./test/system
Nitpick: we should swap the order of the commits so that ./test/system is in place before we start referring to it.
- ShellCheck
- bash-completion
- udisks2
- wget
Typo? Or do we really need wget?
Very possibly a typo.
Build succeeded.
The tests introduced by containers#250 have proven to be rather unstable due to mistakes in their design. The tests were quite chaotically structured. Because of that, images were deleted and pulled too often, which caused several false positives (containers#374, containers#372). This changes the structure of the tests in a major way. The tests (resp. the commands) are now run in a manner that roughly simulates the way Toolbox is used: from a clean state, through creating containers and using them, to deleting them in the end. This should reduce the strain on bandwidth and possibly even speed up the tests themselves. More information is in the README.md in the directory with the tests.
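The lifecycle ordering described above could be sketched in BATS roughly like this. The container name and exact flags are illustrative assumptions, not the actual test files from the PR:

```bats
#!/usr/bin/env bats
# Illustrative sketch: tests ordered to follow the Toolbox lifecycle,
# so images are pulled once up front and reused instead of being
# deleted and re-pulled between tests.

@test "create: toolbox create sets up a container" {
  run toolbox create --container test-container
  [ "$status" -eq 0 ]
}

@test "use: toolbox run executes a command in the container" {
  run toolbox run --container test-container true
  [ "$status" -eq 0 ]
}

@test "clean up: toolbox rm removes the container" {
  run toolbox rm --force test-container
  [ "$status" -eq 0 ]
}
```

Because BATS runs the tests in a file top to bottom, ordering them this way keeps the expensive image pull to a single point at the start of the suite.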
This adds configuration for Zuul CI provided by Software Factory, the first batch of system tests that I cobbled together, and a fix for bind mounting toolbox.sh inside containers. This should be ready to be merged. If it is, this should also close #68.