Use anand version #2
Conversation
This will populate the BUILD file with: <VERSION>.<Date of build>_<git short sha1>
Such as: "anand.1.20140612170003_6bb7741"
This is also what will be presented in the "Configure" -> "About" page.
Once we're open source, we can dynamically populate the ova.json based on the VERSION file for the current release branch.
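For illustration only, here is a minimal Ruby sketch of how a BUILD value in that format could be produced. The VERSION and BUILD file names come from the description above; the timestamp format and the `git rev-parse` invocation are assumptions rather than the actual build tooling.

```ruby
# Hypothetical sketch: produce "<VERSION>.<Date of build>_<git short sha1>".
version   = File.read("VERSION").strip                 # e.g. "anand.1"
timestamp = Time.now.strftime("%Y%m%d%H%M%S")          # e.g. "20140612170003"
short_sha = `git rev-parse --short HEAD`.strip         # e.g. "6bb7741"

# Written at appliance build time; later shown on the "Configure" -> "About" page.
File.write("BUILD", "#{version}.#{timestamp}_#{short_sha}\n")
# => BUILD contains something like "anand.1.20140612170003_6bb7741"
```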
@jrafanie Do we want more than just 1 number after the name portion? Like anand.0.1 or anand.0.0.1?
I don't know @Fryguy. I assume we are doing the anand release and any major bugs/security issues will warrant a retag/rebuild of the appliances. We won't be introducing new features or breaking compatibility AFAIK.
Should that file even exist anymore?
Yes, @Fryguy it would affect that. I agree, it doesn't make sense in upstream.
Yes, I think so.
If we're thinking of anand as the major version, then we should have a minor version too. I would also like to communicate whether something is for general release, or is considered an alpha or beta quality release. My suggestions:
- manageiq-ovirt-stable.ova will always point to the latest stable release off the latest branch (initially manageiq-ovirt-anand-1.ova, manageiq-ovirt-anand-2.ova, etc.; after Byrne is marked stable, manageiq-ovirt-byrne-1.ova, etc.)
- manageiq-ovirt-nightly.ova will always point to the most recent nightly release off master.
The rest we can work out later.
So... how can we test our plan? What are the steps? We should get this right before Thursday.
On 06/16/2014 03:15 PM, John Mark wrote:
Wednesday you mean, yes?
This developer-centric file is more useful for maintaining multiple numbered versions.
@Fryguy Updated, removed the pg_dev.yml file that no longer works with the new VERSION convention.
Checked commits jrafanie@dbe1977 .. jrafanie@07e279d with rubocop 0.21.0
@jrafanie As discussed earlier today, let's cut the actual wording in the new branch, and leave the master branch as the "master" version name. Once you have everything written up, can you just document it here?
On 06/16/2014 09:16 PM, Jason Frey wrote:
In the meantime, please start uploading nightly builds to the CDN. We
Closing in favor of #38. We'll change to anand-1 on the "anand" branch.
…x_changes Support case alert box changes
- Moved the all_encrypted_options_fields method to MiqRequestWorkflow and simplified (tons) by using descendants. Thanks Jason.
- Added SUBCLASSES in miq_request_workflow and eager load the subclasses.
- Updated encrypted_options_fields to return [] in miq_request_workflow.
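As a rough illustration of the pattern described in that commit message: the class and method names below mirror the message, but the bodies and the example subclass are assumptions, not the actual ManageIQ code.

```ruby
require "active_support/core_ext/class/subclasses" # provides Class#descendants

class MiqRequestWorkflow
  # Subclasses override this to declare their own encrypted fields;
  # the base class contributes nothing.
  def self.encrypted_options_fields
    []
  end

  # Gather the encrypted fields declared by every descendant workflow
  # instead of hard-coding a list per subclass.
  def self.all_encrypted_options_fields
    descendants.flat_map(&:encrypted_options_fields).uniq
  end
end

class ExampleProvisionWorkflow < MiqRequestWorkflow # hypothetical subclass
  def self.encrypted_options_fields
    [:root_password]
  end
end

MiqRequestWorkflow.all_encrypted_options_fields # => [:root_password]
```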
Convert style1 class tables to Patternfly #2
Remove :_type_disabled for models in migrations where the type column doesn't exist #2
Automate task finished spec
Use OvirtSDK for the provisioning flow (cherry picked from commit 88572201c6afd204f255dafe612098e85a582d13)
Add support for storage fileshares (cherry picked from commit 7acce0382a8b10db453fe58e068ef97dec8c66a0) https://bugzilla.redhat.com/show_bug.cgi?id=1461169
…-master By default use hawkular client ruby master branch
This is the 1st commit message:
Reload replication settings

This is the commit message ManageIQ#2:
Mark dialog selection as a required field
Test cleanup
For reading "stdout" for the `ansible-runner` invocation, use the `artifacts/result/job_events/*.json` files instead of the `stdout` file, since they are consistently correct (it seems). There is some additional sorting and data massaging done to make sure things are correct (documented directly in the code), but it is mostly:

- looping through the .json files in the directory
- sorting them into the proper numerical order
- reading the files and appending them to a string
- ensuring a newline exists for each file read

This makes it so that each file's content is on its own line, in the same format as the `artifacts/result/stdout` that was used previously. I am pretty sure the downsides to this approach are limited, and the previous commit should make it so that any non-Hash values potentially generated from this new code will be a non-issue.
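To make those steps concrete, here is a rough Ruby sketch of that assembly. The `artifacts/result/job_events/*.json` path and the `stdout` field come from the commit message; the method name and surrounding code are illustrative assumptions, not the actual ManageIQ implementation.

```ruby
require "json"

# Rebuild a stdout-like string from ansible-runner's per-event JSON files.
def job_events_stdout(base_dir)
  Dir.glob(File.join(base_dir, "artifacts", "result", "job_events", "*.json"))
     .sort_by { |path| File.basename(path).to_i }   # numeric order: 1, 2, ... 10, 11
     .map     { |path| JSON.parse(File.read(path)) }
     .map     { |event| event["stdout"].to_s }      # each event carries its own chunk of output
     .join("\n")                                    # one event per line, like the old stdout file
end
```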
This is an addition to the solution from the previous commit, which continues to use the `artifacts/result/stdout`, but filters out lines that don't include a full JSON object. The advantage of this approach is that it is much simpler to implement code-wise, and if ansible-runner's stdout is ever fixed, this will gracefully continue to do what we would expect. However, it comes at a few costs that are worth considering:

- Somewhat slow, since there is a regexp check for every line
- Brittle, since the regexp is very rudimentary

The second concern is probably the bigger deterrent: the regexp is very rudimentary and is probably missing plenty of edge cases, so there is a good chance some output could be filtered out.

Regarding the speed, this was tested against the result of `ansible-runner` running the following playbook:

    ---
    - name: List Variables
      hosts: all
      tasks:
        - name: Display all variables/facts known for a host
          debug:
            var: hostvars[inventory_hostname]

And used the following script to test:

    5.times do
      puts Benchmark.measure {
        500.times do
          Ansible::Runner::Response.new(:base_dir => Dir.pwd)
            .parsed_stdout
            .map { |l| l['stdout'] }
            .join("\n")
        end
      }
    end

For the previous commit, the results were:

    $ ruby benchmark.rb
    13.357855   0.879015  14.236870 ( 14.438316)
    14.575711   0.861268  15.436979 ( 15.639258)
    14.455340   0.850817  15.306157 ( 15.525687)
    14.487061   0.831244  15.318305 ( 15.525777)
    14.419763   0.868440  15.288203 ( 15.470261)

And with these changes:

    $ ruby benchmark_script.rb
    2.821104   0.075949   2.897053 (  2.937712)
    2.864495   0.060510   2.925005 (  2.970379)
    2.921303   0.058061   2.979364 (  3.016452)
    3.191436   0.036860   3.228296 (  3.285050)
    3.079974   0.065541   3.145515 (  3.225359)

The slowness of the previous code was mostly caused by the extra `JSON.parse` calls that happen as a result of all of the extra non-JSON lines that weren't being filtered out. This solution is definitely faster, but an even faster solution will be provided in the next commit.
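As a hedged illustration of the filtering idea (not the actual `Ansible::Runner::Response` code), a line filter like the one described might look roughly like this, including the rudimentary whole-object regexp the commit warns about:

```ruby
require "json"

# Keep only stdout lines that look like a complete JSON object, then parse them.
# The regexp is deliberately crude, mirroring the trade-off discussed above.
def parsed_stdout(stdout_path)
  File.readlines(stdout_path)
      .select { |line| line.strip.match?(/\A\{.*\}\z/) } # whole-object lines only
      .map    { |line| JSON.parse(line) }
end
```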
Use anand version.