Implement "loop flattening" in k8s #19

Closed
tima opened this issue Feb 15, 2021 · 0 comments · Fixed by #49
tima commented Feb 15, 2021

SUMMARY

Using a loop to process a batch of files means a fork and an API call are made for each item in the loop. This adds substantial per-file overhead that could be avoided if the list of files were "flattened" and submitted with one API call. The proposal here is to build this "flattening" logic into the k8s module itself, where optimal handling of a batch of files is trivial.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
  • k8s
ADDITIONAL INFORMATION

Consider a task that loops over a list of dashboard template files:
- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    definition: "{{ lookup('template', item) }}"
  loop: "{{ monitoring_dashboard_yamls_to_deploy }}"
  when:
    ...

While functional, using a loop to process each file means a fork and an API call are also made for each item. This adds substantial per-file overhead that could be avoided if the files were "flattened" in one pass and submitted with a single API call.

With creative use of Jinja2 syntax, this can be done today:

- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    definition: |
      {% for mdy in monitoring_dashboard_yamls_to_deploy %}
      ---
      {{ lookup("template", mdy) }}
      ...
      {% endfor %}
  when:
    ...

This executes significantly faster than the prior example, but it also requires a fair bit of knowledge and effort to get right.

The proposal here is to build this "flattening" logic into the k8s module itself, where optimal handling of batch file processing is trivial.

The src and template parameters (and, by extension, template/path) would optionally accept a list of strings (file paths) instead of a single string. When a list is passed, the module would perform the functional equivalent of the last example: it would process the files itself and submit them in one pass to the K8s cluster.
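For illustration, the module-side flattening might look roughly like the following Python sketch. The function and variable names here are assumptions for this issue, not actual kubernetes.core internals:

import yaml

def flatten_definitions(rendered_templates):
    """Merge a list of rendered template strings into a single ordered
    list of resource definitions, so they can be applied in one pass."""
    definitions = []
    for text in rendered_templates:
        # Each rendered file may itself contain multiple YAML documents.
        for doc in yaml.safe_load_all(text):
            if doc:  # skip empty documents
                definitions.append(doc)
    return definitions

The module would then iterate over the combined definitions once, paying the fork and connection setup cost a single time instead of once per loop item.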

With this implemented, something like this should be possible with the same performance results:

- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    template: "{{ monitoring_dashboard_yamls_to_deploy }}"
  when:
    ...

Note, here the monitoring_dashboard_yamls_to_deploy variable is presumed to contain a list of file paths. The list could also be defined explicitly in the task without a variable:

- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    template: 
      - crds/foo.yaml
      - crds/bar.yaml
      - crds/baz.yaml
      - crds/fred.yaml
  when:
    ...

Relative file paths given to src and template should be "role-aware", similar to the template and file modules.

# my_kiali role
- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    template: 
      - foo.yaml
      - bar.yaml
  when:
    ...

In this example the module would know to look in /path/to/roles/my_kiali/templates for foo.yaml and bar.yaml.
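For illustration only, the resolution order might look like this Python sketch. The helper name and search order are assumptions; a real implementation would presumably reuse Ansible's existing search-path machinery rather than reimplement it:

import os

def resolve_template_path(name, role_path=None, playbook_dir=None):
    """Resolve a relative path the way role-aware modules do: the
    role's templates/ directory first, then the playbook directory."""
    if os.path.isabs(name):
        return name
    candidates = []
    if role_path:
        candidates.append(os.path.join(role_path, "templates", name))
    if playbook_dir:
        candidates.append(os.path.join(playbook_dir, "templates", name))
        candidates.append(os.path.join(playbook_dir, name))
    for path in candidates:
        if os.path.isfile(path):
            return path
    raise FileNotFoundError("template not found: %s" % name)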

ERROR HANDLING

Consideration must be given to how errors are handled in this scenario. How should the module report and otherwise handle a processing error in the middle of a batch? Should it exit immediately or continue to process the remainder? Should this be controlled by the user with a boolean parameter? If so, how are multiple errors reported? See ansible-collections/community.kubernetes#321, which relates to this very issue.
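For illustration, a hypothetical continue_on_error flag (not an existing parameter) could switch between fail-fast and collect-all behavior, roughly like this sketch:

def apply_batch(definitions, apply_one, continue_on_error=False):
    """Apply each definition, either stopping at the first failure or
    collecting every failure for a single combined report."""
    results, errors = [], []
    for definition in definitions:
        try:
            results.append(apply_one(definition))
        except Exception as exc:
            errors.append({"definition": definition, "error": str(exc)})
            if not continue_on_error:
                break  # fail fast on the first error
    return {"results": results, "errors": errors, "failed": bool(errors)}

The trade-off: fail-fast can leave a batch half-applied, while collect-all must decide how to present several distinct errors in one module result.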

gravesm transferred this issue from ansible-collections/community.kubernetes on Apr 8, 2021
gravesm added this to the 2.0.0 milestone on Apr 13, 2021