
Error 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_hostname' when adding a control plane/etcd node #11024

Closed
tandrez opened this issue Mar 22, 2024 · 7 comments
Labels: kind/bug, lifecycle/rotten

Comments

tandrez commented Mar 22, 2024

What happened?

Hello,

I'm trying to add a new control plane and etcd node to my cluster following the docs.

The command I execute is basically:

ansible-playbook -i inventory.ini --limit=etcd,kube_control_plane -e ignore_assert_errors=yes cluster.yml

The failed task output is:

TASK [kubespray-defaults : Set no_proxy to all assigned cluster IPs and hostnames] ********************************************************************************************************************************************************************************************
fatal: [lc2-k8sm -> localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_hostname'. 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_hostname'\n\nThe error appears to be in '/Users/gjj422/Git/ALM/infra-as-code/template/k8s/roles/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Set no_proxy to all assigned cluster IPs and hostnames\n  ^ here\n"}

When I remove the --limit=etcd,kube_control_plane parameter, it works.
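
My understanding of why this happens (an assumption on my part, not confirmed against the Kubespray source): with --limit, the setup module only runs on the selected hosts, so the hostvars of every other inventory host never receive ansible_hostname, and any task that extracts that fact across all cluster hosts fails. The following standalone playbook reproduces the same undefined-variable error outside Kubespray; it is an illustrative sketch, not the actual no_proxy.yml task:

```yaml
# repro.yml -- illustrative sketch, not the kubespray-defaults no_proxy.yml task.
# Hosts excluded by --limit never run setup, so their hostvars carry no
# ansible_hostname, and the extract below raises the same undefined-variable
# error (unless a fact cache already holds their facts).
- hosts: all
  gather_facts: true    # only gathers facts on the hosts selected by --limit
  tasks:
    - name: Collect every host's ansible_hostname
      run_once: true
      ansible.builtin.set_fact:
        all_hostnames: "{{ groups['all'] | map('extract', hostvars, 'ansible_hostname') | list }}"
```

Running it with ansible-playbook -i inventory.ini repro.yml --limit=etcd,kube_control_plane fails as soon as the inventory contains a host outside those groups; without --limit it passes.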

What did you expect to happen?

The playbook executes without error.

How can we reproduce it (as minimally and precisely as possible)?

  1. Deploy a cluster with one control plane and etcd node
  2. Add a second control plane and etcd node in inventory (see the inventory sketch after this list)
  3. Run cluster.yml passing --limit=etcd,kube_control_plane -e ignore_assert_errors=yes
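
For step 2, the inventory change looks roughly like this (node1 and node2 are placeholder hostnames: node1 is the existing node, node2 the one being added; the group names are the standard Kubespray ones and other groups are omitted):

```ini
[kube_control_plane]
node1
# node2 is the control plane node being added
node2

[etcd]
node1
node2

[kube_node]
node1
node2

[k8s_cluster:children]
kube_control_plane
kube_node
```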

OS

Ansible control node: macOS Ventura
Ansible managed nodes: CentOS 7.9 and AlmaLinux 8.9

Version of Ansible

ansible [core 2.14.11]
config file = None
configured module search path = ['/Users/gjj422/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/gjj422/almvenv/lib/python3.11/site-packages/ansible
ansible collection location = /Users/gjj422/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/gjj422/almvenv/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:23:30) [Clang 14.0.0 (clang-1400.0.29.202)] (/Users/gjj422/almvenv/bin/python3.11)
jinja version = 3.1.2
libyaml = True

Version of Python

3.11.5

Version of Kubespray (commit)

v2.23.0

Network plugin used

calico

Full inventory with variables

ansible -i inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"

Command used to invoke ansible

ansible-playbook -i inventory.ini --limit=etcd,kube_control_plane -e ignore_assert_errors=yes cluster.yml

Output of ansible run

ansible.log

Anything else we need to know

I also tried with Kubespray v2.24.1, hoping that #10953 had resolved this problem, but I still get the same error.

@tandrez added the kind/bug label Mar 22, 2024

tandrez commented Mar 26, 2024

Hello,

I have the same error when adding a new worker node by running scale.yml with --limit=NODE_NAME, even after running facts.yml.

opethema (Contributor) commented

> Hello,
>
> I have the same error when adding a new worker node by running scale.yml with --limit=NODE_NAME, even after running facts.yml.

same here

opethema (Contributor) commented

Running facts.yml (without --limit=NODE_NAME) before scale.yml did the trick for me.
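
Concretely, something along these lines (a sketch of what I mean; adjust the facts.yml path to wherever your Kubespray checkout keeps it, and NODE_NAME is the node being added):

```shell
# 1. Gather facts for every host in the inventory (no --limit)
ansible-playbook -i inventory.ini facts.yml

# 2. Run the limited play; hostvars of the untouched hosts now carry ansible_hostname
ansible-playbook -i inventory.ini --limit=NODE_NAME scale.yml
```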

k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 29, 2024
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 28, 2024
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Oct 28, 2024
k8s-ci-robot (Contributor) commented

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
>   • After 90d of inactivity, lifecycle/stale is applied
>   • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
>   • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
>
> You can:
>
>   • Reopen this issue with /reopen
>   • Mark this issue as fresh with /remove-lifecycle rotten
>   • Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
