
Error: Kubernetes cluster unreachable with helm 3.0 #1126

Closed

rubiktubik opened this issue Nov 23, 2019 · 25 comments

@rubiktubik

Version:
k3s version v1.0.0 (18bd921)

Describe the bug
I want to use helm version 3 with k3s, but when I type helm install stable/postgresql --generate-name, for example, I get:
Error: Kubernetes cluster unreachable

To Reproduce

  1. Install helm 3 with the script from https://helm.sh/docs/intro/install/#from-script
  2. Add the repo with helm repo add stable https://kubernetes-charts.storage.googleapis.com/
  3. Update the repo with helm repo update
  4. Install the postgresql chart with helm install stable/postgresql --generate-name (see the combined sketch below)
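
Taken together, the reproduction steps above amount to the following (a sketch; repo URL and chart name exactly as listed):

helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update
helm install stable/postgresql --generate-name   # fails with: Error: Kubernetes cluster unreachable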

Expected behavior
Installation should work.

Actual behavior
Error: Kubernetes cluster unreachable

Additional context

@r1chjames

r1chjames commented Nov 24, 2019

Same issue here on k3s version v1.0.0 (18bd921).

@grawin

grawin commented Nov 25, 2019

Try setting the KUBECONFIG environment variable.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
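
For example, on a default k3s install (the kubectl and helm checks are an added sketch, not part of the original suggestion):

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes   # should list the k3s node if the kubeconfig is readable
helm ls             # helm now talks to the same cluster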

@davidnuzik added the kind/enhancement label Nov 26, 2019
@davidnuzik added this to the Backlog milestone Nov 26, 2019
@r1chjames

r1chjames commented Nov 26, 2019

That worked for me, but I only tried on a fresh CentOS/k3s/Helm install. Thanks @grawin

@Genubath

Genubath commented Nov 27, 2019

Same issue. The fix @grawin posted doesn't solve it for me though.

EDIT: I also tried the steps in https://github.com/ibrokethecloud/rancher-helm3 but to no avail.

@rubiktubik
Author

The fix @grawin posted didn't work for me either; I'm using an Ubuntu 18.04 system.

@sixcorners

sixcorners commented Dec 2, 2019

If you add "-v 20" to your helm command line it will show it's connecting to port 8080.
Running this seems to fix it:
kubectl config view --raw >~/.kube/config

This lets helm use the same config kubectl is using I think.
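
A slightly safer variant of the same idea (the mkdir and chmod lines are additions, not part of the original comment):

mkdir -p ~/.kube                           # create the directory if it doesn't exist yet
kubectl config view --raw > ~/.kube/config
chmod 600 ~/.kube/config                   # the kubeconfig contains credentials; keep it private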

@galal-hussein
Contributor

@rubiktubik It looks like helm can't reach the k3s cluster. Can you try using --kubeconfig with the helm command, or using ~/.kube/config as @sixcorners suggested? Please reopen the issue if the problem persists.

@ghost

ghost commented Jan 4, 2020

If you add "-v 20" to your helm command line it will show it's connecting to port 8080.
Running this seems to fix it:
kubectl config view --raw >~/.kube/config

This lets helm use the same config kubectl is using I think.

can confirm this solution works for me as well

@pcgeek86

pcgeek86 commented Feb 18, 2020

This resolved the error message for me.

sudo helm install harbor/harbor --version 1.3.0 --generate-name --kubeconfig /etc/rancher/k3s/k3s.yaml

@michelcameroon

With k3s I had the same problem; the system tells me the file
/etc/rancher/k3s/k3s.yaml
is not reachable.

The file had rw for root only; I changed it to 744 and it works. Please tell me if this is correct.
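
For reference, k3s can also be told to write its kubeconfig with relaxed permissions at install time, which avoids changing the mode by hand (flag and environment variable per the k3s docs; a sketch):

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
# or, equivalently:
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE=644 sh -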

@aureq

aureq commented Jun 14, 2020

If you are using sudo, be aware that this command doesn't preserve environment variables (such as KUBECONFIG) by default when switching to a different context.

If you wish to preserve specific environment variables when using sudo then:

cat << EOF > /etc/sudoers.d/env
Defaults env_keep += "http_proxy https_proxy no_proxy"
Defaults env_keep += "HTTP_PROXY HTTPS_PROXY NO_PROXY"
Defaults env_keep += "KUBECONFIG"
EOF
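
Note that the redirection into /etc/sudoers.d/env itself needs root, so the heredoc above has to run from a root shell. A variant that works from a normal shell, plus a syntax check (an added sketch, not part of the original comment):

sudo tee /etc/sudoers.d/env >/dev/null << 'EOF'
Defaults env_keep += "KUBECONFIG"
EOF
sudo visudo -c -f /etc/sudoers.d/env   # sanity-check the sudoers syntax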

@Vesnica

Vesnica commented Jul 6, 2020

If you are using sudo, be aware that this command doesn't preserve environment variables (such as KUBECONFIG) by default when switching to a different context.

If you wish to preserve specific environment variables when using sudo then:

cat << EOF > /etc/sudoers.d/env
Defaults env_keep += "http_proxy https_proxy no_proxy"
Defaults env_keep += "HTTP_PROXY HTTPS_PROXY NO_PROXY"
Defaults env_keep += "KUBECONFIG"
EOF

Just use sudo -E, which will preserve the environment variables.
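
For example (kubeconfig path as in the earlier k3s comments; an added sketch):

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
sudo -E helm install stable/postgresql --generate-name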

@jordanburke

@Vesnica thanks that worked for me.

@xbmono

xbmono commented Jul 30, 2020

I'm using helm 3.2.4 on Windows and have the same issue. Setting the KUBECONFIG environment variable didn't help either. Without --kubeconfig it doesn't fail, but helm ls returns no results.

@montaro

montaro commented Sep 4, 2020

What worked for me was to set KUBECONFIG to an absolute path after changing to the chart directory.

@cdperdomo

If you add "-v 20" to your helm command line it will show it's connecting to port 8080.
Running this seems to fix it:
kubectl config view --raw >~/.kube/config

This lets helm use the same config kubectl is using I think.

This kubectl config view --raw >~/.kube/config works for me. Thanks!

@poojabolla

I tried this command,

kubectl config view --raw >~/.kube/config

but after running it, the config file became empty.

Can anyone suggest how to recover my config file with all values?

@schunduEA

schunduEA commented Dec 16, 2020

@poojabolla It's gone; you must use >> instead of > when appending to an existing file.
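
The file ends up empty because the shell truncates the > target before kubectl even runs, and (with KUBECONFIG unset) kubectl config view reads that same ~/.kube/config, so it dumps an empty config back out. Writing to a temporary file first avoids this (a sketch):

kubectl config view --raw > /tmp/kubeconfig && mv /tmp/kubeconfig ~/.kube/config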


@sachinmalanki

sachinmalanki commented Jun 14, 2021

@poojabolla It's gone; you must use >> instead of > when appending to an existing file.

I got this issue when using Azure Kubernetes Service.

az aks get-credentials -n myCluster -g myResourceGroup

The config file is autogenerated and placed in ~/.kube/config as per the OS.

@MrMYHuang

For microk8s, the k8s config can be generated by this command:
microk8s.kubectl config view --raw > ~/.kube/config

@shing1211

This resolved the error message for me.

sudo helm install harbor/harbor --version 1.3.0 --generate-name --kubeconfig /etc/rancher/k3s/k3s.yaml

Thank you, this works for me.

@sufiyanpk7

For microk8s you can try with

export KUBECONFIG=/var/snap/microk8s/4094/credentials/client.config
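
Note that 4094 here is a snap revision number and will differ between installs; snap's current symlink avoids hard-coding it (an assumption about the local snap layout):

export KUBECONFIG=/var/snap/microk8s/current/credentials/client.config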

@k3s-io locked as resolved and limited conversation to collaborators Nov 1, 2022