This repository has been archived by the owner on Oct 21, 2020. It is now read-only.

digitalocean: Document common tweaks required for a working setup #531

Closed
klausenbusk opened this issue Dec 27, 2017 · 8 comments
Labels
area/digitalocean lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@klausenbusk
Contributor

/area digitalocean

Per #529, for kubeadm and bootkube setups the flexvolume dir needs to be accessible to kube-controller-manager for a working setup (volumes won't attach otherwise).

bootkube also requires adding a DIGITALOCEAN_ACCESS_TOKEN environment variable to kube-controller-manager, since kube-controller-manager runs as nobody and therefore can't read the do_token file.
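To make that concrete, here is a sketch (not verbatim from any real manifest) of both tweaks in a kubeadm-style static pod manifest, typically /etc/kubernetes/manifests/kube-controller-manager.yaml; the flexvolume dir and token value below are placeholders:

```yaml
# Sketch only: relevant fragment of kube-controller-manager.yaml.
# Dir and token value are placeholders, not verified defaults.
spec:
  containers:
  - command:
    - kube-controller-manager
    # make the flexvolume dir visible to the controller manager:
    - --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec
    env:
    # bootkube: pass the token via the environment, since the process
    # runs as nobody and cannot read the do_token file
    - name: DIGITALOCEAN_ACCESS_TOKEN
      value: "<your token>"
```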

@klausenbusk
Contributor Author

klausenbusk commented Jan 25, 2018

Per: #571

  • Document the required CoreOS change (changing the flexvolume dir)
    • Document that kube-controller-manager (kubeadm only?) needs to use /usr/share/ca-certificates instead of /etc/ssl/certs due to symlinks.
  • Add RBAC policy (see digitalocean: Add RBAC policy #572) Done

cc @lloeki

@klausenbusk klausenbusk changed the title digitalocean: Document common tweaks needed for a working setup digitalocean: Document common tweaks required for a working setup Jan 25, 2018
@lloeki

lloeki commented Jan 31, 2018

I was faced with a silent mount failure. The only available bit of log was in the pod's events:

Unable to mount volumes for pod "postgres-deployment-8b4fd44dc5-4rjw9_www(591133ae-0692-11e3-bfd9-22337054e03c)": timeout expired waiting for volumes to attach/mount for pod "www"/"postgres-deployment-8b4fd44dc5-4rjw9". list of unattached/unmounted volumes=[postgres-persistent-storage]

The volume was attaching correctly, just not mounting. It took me two hours before I thought of looking into the node's kubelet logs:

journalctl -xn -u kubelet.service

Turns out I forgot to set --volume-plugin-dir= on a node's kubelet args.

It's a bit more general than DigitalOcean, but I thought it might be worth mentioning somewhere that a mount timeout can have its cause logged there, on the node, rather than inside Kubernetes.
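A quick way to narrow that down on the node (a sketch: the sample log lines below are illustrative, not real kubelet output; on a real node you'd pipe `journalctl -u kubelet.service` instead of the sample file):

```shell
# Self-contained demo: filter node-side logs for volume/mount lines.
# The sample file stands in for `journalctl -u kubelet.service` output.
log=$(mktemp)
cat > "$log" <<'EOF'
kubelet: volume "postgres-persistent-storage" attached
kubelet: no volume plugin matched
kubelet: syncing pod
EOF
grep -iE 'volume|mount' "$log"
```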

@Kiura

Kiura commented Feb 5, 2018

@lloeki, could you please tell me how you added --volume-plugin-dir, and in which config file?

@lloeki

lloeki commented Feb 6, 2018

@Kiura the systemd service file. If you're using kubeadm:

# create the flexvolume dir and point the kubelet at it via KUBELET_EXTRA_ARGS
mkdir -p /etc/kubernetes/kubelet-plugins/volume/exec
sed -i -e 's#\(KUBELET_EXTRA_ARGS=\)#\1--volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec #' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# reload systemd and restart the kubelet so the flag takes effect
systemctl daemon-reload
systemctl restart kubelet

Again, be sure to set it on both the master and each node.
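For reference, here is a self-contained demo of what that sed substitution does, run against a one-line stand-in for 10-kubeadm.conf rather than the real file:

```shell
# Demo on a temp file so nothing on the system is touched.
conf=$(mktemp)
echo 'Environment="KUBELET_EXTRA_ARGS="' > "$conf"
# Same substitution as in the thread: append the flag to KUBELET_EXTRA_ARGS.
sed -i -e 's#\(KUBELET_EXTRA_ARGS=\)#\1--volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec #' "$conf"
cat "$conf"
# Environment="KUBELET_EXTRA_ARGS=--volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec "
```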

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
