
Permission denied with Postgresql and CSI hostpath add-on #13098

Closed
alexellis opened this issue Dec 6, 2021 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@alexellis

alexellis commented Dec 6, 2021

What Happened?

I created a cluster as per:

$ minikube start \
    --addons volumesnapshots,csi-hostpath-driver \
    --apiserver-port=6443 \
    --container-runtime=containerd \
    --kubernetes-version=1.21.2 \
    -p arkade --driver kvm2

😄  [arkade] minikube v1.21.0 on Ubuntu 20.04
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node arkade in cluster arkade
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🔥  Deleting "arkade" in kvm2 ...
🤦  StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP not available after waiting: machine arkade didn't return IP after 1 minute
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🌐  Found network options:
    ▪ NO_PROXY=localhost,127.0.0.0/8,::1
    ▪ no_proxy=localhost,127.0.0.0/8,::1
E1206 11:53:00.952183 1661470 docker.go:159] "Failed to stop" err="sudo systemctl stop -f docker.service: Process exited with status 5\nstdout:\n\nstderr:\nFailed to stop docker.service: Unit docker.service not loaded.\n" service="docker.service"
📦  Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
    ▪ env NO_PROXY=localhost,127.0.0.0/8,::1
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...

$ kubectl get storageclass -A
NAME                        PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-hostpath-sc (default)   hostpath.csi.k8s.io        Delete          Immediate           false                  5m25s
standard                    k8s.io/minikube-hostpath   Delete          Immediate           false                  5m26s

Then I installed PostgreSQL using the Bitnami chart. For reference, the install was along these lines (release name assumed; reconstructed from the resource names shown below):
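$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install postgresql bitnami/postgresql

The container was unable to start: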

kubectl logs statefulset/postgresql-postgresql
postgresql 12:09:00.22 
postgresql 12:09:00.22 Welcome to the Bitnami postgresql container
postgresql 12:09:00.22 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 12:09:00.22 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 12:09:00.23 
postgresql 12:09:00.24 INFO  ==> ** Starting PostgreSQL setup **
postgresql 12:09:00.26 INFO  ==> Validating settings in POSTGRESQL_* env vars..
postgresql 12:09:00.27 INFO  ==> Loading custom pre-init scripts...
postgresql 12:09:00.27 INFO  ==> Initializing PostgreSQL database...
mkdir: cannot create directory ‘/bitnami/postgresql/data’: Permission denied
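The Bitnami image runs PostgreSQL as a non-root user, so the mkdir fails when the mounted volume is still owned by root. As a rough check of what the pod actually requests (pod name derived from the StatefulSet above), the security context can be printed with:

$ kubectl get pod postgresql-postgresql-0 -o jsonpath='{.spec.securityContext}{"\n"}{.spec.containers[0].securityContext}{"\n"}'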

The PV/PVC looked as expected:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                  STORAGECLASS      REASON   AGE
pvc-16255071-5e96-4b3a-b4d7-03a75f5bc77a   8Gi        RWO            Delete           Bound    default/data-postgresql-postgresql-0   csi-hostpath-sc            6m38s
$ kubectl get pvc
NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-postgresql-postgresql-0   Bound    pvc-16255071-5e96-4b3a-b4d7-03a75f5bc77a   8Gi        RWO            csi-hostpath-sc   6m38s
$ 
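To confirm the ownership on the volume itself, a throwaway pod can mount the same PVC and list it (a sketch - the busybox image and pod name are arbitrary, and the claim name is taken from the output above):

$ kubectl run pvc-inspect --rm -it --restart=Never --image=busybox --overrides='{"apiVersion":"v1","spec":{"volumes":[{"name":"data","persistentVolumeClaim":{"claimName":"data-postgresql-postgresql-0"}}],"containers":[{"name":"pvc-inspect","image":"busybox","command":["ls","-lan","/data"],"volumeMounts":[{"name":"data","mountPath":"/data"}]}]}}'

If /data comes back owned by uid 0 and not group-writable, the non-root postgres user has no way to create /bitnami/postgresql/data inside it.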

OS details:

$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
$ uname
Linux
$ uname -a
Linux alex-nuc8 5.4.0-91-generic #102-Ubuntu SMP Fri Nov 5 16:31:28 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ 

I am trying to run Kasten's operator for backing up volumes and couldn't complete its tutorial due to this error. The operator was going to create snapshots of the PostgreSQL database.

I was also confused by the references to Docker when I had passed in the --container-runtime=containerd flag.
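As far as I can tell, the "Failed to stop docker.service" line is minikube disabling Docker inside the VM while switching runtimes, not Kubernetes actually using Docker; the runtime the node ended up on can be confirmed with:

$ kubectl get nodes -o wide

which prints a CONTAINER-RUNTIME column (containerd://1.4.4 here, going by the startup output above).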

Attach the log file

log.txt

Operating System

Ubuntu

@bb-Ricardo

Same here; I can't really find a reason.

@maszczyn

I think this is related to the way the hostPath provisioner works; see this comment. As far as I understand, there is nothing you can do about it, at least not without modifying the provisioner's source code.

I am looking for a solution to a similar issue: to have data persisted outside the minikube VM while still being able to use security capabilities like fsGroup. After studying the linked issue, I believe this cannot be achieved with the hostPath provisioner. Using NFS or the like might help (I don't know - I haven't checked yet).
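If chowning from inside the workload is acceptable, the Bitnami chart has an option that runs a root init container to fix the ownership of the data directory before PostgreSQL starts, which should sidestep the fsGroup limitation (I haven't verified this against csi-hostpath, but it is the chart's documented workaround for exactly this kind of permission error):

$ helm install postgresql bitnami/postgresql --set volumePermissions.enabled=true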

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 30, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 29, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue in response to the /close command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mattmeye

mattmeye commented Dec 29, 2022

Same problem, no solution yet.

/reopen
