hostPath permissions wrong on multi node #11765

Open
david0 opened this issue Jun 25, 2021 · 9 comments
Labels
  • addon/storage-provisioner: Issues relating to storage provisioner addon
  • co/multinode: Issues related to multinode clusters
  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • long-term-support: Long-term support issues that can't be fixed in code
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

david0 commented Jun 25, 2021

minikube version: v1.21.0
Steps to reproduce the issue:

  1. minikube start --nodes=2

  2. Provision hostPath volumes on minikube-m02, e.g. via a StatefulSet example (see the sketch below the listing)

  3. check permissions via minikube ssh -n minikube-m02 -- ls -lisa /tmp/hostpath-provisioner/default (the directories should be 777, but as the listing below shows they are 755)

total 16
8376322 4 drwxr-xr-x 4 root root 4096 Jun 25 13:27 .
8376321 4 drwxr-xr-x 3 root root 4096 Jun 25 13:27 ..
8376324 4 drwxr-xr-x 2 root root 4096 Jun 25 13:27 mongo-persistent-storage-claim-mongo-0
8376323 4 drwxr-xr-x 2 root root 4096 Jun 25 13:27 tmp-vol-mongo-0

The first node has the correct permissions and pods are working there.
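
For reference, the StatefulSet from step 2 was along these lines. This is only a minimal sketch with illustrative names, image and sizes, not the exact manifest used; the claim template name is chosen to match the mongo-persistent-storage-claim directory shown in the listing above.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.4
        args: ["--replSet", "rs0", "--bind_ip", "0.0.0.0"]
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage-claim
          mountPath: /data/db
  volumeClaimTemplates:
  # The hostpath provisioner backs each claim with a directory under
  # /tmp/hostpath-provisioner/<namespace>/ on whichever node the pod lands on.
  - metadata:
      name: mongo-persistent-storage-claim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi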

Full output of minikube logs command:
minikube-logs.txt

Full output of failed command:

kubectl logs mongo-0 
{"t":{"$date":"2021-06-25T13:33:17.238+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2021-06-25T13:33:17.240+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-06-25T13:33:17.240+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2021-06-25T13:33:17.240+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-06-25T13:33:17.240+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"mongo-0"}}
{"t":{"$date":"2021-06-25T13:33:17.240+00:00"},"s":"W",  "c":"CONTROL",  "id":20720,   "ctx":"initandlisten","msg":"Available memory is less than system memory","attr":{"availableMemSizeMB":400,"systemMemSizeMB":2135}}
{"t":{"$date":"2021-06-25T13:33:17.240+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.6","gitVersion":"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7","openSSLVersion":"OpenSSL 1.1.1  11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2021-06-25T13:33:17.240+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}}
{"t":{"$date":"2021-06-25T13:33:17.240+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"0.0.0.0"},"replication":{"replSet":"rs0"}}}}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"E",  "c":"STORAGE",  "id":20568,   "ctx":"initandlisten","msg":"Error setting up listener","attr":{"error":{"code":9001,"codeName":"SocketException","errmsg":"Permission denied"}}}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"REPL",     "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"COMMAND",  "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"REPL",     "id":4784907, "ctx":"initandlisten","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"CONTROL",  "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"STORAGE",  "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"STORAGE",  "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"FTDC",     "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2021-06-25T13:33:17.241+00:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":48}}
@ilya-zuyev ilya-zuyev added addon/storage-provisioner Issues relating to storage provisioner addon co/multinode Issues related to multinode clusters kind/support Categorizes issue or PR as a support question. labels Jun 25, 2021
afbjorklund (Collaborator) commented

I don't think the hostpath provisioner even works on the second node.


david0 commented Jun 27, 2021

What does that mean? Has another component created the folders on the second node, or is it a known limitation of the hostpath provisioner?

afbjorklund (Collaborator) commented

I mean if you create files on one node (like the first), they probably don't show up on other nodes (like the second).

So I wonder if those directories are the result of trying to mount non-existing directories, or something like that?

Ultimately minikube would have to provide something like NFS, to offer persistent storage across multiple nodes...

As long as "something else" (outside minikube) is transporting the files, then I guess it would continue to work as well.
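
Just to illustrate the idea (this is not something minikube sets up today), a PersistentVolume backed by an external NFS export reachable from all nodes would look roughly like this; the server address and path are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany          # usable from pods on any node at the same time
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.49.1   # placeholder: an NFS server reachable from every node
    path: /srv/exports     # placeholder: the exported directory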


david0 commented Jun 27, 2021

Oh, I see now that even if the hostpath provisioner set the correct permissions, it would still only work for a very limited number of use cases (e.g. ReadWriteOnce, temporary data with acceptable data loss).

That raises the question for me: what kind of PV do people typically use, or are expected to use, on a multi-node minikube? It looks to me as if host-mounting /tmp/hostpath-provisioner should work. Or do they install some other storage provider?

@spowelljr spowelljr added the long-term-support Long-term support issues that can't be fixed in code label Jul 28, 2021

R-omk commented Aug 9, 2021

Related #12165

k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 7, 2021

R-omk commented Dec 2, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 2, 2021
@sharifelgamal sharifelgamal added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Dec 22, 2021
@sharifelgamal sharifelgamal added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Mar 16, 2022
sharifelgamal (Collaborator) commented

Yeah, this is definitely still an issue in multinode.

stevester94 commented

As a workaround I am deploying a DaemonSet that mounts the hostpath-provisioner directory and sets all subdirectories to 777 every second:

apiVersion: v1
kind: Namespace
metadata:
  name: minikube-pv-hack
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: minikube-pv-hack
  namespace: minikube-pv-hack
spec:
  selector:
    matchLabels:
      name: minikube-pv-hack
  template:
    metadata:
      labels:
        name: minikube-pv-hack
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: minikube-pv-hack
        image: registry.access.redhat.com/ubi8:latest
        command:
        - bash
        - -c
        - |
          while : ; do
            chmod 777 /target/*
            sleep 1
          done
        volumeMounts:
        - name: host-vol
          mountPath: /target
      volumes:
      - name: host-vol
        hostPath:
          path: /tmp/hostpath-provisioner/default
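
Assuming the manifest above is saved as minikube-pv-hack.yaml (the filename is arbitrary), it can be applied with:

kubectl apply -f minikube-pv-hack.yaml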
