Azure CSI Node does not format and mount new volume #539
Comments
The Nomad job definition has a typo.
The error is due to incorrect mount options; the StorageClass in use is:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com
parameters:
  skuname: StandardSSD_LRS
  fsType: ext4
```
Removing the two mount options was the solution. Nevertheless, it is not very intuitive, since having the fsType on the mount part is a "logical" option. The code could accept it as a viable option.
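To make that suggestion concrete: in CSI terms, an fsType given "on the mount part" arrives in the volume's mount capability, and a node plugin could fall back to it when no fsType parameter is set. The sketch below is hypothetical and not this driver's actual code; `pickFsType` and `defaultFsType` are invented names, and only the CSI spec Go bindings are assumed.

```go
package main

import (
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// defaultFsType is an illustrative fallback, mirroring the ext4 default used above.
const defaultFsType = "ext4"

// pickFsType (hypothetical helper) prefers an explicit fsType parameter, then
// the fsType carried in the volume's mount capability, then a default, so an
// fsType supplied "on the mount part" would still be honoured.
func pickFsType(params map[string]string, volCap *csi.VolumeCapability) string {
	if v, ok := params["fsType"]; ok && v != "" {
		return v
	}
	if m := volCap.GetMount(); m != nil && m.GetFsType() != "" {
		return m.GetFsType()
	}
	return defaultFsType
}

func main() {
	// A capability shaped as a CSI NodeStageVolume request would carry it.
	volCap := &csi.VolumeCapability{
		AccessType: &csi.VolumeCapability_Mount{
			Mount: &csi.VolumeCapability_MountVolume{FsType: "ext4"},
		},
	}
	fmt.Println(pickFsType(nil, volCap)) // prints "ext4"
}
```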
What happened:
The Azure managed disk is attached to the VM, but when the node plugin checks whether it is formatted, it exits with error 2.
What you expected to happen:
The disk should be checked for an existing partition and file system, and if none is present, a partition should be created and formatted with the selected file system.
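For context, the "error 2" is consistent with blkid's exit status 2, which simply means no filesystem signature was found on the device. Below is a minimal sketch of the detect-then-format flow described above; it is an illustration, not the driver's code, and the device path, mount target, and helper names are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// getDiskFormat probes a block device with blkid. blkid exits with status 2
// when the requested TYPE tag is not found, i.e. the device carries no
// filesystem; here that is treated as "unformatted" rather than as a failure.
func getDiskFormat(device string) (string, error) {
	out, err := exec.Command("blkid", "-s", "TYPE", "-o", "value", device).CombinedOutput()
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 2 {
			return "", nil // no filesystem found yet
		}
		return "", fmt.Errorf("blkid failed on %s: %v (%s)", device, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

// formatAndMount creates a filesystem only when the device has none,
// then mounts it at the target path.
func formatAndMount(device, target, fsType string) error {
	existing, err := getDiskFormat(device)
	if err != nil {
		return err
	}
	if existing == "" {
		// No filesystem yet: format with the requested type (ext4 here).
		if out, err := exec.Command("mkfs."+fsType, "-F", device).CombinedOutput(); err != nil {
			return fmt.Errorf("mkfs.%s failed: %v (%s)", fsType, err, out)
		}
	}
	if out, err := exec.Command("mount", "-t", fsType, device, target).CombinedOutput(); err != nil {
		return fmt.Errorf("mount failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	// /dev/sdc and /mnt/vol are placeholder values.
	if err := formatAndMount("/dev/sdc", "/mnt/vol", "ext4"); err != nil {
		fmt.Println(err)
	}
}
```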
How to reproduce it:
Error message
Nomad volume definition
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): we are using Nomad v0.12.4 (8efaee4)
- OS: Ubuntu 18.04.5 LTS
- Kernel (`uname -a`): …25~18.04.1-Ubuntu SMP Sat Sep 5 15:28:57 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
I attached the controller and node logs for review:
controller_log_teste3.log
node_log_test3.log