
Azure CSI Node does not format and mount new volume #539

Closed
carlosrbcunha opened this issue Sep 14, 2020 · 3 comments
@carlosrbcunha

What happened:

The Azure managed disk is attached to the VM, but the check that detects whether it is already formatted exits with status 2.

What you expected to happen:

The disk should be checked for an existing partition and file system; if none is present, a partition should be created and formatted with the selected file system.

How to reproduce it:

  • A new Azure disk is created via Terraform
  • The volume is registered in Nomad
  • A job is submitted that uses the previously registered volume

Error message

I0914 15:14:45.113179       1 nodeserver.go:121] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /csi/staging/teste3/rw-file-system-single-node-writer with mount options([ro])
I0914 15:14:45.113197       1 mount_linux.go:405] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0])
I0914 15:14:45.148726       1 mount_linux.go:408] Output: "", err: exit status 2
E0914 15:14:45.148763       1 utils.go:112] GRPC error: rpc error: code = Internal desc = could not format "/dev/disk/azure/scsi1/lun0"(lun: "0"), and mount it at "/csi/staging/teste3/rw-file-system-single-node-writer"

Nomad volume definition

id = "teste1"
name = "teste1"
type = "csi"
external_id = "/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/bifana-core-rg/providers/Microsoft.Compute/disks/bifana-core-teste1-disk"
plugin_id = "az-disk0"
access_mode = "single-node-writer"
attachment_mode = "file-system"
mount_options {
   fs_type = "ext4"
   mount_flags = ["ro"]
}

Anything else we need to know?:

Environment:

  • CSI Driver version:
DRIVER INFORMATION:
-------------------
Build Date: "2020-09-13T10:59:42Z"
Compiler: gc
Driver Name: disk.csi.azure.com
Driver Version: v0.9.0
Git Commit: 8e22159b825702d0f4cfbfdb67f1160d6a0bc0c9
Go Version: go1.14.4
Platform: linux/amd64
Topology Key: topology.disk.csi.azure.com/zone
  • Kubernetes version (use kubectl version):
    We are using Nomad v0.12.4 (8efaee4)
  • OS (e.g. from /etc/os-release):
    Ubuntu 18.04.5 LTS
  • Kernel (e.g. uname -a):
    25~18.04.1-Ubuntu SMP Sat Sep 5 15:28:57 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:

I attached the controller and node logs for review:
controller_log_teste3.log
node_log_test3.log

@carlosrbcunha
Author

The Nomad volume definition above has a typo. Here is the correct one:

id = "teste1"
name = "teste1"
type = "csi"
external_id = "/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/bifana-core-rg/providers/Microsoft.Compute/disks/teste3"
plugin_id = "az-disk0"
access_mode = "single-node-writer"
attachment_mode = "file-system"
mount_options {
   fs_type = "ext4"
   mount_flags = ["ro"]
}

@andyzhangx
Member

The error is due to incorrect mount options; fsType can be specified in the storage class instead:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com
parameters:
  skuname: StandardSSD_LRS
  fsType: ext4

ext4 is actually set by default, so removing the two mount_options above should work.
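Following that suggestion, the volume definition would drop the mount_options block entirely. A sketch based on the corrected definition above (same illustrative subscription ID and names):

```hcl
id              = "teste1"
name            = "teste1"
type            = "csi"
external_id     = "/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/bifana-core-rg/providers/Microsoft.Compute/disks/teste3"
plugin_id       = "az-disk0"
access_mode     = "single-node-writer"
attachment_mode = "file-system"
```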

@carlosrbcunha
Copy link
Author

Removing the two mount options was the solution. Nevertheless, it is not very intuitive, since putting the fs_type in the mount section seems like a logical choice; the code could accept it as a valid option.
Thanks very much for your help, and keep up the great work.

sozercan pushed a commit to sozercan/azuredisk-csi-driver that referenced this issue Aug 16, 2021
…zure-lib9

feat: skip account matching with special tags