
Attached disks resource leak. #5372

Open
vburenin opened this issue Jun 5, 2017 · 9 comments
Labels
component/portlayer/storage kind/defect Behavior that is inconsistent with what's intended status/needs-attention The issue needs to be discussed by the team

Comments

@vburenin
Contributor

vburenin commented Jun 5, 2017

User Statement:
I make a PortLayer WriteImage call, which is supposed to unpack the content of an image into a newly created VMDK. If an error occurs, for example a SHA256 mismatch, the VMDK disk is left mounted and not detached, which can easily cause a resource leak and make the VCH unusable.

For some reason, the VMDK is mounted read-only.

Acceptance Criteria:
The disk should be unmounted and detached in case of an error.

Logs

Jun  5 2017 20:24:45.024Z DEBUG op=269.6 (delta:45.253µs): [NewOperation] op=269.6 (delta:18.926µs) [github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers.(*StorageHandlersImpl).CreateImageStore:134]
Jun  5 2017 20:24:45.037Z INFO  op=269.6 (delta:12.706779ms): Refreshing image cache from datastore.
Jun  5 2017 20:24:45.072Z INFO  Creating directory [datastore1] vic-docker/VIC/containerd-storage/images
Jun  5 2017 20:24:45.083Z INFO  Creating directory [datastore1] vic-docker/VIC/containerd-storage/images/scratch
Jun  5 2017 20:24:45.093Z INFO  Creating directory [datastore1] vic-docker/VIC/containerd-storage/images/scratch/imageMetadata
Jun  5 2017 20:24:45.100Z INFO  op=269.6 (delta:75.615866ms): Creating image scratch ([datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk)
Jun  5 2017 20:24:45.100Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.(*Manager).CreateAndAttach:92] [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:24:45.100Z INFO  op=269.6 (delta:75.734058ms): Create/attach vmdk [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk from parent <nil>
Jun  5 2017 20:24:45.129Z DEBUG op=269.6 (delta:104.833413ms): Attach reconfigure task=Task:haTask-15-vim.VirtualMachine.reconfigure-156709180
Jun  5 2017 20:24:45.234Z DEBUG op=269.6 (delta:209.617191ms): Mapping vmdk to pci device [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:24:45.239Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.findDiskByFilename:262] VirtualMachine:15
Jun  5 2017 20:24:45.239Z DEBUG op=269.6 (delta:214.550203ms): Looking for attached disk matching filename [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:24:45.239Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.findDisk:224] VirtualMachine:15
Jun  5 2017 20:24:45.256Z DEBUG op=269.6 (delta:231.381045ms): backing file name [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:24:45.256Z DEBUG op=269.6 (delta:231.405678ms): Found candidate disk for [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk at [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:24:45.256Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.findDisk:224] [16.854724ms] VirtualMachine:15
Jun  5 2017 20:24:45.256Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.findDiskByFilename:262] [16.893573ms] VirtualMachine:15
Jun  5 2017 20:24:45.256Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.waitForDevice:47] /sys/bus/pci/devices/0000:03:00.0/host0/subsystem/devices/0:0:0:0/block
Jun  5 2017 20:24:45.256Z DEBUG op=269.6 (delta:2.625µs): Waiting for attached disk to appear in /sys/bus/pci/devices/0000:03:00.0/host0/subsystem/devices/0:0:0:0/block, or timeout
Jun  5 2017 20:24:45.256Z INFO  op=269.6 (delta:505.742µs): Attached disk present at /dev/sda
Jun  5 2017 20:24:45.256Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.waitForDevice:47] [527.012µs] /sys/bus/pci/devices/0000:03:00.0/host0/subsystem/devices/0:0:0:0/block
Jun  5 2017 20:24:45.256Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.(*Manager).CreateAndAttach:92] [156.297269ms] [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:24:45.256Z DEBUG op=269.6 (delta:231.986852ms): Scratch disk created with size 8000000
Jun  5 2017 20:24:45.256Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/fs.(*Ext4).Mkfs:37] /dev/sda
Jun  5 2017 20:24:45.256Z INFO  Creating ext4 filesystem on device /dev/sda
Jun  5 2017 20:24:45.663Z DEBUG Filesystem created on device /dev/sda
Jun  5 2017 20:24:45.663Z DEBUG [ END ] [github.com/vmware/vic/pkg/fs.(*Ext4).Mkfs:37] [406.74274ms] /dev/sda
Jun  5 2017 20:24:45.663Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.(*Manager).Detach:301] /dev/sda
Jun  5 2017 20:24:45.663Z INFO  op=269.6 (delta:638.787743ms): Detaching disk /dev/sda
Jun  5 2017 20:24:45.663Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.findDiskByFilename:262] VirtualMachine:15
Jun  5 2017 20:24:45.663Z DEBUG op=269.6 (delta:638.80333ms): Looking for attached disk matching filename [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:24:45.663Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.findDisk:224] VirtualMachine:15
Jun  5 2017 20:24:45.689Z DEBUG op=269.6 (delta:664.479748ms): backing file name [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:24:45.689Z DEBUG op=269.6 (delta:664.492637ms): Found candidate disk for [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk at [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:24:45.689Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.findDisk:224] [25.687727ms] VirtualMachine:15
Jun  5 2017 20:24:45.689Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.findDiskByFilename:262] [25.700939ms] VirtualMachine:15
Jun  5 2017 20:24:45.692Z DEBUG op=269.6 (delta:667.531746ms): Detach reconfigure task=Task:haTask-15-vim.VirtualMachine.reconfigure-156709186
Jun  5 2017 20:24:45.780Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.(*Manager).Detach:301] [117.433755ms] /dev/sda
Jun  5 2017 20:24:45.794Z DEBUG Index: inserting http:///storage/images/containerd-storage/scratch (parent: http:///storage/images/containerd-storage/scratch) in index
Jun  5 2017 20:24:45.795Z DEBUG [BEGIN] [github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers.(*ContainersHandlersImpl).GetContainerListHandler:286]
Jun  5 2017 20:24:45.795Z DEBUG [ END ] [github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers.(*ContainersHandlersImpl).GetContainerListHandler:286] [33.272µs] 
Jun  5 2017 20:25:41.131Z DEBUG op=269.7 (delta:8.643µs): [NewOperation] op=269.7 (delta:3.582µs) [github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers.(*StorageHandlersImpl).WriteImage:296]
Jun  5 2017 20:25:41.131Z DEBUG op=269.7 (delta:91.967µs): Getting image scratch from http:///storage/images/containerd-storage
Jun  5 2017 20:25:41.131Z DEBUG op=269.7 (delta:178.688µs): Getting image extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba from http:///storage/images/containerd-storage
Jun  5 2017 20:25:41.131Z DEBUG op=269.7 (delta:218.368µs): Image extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba not in cache, retreiving from datastore
Jun  5 2017 20:25:41.131Z DEBUG [BEGIN] [github.com/vmware/vic/lib/portlayer/storage/vsphere.(*ImageStore).GetImage:424] http:///storage/images/containerd-storage/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba
Jun  5 2017 20:25:41.203Z DEBUG [ END ] [github.com/vmware/vic/lib/portlayer/storage/vsphere.(*ImageStore).GetImage:424] [71.59693ms] http:///storage/images/containerd-storage/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba
Jun  5 2017 20:25:41.203Z INFO  Creating directory [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba
Jun  5 2017 20:25:41.208Z INFO  Creating directory [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/imageMetadata
Jun  5 2017 20:25:41.233Z INFO  op=269.7 (delta:102.422664ms): Creating image extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba ([datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk)
Jun  5 2017 20:25:41.233Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.(*Manager).CreateAndAttach:92] [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.233Z INFO  op=269.7 (delta:102.533613ms): Create/attach vmdk [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk from parent [datastore1] vic-docker/VIC/containerd-storage/images/scratch/scratch.vmdk
Jun  5 2017 20:25:41.274Z DEBUG op=269.7 (delta:142.957065ms): Attach reconfigure task=Task:haTask-15-vim.VirtualMachine.reconfigure-156709211
Jun  5 2017 20:25:41.377Z DEBUG op=269.7 (delta:246.716357ms): Mapping vmdk to pci device [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.387Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.findDiskByFilename:262] VirtualMachine:15
Jun  5 2017 20:25:41.387Z DEBUG op=269.7 (delta:256.066912ms): Looking for attached disk matching filename [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.387Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.findDisk:224] VirtualMachine:15
Jun  5 2017 20:25:41.408Z DEBUG op=269.7 (delta:277.547185ms): backing file name [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.408Z DEBUG op=269.7 (delta:277.561243ms): Found candidate disk for [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk at [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.408Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.findDisk:224] [21.493287ms] VirtualMachine:15
Jun  5 2017 20:25:41.408Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.findDiskByFilename:262] [21.510915ms] VirtualMachine:15
Jun  5 2017 20:25:41.408Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.waitForDevice:47] /sys/bus/pci/devices/0000:03:00.0/host0/subsystem/devices/0:0:0:0/block
Jun  5 2017 20:25:41.408Z DEBUG op=269.7 (delta:2.998µs): Waiting for attached disk to appear in /sys/bus/pci/devices/0000:03:00.0/host0/subsystem/devices/0:0:0:0/block, or timeout
Jun  5 2017 20:25:41.409Z INFO  op=269.7 (delta:333.306µs): Attached disk present at /dev/sda
Jun  5 2017 20:25:41.409Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.waitForDevice:47] [356.348µs] /sys/bus/pci/devices/0000:03:00.0/host0/subsystem/devices/0:0:0:0/block
Jun  5 2017 20:25:41.409Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.(*Manager).CreateAndAttach:92] [175.419ms] [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.409Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/fs.(*Ext4).Mount:60] /dev/sda
Jun  5 2017 20:25:41.409Z INFO  Mounting /dev/sda to /tmp/mnt-extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba251384577
Jun  5 2017 20:25:41.487Z DEBUG [ END ] [github.com/vmware/vic/pkg/fs.(*Ext4).Mount:60] [78.138134ms] /dev/sda
Jun  5 2017 20:25:41.510Z DEBUG op=269.7 (delta:379.560304ms): extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba wrote 1106304 bytes
Jun  5 2017 20:25:41.518Z ERROR op=269.7 (delta:387.452461ms): Cleaning up failed image extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba
Jun  5 2017 20:25:41.518Z DEBUG op=269.7 (delta:387.463194ms): Unmounting abandoned disk
Jun  5 2017 20:25:41.518Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/fs.(*Ext4).Unmount:68] /tmp/mnt-extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba251384577
Jun  5 2017 20:25:41.518Z INFO  Unmounting /tmp/mnt-extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba251384577
Jun  5 2017 20:25:41.538Z DEBUG [ END ] [github.com/vmware/vic/pkg/fs.(*Ext4).Unmount:68] [19.310673ms] /tmp/mnt-extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba251384577
Jun  5 2017 20:25:41.538Z DEBUG op=269.7 (delta:406.824461ms): Detaching abandoned disk
Jun  5 2017 20:25:41.538Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.(*Manager).Detach:301] /dev/sda
Jun  5 2017 20:25:41.538Z INFO  op=269.7 (delta:406.849975ms): Detaching disk /dev/sda
Jun  5 2017 20:25:41.538Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.findDiskByFilename:262] VirtualMachine:15
Jun  5 2017 20:25:41.538Z DEBUG op=269.7 (delta:406.863128ms): Looking for attached disk matching filename [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.538Z DEBUG [BEGIN] [github.com/vmware/vic/pkg/vsphere/disk.findDisk:224] VirtualMachine:15
Jun  5 2017 20:25:41.579Z DEBUG op=269.7 (delta:448.145763ms): backing file name [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.579Z DEBUG op=269.7 (delta:448.174152ms): Found candidate disk for [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk at [datastore1] vic-docker/VIC/containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.579Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.findDisk:224] [41.311246ms] VirtualMachine:15
Jun  5 2017 20:25:41.579Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.findDiskByFilename:262] [41.326207ms] VirtualMachine:15
Jun  5 2017 20:25:41.622Z DEBUG op=269.7 (delta:491.783709ms): Detach reconfigure task=Task:haTask-15-vim.VirtualMachine.reconfigure-156709217
Jun  5 2017 20:25:41.675Z DEBUG [ END ] [github.com/vmware/vic/pkg/vsphere/disk.(*Manager).Detach:301] [137.619158ms] /dev/sda
Jun  5 2017 20:25:41.675Z INFO  Removing containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/manifest
Jun  5 2017 20:25:41.700Z INFO  Removing containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba.vmdk
Jun  5 2017 20:25:41.726Z INFO  Removing containerd-storage/images/extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba
Jun  5 2017 20:25:41.744Z ERROR op=269.7 (delta:613.25648ms): WriteImage of extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba failed with: Failed to validate image checksum. Expected sha256:1cae461a1479c5a24dd38bd5f377ce65f531399a7db8c3ece891ac2197173f1d, got sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba
root@ [ ~ ]# mount 
rootfs on / type rootfs (rw,size=965040k,nr_inodes=241260)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=965052k,nr_inodes=241263,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=24,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/sda on /tmp/mnt-extract sha256:4ac76077f2c741c856a2419dfdb0804b18e48d2e1a9ce9c6a3f0605a2078caba251384577 type ext4 (ro,relatime,data=ordered)
@vburenin
Contributor Author

vburenin commented Jun 5, 2017

github.com/vmware/vic/lib/portlayer/storage/vsphere/image.go:334

    actualSum := fmt.Sprintf("sha256:%x", h.Sum(nil))
    if actualSum != sum {
        err = fmt.Errorf("Failed to validate image checksum. Expected %s, got %s", sum, actualSum)
        return nil, err
    }

    if err = vmdisk.Unmount(); err != nil {
        return nil, err
    }

    if err = v.dm.Detach(op, vmdisk); err != nil {
        return nil, err
    }
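The snippet above shows the problem location: on a checksum mismatch the function returns before the subsequent Unmount and Detach calls ever run. Below is a minimal, self-contained Go sketch of the defer-based cleanup pattern the fix implies; the `disk` type and `writeImage` function are hypothetical stand-ins for the real vmdisk/Manager types, not the VIC API.

```go
package main

import "fmt"

// disk is a hypothetical stand-in for the real vmdisk type; it only
// tracks state so the cleanup behavior is observable.
type disk struct {
	mounted  bool
	attached bool
}

func (d *disk) Unmount() error { d.mounted = false; return nil }
func (d *disk) Detach() error  { d.attached = false; return nil }

// writeImage sketches the intended flow: cleanup is deferred right after
// attach/mount, so it runs on every error path, including the early
// return on checksum mismatch.
func writeImage(d *disk, actualSum, expectedSum string) (err error) {
	d.attached = true // stands in for Manager.CreateAndAttach
	d.mounted = true  // stands in for Ext4.Mount

	defer func() {
		if err != nil {
			// Best-effort cleanup on any error path.
			d.Unmount()
			d.Detach()
		}
	}()

	if actualSum != expectedSum {
		return fmt.Errorf("failed to validate image checksum: expected %s, got %s", expectedSum, actualSum)
	}
	return nil
}

func main() {
	d := &disk{}
	err := writeImage(d, "sha256:bad", "sha256:expected")
	fmt.Printf("err=%v mounted=%v attached=%v\n", err != nil, d.mounted, d.attached)
	// → err=true mounted=false attached=false
}
```

Because the deferred closure inspects the named return value err, the early return on checksum mismatch still triggers the unmount and detach.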

@anchal-agrawal
Contributor

Pinging @matthewavery for estimate and priority. Temporarily removing high priority - please re-assess based on our criteria.

@anchal-agrawal anchal-agrawal added status/needs-attention The issue needs to be discussed by the team and removed priority/p0 labels Jun 5, 2017
@hickeng hickeng added the kind/defect Behavior that is inconsistent with what's intended label Jun 9, 2017
@hickeng
Member

hickeng commented Jun 9, 2017

Not only should it be unmounted and detached; we should also delete the layer on disk so that we can re-pull.

@vburenin
Contributor Author

The problem is not in the code I pointed to. There is a deferred call at the beginning that should do all this cleanup, but it is a little convoluted and needs to be redesigned.

As it turns out, the root cause is a mount point name that contains spaces on my side. Spaces are replaced by the "\040" escape sequence in /proc/self/mountinfo, which leads to a string mismatch when we check whether this mount point is actually mounted. Because there is no match (" " != "\040"), the partition is not unmounted, and that leads to a resource leak.
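For illustration, a minimal sketch of decoding those octal escapes before comparing mount points; unescapeMountPath is a hypothetical helper, not the vendored Docker code.

```go
package main

import (
	"fmt"
	"strconv"
)

// unescapeMountPath decodes the octal escapes the kernel writes into
// /proc/self/mountinfo fields: \040 (space), \011 (tab), \012 (newline)
// and \134 (backslash). Without this, a mount point containing spaces
// never compares equal to the raw path that was mounted.
func unescapeMountPath(s string) string {
	out := make([]byte, 0, len(s))
	for i := 0; i < len(s); i++ {
		// A backslash followed by three octal digits is an escape.
		if s[i] == '\\' && i+3 < len(s) {
			if n, err := strconv.ParseUint(s[i+1:i+4], 8, 8); err == nil {
				out = append(out, byte(n))
				i += 3
				continue
			}
		}
		out = append(out, s[i])
	}
	return string(out)
}

func main() {
	raw := `/tmp/mnt-extract\040sha256:4ac76077`
	fmt.Println(unescapeMountPath(raw))
	// → /tmp/mnt-extract sha256:4ac76077
}
```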

@corrieb
Contributor

corrieb commented Jul 18, 2017

@vburenin is this a theoretical problem for a port layer consumer, or could this manifest as a bug for a customer using VIC? Need to know how to prioritize for 1.2.

@vburenin
Contributor Author

@corrieb Unfortunately, this is not a theoretical problem. There is a bug in how the vendored Docker code handles unmounting. We either need to restrict image names from including spaces, or stop using user-provided image names when storing them in the datastore. It went a little further than this later during my containerd work: ESXi doesn't allow file names longer than 128 bytes, so I ended up using name hashes instead of real image names to get around the situation where containerd wanted to use really long names, over 200 bytes.

@hickeng
Member

hickeng commented Jul 19, 2017

@vburenin Can you look over #5753, given that it touches on this? The PR is focused on correct handling of concurrent access to the same disk, but it does alter the low-level flow for error handling.
I don't know if it'll address this problem.

@mdubya66
Contributor

Closing as resolved by PR #5373.

@hickeng
Member

hickeng commented Jul 31, 2018

#5373 does not close this. I suspect a mix up with #4732.

@hickeng hickeng reopened this Jul 31, 2018