Windows 10: Docker does not release disk space after deleting all images and containers #244

Closed
aludin opened this issue Nov 21, 2016 · 224 comments


@aludin

aludin commented Nov 21, 2016

Description
When running Docker images on Windows 10 Professional, the Docker virtual disk MobyLinuxVM.vhdx keeps growing. After finishing with the images/containers and deleting them all, the virtual disk does not shrink. Expected behavior: MobyLinuxVM.vhdx should shrink and release the unused space.

Steps to reproduce the issue:

  1. Run various containers and observe that MobyLinuxVM.vhdx grows (once a container was running, I added my own software, which amounted to about 20 GB).
  2. Delete all containers and images using docker rm $(docker ps -a -q) and docker rmi $(docker images -q).

Describe the results you received:
MobyLinuxVM.vhdx is still 40+ GB in size.

Describe the results you expected:
MobyLinuxVM.vhdx should have shrunk back to the size it had when Docker was first installed on my Windows 10 box.

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 23:26:11 2016
 OS/Arch:      windows/amd64

Server:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 23:26:11 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 0
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.27-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.919 GiB
Name: moby
ID: IWNC:POO5:TSO4:GIAK:MLHK:C46G:DRJF:IWBM:YSRO:COUL:6TKR:M2ZC
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 12
 Goroutines: 22
 System Time: 2016-11-20T17:06:20.2301916Z
 EventsListeners: 0
Registry: https://index.docker.io/v1/
WARNING: No kernel memory limit support
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):
Windows 10 Professional. Latest Docker installation, using Hyper-V.

@rn
Contributor

rn commented Nov 21, 2016

@aludin Thanks for your report. We are already tracking this issue internally and have almost all the pieces in place. Hopefully this feature will be added in one of the next Betas.

@rn
Contributor

rn commented Nov 29, 2016

@aludin We recently added TRIM support, which will reclaim unused disk space when the application exits. This is currently checked into our master tree and will be available in the next Beta release (hopefully later this week).

@aludin
Author

aludin commented Nov 29, 2016

@rneugeba Great! I will test as soon as it is available!

@dgageot dgageot self-assigned this Nov 29, 2016
@rn
Contributor

rn commented Nov 30, 2016

Beta31 has been released: https://download.docker.com/win/beta/InstallDocker.msi and TRIM support is enabled. I'm closing this issue for now. Please re-open if it is not working as expected.

Note: you have to quit the application (or restart it) in order to reclaim the disk space.

@rn
Contributor

rn commented Dec 8, 2016

I'm re-opening this as there was a minor bug in Beta31 which prevented TRIM support from actually being activated. We noticed this too late to merge a fix into Beta32 (released this week), but it should be available in the next Beta (Beta33). Apologies for the delay.

@rn
Contributor

rn commented Dec 15, 2016

Beta 33 was just released and it contains the fix mentioned above. Please give it a try.
Closing this issue (again).

@rn rn closed this as completed Dec 15, 2016
@lijiayi

lijiayi commented Jan 24, 2017

There has to be a workaround, right?

@rn
Contributor

rn commented Jan 24, 2017

This has now been released on both stable and beta, so no workaround should be necessary.

@citron

citron commented Mar 9, 2017

Hi,
No progress on this issue? This is a major problem on Windows 10.
Please, Docker team, fix it!

@rn
Contributor

rn commented Mar 9, 2017

@citron I'm pretty sure we fixed this on both stable and beta channels. If this issue still persists please open a new issue with a detailed description and a diagnostics ID.

Thanks

@citron

citron commented Mar 9, 2017

@rneugeba Do I really have to reopen the exact same case? It is annoying: I decided to quit Docker on Windows today because of this vhdx-does-not-get-slim problem. I was running the very latest stable Docker on Windows 10 64-bit.

@rn
Contributor

rn commented Mar 9, 2017

@citron we enabled TRIM support in the Linux VM, which should reclaim unused disk space; however, Hyper-V only does most of the reclamation when you shut down the VM. There is very little online reclamation of unused disk space in Hyper-V.

The VM is shut down when you quit the application. Does this reclaim the disk space? And if not, how do you measure it? Windows supports sparse files, so the VHDX may report a particular size but occupy less than that on disk.
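
One way to check that on the host is to compare the VHDX's maximum size with what the file actually occupies on disk. A rough sketch (the path is illustrative; adjust it to wherever your installation keeps MobyLinuxVM.vhdx):

# Requires an elevated PowerShell with the Hyper-V module available.
$vhd = Get-VHD -Path 'C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\MobyLinuxVM.vhdx'
"{0:N1} GB maximum, {1:N1} GB currently occupied on disk" -f ($vhd.Size / 1GB), ($vhd.FileSize / 1GB)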

@FuYu3699

FuYu3699 commented Apr 17, 2017

@citron
I had a similar issue: Docker used all of the 60 GB of Hyper-V-allocated disk space. I tried a few things which did not work. Then I reset and powered off the virtual server using Hyper-V Manager, followed by the reset tool in the Docker settings dialog. I reset Docker, it restarted the virtual server, and the disk shrank back to under 3 GB.
There is another "reset to factory default" option that I was prepared to use, but it seems the "reset docker..." option is sufficient.

Hope this helps.

@HamedOsama

It appears that virtual disk space tends to expand over time, and even after removing all images and containers, it doesn't automatically shrink. To reduce its size, you need to perform manual optimization.

This worked for me after one week of trying

Windows Home version:
wsl --shutdown
diskpart
select vdisk file="C:\Users\User\AppData\Local\Docker\wsl\data\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk

Windows Pro version:
Optimize-VHD -Path C:\Users\username\AppData\Local\Docker\wsl\data\ext4.vhdx -Mode full
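
For what it's worth, the Windows Home steps can also be run non-interactively by feeding diskpart a script file. A minimal sketch, assuming the same default ext4.vhdx path as above (run from an elevated PowerShell, with Docker Desktop quit first):

wsl --shutdown
@"
select vdisk file="$Env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
"@ | Set-Content "$Env:TEMP\compact-docker-vhdx.txt" -Encoding ASCII
diskpart /s "$Env:TEMP\compact-docker-vhdx.txt"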

@mmarinchenko

The WSL team has added experimental support for sparse virtual disks: microsoft/WSL#4699 (comment)

@0xced

0xced commented Sep 19, 2023

Beware, Docker might not work with the latest WSL update, see microsoft/WSL#10487 (comment)

How do I revert this update? Docker doesn't work.

This new feature looks really great but I'll personally wait for the stable WSL release (and people reporting that Docker actually works 😅).

@idkthisnik

I had that problem. It helped to clear C:\Users\<user>\AppData\Local\Temp\docker-scout\sha256; it was about 100 GB for me. It seems to be some kind of cache of containers or images, but I didn't find full info about it.

@thy-neighbor

I had that problem. It helped to clear C:\Users\<user>\AppData\Local\Temp\docker-scout\sha256; it was about 100 GB for me. It seems to be some kind of cache of containers or images, but I didn't find full info about it.

This worked for me too, though in my case image data was being stored in C:\Users\<user>\AppData\Local\Temp\ in multiple folders named with a "stereoscope-" prefix. Removing them freed 40 GB being held hostage. I guess one could write a script to remove these files from the command line.

Still waiting for an official fix; periodically going through this process and using the diskpart commands mentioned in this thread is unacceptable long term.
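
A minimal sketch of such a cleanup in PowerShell (the folder names are the ones reported in this thread and may differ on your machine; quit Docker Desktop first so nothing is holding the files open):

Remove-Item "$Env:LOCALAPPDATA\Temp\docker-scout" -Recurse -Force -ErrorAction SilentlyContinue
Get-ChildItem "$Env:LOCALAPPDATA\Temp" -Directory -Filter 'stereoscope-*' |
    Remove-Item -Recurse -Force -ErrorAction SilentlyContinue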

@mmarinchenko

@idkthisnik

I had that problem. It helped to clear C:\Users\<user>\AppData\Local\Temp\docker-scout\sha256; it was about 100 GB for me. It seems to be some kind of cache of containers or images, but I didn't find full info about it.

I created docker/roadmap#578

@gelomon

gelomon commented Oct 27, 2023

It appears that virtual disk space tends to expand over time, and even after removing all images and containers, it doesn't automatically shrink. To reduce its size, you need to perform manual optimization.

This worked for me after one week of trying

Windows Home version:
wsl --shutdown
diskpart
select vdisk file="C:\Users\User\AppData\Local\Docker\wsl\data\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk

Windows Pro version: Optimize-VHD -Path C:\Users\username\AppData\Local\Docker\wsl\data\ext4.vhdx -Mode full

Thank you! I was literally running out of disk space; I deleted files inside my volumes but still nothing.
This is the only thing that worked on my end.

@Kanu9

Kanu9 commented Dec 11, 2023

@idkthisnik

I had that problem. It helped to clear C:\Users\<user>\AppData\Local\Temp\docker-scout\sha256; it was about 100 GB for me. It seems to be some kind of cache of containers or images, but I didn't find full info about it.

I created docker/roadmap#578

I don't know if it helps, but I found a lot of "docker-tarball" files randomly in the Temp folder (C:\Users\<user>\AppData\Local\Temp); these are probably created by Docker to pass the build context files to the daemon. Deleting them saved me up to 200 GB. Does it work for you?
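
If anyone wants to script that as well, a hedged one-liner (assuming the files really do start with a docker-tarball prefix, as described above):

Get-ChildItem "$Env:LOCALAPPDATA\Temp" -Filter 'docker-tarball*' | Remove-Item -Recurse -Force -ErrorAction SilentlyContinue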

@schivmeister

schivmeister commented Dec 20, 2023

@idkthisnik

I had that problem. It helped to clear C:\Users\<user>\AppData\Local\Temp\docker-scout\sha256; it was about 100 GB for me. It seems to be some kind of cache of containers or images, but I didn't find full info about it.

I created docker/roadmap#578

I don't know if it helps, but I found a lot of "docker-tarball" files randomly in the Temp folder (C:\Users\<user>\AppData\Local\Temp); these are probably created by Docker to pass the build context files to the daemon. Deleting them saved me up to 200 GB. Does it work for you?

I have been stumped by this for several days, on Windows 11 with WSL2 set up with Docker and half a dozen images and containers.

I kept running out of disk space on C: even though my VHDs are on D:. Nothing helped to identify what was eating up all that space! Funnily enough, Storage Sense reported 25 GB of Temp space used, but only a few MB were removable!

Then I tried monitoring the disk writes using Process Monitor. During builds, it was writing to some %LOCALAPPDATA%\Temp\{UUID}, which turned out to be swap.vhdx, and that's understandable.

Then, when the containers were spinning up, I saw it was writing to docker-scout, which I knew about beforehand because TreeSize reported it to be big. But then I saw something strange, guess what? It was writing to a certain stereoscope folder with tarballs, which looked like exports of the containers.

I don't know what those are for, but even though the folder reported large sizes, the contents within looked minuscule. As they were in Temp, I chose to delete them. That saved 25 GB, exactly the amount reported but unidentified by Storage Sense and TreeSize! Not even a find on the C: mount from within WSL found it.

We need a way to switch the entirety of storage and operations to some other location, please. This annoyed me like nothing else and made me lose work hours.

@vertigoths

For me, "Docker Desktop" -> "Troubleshoot" -> "Reset to factory defaults" fixed it.


@schivmeister

@idkthisnik

I had that problem. It helped to clear C:\Users\<user>\AppData\Local\Temp\docker-scout\sha256; it was about 100 GB for me. It seems to be some kind of cache of containers or images, but I didn't find full info about it.

I created docker/roadmap#578

I don't know if it helps, but I found a lot of "docker-tarball" files randomly in the Temp folder (C:\Users\<user>\AppData\Local\Temp); these are probably created by Docker to pass the build context files to the daemon. Deleting them saved me up to 200 GB. Does it work for you?

BTW, not sure if this is turning out to be a distinct issue from what's reported in this ticket, but the docker-scout files occupying this invisible space are part of the same process that also eats up RAM (docker/for-mac#6987). Following the instructions there to disable SBOM indexing helped stop these files from being recreated.

@paulwababu

8 years later crazy

@massimo03

massimo03 commented Apr 29, 2024

Hello everybody, I found myself in the same situation and none of these commands worked for me:

  1. docker system prune -a --volumes
  2. diskpart
  3. Optimize-VHD -Path "" -Mode Full

So in the end I had to delete the virtual disk, losing all db data stored in it.

Now, how much time do I have before I face the same situation again? I don't know, but I would like to avoid having to delete the disk every now and then and re-dump my databases.

Docker: 4.29.0

The compose.yaml I used:

version: '3.1'
services:
  mysql:
    image: mysql
    container_name: database-mysql
    hostname: database-mysql
    command: --secure-file-priv=/mysql-upload
    restart: always
    volumes:
      - ./database:/var/lib/mysql
      - ./mysql-upload:/mysql-upload
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=<>
      - TZ=UTC
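
One way to avoid losing the data next time is to dump the databases to the host before removing the virtual disk and re-import them afterwards. A rough sketch using the container name and environment variable from the compose file above (run it from cmd or a WSL shell, since PowerShell's > redirection changes the output encoding):

docker exec database-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql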

@Pygeretmus

Hi, I had the same problem!
Uninstalling Docker helped me and freed up 60 GB of space (you can reinstall it again afterwards).

@massimo03

Hi, I had the same problem! Uninstalling Docker helped me and freed up 60 GB of space (you can reinstall it again afterwards).

Not for me; I tried, but nothing changed and all free disk space was still used up, so I had to delete the data virtual disk.

@hsarbia

hsarbia commented May 6, 2024

Hi @massimo03, @Pygeretmus & @paulwababu,

I have been experiencing this same frustrating issue for a long time as well: the WSL 2 ext4.vhdx Docker data volume was not being reduced in size even though I was pruning everything (docker system prune --all).

But yesterday I managed to shrink this ext4.vhdx WSL data volume from 50 GB to 6.30 GB by following this one-liner command found here: https://dev.to/marzelin/how-to-reduce-size-of-docker-data-volume-in-docker-desktop-for-windows-v2-5d38#comment-1gpen

Windows PowerShell command:
Optimize-VHD -Path $Env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx -Mode Full

Just make sure beforehand to quit Docker Desktop and that the Docker daemon is not running.
Also, wsl --shutdown might be useful, but in my case I didn't need it.
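
To check the result afterwards, a quick readout of the file's current size should do (same path as above):

(Get-Item "$Env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx").Length / 1GB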

My environment:
Docker Desktop version: 4.29.0
WSL version: 2.1.5.0
Kernel version: 5.15.146.1-2

Please let me know if that worked for you as well. Have a great day!

@GuidoPH

GuidoPH commented Jul 8, 2024

I thought I had the exact same issue: the .vhdx file was hogging 100+ GB for no apparent reason.
None of my images reported using the space, none of the volumes, etc.
The disk optimization commands also did nothing.
I was about to nuke my entire install when I found the culprit: I had a container with 127 GB of logs, so be sure to check those too!
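
If runaway container logs are the culprit, the json-file log driver can be capped so they cannot grow without bound. A minimal sketch of the daemon configuration (in Docker Desktop this is editable under Settings > Docker Engine; the values are illustrative):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}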

@GoiDcl

GoiDcl commented Sep 4, 2024

I see they managed to clean up the %LOCALAPPDATA%/Docker/wsl/data folder by just moving the .vhdx file to the %LOCALAPPDATA%/Docker/wsl/disk folder. Well done. However, the issue still persists, and I need to purge data once in a while, or kill the WSL processes in Task Manager and delete docker_data.vhdx manually, whenever my free disk space hits 0 bytes.


@JocPelletier

JocPelletier commented Nov 28, 2024

Will it be fixed before 2030?

Since Docker Desktop 4.34.0: "Windows now supports automatic reclamation of disk space in Docker Desktop for WSL2 installations using a managed virtual hard disk."

https://docs.docker.com/desktop/features/wsl/

@djs55

djs55 commented Nov 29, 2024

@JocPelletier thanks for highlighting the release note! I'll close this ticket (but if there are further bugs then feel free to open new ones.)

@djs55 djs55 closed this as completed Nov 29, 2024
@GoiDcl

GoiDcl commented Nov 29, 2024

Will it be fixed before 2030?

Since Docker Desktop 4.34.0: "Windows now supports automatic reclamation of disk space in Docker Desktop for WSL2 installations using a managed virtual hard disk."

https://docs.docker.com/desktop/features/wsl/

As of 4.35.1 (my current version) I still face the issue. It is even worse than that. I had a lot of junk piled up after testing and ran out of free disk space again. Since all this junk data was in Minio's volume, I deleted the Minio container and proceeded to run docker volume prune -f and then docker builder prune -f. Docker happily reported it had reclaimed over 7 GB of data, when in reality I only gained 1.5 GB. Good job on the automatic reclamation of disk space!!

@vincolus

  • Well, Docker is a piece of software garbage, and while it has a good concept, the implementation has lacked a lot, as we have been able to see for a few years now.
    I highly mistrust the Docker developers, and the GitLab developers. They must be CIA members of some sort. Just my opinion, but if you really wanted to build this in a good, secure, and efficient way, you wouldn't come up with a security, control, and efficiency nightmare like Docker. All just because, you know, hackers exist, and Docker is meant to be some kind of barrier? The opposite is the case, and everyone using Docker just makes it way easier for them to infiltrate our systems whenever they need to. Docker for me is being part of a botnet that serves you until doomsday arrives.

@Pra-wnn

Pra-wnn commented Dec 3, 2024

Guys, just open PowerShell:
Optimize-VHD -Path "C:\Users\<your-username>\docker_data.vhdx" -Mode Full

This did the trick for me.

This assumes you have Hyper-V installed. If not, save the script below as a .bat file and run it from an elevated Command Prompt to install and enable Hyper-V:

pushd "%~dp0"

dir /b %SystemRoot%\servicing\Packages\*Hyper-V*.mum >hyper-v.txt

for /f %%i in ('findstr /i . hyper-v.txt 2^>nul') do dism /online /norestart /add-package:"%SystemRoot%\servicing\Packages\%%i"

del hyper-v.txt

Dism /online /enable-feature /featurename:Microsoft-Hyper-V -All /LimitAccess /ALL

pause

Again, administrator privileges are required to install and enable Hyper-V.
