Add option to configure memory balloon device, min/max size #105
FYI, I experimented with running two 4-core / 12GB VMs on a Mac Mini with 16GB, and it all worked pretty well already when I ran some memory-intensive workloads.
My understanding of the balloon device was that it allows resizing the VM's memory while it's running by modifying the balloon's target memory size.
It seems that by default the […]. I've tried running the same experiment with the […].
I think it will be interesting to run a couple of experiments:
Let's close the issue since it doesn't seem very useful. The balloon device doesn't help with memory management automatically; it only allows explicitly asking a VM to return some memory while it's running.
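For reference, a minimal sketch (in Swift, against Apple's Virtualization framework) of what "explicitly asking a VM to return some memory" looks like at runtime; the helper name and the 8 GB target are just illustrative:

```swift
import Virtualization

// Assumes `vm` is a running VZVirtualMachine whose configuration included a
// VZVirtioTraditionalMemoryBalloonDeviceConfiguration (see the configuration
// sketch further down).
func shrinkGuestMemory(of vm: VZVirtualMachine, to targetBytes: UInt64) {
    // The framework exposes the runtime balloon device(s) on the VM itself.
    guard let balloon = vm.memoryBalloonDevices.first
            as? VZVirtioTraditionalMemoryBalloonDevice else {
        print("No virtio balloon device attached to this VM")
        return
    }
    // Lowering the target asks the guest to hand pages back to the host;
    // raising it again lets the guest reclaim them. This should be called on
    // the same queue the VM was started on.
    balloon.targetVirtualMachineMemorySize = targetBytes
}

// Example: ask a 12 GB VM to shrink towards 8 GB while it's idle.
// shrinkGuestMemory(of: vm, to: 8 * 1024 * 1024 * 1024)
```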
Sorry for my lateness in replying, but this all makes sense to me. Thanks for considering it further!
There's an option to configure a memory balloon device for the VM, which, as far as I understand, should make it possible for the VM to share memory back to the host operating system when host memory gets low.
UTM supports the basic "whether to add this device" option for the VM (here), though I'm not clear on whether anything more is needed to make it functional beyond perhaps configuring min/max memory limits for the balloon device (Apple has some more info on that here).
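If I'm reading Apple's docs right, attaching the device is a one-liner on the VM configuration, and the traditional virtio balloon configuration has no per-device min/max knobs; the min/max bounds live on the configuration class itself. A minimal sketch, with an illustrative helper and memory size (not a recommendation):

```swift
import Virtualization

// Hypothetical helper showing where the balloon device plugs in.
func makeConfiguration(memoryBytes: UInt64) -> VZVirtualMachineConfiguration {
    let config = VZVirtualMachineConfiguration()

    // Clamp the requested size to what the framework allows on this host.
    config.memorySize = min(max(memoryBytes,
                                VZVirtualMachineConfiguration.minimumAllowedMemorySize),
                            VZVirtualMachineConfiguration.maximumAllowedMemorySize)

    // This is the "whether to add this device" part: the traditional virtio
    // balloon takes no further settings at configuration time.
    config.memoryBalloonDevices = [VZVirtioTraditionalMemoryBalloonDeviceConfiguration()]

    // CPU count, boot loader, storage, network, etc. would be configured here as usual.
    return config
}
```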
I'm thinking this is primarily interesting when one wants to run the system maximum of 2 VMs on a single memory-constrained host, like a 16GB M1 Mac Mini, where each VM may need to run (e.g.) simulator tests, potentially a few in parallel, which means high but transient RAM requirements.
I'm not sure if folks have tested this particular mechanism much, but if we had the option to configure it for VMs, it might be possible to crowdsource the testing/benchmarking of how well it works and whether it can help eke out more performance from concurrent VMs.