Proposal and code for instantaneously started disposable VMs #1512
I recommend also opening a pull request against https://github.com/marmarek/qubes-core-admin/.
I haven't opened a pull request because I mostly wrote this for myself. While it is usable by others, it's not quite ready to ship since it's missing integration, and I'm not sure whether it's worth doing that now as opposed to waiting for the new core code.
It is definitely too late to have this in R3.1, so it may go into the next major version. Given the progress on core3, we'll probably skip R3.2 and go straight to R4.0 (with core3), but we probably won't manage to implement savefile-based DispVMs there in time. So this approach will be really useful, also for generic AppVMs (have some DispVM running, without any application, and use it when requested). As for the qubes core API, it would be very similar: the lack of a savefile would mean that …
BTW, there are potential anonymity issues, because the first actual use of the new VM happens at the same time as the disposable VM for the next request is started. This means the two can be correlated if both are exploited, or from the network if starting a VM causes network traffic that correlates with subsequent traffic from actual use (I think this is mitigated by Tor rotating circuits every 10 minutes, but I'm not totally sure). It may be a good idea to delay attaching a netvm to avoid correlation from the network; avoiding uptime correlation might be possible by starting the VM with a fixed wall-clock time (e.g. the start of the Unix epoch), keeping it paused, and fixing the clock later.
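A rough illustration of the netvm-delay idea (this is not part of the original branch; it assumes the later R4.x qubesadmin Python bindings, and the VM and netvm names are placeholders):

```python
# Sketch only: keep a cached DispVM offline and paused while it waits in the
# cache, and attach networking only when the user actually claims it.
# Assumes the R4.x qubesadmin Python API; all names below are placeholders.
import qubesadmin

app = qubesadmin.Qubes()
cached = app.domains['disp-cache-1']   # hypothetical pre-created DispVM

cached.netvm = None                    # no network while idling in the cache
cached.start()
cached.pause()                         # sit idle without generating activity

def claim(vm, netvm_name='sys-firewall'):
    """Hand the cached VM to the user: attach a netvm, then unpause it."""
    vm.netvm = app.domains[netvm_name]
    vm.unpause()
    # Pinning the guest wall clock at boot (e.g. to the Unix epoch) and
    # correcting it here, as suggested above, is not shown.
```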
This should be mitigated by stream isolation by source IP? (IsolateClientAddr)
I have just started using Qubes on my brand-new laptop and I am amazed by what you guys have been putting together here. I assumed there would be a much steeper learning curve, which is why I had been putting Qubes off until… my old laptop broke. The delays I face when opening something in a disposable VM are one of the few things that bother me moderately (the others being the heavy use of Fedora, a few GUI bugs that I'll report after exploring them in some more detail, and the huge memory footprint – I will upgrade to 32GB RAM soon; I never thought 16GB might not be enough for me). I would love to see this feature implemented and am pleased to see it tagged as P:major, although it seems it hasn't made its way to 4.0 as originally planned (using 4.0-rc4 here). I'm sorry to clutter up the issues page with this, but I really think this needs saying: Qubes rules and you guys are doing some really amazing work here! I'll never go back to another operating system (I used plain Debian beforehand). In two months, I'll have some more money on my hands and intend to donate to the Qubes project regularly.
@qubesuser, are you still working on this?
Wow, I came up with a similar idea but never implemented it. Nice to see someone has already tried; it would be a very useful feature!
I think it's fair to say that the answer is "no," so if anyone else would like to pick this up, please comment here.
Where is the code now? The original link is 404 :( |
I have no idea, sorry. All I know is what's in this public issue. If that was the only copy of the code, it may no longer be available to us. 🙁
That sucks :(( Does anyone have a copy by any chance? I'm afraid it wouldn't fit the current code base without some adaptation anyway, but we could at least try.
I offer my humble attempt at implementing something like this:
I want to clarify my previous post, since I've been assigned to this issue: my linked repo (some bash scripts designed for, among others, this use case) was primarily meant to give people something like what this issue is about, but it's not what I would consider a proper solution. A proper solution would require modifying some core QubesOS code and likely making some design decisions, which is beyond me; e.g. one could change the qrexec policy specification to allow using … Furthermore, my solution relies very heavily on … The bottom line is that, while I certainly wouldn't mind a review of my linked scripts, I'm not sure they are really adequate as a solution to this issue.
The problem you're addressing (if any)
Disposable VMs are very useful for their intended purpose; however, one drawback is that they are not instantly usable like other AppVMs, and one needs to wait for them to load before using them (roughly 7-20 seconds depending on hardware).
Describe the solution you'd like
It would be great if there were an option to "preload" DispVMs (the quantity to preload would be defined by the user and limited by hardware specs), so that whenever you need a DispVM the target program launches immediately in one of them.
Where is the value to a user, and who might that user be?
It would be a great benefit in terms of speed and convenience, depending on how much the user relies on DispVMs.
Additional context
Current behaviour would be preserved: if you launch a program in a DispVM from the Qubes menu, each call would use a different preloaded DispVM, with no reuse. Also, when a DispVM is "used", another one would be preloaded to keep the defined amount always ready.
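For illustration only, here is a minimal sketch of what such a preload pool could look like with the current admin API. This is not existing Qubes code; it assumes the R4.x qubesadmin Python bindings, that DispVM.from_appvm() creates a disposable from the named DVM template, and that the pool size and template name are placeholders.

```python
# Sketch: keep POOL_SIZE DispVMs started and paused; when one is claimed,
# unpause it and top the pool back up so the configured amount stays ready.
import qubesadmin
import qubesadmin.vm

POOL_SIZE = 2                 # placeholder: user-defined, limited by RAM
DVM_TEMPLATE = 'default-dvm'  # placeholder DVM template name

app = qubesadmin.Qubes()
pool = []

def preload_one():
    """Create and start one disposable, then pause it until it is needed."""
    dispvm = qubesadmin.vm.DispVM.from_appvm(app, DVM_TEMPLATE)
    dispvm.start()
    dispvm.pause()
    pool.append(dispvm)

def claim():
    """Hand out a preloaded disposable and immediately replenish the pool."""
    if not pool:
        preload_one()          # cold start if the pool ran dry
    dispvm = pool.pop(0)
    dispvm.unpause()
    preload_one()              # keep the defined amount always ready
    return dispvm

for _ in range(POOL_SIZE):
    preload_one()
```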
Original description:
Starting disposable VMs is faster than normal VMs, but it can often still take several seconds and be a noticeable delay in the user experience.
This proposes to solve the issue by keeping one or more disposable VMs always running, but without qubes-guid started and thus "invisible".
When the user requests a disposable VM, the system takes one of those cached disposable VMs, adjusts it if necessary, starts qubes-guid, and then starts another cached disposable VM for the next request.
This allows instantaneously started DispVMs at the cost of losing 1.5-6 GB of RAM, which can be a good tradeoff at least for machines with >= 16GB RAM.
There are two ways of doing this. The most flexible way would be to support any DispVM usage by starting the appropriate service on the cached DVM; there is also an inflexible but faster way that pre-starts the application as well, but only supports a limited number of DispVM applications started from dom0 (typically a web browser and a terminal).
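As a hedged sketch of what the "flexible" variant could look like with later tooling (this is not the author's implementation; it assumes the R4.x qubesadmin bindings and the qubes.StartApp+ service, and the cached VM name is a placeholder):

```python
# Sketch: hand an already-running, cached DispVM whatever qrexec service the
# caller asked for, so no particular application has to be pre-started.
import qubesadmin

app = qubesadmin.Qubes()

def dispatch(service, cached_name='disp-cache-1'):
    """Run the requested qrexec service in a cached DispVM.

    E.g. dispatch('qubes.StartApp+firefox') would ask the cached VM to
    launch Firefox via its .desktop entry (GUI wiring is not shown here).
    """
    cached = app.domains[cached_name]   # placeholder pre-started DispVM
    if cached.get_power_state() == 'Paused':
        cached.unpause()
    return cached.run_service(service)
```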
My code implements the "inflexible" way and offers two modes: a faster "separate" mode that keeps a DispVM around for each configured application, and a slower but less RAM-hungry "unified" mode that keeps a single DispVM with all the applications running and kills the ones not needed when the user makes a request.
You can find the implementation at: https://github.com/qubesuser/qubes-core-admin/tree/insta_dvm
You'll need to create a configuration file in /etc/qubes/dvms like the one provided in the branch. The mode is chosen automatically depending on available RAM, but can be configured in /etc/qubes/cached-dvm-mode.
The branch is missing packaging for qubes-start-cached-dvm and the dvms config file, systemd integration for starting it at boot, and integration to make dom0 start menu entries use it.
It's also somewhat hackish overall and might need a rewrite in Python and adjustment to the new core code if shipped after that.