Switch to PVH #2185
Possible problems: PCI passthrough (as currently broken on HVM - #1659) |
Do SLAT and PVH improve the guest VM performance (boot time/runtime)? |
Additionally: on a non-VT-d computer, does PVH support PCI passthrough? (If not, I think it would be safe to use a PV fallback, as VMs with PCI passthrough are already somewhat privileged on non-VT-d computers.) |
On Wed, Aug 03, 2016 at 11:10:50AM -0700, Vít Šesták wrote:
The idea is to not support such hardware in Qubes 4.x, and for older […] Best Regards, |
One major problem: it looks like populate-on-demand doesn't work - the domain crashes when started with memory < maxmem. This is a blocker for dynamic memory management (qmemman). |
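For readers hitting the same crash, here is a minimal sketch of a workaround, written against the Qubes 4.x qubesadmin Python API (the property names are real, but the VM name is hypothetical and the snippet is illustrative, not an official fix): starting a domain with memory equal to maxmem means Xen never engages populate-on-demand, at the price of giving up qmemman balancing for that VM.

```python
# Illustrative workaround sketch (assumes Qubes 4.x qubesadmin API):
# populate-on-demand is only used when a domain boots with
# memory < maxmem, so pinning the two to the same value avoids it.
import qubesadmin

app = qubesadmin.Qubes()
vm = app.domains['work']  # hypothetical VM name

# Boot with the full allocation: no PoD, but also no dynamic
# memory management (qmemman ballooning) for this VM.
vm.memory = vm.maxmem
```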
Theoretically yes, but haven't tested it yet. |
Thanks for the info. I understand the reason, especially given the limited development resources. QSB-24 is, however, confusing. It only mentions that SLAT/EPT will be needed, not VT-d, which might make it sound like 99% of Qubes users have a CPU compatible with the future Qubes 4.0. Once VT-d is counted in, it no longer sounds like this, as machines without VT-d seem to be more frequent. Dropping support for non-VT-d machines in 4.0 is controversial, but it seems somewhat justified. Moreover, it does not sound like a practical issue for me if it happens in 4.0. My main point is communication: please make this future incompatibility more visible. Without having thought about the implementation of PVH, I would have had no idea that such an issue might arise. All Qubes users should have a real chance to learn about the adjusted hardware requirements soon enough. There is arguably some time before the last release not requiring VT-d (3.2?) becomes unsupported (maybe a year), but someone buying a new laptop with Qubes compatibility in mind should know (or have a real chance to know) that fact today. |
You're right. We'll update the system requirements page soon, but we need to work out some details first. |
Does this mean changing our support period for prior versions? Currently it's "six months after each subsequent major or minor release." If so, this is another thing we should make sure to communicate clearly. |
On Wed, Aug 03, 2016 at 01:55:21PM -0700, Marek Marczykowski-Górecki wrote:
Shall we create a new ticket for tracking this? Obvious problem with "VM must j[…]" |
That's great. I am in favour of communicating this via blog/mailing list/Twitter/etc. |
Based on the current state of PVH in Xen, for Qubes 4.0 we'll go with standard HVM, then switch to PVHv2 later when it's ready. |
@marmarek do you really want to switch to pure HVM? I think for Linux it would make sense to use PVHVM: an HVM which uses hardware virtualization extensions plus PV drivers for networking and disk I/O. (source: https://wiki.xen.org/wiki/Xen_Project_Software_Overview) |
Yes, of course I meant HVM with PV drivers. |
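For anyone wanting to verify the distinction inside a guest, here is a hedged sketch (plain Python over the standard Linux sysfs paths) that checks whether the Xen PV frontend drivers are bound, which is what makes an HVM a PVHVM:

```python
# Sketch: detect PV frontend devices (vif = network, vbd = block)
# via the xenbus entries the Linux kernel exposes in sysfs.
import os

XENBUS = '/sys/bus/xen/devices'

def pv_frontends():
    """List the Xen PV devices bound in this guest, if any."""
    if not os.path.isdir(XENBUS):
        return []  # no xenbus at all: plain HVM, or not a Xen guest
    return [d for d in os.listdir(XENBUS)
            if d.startswith(('vif-', 'vbd-'))]

devs = pv_frontends()
if devs:
    print('PVHVM: PV drivers active for', devs)
else:
    print('No PV frontends found: plain HVM or not a Xen guest')
```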
So, will stubdomains be needed? IIUC, PVHVM does not need one, while HVM+PV does. And when there is a full HVM (e.g. a Windows VM) which needs a stubdomain, what domain type will the stubdomain use? If it runs as a PV domain (like today's stubdomains do), then the PV security issue is only partially solved, especially if QEMU is considered insecure. |
See linked discussion on xen-devel - in short PVH isn't usable yet. |
Yes, that's unfortunately right. |
PVHv2 status update (after talking in person to Xen people at FOSDEM): there is still a slight disagreement on the details of Linux support for PVHv2 (AFAIR about CPU hotplug). It should be resolved and implemented this year, but will probably take more than 1-2 months. This is all about PVHv2 without PCI passthrough, which is another story. This means there won't be PVHv2 Linux VMs in Qubes 4.0. |
Phoronix reported that a Xen developer submitted patches for PVHv2 (formerly known as HVMLite) to the 4.11 kernel: Xen Changes For Linux 4.11: Lands PVHv2 Guest Support. Here is the Linux kernel archive pull request message: [GIT PULL] xen: features and fixes for 4.11-rc0 |
Good to know, thanks! |
Also, make it possible to set default on a template for its VMs. QubesOS/qubes-issues#2185
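A hedged illustration of what that commit enables, in Qubes 4.x qubesadmin terms (the domain names are made up): virt_mode set on a template acts as the default for the qubes based on it, while an explicit per-VM value still wins.

```python
# Sketch of a per-template virt_mode default (Qubes 4.x qubesadmin
# API; domain names here are hypothetical).
import qubesadmin

app = qubesadmin.Qubes()
template = app.domains['fedora-template']

# VMs based on this template now default to HVM...
template.virt_mode = 'hvm'

# ...unless a VM explicitly overrides the inherited default.
app.domains['legacy-vm'].virt_mode = 'pv'
```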
Install both stubdom implementations: mini-os one (xen-hvm) and linux one (xen-hvm-stubdom-linux). QubesOS/qubes-issues#2185
Since MSI support is fixed/implemented, HVM is usable for hardware-handling domains. QubesOS/qubes-issues#2849 QubesOS/qubes-issues#2185
Actually, it turned out this step isn't needed. But it is still nice to have. |
@marmarek what is the final decision? Are you switching entirely to HVM or is PVH still an option? |
Where possible (no PCI devices, Linux >= 4.11), we're switching to PVH. |
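That rule is mechanical enough to sketch. A hedged illustration in plain Python (the helper is a hypothetical stand-in, not a Qubes API call):

```python
# Sketch of the selection rule stated above: PVH where possible,
# HVM otherwise. This mirrors the thread's rule, not real Qubes code.
def choose_virt_mode(has_pci_devices, guest_kernel):
    """Pick a virtualization mode per the 'no PCI, Linux >= 4.11' rule."""
    # PVHv2 guest support landed in Linux 4.11, and PVH cannot
    # (at this point) do PCI passthrough.
    if not has_pci_devices and guest_kernel >= (4, 11):
        return 'pvh'
    return 'hvm'

assert choose_virt_mode(False, (4, 14)) == 'pvh'
assert choose_virt_mode(True, (4, 14)) == 'hvm'   # PCI device forces HVM
assert choose_virt_mode(False, (4, 9)) == 'hvm'   # guest kernel too old
```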
PVH + PCI passthrough: http://markmail.org/message/xabm6msg6amgjzrd |
For Qubes 4 we want to move away from using PV as the default method of virtualization in favor of hardware-aided (i.e. SLAT-enforced) virtualization, which Xen currently offers as PVH. The main reason for this is security: we believe SLAT should be less buggy than PV memory virtualization, as e.g. XSA-148 showed a few months ago. Today most platforms should support SLAT, which was not the case 6 years ago when we originally chose PV over HVM. HVM without SLAT requires clunky Shadow Page Table virtualization, arguably even more complex and error-prone than PV virtualization.
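Since the plan hinges on SLAT availability, here is a hedged way to check for it on a candidate machine (plain Python reading /proc/cpuinfo; 'ept' and 'npt' are the standard Linux flag names for Intel's and AMD's SLAT). Note that this should run on bare-metal Linux, since the flags may be masked inside a VM or Xen dom0.

```python
# Sketch: check whether the CPU advertises SLAT, the hardware feature
# this issue makes a requirement. 'ept' = Intel Extended Page Tables,
# 'npt' = AMD Nested Page Tables. Run on bare metal: inside a VM or
# Xen dom0 these flags may not be visible.
def has_slat(cpuinfo='/proc/cpuinfo'):
    with open(cpuinfo) as f:
        for line in f:
            if line.startswith('flags'):
                flags = line.split(':', 1)[1].split()
                return 'ept' in flags or 'npt' in flags
    return False

print('SLAT available:', has_slat())
```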
This ticket serves as a central place to track the progress of this task, which should include: […]