
qvm-clone, maximum memory and disabled memory balancing (disabled meminfo-writer service) #5306

Closed
brendanhoar opened this issue Sep 9, 2019 · 7 comments · Fixed by QubesOS/qubes-manager#234

Comments

@brendanhoar

Qubes OS version
R4.0 current-testing

Affected component(s) or functionality
qvm-clone

Brief summary
source VM: initial=4096; max=4096; Include in memory balancing=UNCHECKED

To Reproduce
qvm-clone sourcevm targetvm

Expected behavior
target VM: initial=4096; max=4096; Include in memory balancing=UNCHECKED

Actual behavior
target VM: initial=4096; max=4000; Include in memory balancing=UNCHECKED

Right...4000, not 4096...plus there's also a hidden issue, discussed below.
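
For completeness, a minimal CLI sketch of the setup above (run in dom0; the VM names are placeholders, and it assumes the unchecked "Include in memory balancing" box corresponds to the maxmem property being 0, per the discussion further down):

  # source VM: fixed 4096 MB of RAM, excluded from memory balancing
  qvm-prefs sourcevm memory 4096
  qvm-prefs sourcevm maxmem 0

  # clone it, then inspect what the clone ended up with
  qvm-clone sourcevm targetvm
  qvm-prefs targetvm memory
  qvm-prefs targetvm maxmem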

Additional context
This leads to two issues:

  1. A Windows 7 HVM clone that should be as stable as the source is not, either because a range of RAM is assigned and/or because qmemman ends up (silently?) enabled. Either way, Windows 7 crashes a lot until you resolve the improperly cloned settings.
  2. If you go to the Qubes Settings GUI and navigate to the second tab, Qubes Settings warns you about the mismatch. Clicking OK updates the displayed maximum to 4096, so it appears you have corrected the value. You have not: if you open the Qubes Settings GUI again and navigate to the second tab, the maximum is once again shown as 4000.

I cannot detect any significant differences between the output of qvm-prefs and qvm-features for the source and the target (both have maxmem set to 0), yet the source does not have this issue and the target does.
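
A quick way to do that comparison from dom0 (a sketch using bash process substitution; VM names are placeholders, and benign differences such as name, IP and UUID are expected):

  # diff all properties and all features of the source vs. the clone
  diff <(qvm-prefs sourcevm) <(qvm-prefs targetvm)
  diff <(qvm-features sourcevm) <(qvm-features targetvm)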

I suspect what is happening is multi-pronged:

  1. The 4000 is odd on its own; probably a standard-fare bug.
  2. The UI/CLI-based automatic handling, stored in qubesdb, of whether to perform memory ballooning (keyed on maxmem really being set to 0) covers only two of the three ways this 0 can be set. qvm-clone is the third way, and it does not automatically set the ballooning info correctly in qubesdb.

Checking the checkbox, clicking Apply, verifying (or fixing) that maxmem is 4096 if the UI hasn't already corrected it, then unchecking the checkbox and clicking OK resolves the issue every time.
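
Presumably the same fix can be applied from the CLI without the GUI dance, assuming the GUI's remembered maximum lives in the qubesmanager.maxmem_value feature that marmarek describes below (a sketch; the VM name is a placeholder):

  # keep the qube excluded from memory balancing...
  qvm-prefs targetvm maxmem 0
  # ...and set the value Qube Manager remembers and displays while it is excluded
  qvm-features targetvm qubesmanager.maxmem_value 4096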

Related, non-duplicate issues
#4480

As above, the hidden qmemman setting was fixed in qvm-preferences and/or Qubes Settings, but was missed in qvm-clone.

From this previous thread, I don't even know if meminfo-writer is really a thing any more.
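
If it still is, whether meminfo-writer has been explicitly enabled or disabled per qube can be checked from dom0 (a sketch; qvm-service with no further arguments should list only the services that have been explicitly set):

  qvm-service sourcevm
  qvm-service targetvm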

B

@brendanhoar brendanhoar added P: default Priority: default. Default priority for new issues, to be replaced given sufficient information. T: bug labels Sep 9, 2019
@andrewdavidwong andrewdavidwong added this to the Release 4.0 updates milestone Sep 10, 2019
@brendanhoar
Author

brendanhoar commented Sep 15, 2019

In qubes.xml, what exactly is the difference between <feature name="qubesmanager.maxmem_value" ...> and <property name="maxmem" ...>?

FWIW, there appear to be some gaps between what gets correctly stored for a cloned Windows HVM via the GUI versus via the CLI using qvm-* commands.

@marmarek
Member

In qubes.xml, what exactly is the difference between <feature name="qubesmanager.maxmem_value" ...> and <property name="maxmem" ...>?

qubesmanager.maxmem_value is used by Qubes Manager only, to remember the previous value of maxmem while maxmem is set to 0 (to disable dynamic memory management).
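
In CLI terms (a sketch run from dom0; targetvm is a placeholder for the clone from this report):

  # the admin API property behind <property name="maxmem">: 0 = excluded from balancing
  qvm-prefs targetvm maxmem
  # the Qube Manager bookkeeping behind <feature name="qubesmanager.maxmem_value">,
  # if set, shows up in the feature list
  qvm-features targetvm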

@brendanhoar
Author

Hmm, I think the weird behavior I am seeing with the Qubes Settings GUI's maxmem being 4000 instead of 4096 on the clone is related to the source VM having a PCI device configured but no value for qubesmanager.maxmem_value (as it was created entirely via the CLI).

Both have the "Include in memory balancing" setting unchecked in the GUI, but the copy keeps showing a greyed-out 4000 in the maxmem field, while the source correctly shows a greyed-out 4096. Manually checking "Include in memory balancing", clicking Apply, then unchecking it, clicking Apply again and clicking OK clears the weird behavior (presumably because this sets qubesmanager.maxmem_value correctly?).

Notably, the copy created by qvm-clone does not have a PCI device configured.

  1. Shouldn't it though? Shouldn't a clone be a clone, even if it prevents both VM copies from running at the same time?

  2. Because the clone has no PCI device configured, the GUI probably traverses a different codepath: with a PCI device attached it fills in maxmem correctly, while without one it fills it in incorrectly (a quick check is sketched below).
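
A quick way to check that difference (a sketch; I'm assuming qvm-pci accepts a VM name to filter its listing, as the R4.0 qvm-device tools do):

  # list PCI devices attached to the source vs. the clone
  qvm-pci ls sourcevm
  qvm-pci ls targetvm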

Brendan

@marmarta
Member

The clone part is solved by marmarta/qubes-core-admin-client@c0a8c65 (this PR: QubesOS/qubes-core-admin-client#136). Still working on the second problem :)

@brendanhoar
Author

Thanks!

marmarek pushed a commit to QubesOS/qubes-manager that referenced this issue May 27, 2020
When a VM is not included in memory balancing, there is no point
(and it can be actively harmful via deception) in showing warnings
about init_mem and maxmem mismatch.

fixes QubesOS/qubes-issues#5306

(cherry picked from commit b058db4)
@qubesos-bot

Automated announcement from builder-github

The package qubes-manager-4.0.42-1.fc25 has been pushed to the r4.0 testing repository for dom0.
To test this update, please install it with the following command:

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing

Changes included in this update

@qubesos-bot

Automated announcement from builder-github

The package qubes-manager-4.0.42-1.fc25 has been pushed to the r4.0 stable repository for dom0.
To install this update, please use the standard update command:

sudo qubes-dom0-update

Or update dom0 via Qubes Manager.

Changes included in this update
