Add setup to emulate NVMe device #672
Conversation
faa18c5 to 02fddbb (force-push)
02fddbb to 1f367dc (force-push)
/test check-provision-k8s-1.20-cgroupsv2
@@ -119,4 +129,4 @@ exec qemu-system-x86_64 -enable-kvm -drive format=qcow2,file=${next},if=virtio,c
     -serial pty -M q35,accel=kvm,kernel_irqchip=split \
     -device intel-iommu,intremap=on,caching-mode=on -soundhw hda \
     -uuid $(cat /proc/sys/kernel/random/uuid) \
-    ${QEMU_ARGS}
+    ${QEMU_ARGS} ${NVME_QEMU_ARG}
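For context, a minimal sketch of what ${NVME_QEMU_ARG} could expand to when NVMe emulation is requested. The image path, drive id, and serial below are illustrative assumptions, not taken from the PR; -drive and -device nvme are standard QEMU options for attaching an emulated NVMe controller.

```bash
# Hypothetical expansion of NVME_QEMU_ARG (path, id, and serial are illustrative):
NVME_QEMU_ARG="-drive file=/nvme-0.img,format=raw,if=none,id=NVME0 \
  -device nvme,drive=NVME0,serial=nvme-0"
```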
Why do we need to touch vm.sh? Do the NVMe qemu arguments have to come after the other arguments? If not, this can be implemented entirely in gocli by passing the NVMe arguments through QEMU_ARGS.
@qinqon we need to create the image before calling qemu. For the rest, you are right: I can pass the arguments as extra qemu args.
@alicefr I think there is a pretty simple way to still do this at cluster-up, in contrast to cluster-provision, while keeping most of your changes as they are. The vm.sh script is executed in both phases. You could move the gocli flag from provision to up, and the disks would then only be created when vm.sh is executed for cluster-up. This is similar to how you can set different memory for the two phases.
Sorry @rmohr, I don't quite understand the comment. Do you mean moving the --nvme flag from gocli run to gocli provision?
No. Forget my comment. Everything is where it should be. For some reason I had the impression that you provision the images during the provision phase. 👍
The setup allows creating and emulating an NVMe device in the provisioned nodes. This can be particularly useful for testing scenarios with NVMe, such as NVMe passthrough. Added the new nvme flag to gocli provision in order to create the device. Signed-off-by: Alice Frosi <[email protected]>
1f367dc to adb93c6 (force-push)
@@ -482,6 +485,16 @@ func run(cmd *cobra.Command, args []string) (retErr error) {
 		nodeQemuArgs = fmt.Sprintf("%s -device vfio-pci,host=%s", nodeQemuArgs, gpuAddress)
 	}
 
+	var vmArgsNvmeDisks []string
+	if len(nvmeDisks) > 0 {
Would it be possible to execute the qemu-img create command at gocli so we don't touch vm.sh? Maybe by adding additional steps at the vmContainerConfig Cmd:. We want to move as much bash code to gocli as possible.
No, because gocli is executed in a separate container and it only sets the command to run in the node container. If we want to remove the script, then we need some kind of program that executes the commands. The current setup is no different from the way the script creates the image for block devices (https://github.com/kubevirt/kubevirtci/blob/main/cluster-provision/centos8/scripts/vm.sh#L85).
qemu-img needs to be executed in the container that provisions the node.
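As a rough illustration of the step being discussed, creating the backing image with qemu-img inside the node container before QEMU starts might look like the following. The path and size are assumptions for illustration, not values from the PR.

```bash
# Create a raw backing file for the emulated NVMe disk inside the node container,
# before QEMU is launched (illustrative path and size).
qemu-img create -f raw /nvme-0.img 4G
```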
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: qinqon. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
[1a96266 Update instruction to use K8s 1.21 since 1.16 already been deprecated.](kubevirt/kubevirtci#667) [4e3eb21 Add setup to emulate NVMe device](kubevirt/kubevirtci#672) Signed-off-by: kubevirt-bot <[email protected]>
The setup allows creating and emulating an NVMe device in the provisioned nodes. This can be particularly useful for testing scenarios with NVMe, such as NVMe passthrough.
How to enable it during cluster creation:
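The description presumably continued with a concrete command here. A hedged sketch based on the --nvme flag mentioned in the review discussion; the disk size and provider image name are illustrative assumptions.

```bash
# Assumption: --nvme takes the size of the emulated NVMe disk and is passed to
# "gocli run"; the provider container image below is only a placeholder.
gocli run --nvme 4Gi <provider-container-image>
```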