The podman statically compiled and QEMU #15

Open
PedroRegisPOAR opened this issue Sep 28, 2023 · 1 comment

PedroRegisPOAR commented Sep 28, 2023

podman machine

The status is shown in a screenshot (not reproduced here); see also:
https://blog.replit.com/nix-vs-docker

Fact: a statically compiled version has been available since https://github.com/containers/podman/releases/tag/v4.3.1.
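As a sketch of using that fact on the host (version and asset name taken from the commented-out Containerfile step further down; the canonical github.com release URL is assumed):

```shell
# Sketch: build the download URL for the static podman-remote client.
# PODMAN_VERSION and the asset name are assumptions based on the
# commented-out step later in this issue.
PODMAN_VERSION=v4.6.2
URL="https://github.com/containers/podman/releases/download/${PODMAN_VERSION}/podman-remote-static-linux_amd64.tar.gz"
echo "$URL"
# Then, for example: wget -O- "$URL" | tar -xvz -C /tmp/
```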

cat > Containerfile << 'EOF'
FROM docker.io/library/alpine:3.18.3 as alpine-with-nix

RUN apk update \
 && apk \
       add \
       --no-cache \
       ca-certificates \
       shadow \
 && mkdir -pv /home/nixuser \
 && addgroup nixgroup --gid 4455 \
 && adduser \
       -g '"An unprivileged user with a group"' \
       -D \
       -h /home/nixuser \
       -G nixgroup \
       -u 3322 \
       nixuser \
 && echo \
 && echo 'Start kvm stuff...' \
 && getent group kvm || groupadd kvm \
 && usermod --append --groups kvm nixuser \
 && echo 'End kvm stuff!' \
 && echo \
 && test -d /etc || mkdir -pv /etc \
 && echo 'America/Recife' > /etc/timezone \
 && echo \
 && mkdir -pv /nix/var/nix && chmod -v 0777 /nix && chown -Rv nixuser:nixgroup /nix \
 && echo \
 && apk del shadow

USER nixuser
WORKDIR /home/nixuser
ENV USER="nixuser"
ENV PATH=/home/nixuser/.nix-profile/bin:/home/nixuser/.local/bin:"$PATH"
ENV NIX_CONFIG="extra-experimental-features = nix-command flakes"

RUN mkdir -pv "$HOME"/.local/bin \
 && cd "$HOME"/.local/bin \
 && wget -O- https://hydra.nixos.org/build/231020695/download/2/nix > nix \
 && chmod -v +x nix \
 && cd - \
 && export PATH=/home/nixuser/.local/bin:/bin:/usr/bin \
 && nix flake --version \
 && nix -vv registry pin nixpkgs github:NixOS/nixpkgs/f3dab3509afca932f3f4fd0908957709bb1c1f57 \
 && nix \
        profile \
        install \
        github:NixOS/nixpkgs/f3dab3509afca932f3f4fd0908957709bb1c1f57#git \
        github:NixOS/nixpkgs/f3dab3509afca932f3f4fd0908957709bb1c1f57#jq \
        github:NixOS/nixpkgs/f3dab3509afca932f3f4fd0908957709bb1c1f57#podman \
        github:NixOS/nixpkgs/f3dab3509afca932f3f4fd0908957709bb1c1f57#openssh \
 && nix \
        profile \
        install \
        --impure \
        --expr \
        '( let nixpkgs = (builtins.getFlake "github:NixOS/nixpkgs/f3dab3509afca932f3f4fd0908957709bb1c1f57"); pkgs = import nixpkgs { }; in (pkgs.qemu.override { hostCpuOnly = true; }))' \
 && rm -rfv "$HOME"/.cache/nix /tmp/* \
 && nix \
        store \
        gc \
        --verbose \
        --option keep-derivations false \
        --option keep-outputs false \
 && nix store optimise --verbose

RUN podman \
         --log-level=trace \
         machine \
         init \
         --cpus=4 \
         --disk-size=30 \
         --log-level=trace \
         --memory=3072 \
         --rootful=false \
         --timezone=local \
         --volume="$HOME":"$HOME" \
         vm


#RUN tar -xvz -C /tmp/ -f <(wget -O - https://github.com/containers/podman/releases/download/v4.6.2/podman-remote-static-linux_amd64.tar.gz) \
# && mv -v /tmp/bin/podman-remote-static-linux_amd64 "$HOME"/.local/bin/podman-remote-static \
# && ln -fsv "$HOME"/.local/bin/podman-remote-static "$HOME"/.local/bin/podman \
# && podman --version

RUN PODMAN_MACHINE_CONFIG_FULL_PATH=$(echo ~/.config/containers/podman/machine/qemu/$(podman machine info --format "{{ .Host.CurrentMachine }}").json) \
 && jq -c '.CmdLine += ["-nographic"]' "$PODMAN_MACHINE_CONFIG_FULL_PATH" > "$PODMAN_MACHINE_CONFIG_FULL_PATH".temp \
 && mv -v "$PODMAN_MACHINE_CONFIG_FULL_PATH".temp "$PODMAN_MACHINE_CONFIG_FULL_PATH" \
 && echo
# && echo '[ENGINE]' >> "$HOME"/.config/containers/containers.conf \
# && echo 'helper_binaries_dir=["/home/nixuser/.nix-profile/bin"]' >> "$HOME"/.config/containers/containers.conf

EOF

podman \
build \
--tag alpine-with-nix \
--target alpine-with-nix \
. \
&& podman \
run \
--annotation=run.oci.keep_original_groups=1 \
--device=/dev/kvm:rw \
--hostname=container-nix \
--interactive=true \
--name=conteiner-unprivileged-nix \
--privileged=false \
--tty=true \
--rm=true \
localhost/alpine-with-nix:latest \
sh \
-c \
'
    echo First start the podman virtual machine \
    && podman --log-level=trace machine start vm \
    && echo The machine must have started \
    && podman --remote --log-level=ERROR run quay.io/podman/hello \
    && echo \
    && podman --remote run docker.io/tianon/toybox toybox \
    && echo \
    && podman --remote run docker.io/library/busybox busybox \
    && echo \
    && podman --remote run docker.io/library/alpine cat /etc/os*release \
    && echo \
    && podman --remote run --privileged quay.io/podman/stable podman --version \
    && echo \
    && podman --remote run --privileged quay.io/podman/stable podman run quay.io/podman/hello \
    && echo \
    && podman \
        --remote \
        run \
        --interactive=true \
        --memory-reservation=200m \
        --memory=300m \
        --memory-swap=400m \
        --rm=true \
        --tty=true \
        docker.io/library/alpine cat /etc/os-release \
    && echo \
    && podman --remote images \
    && echo \
    && podman --remote info \
    && echo \
    && podman machine info --format json 
'

TODO: watch

Volumes with podman machine

What may still be broken but unnoticed?

-v "$HOME/git:$HOME/git:ro,security_model=none"

Refs.:

Imperative way

mkdir -pv project-app \
&& cd project-app \
&& echo abcxyz > logs.txt

podman \
--remote \
--log-level=ERROR \
run \
--annotation=run.oci.keep_original_groups=1 \
--device=/dev/fuse:rw \
--device=/dev/kvm:rw \
--env="DISPLAY=${DISPLAY:-:0.0}" \
--hostname=container-nix \
--interactive=true \
--name=conteiner-unprivileged-nix \
--privileged=true \
--tty=true \
--userns=keep-id \
--rm=true \
--volume="$(pwd)":/var/home/nixuser/code:rw \
--workdir=/var/home/nixuser/code \
docker.io/library/alpine \
sh \
-c \
'
cat logs.txt \
&& touch logs.txt \
&& echo 1a2b3c > logs.txt \
&& cat logs.txt
'

cat logs.txt

In .yaml format

mkdir -pv project-app \
&& cd project-app \
&& echo abcxyz > logs.txt
cat << 'EOF' > alpine-pod-with-volumes.yaml
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-4.6.2

# NOTE: If you generated this yaml from an unprivileged and rootless podman container on an SELinux
# enabled system, check the podman generate kube man page for steps to follow to ensure that your pod/container
# has the right permissions to access the volumes added.
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    run.oci.keep_original_groups/conteiner-alpine-for-pod: "1"
  creationTimestamp: "2023-10-03T19:30:11Z"
  labels:
    app: conteiner-alpine-for-pod-pod
  name: conteiner-alpine-for-pod-pod
spec:
  containers:
  - command: [ "sh", "-c", "sleep 1000000"]
    env:
    - name: HOSTNAME
      value: container-nix
    image: docker.io/library/alpine:latest
    name: conteiner-alpine-for-pod
    resources: { }
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: { }
      privileged: true
      runAsGroup: 102
      runAsUser: 101
      seLinuxOptions: {}      
    stdin: true
    tty: true
    volumeMounts:
    - mountPath: /var/home/nixuser/code:rw
      name: home-nixuser-project-app-host-0
    workingDir: /var/home/nixuser/code
  hostUsers: true
  privileged: true
  readOnlyRootFilesystem: false
  seLinuxOptions: {}  
  hostname: container-nix
  volumes:
  - hostPath:
      path: ./
      type: Directory
    name: home-nixuser-project-app-host-0
EOF

Refs.:

podman --remote pod rm --force conteiner-alpine-for-pod-pod

podman --remote play kube alpine-pod-with-volumes.yaml

echo abcxyz > logs.txt

podman --remote exec -it -u 0 conteiner-alpine-for-pod-pod-conteiner-alpine-for-pod sh \
-c \
'
touch logs.txt
'

podman --remote exec -it -u 0 conteiner-alpine-for-pod-pod-conteiner-alpine-for-pod sh \
-c \
'
cat logs.txt \
&& touch logs.txt \
&& echo 1a2b3c > logs.txt \
&& cat logs.txt
'

TODO:

  securityContext:
    seLinuxOptions:
      type: spc_t

Other

podman \
run \
--annotation=run.oci.keep_original_groups=1 \
--device=/dev/kvm:rw \
--hostname=container-nix \
--interactive=true \
--name=conteiner-unprivileged-nix \
--privileged=false \
--tty=true \
--rm=true \
localhost/alpine-with-nix:latest \
sh \
-c \
'
 echo First start the podman virtual machine \
 && podman --log-level=trace machine start vm \
 && echo The machine must have started \
 && podman \
--remote \
--log-level=ERROR \
run \
--annotation=run.oci.keep_original_groups=1 \
--device=/dev/fuse:rw \
--device=/dev/kvm:rw \
--env="DISPLAY=${DISPLAY:-:0.0}" \
--hostname=container-nix \
--interactive=true \
--name=conteiner-unprivileged-nix \
--privileged=true \
--tty=true \
--userns=keep-id \
--rm=true \
docker.io/library/ubuntu:23.04 \
cat /etc/os-release \
 && echo \
 && podman --remote run --privileged quay.io/podman/stable podman run ubi8 cat /etc/os-release \
 && echo \
 && podman \
--remote \
run \
--interactive=true \
--memory-reservation=200m \
--memory=300m \
--memory-swap=400m \
--rm=true \
--tty=true \
docker.io/library/fedora:39 cat /etc/os-release \
 && echo \
 && podman images
'
podman \
run \
--annotation=run.oci.keep_original_groups=1 \
--device=/dev/fuse:rw \
--device=/dev/kvm:rw \
--env="DISPLAY=${DISPLAY:-:0}" \
--group-add=keep-groups \
--hostname=container-nix \
--interactive=true \
--mount=type=tmpfs,tmpfs-size=3G,destination=/tmp \
--mount=type=tmpfs,tmpfs-size=2G,destination=/var/tmp \
--name=conteiner-unprivileged-nix \
--privileged=true \
--tty=true \
--userns=keep-id \
--rm=true \
--volume=/tmp/.X11-unix:/tmp/.X11-unix:ro \
localhost/alpine-with-nix:latest
  1. Some adjustments for a headless QEMU VM:
PODMAN_MACHINE_CONFIG_FULL_PATH=$(echo ~/.config/containers/podman/machine/qemu/$(podman machine info --format "{{ .Host.CurrentMachine }}").json)

# cat "$PODMAN_MACHINE_CONFIG_FULL_PATH" | jq '.CmdLine += ["-nographic"]'

jq -c '.CmdLine += ["-nographic"]' "$PODMAN_MACHINE_CONFIG_FULL_PATH" > "$PODMAN_MACHINE_CONFIG_FULL_PATH".temp \
&& mv -v "$PODMAN_MACHINE_CONFIG_FULL_PATH".temp "$PODMAN_MACHINE_CONFIG_FULL_PATH"

Refs.:

Helper: TODO test it

podman machine inspect vm | jq -r '.[].ConfigPath.Path'
cat << 'EOF' >> ~/.config/containers/containers.conf
[ENGINE]
helper_binaries_dir=["/home/nixuser/.nix-profile/bin"]
EOF
podman --log-level=trace machine start vm
podman --remote --log-level=ERROR run quay.io/podman/hello
mkdir test-dir
touch test-dir/test-file.txt

podman --remote run --volume="$(pwd)"/test-dir:/code --workdir=/code ubuntu:23.04 bash -c 'ls -al'

Refs.:

podman \
--remote \
--log-level=ERROR \
run \
--annotation=run.oci.keep_original_groups=1 \
--device=/dev/fuse:rw \
--device=/dev/kvm:rw \
--env="DISPLAY=${DISPLAY:-:0.0}" \
--hostname=container-nix \
--interactive=true \
--name=conteiner-unprivileged-nix \
--privileged=true \
--tty=true \
--userns=keep-id \
--rm=true \
ubuntu:23.04 \
bash \
-c \
'
id
'

TODO: try to help containers/podman#14303 (comment)

TODO: Test it

export CONTAINERS_HELPER_BINARY_DIR=$(dirname "$(type -p gvproxy)")

TODO: make a patch with that commit and try overriding it in Nix.

Other commands

export DOCKER_HOST='unix:///home/nixuser/.local/share/containers/podman/machine/qemu/podman.sock'
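A quick, hedged way to sanity-check the socket behind that variable: docker-compatible clients read DOCKER_HOST, and stripping the unix:// scheme yields a path usable with curl --unix-socket (the /_ping endpoint is the Docker-compatible health check; this assumes the machine's API service is actually running):

```shell
# Derive the raw socket path from DOCKER_HOST (path as exported above).
export DOCKER_HOST='unix:///home/nixuser/.local/share/containers/podman/machine/qemu/podman.sock'
SOCK="${DOCKER_HOST#unix://}"
echo "$SOCK"
# With the service running, this should answer "OK":
#   curl -s --unix-socket "$SOCK" http://d/_ping
```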

TODO: read and try to make it work

https://github.com/ES-Nix/get-nix/tree/draft-in-wip#single-user

https://github.com/ES-Nix/podman-rootless/tree/from-nixpkgs#podman-rootless

nix profile install nixpkgs#qemu_kvm nixpkgs#podman nixpkgs#socat
nix \
profile \
install \
github:NixOS/nixpkgs/nixpkgs-unstable#qemu_kvm \
github:NixOS/nixpkgs/nixpkgs-unstable#podman \
github:NixOS/nixpkgs/nixpkgs-unstable#socat
echo 'Start kvm stuff...' \
&& getent group kvm || sudo groupadd kvm \
&& sudo usermod --append --groups kvm "$USER" \
&& sudo chmod 666 /dev/kvm \
&& sudo chown "$USER": /dev/kvm \
&& echo 'End kvm stuff!'
rm -fv "$HOME"/.local/bin/podman-remote-static "$HOME"/.local/bin/podman
nix \
profile \
install \
github:NixOS/nixpkgs/nixpkgs-unstable#qemu_kvm \
github:NixOS/nixpkgs/nixpkgs-unstable#podman \
github:NixOS/nixpkgs/nixpkgs-unstable#socat
podman \
--log-level=trace \
machine \
init \
--cpus=4 \
--disk-size=30 \
--log-level=trace \
--memory=3072 \
--rootful=false \
--timezone=local \
--volume="$HOME":"$HOME" \
vm

Refs.:

podman --log-level=trace machine init
podman machine rm --force podman-machine-default
podman machine rm --force vm
podman --remote info --format {{.Host.EventLogger}}
podman machine stop; \
podman machine rm --force; \
podman --log-level=trace machine init --memory=3072 --cpus=4 \
&& podman --log-level=trace machine start

Adapted from: containers/podman#14303 (comment)

What about downloading the image upfront?

echo 'SHA256 (fedora-coreos-38.20230902.3.0-qemu.x86_64.qcow2.xz) = 09ab10a13330307baefb71e6fcf9f07ce93799aad9ec25185bf59deb3d6c1eb7' >fedora-coreos-38.20230902.3.0-qemu.x86_64.qcow2.xz-CHECKSUM

curl -O https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/38.20230902.3.0/x86_64/fedora-coreos-38.20230902.3.0-qemu.x86_64.qcow2.xz.sig


curl -O https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/38.20230902.3.0/x86_64/fedora-coreos-38.20230902.3.0-qemu.x86_64.qcow2.xz

gpgv --keyring ./fedora.gpg fedora-coreos-38.20230902.3.0-qemu.x86_64.qcow2.xz.sig fedora-coreos-38.20230902.3.0-qemu.x86_64.qcow2.xz
sha256sum -c fedora-coreos-38.20230902.3.0-qemu.x86_64.qcow2.xz-CHECKSUM

Refs.:

is there documentation on the ignition file itself? I saw the cli arg to pass it in but I can't find any documentation on the actual content of that file.

no there is not ... we dont envision people messing with an individual ignition file. But we added support for providing one.
containers/podman#13900 (comment)

About socat

nix profile install nixpkgs#socat
socat -u OPEN:/dev/null UNIX-CONNECT:"${XDG_RUNTIME_DIR}"/podman/podman-machine-default_ready.sock

From: https://unix.stackexchange.com/a/556790

socat -u OPEN:/dev/null UNIX-CONNECT:"${XDG_RUNTIME_DIR}"/podman/podman.sock

From: https://docs.podman.io/en/latest/markdown/podman-system-service.1.html

curl -v -s -X GET --unix-socket "${XDG_RUNTIME_DIR}"/podman/podman.sock "http:///libpod/containers/json"

PedroRegisPOAR commented Oct 3, 2023

On a Mac M2, inside a UTM VM that is also a Mac M2 (like the host): nix, qemu-system-aarch64, podman --remote

nix \
profile \
install \
nixpkgs#qemu \
nixpkgs#jq \
nixpkgs#podman
podman \
--log-level=trace \
machine \
init \
--cpus=4 \
--disk-size=30 \
--log-level=trace \
--memory=3072 \
--rootful=false \
--timezone=local \
--volume="$HOME":"$HOME" \
vm

This file is created or updated after each podman machine init:

less ~/.config/containers/podman/machine/qemu/$(podman machine info --format "{{ .Host.CurrentMachine }}").json

What does it mean?

I haven't found a solution yet, but if qemu is not using Apple virtualization, then he should be able to run.
containers/podman#12617 (comment)

Is this about tcg?

And this workaround? https://podman-desktop.io/docs/troubleshooting/troubleshooting-podman-on-macos#podman-machine-on-apple-silicon

The fix: edit the file so that it contains this:

...
  "-accel",
  "tcg",
  "-cpu",
  "cortex-a57",
  "-M",
  "virt,highmem=off",
...
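A hedged sketch of applying that edit with jq, demonstrated on a minimal sample of the CmdLine array rather than on the real machine config (for the real path, see the PODMAN_MACHINE_CONFIG_FULL_PATH commands elsewhere in this issue). It drops the "-accel hvf" pair and rewrites the cpu and machine values, assuming the config starts out with "-accel hvf -accel tcg -cpu host -M virt,highmem=on" as in the qemu command logged further down:

```shell
# Minimal sample of the relevant CmdLine portion (hypothetical, for demonstration).
cat > /tmp/vm-sample.json << 'EOF'
{"CmdLine":["qemu-system-aarch64","-accel","hvf","-accel","tcg","-cpu","host","-M","virt,highmem=on"]}
EOF

# Remove the "-accel","hvf" pair, then swap the cpu and machine values.
patched=$(jq -c '.CmdLine |= (
    reduce .[] as $x ([];
      if (length > 0 and .[-1] == "-accel" and $x == "hvf")
      then .[:-1]
      else . + [$x] end)
    | map(if . == "host" then "cortex-a57"
          elif . == "virt,highmem=on" then "virt,highmem=off"
          else . end))' /tmp/vm-sample.json)
echo "$patched"
```

Against the real file, redirect to a .temp file and mv it back, as done for the -nographic edit earlier in this issue.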

Refs.:

TODO: why did the brew maintainers do it like that? containers/podman#18073 (comment)

TODO: test the difference between "virt,highmem=off" and "virt,highmem=on"

TODO: what about cortex-a72?

A Mac M2 VM worked with this configuration:

{
 "ConfigPath": {
  "Path": "/Users/alvaro/.config/containers/podman/machine/qemu/vm.json"
 },
 "CmdLine": [
  "/Users/alvaro/.nix-profile/bin/qemu-system-aarch64",
  "-m",
  "3072",
  "-smp",
  "4",
  "-fw_cfg",
  "name=opt/com.coreos/config,file=/Users/alvaro/.config/containers/podman/machine/qemu/vm.ign",
  "-qmp",
  "unix:/var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/qmp_vm.sock,server=on,wait=off",
  "-netdev",
  "socket,id=vlan,fd=3",
  "-device",
  "virtio-net-pci,netdev=vlan,mac=5a:94:ef:e4:0c:ee",
  "-device",
  "virtio-serial",
  "-chardev",
  "socket,path=/var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/vm_ready.sock,server=on,wait=off,id=avm_ready",
  "-device",
  "virtserialport,chardev=avm_ready,name=org.fedoraproject.port.0",
  "-pidfile",
  "/var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/vm_vm.pid",
  "-accel",
  "tcg",
  "-cpu",
  "cortex-a57",
  "-M",
  "virt,highmem=off",
  "-drive",
  "file=/Users/alvaro/.nix-profile/share/qemu/edk2-aarch64-code.fd,if=pflash,format=raw,readonly=on",
  "-drive",
  "file=/Users/alvaro/.local/share/containers/podman/machine/qemu/vm_ovmf_vars.fd,if=pflash,format=raw",
  "-virtfs",
  "local,path=/Users/alvaro,mount_tag=vol0,security_model=mapped-xattr",
  "-drive",
  "if=virtio,file=/Users/alvaro/.local/share/containers/podman/machine/qemu/vm_fedora-coreos-38.20230918.2.0-qemu.aarch64.qcow2"
 ],
 "Rootful": false,
 "UID": 501,
 "IgnitionFilePath": {
  "Path": "/Users/alvaro/.config/containers/podman/machine/qemu/vm.ign"
 },
 "ImageStream": "testing",
 "ImagePath": {
  "Path": "/Users/alvaro/.local/share/containers/podman/machine/qemu/vm_fedora-coreos-38.20230918.2.0-qemu.aarch64.qcow2"
 },
 "Mounts": [
  {
   "ReadOnly": false,
   "Source": "/Users/alvaro",
   "Tag": "vol0",
   "Target": "/Users/alvaro",
   "Type": "9p"
  }
 ],
 "Name": "vm",
 "PidFilePath": {
  "Path": "/var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/vm_proxy.pid"
 },
 "VMPidFilePath": {
  "Path": "/var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/vm_vm.pid"
 },
 "QMPMonitor": {
  "Address": {
   "Path": "/var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/qmp_vm.sock"
  },
  "Network": "unix",
  "Timeout": 2000000000
 },
 "ReadySocket": {
  "Path": "/var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/vm_ready.sock"
 },
 "CPUs": 4,
 "DiskSize": 30,
 "Memory": 3072,
 "IdentityPath": "/Users/alvaro/.ssh/vm",
 "Port": 49228,
 "RemoteUsername": "core",
 "Starting": false,
 "Created": "2023-10-03T14:13:43.403789-03:00",
 "LastUp": "2023-10-03T14:13:43.403789-03:00"
}

Start the podman machine VM:

echo First start the podman virtual machine \
&& podman --log-level=trace machine start vm \
&& echo The machine must have started \
&& podman --remote --log-level=ERROR run quay.io/podman/hello 

Note: it takes around 8 minutes to finish, maybe more.

Other details

podman --version
podman version 4.3.1
qemu-kvm --version
QEMU emulator version 7.1.0
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers

Old

nix \
profile \
install \
nixpkgs#qemu \
nixpkgs#jq \
nixpkgs#podman
% qemu-kvm --version
QEMU emulator version 7.1.0
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers
nix profile install nixpkgs#podman
warning: skipping dangling symlink '/private/tmp/nix-841-0/bin/virtiofsd'

ehh... broken, of course...
TODO: maybe the problem is the nested virtualization? Would it work on bare metal?

Tip: maybe -accel tcg

qemu-system-aarch64 -accel help
alvaro@Maquina-Virtual-de-Alvaro ~ % less ~/.config/containers/podman/machine/qemu/$(podman machine info --format "{{ .Host.CurrentMachine }}").json
alvaro@Maquina-Virtual-de-Alvaro ~ % PODMAN_MACHINE_CONFIG_FULL_PATH=$(echo ~/.config/containers/podman/machine/qemu/$(podman machine info --format "{{ .Host.CurrentMachine }}").json) \
 && jq -c '.CmdLine += ["-nographic"]' "$PODMAN_MACHINE_CONFIG_FULL_PATH" > "$PODMAN_MACHINE_CONFIG_FULL_PATH".temp \
 && mv -v "$PODMAN_MACHINE_CONFIG_FULL_PATH".temp "$PODMAN_MACHINE_CONFIG_FULL_PATH" \
 && echo
/Users/alvaro/.config/containers/podman/machine/qemu/vm.json.temp -> /Users/alvaro/.config/containers/podman/machine/qemu/vm.json

alvaro@Maquina-Virtual-de-Alvaro ~ % less ~/.config/containers/podman/machine/qemu/$(podman machine info --format "{{ .Host.CurrentMachine }}").json
alvaro@Maquina-Virtual-de-Alvaro ~ % echo First start the podman virtual machine \
    && podman --log-level=trace machine start vm \
    && echo The machine must have started \
    && podman --remote --log-level=ERROR run quay.io/podman/hello
First start the podman virtual machine
INFO[0000] /nix/store/sfw92crhskck0gp1czazdgjn09sd0a7l-podman-4.3.1/bin/podman filtering at log level trace 
Starting machine "vm"
DEBU[0000] qemu cmd: [/Users/alvaro/.nix-profile/bin/qemu-system-aarch64 -m 3072 -smp 4 -fw_cfg name=opt/com.coreos/config,file=/Users/alvaro/.config/containers/podman/machine/qemu/vm.ign -qmp unix:/var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/qmp_vm.sock,server=on,wait=off -netdev socket,id=vlan,fd=3 -device virtio-net-pci,netdev=vlan,mac=5a:94:ef:e4:0c:ee -device virtio-serial -chardev socket,path=/var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/vm_ready.sock,server=on,wait=off,id=avm_ready -device virtserialport,chardev=avm_ready,name=org.fedoraproject.port.0 -pidfile /var/folders/qf/2qlrk7g97yvbsfjgkjwrs8rw0000gn/T/podman/vm_vm.pid -accel hvf -accel tcg -cpu host -M virt,highmem=on -drive file=/Users/alvaro/.nix-profile/share/qemu/edk2-aarch64-code.fd,if=pflash,format=raw,readonly=on -drive file=/Users/alvaro/.local/share/containers/podman/machine/qemu/vm_ovmf_vars.fd,if=pflash,format=raw -virtfs local,path=/Users/alvaro,mount_tag=vol0,security_model=mapped-xattr -drive if=virtio,file=/Users/alvaro/.local/share/containers/podman/machine/qemu/vm_fedora-coreos-38.20230918.2.0-qemu.aarch64.qcow2 -nographic] 
Waiting for VM ...
Error: qemu exited unexpectedly with exit code -1, stderr: qemu-system-aarch64: -accel hvf: Error: HV_UNSUPPORTED

Read https://devangtomar.medium.com/colima-containers-on-linux-on-mac-f6396c27e39b

Updating qemu and trying again

nix profile install github:NixOS/nixpkgs/f3dab3509afca932f3f4fd0908957709bb1c1f57#podman github:NixOS/nixpkgs/f3dab3509afca932f3f4fd0908957709bb1c1f57#qemu github:NixOS/nixpkgs/f3dab3509afca932f3f4fd0908957709bb1c1f57#jq 

Still broken

nix run github:NixOS/nixpkgs/nixpkgs-unstable#darwin.builder
error: 'darwin.builder' has been changed and renamed to 'darwin.linux-builder'. The default ssh port is now 31022. Please update your configuration or override the port back to 22. See https://nixos.org/manual/nixpkgs/unstable/#sec-darwin-builder

Broken

QEMU_OPTS="-m 8192" nix run github:NixOS/nixpkgs/nixpkgs-unstable#darwin.linux-builder

Refs.:
NixOS/nixpkgs#108984 (comment)

Maybe newer qemu?

nix \
profile \
install \
github:NixOS/nixpkgs/c0838e12afa82d81668ab8550983e0521f117790#podman \
github:NixOS/nixpkgs/c0838e12afa82d81668ab8550983e0521f117790#qemu \
github:NixOS/nixpkgs/c0838e12afa82d81668ab8550983e0521f117790#jq
codesign -d --entitlements - $(readlink -f $(which qemu-system-aarch64))

Refs.:

Executable=/nix/store/7iman6fw62bbicihx8l9c0i68d22dl91-qemu-8.1.1/bin/qemu-system-aarch64
[Dict]
        [Key] com.apple.security.hypervisor
        [Value]
                [Bool] true
cat << 'EOF' > entitlements.xml
<?xml version="1.0" encoding="utf-8"?>

<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">

<plist version="1.0"> <dict> <key>com.apple.security.hypervisor</key> <true/> </dict> </plist>
EOF

Refs.:

codesign -s -- --entitlements entitlements.xml --force $(readlink -f $(which qemu-system-aarch64))

Refs.:

error: The specified item could not be found in the keychain.

Colima

Now trying colima...

nix profile install github:abiosoft/colima/f2c91a1b5bd4d0764ac3c4d889ad5d4d9837f639

Of course it is broken, it's the default... where would the fun be, right?!

error: hash mismatch in fixed-output derivation '/nix/store/4y54rj0y2zfp1aq4d9d6cpgr16lkya7j-colima-go-modules.drv':
         specified: sha256-lsTvzGFoC3Brnr1Q0Hl0ZqEDfcTeQ8vWGe+xylTyvts=
            got:    sha256-IQKfv+bwDQMuDytfYvirBfrmGexj3LGnIQjoJv1NEoU=
error: 1 dependencies of derivation '/nix/store/59cm2wgmc5cz9y2ifmkbzf6a553ikl70-colima.drv' failed to build

https://github.com/abiosoft/colima#usage
