Can't convert compose service with CDI device #107
Comments
For someone facing this issue, the following workaround seems to work. Define a new runtime under `[engine.runtimes]` in `containers.conf`:

```toml
[engine.runtimes]
nvidia = ["/usr/bin/nvidia-container-runtime"]
```

Then use `runtime: nvidia` in the compose service:

```yaml
jellyfin:
  image: docker.io/jellyfin/jellyfin:latest
  container_name: jellyfin
  restart: always
  #user: 973:973 # media:media
  runtime: nvidia
  group_add:
    - video
  ports:
    - 127.0.0.1:8096:8096
  volumes:
    - ./jellyfin/config:/config
    - ./jellyfin/cache:/cache
    - /mnt/hdd/media:/data/media
  security_opt:
    - label=disable
  labels:
    - io.containers.autoupdate=registry
```

I haven't tested the generated quadlet service, but it returns the following, which seems correct (ignore the volume paths, I didn't pass …):

```ini
# jellyfin.container
[Container]
AutoUpdate=registry
ContainerName=jellyfin
Image=docker.io/jellyfin/jellyfin:latest
PodmanArgs=--group-add video
PublishPort=127.0.0.1:8096:8096
SecurityLabelDisable=true
Volume=./jellyfin/config:/config
Volume=./jellyfin/cache:/cache
Volume=/mnt/hdd/media:/data/media
GlobalArgs=--runtime nvidia

[Service]
Restart=always
```
According to the Compose Specification, `devices` entries define device mappings in the form `HOST_PATH:CONTAINER_PATH[:CGROUP_PERMISSIONS]`.
Specifically for Podman, there is …
Shouldn't the spec be corrected given that CDI devices exist? CDI is a relatively recent standard (not older than five years), and it's only very recently that Nvidia started recommending it for Podman users. It seems like a case of the spec being out of date. Docker also supports CDI devices, but I'm not sure whether their docker-compose does this same type of validation. IMO it should be valid, given that both Docker and Podman support CDI devices.
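For context, the `devices` syntax the spec describes is a host-to-container path mapping, whereas a CDI request is a fully-qualified device name with no path split; the second form is what currently fails validation. A minimal sketch contrasting the two (the `nvidia.com/gpu=all` name is the one NVIDIA's toolkit generates by default and is assumed here):

```yaml
services:
  app:
    image: docker.io/library/alpine:latest
    devices:
      # Path-mapping form described by the spec: HOST_PATH:CONTAINER_PATH[:CGROUP_PERMISSIONS]
      - /dev/dri/renderD128:/dev/dri/renderD128:rwm
      # CDI form: a fully-qualified device name, no host/container path split
      - nvidia.com/gpu=all
```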
I actually preferred the runtime approach, as it doesn't require me to create some kind of package update hook or systemd service that keeps the CDI YAML file up to date. The issue with CDI is that the file needs to be regenerated every time CUDA or the Nvidia driver is updated. Either way, this issue doesn't impact me anymore, but I kept it open since it seems like a simple issue to fix. Someone might need CDI devices for some other vendor and wouldn't be able to use the runtime workaround. (Edit: …)
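To illustrate why the CDI file goes stale: the generated spec (commonly `/etc/cdi/nvidia.yaml`) bakes in concrete device nodes and driver-versioned host paths, so a driver update invalidates it. A rough, abbreviated sketch of such a spec; the exact paths and version numbers are assumptions, not taken from a real system:

```yaml
cdiVersion: "0.6.0"
kind: nvidia.com/gpu
devices:
  - name: all
    containerEdits:
      deviceNodes:
        - path: /dev/nvidia0   # concrete device node discovered on this host
containerEdits:
  mounts:
    - hostPath: /usr/lib64/libcuda.so.550.54.14      # driver-versioned path (hypothetical version)
      containerPath: /usr/lib64/libcuda.so.550.54.14
      options: ["ro", "nosuid", "nodev", "bind"]
```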
Thanks for the information! I haven't tried to use a GPU in a container myself and hadn't heard of CDI before.
Probably. You should create an issue in the compose-spec repo since you understand this better than I do.
Is there documentation on this? I can't find anything about CDI in the docker-run(1) or podman-run(1) man pages.
In the podman-run man page, the reference to CDI devices is subtle: with CDI devices, the container-device and permissions need to be omitted. It is strange that it isn't mentioned more directly, though.
I made a ticket here: compose-spec/compose-spec#532
Are you sure that's a reference to CDI devices? Leaving off the container-device instructs Podman to mount the device in the same place in the container as on the host. I get that Podman and Docker do support CDI devices; I'm just hesitant to add it to Podlet / …
It's actually not; I checked the man page's git history, and that wording predates CDI.
Consider the following service:
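A hypothetical minimal example of such a service (the image, user IDs, and CDI device name below are illustrative assumptions, not the values from the original report):

```yaml
services:
  jellyfin:
    image: docker.io/jellyfin/jellyfin:latest   # illustrative image
    user: 973:973                               # the kind of user entry that also trips #106
    devices:
      - nvidia.com/gpu=all                      # CDI device name rejected by the devices validation
```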
Ignore the fact that the user entry would fail with podlet due to #106; a separate validation failure is triggered by the devices entry.