
Build Docker multiarch images and fix Dockerfile #278

Merged: 10 commits on Oct 23, 2021

Conversation

ldobbelsteen
Contributor

@ldobbelsteen ldobbelsteen commented Oct 21, 2021

This relates to issue #213, which I opened. I have modified the Docker image workflow to support building images for multiple architectures (amd64, arm64 and armv7). Unfortunately I was not able to add RISC-V as an architecture, as it is not supported by Node.

I also fixed the Dockerfile so that it is no longer necessary to manually add the ffmpeg binaries to the project root in order to build the image. This way, the workflow can build without manual steps.
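For context, a multi-arch workflow along these lines typically registers QEMU emulators and uses Buildx to produce one manifest covering all platforms. A hedged sketch of the relevant workflow steps (the image tag and exact action versions are assumptions, not taken from this PR):

```yaml
# Hypothetical excerpt of a GitHub Actions job building multi-arch images
- name: Set up QEMU
  uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    platforms: linux/amd64,linux/arm64,linux/arm/v7
    push: true
    tags: dusklabs/dim:latest  # assumed tag for illustration
```

Buildx builds each platform (emulated under QEMU where needed) and pushes a single multi-arch manifest, so clients automatically pull the image matching their architecture.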

@vgarleanu
Member

This looks awesome! I'll merge this if the pipeline runs.

@ldobbelsteen
Contributor Author

After some more testing I realized ffmpeg-static only targets x86, so the build doesn't work for ARM, oops. Would it be a good idea to package vanilla ffmpeg for architectures other than x86?

@ldobbelsteen ldobbelsteen marked this pull request as draft October 21, 2021 16:41
@vgarleanu
Member

> After some more testing I realized ffmpeg-static only targets x86, so the build doesn't work for ARM, oops. Would it be a good idea to package vanilla ffmpeg for architectures other than x86?

I think that should do for now, but ideally we should edit the ffmpeg-static build scripts to also target ARM and so on.

@ldobbelsteen
Contributor Author

> After some more testing I realized ffmpeg-static only targets x86, so the build doesn't work for ARM, oops. Would it be a good idea to package vanilla ffmpeg for architectures other than x86?
>
> I think that should do for now, but ideally we should edit the ffmpeg-static build scripts to also target ARM and so on.

I agree, but for now, I have modified the Dockerfile to use the vanilla ffmpeg Debian package when the architecture is not amd64. I have both amd64 and arm64 working and tested with this new Dockerfile, so I'm pretty sure everything works now.
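The architecture switch described here can be expressed directly in the Dockerfile, since BuildKit populates `TARGETARCH` automatically for each platform being built. A minimal sketch of the fallback logic (base images and the surrounding build stages are assumptions; the actual Dockerfile in this PR is not shown):

```dockerfile
FROM debian:bullseye-slim
ARG TARGETARCH
# On amd64 the static ffmpeg build is used; on every other
# architecture, fall back to Debian's vanilla ffmpeg package.
RUN if [ "$TARGETARCH" != "amd64" ]; then \
        apt-get update && \
        apt-get install -y --no-install-recommends ffmpeg && \
        rm -rf /var/lib/apt/lists/*; \
    fi
```

Because `ARG TARGETARCH` is declared after `FROM`, the conditional runs once per platform during a single `docker buildx build` invocation.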

@ldobbelsteen ldobbelsteen marked this pull request as ready for review October 21, 2021 19:10
@vgarleanu
Member

> After some more testing I realized ffmpeg-static only targets x86, so the build doesn't work for ARM, oops. Would it be a good idea to package vanilla ffmpeg for architectures other than x86?
>
> I think that should do for now, but ideally we should edit the ffmpeg-static build scripts to also target ARM and so on.
>
> I agree, but for now, I have modified the Dockerfile to use the vanilla ffmpeg Debian package when the architecture is not amd64. I have both amd64 and arm64 working and tested with this new Dockerfile, so I'm pretty sure everything works now.

Gotcha, I'm happy to merge this. I'll give the Docker image a spin on my Raspberry Pi.

@martadinata666
Contributor

martadinata666 commented Oct 22, 2021

Adding a consideration: https://johnvansickle.com/ffmpeg/ (already referenced) has a collection of static ffmpeg builds for the respective architectures, so instead of building from source, we can fetch the respective binary by using an if condition per TARGETARCH:

RUN if [ "${TARGETARCH}" = "amd64" ]; then \
    wget https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz && \
    tar -xf ffmpeg-git-amd64-static.tar.xz && \
    mv ffmpeg-git-*-amd64-static/ffmpeg .local/bin/ && \
    mv ffmpeg-git-*-amd64-static/ffprobe .local/bin/ && \
    rm -rf ffmpeg-git-*-amd64-static ffmpeg-git-amd64-static.tar.xz; \
    fi

Repeated for every TARGETARCH.
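Rather than repeating the whole RUN block per architecture, the per-arch difference could be factored into a small helper that maps `TARGETARCH` to the matching tarball name. A sketch under the assumption that the johnvansickle.com builds name their tarballs `ffmpeg-git-<arch>-static.tar.xz` and that Docker's `arm` maps to the `armhf` build:

```shell
# Map a Docker TARGETARCH value to the matching static-build tarball name.
# The arm -> armhf mapping is an assumption about the upstream naming.
ffmpeg_tarball() {
    case "$1" in
        amd64) echo "ffmpeg-git-amd64-static.tar.xz" ;;
        arm64) echo "ffmpeg-git-arm64-static.tar.xz" ;;
        arm)   echo "ffmpeg-git-armhf-static.tar.xz" ;;
        *)     echo "unsupported arch: $1" >&2; return 1 ;;
    esac
}

ffmpeg_tarball "amd64"
```

The Dockerfile then needs only one `RUN wget .../$(ffmpeg_tarball "$TARGETARCH") && ...` block instead of one copy per architecture.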

debian@2a9ff00de3d2:~$ ffmpeg
ffmpeg version N-59353-g933765aa0e-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 8 (Debian 8.3.0-6)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzimg
  libavutil      57.  7.100 / 57.  7.100
  libavcodec     59. 12.100 / 59. 12.100
  libavformat    59.  6.100 / 59.  6.100
  libavdevice    59.  0.101 / 59.  0.101
  libavfilter     8. 14.100 /  8. 14.100
  libswscale      6.  1.100 /  6.  1.100
  libswresample   4.  0.100 /  4.  0.100
  libpostproc    56.  0.100 / 56.  0.100
Hyper fast Audio and Video encoder

Additional material

@ldobbelsteen
Contributor Author

I have stumbled upon another issue, which is that building the main crate does not work under QEMU for ARMv7, due to this weird bug. Building the Dockerfile natively works for ARMv7, but under QEMU (which is used in the workflow) there are some large file support issues. I have removed ARMv7 from the workflow for now, which only leaves AMD64 and ARM64.

@vgarleanu
Member

> Adding a consideration: https://johnvansickle.com/ffmpeg/ (already referenced) has a collection of static ffmpeg builds for the respective architectures, so instead of building from source, we can fetch the respective binary by using an if condition per TARGETARCH.

The problem with using these precompiled binaries is that they lack certain features, like VAAPI by the looks of it. NVENC options are also lacking.

@martadinata666
Contributor

> The problem with using these precompiled binaries is that they lack certain features, like VAAPI by the looks of it. NVENC options are also lacking.

Ah, I see; the precompiled builds from https://johnvansickle.com/ffmpeg/ indeed don't support VAAPI.

Another option I can suggest is using jellyfin-ffmpeg from the Jellyfin repository.

My personal VAAPI image build works on AMD and Intel. I can't really tell for NVENC and ARM devices; if someone can help test, that would be gladly welcomed.

FROM 192.168.0.2:5050/dedyms/debian:latest
# https://github.com/intel/compute-runtime/releases
ARG TARGETARCH
ARG GMMLIB_VERSION=21.2.1
ARG IGC_VERSION=1.0.8744
ARG NEO_VERSION=21.41.21220
ARG LEVEL_ZERO_VERSION=1.2.21220

# echo "deb http://deb.debian.org/debian buster-backports main" >> /etc/apt/sources.list && \
RUN apt update && apt install --no-install-recommends --no-install-suggests -y ca-certificates gnupg wget apt-transport-https \
    && wget -O - https://repo.jellyfin.org/jellyfin_team.gpg.key | apt-key add - \
    && echo "deb [arch=$( dpkg --print-architecture )] https://repo.jellyfin.org/debian bullseye main" | tee /etc/apt/sources.list.d/jellyfin.list \
    && apt-get update \
    && apt install --no-install-recommends --no-install-suggests -y mesa-va-drivers jellyfin-ffmpeg \
    && ln -s /usr/lib/jellyfin-ffmpeg/ffmpeg $HOME/.local/bin/ffmpeg \
    && ln -s /usr/lib/jellyfin-ffmpeg/ffprobe $HOME/.local/bin/ffprobe \
    && apt-get remove gnupg apt-transport-https -y \
    && apt-get clean autoclean -y \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
# Intel VAAPI Tone mapping dependencies:
# Prefer NEO to Beignet since the latter one doesn't support Comet Lake or newer for now.
# Do not use the intel-opencl-icd package from repo since they will not build with RELEASE_WITH_REGKEYS enabled.
RUN if [ "${TARGETARCH}" = "amd64" ]; then \
        mkdir intel-compute-runtime && \
        cd intel-compute-runtime && \
        wget https://github.com/intel/compute-runtime/releases/download/${NEO_VERSION}/intel-gmmlib_${GMMLIB_VERSION}_amd64.deb && \
        wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-${IGC_VERSION}/intel-igc-core_${IGC_VERSION}_amd64.deb && \
        wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-${IGC_VERSION}/intel-igc-opencl_${IGC_VERSION}_amd64.deb && \
        wget https://github.com/intel/compute-runtime/releases/download/${NEO_VERSION}/intel-opencl_${NEO_VERSION}_amd64.deb && \
        wget https://github.com/intel/compute-runtime/releases/download/${NEO_VERSION}/intel-ocloc_${NEO_VERSION}_amd64.deb && \
        wget https://github.com/intel/compute-runtime/releases/download/${NEO_VERSION}/intel-level-zero-gpu_${LEVEL_ZERO_VERSION}_amd64.deb && \
        dpkg -i *.deb && \
        cd .. && \
        rm -rf intel-compute-runtime; \
    fi;
CMD ["bash"]

ffmpeg capabilities

ffmpeg version 4.3.2-Jellyfin Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 10 (Debian 10.2.1-6)
  configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-shared --disable-libxcb --disable-sdl2 --disable-xlib --enable-gpl --enable-version3 --enable-static --enable-libfontconfig --enable-fontconfig --enable-gmp --enable-gnutls --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libdav1d --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --arch=amd64 --enable-opencl --enable-vaapi --enable-amf --enable-libmfx --enable-vdpau --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvenc --enable-nvdec --enable-ffnvcodec
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Hyper fast Audio and Video encoder

jellyfin-ffmpeg

debian@95909c0fb5a3:~/.unmanic$ dpkg-query -L jellyfin-ffmpeg
/.
/usr
/usr/lib
/usr/lib/jellyfin-ffmpeg
/usr/lib/jellyfin-ffmpeg/ffmpeg
/usr/lib/jellyfin-ffmpeg/ffprobe
/usr/lib/jellyfin-ffmpeg/lib
/usr/lib/jellyfin-ffmpeg/lib/dri
/usr/lib/jellyfin-ffmpeg/lib/dri/i965_drv_video.so
/usr/lib/jellyfin-ffmpeg/lib/libdav1d.so
/usr/lib/jellyfin-ffmpeg/lib/libdav1d.so.5
/usr/lib/jellyfin-ffmpeg/lib/libdav1d.so.5.1.0
/usr/lib/jellyfin-ffmpeg/lib/libigdgmm.so.11.2.0
/usr/lib/jellyfin-ffmpeg/lib/libmfx.so.1.35
/usr/lib/jellyfin-ffmpeg/lib/libmfxhw64.so.1.35
/usr/lib/jellyfin-ffmpeg/lib/libva-drm.so.2.1100.0
/usr/lib/jellyfin-ffmpeg/lib/libva.so.2.1100.0
/usr/lib/jellyfin-ffmpeg/lib/libzimg.so.2.0.0
/usr/lib/jellyfin-ffmpeg/lib/mfx
/usr/lib/jellyfin-ffmpeg/lib/mfx/libmfx_h264la_hw64.so
/usr/lib/jellyfin-ffmpeg/lib/mfx/libmfx_hevc_fei_hw64.so
/usr/lib/jellyfin-ffmpeg/lib/mfx/libmfx_hevcd_hw64.so
/usr/lib/jellyfin-ffmpeg/lib/mfx/libmfx_hevce_hw64.so
/usr/lib/jellyfin-ffmpeg/lib/mfx/libmfx_vp8d_hw64.so
/usr/lib/jellyfin-ffmpeg/lib/mfx/libmfx_vp9d_hw64.so
/usr/lib/jellyfin-ffmpeg/lib/mfx/libmfx_vp9e_hw64.so
/usr/lib/jellyfin-ffmpeg/lib/mfx/plugins.cfg
/usr/share
/usr/share/doc
/usr/share/doc/jellyfin-ffmpeg
/usr/share/doc/jellyfin-ffmpeg/changelog.Debian.gz
/usr/share/doc/jellyfin-ffmpeg/changelog.gz
/usr/share/doc/jellyfin-ffmpeg/copyright
/usr/lib/jellyfin-ffmpeg/lib/libigdgmm.so
/usr/lib/jellyfin-ffmpeg/lib/libigdgmm.so.11
/usr/lib/jellyfin-ffmpeg/lib/libmfx.so
/usr/lib/jellyfin-ffmpeg/lib/libmfx.so.1
/usr/lib/jellyfin-ffmpeg/lib/libmfxhw64.so
/usr/lib/jellyfin-ffmpeg/lib/libmfxhw64.so.1
/usr/lib/jellyfin-ffmpeg/lib/libva-drm.so
/usr/lib/jellyfin-ffmpeg/lib/libva-drm.so.2
/usr/lib/jellyfin-ffmpeg/lib/libva.so
/usr/lib/jellyfin-ffmpeg/lib/libva.so.2
/usr/lib/jellyfin-ffmpeg/lib/libzimg.so
/usr/lib/jellyfin-ffmpeg/lib/libzimg.so.2

@vgarleanu
Member

IIRC jellyfin-ffmpeg is not up to date with ffmpeg's master branch and includes some Jellyfin-specific modifications that we do not need. I think it's best we stick with ffmpeg from our own repo until we can figure out a more stable solution.

@onedr0p

onedr0p commented Oct 22, 2021

It would be cool to update this PR to also push to GHCR; Docker Hub rate-limits pulls and is going downhill in terms of being friendly towards OSS projects.

@ldobbelsteen
Contributor Author

> It would be cool to update this PR to also push to GHCR; Docker Hub rate-limits pulls and is going downhill in terms of being friendly towards OSS projects.

I personally prefer GHCR too, but I think it'd be more practical to maintain a single registry. Keeping quick dev builds in sync between two registries, for example, would not be very practical. I don't know what @vgarleanu thinks of this.

@vgarleanu
Member

> It would be cool to update this PR to also push to GHCR; Docker Hub rate-limits pulls and is going downhill in terms of being friendly towards OSS projects.
>
> I personally prefer GHCR too, but I think it'd be more practical to maintain a single registry. Keeping quick dev builds in sync between two registries, for example, would not be very practical. I don't know what @vgarleanu thinks of this.

I'm not entirely sure what the benefits of using GHCR are, but I think it might be better to stick with Docker Hub for now.

@onedr0p

onedr0p commented Oct 22, 2021

> I'm not entirely sure what the benefits of using GHCR are, but I think it might be better to stick with Docker Hub for now.

It's free, easily integrated with your project here, and has no image pull limits.

@vgarleanu
Member

> I'm not entirely sure what the benefits of using GHCR are, but I think it might be better to stick with Docker Hub for now.
>
> It's free, easily integrated with your project here, and has no image pull limits.

Sounds good to me. Will I have to reconfigure anything within Docker on my dev machine?

@onedr0p

onedr0p commented Oct 22, 2021

> I'm not entirely sure what the benefits of using GHCR are, but I think it might be better to stick with Docker Hub for now.
>
> It's free, easily integrated with your project here, and has no image pull limits.
>
> Sounds good to me. Will I have to reconfigure anything within Docker on my dev machine?

Nope. The only thing you'll need to do after the workflow is updated is to prepare the repository here to display the package and make it public.

@vgarleanu
Member

@ldobbelsteen if you could add support for deploying to GHCR that would be lovely!

@ldobbelsteen
Contributor Author

> @ldobbelsteen if you could add support for deploying to GHCR that would be lovely!

Done deal. The GITHUB_TOKEN in the workflow is actually pre-supplied by Actions, so there's no need to set up any tokens.
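For reference, logging in to GHCR from a workflow with the implicit token looks roughly like this (a sketch; the exact step names and action version here are assumptions, not copied from the PR):

```yaml
- name: Log in to GHCR
  uses: docker/login-action@v1
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}  # supplied automatically by Actions
```

The job also needs permission to publish packages (e.g. `permissions: packages: write` on the job, or the repository's default workflow permissions set accordingly) for the push to succeed.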

@vgarleanu vgarleanu merged commit 000c2e3 into Dusk-Labs:master Oct 23, 2021
@lwndow

lwndow commented Oct 24, 2021

@ldobbelsteen when should we see this outputting containers? I'm getting a 0B pull when using the new GHCR repo from the README.

@ldobbelsteen
Contributor Author

> @ldobbelsteen when should we see this outputting containers? I'm getting a 0B pull when using the new GHCR repo from the README.

The workflow only runs when a tag is pushed, or when it is manually triggered, which I don't have the rights to do.

@vgarleanu
Member

> @ldobbelsteen when should we see this outputting containers? I'm getting a 0B pull when using the new GHCR repo from the README.

I'll probably end up making a new rc image sometime next week, and a dev release Monday morning.
