Add rockchip support #1814

Closed

Conversation

bdherouville

Use: `ffmpeg -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp`

```
docker run -v /dev/dri/renderD128:/dev/dri/renderD128 ffmpeg \
  -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp \
  -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 \
  -i rtsp://server:80/Streaming/Channels/101 -c:v copy -c:a aac -f null -
```
@blakeblackshear
Owner

This can be added in 0.9.0, but I need people to test since I don't have a rkmpp board to test with. Anyone willing to test this with 0.9.0?

@bdherouville
Author

> This can be added in 0.9.0, but I need people to test since I don't have a rkmpp board to test with. Anyone willing to test this with 0.9.0?

Is it available in the beta branch? Or is there any doc on how to build and use a personal docker registry?

@blakeblackshear blakeblackshear changed the base branch from master to release-0.9.0 September 25, 2021 13:19
@blakeblackshear
Owner

I am going to add these changes into a separate branch.

@blakeblackshear
Owner

@bdherouville can you give this image a try? blakeblackshear/frigate:0.9.0-0f5dfea-aarch64

Make sure you read the release notes for 0.9.0.

@bdherouville
Author

> @bdherouville can you give this image a try? blakeblackshear/frigate:0.9.0-0f5dfea-aarch64
>
> Make sure you read the release notes for 0.9.0.

Perfect. I am completely reinstalling my system from scratch, so for the moment I don't have any compatibility concerns.

Thank you!

@blakeblackshear
Owner

Just be aware there are some significant updates in the configuration.

@bdherouville
Author

OK, I read the release notes. I am ready. Thanks for the advice.

@bdherouville
Author

bdherouville commented Sep 27, 2021

For the moment I have an error:

```
frigate | [2021-09-27 16:47:25] watchdog.front ERROR : FFMPEG process crashed unexpectedly for front.
frigate | [2021-09-27 16:47:25] watchdog.front ERROR : The following ffmpeg logs include the last 100 lines prior to exit.
frigate | [2021-09-27 16:47:25] watchdog.front ERROR : You may have invalid args defined for this camera.
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_info: mpp version: 6cc2ef5f author: Herman Chen 2021-09-17 [mpp_list]: Add list_mode and list_move_tail
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_rt: NOT found ion allocator
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_rt: found drm allocator
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: vcodec_service: open vcodec_service (null) failed
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: hal_h264d_api: mpp_dev_init failed ret: -1
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_hal: mpp_hal_init hal h264d_rkdec init failed ret -1
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_hal: mpp_hal_init could not found coding type 7
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_dec: mpp_dec_init could not init hal
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp_time: mpp_clock_put invalid clock (nil)
frigate | [2021-09-27 16:47:25] ffmpeg.front.detect ERROR : mpp[246]: mpp: error found on mpp initialization
```

```yaml
ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args: -hide_banner -loglevel warning
  # Optional: global hwaccel args (default: shown below)
  # NOTE: See hardware acceleration docs for your specific device
  hwaccel_args:
    - -hwaccel
    - drm
    - -hwaccel_device
    - /dev/dri/renderD128
  # Optional: global input args (default: shown below)
  input_args: -c:v h264_rkmpp -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1
  # Optional: global output args
  output_args:
    # Optional: output args for detect streams (default: shown below)
    detect: -f rawvideo -pix_fmt yuv420p
    # Optional: output args for record streams (default: shown below)
    record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
    # Optional: output args for rtmp streams (default: shown below)
    rtmp: -c copy -f flv
```

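As a side note on how these sections combine: Frigate concatenates global_args, hwaccel_args, and input_args, then `-i` plus the stream URL, then the role-specific output args, into one ffmpeg command per stream. Below is a minimal sketch of that assembly (the helper is hypothetical, not Frigate's actual code); the point is that `-c:v h264_rkmpp` has to land before `-i` to select the decoder for the input:

```python
# Sketch of how a config like the one above turns into an ffmpeg command line.
# Simplified illustration only; the helper name and exact ordering are
# assumptions based on the layout of the config sections.
import shlex

def build_ffmpeg_cmd(global_args, hwaccel_args, input_args, url, output_args, dest):
    def listify(args):
        # Frigate-style configs accept either a space-separated string or a YAML list
        return shlex.split(args) if isinstance(args, str) else list(args)
    return (
        ["ffmpeg"]
        + listify(global_args)
        + listify(hwaccel_args)
        + listify(input_args)   # decoder flags: must precede -i
        + ["-i", url]
        + listify(output_args)
        + [dest]
    )

cmd = build_ffmpeg_cmd(
    "-hide_banner -loglevel warning",
    ["-hwaccel", "drm", "-hwaccel_device", "/dev/dri/renderD128"],
    "-c:v h264_rkmpp -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt "
    "-rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1",
    "rtsp://server:80/Streaming/Channels/101",
    "-f rawvideo -pix_fmt yuv420p",
    "pipe:",
)
print(" ".join(cmd))
```

Anything placed in output_args only affects the encoder/muxer side, which is why a decode error like the one above points at hwaccel_args/input_args.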
@bdherouville
Author

I tested again with my container:

```
docker run -v /dev/dri/renderD128:/dev/dri/renderD128 bdherouville:frigate -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 -i rtsp://user:pass@nvr/Streaming/Channels/101 -c:v copy -c:a aac -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an test.mp4
```

It works flawlessly.

```
Metadata:
title : HIK Media Server V3.0.21
comment : HIK Media Server Session Description : standard
Duration: N/A, start: 1632766916.049400, bitrate: N/A
Stream #0:0: Video: h264, yuv420p(progressive), 1920x1080, 7.50 fps, 7.50 tbr, 90k tbn, 15 tbc
Multiple -c, -codec, -acodec, -vcodec, -scodec or -dcodec options specified for stream 0, only the last option '-c copy' will be used.
Output #0, segment, to 'test.mp4':
Metadata:
title : HIK Media Server V3.0.21
comment : HIK Media Server Session Description : standard
encoder : Lavf58.45.100
Stream #0:0: Video: h264, yuv420p(progressive), 1920x1080, q=2-31, 7.50 fps, 7.50 tbr, 15360 tbn, 7.50 tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
^Cframe= 257 fps=8.0 q=-1.0 Lsize=N/A time=00:00:35.00 bitrate=N/A speed=1.09x
video:5174kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
```

@spattinson

The h264_rkmpp codec in ffmpeg is for the legacy kernel on the Rockchip RK3399 (and possibly other Rockchip devices), with kernel drivers and userland libraries written by Rockchip. It only works with the old legacy 4.* BSP Linux kernel. When I tested this previously it crashed a lot, and eventually would not restart properly until reboot. The errors in the reply above are what you see when the drivers and libraries are missing or have failed. I strongly suggest testing this inside Frigate with real cameras, with motion and recognition enabled. The command above is only doing a copy, not decoding, from what I can see. I did not have much success with rkmpp. The mainline kernel is where up-to-date development is being done, and I have had success with mainline kernel 5.14.5 and this fork of ffmpeg: https://github.com/jernejsk/FFmpeg/tree/v4l2-request-hwaccel-4.3.2

@bdherouville
Author

> h264_rkmpp codec in ffmpeg is for legacy kernel on rockchip rk3399 and possibly other rockchip devices […] mainline kernel is where up to date development is being done and I have had success with mainline kernel 5.14.5 and this fork of ffmpeg https://github.com/jernejsk/FFmpeg/tree/v4l2-request-hwaccel-4.3.2

Hi!

Do you mean that for a Rockchip device we would need a specific ffmpeg container? I will work on building such a container and I will test.

I did not understand your remark about using a real camera stream, as I did that test and it failed ( #1814 (comment) ).

Cheers,

@baldisos

Just for information: with the 0.9.0 beta I'm down to about 12% CPU usage, coming from about 40%, and also about 200MB less RAM usage. Seems like this works!

@bdherouville
Author

> Just for Information, with 0.9.0 Beta i'm down to about 12% CPU usage coming from about 40%. And also about 200MB less RAM usage. Seems like this works!

Looks good! Can you share your HW / Frigate options?

Regards,

@baldisos
> Looks good ! Can you share your HW / Frigate options ?

the input_args used:

```yaml
        - -avoid_negative_ts
        - make_zero
        - -fflags
        - nobuffer
        - -flags
        - low_delay
        - -strict
        - experimental
        - -fflags
        - +genpts+discardcorrupt
        - -rw_timeout
        - "5000000"
        - -use_wallclock_as_timestamps
        - "1"
```

@bdherouville
Author

> the input_args used: […]

Thanks, and what is your hardware?

@baldisos
> Thanks, and what is your hardware ?

I'm using HassOS on an ODROID-N2+, which uses the RK3399(?) platform I think.

@spattinson
> Do you mean that for a rockchip device we would have a specific ffmpeg container ? I will work on build such container and I will test.

There are already hardware-specific ffmpeg containers: amd64nvidia for CUDA decoding on Nvidia GPUs, and the aarch64 one has Raspberry Pi hardware acceleration. It's a separate part of the config, so it's easy enough to build a different one; you would need to install the Rockchip libraries in the container, and it may also need a /dev/ node exposed. What kernel version are you running?

@bdherouville
Author

> What kernel version are you running?

Linux rockpro64 5.10.63-rockchip64 #21.08.2

I am running Armbian.

@bdherouville
Author

> I'm using HassOS on an ODROID-N2+ which uses the RK3399(?) platform i think.

I just tested the 0.9.0 release with the same parameters. With 5 streams of 1920x1080 and one of 1920x720 on a RockPro64 I end up with a load of 11, which I guess is a bit too high.

I don't know if we can monitor the vpu load on that board.

@spattinson

I didn't think rkmpp worked on 5.x kernels. Do you have a /dev/rkmpp device file?

@bdherouville
Author

> I didn't think rkmpp worked on 5.x kernels? do you have /dev/rkmpp device file ?

No, I don't have this device.

This document is very interesting, especially the end:

https://www.linkedin.com/pulse/stream-cheaprk3399-ffmpeg-part-i-bruno-verachten/

@spattinson
> I'm using HassOS on an ODROID-N2+ which uses the RK3399(?) platform i think.

The Odroid N2+ uses the Amlogic S922X processor, NOT the RK3399.

@bdherouville
Author

bdherouville commented Sep 30, 2021

@blakeblackshear I would like to test Frigate with this specific ffmpeg: https://github.com/jernejsk/FFmpeg/tree/v4l2-request-hwaccel-4.3.2

Is it possible to create a branch where I would be able to push updates? I would like to test starting Frigate and how it interacts with that ffmpeg.

Or, can I have an ffmpeg command identical to the one run in Frigate? I did a docker inspect and found multiple commands that were running.
A little explanation would help to benchmark the different ffmpeg builds without needing to build/run Frigate.

Regards,

@blakeblackshear
Owner

You don't need to test how Frigate works with ffmpeg. You should fork the repo and make changes to this Dockerfile. You can test your changes by running ffmpeg directly in that container after running `make aarch64_ffmpeg`.

@blakeblackshear
Owner

I am going to reopen this. I reverted the previous attempt to incorporate the changes into the next release because it was breaking RPi4 support.

@blakeblackshear
Owner

@bdherouville note that the following command you posted does not actually use the hwaccel support you added, because it's just copying the stream directly to the mp4.

Your command:

```
docker run -v /dev/dri/renderD128:/dev/dri/renderD128 bdherouville:frigate -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 -i rtsp://user:pass@nvr/Streaming/Channels/101 -c:v copy -c:a aac -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an test.mp4
```

Try this instead:

```
docker run -v /dev/dri/renderD128:/dev/dri/renderD128 bdherouville:frigate -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 -i rtsp://user:pass@nvr/Streaming/Channels/101 -f rawvideo -pix_fmt yuv420p pipe: > /dev/null
```

@spattinson

The aarch64 ffmpeg Frigate dockerfile installs Raspberry Pi userspace kernel headers; these won't work for compiling hardware acceleration for other aarch64 platforms, which will probably need the correct kernel headers. So another arch would be needed for Frigate, like how amd64nvidia is handled today. For my install I just expose the system /usr/include inside the container for the build, as Armbian doesn't maintain apt packages for those on kernel 5.13.*.

@blakeblackshear blakeblackshear deleted the branch blakeblackshear:release-0.9.0 October 5, 2021 22:59
@sakalauskas

This change does indeed break the Jetson Nano, as @spattinson explained:

```
 ERROR   : You may have invalid args defined for this camera.
 ERROR   : ffmpeg: error while loading shared libraries: libnvll.so: cannot open shared object file: No such file or directory
```

@spattinson do you use docker-compose? What's the best way to expose /usr/include?

@bdherouville
Author

After a bit of hope I reverted back to an x86 system. I'll keep working on this, but we can assume that the kernel module does not fit into the 5.x Linux kernel.
Mostly I've seen that on the ARM architecture, GStreamer is the preferred software in which acceleration is developed (Nvidia, Rockchip).
So I will continue to investigate, but it will be a long journey I guess.

@spattinson
> @spattinson do you use docker-compose? What's the best way to expose /usr/include?

I am getting the kernel source and installing headers from that; this needs the additional packages xz-utils and rsync added to the apt-get install at the top. For Rockchip, the VPU acceleration drivers are still in staging in the kernel and are being actively worked on. One of the Kodi LibreELEC devs maintains a fork of FFmpeg with private headers that must be in sync with the kernel. For the Jetson Nano, which probably has stable interfaces with the GPU, there may be a linux-headers- package you can simply install, in a way similar to how the Raspberry Pi headers are installed, though you may need to add the apt source for that. The method I use may not work for Jetson; I'm not sure whether the Nvidia CUDA install provides the headers required by ffmpeg or not.

```dockerfile
## linux headers
RUN \
        DIR=/tmp/linux && mkdir -p ${DIR} && cd ${DIR} && \
        curl -sLO https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.14.9.tar.xz && \
        tar -x --strip-components=1 -f linux-5.14.9.tar.xz && \
        make -j6 headers_install INSTALL_HDR_PATH=/usr && \
        rm -rf ${DIR}
```

@gusarg81

gusarg81 commented Oct 8, 2021

Hi,

I know that this is kind of off-topic. I have a Rock64 (RK3328). I know that hwaccel (for example, using ffmpeg) needs the legacy 4.4.x kernel, but any Linux image with the legacy kernel makes my board crash randomly with a kernel panic. It only stays stable with a mainline kernel.

So, let's say TODAY, is there a way to use hwaccel with this board? If possible, what is needed? My intention is to have decoding and encoding with hwaccel (no matter if ffmpeg, gstreamer, or a combination of both).

Also, I want to develop a smart IP doorbell with this board, which is why I was also looking at Frigate, to use it as an NVR as well.

Thanks, and sorry for the off-topic.

@spattinson

spattinson commented Oct 9, 2021

> Hi,
>
> I know that this is kind off topic. I have a Rock64 (RK3328). I know that hwaccel (example, using ffmpeg) needs legacy kernel 4.4.x, but any Linux image with legacy makes my board to crash randomly with kernel panic. Only stays stable with mailine kernel.
>
> So, lets say TODAY, is there a way to use hwaccel with this board? If possible, whats is needed? My intention is to have decoding and encoding with hwaccel (no matter if ffmpeg and gstreamer or combination of both).
>
> Also I want to develop a Smart IP Doorbell with this board, and is why I also was looking for Frigate to use it as NVR as well.

Tagging @bdherouville too, as they were interested in Rockchip.

TL;DR: it does not work reliably for me ATM, but this is the closest to working I have seen so far. Work is ongoing in the Linux kernel and FFmpeg, so it may work reliably sometime in the future. When the kernel drivers are moved out of staging and the interface to them is stable, I expect to see a pull request on the main FFmpeg git. This is a long reply with information for testing, because I am giving up at this point and moving to a different platform. I would be interested if you find a solution, though, or if I have missed something; hence the detailed reply.

For testing you can try this fork of ffmpeg: https://github.com/jernejsk/FFmpeg It has v4l2-request and libdrm stateless VPU decoding built in, using hantro and rockchip_vdec/rkvdec.

Use kernel 5.14.9; Armbian is a convenient way to change kernels (sudo armbian-config -> System -> Other kernels). FFmpeg from the above GitHub has private headers for the kernel interfaces, and they are updated about a month after each release. You must install the correct userspace kernel headers. I just get the kernel source from https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.14.9.tar.xz and then, inside the source tree, do:

```
make -j6 headers_install INSTALL_HDR_PATH=/usr
```

Do not use armbian-config to install kernel headers; it installs the wrong version.

Then install the FFmpeg dependencies:

```
sudo apt install libudev-dev libfreetype-dev libmp3lame-dev libvorbis-dev libwebp-dev libx264-dev libx265-dev libssl-dev libdrm2 libdrm-dev pkg-config libfdk-aac-dev libopenjp2-7-dev
```
Run configure. This is a minimal set of options; Frigate includes many more, but I removed many of them to build faster and save memory. (I actually think there are a lot of redundant ffmpeg components in Frigate's default build files, some X11 frame-grabber stuff and codecs nobody uses anymore, but that's for a separate discussion.)

```
./configure \
    --enable-libdrm \
    --enable-v4l2-request \
    --enable-libudev \
    --disable-debug \
    --disable-doc \
    --disable-ffplay \
    --enable-shared \
    --enable-libfreetype \
    --enable-gpl \
    --enable-libmp3lame \
    --enable-libvorbis \
    --enable-libwebp \
    --enable-libx265 \
    --enable-libx264 \
    --enable-nonfree \
    --enable-openssl \
    --enable-libfdk_aac \
    --enable-postproc \
    --extra-libs=-ldl \
    --prefix="${PREFIX}" \
    --enable-libopenjpeg \
    --extra-libs=-lpthread \
    --enable-neon
```

Then `make -j6`.
I don't know if this next bit is correct, but it works for me. I don't want to do `make install`, just run the ffmpeg tests from the build directory; to run tests you must run `sudo ldconfig $PWD $PWD/lib*` first, otherwise the linker will not find the libraries.

If you want to try a different kernel version, run `make distclean` in FFmpeg and run `./configure` again. If FFmpeg fails to build, it will be because the private headers do not match the kernel headers (errors like `V4L... undefined`, etc.).

Then you can do some tests and see if you get valid output. For example, this decodes 15s from one of my cams:

```
./ffmpeg -benchmark -loglevel debug -hwaccel drm -i rtsp://192.168.50.144:8554/unicast -t 15 -pix_fmt yuv420p -f rawvideo out.yuv
```

Checks to make during and after decoding:
Observe CPU usage. On my system (RK3399 with a 1.5 GHz little-core and 2 GHz big-core overclock) I get between 17 and 25% CPU on one core; it varies depending on whether it runs on an A53 little core or an A72 big core. It should be better than that; I think it's the way the data is copied around in memory. GStreamer or mpv attempt to do zero-copy decoding, so they are more efficient. With software decoding, CPU use is about 70% of one core. The RK3328 does not have the two A72 cores and four A53 cores that the RK3399 has, just four A53 cores, so the RK3328 is about half as powerful as the RK3399, as the A72 cores are about twice as powerful as the A53 cores.

You should see in the ffmpeg debug output where it tries each of the /dev/video interfaces to find the correct codec for decoding. Be warned that ffmpeg will sometimes just fall back to software decode; if that happens you will see much higher CPU usage, and often ffmpeg will spawn a number of threads to use all cores in your system. Your user should be a member of the "video" group in /etc/group to access the devices without sudo. A log snippet of that section is below:

```
[h264 @ 0xaaab06cd9070] Format drm_prime requires hwaccel initialisation.
[h264 @ 0xaaab06cd9070] ff_v4l2_request_init: avctx=0xaaab06cd9070 hw_device_ctx=0xaaab06c549a0 hw_frames_ctx=(nil)
[h264 @ 0xaaab06cd9070] v4l2_request_probe_media_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/media1 driver=hantro-vpu
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/video1 capabilities=69222400
[h264 @ 0xaaab06cd9070] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: try output format failed
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/video2 capabilities=69222400
[h264 @ 0xaaab06cd9070] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: try output format failed
[h264 @ 0xaaab06cd9070] v4l2_request_probe_media_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/media0 driver=rkvdec
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/video0 capabilities=69222400
[h264 @ 0xaaab06cd9070] v4l2_request_init_context: pixelformat=842094158 width=1600 height=912 bytesperline=1600 sizeimage=2918400 num_planes=1
[h264 @ 0xaaab06cd9070] ff_v4l2_request_frame_params: avctx=0xaaab06cd9070 ctx=0xffff8804df20 hw_frames_ctx=0xffff8804faa0 hwfc=0xffff8804e530 pool=0xffff8805e910 width=1600 height=912 initial_pool_size=3
```
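Read as pseudocode, the probe sequence in this log is: walk the media devices, try the coded format on each video node, and settle on the first driver that accepts it (rkvdec here); if none accepts, ffmpeg falls back to software decoding. A toy illustration with made-up device data (not ffmpeg's actual code):

```python
# Toy simulation of the probe order visible in the log above. The device list
# and field names are invented for illustration; only the control flow mirrors
# what the v4l2_request_probe_* messages show.
devices = [
    {"path": "/dev/video1", "driver": "hantro-vpu", "accepts_h264": False},
    {"path": "/dev/video2", "driver": "hantro-vpu", "accepts_h264": False},
    {"path": "/dev/video0", "driver": "rkvdec", "accepts_h264": True},
]

def probe(devs):
    for d in devs:
        if d["accepts_h264"]:
            # first node whose "try output format" succeeds wins
            return d["path"], d["driver"]
    return None  # ffmpeg would fall back to software decode here

chosen = probe(devices)
print(chosen)
```

This is why watching the debug log matters: if every node fails the format test, you silently get software decode and the high CPU usage described above.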

Check that the output file contains valid video data; try playing it using vlc:

```
vlc --rawvid-fps 10 --rawvid-width 1600 --rawvid-height 900 --rawvid-chroma I420 out.yuv
```

Adjust the command to whatever height/width/fps your cameras record in.

If all this is working, then try doing longer decodes in parallel; e.g. if you have 3 cams, run the ffmpeg command for each of them in a separate window and increase the time. What happens to me is that at some point ffmpeg will start reporting "resource not available/busy" or similar, and rebooting will make it work for a while again.

You can check what codecs are supported by each of the interfaces /dev/video[012] with:

```
v4l2-ctl --all -d0
```

Change d0 to d1, d2, etc. to view the other decoders/encoders.

You can monitor the state of kernel development at https://patchwork.kernel.org/project/linux-rockchip/list/. Most of the work on this is being done by Andrzej Pietrasiewicz. My suggestion is to monitor both the FFmpeg GitHub and the kernel commits/patches, find out when they rebase ffmpeg, pull that version, install the matching kernel plus headers, and retest.

I have all the Frigate docker files already created. I basically created a new set of dockerfiles with an arch of aarch64rockchip and added those to the Makefile. I'll upload them to my GitHub at some point; I see little point in a pull request, since Rockchip is a niche platform with not many users in Home Assistant or Frigate, and it does not currently work reliably for me anyway.

I have been trying to get this working for some time now. At kernel 5.4.* there were a bunch of kernel patches you had to apply; nothing worked for me then, and often FFmpeg complained about the pixel format. There were some people on the Armbian forums who claimed to have it working, but I had my doubts; maybe it was wishful thinking and ffmpeg was really using software decode. Most of the effort around this is for video playback, so people can play 1080p and 2/4K videos on the desktop and in Kodi. There is little information about straight decoding to a pipe like Frigate does, so in your research, ignore stuff to do with patched libva etc.
For now I am using an old ~2013 i5-4670 four-core/thread Haswell with an Nvidia GT640 GPU for Frigate and Home Assistant. For three cams at 1600x900 10fps, Frigate uses 6% CPU as reported by the Home Assistant supervisor, and it is very stable. With that in mind, and wanting to use a more power-efficient system, I caved and ordered an Nvidia Jetson 4GB developer kit yesterday. I am confident I can build Frigate docker containers for that system; it has a similar hardware decoder to their GPUs, and I can also try out using CUDA filters and scaling to reduce CPU load for the Frigate detector. A start would be to copy the amd64nvidia dockerfiles, create an aarch64nvidia arch, and modify from there; it should be mostly the same.

@gusarg81

gusarg81 commented Oct 9, 2021

@spattinson Hi!

Wow, thanks for the very detailed explanation. I will burn Armbian Focal then and choose a newer kernel to test.

By the way, there is no need for --enable-rkmpp when compiling ffmpeg then?

On the other hand, I know the rk3328 is way less powerful than the rk3399, but it is the only thing I have on hand, and like I said, I can't afford to buy another SBC right now (and for some time). Besides, I will use only one camera for this project (an Arducam UVC USB camera, 2MP, with night vision, IR-cut, etc.).

One could ask, "Why don't you just use copy in decode/encode to use less CPU?" I need to transcode to HLS so it will be easier to show the stream via the web if needed (apart from Frigate). Also, this camera's main FHD stream is MJPEG (which, as far as I know, is not supported in HLS).

Plus, since this SBC will also be running Django and other stuff (OpenCV), I want to keep all video processing off the CPU (anything video-related done on this CPU has incredible lag anyway, which is why I am trying to make hwaccel work).
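
For reference, a transcode-to-HLS pipeline of the kind described might look roughly like this (the device path, encoder, and output location are assumptions; libx264 is a software fallback, which you would swap for a hardware encoder if one is available on your board):

```shell
# Hypothetical MJPEG USB camera -> H.264 HLS sketch. /dev/video1 and the
# output path are placeholders; libx264 is a software fallback encoder.
HLS_CMD="ffmpeg -f v4l2 -input_format mjpeg -framerate 30 -video_size 1920x1080 \
  -i /dev/video1 -c:v libx264 -preset veryfast -g 60 \
  -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments \
  /var/www/stream/index.m3u8"
echo "$HLS_CMD"
```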

Thanks again; I will post once I have done the tests.

@spattinson

Rkmpp is for the legacy kernel; it is not used on mainline. The legacy kernel with rkmpp did not work for me with frigate: it would sort of work, then crap out and need a reboot before ffmpeg would work again. Rockchip focused on gstreamer for that, and if you can use gstreamer it may work with very low CPU use; I got 5% CPU use on my system for one stream. The legacy Linux images that FriendlyARM released have a gstreamer demo video that plays back great. See https://wiki.friendlyarm.com/wiki/index.php/NanoPC-T4
For one camera you should be good with an rk3328 on mainline kernels.
Frigate basically creates an ffmpeg command that just copies input to output for recording and viewing; it only needs decoding for detection.
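
As an illustration of the gstreamer route on the legacy kernel (the element names assume the gstreamer-rockchip plugin is installed, the RTSP URL is a placeholder, and I have not verified this exact pipeline):

```shell
# Hypothetical gstreamer decode pipeline using the Rockchip MPP decoder
# element (mppvideodec) provided by the gstreamer-rockchip plugin.
GST_CMD="gst-launch-1.0 rtspsrc location=rtsp://camera:554/stream ! \
  rtph264depay ! h264parse ! mppvideodec ! fakesink"
echo "$GST_CMD"
```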

@gusarg81

gusarg81 commented Oct 9, 2021

Weird, after compiling FFmpeg like you told me, I get this error when running ffmpeg:

ffmpeg: symbol lookup error: ffmpeg: undefined symbol: avio_print_string_array, version LIBAVFORMAT_58

@gusarg81

gusarg81 commented Oct 10, 2021

Ok, I recompiled again, changing some options in the configure step, and it works.

I did the test with:

ffmpeg -benchmark -loglevel debug -hwaccel drm -rtsp_transport udp -i rtsp://XXXX:[email protected]:554/onvif1 -t 15 -pix_fmt yuv420p -f rawvideo out.yuv

[h264 @ 0xaaab1581a000] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0xaaab1581a000] nal_unit_type: 8(PPS), nal_ref_idc: 3
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
Last message repeated 13 times
[h264 @ 0xaaab1581a000] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
[h264 @ 0xaaab1581a000] illegal modification_of_pic_nums_idc 30
[h264 @ 0xaaab1581a000] decode_slice_header error
[h264 @ 0xaaab1581a000] no frame!
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[h264 @ 0xaaab159baea0] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
[h264 @ 0xaaab159baea0] deblocking_filter_idc 7 out of range
[h264 @ 0xaaab159baea0] decode_slice_header error
[h264 @ 0xaaab159baea0] no frame!
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[h264 @ 0xaaab157f9f50] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0xaaab157f9f50] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0xaaab157f9f50] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0xaaab157f9f50] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0xaaab157f9f50] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0xaaab157f9f50] nal_unit_type: 0(Unspecified 0), nal_ref_idc: 0
[h264 @ 0xaaab157f9f50] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0xaaab157f9f50] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0xaaab157f9f50] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0xaaab157f9f50] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0xaaab157f9f50] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0xaaab157f9f50] unknown SEI type 229
[h264 @ 0xaaab157f9f50] unknown SEI type 128
[h264 @ 0xaaab157f9f50] unknown SEI type 229
[h264 @ 0xaaab157f9f50] unknown SEI type 128
[h264 @ 0xaaab157f9f50] Unknown NAL code: 0 (212 bits)
[h264 @ 0xaaab157f9f50] unknown SEI type 229
[h264 @ 0xaaab157f9f50] unknown SEI type 128
[h264 @ 0xaaab157f9f50] unknown SEI type 229
[h264 @ 0xaaab157f9f50] Format drm_prime chosen by get_format().
[h264 @ 0xaaab157f9f50] Format drm_prime requires hwaccel initialisation.
[h264 @ 0xaaab157f9f50] ff_v4l2_request_init: avctx=0xaaab157f9f50 hw_device_ctx=0xaaab15784900 hw_frames_ctx=(nil)
[h264 @ 0xaaab157f9f50] v4l2_request_probe_media_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/media0 driver=hantro-vpu
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/video0 capabilities=69222400
[h264 @ 0xaaab157f9f50] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: try output format failed
[h264 @ 0xaaab157f9f50] v4l2_request_probe_media_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/media1 driver=uvcvideo
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/video1 capabilities=69206017
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: missing required mem2mem capability
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/video2 capabilities=77594624
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: missing required mem2mem capability
[h264 @ 0xaaab157f9f50] Failed setup for format drm_prime: hwaccel initialisation returned error.
[h264 @ 0xaaab157f9f50] Format drm_prime not usable, retrying get_format() without it.
[h264 @ 0xaaab157f9f50] Format yuvj420p chosen by get_format().
[h264 @ 0xaaab157f9f50] Reinit context to 1920x1088, pix_fmt: yuvj420p
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[h264 @ 0xaaab1591d8c0] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
Error while decoding stream #0:0: Invalid data found when processing input
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
Last message repeated 11 times
[h264 @ 0xaaab15939450] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
Error while decoding stream #0:0: Invalid data found when processing input
[h264 @ 0xaaab1581a000] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
Last message repeated 5 times
[h264 @ 0xaaab159baea0] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3

Does this mean hwaccel is not working? "Failed setup for format drm_prime: hwaccel initialisation returned error"

During the test the CPU was at ~129%.

A VLC test of the output file just shows me the video image, but it is frozen.

@spattinson

It failed to find a suitable hardware decoder and did software decode instead. Are there any options you can change on your camera? I am confused by the "missing required mem2mem" error: v4l2m2m is the stateful decoding used on the Raspberry Pi etc., and if you used the same ffmpeg configure options I posted, ffmpeg should not support m2m. Note that the same section of the debug output in my reply has no mention of "missing required mem2mem":

[h264 @ 0xaaab157f9f50] v4l2_request_probe_media_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/media0 driver=hantro-vpu
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/video0 capabilities=69222400
[h264 @ 0xaaab157f9f50] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: try output format failed
[h264 @ 0xaaab157f9f50] v4l2_request_probe_media_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/media1 driver=uvcvideo
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/video1 capabilities=69206017
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: missing required mem2mem capability
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: avctx=0xaaab157f9f50 ctx=0xffff88070f50 path=/dev/video2 capabilities=77594624
[h264 @ 0xaaab157f9f50] v4l2_request_probe_video_device: missing required mem2mem capability

You can try leaving out "-pix_fmt yuv420p"; it seems your camera produces yuvj420p, so just let it output the same pixel format as the input.
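
One quick way to check a debug log for this outcome (the sample log lines below are copied from the output above; the script itself is just a convenience sketch, not part of ffmpeg):

```shell
# Grep an ffmpeg debug log to see whether drm_prime hwaccel init succeeded.
# A sample log is written inline here so the check is self-contained.
cat > /tmp/ffmpeg_debug.log <<'EOF'
[h264 @ 0xaaab157f9f50] Format drm_prime requires hwaccel initialisation.
[h264 @ 0xaaab157f9f50] Failed setup for format drm_prime: hwaccel initialisation returned error.
[h264 @ 0xaaab157f9f50] Format yuvj420p chosen by get_format().
EOF
if grep -q "Failed setup for format drm_prime" /tmp/ffmpeg_debug.log; then
  RESULT="software fallback"
else
  RESULT="hwaccel ok"
fi
echo "$RESULT"
```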

@gusarg81

My camera has two input methods (https://www.arducam.com/product/b0205-arducam-1080p-day-night-vision-usb-camera-module-for-computer-2mp-automatic-ir-cut-switching-all-day-image-usb2-0-webcam-board-with-ir-leds/):

  • YUYV
  • MJPEG

My intention is to use the second, since the first only provides 5 FPS (at 1920x1080) while the second provides 30 FPS (at 1920x1080). That is why I use -input_format mjpeg -pixel_format mjpeg with this camera module:

v4l2-ctl --device /dev/video1 --list-formats
ioctl: VIDIOC_ENUM_FMT
Type: Video Capture

    [0]: 'MJPG' (Motion-JPEG, compressed)
    [1]: 'YUYV' (YUYV 4:2:2)

About FFmpeg: I used the options you told me and got the error I posted before (I will try again right now). I used the common configure options from Ubuntu's FFmpeg build, plus the ones you posted.
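
For what it's worth, a capture command for the MJPEG mode would look roughly like this. Note that ffmpeg's v4l2 input documents -input_format for selecting a compressed stream and -pixel_format for raw pixel formats, so this sketch drops -pixel_format mjpeg (the /dev/video1 path matches the device listed above; everything else is a placeholder):

```shell
# Hypothetical v4l2 MJPEG capture sketch. -input_format alone selects the
# compressed MJPEG stream from the camera; output path is a placeholder.
CAP_CMD="ffmpeg -f v4l2 -input_format mjpeg -framerate 30 -video_size 1920x1080 \
  -i /dev/video1 -t 15 -f rawvideo out.yuv"
echo "$CAP_CMD"
```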

@gusarg81

gusarg81 commented Oct 10, 2021

I compiled FFmpeg again with the configure options you gave me and the result is the same (I also removed -pix_fmt yuv420p):

[h264 @ 0xaaaac9b9b8c0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0xaaaac9b9b8c0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0xaaaac9b9b8c0] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0xaaaac9b9b8c0] Format drm_prime chosen by get_format().
[h264 @ 0xaaaac9b9b8c0] Format drm_prime requires hwaccel initialisation.
[h264 @ 0xaaaac9b9b8c0] ff_v4l2_request_init: avctx=0xaaaac9b9b8c0 hw_device_ctx=0xaaaac9b579d0 hw_frames_ctx=(nil)
[h264 @ 0xaaaac9b9b8c0] v4l2_request_probe_media_device: avctx=0xaaaac9b9b8c0 ctx=0xffff840ac930 path=/dev/media0 driver=hantro-vpu
[h264 @ 0xaaaac9b9b8c0] v4l2_request_probe_video_device: avctx=0xaaaac9b9b8c0 ctx=0xffff840ac930 path=/dev/video0 capabilities=69222400
[h264 @ 0xaaaac9b9b8c0] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
[h264 @ 0xaaaac9b9b8c0] v4l2_request_probe_video_device: try output format failed
[h264 @ 0xaaaac9b9b8c0] v4l2_request_probe_media_device: avctx=0xaaaac9b9b8c0 ctx=0xffff840ac930 path=/dev/media1 driver=uvcvideo
[h264 @ 0xaaaac9b9b8c0] v4l2_request_probe_video_device: avctx=0xaaaac9b9b8c0 ctx=0xffff840ac930 path=/dev/video1 capabilities=69206017
[h264 @ 0xaaaac9b9b8c0] v4l2_request_probe_video_device: missing required mem2mem capability
[h264 @ 0xaaaac9b9b8c0] v4l2_request_probe_video_device: avctx=0xaaaac9b9b8c0 ctx=0xffff840ac930 path=/dev/video2 capabilities=77594624
[h264 @ 0xaaaac9b9b8c0] v4l2_request_probe_video_device: missing required mem2mem capability
[h264 @ 0xaaaac9b9b8c0] Failed setup for format drm_prime: hwaccel initialisation returned error.
[h264 @ 0xaaaac9b9b8c0] Format drm_prime not usable, retrying get_format() without it.
[h264 @ 0xaaaac9b9b8c0] Format yuv420p chosen by get_format().
[h264 @ 0xaaaac9b9b8c0] Reinit context to 2560x1440, pix_fmt: yuv420p
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[h264 @ 0xaaaac9b94980] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[h264 @ 0xaaaac9db5b90] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[h264 @ 0xaaaac9c8f9a0] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3
[h264 @ 0xaaaac9cab460] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 3

I forgot to mention that in the RTSP test I was testing another IP camera that already supports RTSP (and uses yuv420p anyway). Now testing with the USB camera module (the one detailed in the other post):

Opening an input file: /dev/video1.
[video4linux2,v4l2 @ 0xaaaacf353cb0] fd:3 capabilities:84a00001
[video4linux2,v4l2 @ 0xaaaacf353cb0] Current input_channel: 0, input_name: Camera 1, input_std: 0
[video4linux2,v4l2 @ 0xaaaacf353cb0] Querying the device for the current frame size
[video4linux2,v4l2 @ 0xaaaacf353cb0] Setting frame size to 1920x1080
[video4linux2,v4l2 @ 0xaaaacf353cb0] The V4L2 driver changed the pixel format from 0x32315559 to 0x47504A4D
[video4linux2,v4l2 @ 0xaaaacf353cb0] Trying to set codec:rawvideo pix_fmt:yuv420p
[video4linux2,v4l2 @ 0xaaaacf353cb0] The V4L2 driver changed the pixel format from 0x32315559 to 0x47504A4D
[video4linux2,v4l2 @ 0xaaaacf353cb0] Trying to set codec:rawvideo pix_fmt:yuv420p
[video4linux2,v4l2 @ 0xaaaacf353cb0] The V4L2 driver changed the pixel format from 0x32315659 to 0x47504A4D
[video4linux2,v4l2 @ 0xaaaacf353cb0] Trying to set codec:rawvideo pix_fmt:yuv422p
[video4linux2,v4l2 @ 0xaaaacf353cb0] The V4L2 driver changed the pixel format from 0x50323234 to 0x47504A4D
[video4linux2,v4l2 @ 0xaaaacf353cb0] Trying to set codec:rawvideo pix_fmt:yuyv422
[video4linux2,v4l2 @ 0xaaaacf353cb0] All info found
Input #0, video4linux2,v4l2, from '/dev/video1':
Duration: N/A, start: 70959.261187, bitrate: 165888 kb/s
Stream #0:0, 1, 1/1000000: Video: rawvideo, 1 reference frame (YUY2 / 0x32595559), yuyv422, 1920x1080, 0/1, 165888 kb/s, 5 fps, 5 tbr, 1000k tbn, 1000k tbc

By default the camera uses YUYV. With this test (which runs at 5 FPS), CPU usage is low (I can't tell if it is using hwaccel). Now, testing with the MJPEG format, CPU usage goes between 50% and 100%, but the capture is really slow:

frame= 361 fps= 14 q=-0.0 Lsize= 1462050kB time=00:00:12.03 bitrate=995328.0kbits/s dup=263 drop=0 speed=0.468x

[video4linux2,v4l2 @ 0xaaaafe19ad10] fd:3 capabilities:84a00001
[video4linux2,v4l2 @ 0xaaaafe19ad10] Current input_channel: 0, input_name: Camera 1, input_std: 0
[video4linux2,v4l2 @ 0xaaaafe19ad10] Querying the device for the current frame size
[video4linux2,v4l2 @ 0xaaaafe19ad10] Setting frame size to 1920x1080
[mjpeg @ 0xaaaafe19bf50] marker=d8 avail_size_in_buf=221894
[mjpeg @ 0xaaaafe19bf50] marker parser used 0 bytes (0 bits)
[mjpeg @ 0xaaaafe19bf50] marker=c0 avail_size_in_buf=221892
[mjpeg @ 0xaaaafe19bf50] Changing bps from 0 to 8
[mjpeg @ 0xaaaafe19bf50] sof0: picture: 1920x1080
[mjpeg @ 0xaaaafe19bf50] component 0 2:1 id: 0 quant:0
[mjpeg @ 0xaaaafe19bf50] component 1 1:1 id: 1 quant:1
[mjpeg @ 0xaaaafe19bf50] component 2 1:1 id: 2 quant:1
[mjpeg @ 0xaaaafe19bf50] pix fmt id 21111100
[mjpeg @ 0xaaaafe19bf50] Format yuvj422p chosen by get_format().
[mjpeg @ 0xaaaafe19bf50] marker parser used 17 bytes (136 bits)
[mjpeg @ 0xaaaafe19bf50] marker=db avail_size_in_buf=221873
[mjpeg @ 0xaaaafe19bf50] index=0
[mjpeg @ 0xaaaafe19bf50] qscale[0]: 2
[mjpeg @ 0xaaaafe19bf50] index=1
[mjpeg @ 0xaaaafe19bf50] qscale[1]: 3
[mjpeg @ 0xaaaafe19bf50] marker parser used 132 bytes (1056 bits)
[mjpeg @ 0xaaaafe19bf50] marker=c4 avail_size_in_buf=221739
[mjpeg @ 0xaaaafe19bf50] marker parser used 0 bytes (0 bits)
[mjpeg @ 0xaaaafe19bf50] escaping removed 305 bytes
[mjpeg @ 0xaaaafe19bf50] marker=da avail_size_in_buf=221319
[mjpeg @ 0xaaaafe19bf50] marker parser used 221014 bytes (1768112 bits)
[mjpeg @ 0xaaaafe19bf50] marker=d9 avail_size_in_buf=7
[mjpeg @ 0xaaaafe19bf50] decode frame unused 7 bytes
[video4linux2,v4l2 @ 0xaaaafe19ad10] All info found

@gusarg81

Hi, I did it all again, installing Armbian from scratch with the suggested kernel. I tested, but I can't tell if it is using hwaccel; the debug output is quite different. CPU use during the test is about 20%.
All this using the YUYV format from the camera (remember, it is a USB UVC camera). This format only allows 5 fps at FHD.

I did this test with:

ffmpeg -benchmark -loglevel debug -hwaccel drm -i /dev/video1 -t 15 -pix_fmt yuv420p -f rawvideo out.yuv

Now, selecting the MJPEG format (the one I need, because it allows 30 fps at FHD), the CPU is at 100%, so I guess hwaccel is not being used at all here. Maybe I am passing the wrong parameters to ffmpeg.
What I added to the ffmpeg command is -input_format mjpeg and -pixel_format mjpeg.

Attaching the log from the YUYV test.

@salvobellino95

Hi, thank you for your work!
I'm running Frigate on a RockPi 4C+, CPU RK3399T, GPU Mali T860MP4. I chose this SBC because it is powerful and has built-in eMMC.

I installed the recommended Debian 10 legacy kernel 4.4.
Can I enable hw decode acceleration? What can I try?

@cstrat

cstrat commented Aug 18, 2023

Found my way to this closed pull request looking for a way to enable hardware acceleration for ffmpeg on my Rock 5B...
Has anyone got an answer for this?
