Add rockchip support #1814
Conversation
Use: ffmpeg -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp

docker run -v /dev/dri/renderD128:/dev/dri/renderD128 ffmpeg \
  -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp \
  -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 \
  -i rtsp://server:80/Streaming/Channels/101 -c:v copy -c:a aac -f null -
This can be added in 0.9.0, but I need people to test since I don't have an rkmpp board to test with. Anyone willing to test this with 0.9.0?
Is it available in the beta branch? Or is there any doc to build and use a personal docker registry?
I am going to add these changes into a separate branch |
@bdherouville can you give this image a try? Make sure you read the release notes for 0.9.0. |
Perfect. I am completely reinstalling my system from scratch, so for the moment I don't have any compatibility concerns. Thank you!
Just be aware there are some significant updates in the configuration. |
OK, I read the release notes. I am ready. Thanks for the advice. |
For the moment I have an error:

frigate | [2021-09-27 16:47:25] watchdog.front ERROR : FFMPEG process crashed unexpectedly for front.
I tested again with my container:

docker run -v /dev/dri/renderD128:/dev/dri/renderD128 bdherouville:frigate \
  -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp \
  -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 \
  -i rtsp://user:pass@nvr/Streaming/Channels/101 \
  -c:v copy -c:a aac -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an test.mp4

It works flawlessly.

Metadata:
The h264_rkmpp codec in ffmpeg is for the legacy kernel on the Rockchip RK3399, and possibly other Rockchip devices, with kernel drivers and userland libraries written by Rockchip. It only works with the old legacy 4.* BSP Linux kernel. When I tested this previously it crashed a lot, and eventually would not restart properly until reboot. The errors in the reply above are what you see if drivers and libraries are missing or have failed. I strongly suggest testing this in frigate itself, with real cameras, motion, and recognition. The command above is only doing a copy, not decoding, from what I can see? I did not have much success with rkmpp. The mainline kernel is where up-to-date development is being done, and I have had success with mainline kernel 5.14.5 and this fork of ffmpeg: https://github.com/jernejsk/FFmpeg/tree/v4l2-request-hwaccel-4.3.2
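If you are unsure which stack a given board is on, a quick check (a sketch of my own; the device names are the ones mentioned elsewhere in this thread) is:

# The legacy 4.* BSP kernel with rkmpp typically exposes /dev/rkmpp,
# while mainline v4l2-request decoding shows up as /dev/videoN nodes.
uname -r
ls -l /dev/rkmpp /dev/video* /dev/dri/ 2>/dev/null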
Hi! Do you mean that for a Rockchip device we would have a specific ffmpeg container? I did not understand your remark about using a real camera stream, as I did that test and it failed (#1814 (comment)). Cheers,
Just for information, with the 0.9.0 beta I'm down to about 12% CPU usage, coming from about 40%, and also about 200 MB less RAM usage. Seems like this works!
Looks good! Can you share your HW / Frigate options? Regards,
the input_args used:
Thanks, and what is your hardware?
I'm using HassOS on an ODROID-N2+, which uses the RK3399(?) platform, I think.
There are already hardware-specific ffmpeg containers: amd64nvidia for CUDA decoding on Nvidia GPUs, and aarch64 has Raspberry Pi hardware acceleration. It's a separate part of the config, so it's easy enough to build a different one; it would need the rockchip libraries installed in the container, and may also need a /dev/ device exposed for it. What kernel version are you running?
Linux rockpro64 5.10.63-rockchip64 #21.08.2. I am running Armbian.
I just tested the 0.9.0 release with the same parameters. With 5 streams of 1920x1080 and one 1920x720 on a RockPro64, I end up with a load of 11, which I guess is a bit too high. I don't know if we can monitor the VPU load on that board.
I didn't think rkmpp worked on 5.x kernels? Do you have a /dev/rkmpp device file?
No, I don't have this device. This document is very interesting, especially the end: https://www.linkedin.com/pulse/stream-cheaprk3399-ffmpeg-part-i-bruno-verachten/
Odroid N2+ uses Amlogic S922X Processor NOT RK3399 |
@blakeblackshear I would like to test frigate with the specific ffmpeg fork https://github.com/jernejsk/FFmpeg/tree/v4l2-request-hwaccel-4.3.2. Is it possible to create a branch where I can push updates, as I would like to test starting Frigate and how it interacts with that ffmpeg? Or, can I have an ffmpeg command identical to the one run in frigate? I did a docker inspection and I found multiple commands that were running. Regards,
You don't need to test how frigate works with ffmpeg. You should fork the repo and make changes to this Dockerfile. You can test your changes by running ffmpeg directly in that container after running |
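A rough sketch of that workflow, with a made-up image tag and Dockerfile path (check the repo for the real aarch64 ffmpeg Dockerfile location before building):

# Hypothetical: build the modified ffmpeg image from your fork, then run
# ffmpeg inside it against a real camera stream to exercise the decoder.
docker build -t myfork-frigate-ffmpeg -f docker/Dockerfile.ffmpeg.aarch64 .
docker run --rm -it --device /dev/dri/renderD128 myfork-frigate-ffmpeg \
  ffmpeg -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp \
  -i rtsp://user:pass@camera/Streaming/Channels/101 \
  -f rawvideo -pix_fmt yuv420p pipe: > /dev/null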
I am going to reopen this. I reverted the previous attempt to incorporate the changes into the next release because it was breaking RPi4 support. |
@bdherouville note that the following command you posted does not actually use the hwaccel support you added, because it is just copying the stream directly to the mp4.

Your command:

docker run -v /dev/dri/renderD128:/dev/dri/renderD128 bdherouville:frigate \
  -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp \
  -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 \
  -i rtsp://user:pass@nvr/Streaming/Channels/101 \
  -c:v copy -c:a aac -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an test.mp4

Try this instead:

docker run -v /dev/dri/renderD128:/dev/dri/renderD128 bdherouville:frigate \
  -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp \
  -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1 \
  -i rtsp://user:pass@nvr/Streaming/Channels/101 \
  -f rawvideo -pix_fmt yuv420p pipe: > /dev/null
The aarch64 ffmpeg frigate dockerfile installs the Raspberry Pi userspace kernel headers; these won't work for compiling hardware acceleration for other aarch64 platforms, which will probably need their correct kernel headers. So another arch would be needed for frigate, like how amd64nvidia is handled today. For my install I just expose the system /usr/include inside the container for the build, since Armbian doesn't maintain apt packages for those headers on kernel 5.13.*
This change does indeed break Jetson Nano as well, as @spattinson explained.
@spattinson do you use docker-compose? What's the best way to expose /usr/include? |
After a bit of hope, I reverted back to an x86 system. I'll keep working on this, but we can assume that the kernel module is not suited to the 5.x Linux kernel.
I am getting the kernel source and installing from that; this needs the additional packages xz-utils and rsync added to the apt-get install at the top. For Rockchip, the VPU acceleration drivers are still in staging in the kernel and are being actively worked on. One of the Kodi LibreELEC devs is maintaining a fork of FFmpeg that has private headers which must be in sync with the kernel. For the Jetson Nano, which probably has stable interfaces to the GPU, there may be a linux-headers- package you can simply install, in a way similar to how the Raspberry Pi headers are installed, but you may need to add the apt source for that. The method I use may not work for Jetson; I'm not sure whether nvidia cuda installs the headers required by ffmpeg or not.
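As a minimal sketch of that package change (the surrounding Dockerfile contents are assumed, not shown here):

# Hypothetical: the extra packages mentioned above, added to the apt-get
# install that already exists near the top of the ffmpeg Dockerfile.
apt-get update && apt-get install -y xz-utils rsync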
Hi, I know that this is kind of off-topic. I have a Rock64 (RK3328). I know that hwaccel (for example, using ffmpeg) needs the legacy 4.4.x kernel, but any Linux image with the legacy kernel makes my board crash randomly with a kernel panic. So, let's say TODAY, is there a way to use hwaccel with this board? If possible, what is needed? My intention is to have decoding and encoding with hwaccel (no matter if ffmpeg, gstreamer, or a combination of both). Also, I want to develop a Smart IP Doorbell with this board, which is why I was also looking at Frigate to use as the NVR. Thanks, and sorry for the off-topic.
TL;DR: it does not work reliably for me ATM, but this is the closest to working I have seen so far. Work is ongoing in the Linux kernel and FFmpeg, so it may work reliably sometime in the future. When the kernel drivers are moved out of staging and the interface to them is stable, I expect to see a pull request on the main FFmpeg git.

This is a long reply with information for testing, because I am giving up at this point and moving to a different platform. I would be interested if you find a solution, though, or if I have missed something; hence the detailed reply.

For testing you can try this fork of ffmpeg: https://github.com/jernejsk/FFmpeg. It has v4l2-request and libdrm stateless VPU decoding built in, using hantro and rockchip_vdec/rkvdec. Then install the FFmpeg dependencies:
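The dependency list and configure flags below are my own sketch of a typical v4l2-request build of that fork, not options quoted verbatim in this thread, so treat them as assumptions:

# Hypothetical build sketch for the jernejsk FFmpeg fork with v4l2-request decoding.
sudo apt-get install -y build-essential git pkg-config nasm libdrm-dev libudev-dev
git clone -b v4l2-request-hwaccel-4.3.2 https://github.com/jernejsk/FFmpeg.git
cd FFmpeg
./configure --enable-v4l2-request --enable-libdrm --enable-libudev
make -j"$(nproc)"
sudo make install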
Then, if you want to try a different kernel version, switch to it first. Then you can do some tests and see whether you get valid output; for example, this decodes 15s from one of my cams:
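Along the lines of the other commands in this thread, such a test might look like the following (URL and credentials are placeholders, not the original command):

# Hypothetical 15-second hardware-decode test: dump raw frames so that only
# decoding is exercised, then check CPU usage and the ffmpeg debug output.
ffmpeg -loglevel debug -hwaccel drm -rtsp_transport tcp \
  -i "rtsp://user:pass@camera:554/Streaming/Channels/101" \
  -t 15 -f rawvideo -pix_fmt yuv420p out.yuv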
Checks to make during and after decoding: you should see in the ffmpeg debug output where it tries each of the /dev/video interfaces to find the correct codec for decoding. Be warned that ffmpeg will sometimes just fall back to software decode; if that happens you will see much higher CPU usage, and often ffmpeg will spawn a number of threads to use all cores in your system. Your user should be a member of the "video" group in /etc/group to access the devices without sudo. Log snippet of that section below:
Check that the output file contains valid video data, for example by playing it with vlc.

If all this is working, try doing longer decodes in parallel; e.g. if you have 3 cams, run the ffmpeg command for each of them in a separate window and increase the time. What happens to me is that at some point ffmpeg will start reporting "resource not available/busy" or similar, and rebooting will make it work for a while again.

You can also check what codecs are supported by each of the interfaces /dev/video[012].

You can monitor the state of kernel development at https://patchwork.kernel.org/project/linux-rockchip/list/ Most of the work on this is being done by Andrzej Pietrasiewicz. My suggestion is to monitor both the ffmpeg github and the kernel commits/patches, find out when they rebase ffmpeg, pull that version, install the matching kernel plus headers, and retest.

I have all the frigate docker files already created. I basically created a new set of dockerfiles with an arch of aarch64rockchip and added those to the Makefile. I'll upload them to my github at some point; I see little point in a pull request since Rockchip is a niche platform with not many users in Home Assistant or frigate, and it does not currently work reliably for me anyway.

I have been trying to get this working for some time now; at kernel 5.4.* there were a bunch of kernel patches you had to apply. Nothing worked for me then, and often FFmpeg complained about the pixel format. There were some people on the Armbian forums who claimed to have it working, but I had my doubts; maybe it was wishful thinking and ffmpeg was really using software decode. Most of the effort around this is for video playback, so people can play 1080p and 2/4k videos on the desktop and in Kodi. There is little information about straight decoding to a pipe like frigate does, so in your research ignore stuff to do with patched libva etc.
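For that codec check, one way to do it (assuming v4l-utils is installed; this is my sketch, not the command from the original comment) is with v4l2-ctl:

# List the coded formats each decoder node accepts; a stateless VPU decoder
# should report H264/HEVC/VP8 etc. on its output (coded) queue.
for dev in /dev/video0 /dev/video1 /dev/video2; do
  echo "== $dev =="
  v4l2-ctl -d "$dev" --list-formats-out
done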
@spattinson Hi! Wow, thanks for the very detailed explanation. I will burn Armbian Focal then and choose a newer kernel to test. By the way, there is no need for --enable-rkmpp in the ffmpeg compilation then?

On the other hand, I know the RK3328 is way less powerful than the RK3399, but it is the only thing I have in my hands, and like I said, I can't afford to buy another SBC right now (and for some time). Besides, I will use only one camera for this project (an Arducam UVC USB camera, 2 MP, with night vision, IR-cut, etc.).

One could ask "why not just use copy in decode/encode to use less CPU?". I need to transcode to use HLS, so it will be easier to show the stream via the web if needed (apart from Frigate). Also, this camera's main FHD stream is MJPEG (which, as far as I know, is not supported in HLS). Plus, since this SBC will also be running Django and other stuff (OpenCV), I want to keep all video processing off the CPU (anything done in video processing on this CPU has incredible lag anyway, which is why I am trying to make hwaccel work).

Again, thanks. I will post once I have done the tests.
Rkmpp is for the legacy kernel; it is not used on mainline. The legacy kernel with rkmpp did not work for me with frigate: it would sort of work, then crap out, needing a reboot before ffmpeg would work again. Rockchip focused on gstreamer for that, and if you can use gstreamer it may work with very low CPU use; I got 5% CPU use on my system for one stream. The legacy Linux images that FriendlyARM released have a gstreamer demo vid that plays back great. See https://wiki.friendlyarm.com/wiki/index.php/NanoPC-T4
Weird, after compiling FFmpeg like you told me, I get this error when running ffmpeg:
OK, I recompiled again, changing some options in the configure step, and it works. I did the test with:

ffmpeg -benchmark -loglevel debug -hwaccel drm -rtsp_transport udp -i rtsp://XXXX:[email protected]:554/onvif1 -t 15 -pix_fmt yuv420p -f rawvideo out.yuv
Does this mean hwaccel is not working? "Failed setup for format drm_prime: hwaccel initialisation returned error". During the test the CPU was at ~129%. The VLC test of the output file just shows the video image but it "freezes".
It failed to find a suitable hardware decoder and did a software decode instead. Are there any options you can change on your camera? I am confused by the "required mem2mem" error: v4l2m2m is the stateful decoding used on the Raspberry Pi etc., and if you used the same ffmpeg configure options I posted, ffmpeg should not support m2m. Note that the same section of the debug output in my reply has no mention of "missing require mem2mem".
You could try leaving out "-pix_fmt yuv420p"; it seems your camera produces yuvj420p, so just let it output the same pixel format as the input?
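In other words, something like the earlier benchmark command with the pixel format option dropped (still just a sketch with placeholder credentials):

# Hypothetical variant: keep the camera's native pixel format instead of
# forcing yuv420p, so ffmpeg does not add a format conversion.
ffmpeg -benchmark -loglevel debug -hwaccel drm -rtsp_transport udp \
  -i "rtsp://user:pass@camera:554/onvif1" -t 15 -f rawvideo out.yuv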
My camera has two input methods (https://www.arducam.com/product/b0205-arducam-1080p-day-night-vision-usb-camera-module-for-computer-2mp-automatic-ir-cut-switching-all-day-image-usb2-0-webcam-board-with-ir-leds/):
My intention is to use the second, since the first only provides 5 FPS (at 1920x1080) and the second 30 FPS (at 1920x1080). That is why I use -input_format mjpeg -pixel_format mjpeg with this camera module:
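A capture command of that kind might look like the following (device path, frame size, and output are illustrative, not the exact command that was posted):

# Hypothetical: grab the camera's MJPEG stream at 1080p/30fps via v4l2
# and store it without re-encoding.
ffmpeg -f v4l2 -input_format mjpeg -framerate 30 -video_size 1920x1080 \
  -i /dev/video0 -c:v copy out.mkv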
About FFmpeg: I used the options you told me, and it gave me the error I posted before (I will try again right now). I used the common configure options that Ubuntu's FFmpeg uses, plus the ones you posted.
I compiled FFmpeg again with the configure options you gave me and the result is the same (I also removed -pix_fmt yuv420p):
I forgot to mention that in the RTSP test I am testing another IP camera that already supports RTSP (which uses yuv420p anyway). Testing with the USB camera module (the one detailed in the other post):
By default, the camera uses YUYV. With this test (which uses 5 FPS), CPU usage is low (I can't tell if it is using hwaccel). Now, testing with the MJPEG format, CPU usage goes between 100% and 50%, but the capture is really slow:
Hi, I did it all again, installing Armbian from scratch with the kernel as suggested. I tested, but I can't tell if it is using hwaccel; the debug output is quite different. CPU use during the test is about 20%. I did this test with:
Now, selecting the MJPEG format (the one I need, because it allows 30 fps at FHD), the CPU is at 100%, so I guess here it is not using hwaccel at all. Maybe I am using the wrong parameters for ffmpeg. Attaching the log from the YUYV test.
Hi, thank you for your work! Installed the recommended Debian 10 legacy kernel 4.4
Found my way to this closed pull request looking for a way to enable hardware acceleration for ffmpeg on my Rock 5B...