Originally, this code encoded camera or directory images to H.264 and simply sent those packets via ZMQ to a subscriber. I have since enabled RTP transport, so the stream can be played with ffplay and VLC.
`encode_video_fromdir` reads images from a directory and sends them to an RTP host and port. An SDP file is written and needs to be used by the client (it does not change unless you change parameters). Usage is

```
./build/encode_video_fromdir ~/Downloads/images/ jpeg 127.0.0.1 5006
```
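For reference, the SDP file can be produced with libavformat's `av_sdp_create()`. A minimal sketch, assuming `ofmt_ctx` is the configured RTP output context (the helper name and output path are illustrative, not necessarily how this code does it):

```cpp
extern "C" {
#include <libavformat/avformat.h>
}
#include <cstdio>
#include <cstring>

// Write the session description for an already-configured RTP muxer context.
static void write_sdp(AVFormatContext *ofmt_ctx, const char *path) {
    char sdp[4096];
    AVFormatContext *ctxs[] = {ofmt_ctx};
    if (av_sdp_create(ctxs, 1, sdp, sizeof(sdp)) == 0) {
        FILE *f = std::fopen(path, "w");  // e.g. "test.sdp"
        if (f) {
            std::fwrite(sdp, 1, std::strlen(sdp), f);
            std::fclose(f);
        }
    }
}
```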
Beware that an even-numbered RTP port is necessary, otherwise VLC will not receive packets. This is because the live555 library VLC uses discards the last bit of the port number, so the port gets changed when it is odd (wtf).
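This matches the RTP convention of using an even port for RTP and the next odd port for RTCP, which is presumably why live555 masks the bit. In effect (my reading of the observed behaviour, not a documented live555 API):

```cpp
// What effectively happens on the receiving side when you pick an odd port:
unsigned short requested = 5007;
unsigned short effective = requested & ~1;  // == 5006, the low bit is dropped
```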
VLC can play the stream with

```
/Applications/VLC.app/Contents/MacOS/VLC -vvv test.sdp
```
On iOS, you can also use the VLC app and then follow these steps:

- Serve the SDP file over HTTP somehow – there is no way to open a file on the device itself. For test purposes you can run

  ```
  python3 -m http.server
  ```

  from the directory where the SDP file is located
- Open the VLC app, go to Network, then enter

  ```
  http://<host ip>:8000/test.sdp
  ```

  and tap Open Network Stream
- Start the stream on the host with

  ```
  ./build/encode_video_fromdir ~/Downloads/images/ jpeg <ios ip> 5006
  ```

  or similar
- The stream should now appear
This streaming process has a delay of at least 0.5 s, which I could not reduce, even with `--network-caching=0`. The lowest-latency invocation I have found is

```
ffplay -probesize 32 -analyzeduration 0 -fflags nobuffer -fflags discardcorrupt -flags low_delay -sync ext -framedrop -avioflags direct -protocol_whitelist "file,udp,rtp" test.sdp
```

and even that has about 200 ms of delay.
Unfortunately this is a bit shitty because there is no CMake support for the FFmpeg libraries. I pilfered a CMake script for finding FFmpeg from VTK (I think), but it does not include a bunch of library dependencies (not sure if they were forgotten or are not necessary for certain versions of FFmpeg), so I hacked them in there until it worked. Basically you need most (but strangely not all) of what is outlined here: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
Depending on which codecs you enabled during the FFmpeg install (`--enable-libtheora`, `--enable-libvpx`, `--enable-libx264`), you need to adapt the `FindFFMPEG.cmake` script and add to `libdeps` (see the sketch after this list):

- Theora: `theora;theoraenc;vorbis;vorbisenc`
- x264: `x264`
- vpx: `vpx`
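For example, the hacked-in additions could look roughly like this (a sketch only; the pilfered script may structure the list differently):

```cmake
# Additions to the libdeps list in FindFFMPEG.cmake, matching the codecs
# enabled in your FFmpeg build (drop the lines you do not need):
list(APPEND libdeps x264)                               # --enable-libx264
list(APPEND libdeps vpx)                                # --enable-libvpx
list(APPEND libdeps theora theoraenc vorbis vorbisenc)  # --enable-libtheora
```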
Theora I actually could not get to work because libavcodec complains that Configuration is missing or something. Not sure if I accidentally fixed this by now.

VP9 packetization for RTP transport is experimental, so you need

```cpp
this->ofmt_ctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
```

However, VLC does not seem to support that, so the point is moot, unless you can use ffplay or write your own receiver with libavcodec.
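For context, a sketch of where that flag sits in the muxer setup, assuming `ofmt_ctx` is the RTP output `AVFormatContext`; the flag has to be set before `avformat_write_header()`, since that is where the RTP muxer rejects experimental payloads:

```cpp
// Allow experimental features such as VP9 RTP packetization on the muxer.
// Must be set before avformat_write_header(), which finalizes the stream
// parameters and starts the RTP session.
ofmt_ctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
if (avformat_write_header(ofmt_ctx, nullptr) < 0) {
    // handle error
}
```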
When sender and receiver run on the same host, no streaming delay is observed, save for the time it takes to encode and decode. There is not a single frame of delay, so the method can be considered optimal on a lossless link.

Over a VPN connection via Azure to the same country (Germany), performance is still very good; the loss is low enough that a frame is only rarely missed.

Over a VPN connection via Azure US, on a much more delayed and lossy link, this is unusable: over UDP, seemingly too many packets get lost to decode even a single frame in time. We would need to try TCP for this, which libavcodec does not support for RTP. RTSP would have to be investigated.
Previously, I was sending plain H.264 packets over ZMQ, which can use TCP. Performance for this was tolerable for high-resolution images, with some artifacts and jankiness, but might improve with smaller image sizes. Special consideration must be taken when setting the `conflate`, `sendhwm` and `recvhwm` options on senders/receivers to avoid unintentional buffering and delays.
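A minimal sketch of how these options can be set with the libzmq C API (endpoint, HWM value and socket types are illustrative, and the option names above may be wrappers around `ZMQ_CONFLATE`, `ZMQ_SNDHWM` and `ZMQ_RCVHWM`):

```cpp
#include <zmq.h>

int main() {
    void *ctx = zmq_ctx_new();

    // Sender: keep at most one queued message so stale frames are dropped.
    void *pub = zmq_socket(ctx, ZMQ_PUB);
    int hwm = 1;
    zmq_setsockopt(pub, ZMQ_SNDHWM, &hwm, sizeof(hwm));
    zmq_bind(pub, "tcp://*:5006");

    // Receiver: conflate delivers only the most recent message
    // (note: incompatible with multipart messages). Set options
    // before connecting so they take effect on the connection.
    void *sub = zmq_socket(ctx, ZMQ_SUB);
    int conflate = 1;
    zmq_setsockopt(sub, ZMQ_CONFLATE, &conflate, sizeof(conflate));
    zmq_setsockopt(sub, ZMQ_RCVHWM, &hwm, sizeof(hwm));
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);
    zmq_connect(sub, "tcp://localhost:5006");

    // ... send/receive encoded frames ...

    zmq_close(sub);
    zmq_close(pub);
    zmq_ctx_destroy(ctx);
    return 0;
}
```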