Advanced Examples
- Rapid image capture
- Raw video from resizer
- Raw video from splitter
- Raw video from resizer with splitter component
- Encode / Decode from Stream - Image
- Encode / Decode from Stream - Video
- Static render overlay
- FFmpeg
- Video record with Circular Buffer
- Store motion vectors
- Motion detection
- Hardware accelerated resizing (ISP component)
- ImageFx component
For the FFmpeg examples, you will need to build the latest version of FFmpeg from source; do not install it from the Raspbian repositories, as those builds lack H.264 support.
A guide to building FFmpeg from source, including the H.264 codec, can be found here
Rapid image capture
By utilising the camera's video port, we are able to retrieve image frames at a much higher rate than via the conventional still port. Images captured via the video port are of lower quality and do not support EXIF metadata.
public async Task TakePictureFromVideoPort()
{
MMALCamera cam = MMALCamera.Instance;
using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
using (var splitter = new MMALSplitterComponent())
using (var imgEncoder = new MMALImageEncoder(continuousCapture: true))
using (var nullSink = new MMALNullSinkComponent())
{
cam.ConfigureCameraSettings();
var portConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, 90);
// Create our component pipeline.
imgEncoder.ConfigureOutputPort(portConfig, imgCaptureHandler);
cam.Camera.VideoPort.ConnectTo(splitter);
splitter.Outputs[0].ConnectTo(imgEncoder);
cam.Camera.PreviewPort.ConnectTo(nullSink);
// Camera warm up time
await Task.Delay(2000);
CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(15));
// Process images for 15 seconds.
await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
Raw video from resizer
Available in v0.5.1.
The resizer component can adjust the resolution of frames coming from the camera's video port, allowing you to record raw YUV420 frames. By passing VideoPort
as the generic argument to ConfigureOutputPort
, we tell MMALSharp to apply video port behaviour to the resizer's output. By default, the resizer uses still port behaviour, which would not work in this scenario.
public async Task RecordVideoDirectlyFromResizer()
{
MMALCamera cam = MMALCamera.Instance;
using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/tests", "raw"))
using (var resizer = new MMALResizerComponent())
using (var preview = new MMALVideoRenderer())
{
cam.ConfigureCameraSettings();
// Use the resizer to resize 1080p to 640x480.
var portConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 640, 480, 0, 0, 0, false, null);
resizer.ConfigureOutputPort<VideoPort>(0, portConfig, vidCaptureHandler);
// Create our component pipeline.
cam.Camera.VideoPort
.ConnectTo(resizer);
cam.Camera.PreviewPort
.ConnectTo(preview);
// Camera warm up time
await Task.Delay(2000);
CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(15));
// Record video for 15 seconds.
await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
Raw video from splitter
Available in v0.5.1.
The splitter component can also be used to record raw video frames. As in the resizer example above, we pass SplitterVideoPort
as the generic argument to the ConfigureOutputPort
method, instructing MMALSharp to apply video port behaviour to the splitter's output ports. If no type argument is passed, the splitter simply acts as a pass-through component.
public async Task RecordVideoDirectlyFromSplitter()
{
MMALCamera cam = MMALCamera.Instance;
using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/tests", "raw"))
using (var vidCaptureHandler2 = new VideoStreamCaptureHandler("/home/pi/videos/tests", "raw"))
using (var vidCaptureHandler3 = new VideoStreamCaptureHandler("/home/pi/videos/tests", "raw"))
using (var vidCaptureHandler4 = new VideoStreamCaptureHandler("/home/pi/videos/tests", "raw"))
using (var preview = new MMALVideoRenderer())
using (var splitter = new MMALSplitterComponent())
{
cam.ConfigureCameraSettings();
var splitterPortConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 0, 0, null);
// Create our component pipeline.
splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);
splitter.ConfigureOutputPort<SplitterVideoPort>(0, splitterPortConfig, vidCaptureHandler);
splitter.ConfigureOutputPort<SplitterVideoPort>(1, splitterPortConfig, vidCaptureHandler2);
splitter.ConfigureOutputPort<SplitterVideoPort>(2, splitterPortConfig, vidCaptureHandler3);
splitter.ConfigureOutputPort<SplitterVideoPort>(3, splitterPortConfig, vidCaptureHandler4);
cam.Camera.VideoPort.ConnectTo(splitter);
cam.Camera.PreviewPort.ConnectTo(preview);
// Camera warm up time
await Task.Delay(2000);
CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(15));
// Record video for 15 seconds.
await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
Raw video from resizer with splitter component
Available in v0.5.1.
You can combine the two previous examples by using the resizer component together with the splitter. By combining the components, you can resize up to four separate raw video streams, which adds a lot of flexibility to your application.
public async Task RecordVideoDirectlyFromResizerWithSplitterComponent()
{
MMALCamera cam = MMALCamera.Instance;
using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/tests", "raw"))
using (var vidCaptureHandler2 = new VideoStreamCaptureHandler("/home/pi/videos/tests", "raw"))
using (var vidCaptureHandler3 = new VideoStreamCaptureHandler("/home/pi/videos/tests", "raw"))
using (var vidCaptureHandler4 = new VideoStreamCaptureHandler("/home/pi/videos/tests", "raw"))
using (var preview = new MMALVideoRenderer())
using (var splitter = new MMALSplitterComponent())
using (var resizer = new MMALResizerComponent())
using (var resizer2 = new MMALResizerComponent())
using (var resizer3 = new MMALResizerComponent())
using (var resizer4 = new MMALResizerComponent())
{
cam.ConfigureCameraSettings();
var splitterPortConfig = new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420, 0, 0, null);
var portConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 1024, 768, 0, 0, 0, false, DateTime.Now.AddSeconds(20));
var portConfig2 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 800, 600, 0, 0, 0, false, DateTime.Now.AddSeconds(20));
var portConfig3 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 640, 480, 0, 0, 0, false, DateTime.Now.AddSeconds(15));
var portConfig4 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 320, 240, 0, 0, 0, false, DateTime.Now.AddSeconds(20));
// Configure each component's ports.
splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);
splitter.ConfigureOutputPort(0, splitterPortConfig, null);
splitter.ConfigureOutputPort(1, splitterPortConfig, null);
splitter.ConfigureOutputPort(2, splitterPortConfig, null);
splitter.ConfigureOutputPort(3, splitterPortConfig, null);
resizer.ConfigureOutputPort<VideoPort>(0, portConfig, vidCaptureHandler);
resizer2.ConfigureOutputPort<VideoPort>(0, portConfig2, vidCaptureHandler2);
resizer3.ConfigureOutputPort<VideoPort>(0, portConfig3, vidCaptureHandler3);
resizer4.ConfigureOutputPort<VideoPort>(0, portConfig4, vidCaptureHandler4);
// Create our component pipeline.
cam.Camera.VideoPort.ConnectTo(splitter);
splitter.Outputs[0].ConnectTo(resizer);
splitter.Outputs[1].ConnectTo(resizer2);
splitter.Outputs[2].ConnectTo(resizer3);
splitter.Outputs[3].ConnectTo(resizer4);
cam.Camera.PreviewPort.ConnectTo(preview);
// Camera warm up time
await Task.Delay(2000);
await cam.ProcessAsync(cam.Camera.VideoPort);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
Encode / Decode from Stream - Image
MMALSharp provides the ability to encode and decode images fed from streams. It supports the GIF, BMP, JPEG and PNG file formats, and decoding must target the following pixel formats:
- JPEG -> YUV420/422 (I420/I422)
- GIF -> RGB565 (RGB16)
- BMP/PNG -> RGBA
Note: please notice the <FileEncodeOutputPort>
generic argument when calling .ConfigureOutputPort
. This is an important addition, as this port type is able to handle the MMAL_EVENT_FORMAT_CHANGED
buffers that may be produced by the component.
Encode
public async Task EncodeFromFilestream()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/raw_jpeg_decode.raw"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "raw"))
using (var imgEncoder = new MMALImageEncoder())
{
var inputPortConfig = new MMALPortConfig(MMALEncoding.I420, null, 2560, 1920, 0, 0, 0, true, null);
var outputPortConfig = new MMALPortConfig(MMALEncoding.BMP, MMALEncoding.I420, 2560, 1920, 0, 0, 0, true, null);
// Create our component pipeline.
imgEncoder.ConfigureInputPort(inputPortConfig, inputCaptureHandler)
.ConfigureOutputPort<FileEncodeOutputPort>(0, outputPortConfig, outputCaptureHandler);
await standalone.ProcessAsync(imgEncoder);
}
// Only call when you no longer require the MMAL library, i.e. on app shutdown.
standalone.Cleanup();
}
Decode
public async Task DecodeFromFilestream()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/test.jpg"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "raw"))
using (var imgDecoder = new MMALImageDecoder())
{
var inputPortConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, 2560, 1920, 0, 0, 0, true, null);
var outputPortConfig = new MMALPortConfig(MMALEncoding.I420, null, 2560, 1920, 0, 0, 0, true, null);
// Create our component pipeline.
imgDecoder.ConfigureInputPort(inputPortConfig, inputCaptureHandler)
.ConfigureOutputPort<FileEncodeOutputPort>(0, outputPortConfig, outputCaptureHandler);
await standalone.ProcessAsync(imgDecoder);
}
// Only call when you no longer require the MMAL library, i.e. on app shutdown.
standalone.Cleanup();
}
Decode -> Encode (and vice-versa)
Available in v0.6.
You can also connect decoder and encoder components together and process them as a single operation.
public async Task DecodeThenEncodeImageFromFilestream()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/test.bmp"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpeg"))
using (var imgDecoder = new MMALImageDecoder())
using (var imgEncoder = new MMALImageEncoder())
{
var decoderInputPortConfig = new MMALPortConfig(MMALEncoding.BMP, MMALEncoding.RGB16, 0, 0, 0, 0, 0, true, null);
var decoderOutputPortConfig = new MMALPortConfig(MMALEncoding.RGBA, null, 640, 480, 0, 0, 0, true, null);
var encoderInputPortConfig = new MMALPortConfig(MMALEncoding.RGBA, null, 640, 480, 0, 0, 0, true, null);
var encoderOutputPortConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, 640, 480, 0, 0, 0, true, null);
// Create our component pipeline.
imgDecoder.ConfigureInputPort(decoderInputPortConfig, inputCaptureHandler)
.ConfigureOutputPort(0, decoderOutputPortConfig, null);
imgEncoder.ConfigureInputPort(encoderInputPortConfig, imgDecoder.Outputs[0], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputPortConfig, outputCaptureHandler);
imgDecoder.Outputs[0].ConnectTo(imgEncoder);
standalone.PrintPipeline(imgDecoder);
await standalone.ProcessAsync(imgDecoder);
}
// Only call when you no longer require the MMAL library, i.e. on app shutdown.
standalone.Cleanup();
}
Encode / Decode from Stream - Video
You can also encode and decode video files fed from streams in MMALSharp.
Encode
public async Task EncodeVideoFromFilestream()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/videos/decoded_rgb.raw"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "h264"))
using (var vidEncoder = new MMALVideoEncoder())
{
var inputPortConfig = new MMALPortConfig(MMALEncoding.RGB16, null, 1280, 720, 25, 0, 1300000, true, null);
var outputPortConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.RGB16, 1280, 720, 25, 0, 1300000, true, null);
// Create our component pipeline.
vidEncoder.ConfigureInputPort(inputPortConfig, inputCaptureHandler)
.ConfigureOutputPort<FileEncodeOutputPort>(0, outputPortConfig, outputCaptureHandler);
await standalone.ProcessAsync(vidEncoder);
}
// Only call when you no longer require the MMAL library, i.e. on app shutdown.
standalone.Cleanup();
}
Decode
public async Task DecodeVideoFromFilestream()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/videos/test.h264"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "raw"))
using (var vidDecoder = new MMALVideoDecoder())
{
var inputPortConfig = new MMALPortConfig(MMALEncoding.H264, null, 1280, 720, 25, 0, 1300000, true, null);
var outputPortConfig = new MMALPortConfig(MMALEncoding.RGB16, null, 1280, 720, 25, 0, 1300000, true, null);
// Create our component pipeline.
vidDecoder.ConfigureInputPort(inputPortConfig, inputCaptureHandler)
.ConfigureOutputPort<FileEncodeOutputPort>(0, outputPortConfig, outputCaptureHandler);
await standalone.ProcessAsync(vidDecoder);
}
// Only call when you no longer require the MMAL library, i.e. on app shutdown.
standalone.Cleanup();
}
Decode -> Encode (and vice-versa)
Available in v0.6.
Again, you can connect decoder and encoder components together and process them as a single operation.
public async Task DecodeThenEncodeVideoFromFilestream()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/test.h264"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new VideoStreamCaptureHandler("/home/pi/Videos", "h264"))
using (var imgDecoder = new MMALVideoDecoder())
using (var imgEncoder = new MMALVideoEncoder())
{
var decoderInputConfig = new MMALPortConfig(MMALEncoding.H264, null, 1280, 720, 25, 0, 0, true, null);
var decoderOutputConfig = new MMALPortConfig(MMALEncoding.I420, null, 1280, 720, 0, 0, 0, true, null);
var encoderInputConfig = new MMALPortConfig(MMALEncoding.I420, null, 1280, 720, 0, 0, 0, true, null);
var encoderOutputConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 1280, 720, 25, 0, 0, true, null);
imgDecoder.ConfigureInputPort(decoderInputConfig, inputCaptureHandler)
.ConfigureOutputPort(0, decoderOutputConfig, null);
imgEncoder.ConfigureInputPort(encoderInputConfig, imgDecoder.Outputs[0], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler);
imgDecoder.Outputs[0].ConnectTo(imgEncoder);
await standalone.ProcessAsync(imgDecoder);
}
// Only call when you no longer require the MMAL library, i.e. on app shutdown.
standalone.Cleanup();
}
Decode -> Splitter -> Encode (and vice-versa)
Available in v0.6.
The flexibility of the MMAL pipeline also allows you to add a splitter component into the mix. You can start with a decoder component, connect it to a splitter (giving you four outputs), and then attach an encoder to each splitter output port.
In the example below, we assume the input video file is H.264 encoded with a YUV420 pixel format and a resolution of 1280 x 720. The file is decoded, sent to the splitter component and fed to four individual encoder components, re-encoding to exactly the same format. You are free to experiment with the encodings and pixel formats.
public async Task DecodeThenEncodeVideoFromFilestreamWithSplitter()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/test.h264"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new VideoStreamCaptureHandler("/home/pi/Videos", "h264"))
using (var outputCaptureHandler2 = new VideoStreamCaptureHandler("/home/pi/Videos", "h264"))
using (var outputCaptureHandler3 = new VideoStreamCaptureHandler("/home/pi/Videos", "h264"))
using (var outputCaptureHandler4 = new VideoStreamCaptureHandler("/home/pi/Videos", "h264"))
using (var splitter = new MMALSplitterComponent())
using (var imgDecoder = new MMALVideoDecoder())
using (var imgEncoder = new MMALVideoEncoder())
using (var imgEncoder2 = new MMALVideoEncoder())
using (var imgEncoder3 = new MMALVideoEncoder())
using (var imgEncoder4 = new MMALVideoEncoder())
{
var decoderInputConfig = new MMALPortConfig(MMALEncoding.H264, null, 1280, 720, 25, 0, 0, true, null);
var decoderOutputConfig = new MMALPortConfig(MMALEncoding.I420, null, 1280, 720, 0, 0, 0, true, null);
var splitterInputConfig = new MMALPortConfig(MMALEncoding.I420, null, 0, 0, 25, 0, 0, true, null);
var splitterOutputConfig = new MMALPortConfig(null, null, 0, 0, 0, 0, 0, true, null);
var encoderInputConfig = new MMALPortConfig(MMALEncoding.I420, null, 1280, 720, 0, 0, 0, true, null);
var encoderOutputConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 1280, 720, 25, 0, 0, true, null);
imgDecoder.ConfigureInputPort(decoderInputConfig, inputCaptureHandler)
.ConfigureOutputPort(0, decoderOutputConfig, null);
splitter.ConfigureInputPort(splitterInputConfig, null);
// Configure all four splitter outputs; each will be connected to an encoder.
splitter.ConfigureOutputPort(0, splitterOutputConfig, null);
splitter.ConfigureOutputPort(1, splitterOutputConfig, null);
splitter.ConfigureOutputPort(2, splitterOutputConfig, null);
splitter.ConfigureOutputPort(3, splitterOutputConfig, null);
imgEncoder.ConfigureInputPort(encoderInputConfig, splitter.Outputs[0], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler);
imgEncoder2.ConfigureInputPort(encoderInputConfig, splitter.Outputs[1], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler2);
imgEncoder3.ConfigureInputPort(encoderInputConfig, splitter.Outputs[2], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler3);
imgEncoder4.ConfigureInputPort(encoderInputConfig, splitter.Outputs[3], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler4);
imgDecoder.Outputs[0].ConnectTo(splitter);
splitter.Outputs[0].ConnectTo(imgEncoder);
splitter.Outputs[1].ConnectTo(imgEncoder2);
splitter.Outputs[2].ConnectTo(imgEncoder3);
splitter.Outputs[3].ConnectTo(imgEncoder4);
await standalone.ProcessAsync(imgDecoder);
}
// Only call when you no longer require the MMAL library, i.e. on app shutdown.
standalone.Cleanup();
}
Decode -> Splitter -> Resizer -> Encode (and vice-versa)
Available in v0.6.
In addition to the above, we can also introduce a resizer component.
In the example below, we assume the input video file is H.264 encoded with a YUV420 pixel format and a resolution of 1280 x 720. The file is decoded, sent to the splitter component and fed to four individual encoder components, one of them via a resizer component which resizes its stream to 640 x 480. We then re-encode to exactly the same format. You are free to experiment with the encodings and pixel formats.
public async Task DecodeThenEncodeVideoFromFilestreamWithSplitterAndResizer()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/test.h264"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new VideoStreamCaptureHandler("/home/pi/Videos", "h264"))
using (var outputCaptureHandler2 = new VideoStreamCaptureHandler("/home/pi/Videos", "h264"))
using (var outputCaptureHandler3 = new VideoStreamCaptureHandler("/home/pi/Videos", "h264"))
using (var outputCaptureHandler4 = new VideoStreamCaptureHandler("/home/pi/Videos", "h264"))
using (var splitter = new MMALSplitterComponent())
using (var imgDecoder = new MMALVideoDecoder())
using (var imgEncoder = new MMALVideoEncoder())
using (var imgEncoder2 = new MMALVideoEncoder())
using (var imgEncoder3 = new MMALVideoEncoder())
using (var imgEncoder4 = new MMALVideoEncoder())
using (var resizer = new MMALResizerComponent())
{
var decoderInputConfig = new MMALPortConfig(MMALEncoding.H264, null, 1280, 720, 25, 0, 0, true, null);
var decoderOutputConfig = new MMALPortConfig(MMALEncoding.I420, null, 1280, 720, 0, 0, 0, true, null);
var splitterInputConfig = new MMALPortConfig(MMALEncoding.I420, null, 0, 0, 25, 0, 0, true, null);
var splitterOutputConfig = new MMALPortConfig(null, null, 0, 0, 0, 0, 0, true, null);
var resizerInputConfig = new MMALPortConfig(MMALEncoding.I420, null, 1280, 720, 25, 0, 0, true, null);
var resizerOutputConfig = new MMALPortConfig(null, null, 640, 480, 0, 0, 0, true, null);
var encoderInputConfig = new MMALPortConfig(MMALEncoding.I420, null, 1280, 720, 0, 0, 0, true, null);
var encoderOutputConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 1280, 720, 25, 0, 0, true, null);
imgDecoder.ConfigureInputPort(decoderInputConfig, inputCaptureHandler)
.ConfigureOutputPort(0, decoderOutputConfig, null);
splitter.ConfigureInputPort(splitterInputConfig, null);
// Configure all four splitter outputs; three feed encoders directly and one feeds the resizer.
splitter.ConfigureOutputPort(0, splitterOutputConfig, null);
splitter.ConfigureOutputPort(1, splitterOutputConfig, null);
splitter.ConfigureOutputPort(2, splitterOutputConfig, null);
splitter.ConfigureOutputPort(3, splitterOutputConfig, null);
imgEncoder.ConfigureInputPort(encoderInputConfig, splitter.Outputs[0], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler);
imgEncoder2.ConfigureInputPort(encoderInputConfig, splitter.Outputs[1], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler2);
imgEncoder3.ConfigureInputPort(encoderInputConfig, splitter.Outputs[2], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler3);
resizer.ConfigureInputPort(resizerInputConfig, splitter.Outputs[3], null)
.ConfigureOutputPort(0, resizerOutputConfig, null);
imgEncoder4.ConfigureInputPort(encoderInputConfig, resizer.Outputs[0], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler4);
imgDecoder.Outputs[0].ConnectTo(splitter);
splitter.Outputs[0].ConnectTo(imgEncoder);
splitter.Outputs[1].ConnectTo(imgEncoder2);
splitter.Outputs[2].ConnectTo(imgEncoder3);
splitter.Outputs[3].ConnectTo(resizer);
resizer.Outputs[0].ConnectTo(imgEncoder4);
await standalone.ProcessAsync(imgDecoder);
}
// Only call when you no longer require the MMAL library, i.e. on app shutdown.
standalone.Cleanup();
}
Static render overlay
MMAL allows you to create additional video preview renderers which sit alongside the usual null sink or video renderers shown in previous examples. These additional renderers let you overlay static content onto the display your Pi is connected to.
The overlay renderers only work with unencoded images, which must use one of the following pixel formats:
- YUV420 (I420)
- RGB888 (RGB24)
- RGBA
- BGR888 (BGR24)
- BGRA
An easy way to get an unencoded image for use with the overlay renderers is to use the raw image capture functionality described in this example, setting the MMALCameraConfig.StillEncoding
and MMALCameraConfig.StillSubFormat
properties to one of the accepted pixel formats. Once you have captured your test frame, follow the example below to overlay your image:
public async Task StaticOverlayExample()
{
MMALCamera cam = MMALCamera.Instance;
PreviewConfiguration previewConfig = new PreviewConfiguration
{
FullScreen = false,
PreviewWindow = new Rectangle(160, 0, 640, 480),
Layer = 2,
Opacity = 1
};
using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
using (var imgEncoder = new MMALImageEncoder())
using (var video = new MMALVideoRenderer(previewConfig))
{
cam.ConfigureCameraSettings();
video.ConfigureRenderer();
PreviewOverlayConfiguration overlayConfig = new PreviewOverlayConfiguration
{
FullScreen = true,
PreviewWindow = new Rectangle(50, 0, 640, 480),
Layer = 1,
Resolution = new Resolution(640, 480),
Encoding = MMALEncoding.I420,
Opacity = 255
};
var overlay = cam.AddOverlay(video, overlayConfig, File.ReadAllBytes("/home/pi/test1.raw"));
overlay.ConfigureRenderer();
overlay.UpdateOverlay();
var portConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, 90);
//Create our component pipeline.
imgEncoder.ConfigureOutputPort(portConfig, imgCaptureHandler);
cam.Camera.StillPort.ConnectTo(imgEncoder);
cam.Camera.PreviewPort.ConnectTo(video);
cam.PrintPipeline();
await cam.ProcessAsync(cam.Camera.StillPort);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
In this example, we are using an unencoded YUV420 image and configuring the renderer using the settings in overlayConfig.
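As an aside, the raw YUV420 test frame used for the overlay above can be produced via the raw capture functionality. The following is a minimal sketch, assuming MMALSharp's TakeRawPicture helper; the file path is an example.

```csharp
// Minimal sketch: configure the camera's still port for unencoded YUV420
// output, then capture a single raw frame to disk for later use as an overlay.
public async Task CaptureRawOverlayFrame()
{
    MMALCamera cam = MMALCamera.Instance;

    // Output unencoded I420 frames from the still port (an accepted overlay format).
    MMALCameraConfig.StillEncoding = MMALEncoding.I420;
    MMALCameraConfig.StillSubFormat = MMALEncoding.I420;

    // Example path; the handler writes a ".raw" file you can feed to AddOverlay.
    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "raw"))
    {
        await cam.TakeRawPicture(imgCaptureHandler);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
```

Note that the captured frame's resolution must match the Resolution you specify in PreviewOverlayConfiguration.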
FFmpeg
The example below uses FFmpeg to stream H.264 video to an RTMP server:
public async Task FFmpegRTMPStreaming()
{
MMALCamera cam = MMALCamera.Instance;
// An RTMP server needs to be listening on the address specified in the capture handler. I have used the Nginx RTMP module for testing.
using (var ffCaptureHandler = FFmpegCaptureHandler.RTMPStreamer("mystream", "rtmp://192.168.1.91:6767/live"))
using (var vidEncoder = new MMALVideoEncoder())
using (var renderer = new MMALVideoRenderer())
{
cam.ConfigureCameraSettings();
var portConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 10, MMALVideoEncoder.MaxBitrateLevel4, null);
// Create our component pipeline. Here we are using the H.264 standard with a YUV420 pixel format, at a maximum bitrate of 25Mbit/s (H.264 level 4).
vidEncoder.ConfigureOutputPort(portConfig, ffCaptureHandler);
cam.Camera.VideoPort.ConnectTo(vidEncoder);
cam.Camera.PreviewPort.ConnectTo(renderer);
// Camera warm up time
await Task.Delay(2000);
var cts = new CancellationTokenSource(TimeSpan.FromMinutes(3));
// Take video for 3 minutes.
await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
Note:
If you intend to use the YouTube live streaming service, you will need to create the method below to return your own FFmpegCaptureHandler
, and use it in place of the built-in FFmpegCaptureHandler.RTMPStreamer
seen in the example above. The reason is that YouTube streaming requires your RTMP stream to contain an audio track, otherwise it won't work. Internally, our RTMP streaming method does not include an audio stream, and at the current time we don't intend to change this for that specific purpose.
public static FFmpegCaptureHandler RTMPStreamerWithAudio(string streamName, string streamUrl)
=> new FFmpegCaptureHandler($"-re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv -metadata streamName={streamName} {streamUrl}");
Please see here for an in-depth discussion of the issue.
This is a useful capture mode, as it pushes the elementary H.264 stream into an AVI container which can be opened by media players such as VLC.
public async Task FFmpegRawVideoConvert()
{
MMALCamera cam = MMALCamera.Instance;
using (var ffCaptureHandler = FFmpegCaptureHandler.RawVideoToAvi("/home/pi/videos/", "testing1234"))
using (var vidEncoder = new MMALVideoEncoder())
using (var renderer = new MMALVideoRenderer())
{
cam.ConfigureCameraSettings();
var portConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 10, MMALVideoEncoder.MaxBitrateLevel4, null);
// Create our component pipeline. Here we are using the H.264 standard with a YUV420 pixel format, at a maximum bitrate of 25Mbit/s (H.264 level 4).
vidEncoder.ConfigureOutputPort(portConfig, ffCaptureHandler);
cam.Camera.VideoPort.ConnectTo(vidEncoder);
cam.Camera.PreviewPort.ConnectTo(renderer);
// Camera warm up time
await Task.Delay(2000);
var cts = new CancellationTokenSource(TimeSpan.FromMinutes(3));
// Take video for 3 minutes.
await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
This example will push all images processed by an image capture handler into a playable video.
public async Task FFmpegImagesToVideo()
{
MMALCamera cam = MMALCamera.Instance;
// This example will take an image every 10 seconds for 4 hours
using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
{
var cts = new CancellationTokenSource(TimeSpan.FromHours(4));
var tl = new Timelapse { Mode = TimelapseMode.Second, CancellationToken = cts.Token, Value = 10 };
await cam.TakePictureTimelapse(imgCaptureHandler, MMALEncoding.JPEG, MMALEncoding.I420, tl);
// Process all images captured into a video at 2fps.
imgCaptureHandler.ImagesToVideo("/home/pi/images/", 2);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
Video record with Circular Buffer
A CircularBufferCaptureHandler
class was added in v0.6 of MMALSharp. This capture handler records image frames to a circular buffer, overwriting the oldest contents once the buffer is full. It supports both MJPEG and H.264 encodings, with the latter having some slight caveats. An example of how to use the capture handler can be seen below:
public async Task VideoRecordCircularBufferMJPEG()
{
MMALCamera cam = MMALCamera.Instance;
MMALCameraConfig.VideoFramerate = new MMAL_RATIONAL_T(25, 1);
// Using a 6MB circular buffer.
using (var vidCaptureHandler = new CircularBufferCaptureHandler(6291456, "/home/pi/videos/", "mjpeg"))
using (var vidEncoder = new MMALVideoEncoder())
using (var renderer = new MMALVideoRenderer())
{
cam.ConfigureCameraSettings();
var portConfig = new MMALPortConfig(MMALEncoding.MJPEG, MMALEncoding.I420, 0, MMALVideoEncoder.MaxBitrateMJPEG, null);
// Create our component pipeline. Here we are using MJPEG encoding with a YUV420 pixel format, capped at the maximum MJPEG bitrate.
vidEncoder.ConfigureOutputPort(portConfig, vidCaptureHandler);
cam.Camera.VideoPort.ConnectTo(vidEncoder);
cam.Camera.PreviewPort.ConnectTo(renderer);
// Camera warm up time
await Task.Delay(2000);
var fireOffRecordingTask = Task.Run(async () =>
{
// Wait for 10 seconds before instructing the capture handler to store image frames to the FileStream.
await Task.Delay(10000);
vidCaptureHandler.StartRecording();
// Pause this for another 10 seconds. The capture handler will store image frames to the FileStream during this delay period.
await Task.Delay(10000);
// We now tell the capture handler to stop recording for the next 10 seconds.
vidCaptureHandler.StopRecording();
await Task.Delay(10000);
// Continue storing image frames again until camera task stops.
vidCaptureHandler.StartRecording();
});
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(60));
var camTask = cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
await Task.WhenAll(fireOffRecordingTask, camTask);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
This example is rather trivial, but it demonstrates how to instruct the capture handler to start and stop recording to the FileStream. In an actual application, you would likely want to set up the capture handler as a static object and handle its disposal elsewhere once you're finished with it.
I mentioned earlier that there are some slight caveats when using H.264. H.264 recordings require a key frame at the beginning of the video stream in order for it to be decoded. Because of this, MMALCameraConfig.InlineHeaders
needs to be set to true so that key frames are generated at regular intervals throughout your recording. The CircularBufferCaptureHandler
will only start writing to your FileStream once it has received a key frame; this key frame is stored at the beginning of your video file, followed by the data held in the circular buffer.
A new method has been added to the VideoEncoderComponent
called RequestIFrame
which, as the name suggests, allows you to request that the video encoder immediately generate a new key frame. This is optional, as key frames are generated at regular intervals by the video encoder when the MMALCameraConfig.InlineHeaders
config is set to true.
An example using H.264 encoding can be seen below:
public async Task VideoRecordCircularBufferH264()
{
MMALCamera cam = MMALCamera.Instance;
MMALCameraConfig.InlineHeaders = true;
MMALCameraConfig.VideoFramerate = new MMAL_RATIONAL_T(25, 1);
// Using a 6MB circular buffer.
using (var vidCaptureHandler = new CircularBufferCaptureHandler(6291456, "/home/pi/videos/", "h264"))
using (var vidEncoder = new MMALVideoEncoder())
using (var renderer = new MMALVideoRenderer())
{
cam.ConfigureCameraSettings();
var portConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 0, MMALVideoEncoder.MaxBitrateLevel4, null);
// Create our component pipeline. Here we are using H.264 encoding with a YUV420 pixel format. The video will be taken at 25Mb/s.
vidEncoder.ConfigureOutputPort(portConfig, vidCaptureHandler);
cam.Camera.VideoPort.ConnectTo(vidEncoder);
cam.Camera.PreviewPort.ConnectTo(renderer);
// Camera warm up time
await Task.Delay(2000);
var fireOffRecordingTask = Task.Run(async () =>
{
// Wait for 10 seconds before instructing the capture handler to store image frames to the FileStream.
await Task.Delay(10000);
vidCaptureHandler.StartRecording();
// (Optionally) Request a key frame to be immediately generated by the video encoder.
vidEncoder.RequestIFrame();
// Pause this for another 10 seconds. The capture handler will store image frames to the FileStream during this delay period.
await Task.Delay(10000);
// We now tell the capture handler to stop recording again for 10 seconds.
vidCaptureHandler.StopRecording();
await Task.Delay(10000);
// Continue storing image frames again until camera task stops.
vidCaptureHandler.StartRecording();
vidEncoder.RequestIFrame();
});
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(60));
var camTask = cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
await Task.WhenAll(fireOffRecordingTask, camTask);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
Starting with v0.6 of MMALSharp, users can store the motion vector data generated by a video encoder component when using H.264 encoding. The MMALCameraConfig.InlineMotionVectors
config property must be set to true to instruct the video encoder to generate motion vector data. Additionally, the MMALPortConfig
constructor accepts a parameter called storeMotionVectors
which should also be set to true. Finally, call InitialiseMotionStore
on any capture handler implementing IMotionVectorCaptureHandler
so the capture handler knows which stream you wish to store the data to.
An example of how to do this can be seen below:
public async Task StoreMotionVectors()
{
MMALCamera cam = MMALCamera.Instance;
MMALCameraConfig.VideoFramerate = new MMAL_RATIONAL_T(25, 1);
// Motion vector data will not be generated unless this config property is set to true.
MMALCameraConfig.InlineMotionVectors = true;
// A new file called motion.dat will be created containing the motion vector data.
using (var motionVectorStore = File.Create("/home/pi/videos/motion.dat"))
using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "h264"))
using (var vidEncoder = new MMALVideoEncoder())
using (var renderer = new MMALVideoRenderer())
{
cam.ConfigureCameraSettings();
// The storeMotionVectors parameter must be set to true to instruct the port to store motion vector data.
var portConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 0, MMALVideoEncoder.MaxBitrateLevel4, null, storeMotionVectors: true);
// Create our component pipeline. Here we are using H.264 encoding with a YUV420 pixel format. The video will be taken at 25Mb/s.
vidEncoder.ConfigureOutputPort(portConfig, vidCaptureHandler);
cam.Camera.VideoPort.ConnectTo(vidEncoder);
cam.Camera.PreviewPort.ConnectTo(renderer);
// Camera warm up time
await Task.Delay(2000);
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(20));
// Initialise the motion vector stream.
vidCaptureHandler.InitialiseMotionStore(motionVectorStore);
await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
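Once captured, the motion.dat file can be inspected offline. The sketch below is not part of MMALSharp; the assumed record layout (one 4-byte entry per macroblock: signed x/y vector components plus a 16-bit SAD value, with one extra macroblock column per row) follows the Raspberry Pi firmware's documented inline motion vector format, so verify it against your own captures.

```csharp
using System;
using System.IO;

public static class MotionVectorDump
{
    // Reads the first frame's worth of macroblock records from a motion.dat
    // file and prints any non-zero motion vectors.
    public static void Dump(string path, int width, int height)
    {
        int cols = width / 16 + 1; // the encoder emits one extra column per row
        int rows = height / 16;

        using (var reader = new BinaryReader(File.OpenRead(path)))
        {
            for (int i = 0; i < cols * rows; i++)
            {
                sbyte x = reader.ReadSByte();     // horizontal vector component
                sbyte y = reader.ReadSByte();     // vertical vector component
                ushort sad = reader.ReadUInt16(); // sum of absolute differences

                if (x != 0 || y != 0)
                    Console.WriteLine($"Block {i}: vector ({x},{y}), SAD {sad}");
            }
        }
    }
}
```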
A new interface IMotionCaptureHandler
has been added, with initial support for frame difference motion detection provided by the CircularBufferCaptureHandler
class. This motion detection technique works by taking an initial test frame which is used as the 'background' image; subsequent frames are compared against it and checked for differences above a specified threshold value.
The MMALCamera
API has a new method WithMotionDetection
which allows you to configure the motion detection behaviour, including callbacks for when motion is detected, and optionally when the specified record duration has passed.
The sensitivity of the frame difference technique is configured using the Threshold
property on the MotionConfig
class. Lower values indicate higher sensitivity, and local testing has found values between 120 and 150 to be suitable for indoor detection where lighting stays relatively stable. For outdoor CCTV-type scenarios, you may need to experiment with the threshold to reduce false positive detections.
It is important to note that internally, the frame difference technique is carried out on raw RGB24/RGB32/RGBA video frames. It will not work with H.264/MJPEG encoded or YUV pixel format frames.
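Conceptually, the check resembles the following sketch. This is illustrative only, not MMALSharp's internal implementation, and the minChangedPixels parameter is a hypothetical knob introduced here for the example:

```csharp
using System;

public static class FrameDifference
{
    // Compares raw RGB24 frame data byte-by-byte against the stored test
    // frame and reports motion when enough values differ by more than the
    // threshold. Lower thresholds flag more pixels, i.e. higher sensitivity.
    public static bool HasMotion(byte[] testFrame, byte[] currentFrame,
        int threshold, int minChangedPixels)
    {
        int changed = 0;

        for (int i = 0; i < testFrame.Length; i++)
        {
            if (Math.Abs(testFrame[i] - currentFrame[i]) > threshold)
            {
                changed++;
            }
        }

        return changed >= minChangedPixels;
    }
}
```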
An example of how to configure frame difference motion detection can be seen below:
public async Task DetectMotion()
{
MMALCamera cam = MMALCamera.Instance;
// When using H.264 encoding we require key frames to be generated for the Circular buffer capture handler.
MMALCameraConfig.InlineHeaders = true;
// Two capture handlers are being used here, one for motion detection and the other to record a H.264 stream.
using (var vidCaptureHandler = new CircularBufferCaptureHandler(4000000, "/home/pi/videos/detections", "h264"))
using (var motionCircularBufferCaptureHandler = new CircularBufferCaptureHandler(4000000, "/home/pi/videos/detections", "raw"))
using (var splitter = new MMALSplitterComponent())
using (var resizer = new MMALIspComponent())
using (var vidEncoder = new MMALVideoEncoder())
using (var renderer = new MMALVideoRenderer())
{
cam.ConfigureCameraSettings();
var splitterPortConfig = new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420, 0, 0, null);
var vidEncoderPortConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 0,
MMALVideoEncoder.MaxBitrateLevel4, null);
// The ISP resizer is being used for better performance. Frame difference motion detection will only work if using raw video data. Do not encode to H.264/MJPEG.
// Resizing to a smaller image may improve performance, but ensure that the width/height are multiples of 32 and 16 respectively to avoid cropping.
var resizerPortConfig = new MMALPortConfig(MMALEncoding.RGB24, MMALEncoding.RGB24, 640, 480, 0, 0, 0, false, null);
splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);
splitter.ConfigureOutputPort(0, splitterPortConfig, null);
splitter.ConfigureOutputPort(1, splitterPortConfig, null);
resizer.ConfigureOutputPort<VideoPort>(0, resizerPortConfig, motionCircularBufferCaptureHandler);
vidEncoder.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), splitter.Outputs[1], null);
vidEncoder.ConfigureOutputPort(vidEncoderPortConfig, vidCaptureHandler);
cam.Camera.VideoPort.ConnectTo(splitter);
cam.Camera.PreviewPort.ConnectTo(renderer);
splitter.Outputs[0].ConnectTo(resizer);
splitter.Outputs[1].ConnectTo(vidEncoder);
// Camera warm up time
await Task.Delay(2000);
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(60));
// Here we are instructing the capture handler to record for 10 seconds once motion has been detected. A threshold of 130 is used. Lower
// values indicate higher sensitivity. Suitable range for indoor detection between 120-150 with stable lighting conditions.
var motionConfig = new MotionConfig(TimeSpan.FromSeconds(10), 130);
await cam.WithMotionDetection(motionCircularBufferCaptureHandler, motionConfig,
() =>
{
// This callback will be invoked when motion has been detected.
// Stop motion detection while we are recording.
motionCircularBufferCaptureHandler.DisableMotionDetection();
// Optional, this will begin recording the raw video frames. Produces large video files which will need encoding afterwards.
motionCircularBufferCaptureHandler.StartRecording();
// Start recording our H.264 video.
vidCaptureHandler.StartRecording();
// (Optionally) Request a key frame to be immediately generated by the video encoder.
vidEncoder.RequestIFrame();
}, () =>
{
// This callback will be invoked when the record duration has passed.
// We want to re-enable the motion detection.
motionCircularBufferCaptureHandler.EnableMotionDetection();
// Stop recording on our capture handlers.
motionCircularBufferCaptureHandler.StopRecording();
vidCaptureHandler.StopRecording();
// Optionally create two new files for our next recording run.
vidCaptureHandler.Split();
motionCircularBufferCaptureHandler.Split();
})
.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
The ISP component wraps the Image Sensor Processor hardware block to offer hardware-accelerated format conversion and resizing. You can use the ISP component in both image stills and video recording pipelines, and you can also connect it to the splitter component should you wish.
Note: The ISP component exposes two output ports. At this stage, MMALSharp only supports port 361 (output 0) due to an outstanding issue whereby the native callback method is not called when port 362 is enabled. This is under investigation in ticket #131.
An example of how to use the ISP component is below:
public async Task TakePictureIsp()
{
MMALCamera cam = MMALCamera.Instance;
using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/test1.bmp"))
using (var imgEncoder = new MMALImageEncoder())
using (var ispComponent = new MMALIspComponent())
using (var nullSink = new MMALNullSinkComponent())
{
cam.ConfigureCameraSettings();
var imgEncoderPortConfig = new MMALPortConfig(MMALEncoding.BMP, MMALEncoding.RGB16, 90);
var ispComponentPortConfig = new MMALPortConfig(MMALEncoding.RGB16, MMALEncoding.RGB16, 640, 480, 0, 0, 0, true, null);
// Create our component pipeline.
ispComponent.ConfigureOutputPort(0, ispComponentPortConfig, null);
imgEncoder.ConfigureOutputPort(imgEncoderPortConfig, imgCaptureHandler);
cam.Camera.StillPort.ConnectTo(ispComponent);
cam.Camera.PreviewPort.ConnectTo(nullSink);
ispComponent.Outputs[0].ConnectTo(imgEncoder);
// Camera warm up time
await Task.Delay(2000).ConfigureAwait(false);
await cam.ProcessAsync(cam.Camera.StillPort);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
As mentioned previously, you can also use the ISP component in a video recording pipeline too:
public async Task TakeVideoIsp()
{
MMALCamera cam = MMALCamera.Instance;
using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/isp.h264"))
using (var vidEncoder = new MMALVideoEncoder())
using (var ispComponent = new MMALIspComponent())
using (var nullSink = new MMALNullSinkComponent())
{
cam.ConfigureCameraSettings();
var vidEncoderPortConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 0, 0, null);
var ispComponentPortConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 640, 480, 0, 0, 0, true, null);
// Create our component pipeline.
ispComponent.ConfigureOutputPort(0, ispComponentPortConfig, null);
vidEncoder.ConfigureOutputPort(vidEncoderPortConfig, vidCaptureHandler);
cam.Camera.VideoPort.ConnectTo(ispComponent);
cam.Camera.PreviewPort.ConnectTo(nullSink);
ispComponent.Outputs[0].ConnectTo(vidEncoder);
// Camera warm up time
await Task.Delay(2000).ConfigureAwait(false);
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
The ImageFx component is used to apply image effects to a YUV_UV image or video frame and is exposed through the MMALImageFxComponent
class. The component supports YUV420 packed planar and YUV422 packed planar image formats. Effects are already supported via the MMALCameraComponent
class, configured using MMALCameraConfig.ImageFx
; however, the ImageFx component allows users who are not using the camera to apply effects to images and videos* in a standalone environment.
Note: When working with video, the ImageFx component cannot be connected to a Video Encoder component directly so you need to add a splitter component inbetween.
Image
In this example, we first take a JPEG image stored as file /home/pi/images/imagefx/testi420.jpg
with resolution 640 x 480 and YUV420 pixel format which is then passed to a Image Decoder component. The decoder component will then decode the image to raw YUV420. From here we pass the decoded image to the ImageFx component where we apply the Solarize effect where the image is finally passed to a Image Encoder component where it is re-encoded as a JPEG with YUV420 pixel format.
public async Task DecodeThenEncodePictureFromFilestreamWithImageFxComponent()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/images/imagefx/testi420.jpg"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/imagefx", "jpeg"))
using (var imgDecoder = new MMALImageDecoder())
using (var imageFx = new MMALImageFxComponent())
using (var imgEncoder = new MMALImageEncoder())
{
var decoderInputPortConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, 0, 0, 0, 0, 0, true, null);
var decoderOutputPortConfig = new MMALPortConfig(MMALEncoding.I420, null, 640, 480, 0, 0, 0, true, null);
var imageFxConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 640, 480, 0, 0, 0, true, null);
var encoderInputPortConfig = new MMALPortConfig(MMALEncoding.I420, null, 640, 480, 0, 0, 0, true, null);
var encoderOutputPortConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, 640, 480, 0, 0, 0, true, null);
// Create our component pipeline.
imgDecoder.ConfigureInputPort(decoderInputPortConfig, inputCaptureHandler)
.ConfigureOutputPort(0, decoderOutputPortConfig, null);
imageFx.ConfigureInputPort(imageFxConfig, imgDecoder.Outputs[0], null)
.ConfigureOutputPort(0, imageFxConfig, null);
imgEncoder.ConfigureInputPort(encoderInputPortConfig, imageFx.Outputs[0], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputPortConfig, outputCaptureHandler);
imageFx.ImageEffect = MMAL_PARAM_IMAGEFX_T.MMAL_PARAM_IMAGEFX_SOLARIZE;
imgDecoder.Outputs[0].ConnectTo(imageFx);
imageFx.Outputs[0].ConnectTo(imgEncoder);
standalone.PrintPipeline(imgDecoder);
await standalone.ProcessAsync(imgDecoder);
}
}
Video
In this example we first take an H.264 video stored as file /home/pi/videos/imagefx/testi420.h264
with resolution 640 x 480, YUV420 pixel format and 25fps, which is passed to a Video Decoder component. The decoder component decodes the video to raw YUV420. From here we pass the decoded video to the ImageFx component, where we apply the Solarize effect. Since the ImageFx component cannot be connected directly to a Video Encoder, we need to employ the Splitter component to act as a middle-man. The Splitter is finally connected to the Video Encoder component, where the video is re-encoded as H.264 with YUV420 pixel format.
public async Task DecodeThenEncodeVideoFromFilestreamWithImageFx()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/videos/imagefx/testi420.h264"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/imagefx", "h264"))
using (var splitter = new MMALSplitterComponent())
using (var vidDecoder = new MMALVideoDecoder())
using (var vidEncoder = new MMALVideoEncoder())
using (var imageFx = new MMALImageFxComponent())
{
var decoderInputConfig = new MMALPortConfig(MMALEncoding.H264, null, 640, 480, 25, 0, 0, true, null);
var decoderOutputConfig = new MMALPortConfig(MMALEncoding.I420, null, 640, 480, 0, 0, 0, true, null);
var imageFxConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 640, 480, 0, 0, 0, true, null);
var encoderInputConfig = new MMALPortConfig(MMALEncoding.I420, null, 640, 480, 0, 0, 0, true, null);
var encoderOutputConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, 640, 480, 25, 0, 0, true, null);
var splitterInputConfig = new MMALPortConfig(MMALEncoding.I420, null, 0, 0, 25, 0, 0, true, null);
var splitterOutputConfig = new MMALPortConfig(null, null, 0, 0, 0, 0, 0, true, null);
vidDecoder.ConfigureInputPort(decoderInputConfig, inputCaptureHandler)
.ConfigureOutputPort(0, decoderOutputConfig, null);
imageFx.ConfigureInputPort(imageFxConfig, vidDecoder.Outputs[0], null)
.ConfigureOutputPort(0, imageFxConfig, null);
splitter.ConfigureInputPort(splitterInputConfig, imageFx.Outputs[0], null)
.ConfigureOutputPort(0, splitterOutputConfig, null);
vidEncoder.ConfigureInputPort(encoderInputConfig, splitter.Outputs[0], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputConfig, outputCaptureHandler);
vidDecoder.Outputs[0].ConnectTo(imageFx);
imageFx.Outputs[0].ConnectTo(splitter);
splitter.Outputs[0].ConnectTo(vidEncoder);
await standalone.ProcessAsync(vidDecoder);
}
}
Colour Enhancement
The ImageFx component also allows a user to apply a constant colour enhancement, which is applied after the effect. This can be used with both images and videos. The example below demonstrates how to apply a blue colour to your effect, using the image example seen earlier.
public async Task DecodeThenEncodePictureFromFilestreamWithImageFxComponentAndColourEnhancement()
{
MMALStandalone standalone = MMALStandalone.Instance;
using (var stream = File.OpenRead("/home/pi/images/imagefx/testi420.jpg"))
using (var inputCaptureHandler = new InputCaptureHandler(stream))
using (var outputCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/imagefx", "jpeg"))
using (var imgDecoder = new MMALImageDecoder())
using (var imageFx = new MMALImageFxComponent())
using (var imgEncoder = new MMALImageEncoder())
{
var decoderInputPortConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, 0, 0, 0, 0, 0, true, null);
var decoderOutputPortConfig = new MMALPortConfig(MMALEncoding.I420, null, 640, 480, 0, 0, 0, true, null);
var imageFxConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, 640, 480, 0, 0, 0, true, null);
var encoderInputPortConfig = new MMALPortConfig(MMALEncoding.I420, null, 640, 480, 0, 0, 0, true, null);
var encoderOutputPortConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, 640, 480, 0, 0, 0, true, null);
// Create our component pipeline.
imgDecoder.ConfigureInputPort(decoderInputPortConfig, inputCaptureHandler)
.ConfigureOutputPort(0, decoderOutputPortConfig, null);
imageFx.ConfigureInputPort(imageFxConfig, imgDecoder.Outputs[0], null)
.ConfigureOutputPort(0, imageFxConfig, null);
imgEncoder.ConfigureInputPort(encoderInputPortConfig, imageFx.Outputs[0], null)
.ConfigureOutputPort<FileEncodeOutputPort>(0, encoderOutputPortConfig, outputCaptureHandler);
imageFx.ImageEffect = MMAL_PARAM_IMAGEFX_T.MMAL_PARAM_IMAGEFX_SOLARIZE;
imageFx.ColourEnhancement = new ColourEffects(true, Color.Blue);
imgDecoder.Outputs[0].ConnectTo(imageFx);
imageFx.Outputs[0].ConnectTo(imgEncoder);
standalone.PrintPipeline(imgDecoder);
await standalone.ProcessAsync(imgDecoder);
}
}