Examples
Note: You are currently viewing the v0.7 Alpha examples. If you are not cloning the latest code from this repository then you may wish to look at the v0.6 examples instead.
If you want to change any of the default configuration settings, you can do so by modifying the static properties within the MMALCameraConfig class. The main class, MMALCamera, which interfaces with the rest of the functionality the library provides, is a singleton and is accessed as follows: MMALCamera cam = MMALCamera.Instance.
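As a minimal sketch, configuration changes are applied globally before the camera is configured. The values below are illustrative only; each property used here is one referenced elsewhere on this page:

// Set global configuration before configuring the camera (illustrative values).
MMALCameraConfig.Resolution = new Resolution(1280, 720);
MMALCameraConfig.Encoding = MMALEncoding.I420;           // pixel format used for raw captures
MMALCameraConfig.EncodingSubFormat = MMALEncoding.I420;

MMALCamera cam = MMALCamera.Instance;

// Apply the configuration changes to the camera component.
cam.ConfigureCameraSettings();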
Note: The await Task.Delay(2000); is required to allow the camera sensor to "warm up". Due to the rolling shutter used in the Raspberry Pi camera modules, we need to wait a few seconds before valid image data can be captured, otherwise your images will likely be under-exposed. The value of 2 seconds is a safe amount of time to wait, but is only required after enabling the camera component, either on first run or after a manual disable. Additionally, the call to ConfigureCameraSettings() is only required if you have made changes to the camera's configuration.
The below examples describe how to take a simple JPEG image, either by using the built-in helper method or in manual mode. Here we are using an Image Encoder component which will encode the raw image data into JPEG format; you can change the encoding format to one of the following: JPEG, BMP, PNG, GIF. In addition, you can also change the pixel format you would like to encode with - in the below examples we are using YUV420.
Note: Support for some of these encoders was added in later firmware releases, so you will likely need to run sudo rpi-update for them to work. Please see this issue for reference.
Helper mode
public async Task TakePictureHelper()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    {
        await cam.TakePicture(imgCaptureHandler, MMALEncoding.JPEG, MMALEncoding.I420);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
Manual mode
public async Task TakePictureManual()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    using (var imgEncoder = new MMALImageEncoder())
    using (var nullSink = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        var portConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, quality: 90);

        // Create our component pipeline.
        imgEncoder.ConfigureOutputPort(portConfig, imgCaptureHandler);

        cam.Camera.StillPort.ConnectTo(imgEncoder);
        cam.Camera.PreviewPort.ConnectTo(nullSink);

        // Camera warm up time
        await Task.Delay(2000);

        await cam.ProcessAsync(cam.Camera.StillPort);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
In this example we are capturing raw, unencoded image data directly from the camera sensor. You can change the pixel format of the raw data by changing the MMALCameraConfig.Encoding and MMALCameraConfig.EncodingSubFormat properties.
Helper mode
public async Task TakeRawPictureHelper()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "raw"))
    {
        await cam.TakeRawPicture(imgCaptureHandler);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
Note: I422 encoding will prevent the native callback handler from being called, ultimately requiring a reboot of your Pi. I have tested I420, RGB24 and RGBA, which work as expected.
The timelapse mode example describes how to take an image every 10 seconds for 4 hours. You can change the frequency and duration of the timelapse by changing the various properties on the Timelapse object.
public async Task TakeTimelapsePicture()
{
    MMALCamera cam = MMALCamera.Instance;

    // This example will take an image every 10 seconds for 4 hours.
    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    {
        var cts = new CancellationTokenSource(TimeSpan.FromHours(4));
        var tl = new Timelapse { Mode = TimelapseMode.Second, CancellationToken = cts.Token, Value = 10 };

        await cam.TakePictureTimelapse(imgCaptureHandler, MMALEncoding.JPEG, MMALEncoding.I420, tl);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
The timeout mode example shows how to take continuous image captures for a set duration. This is done via a helper method in the MMALCamera class. We pass in a CancellationToken which signals when image capturing should stop.
public async Task TakeTimeoutPicture()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    {
        var cts = new CancellationTokenSource(TimeSpan.FromHours(4));
        await cam.TakePictureTimeout(imgCaptureHandler, MMALEncoding.JPEG, MMALEncoding.I420, cts.Token);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
In the example below we show how to change the encoding type of a MMALImageEncoder after you have taken an image. You do not need to dispose of the component to do this; simply reconfigure the Image Encoder's output port with the new encoding type and it will work as expected.
public async Task ChangeImageEncodingType()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var jpgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    using (var bmpCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "bmp"))
    using (var imgEncoder = new MMALImageEncoder())
    using (var nullSink = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        // Camera warm up time
        await Task.Delay(2000);

        var portConfigJPEG = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, quality: 90);
        var portConfigBMP = new MMALPortConfig(MMALEncoding.BMP, MMALEncoding.RGBA);

        // Create our component pipeline.
        imgEncoder.ConfigureOutputPort(portConfigJPEG, jpgCaptureHandler);

        cam.Camera.StillPort.ConnectTo(imgEncoder);
        cam.Camera.PreviewPort.ConnectTo(nullSink);

        await cam.ProcessAsync(cam.Camera.StillPort);

        imgEncoder.ConfigureOutputPort(portConfigBMP, bmpCaptureHandler);

        await cam.ProcessAsync(cam.Camera.StillPort);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
The below examples show how to capture video using MMALSharp. For basic video recording, there is a built-in helper method which uses H.264 encoding. If you wish to use a different encoding type, or would like to customise additional parameters such as bitrate, you can do this manually.
Helper mode
// Self-contained method for recording H.264 video for a specified amount of time.
// Records at 30fps, 25Mb/s at the highest quality.
public async Task TakeVideoHelper()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "h264"))
    {
        var cts = new CancellationTokenSource(TimeSpan.FromMinutes(3));

        // Take video for 3 minutes.
        await cam.TakeVideo(vidCaptureHandler, cts.Token);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
Manual mode
public async Task TakeVideoManual()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "h264"))
    using (var vidEncoder = new MMALVideoEncoder())
    using (var renderer = new MMALVideoRenderer())
    {
        cam.ConfigureCameraSettings();

        var portConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 10, bitrate: MMALVideoEncoder.MaxBitrateLevel4);

        // Create our component pipeline. Here we are using the H.264 standard with a YUV420 pixel format. The video will be taken at 25Mb/s.
        vidEncoder.ConfigureOutputPort(portConfig, vidCaptureHandler);

        cam.Camera.VideoPort.ConnectTo(vidEncoder);
        cam.Camera.PreviewPort.ConnectTo(renderer);

        // Camera warm up time
        await Task.Delay(2000);

        var cts = new CancellationTokenSource(TimeSpan.FromMinutes(3));

        // Take video for 3 minutes.
        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
The segmented recording mode allows us to split video recording into multiple files. The user is able to specify the frequency at which the split occurs via the Split object.
Note: MMALCameraConfig.InlineHeaders must be set to true in order for this to work.
public async Task TakeSegmentedVideo()
{
    // Required for segmented recording mode.
    MMALCameraConfig.InlineHeaders = true;

    MMALCamera cam = MMALCamera.Instance;

    using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "h264"))
    using (var vidEncoder = new MMALVideoEncoder())
    using (var renderer = new MMALVideoRenderer())
    {
        cam.ConfigureCameraSettings();

        var portConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 10, bitrate: MMALVideoEncoder.MaxBitrateLevel4, split: new Split { Mode = TimelapseMode.Second, Value = 30 });

        // Create our component pipeline. Here we are using the H.264 standard with a YUV420 pixel format. The video will be taken at 25Mb/s.
        vidEncoder.ConfigureOutputPort(portConfig, vidCaptureHandler);

        cam.Camera.VideoPort.ConnectTo(vidEncoder);
        cam.Camera.PreviewPort.ConnectTo(renderer);

        // Camera warm up time
        await Task.Delay(2000);

        var cts = new CancellationTokenSource(TimeSpan.FromMinutes(1));

        // Record video for 1 minute, using segmented recording mode to split into multiple files every 30 seconds.
        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
The quantisation parameter is used to apply compression when recording with H.264 encoding and affects the quality of the output recording. Typical values applied to this parameter are between 10 (highest quality) and 40 (lowest quality), with the latter also resulting in the smallest file size. It is worth noting that if you want to record with variable bitrate, both the Bitrate and Quantisation parameters should have a value of 0.
public async Task QuantizationParameterExample()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "h264"))
    using (var vidEncoder = new MMALVideoEncoder())
    using (var renderer = new MMALVideoRenderer())
    {
        cam.ConfigureCameraSettings();

        var portConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 10);

        // Create our component pipeline. Here we are using the H.264 standard with a YUV420 pixel format
        // and a quantisation parameter of 10 (highest quality).
        vidEncoder.ConfigureOutputPort(portConfig, vidCaptureHandler);

        cam.Camera.VideoPort.ConnectTo(vidEncoder);
        cam.Camera.PreviewPort.ConnectTo(renderer);

        // Camera warm up time
        await Task.Delay(2000);

        var cts = new CancellationTokenSource(TimeSpan.FromMinutes(3));

        // Take video for 3 minutes.
        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
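If you would instead like to record with variable bitrate, a port configuration along the following lines should work - a sketch based on the rule above that both parameters must be set to 0:

// Variable bitrate: both the quantisation ("quality") and bitrate parameters are 0.
var vbrPortConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 0, bitrate: 0);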
In the example below we show how to change the encoding type of a MMALVideoEncoder after you have taken a video. You do not need to dispose of the component to do this; simply reconfigure the Video Encoder's output port with the new encoding type and it will work as expected.
public async Task ChangeVideoEncodingType()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var h264CaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "h264"))
    using (var mjpgCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "mjpeg"))
    using (var vidEncoder = new MMALVideoEncoder())
    using (var renderer = new MMALVideoRenderer())
    {
        cam.ConfigureCameraSettings();

        // Camera warm up time
        await Task.Delay(2000);

        var portConfigH264 = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 10, bitrate: MMALVideoEncoder.MaxBitrateLevel4);
        var portConfigMJPEG = new MMALPortConfig(MMALEncoding.MJPEG, MMALEncoding.I420, quality: 90, bitrate: MMALVideoEncoder.MaxBitrateMJPEG);

        // Create our component pipeline. Here we are using H.264 encoding with a YUV420 pixel format. The video will be taken at 25Mb/s.
        vidEncoder.ConfigureOutputPort(portConfigH264, h264CaptureHandler);

        cam.Camera.VideoPort.ConnectTo(vidEncoder);
        cam.Camera.PreviewPort.ConnectTo(renderer);

        var cts = new CancellationTokenSource(TimeSpan.FromMinutes(3));

        // Take video for 3 minutes.
        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);

        // Here we change the encoding type of the video encoder to MJPEG.
        vidEncoder.ConfigureOutputPort(portConfigMJPEG, mjpgCaptureHandler);

        cts = new CancellationTokenSource(TimeSpan.FromMinutes(3));

        // Take video for 3 minutes.
        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
Storing presentation timestamps is a helpful feature: you can provide them to a tool such as mkvmerge when converting the H.264 elementary stream into a containerised format. When timestamps are provided, video players such as VLC are able to seek correctly and display the correct duration of a video file. To enable storing of timestamps, the VideoStreamCaptureHandler class has a new parameter on each constructor called storeTimestamps, which is set to false by default. When enabled, a new file matching the filename of your video file will be created in the same directory with a .pts extension.
public async Task TakeVideoManualAndSaveTimestamps()
{
    MMALCamera cam = MMALCamera.Instance;

    MMALCameraConfig.Resolution = new Resolution(640, 480);

    // We are setting the "storeTimestamps" parameter to true to enable timestamp saving.
    using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "h264", true))
    using (var vidEncoder = new MMALVideoEncoder())
    using (var renderer = new MMALVideoRenderer())
    {
        cam.ConfigureCameraSettings();

        var portConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, bitrate: MMALVideoEncoder.MaxBitrateLevel4);

        vidEncoder.ConfigureOutputPort(portConfig, vidCaptureHandler);

        cam.Camera.VideoPort.ConnectTo(vidEncoder);
        cam.Camera.PreviewPort.ConnectTo(renderer);

        // Camera warm up time
        await Task.Delay(2000);

        var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));

        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }
}
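As an illustration only (the file names here are hypothetical, and older mkvmerge releases name this option --timecodes rather than --timestamps), the resulting .pts file can then be passed to mkvmerge along these lines, where 0 is the track ID of the H.264 video track:

mkvmerge -o output.mkv --timestamps 0:output.pts output.h264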
The MMALSplitterComponent can be connected to either the camera's still or video port. From here, the splitter provides 4 output ports, allowing you to further extend your pipeline and produce up to 4 file outputs at any given time. By default, the splitter component is configured for use with the camera's video port, but it can easily be changed to work with the still port instead. We will start by discussing an example connected to the video port, and then move on to how this can be changed for the still port.
In the below example we are connecting a splitter component to 4 video encoder components where the video is encoded to H.264 with a YUV420 pixel format. Each video encoder has a different port configuration: you can see the Quality property varies from 10 to 40, which sets the quantisation value for the H.264 encoder and affects the quality of the recording via compression, 10 being the highest quality and 40 the lowest.
Important: You will see in the below example that we are writing to a ramdisk mount; this is because there are performance issues with the Pi's SD card IO when writing to multiple output streams.
public async Task SplitterComponentExample()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var handler = new VideoStreamCaptureHandler("/media/ramdisk/video", "h264"))
    using (var handler2 = new VideoStreamCaptureHandler("/media/ramdisk/video", "h264"))
    using (var handler3 = new VideoStreamCaptureHandler("/media/ramdisk/video", "h264"))
    using (var handler4 = new VideoStreamCaptureHandler("/media/ramdisk/video", "h264"))
    using (var splitter = new MMALSplitterComponent())
    using (var vidEncoder = new MMALVideoEncoder())
    using (var vidEncoder2 = new MMALVideoEncoder())
    using (var vidEncoder3 = new MMALVideoEncoder())
    using (var vidEncoder4 = new MMALVideoEncoder())
    using (var renderer = new MMALVideoRenderer())
    {
        cam.ConfigureCameraSettings();

        var splitterPortConfig = new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420, bitrate: 13000000);
        var portConfig1 = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 10, bitrate: 13000000, timeout: DateTime.Now.AddSeconds(20));
        var portConfig2 = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 20, bitrate: 13000000, timeout: DateTime.Now.AddSeconds(15));
        var portConfig3 = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 30, bitrate: 13000000, timeout: DateTime.Now.AddSeconds(10));
        var portConfig4 = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 40, bitrate: 13000000, timeout: DateTime.Now.AddSeconds(10));

        // Create our component pipeline.
        splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);
        splitter.ConfigureOutputPort(0, splitterPortConfig, null);
        splitter.ConfigureOutputPort(1, splitterPortConfig, null);
        splitter.ConfigureOutputPort(2, splitterPortConfig, null);
        splitter.ConfigureOutputPort(3, splitterPortConfig, null);

        vidEncoder.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), splitter.Outputs[0], null);
        vidEncoder.ConfigureOutputPort(0, portConfig1, handler);
        vidEncoder2.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), splitter.Outputs[1], null);
        vidEncoder2.ConfigureOutputPort(0, portConfig2, handler2);
        vidEncoder3.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), splitter.Outputs[2], null);
        vidEncoder3.ConfigureOutputPort(0, portConfig3, handler3);
        vidEncoder4.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), splitter.Outputs[3], null);
        vidEncoder4.ConfigureOutputPort(0, portConfig4, handler4);

        cam.Camera.VideoPort.ConnectTo(splitter);
        splitter.Outputs[0].ConnectTo(vidEncoder);
        splitter.Outputs[1].ConnectTo(vidEncoder2);
        splitter.Outputs[2].ConnectTo(vidEncoder3);
        splitter.Outputs[3].ConnectTo(vidEncoder4);
        cam.Camera.PreviewPort.ConnectTo(renderer);

        // Camera warm up time
        await Task.Delay(2000);

        await cam.ProcessAsync(cam.Camera.VideoPort);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
You can also use the splitter component to record video and capture images at the same time:
public async Task VideoAndImages()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/", "h264"))
    using (var splitter = new MMALSplitterComponent())
    using (var imgEncoder = new MMALImageEncoder(continuousCapture: true))
    using (var vidEncoder = new MMALVideoEncoder())
    using (var nullSink = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        var imgEncoderPortConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, quality: 90);
        var vidEncoderPortConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 10, bitrate: MMALVideoEncoder.MaxBitrateLevel4);

        // Create our component pipeline.
        imgEncoder.ConfigureOutputPort(imgEncoderPortConfig, imgCaptureHandler);
        vidEncoder.ConfigureOutputPort(vidEncoderPortConfig, vidCaptureHandler);

        cam.Camera.VideoPort.ConnectTo(splitter);
        splitter.Outputs[0].ConnectTo(imgEncoder);
        splitter.Outputs[1].ConnectTo(vidEncoder);
        cam.Camera.PreviewPort.ConnectTo(nullSink);

        // Camera warm up time
        await Task.Delay(2000);

        var cts = new CancellationTokenSource(TimeSpan.FromSeconds(15));

        // Process for 15 seconds.
        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
As mentioned earlier, the splitter component can also be connected to the camera's still port, which allows you to produce 4 individual still frames at the same time. Typical uses include connecting a Resizer/ISP component to produce 4 raw outputs, or producing encoded output with image encoders*.
Note: *If you plan on connecting image encoders to the splitter component outputs then you will need to add a Resizer/ISP component in between. If you do not add the Resizer/ISP component between the splitter output port and the image encoder input port, the splitter will only allow 1 image encoder to be connected to a given splitter output port. Please see here.
In the below example we are connecting the camera's still port to a splitter component; from here each splitter output port is connected to a hardware accelerated ISP component, and finally each ISP component is connected to a JPEG image encoder. You will notice the generic constraint applied to each splitter output port (<SplitterStillPort>). This is important because, by default, the splitter component is intended for use with video and its output would be passed to a VideoOutputCallbackHandler, which is incorrect for still image usage; recreating the ports as a SplitterStillPort ensures the correct behaviour for still image captures.
public async Task SplitterToImage()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    using (var imgCaptureHandler2 = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    using (var imgCaptureHandler3 = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    using (var imgCaptureHandler4 = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    using (var imgEncoder = new MMALImageEncoder())
    using (var imgEncoder2 = new MMALImageEncoder())
    using (var imgEncoder3 = new MMALImageEncoder())
    using (var imgEncoder4 = new MMALImageEncoder())
    using (var splitter = new MMALSplitterComponent())
    using (var isp1 = new MMALIspComponent())
    using (var isp2 = new MMALIspComponent())
    using (var isp3 = new MMALIspComponent())
    using (var isp4 = new MMALIspComponent())
    using (var nullSink = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        var portConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, quality: 90, userPortName: "Image Encoder 1");
        var portConfig2 = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, quality: 90, userPortName: "Image Encoder 2");
        var portConfig3 = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, quality: 90, userPortName: "Image Encoder 3");
        var portConfig4 = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, quality: 90, userPortName: "Image Encoder 4");
        var splitterConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420);
        var resizeConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, width: 1280, height: 720);
        var resizeConfig2 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, width: 1024, height: 720);
        var resizeConfig3 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, width: 640, height: 480);
        var resizeConfig4 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, width: 620, height: 310);

        // Create our component pipeline.
        splitter.ConfigureOutputPort<SplitterStillPort>(0, splitterConfig, null);
        splitter.ConfigureOutputPort<SplitterStillPort>(1, splitterConfig, null);
        splitter.ConfigureOutputPort<SplitterStillPort>(2, splitterConfig, null);
        splitter.ConfigureOutputPort<SplitterStillPort>(3, splitterConfig, null);

        isp1.ConfigureOutputPort(resizeConfig, null);
        isp2.ConfigureOutputPort(resizeConfig2, null);
        isp3.ConfigureOutputPort(resizeConfig3, null);
        isp4.ConfigureOutputPort(resizeConfig4, null);

        imgEncoder.ConfigureOutputPort(portConfig, imgCaptureHandler);
        imgEncoder2.ConfigureOutputPort(portConfig2, imgCaptureHandler2);
        imgEncoder3.ConfigureOutputPort(portConfig3, imgCaptureHandler3);
        imgEncoder4.ConfigureOutputPort(portConfig4, imgCaptureHandler4);

        cam.Camera.StillPort.ConnectTo(splitter);
        cam.Camera.PreviewPort.ConnectTo(nullSink);

        splitter.Outputs[0].ConnectTo(isp1);
        splitter.Outputs[1].ConnectTo(isp2);
        splitter.Outputs[2].ConnectTo(isp3);
        splitter.Outputs[3].ConnectTo(isp4);

        isp1.Outputs[0].ConnectTo(imgEncoder);
        isp2.Outputs[0].ConnectTo(imgEncoder2);
        isp3.Outputs[0].ConnectTo(imgEncoder3);
        isp4.Outputs[0].ConnectTo(imgEncoder4);

        cam.PrintPipeline();

        // Camera warm up time
        await Task.Delay(2000);

        await cam.ProcessAsync(cam.Camera.StillPort);
    }
}
The MMALResizerComponent can be connected to your pipeline to change the width/height and encoding type/pixel format of frames captured by the camera component. This component would typically be used alongside the splitter component, allowing you to have multiple video outputs at different resolutions.
The resizer component itself is an MMALDownstreamHandlerComponent, meaning you can process data to a file directly from it without the need to connect an encoder.
If you are not using this component with a splitter, it is better to change the native resolution of the camera using the MMALCameraConfig.Resolution global property. There is an issue discussed in #136 whereby, if the resizer component is connected to the camera's still port and used alongside an Image Encoder component, frames will intermittently be dropped.
In the below example we are connecting a splitter component to 4 resizer components which will resize to varying resolutions as specified in the port configuration objects. From there, the resizers are each connected to a video encoder where the video is encoded to H.264 with a YUV420 pixel format.
Important: You will see in the below example that we are writing to a ramdisk mount; this is because there are performance issues with the Pi's SD card IO when writing to multiple output streams.
public async Task ResizerComponentExample()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var vidCaptureHandler = new VideoStreamCaptureHandler("/media/ramdisk/video", "h264"))
    using (var vidCaptureHandler2 = new VideoStreamCaptureHandler("/media/ramdisk/video", "h264"))
    using (var vidCaptureHandler3 = new VideoStreamCaptureHandler("/media/ramdisk/video", "h264"))
    using (var vidCaptureHandler4 = new VideoStreamCaptureHandler("/media/ramdisk/video", "h264"))
    using (var preview = new MMALVideoRenderer())
    using (var splitter = new MMALSplitterComponent())
    using (var resizer = new MMALResizerComponent())
    using (var resizer2 = new MMALResizerComponent())
    using (var resizer3 = new MMALResizerComponent())
    using (var resizer4 = new MMALResizerComponent())
    using (var vidEncoder = new MMALVideoEncoder())
    using (var vidEncoder2 = new MMALVideoEncoder())
    using (var vidEncoder3 = new MMALVideoEncoder())
    using (var vidEncoder4 = new MMALVideoEncoder())
    {
        cam.ConfigureCameraSettings();

        // Camera warm up time
        await Task.Delay(2000);

        var splitterPortConfig = new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420);
        var resizerPortConfig1 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, width: 1024, height: 768, zeroCopy: true);
        var resizerPortConfig2 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, width: 800, height: 600, zeroCopy: true);
        var resizerPortConfig3 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, width: 640, height: 480, zeroCopy: true);
        var resizerPortConfig4 = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, width: 640, height: 480, zeroCopy: true);
        var vidEncoderConfig1 = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, bitrate: 1300000, timeout: DateTime.Now.AddSeconds(10));
        var vidEncoderConfig2 = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, bitrate: 1300000, timeout: DateTime.Now.AddSeconds(20));

        // Configure the ports on each component.
        splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);
        splitter.ConfigureOutputPort(0, splitterPortConfig, null);
        splitter.ConfigureOutputPort(1, splitterPortConfig, null);
        splitter.ConfigureOutputPort(2, splitterPortConfig, null);
        splitter.ConfigureOutputPort(3, splitterPortConfig, null);

        resizer.ConfigureOutputPort(resizerPortConfig1, null);
        resizer2.ConfigureOutputPort(resizerPortConfig2, null);
        resizer3.ConfigureOutputPort(resizerPortConfig3, null);
        resizer4.ConfigureOutputPort(resizerPortConfig4, null);

        vidEncoder.ConfigureOutputPort(vidEncoderConfig1, vidCaptureHandler);
        vidEncoder2.ConfigureOutputPort(vidEncoderConfig2, vidCaptureHandler2);
        vidEncoder3.ConfigureOutputPort(vidEncoderConfig1, vidCaptureHandler3);
        vidEncoder4.ConfigureOutputPort(vidEncoderConfig2, vidCaptureHandler4);

        // Create our component pipeline.
        cam.Camera.VideoPort.ConnectTo(splitter);
        splitter.Outputs[0].ConnectTo(resizer);
        splitter.Outputs[1].ConnectTo(resizer2);
        splitter.Outputs[2].ConnectTo(resizer3);
        splitter.Outputs[3].ConnectTo(resizer4);
        resizer.Outputs[0].ConnectTo(vidEncoder);
        resizer2.Outputs[0].ConnectTo(vidEncoder2);
        resizer3.Outputs[0].ConnectTo(vidEncoder3);
        resizer4.Outputs[0].ConnectTo(vidEncoder4);
        cam.Camera.PreviewPort.ConnectTo(preview);

        await cam.ProcessAsync(cam.Camera.VideoPort);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
The ISP component wraps the Image Sensor Processor hardware block to offer hardware accelerated format conversion and resizing. You can use the ISP component with both image stills and video recording pipelines and also connect it to the splitter component should you wish.
Note: The ISP component by definition features two output ports. At this stage, MMALSharp only supports port 361 (output 0) due to an outstanding issue whereby the native callback method is not called when port 362 is enabled. This is under investigation in ticket #131.
An example of how to use the ISP component is below:
public async Task TakePictureIsp()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/test1.bmp"))
    using (var imgEncoder = new MMALImageEncoder())
    using (var ispComponent = new MMALIspComponent())
    using (var nullSink = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        var imgEncoderPortConfig = new MMALPortConfig(MMALEncoding.BMP, MMALEncoding.RGB16, quality: 90);
        var ispComponentPortConfig = new MMALPortConfig(MMALEncoding.RGB16, MMALEncoding.RGB16, width: 640, height: 480, zeroCopy: true);

        // Create our component pipeline.
        ispComponent.ConfigureOutputPort(0, ispComponentPortConfig, null);
        imgEncoder.ConfigureOutputPort(imgEncoderPortConfig, imgCaptureHandler);

        cam.Camera.StillPort.ConnectTo(ispComponent);
        cam.Camera.PreviewPort.ConnectTo(nullSink);
        ispComponent.Outputs[0].ConnectTo(imgEncoder);

        // Camera warm up time
        await Task.Delay(2000).ConfigureAwait(false);

        await cam.ProcessAsync(cam.Camera.StillPort);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
As mentioned previously, you can also use the ISP component in a video recording pipeline:
public async Task TakeVideoIsp()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var vidCaptureHandler = new VideoStreamCaptureHandler("/home/pi/videos/isp.h264"))
    using (var vidEncoder = new MMALVideoEncoder())
    using (var ispComponent = new MMALIspComponent())
    using (var nullSink = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        var vidEncoderPortConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420);
        var ispComponentPortConfig = new MMALPortConfig(MMALEncoding.I420, MMALEncoding.I420, width: 640, height: 480, zeroCopy: true);

        // Create our component pipeline.
        ispComponent.ConfigureOutputPort(0, ispComponentPortConfig, null);
        vidEncoder.ConfigureOutputPort(vidEncoderPortConfig, vidCaptureHandler);

        cam.Camera.VideoPort.ConnectTo(ispComponent);
        cam.Camera.PreviewPort.ConnectTo(nullSink);
        ispComponent.Outputs[0].ConnectTo(vidEncoder);

        // Camera warm up time
        await Task.Delay(2000).ConfigureAwait(false);

        var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));

        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
Version 0.3 brings the ability to print out the current component pipeline you have configured - this can be useful when using many components and encoders (such as the splitter). Calling the PrintPipeline() method on the MMALCamera instance will print your current pipeline to the log. Debug logging must be enabled.
public async Task PrintComponentPipeline()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var imgCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/", "jpg"))
    using (var imgEncoder = new MMALImageEncoder())
    using (var nullSink = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();

        var portConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420, quality: 90);

        // Create our component pipeline.
        imgEncoder.ConfigureOutputPort(portConfig, imgCaptureHandler);

        cam.Camera.StillPort.ConnectTo(imgEncoder);
        cam.Camera.PreviewPort.ConnectTo(nullSink);

        cam.PrintPipeline();

        // Camera warm up time
        await Task.Delay(2000);

        await cam.ProcessAsync(cam.Camera.StillPort);
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
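How you enable debug logging depends on your logging setup. As an assumption-laden sketch (the MMALLog.LoggerFactory hook is assumed here; check the library's logging documentation for your version), wiring up a console logger at Debug level might look like:

using Microsoft.Extensions.Logging;

// Assumed API: MMALSharp exposes a static ILoggerFactory hook for its internal logging.
MMALLog.LoggerFactory = LoggerFactory.Create(builder =>
    builder.SetMinimumLevel(LogLevel.Debug).AddConsole());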
Sometimes you may wish to gain access to the image data directly instead of saving it to disk; this is especially useful when capturing raw unencoded image data from the camera. In this scenario, you can use either an InMemoryCaptureHandler (which wraps a List<byte>) or a MemoryStreamCaptureHandler.
public async Task StoreToMemory()
{
    MMALCamera cam = MMALCamera.Instance;

    var captureHandler = new InMemoryCaptureHandler();

    await cam.TakeRawPicture(captureHandler);

    // Access raw unencoded output.
    var outputFrames = captureHandler.WorkingData;

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}
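If you would rather work with a MemoryStream, a minimal sketch using MemoryStreamCaptureHandler is below. It assumes the handler exposes its underlying stream via a CurrentStream property, as the stream capture handlers in this library do; check the handler's API if this differs in your version.

public async Task StoreToMemoryStream()
{
    MMALCamera cam = MMALCamera.Instance;

    using (var captureHandler = new MemoryStreamCaptureHandler())
    {
        await cam.TakeRawPicture(captureHandler);

        // Copy the raw unencoded output from the underlying MemoryStream
        // (the CurrentStream property is assumed here).
        byte[] outputFrames = captureHandler.CurrentStream.ToArray();
    }

    // Only call when you no longer require the camera, i.e. on app shutdown.
    cam.Cleanup();
}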