From bb712de79c820e78926249de019dc4b4dfd7c907 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Tue, 1 Oct 2024 03:00:38 +0000
Subject: [PATCH] build based on a74214a
---
 dev/.documenter-siteinfo.json | 2 +-
 dev/functionindex/index.html  | 2 +-
 dev/index.html                | 2 +-
 dev/lowlevel/index.html       | 4 ++--
 dev/reading/index.html        | 8 ++++----
 dev/utilities/index.html      | 2 +-
 dev/writing/index.html        | 6 +++---
 7 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index e7af1939..7f4eb098 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-23T13:18:33","documenter_version":"1.7.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-10-01T03:00:33","documenter_version":"1.7.0"}}
\ No newline at end of file

diff --git a/dev/functionindex/index.html b/dev/functionindex/index.html
index a3d3aa90..5c5ce71f 100644
--- a/dev/functionindex/index.html
+++ b/dev/functionindex/index.html
@@ -3,4 +3,4 @@

diff --git a/dev/index.html b/dev/index.html
index bec51a85..a8b4da3d 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -3,4 +3,4 @@

Introduction

This library provides methods for reading and writing video files.

Functionality is based on a dedicated build of ffmpeg, provided via JuliaPackaging/Yggdrasil.

Explore the source at github.com/JuliaIO/VideoIO.jl

Platform Notes:

  • ARM: For truly lossless reading & writing, there is a known issue on ARM that results in small precision differences when reading/writing some video files. As such, tests for frame comparison are currently skipped on ARM. Issues/PRs are welcome to help get this fixed.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add VideoIO

Or, equivalently, via the Pkg API:

julia> import Pkg; Pkg.add("VideoIO")

diff --git a/dev/lowlevel/index.html b/dev/lowlevel/index.html
index dccd6e19..2d9376d4 100644
--- a/dev/lowlevel/index.html
+++ b/dev/lowlevel/index.html
@@ -3,11 +3,11 @@

Low level functionality

FFMPEG log level

FFMPEG's built-in logging and warning level can be read and set with

VideoIO.loglevel! (Function)

loglevel!(loglevel::Integer)

Set FFMPEG log level. Options are:

  • VideoIO.AVUtil.AV_LOG_QUIET
  • VideoIO.AVUtil.AV_LOG_PANIC
  • VideoIO.AVUtil.AV_LOG_FATAL
  • VideoIO.AVUtil.AV_LOG_ERROR
  • VideoIO.AVUtil.AV_LOG_WARNING
  • VideoIO.AVUtil.AV_LOG_INFO
  • VideoIO.AVUtil.AV_LOG_VERBOSE
  • VideoIO.AVUtil.AV_LOG_DEBUG
  • VideoIO.AVUtil.AV_LOG_TRACE
source
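
For example, to silence FFMPEG output below the error level (a usage sketch based on the constants listed above):

VideoIO.loglevel!(VideoIO.AVUtil.AV_LOG_ERROR)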

FFMPEG interface

Each ffmpeg library has its own VideoIO subpackage:

libavcodec    -> AVCodecs
 libavdevice   -> AVDevice
 libavfilter   -> AVFilters
 libavformat   -> AVFormat
 libavutil     -> AVUtil
 libswscale    -> SWScale

The following libraries are also related to ffmpeg, but are currently not exposed:

libswresample -> SWResample
 libpostproc   -> PostProc   (not wrapped)

After importing VideoIO, you can import and use any of the subpackages directly

import VideoIO
import SWResample  # SWResample functions are now available

Note that much of the functionality of these subpackages is not enabled by default, to avoid long compilation times as they load. To control what is loaded, each library version has a file which imports that module's files. For example, ffmpeg's libswscale-v2 files are loaded by VideoIO_PKG_DIR/src/ffmpeg/SWScale/v2/LIBSWSCALE.jl.

Check these files to enable any needed functionality that isn't already enabled. Note that you'll probably need to do this for each supported ffmpeg version, and that the interfaces change somewhat from version to version.

Note that, in general, the low-level functions are not very fun to use, so it is good to focus initially on enabling a nice, higher-level function for these interfaces.

diff --git a/dev/reading/index.html b/dev/reading/index.html
index 42bff0f6..f80a0914 100644
--- a/dev/reading/index.html
+++ b/dev/reading/index.html
@@ -4,7 +4,7 @@

Video Reading

Note: Reading of audio streams is not yet implemented

Reading Video Files

VideoIO contains a simple high-level interface which allows reading of video frames from a supported video file (or from a camera device, shown later).

The simplest form will load the entire video into memory as a vector of image arrays.

using VideoIO
VideoIO.load("video.mp4")
VideoIO.load (Function)
load(filename::String, args...; kwargs...)

Load video file filename into memory as a vector of image arrays, setting args and kwargs on the openvideo process.

source

Frames can be read sequentially until the end of the file:

using VideoIO
 
 # Construct an AVInput object to access the video and audio streams in a video container
 # io = VideoIO.open(video_file)
@@ -20,7 +20,7 @@
     # Do something with frames
 end
 close(f)
VideoIO.openvideo (Function)
openvideo(file[, video_stream = 1]; <keyword arguments>) -> reader
openvideo(f, ...)

Open file and create an object to read and decode video stream number video_stream. file can either be an AVInput created by VideoIO.open, the name of a file as an AbstractString, or instead an IO object. However, support for IO objects is incomplete, and does not currently work with common video containers such as *.mp4 files.

Frames can be read from the reader with read or read!, or alternatively by using the iterator interface provided for reader. To close the reader, simply use close. Seeking within the reader can be accomplished using seek and seekstart. Frames can be skipped with skipframe or skipframes. The current time in the video stream can be accessed with gettime. Details about the frame dimensions can be found with out_frame_size. The total number of frames can be found with counttotalframes.

If called with a single-argument function as the first argument, the reader will be passed to that function, and will be closed once the call returns, whether or not an error occurred.

The decoder options and conversion to Julia arrays are controlled by the keyword arguments listed below.

Keyword arguments

  • transcode::Bool = true: Determines whether decoded frames are transferred into a Julia matrix with easily interpretable element type, or instead returned as raw byte buffers.
  • target_format::Union{Nothing, Cint} = nothing: Determines the target pixel format that decoded frames will be transformed into before being transferred to an output array. This can either be a VideoIO.AV_PIX_FMT_* value corresponding to an FFmpeg AVPixelFormat, and must then also be a format supported by VideoIO, or instead nothing, in which case the format will be automatically chosen by FFmpeg. The list of currently supported pixel formats, and the matrix element type that each pixel format corresponds to, are elements of VideoIO.VIO_PIX_FMT_DEF_ELTYPE_LU.
  • pix_fmt_loss_flags = 0: Loss flags to control how transfer pixel format is chosen. Only valid if target_format = nothing. Flags must correspond to FFmpeg loss flags.
  • target_colorspace_details = nothing: Information about the color space of output Julia arrays. If nothing, then this will correspond to a best-effort interpretation of Colors.jl for the corresponding element type. To override these defaults, create a VideoIO.VioColorspaceDetails object using the appropriate AVCOL_ definitions from FFmpeg, or use VideoIO.VioColorspaceDetails() to use the FFmpeg defaults. To avoid rescaling limited color range data (mpeg) to full color range output (jpeg), set this to VideoIO.VioColorspaceDetails() so that no additional scaling is done by sws_scale.
  • allow_vio_gray_transform = true: Instead of using sws_scale for gray data, use a more accurate color space transformation implemented in VideoIO if allow_vio_gray_transform = true. Otherwise, use sws_scale.
  • swscale_options::OptionsT = (;): A NamedTuple or Dict{Symbol, Any} of options for the swscale object used to perform color space scaling. Options must correspond with options for FFmpeg's scaler filter.
  • sws_color_options::OptionsT = (;): Additional keyword arguments passed to sws_setColorspaceDetails.
  • thread_count::Union{Nothing, Int} = Sys.CPU_THREADS: The number of threads the codec is allowed to use or nothing for default codec behavior. Defaults to Sys.CPU_THREADS.
source

Alternatively, you can open the video stream in a file directly with VideoIO.openvideo(filename), without making an intermediate AVInput object, if you only need the video.
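
For example, openvideo can also be used do-block style via the function-first form described above, so the reader is closed automatically; a minimal sketch, with "video.mp4" as a placeholder file name:

VideoIO.openvideo("video.mp4") do f
    img = read(f)  # read the first frame
    # ... process img ...
end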

VideoIO also provides an iterator interface for VideoReader, which behaves like other mutable iterators in Julia (e.g. Channels). If iteration is stopped early, for example with a break statement, then it can be resumed in the same spot by iterating on the same VideoReader object. Consequently, if you have already iterated over all the frames of a VideoReader object, then it will be empty for further iteration unless its position in the video is changed with seek.

using VideoIO
 
 f = VideoIO.openvideo("video.mp4")
 for img in f
@@ -31,7 +31,7 @@
 # Further iteration will show that f is now empty
 @assert isempty(f)
 
close(f)

Seeking through the video can be achieved via seek(f, seconds::Float64) and seekstart(f) to return to the start.

Base.seek (Function)
seek(reader::VideoReader, seconds)

Seeks into the parent AVInput using this video stream's index. See seek for AVInput.

source
seek(avin::AVInput, seconds::AbstractFloat, video_stream::Integer=1)

Seek through the container format avin so that the next frame returned by the stream indicated by video_stream will have a timestamp greater than or equal to seconds.

source
Base.seekstart (Function)
seekstart(reader::VideoReader)

Seek to time zero of the parent AVInput using reader's stream index. See seekstart for AVInput objects.

source
seekstart(avin::AVInput{T}, video_stream_index=1) where T <: AbstractString

Seek to time zero of AVInput object.

source
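
For example, a sketch that jumps 2.5 seconds into an already-opened reader f and then returns to the start:

seek(f, 2.5)
img = read(f)  # first frame at or after 2.5 s
seekstart(f)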

Frames can be skipped without reading frame content via skipframe(f) and skipframes(f, n)

VideoIO.skipframe (Function)
skipframe(s::VideoReader; throwEOF=true)

Skip the next frame. If the end of the file is reached, an EOFError is thrown if throwEOF=true. Otherwise, returns true if EOF was reached and false otherwise.

source
VideoIO.skipframes (Function)
skipframes(s::VideoReader, n::Int; throwEOF=true) -> n

Skip the next n frames. If the end of the file is reached and throwEOF=true, an EOFError will be thrown. Returns the number of frames that were skipped.

source
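
For example, a sketch that skips the next ten frames of an open reader f and reads the frame after them:

skipframes(f, 10)
img = read(f)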

The total number of available frames can be obtained via counttotalframes(f)

VideoIO.counttotalframes (Function)
counttotalframes(reader) -> n::Int

Count the total number of frames in the video by seeking to start, skipping through each frame, and seeking back to the start.

For a faster alternative that relies on video container metadata, try get_number_frames.

source

Note: H264 videos encoded with crf>0 have been observed to have 4 fewer frames available for reading.

Changing the target pixel format for reading

It can be helpful to be explicit about which pixel format you wish to read frames in. Here a grayscale video is read and parsed into a Vector{Array{UInt8}}:

f = VideoIO.openvideo(filename, target_format=VideoIO.AV_PIX_FMT_GRAY8)
 
 while !eof(f)
     img = reinterpret(UInt8, read(f))
@@ -59,4 +59,4 @@
 julia> VideoIO.DEFAULT_CAMERA_OPTIONS["framerate"] = 30
 
 julia> opencamera()
VideoReader(...)
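
A minimal sketch of grabbing a single frame from the default camera and then releasing it (assumes a camera is available):

cam = opencamera()
img = read(cam)
close(cam)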

Video Properties & Metadata

VideoIO.get_start_time (Function)
get_start_time(file::String) -> DateTime

Return the starting date & time of the video file. Note that if the starting date & time are missing, this function will return the Unix epoch (00:00 1st January 1970).

source
VideoIO.get_time_duration (Function)
get_time_duration(file::String) -> (DateTime, Microsecond)

Return the starting date & time as well as the duration of the video file. Note that if the starting date & time are missing, this function will return the Unix epoch (00:00 1st January 1970).

source
VideoIO.get_duration (Function)
get_duration(file::String) -> Float64

Return the duration of the video file in seconds (float).

source
VideoIO.get_number_frames (Function)
get_number_frames(file [, streamno])

Query the container file for the number of frames in video stream streamno if applicable, returning nothing if the container does not report the number of frames. This does not decode the video to count its frames.

source
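
For example, a metadata sketch, with "video.mp4" as a placeholder file name:

duration = VideoIO.get_duration("video.mp4")      # seconds
nframes  = VideoIO.get_number_frames("video.mp4") # frame count, or nothing if not reported
start    = VideoIO.get_start_time("video.mp4")    # DateTime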

diff --git a/dev/utilities/index.html b/dev/utilities/index.html
index ae2b92ac..b19f50b5 100644
--- a/dev/utilities/index.html
+++ b/dev/utilities/index.html
@@ -3,4 +3,4 @@

Utilities

Test Videos

A small number of test videos are available through VideoIO.TestVideos. These are short videos in a variety of formats with non-restrictive (public domain or Creative Commons) licenses.

VideoIO.TestVideos.testvideo (Function)
testvideo(name, ops...)

Returns an AVInput object for the given video name. The video will be downloaded if it isn't available.

source
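
For example, a sketch that opens a test video and reads its first frame; "annie_oakley" is assumed here to be one of the available test video names:

io = VideoIO.testvideo("annie_oakley")
f = VideoIO.openvideo(io)
img = read(f)
close(f)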

diff --git a/dev/writing/index.html b/dev/writing/index.html
index d530bb6a..77ce174d 100644
--- a/dev/writing/index.html
+++ b/dev/writing/index.html
@@ -5,7 +5,7 @@

Writing Videos

Note: Writing of audio streams is not yet implemented

Single-step Encoding

Videos can be encoded directly from an image stack using VideoIO.save(filename::String, imgstack::Array), where imgstack is an array of image arrays with identical type and size.

The entire image stack can be encoded in a single step:

import VideoIO
 encoder_options = (crf=23, preset="medium")
VideoIO.save("video.mp4", imgstack, framerate=30, encoder_options=encoder_options)
VideoIO.save (Function)
save(filename::String, imgstack; ...)

Create a video container filename and encode the set of frames imgstack into it. imgstack must be an iterable of matrices and each frame must have the same dimensions and element type.

Encoding options, restrictions on frame size and element type, and other details are described in open_video_out. All keyword arguments are passed to open_video_out.

See also: open_video_out, write, close_video_out!

source

Iterative Encoding

Alternatively, videos can be encoded iteratively within custom loops.

using VideoIO
 framestack = map(x->rand(UInt8, 100, 100), 1:100) #vector of 2D arrays
 
 encoder_options = (crf=23, preset="medium")
@@ -33,5 +33,5 @@
 end
VideoIO.open_video_out (Function)
open_video_out(filename, ::Type{T}, sz::NTuple{2, Integer};
               <keyword arguments>) -> writer
open_video_out(filename, first_img::Matrix; ...)
open_video_out(f, ...; ...)

Open file filename and prepare to encode a video stream into it, returning object writer that can be used to encode frames. The size and element type of the video can either be specified by passing the first frame of the movie first_img, which will not be encoded, or instead the element type T and 2-tuple size sz. If the size is explicitly specified, the first element will be the height, and the second width, unless keyword argument scanline_major = true, in which case the order is reversed. Both height and width must be even. The element type T must be one of the supported element types, which is any key of VideoIO.VIO_DEF_ELTYPE_PIX_FMT_LU, or instead the Normed or Unsigned type for a corresponding Gray element type. The container type will be inferred from filename.

Frames are encoded with write, which must use frames with the same size and element type, and must obey the same value of scanline_major. Once all frames are encoded, the video must be closed with close_video_out!.

If called with a function as the first argument, f, then the function will be called with the writer object writer as its only argument. This writer object will be closed once the call is complete, regardless of whether or not an error occurred.

Keyword arguments

  • codec_name::Union{AbstractString, Nothing} = nothing: Name of the codec to use. Must be a name accepted by FFmpeg, and compatible with the chosen container type, or nothing, in which case the codec will be automatically selected by FFmpeg based on the container.
  • framerate::Real = 24: Framerate of the resulting video.
  • scanline_major::Bool = false: If false, then Julia arrays are assumed to have frame height in the first dimension, and frame width in the second. If true, then pixels that are adjacent to each other in the same scanline (i.e. horizontal line of the video) are assumed to be adjacent to each other in memory. scanline_major = true videos must be StridedArrays with unit stride in the first dimension. For normal arrays, this corresponds to a matrix where frame width is in the first dimension, and frame height is in the second.
  • container_options::OptionsT = (;): A NamedTuple or Dict{Symbol, Any} of options for the container. Must correspond to option names and values accepted by FFmpeg.
  • container_private_options::OptionsT = (;): A NamedTuple or Dict{Symbol, Any} of private options for the container. Must correspond to private options names and values accepted by FFmpeg for the chosen container type.
  • encoder_options::OptionsT = (;): A NamedTuple or Dict{Symbol, Any} of options for the encoder context. Must correspond to option names and values accepted by FFmpeg.
  • encoder_private_options::OptionsT = (;): A NamedTuple or Dict{Symbol, Any} of private options for the encoder context. Must correspond to private option names and values accepted by FFmpeg for the chosen codec specified by codec_name.
  • swscale_options::OptionsT = (;): A NamedTuple or Dict{Symbol, Any} of options for the swscale object used to perform color space scaling. Options must correspond with options for FFmpeg's scaler filter.
  • target_pix_fmt::Union{Nothing, Cint} = nothing: The pixel format that will be used to input data into the encoder. This can either be a VideoIO.AV_PIX_FMT_* value corresponding to an FFmpeg AVPixelFormat, and must then be a format supported by the encoder, or instead nothing, in which case it will be chosen automatically by FFmpeg.
  • pix_fmt_loss_flags = 0: Loss flags to control how encoding pixel format is chosen. Only valid if target_pix_fmt = nothing. Flags must correspond to FFmpeg loss flags.
  • input_colorspace_details = nothing: Information about the color space of input Julia arrays. If nothing, then this will correspond to a best-effort interpretation of Colors.jl for the corresponding element type. To override these defaults, create a VideoIO.VioColorspaceDetails object using the appropriate AVCOL_ definitions from FFmpeg, or use VideoIO.VioColorspaceDetails() to use the FFmpeg defaults. If data in the input Julia arrays is already in the mpeg color range, then set this to VideoIO.VioColorspaceDetails() to avoid additional scaling by sws_scale.
  • allow_vio_gray_transform = true: Instead of using sws_scale for gray data, use a more accurate color space transformation implemented in VideoIO if allow_vio_gray_transform = true. Otherwise, use sws_scale.
  • sws_color_options::OptionsT = (;): Additional keyword arguments passed to sws_setColorspaceDetails.
  • thread_count::Union{Nothing, Int} = nothing: The number of threads the codec is allowed to use, or nothing for default codec behavior. Defaults to nothing.

See also: write, close_video_out!

source
Base.write (Method)
write(writer::VideoWriter, img)
write(writer::VideoWriter, img, index)

Prepare frame img for encoding, encode it, mux it, and either cache it or write it to the file described by writer. img must be the same size and element type as the size and element type that was used to create writer. If index is provided, it must start at zero and increment monotonically.

source
VideoIO.close_video_out! (Function)
close_video_out!(writer::VideoWriter)

Write all frames cached in writer to the video container that it describes, and then close the file. Once all frames in a video have been added to writer, then it must be closed with this function to flush any cached frames to the file, and then finally close the file and release resources associated with writer.

source
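
Putting these together, a minimal iterative-encoding sketch, assuming a stack of even-sized UInt8 frames as in the example above:

using VideoIO

framestack = map(x -> rand(UInt8, 100, 100), 1:100)  # vector of 2D arrays
encoder_options = (crf=23, preset="medium")
VideoIO.open_video_out("video.mp4", framestack[1]; framerate=24, encoder_options=encoder_options) do writer
    for frame in framestack
        write(writer, frame)  # encode and mux each frame
    end
end  # writer is flushed and closed automatically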

Supported Colortypes

Encoding of the following image element color types is currently supported:

  • UInt8
  • Gray{N0f8}
  • RGB{N0f8}

Encoder Options

The encoder_options keyword argument allows control over FFmpeg encoding options. The available fields are described in the FFmpeg documentation.

More details about options specific to h264 can be found in FFmpeg's H.264 encoding guide.

Some example values for the encoder_options keyword argument are:

Goal                                                                | encoder_options value
Perceptual compression, h264 default. Best for most cases          | (crf=23, preset="medium")
Lossless compression. Fastest, largest file size                    | (crf=0, preset="ultrafast")
Lossless compression. Slowest, smallest file size                   | (crf=0, preset="veryslow")
Direct control of bitrate and frequency of intra frames (every 10)  | (bit_rate = 400000, gop_size = 10, max_b_frames = 1)

If a hyphenated parameter is needed, it can be added using var"param-name" = value.
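
For example, a syntax sketch only; var"some-option" is a hypothetical placeholder for an option name accepted by your encoder:

encoder_options = (crf=23, preset="medium", var"some-option" = "value")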

Lossless Encoding

Lossless RGB

For truly lossless encoding of RGB{N0f8}, pass codec_name = "libx264rgb" to the function to avoid the lossy RGB->YUV420 conversion, and add crf=0 to encoder_options.
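
A sketch of a lossless RGB save call combining these settings:

VideoIO.save("video.mp4", imgstack, codec_name="libx264rgb", encoder_options=(crf=0, preset="ultrafast"))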

Lossless Grayscale

If lossless encoding of Gray{N0f8} or UInt8 is required, crf=0 should be set, as well as color_range=2 to ensure full 8-bit pixel color representation, i.e. encoder_options = (color_range=2, crf=0, preset="medium").
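
And a corresponding sketch for lossless grayscale:

VideoIO.save("video.mp4", imgstack, encoder_options=(color_range=2, crf=0, preset="medium"))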

Encoding Performance

See util/lossless_video_encoding_testing.jl for testing of losslessness, speed, and compression as a function of h264 encoding preset, for 3 example videos.
