Merge pull request #8 from Varde-s-Forks/fix-docs
fix docs
myrsloik authored Oct 4, 2021
2 parents 8ae5ba2 + eab5361 commit e034d95
Showing 1 changed file with 15 additions and 19 deletions.
34 changes: 15 additions & 19 deletions docs/subtext.rst
@@ -11,10 +11,9 @@ Subtext is a subtitle renderer that uses libass and ffmpeg.

TextFile has two modes of operation. With blend=True (the default),
it returns *clip* with the subtitles burned in. With blend=False, it
- returns a list of two clips. The first one is an RGB24 clip
- containing the rendered subtitles. The second one is a Gray8 clip
- containing a mask, to be used for blending the rendered subtitles
- into other clips.
+ returns an RGB24 clip containing the rendered subtitles, with a Gray8
+ frame attached to each frame in the ``_Alpha`` frame property. These
+ Gray8 frames can be extracted using std.PropToClip.

Parameters:
clip
@@ -98,12 +97,6 @@ Subtext is a subtitle renderer that uses libass and ffmpeg.

ImageFile renders image-based subtitles such as VOBSUB and PGS.

- ImageFile has two modes of operation. With blend=True (the default),
- it returns *clip* with the subtitles burned in. With blend=False, it
- returns an RGB24 clip containing the rendered subtitles, with a Gray8
- frame attached to each frame in the ``_Alpha`` frame property. These
- Gray8 frames can be extracted using std.PropToClip.
-
Parameters:
*clip*
If *blend* is True, the subtitles will be burned into this
@@ -164,18 +157,21 @@ Subtext is a subtitle renderer that uses libass and ffmpeg.

Example with manual blending::

- subs = core.sub.TextFile(clip=YUV420P10_video, file="asdf.ass", blend=False)
+ sub = core.sub.TextFile(clip=YUV420P10_video, file="asdf.ass", blend=False)
+ mask = core.std.PropToClip(clip=sub, prop='_Alpha')

- gray10 = core.query_video_format(subs[1].format.color_family,
-                                  YUV420P10_video.format.sample_type,
-                                  YUV420P10_video.format.bits_per_sample,
-                                  subs[1].format.subsampling_w,
-                                  subs[1].format.subsampling_h)
+ gray10 = core.query_video_format(
+     mask.format.color_family,
+     YUV420P10_video.format.sample_type,
+     YUV420P10_video.format.bits_per_sample,
+     mask.format.subsampling_w,
+     mask.format.subsampling_h
+ )

- subs[0] = core.resize.Bicubic(clip=subs[0], format=YUV420P10_video.format.id, matrix_s="470bg")
- subs[1] = core.resize.Bicubic(clip=subs[1], format=gray10.id)
+ sub = core.resize.Bicubic(clip=sub, format=YUV420P10_video.format.id, matrix_s="470bg")
+ mask = core.resize.Bicubic(clip=mask, format=gray10.id)

- hardsubbed_video = core.std.MaskedMerge(clipa=YUV420P10_video, clipb=subs[0], mask=subs[1])
+ hardsubbed_video = core.std.MaskedMerge(clipa=YUV420P10_video, clipb=sub, mask=mask)
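(Editor's note: the ``std.MaskedMerge`` call above combines the video with the rendered subtitles, using the alpha clip as a per-pixel weight. A minimal plain-Python sketch of that blend math at 10-bit depth; this is an illustration only, not the plugin's actual implementation, and ``masked_merge`` and the sample pixel lists are made up for this example.)

```python
def masked_merge(clipa, clipb, mask, peak=1023):
    """Blend two lists of 10-bit pixel values.

    mask == 0 keeps clipa's pixel; mask == peak takes clipb's pixel;
    values in between interpolate linearly, like an alpha composite.
    """
    return [a + (b - a) * m // peak for a, b, m in zip(clipa, clipb, mask)]


video = [512, 512, 512, 512]      # background (gray) pixels
subs = [1023, 1023, 1023, 1023]   # rendered subtitle (white) pixels
alpha = [0, 341, 682, 1023]       # _Alpha mask: transparent -> opaque

blended = masked_merge(video, subs, alpha)
# → [512, 682, 852, 1023]
```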

Example with automatic blending (will use BT709 matrix)::

