Questions about Profile settings (for documentation) #3427
Interlacing
Yes, correct. When a format is not progressive, there's a second parameter needed to fully describe how frame images are stored: a second true/false value. In terms of input metadata, libopenshot always shows a value for it.

Sample aspect / pixel aspect
Any
Not always; it depends on the profile. 1/1 is most common in formats intended to be shown on a digital display like a computer monitor, though. Typically those screens, and image data encoded for them, will have the same density/resolution/DPI in both dimensions. Each pixel horizontally or vertically in the data maps directly to a single pixel location in the output image. But broadcast formats, in particular analog ones, may be capable of very different resolution horizontally vs. vertically. The data is encoded in "grids" of pixel values with dimensions that don't match the actual shape of how the frames should be drawn, and that's where the pixel ratio comes into play. Using a non-uniform (

Other times, pixel aspect is important because it's used to encode as much resolution as possible into a format with a different aspect ratio. Anamorphic DVD is the classic example. DVD frames are formatted for 4:3 (non-widescreen) TVs and have a resolution of ~

All of this remains very confusing no matter how many times you've encountered it, in my experience. I assume at some point it must become second nature and obvious, but I have yet to reach that point and don't expect to. Unless a person is dealing with this stuff professionally on a daily basis, I figure their chances aren't good. Typically the ratio is stored as a reduced fraction.

The MLT profiles
It's probably best not to link to the MLT documentation, or make any reference to it at all, in the OpenShot documentation. In fact, if there are any existing references/links, we should think about removing them to prevent any confusion. Even though the libopenshot profile data format was originally based on MLT's (a piece of trivia with no practical relevance), the implementation isn't. Because there are major differences in how the file is interpreted, the MLT profile docs don't accurately describe libopenshot profiles even when they have identical data.

Handling of sample/pixel aspect values
It's necessary for output videos created with libopenshot to be encoded into a format like the one in #3117, yes. Or it would be, if that were working currently. But not only do we not correctly handle input media with a non-uniform pixel ratio (as #3117 documents), I later discovered (OpenShot/libopenshot#489) that we don't appear to be handling those cases properly in output media either. Unless I misinterpreted the code, libopenshot's current FFmpegWriter implementation doesn't seem to be applying the profile's

Profile colorspace
As @SuslikV said, not only is 2020 not supported, but whatever the

Profiles with a sample aspect that isn't
Or it could be that it's not actually very confusing, that it should be old hat for me by now, and that the reason I continue to find it confusing is that I'm a dummy. I have to at least acknowledge that possibility as well. (Tangentially / for background...)
That last part is critical, and gets to the "whys" of pixel aspect. For most encoded media formats, the amount of information they can encode will be limited by some sort of bandwidth capacity — whether it's the maximum transfer rate of a cable, the read speed of a physical media device like a DVD player or a hard drive interface, etc., at some point you reach the maximum available bandwidth and can't get any more data any faster than that. Whatever and wherever that limit is, in the context of the encoded video it can typically be expressed as a pixels-per-second value.

The pipeline your data stream travels down doesn't care what size or shape you define for your two-dimensional video frame. All that matters is, in a given second you're only able to receive a certain number of pixels. Divide that by your frame rate, and you've got the total pixel count available to represent each frame.

The pixel count may be a fixed number, but there are a multitude of different ways those pixels can be allocated in the frame. You're free to chop them up into rows and columns of any size and shape, just as long as multiplying the dimensions results in the same total pixel count. So, when you want to encode 16:9 aspect frames and 4:3 aspect frames at the same bandwidth / data rate, one option (the one used by the anamorphic DVD format) is to set a fixed number of rows and columns regardless of the aspect ratio, and then use a different pixel aspect ratio to stretch that data onto a frame with the correct shape.
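To make that arithmetic concrete, here's a small sketch of the bandwidth-to-frame budget and the pixel-aspect stretch described above. The bandwidth figure and the helper names are illustrative only (not OpenShot API calls); the 720x480 grid and the 8/9 and 32/27 sample aspects are the commonly cited DVD values.

```python
from fractions import Fraction

def pixels_per_frame(pixels_per_second, fps):
    """Fixed bandwidth budget: total pixels available for each frame."""
    return pixels_per_second // fps

def display_aspect(width, height, sample_aspect):
    """Shape the stored grid actually draws at, after the pixel stretch."""
    return Fraction(width, height) * sample_aspect

# Hypothetical stream that can carry 10,368,000 pixels per second at 30 fps:
budget = pixels_per_frame(10_368_000, 30)   # 345,600 pixels per frame

# Two different ways to slice the exact same per-frame budget:
assert 720 * 480 == budget   # anamorphic-style fixed grid
assert 768 * 450 == budget   # a different (hypothetical) slicing

# A non-square sample aspect stretches the stored pixels to the display shape:
# the same 720x480 grid fills a 4:3 frame with an 8/9 pixel aspect...
assert display_aspect(720, 480, Fraction(8, 9)) == Fraction(4, 3)
# ...or a 16:9 frame with a 32/27 pixel aspect.
assert display_aspect(720, 480, Fraction(32, 27)) == Fraction(16, 9)
```

The point being: the grid dimensions and the pixel aspect together determine the displayed shape; neither alone does.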
While this was very educational for me, are you sure OpenShot supports those lines? If not, it should not be mentioned in the manual. (It may fit as trivia for the FAQ.) Perhaps more importantly, does it matter for the output if your source video is interlaced or not? Speaking of source video, can OpenShot work with source videos if they have a different frame rate? If so, what profile should the user choose?

I have replaced both images with new PNGs, and I think that is all I can do. The rest is up to someone with more technical experience. If you want video export settings covered too, I can take screenshots and set up a paragraph. But again, I lack knowledge of how to actually use it.
Sorry, I shouldn't have said "is needed". I'm 99% sure the profile format supports the parameter (I will try to remember to make that 100% when I'm back at my computer), so it's probably worth documenting, even though I'm not sure any such interlaced *formats* actually exist; it may be a useless parameter in practice.
For the output, not at all. The reader will deinterlace all interlaced input videos — everything's treated as progressive internally. And when writing output video using an interlaced profile, the entire stream will be interlaced regardless of what format the source video(s) had.
Absolutely, both different from the output frame rate AND potentially different from each other.
Whatever they want to export with. IOW, we often get requests to have OpenShot preserve the parameters of "the source video", because tools like HandBrake do that automatically. But HandBrake is a transcoder; its only job is to convert one input video into one output video, so it makes sense to match the input as a starting point. Anyone who's using OpenShot with just a single input video and no other media is probably using it wrong. The point of an editor like OpenShot is to create videos that combine multiple media files — there's almost never "the" input file, there are multiple input files in different formats. So we don't make assumptions about output format.

The best results will usually come from picking a profile that's the same as or lesser than the input files. Meaning, creating a video from higher-quality sources is better than trying to create a high-quality video from lower-quality sources. But with frame rate, specifically, either matching the input or using a multiple/divisor of the input is best. If you have a 60fps source video, pick either 60, 30, or 15 fps for the output video. If you pick 50, then instead of every other frame getting skipped (as at 30fps), you'll have an output video that drops every 6th frame, and that's going to look a bit jerky.

If you have a 29.97fps source video and a 60fps source video in the same project, you have to pick your poison and deal with the consequences of combining those two. (30fps may be a good choice, as you probably won't notice the occasional doubled frame on the lower-rate video. But it's really up to the user to see what works for them.) These are the kinds of topics where there are no easy, clear-cut rules, because it's very dependent on the user's situation and their goals.

In terms of documenting OpenShot, the project profile should be set to whatever target format they intend to export their video in, independent of the properties of the input file(s)... though choosing a format similar to a "primary" video, if any, is usually a good idea.
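The skipped-frame arithmetic above can be sketched with a naive nearest-frame mapping (illustrative only, not OpenShot's actual resampling code; the helper names are hypothetical):

```python
def source_frames(src_fps, dst_fps, dst_count):
    """Which source frame a naive retime would show for each output frame."""
    return [int(i * src_fps / dst_fps) for i in range(dst_count)]

def dropped(src_fps, dst_fps):
    """Source frames skipped in one second of output (integer fps only)."""
    used = set(source_frames(src_fps, dst_fps, dst_fps))
    return [f for f in range(src_fps) if f not in used]

# 60 fps -> 30 fps: every other frame is used, a perfectly even cadence.
assert source_frames(60, 30, 5) == [0, 2, 4, 6, 8]

# 60 fps -> 50 fps: ten source frames per second are skipped (every 6th
# one), the uneven cadence that looks jerky.
assert dropped(60, 50) == [5, 11, 17, 23, 29, 35, 41, 47, 53, 59]
```

Matching rates, or using a clean multiple/divisor, is exactly what keeps that drop pattern regular.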
I'll take a look, thanks!
I should also mention, regarding the interlace info above: I believe that at least some of that is quite broken, with no real timeframe on when or if it might be fixed, because there really isn't much demand for interlaced video anymore. And the people who do want it are mostly users in professional/broadcast settings who aren't really OpenShot's target audience. However, the support is ostensibly there, and present in the UI, and I suspect any attempts to remove it entirely would be met with resistance. So maybe it's better to just studiously talk around it (rather than about it) in the manual. *shrug*
Actually, I think there are 3 reasons for that:

1. By cutting up and/or annotating a holiday video, the footage will probably be from the same camera with the same frame rate.
2. As you explain underneath, it does matter, unless you interpolate and render the missing frames. (For going from 50 to 30 frames, you need to calculate 2 missing frames for each pair to get 150, of which you then take every 5th frame.)
3. I am pretty sure I have seen it advised as important to set the profile to the footage frame rate, either here on GitHub or on Reddit, as a response to people whose videos came out too short because they had fewer frames per minute. (That advice may have been wrong, but it's how I learned about the existence of profiles in the first place.)

I think that is good advice to give new users. (Experienced users will know how to adjust things to their liking.)
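The 50-to-30 interpolation arithmetic above can be sanity-checked with a quick counting sketch (hypothetical helper name, nothing from OpenShot):

```python
from math import lcm

def interpolation_plan(src_fps, dst_fps):
    """Counting sketch of rate conversion via a common intermediate rate."""
    common = lcm(src_fps, dst_fps)      # smallest rate both fps values divide into
    inserted = common // src_fps - 1    # frames to synthesize between each source pair
    keep_every = common // dst_fps      # then keep every Nth intermediate frame
    return common, inserted, keep_every

# 50 fps -> 30 fps: synthesize 2 frames between each source pair to reach
# 150 fps, then keep every 5th frame to land on 30 fps.
assert interpolation_plan(50, 30) == (150, 2, 5)
```

When the target is a clean divisor (say 60 to 30), nothing needs to be synthesized at all, which is why multiples/divisors are the painless cases.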
As for interlaced, I came across this issue from last year where you practically stated that interlaced did not work, may never have worked, and is unlikely to ever work again: @ferdnyc wrote

PS: As for choosing the same profile as your export (and possibly your import video?), that is where I read that advice.
Pretty much. Because, honestly, I still haven't encountered more than one or two users since I posted that who've even noticed it doesn't work. And I don't think one of those two really cared that it didn't.
Yes, well, the key point there is what I've been saying: the project profile needs to be set to the one you're planning to use when exporting the project. That's by far the most important thing. The users who encounter serious difficulties with their projects (aside from the ones who genuinely are attempting complex things somewhat beyond OpenShot's capabilities) are usually the ones who do all of their work with the project profile set to the default, often giving no thought to the final product at all (maybe not even knowing that there is a project profile). Then they want to set the export profile to whatever they choose and have it "just work", which it never will.

In part that's because most OpenShot users aren't working with uncompressed, high-bandwidth video streams. With video in those formats, like you get off pro equipment, you can edit, adjust, and rescale everything to your heart's content, and it's not a problem. But those are the formats where file size is measured in gigabytes per minute — that flexibility has a cost. Most OpenShot users are working with video encoded in a highly compressed, very efficient format (like H.264 or H.265), which doesn't lend itself to on-the-fly modification at all. And when working in those formats (especially if the video was captured and encoded on the fly, meaning it might not be properly indexed the way a non-realtime encoder would index it), a little planning ahead can greatly improve results.
Thank you so much for submitting an issue to help improve OpenShot Video Editor. We are sorry about this, but this particular issue has gone unnoticed for quite some time. To help keep the OpenShot GitHub Issue Tracker organized and focused, we must ensure that every issue is correctly labelled and triaged, to get the proper attention. This issue will be closed, as it meets the following criteria:
We'd like to ask you to help us out and determine whether this issue should be reopened.
Thanks again for your help!
I am working on the documentation for Profiles, but there are a few things I do not understand myself. Could someone clear them up before I give wrong info?
https://github.com/MBB232/openshot-qt/blob/develop/doc/profiles.rst
A: progressive=0
0 = no = interlaced
1 = yes = progressive
Correct?
B: sample_aspect_num=1 & sample_aspect_den=1
This too seems to be a fraction, but in OpenShot both are set to 1, whereas the MLT framework uses different, higher values for them. https://www.mltframework.org/docs/profiles/
Are there cases where these are not the same that need to be covered? Or is it just trivia that should be linked to? Is there an important difference between sample and display aspect ratio?
Is it for cases like this? #3117
C: colorspace: I have added the line for colorspace, as it was not included yet.
I found that it is about the YUV colorspace.
The current profiles only use 601 and 709. Does OpenShot also support the new 2020 standard used for UHD video?
D: If a source video uses different sizes for "Video Resolution" and "Buffer dimensions", which resolution should be recommended for the profile?
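For reference, the fields asked about in A, B, and C all live together in one profile file. A sketch of what such a file looks like follows; the key names match the ones discussed above, but the specific values here are illustrative (loosely modeled on a 720p profile), not copied from any shipped OpenShot profile:

```
description=HD 720p 30 fps
frame_rate_num=30
frame_rate_den=1
width=1280
height=720
progressive=1
sample_aspect_num=1
sample_aspect_den=1
display_aspect_num=16
display_aspect_den=9
colorspace=709
```

Note how sample aspect (per-pixel shape) and display aspect (whole-frame shape) are stored as separate reduced fractions; with square pixels, width/height times sample aspect equals the display aspect.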