Vsov plugin how to use? #47

Open
Usulyre opened this issue May 2, 2023 · 14 comments

@Usulyre

Usulyre commented May 2, 2023

Are there any examples of how to use the vsov VapourSynth plugin?

Are there any examples pertaining to motion interpolation?

Thank you in advance.

@WolframRhodium
Contributor

Hello.

If you would like to use RIFE for motion interpolation, a snippet could be:

import vapoursynth as vs
from vapoursynth import core
import vsmlrt
from vsmlrt import RIFEModel, Backend

backend = Backend.OV_CPU() # or `Backend.OV_GPU(fp16=True)` on Intel GPUs

src = core.lsmas.LWLibavSource("source.mp4")
ph = (src.height + 31) // 32 * 32 - src.height
pw = (src.width  + 31) // 32 * 32 - src.width
padded = src.std.AddBorders(right=pw, bottom=ph).resize.Bicubic(format=vs.RGBH, matrix_in_s="709")
flt = vsmlrt.RIFE(padded, model=RIFEModel.v4_4, backend=backend)
res = flt.resize.Bicubic(format=vs.YUV420P8, matrix_s="709").std.Crop(right=pw, bottom=ph)
res.set_output()

In general, you can use the vsov plugin through the vsmlrt.inference() wrapper from vsmlrt.py with the Backend.OV_CPU or Backend.OV_GPU backends, or use the plugin directly by providing the video source clip and the AI model in ONNX format to the core.ov.Model() interface.
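For the wrapper route, a minimal sketch could look like the following (the ONNX path is a placeholder, and the exact keyword names accepted by vsmlrt.inference() may differ between vsmlrt.py versions, so check your copy of the script):

import vapoursynth as vs
from vapoursynth import core
import vsmlrt
from vsmlrt import Backend

src = core.lsmas.LWLibavSource("source.mp4")
# most ONNX models expect an fp32 RGB clip as input
rgb = src.resize.Bicubic(format=vs.RGBS, matrix_in_s="709")
# run an arbitrary ONNX model through the OpenVINO CPU backend
flt = vsmlrt.inference(rgb, "path_to_model.onnx", backend=Backend.OV_CPU())
res = flt.resize.Bicubic(format=vs.YUV420P8, matrix_s="709")
res.set_output()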

@Usulyre
Author

Usulyre commented May 2, 2023


Thanks for your reply.

I want to try it using this:

https://github.com/CrendKing/avisynth_filter

Specifically the vapoursynth filter.

Can you give me an example of that last part:

" or the plugin directly by providing the video source clip and the ai model in the onnx format to the core.ov.Model() interface."

Thanks again.

@WolframRhodium
Contributor

WolframRhodium commented May 2, 2023

flt = core.ov.Model(padded, "path_to_onnx")
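Put together with clip preparation, a minimal sketch of the direct call could be (the plugin expects an fp32 clip, i.e. vs.RGBS, and the ONNX path is a placeholder):

import vapoursynth as vs
from vapoursynth import core

src = core.lsmas.LWLibavSource("source.mp4")
# vsov expects fp32 input, so convert to RGBS first
rgb = src.resize.Bicubic(format=vs.RGBS, matrix_in_s="709")
flt = core.ov.Model(rgb, "path_to_onnx")
res = flt.resize.Bicubic(format=vs.YUV420P8, matrix_s="709")
res.set_output()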

I will not answer questions related to other repositories.

@hooke007
Contributor

hooke007 commented May 6, 2023

Tested with https://github.com/AmusementClub/vs-mlrt/releases/tag/v13.1
But it seems there is no full fp16 support in the OV backend?

Python exception: operator (): expects clip with type fp32

Traceback (most recent call last):
  File "src\cython\vapoursynth.pyx", line 3115, in vapoursynth._vpy_evaluate
  File "src\cython\vapoursynth.pyx", line 3116, in vapoursynth._vpy_evaluate
  File "test.py", line 38, in <module>
    flt = vsmlrt.RIFE(padded, model=RIFEModel.v4_6, backend=backend)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\TOOLS\mpv-lazy\vsmlrt.py", line 1024, in RIFE
    output0 = RIFEMerge(
              ^^^^^^^^^^
  File "C:\TOOLS\mpv-lazy\vsmlrt.py", line 898, in RIFEMerge
    return inference_with_fallback(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\TOOLS\mpv-lazy\vsmlrt.py", line 1539, in inference_with_fallback
    raise e
  File "C:\TOOLS\mpv-lazy\vsmlrt.py", line 1518, in inference_with_fallback
    return _inference(
           ^^^^^^^^^^^
  File "C:\TOOLS\mpv-lazy\vsmlrt.py", line 1436, in _inference
    clip = core.ov.Model(
           ^^^^^^^^^^^^^^
  File "src\cython\vapoursynth.pyx", line 2847, in vapoursynth.Function.__call__
vapoursynth.Error: operator (): expects clip with type fp32

@WolframRhodium
Contributor

WolframRhodium commented May 6, 2023

An fp16 input clip is only supported with TRT; with the OV backend, keep the clip in fp32 (RGBS) and use OV_GPU(fp16=True) instead.

Please never report problems under unrelated issues.

@hooke007
Contributor

hooke007 commented May 6, 2023

I think you misunderstood something...
I was testing the script you wrote in #47 (comment): "or Backend.OV_GPU(fp16=True)"

> use OV_GPU(fp16=True) instead.

Same error.

> resize.Bicubic(format=vs.RGBH, matrix_in_s="709")

So this is a typo, right?

edit: I see it.

@WolframRhodium
Contributor

import vapoursynth as vs
from vapoursynth import core
import vsmlrt
from vsmlrt import RIFEModel, Backend

backend = Backend.OV_CPU() # or `Backend.OV_GPU(fp16=True)` on Intel GPUs

src = core.lsmas.LWLibavSource("source.mp4")
ph = (src.height + 31) // 32 * 32 - src.height
pw = (src.width  + 31) // 32 * 32 - src.width
padded = src.std.AddBorders(right=pw, bottom=ph).resize.Bicubic(format=vs.RGBS, matrix_in_s="709")
flt = vsmlrt.RIFE(padded, model=RIFEModel.v4_4, backend=backend)
res = flt.resize.Bicubic(format=vs.YUV420P8, matrix_s="709").std.Crop(right=pw, bottom=ph)
res.set_output()

@Usulyre
Author

Usulyre commented May 12, 2023

> flt = core.ov.Model(padded, "path_to_onnx")
>
> I will not answer questions related to other repositories.

Ok.

My sample script:

import vapoursynth as vs
from vapoursynth import core

import vsmlrt
from vsmlrt import RIFEModel, Backend
core = vs.core
ret = VpsFilterSource
backend = vsmlrt.Backend.TRT(fp16=True)

ret = core.resize.Bicubic(clip=ret, format=vs.RGBS, matrix_in_s="470bg", range_s="full")
ret = vsmlrt.RIFE(clip=ret, multi=5, model=RIFEModel.v4_6, backend=backend)

output_ret = core.resize.Bicubic(clip=ret,format=vs.YUV420P8, matrix_s="709")

output_ret.set_output()

@Usulyre
Author

Usulyre commented May 12, 2023

I decided to use this because I have a GTX 1660.

My next question is how do I use a different model like "rife_v4.6_ensemble.onnx"?

What do I put as the value in the script to specify that external model? "model="

@hooke007
Contributor

When you import a function from a module, you should know how to check all of its available flags.
For vsmlrt.RIFE, you can find all the flags here in the script:

vs-mlrt/scripts/vsmlrt.py

Lines 961 to 971 in cf2bfbf

def RIFE(
    clip: vs.VideoNode,
    multi: int = 2,
    scale: float = 1.0,
    tiles: typing.Optional[typing.Union[int, typing.Tuple[int, int]]] = None,
    tilesize: typing.Optional[typing.Union[int, typing.Tuple[int, int]]] = None,
    overlap: typing.Optional[typing.Union[int, typing.Tuple[int, int]]] = None,
    model: typing.Literal[40, 42, 43, 44, 45, 46] = 44,
    backend: backendT = Backend.OV_CPU(),
    ensemble: bool = False,
    _implementation: typing.Optional[typing.Literal[1, 2]] = None

To use rife_v4.6, just use model=46 instead.
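For example (RIFEModel is an integer enum in vsmlrt.py, so model=46 and model=RIFEModel.v4_6 should be interchangeable; this reuses the padded clip and backend from the snippet earlier in this thread):

flt = vsmlrt.RIFE(padded, multi=2, model=46, backend=backend)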

@WolframRhodium
Contributor

> I decided to use this because I have a GTX 1660.
>
> My next question is how do I use a different model like "rife_v4.6_ensemble.onnx"?
>
> What do I put as the value in the script to specify that external model? "model="

ret = vsmlrt.RIFE(clip=ret, multi=5, model=RIFEModel.v4_6, backend=backend, ensemble=True)

@Usulyre
Author

Usulyre commented May 12, 2023

> I decided to use this because I have a GTX 1660.
> My next question is how do I use a different model like "rife_v4.6_ensemble.onnx"?
> What do I put as the value in the script to specify that external model? "model="
>
> ret = vsmlrt.RIFE(clip=ret, multi=5, model=RIFEModel.v4_6, backend=backend, ensemble=True)

Ok.

How do I then specify the rife_v2 one in my script?

@hooke007
Contributor

_implementation=2
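That is, roughly (reusing the variables from the sample script above; _implementation=2 selects the rife v2 representation asked about, per the RIFE signature quoted earlier):

ret = vsmlrt.RIFE(clip=ret, multi=5, model=RIFEModel.v4_6, backend=backend, ensemble=True, _implementation=2)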

@Shinobuos

Please put up a proper guide on how to install or compile these on Linux, for the love of whatever deity is out there.
