Capture frames from the canvas #754
Comments
It seems to change with the canvas used; for example, the wgpu offscreen canvas will return a memory view of the current canvas. For video, my idea was to essentially use the snapshot method with precomputed timesteps, which would easily allow users to pick a start, duration and framerate. It would also not be limited to real time. Encoding would then be handled externally, likely with ffmpeg. I only looked at pygfx for reference, so there might be something more useful I am not aware of.
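The precomputed-timesteps idea above could be sketched as a small helper. This is only an illustration; `frame_times` is a hypothetical name, not part of pygfx:

```python
def frame_times(start, duration, fps):
    """Hypothetical helper: evenly spaced capture times for a video segment.

    Returns the timestamps (in seconds) at which frames would be captured,
    given a start time, total duration, and target framerate.
    """
    n = int(duration * fps)  # total number of frames in the segment
    return [start + i / fps for i in range(n)]

# e.g. one second of video at 30 fps -> 30 timestamps spaced 1/30 s apart
times = frame_times(0.0, 1.0, 30)
```

A caller could then step the scene to each timestamp and snapshot, independent of wall-clock time.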
Yup, the Jupyter canvas method you mentioned is what we have implemented in fastplotlib. I'll do some digging to figure out how to do this with Qt and glfw.
It is easy to obtain a real-time screenshot of the scene by reading the `ColorTexture` of the `RenderTarget`; off-screen rendering is not necessarily required.
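However a snapshot is obtained, the raw pixel buffer still needs to be interpreted as an image. A minimal sketch with NumPy, assuming the buffer holds tightly packed 8-bit RGBA pixels (`snapshot_to_array` is a made-up name, not pygfx API):

```python
import numpy as np

def snapshot_to_array(buf, width, height):
    """Hypothetical: wrap a raw RGBA byte buffer as an (H, W, 4) uint8 array.

    `buf` can be bytes or a memoryview; the layout is assumed to be
    row-major RGBA with no padding between rows.
    """
    arr = np.frombuffer(buf, dtype=np.uint8)
    return arr.reshape(height, width, 4)
```

Real render targets may have row padding or a different pixel format, so the reshape would need adjusting in practice.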
I noticed that the
Partially! What's the best way to capture frames to make a video? Right now we basically run it in the main animation loop, i.e. the function we set with `canvas.request_draw`:

```python
# multiprocessing queue
q = Queue()

def animation():
    # some timer used to capture frames at intervals, so it doesn't run on
    # every animation call (which would block)
    if time_elapsed > (1 / 30):
        frame = renderer.snapshot()
        q.put(frame)

canvas.request_draw(animation)
```
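The `time_elapsed` check above is left undefined; one way to make it concrete is a tiny rate limiter. A minimal sketch, assuming nothing from pygfx (`IntervalTimer` is a made-up name):

```python
import time

class IntervalTimer:
    """Hypothetical helper: `ready()` returns True at most once per interval."""

    def __init__(self, interval):
        self.interval = interval
        self._last = float("-inf")  # so the first call always fires

    def ready(self):
        now = time.monotonic()
        if now - self._last >= self.interval:
            self._last = now
            return True
        return False
```

Inside the animation callback this becomes `if timer.ready(): q.put(renderer.snapshot())`, which keeps most draw calls cheap.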
I don't have much practical experience with this. 😅 However, I think grabbing video frames in the rendering loop may not guarantee an absolutely fixed interval, since it depends on the rendering time of each frame. If you need video frames at an absolutely fixed frame rate, you may need multi-threading (another thread fetching from the WgpuRenderer object at a fixed frame rate), but the internal objects of pygfx may not be thread-safe. An alternative approach is therefore to cache the latest rendered frame in the main rendering loop, and have another thread read this cached frame at a fixed frame rate to generate the sequence of video frames. Maybe something like this:

```python
# multiprocessing queue
q = Queue()
latest_frame = None

def animation():
    global latest_frame  # rebind the module-level cache
    renderer.render(...)
    latest_frame = renderer.snapshot()
    canvas.request_draw()

def capture():
    while True:
        q.put(latest_frame)
        time.sleep(1 / 30)

t = threading.Thread(target=capture)
canvas.request_draw(animation)
t.start()
run()
```
I think it makes sense to have more sophisticated snapshot functionality. I added a note in #492, because it relates to viewports too. We can leave this issue open to explicitly track this feature.
I'm wondering what's the best way to basically create a video of the canvas. The offscreen canvas renders to a texture, and we could save that texture as video frames. But what if we're not using the offscreen canvas?