removes decord #33987
Conversation
Commits:
- optimize np
- Revert "optimize" (reverts commit faa136b)
- helpers as documentation
- pydoc
- missing keys
Very welcome for this! Thanks for the cleanup 🤗
Could you confirm that the `test_pipeline_video_classification` tests work well locally?
Regarding the failing test: `VideoClassificationPipeline requires the PyAv library but it was not found in your environment. You can install it with: pip install av`

The workflow for pipelines needs a small fix!
Fixed the requirement: I wrongly confused "vision" with "video" and thought I didn't need the `av` dependency. How can I troubleshoot the other failing tests? Are they flaky? I don't see how they are related to this PR.
Thanks, yep the other tests are unrelated!
But the tests you edited are skipped by the CI because they need `av` (which was not installed).
Small nit, LGTM otherwise!
```python
if i > end_index:
    break
if i >= start_index and i in indices:
    frames.append(frame)
```
```diff
-frames.append(frame)
+frames.append(frame.to_rgb())
```
Any reason we don't use this, or directly convert before stacking?
Good question: we apply the conversion at the end, when stacking the frames into the numpy array:

```python
return np.stack([x.to_ndarray(format="rgb24") for x in frames])
```

It's more efficient than converting each frame to RGB first:
https://github.com/PyAV-Org/PyAV/blob/main/av/video/frame.pyx#L252
https://github.com/PyAV-Org/PyAV/blob/main/av/audio/frame.pyx#L168
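For reference, the frame-selection loop discussed above can be exercised without PyAV by substituting a plain iterable for the decoded stream (a minimal sketch; `select_frames` and the integer stand-ins for `av.VideoFrame` objects are hypothetical, for illustration only):

```python
# Sketch of the frame-selection logic, using plain ints in place of
# av.VideoFrame objects (the real pipeline iterates over
# container.decode(video=0)).
def select_frames(stream, start_index, end_index, indices):
    frames = []
    for i, frame in enumerate(stream):
        if i > end_index:
            break  # stop decoding once past the sampled window
        if i >= start_index and i in indices:
            frames.append(frame)
    return frames

# Usage: keep frames 2 and 5 from a 10-frame stream.
picked = select_frames(range(10), start_index=2, end_index=6, indices={2, 5})
```

Breaking out of the loop at `end_index` is what avoids decoding the tail of the video, which is the point of keeping the check before the membership test.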
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Okay merging then! 🤗
* removes decord dependency
* optimize np
* Revert "optimize" (reverts commit faa136b)
* helpers as documentation
* pydoc
* missing keys
* make fixup
* require_av

Co-authored-by: ad <[email protected]>
The only usage of decord that was left is removed in this PR.

The helper functions `read_video_pyav` and `sample_frame_indices` are used throughout the documentation (https://github.com/search?q=repo%3Ahuggingface%2Ftransformers%20read_video_pyav&type=code) and in the `video_classification` pipeline:
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/video_classification.py#L132
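For context, the `sample_frame_indices` helper referenced above is a pure-numpy sampler along these lines (a sketch of the documented snippet from memory, not the verbatim source; parameter names are assumptions):

```python
import numpy as np

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    """Sample `clip_len` frame indices from a video of `seg_len` frames,
    spaced roughly every `frame_sample_rate` frames."""
    converted_len = int(clip_len * frame_sample_rate)
    # Pick a random window of `converted_len` frames inside the video.
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    # Evenly space `clip_len` indices across that window.
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices

# Usage: pick 8 indices from a 100-frame video, every ~4th frame.
indices = sample_frame_indices(clip_len=8, frame_sample_rate=4, seg_len=100)
```

Because it only computes indices, it is independent of the decoding backend, which is why it survived the decord removal unchanged.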