Add CLIP resources to the transformers documentation #20190
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Thanks for adding these! I just added a comment on removing non-CLIP specific resources and also Spaces.
Co-authored-by: Steven Liu <[email protected]>
Awesome, thank you so much! Pinging @sgugger for a final look :)
Thanks for your contribution!
<PipelineTag pipeline="image-to-text"/>
- A notebook showing [Video to text matching with CLIP for videos](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/X-CLIP/Video_text_matching_with_X_CLIP.ipynb).

<PipelineTag pipeline="zero-shot-classification"/>
- A notebook showing [Zero shot video classification using CLIP for video](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/X-CLIP/Zero_shot_classify_a_YouTube_video_with_X_CLIP.ipynb).
These are resources for X-CLIP, not CLIP.
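For context on the distinction: CLIP itself scores still images against text, while X-CLIP extends it to video, which is why the two notebooks above belong in the X-CLIP docs. A minimal sketch of CLIP-proper zero-shot image classification with transformers (the checkpoint, image URL, and candidate labels here are just illustrative):

```python
# Minimal sketch: zero-shot image classification with CLIP.
# Checkpoint, image URL, and labels are illustrative choices.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# logits_per_image holds image-text similarity scores; softmax turns
# them into probabilities over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```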
- A blog post on [How to use CLIP to retrieve images from text](https://huggingface.co/blog/fine-tune-clip-rsicd).
- A blog post on [How to use CLIP for Japanese text to image generation](https://huggingface.co/blog/japanese-stable-diffusion).
The first sentence should be something along the lines of "How to fine-tune CLIP on image-text pairs" rather than "How to retrieve...". The second one is not relevant for CLIP, as it is a blog post about Stable Diffusion.
Folks, please make sure the resources you add are about the particular model. A sketch of what fine-tuning on image-text pairs looks like follows below.
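As a rough illustration of what "fine-tuning CLIP on image-text pairs" involves (a hedged sketch, not the linked blog's code; the checkpoint, local file names, and learning rate are assumptions), CLIPModel computes its own symmetric contrastive loss when called with return_loss=True:

```python
# Hedged sketch of a single fine-tuning step on image-text pairs.
# Checkpoint, image files, and hyperparameters are illustrative.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One batch of matching image-text pairs (hypothetical local files).
images = [Image.open("cat.jpg"), Image.open("dog.jpg")]
texts = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=texts, images=images, return_tensors="pt", padding=True)

outputs = model(**inputs, return_loss=True)  # symmetric image-text contrastive loss
outputs.loss.backward()
optimizer.step()
```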
* WIP: Added CLIP resources from HuggingFace blog
* ADD: Notebooks documentation to clip
* Add link straight to notebook

Co-authored-by: Steven Liu <[email protected]>

* Change notebook links to colab

Co-authored-by: Ambuj Pawar <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
What does this PR do?
Fixes #20055 (partially)
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case. Model resources contribution #20055
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
@stevhliu, could you please have a look?