Add clip resources to the transformers documentation #20190

Merged · 4 commits · Nov 15, 2022 · Changes shown from 2 commits
27 changes: 27 additions & 0 deletions docs/source/en/model_doc/clip.mdx
@@ -75,6 +75,33 @@ encode the text and prepare the images. The following example shows how to get t

This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/openai/CLIP).

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP. If you're
interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.

<PipelineTag pipeline="text-to-image"/>
- A blog post on [How to use CLIP to retrieve images from text](https://huggingface.co/blog/fine-tune-clip-rsicd)
ambujpawar marked this conversation as resolved.
Show resolved Hide resolved
- A blog bost on [How to use CLIP for Japanese text to image generation](https://huggingface.co/blog/japanese-stable-diffusion)
- A Huggingface space on [Finding best reaction gifs to a given text using CLIP](https://huggingface.co/spaces/flax-community/clip-reply-demo)
ambujpawar marked this conversation as resolved.
Show resolved Hide resolved
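
The retrieval idea behind these resources can be sketched in a few lines with [`CLIPModel`] and [`CLIPProcessor`]. The snippet below is a minimal, illustrative example: the checkpoint is the public `openai/clip-vit-base-patch32`, and the image paths are placeholders you would replace with your own collection.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image collection; replace with paths or PIL images of your own.
image_paths = ["beach.jpg", "dog.jpg", "city_at_night.jpg"]
images = [Image.open(path) for path in image_paths]
query = "a dog playing in the sand"

# Encode the query and the images into CLIP's shared embedding space.
inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text has shape (num_texts, num_images); the highest score is the best match.
scores = outputs.logits_per_text[0]
best_idx = scores.argmax().item()
print(f"Best match for '{query}': {image_paths[best_idx]}")
```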

<PipelineTag pipeline="image-to-text"/>
- A Huggingface space on [Finding a prompt based on the image](https://huggingface.co/spaces/pharma/CLIP-Interrogator)
- A Huggingface space showing [Guided diffusion using CLIP](https://huggingface.co/spaces/EleutherAI/clip-guided-diffusion)
- A notebook showing [Video to text matching with CLIP for videos](https://github.com/NielsRogge/Transformers-Tutorials/blob/2d80e7293ccc417cd4f8f4fcca21ebcbda8f5d8f/X-CLIP/Video_text_matching_with_X_CLIP.ipynb)

<PipelineTag pipeline="zero-shot-classification"/>
- A notebook showing [Zero shot video classification using CLIP for video](https://github.com/NielsRogge/Transformers-Tutorials/blob/2d80e7293ccc417cd4f8f4fcca21ebcbda8f5d8f/X-CLIP/Zero_shot_classify_a_YouTube_video_with_X_CLIP.ipynb)
ambujpawar marked this conversation as resolved.
Show resolved Hide resolved
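
While the notebook above handles the video case with X-CLIP, plain zero-shot image classification with CLIP itself can be sketched with the `zero-shot-image-classification` pipeline. The checkpoint, example image, and candidate labels below are illustrative choices.

```python
from transformers import pipeline

classifier = pipeline(
    task="zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

# The labels were never seen during training; CLIP scores the image against each one.
result = classifier(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    candidate_labels=["a photo of two cats", "a photo of a dog", "a photo of an airplane"],
)
print(result)  # a list of {"label": ..., "score": ...} dicts, highest score first
```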


🚀 Deploy

- A blog post on how to [deploy CLIP on Google Cloud](https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds).
- A blog post on how to [deploy CLIP with Amazon SageMaker](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker).
- A blog post on how to [deploy CLIP with Hugging Face Transformers, Amazon SageMaker, and Terraform](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker).
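
As a rough illustration of the SageMaker route described in the posts above, the sketch below uses the Hugging Face integration in the `sagemaker` Python SDK. The checkpoint, task, framework versions, and instance type are assumptions for illustration; check the SDK documentation for the versions and instance types it currently supports.

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # assumes this runs inside a SageMaker notebook or Studio

# Serve CLIP straight from the Hub for zero-shot image classification.
hub = {
    "HF_MODEL_ID": "openai/clip-vit-base-patch32",
    "HF_TASK": "zero-shot-image-classification",
}

huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.17",  # illustrative versions; use a combination the SDK supports
    pytorch_version="1.10",
    py_version="py38",
)

# Creates a real-time inference endpoint backed by the chosen instance type.
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)
```

Once the endpoint is up, `predictor.predict(...)` can be used to query it, and `predictor.delete_endpoint()` tears it down to stop incurring costs.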


## CLIPConfig

[[autodoc]] CLIPConfig