Repositories list
138 repositories
Qwen2-VL-7B-Instruct
Public template
Qwen2-VL-7B-Instruct is a 7-billion-parameter multimodal language model developed by Alibaba Cloud’s Qwen team, designed for instruction-based tasks with advanced visual and multilingual capabilities.

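These entries are Inferless deployment templates, but the underlying checkpoints can be exercised directly. A minimal image-description sketch with Hugging Face transformers (>= 4.45), assuming the upstream Qwen/Qwen2-VL-7B-Instruct model id and a hypothetical local file photo.jpg:

```python
# Minimal sketch: describe an image with Qwen2-VL-7B-Instruct.
# Assumptions: upstream Qwen/Qwen2-VL-7B-Instruct checkpoint,
# transformers >= 4.45, and a hypothetical local file "photo.jpg".
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

conversation = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(
    text=[prompt], images=[Image.open("photo.jpg")], return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```
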
FluxUpscalerXS
Public

Codellama-34B
Public
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 34B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding.

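A minimal generation sketch, assuming the upstream codellama/CodeLlama-34b-Instruct-hf checkpoint and its [INST] ... [/INST] instruct prompt format:

```python
# Minimal sketch: code generation with the 34B instruct-tuned Code Llama.
# Assumption: the upstream codellama/CodeLlama-34b-Instruct-hf checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-34b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
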
Ministral-8B-Instruct
Public
Ministral-8B-Instruct is an LLM developed by Mistral AI, specifically designed for instruction-based tasks.

Whisper-large-v3-turbo
Public
Whisper-large-v3-turbo is an efficient automatic speech recognition model from OpenAI with 809 million parameters; it is significantly faster than its predecessor, Whisper large-v3.

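A minimal transcription sketch, assuming the upstream openai/whisper-large-v3-turbo checkpoint and a hypothetical local recording sample.wav:

```python
# Minimal sketch: transcribe audio with Whisper-large-v3-turbo.
# Assumptions: upstream openai/whisper-large-v3-turbo checkpoint
# and a hypothetical local file "sample.wav".
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
)
result = asr("sample.wav")
print(result["text"])
```
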
bark
Public template
Bark is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio, including music, background noise, and simple sound effects. The model can also produce nonverbal communication such as laughing, sighing, and crying.

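A minimal text-to-audio sketch, assuming the upstream suno/bark checkpoint and transformers' generic text-to-speech pipeline ([laughs] is one of Bark's nonverbal cues):

```python
# Minimal sketch: generate speech with Bark.
# Assumption: the upstream suno/bark checkpoint.
import numpy as np
import scipy.io.wavfile
from transformers import pipeline

tts = pipeline("text-to-speech", model="suno/bark")
out = tts("Hello [laughs], this line was generated by Bark.")
# out["audio"] may carry a leading channel axis; squeeze it for WAV output.
scipy.io.wavfile.write(
    "bark_out.wav", rate=out["sampling_rate"], data=np.squeeze(out["audio"])
)
```
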
bark-streaming
Public

Llama-3.2-11B-Vision-Instruct
Public template

Phi-3.5-MoE-instruct
Public

Phi-3-128k
Public

Llama-2-7b-hf
PublicLlama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.InternVL2-Llama3-76B-AWQ
InternVL2-Llama3-76B-AWQ
Public

hifi-gan-template
Public

DINet
Public

template-method
Public
GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pretrained model.

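Because the model is small enough to run on CPU, a minimal generation sketch (assuming the upstream EleutherAI/gpt-neo-125m checkpoint) is easy to try:

```python
# Minimal sketch: text generation with GPT-Neo 125M; small enough for CPU.
# Assumption: the upstream EleutherAI/gpt-neo-125m checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125m")
out = generator("The meaning of life is", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```
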
template_input_batch
Public

speaker-pipeline
Public

Demuc-Pipeline
Public

translation-pipeline
Public

ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Canny edges. It can be used in combination with Stable Diffusion.

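A minimal sketch of Canny-conditioned generation with diffusers, assuming the upstream lllyasviel/sd-controlnet-canny checkpoint paired with runwayml/stable-diffusion-v1-5 and a hypothetical input image input.png:

```python
# Minimal sketch: Canny-edge ControlNet guiding Stable Diffusion.
# Assumptions: upstream lllyasviel/sd-controlnet-canny and
# runwayml/stable-diffusion-v1-5 checkpoints; hypothetical "input.png".
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Turn the source image into the Canny edge map ControlNet expects.
edges = cv2.Canny(np.array(Image.open("input.png").convert("RGB")), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe("a colorful bird, detailed painting", image=canny_image).images[0]
image.save("controlnet_out.png")
```
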
Facebook-bart-cnn
Public
BART model pre-trained on English and fine-tuned on CNN Daily Mail. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

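A minimal summarization sketch, assuming the upstream facebook/bart-large-cnn checkpoint:

```python
# Minimal sketch: summarization with BART fine-tuned on CNN/Daily Mail.
# Assumption: the upstream facebook/bart-large-cnn checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = "..."  # replace with a long news article
summary = summarizer(article, max_length=130, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```
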
idefics-9b-instruct-8bit
Public
IDEFICS (Image-aware Decoder Enhanced à la Flamingo with Interleaved Cross-attentionS) is an open-access reproduction of Flamingo, a closed-source visual language model developed by DeepMind. Like GPT-4, the multimodal model accepts arbitrary sequences of image and text inputs and produces text outputs.

Vicuna-13b-8k
Public
Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.

Universal-Sentence-Encoder-Multilingual-QA is a model developed by researchers at Google, mainly for the purpose of question answering. You can use this template to import the model into Inferless.

DreamShaper
Public
ControlNet v1.1 was released in lllyasviel/ControlNet-v1-1 by Lvmin Zhang. This checkpoint is a conversion of the original checkpoint into the diffusers format. It can be used in combination with Stable Diffusion, such as runwayml/stable-diffusion-v1-5.

Falcon-7b-instruct
Public
Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII, based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.

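A minimal generation sketch, assuming the upstream tiiuae/falcon-7b-instruct checkpoint:

```python
# Minimal sketch: instruction-following generation with Falcon-7B-Instruct.
# Assumption: the upstream tiiuae/falcon-7b-instruct checkpoint.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
out = generator(
    "Explain the difference between a list and a tuple in Python.",
    max_new_tokens=120,
    do_sample=True,
    top_k=10,
)
print(out[0]["generated_text"])
```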