Explore

Fine-tune FLUX fast
Customize FLUX.1 [dev] with the fast FLUX trainer on Replicate
Train the model to recognize and generate new concepts (specific styles, characters, or objects) from a small set of example images. It's fast (under 2 minutes), cheap (under $2), and gives you a warm, runnable model plus LoRA weights you can download.
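As a sketch of what kicking off a training run looks like with Replicate's Python client (the destination name, version placeholder, zip URL, and input fields like input_images and trigger_word below are illustrative assumptions; check the trainer's page for the exact schema):

    import replicate

    # Sketch: start a fast FLUX fine-tune. The destination model must already
    # exist under your account; the version placeholder and input field names
    # are assumptions, not confirmed values.
    training = replicate.trainings.create(
        version="replicate/fast-flux-trainer:<version-id>",
        destination="your-username/flux-my-style",
        input={
            "input_images": "https://example.com/training-images.zip",  # zip of example images
            "trigger_word": "MYSTYLE",  # token that invokes the new concept in prompts
        },
    )
    print(training.status)

When the training finishes, the destination model is warm and runnable, and the LoRA weights are available in the training output to download.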
Featured models

luma / reframe-video
Change the aspect ratio of any video up to 30 seconds long, outputs will be 720p

google / imagen-4-fast
Use this fast version of Imagen 4 when speed and cost are more important than quality

google / imagen-4-ultra
Use this ultra version of Imagen 4 when quality matters more than speed and cost

google / imagen-4
Google's Imagen 4 flagship model

replicate / fast-flux-trainer
Train subjects or styles faster than ever

google / veo-3
Sound on: Google’s flagship Veo 3 text-to-video model, with audio

black-forest-labs / flux-kontext-pro
A state-of-the-art text-based image editing model that delivers high-quality outputs with excellent prompt following and consistent results for transforming images through natural language

black-forest-labs / flux-kontext-max
A premium text-based image editing model that delivers maximum performance and improved typography generation for transforming images through natural language prompts

ideogram-ai / ideogram-v3-turbo
Turbo is the fastest and cheapest Ideogram v3. v3 creates images with stunning realism, creative designs, and consistent styles
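Every featured model above runs through the same API. A minimal sketch with the Python client (each model defines its own input schema, so the prompt field here is an assumption):

    import replicate

    # Sketch: run a featured text-to-image model by its owner/name slug.
    # "prompt" is an assumed input field; check the model's API schema.
    output = replicate.run(
        "google/imagen-4",
        input={"prompt": "a lighthouse on a cliff at golden hour"},
    )
    print(output)  # typically a URL, or list of URLs, for the generated images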
Official models
Official models are always on, maintained, and have predictable pricing.
I want to…
Generate images
Models that generate images from text prompts
Make videos with Wan2.1
Generate videos with Wan2.1, the fastest and highest quality open-source video generation model.
Generate videos
Models that create and edit videos
Caption images
Models that generate text from images
Transcribe speech
Models that convert speech to text
Use the FLUX family of models
The FLUX family of text-to-image models from Black Forest Labs
Remove backgrounds
Models that remove backgrounds from images and videos
Restore images
Models that improve or restore images by deblurring, colorizing, and removing noise
Caption videos
Models that generate text from videos
Edit images
Tools for manipulating images.
Use a face to make images
Make realistic images of people instantly
Get embeddings
Models that generate embeddings from inputs
Generate speech
Convert text to speech
Generate music
Models to generate and modify music
Generate text
Models that can understand and generate text
Use handy tools
Toolbelt-type models for videos and images.
Upscale images
Upscaling models that create high-quality images from low-quality images
Use official models
Official models are always on, maintained, and have predictable pricing.
Enhance videos
Models that enhance videos with super-resolution, sound effects, motion capture and other useful production effects.
Detect objects
Models that detect or segment objects in images and videos.
Sing with voices
Voice-to-voice cloning and musical prosody
Make 3D stuff
Models that generate 3D objects, scenes, radiance fields, textures and multi-views.
Chat with images
Ask language models about images
Extract text from images
Optical character recognition (OCR) and text extraction
Use FLUX fine-tunes
Browse the diverse range of fine-tunes the community has custom-trained on Replicate
Control image generation
Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.
Popular models
SDXL-Lightning by ByteDance: a fast text-to-image model that makes high-quality images in 4 steps
An optimized version of the FLUX.1 [schnell] model from Black Forest Labs, made with Pruna, achieving a ~3x speedup over the original model with minimal quality loss.
Generate CLIP (clip-vit-large-patch14) text & image embeddings
Return CLIP features for the clip-vit-large-patch14 model (see the embeddings sketch after this list)
🦙 LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions
High-resolution image upscaler and enhancer. Use at ClarityAI.co. A free Magnific alternative. Twitter/X: @philz1337x
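For the CLIP embedding models above, a sketch of pulling text features (the model slug and the inputs field name are assumptions; the exact schema is on each model's page):

    import replicate

    # Sketch: get CLIP text embeddings. Slug and input field are assumptions.
    features = replicate.run(
        "andreasjansson/clip-features",
        input={"inputs": "a photo of an astronaut riding a horse"},
    )
    print(features)  # assumed shape: a list of {"input": ..., "embedding": [...]} records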
Latest models
phixtral-2x2_8 is the first Mixture of Experts (MoE) made with two microsoft/phi-2 models, inspired by the mistralai/Mixtral-8x7B-v0.1 architecture
BGE-M3, the first embedding model to support multiple retrieval modes, multilingual retrieval, and multi-granularity retrieval
MetaVoice-1B: 1.2B parameter base model trained on 100K hours of speech
Incredibly fast Whisper using openai/whisper-medium.en, not the distil-whisper model
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
NeverSleep's MiquMaid v1 70B, a Miqu finetune, GGUF Q3_K_M quantized
Base version of Mamba 2.8B, a 2.8 billion parameter state space language model
Base version of Mamba 130M, a 130 million parameter state space language model
Base version of Mamba 370M, a 370 million parameter state space language model
Base version of Mamba 790M, a 790 million parameter state space language model
Base version of Mamba 2.8B SlimPajama, a 2.8 billion parameter state space language model
Base version of Mamba 1.4B, a 1.4 billion parameter state space language model
@pharmapsychotic's CLIP-Interrogator, but 3x faster and more accurate. Specialized for SDXL.
Bokeh Prediction, a hybrid bokeh rendering framework that combines a neural renderer with a classical approach. It generates high-resolution, adjustable bokeh effects from a single image and potentially imperfect disparity maps.
E5 embeddings fine-tuned for instruction following, based on Mistral.
LLaVA v1.6: Large Language and Vision Assistant (Vicuna-13B)
LLaVA v1.6: Large Language and Vision Assistant (Mistral-7B)
Yi-VL-34B is the first open-source 34B VL model worldwide. It demonstrates exceptional performance, ranking first among all existing open-source models in the latest benchmarks including MMMU and CMMMU.
🖼️ Super fast 1.5B Image Captioning/VQA Multimodal LLM (Image-to-Text) 🖋️
The Segment Anything Model (SAM) is a powerful and versatile image segmentation model. It leverages a "foundation model" approach, meaning it can be used for various segmentation tasks without needing to be specifically trained for each one.
Source: pipizhao/Pandalyst_13B_V1.0 ✦ Quant: TheBloke/Pandalyst_13B_V1.0-AWQ ✦ Pandalyst: A large language model for mastering data analysis using pandas
A better alternative to SDXL refiners, providing a lot of quality and detail. Can also be used for inpainting or upscaling.
SDXL Canny ControlNet with LoRA support. Last update: now supports img2img.
VideoCrafter2: Text-to-Video and Image-to-Video Generation and Editing
A 70 billion parameter Llama tuned for coding and conversation
A 70 billion parameter Llama tuned for coding with Python
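As a final sketch, chat- and code-tuned language models like the Llamas above can stream tokens as they generate (the slug meta/codellama-70b-instruct and the prompt field are assumptions):

    import replicate

    # Sketch: stream output from a code-tuned Llama token by token.
    for event in replicate.stream(
        "meta/codellama-70b-instruct",
        input={"prompt": "Write a Python function that reverses a linked list."},
    ):
        print(str(event), end="")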