Explore

Fine-tune FLUX fast
Customize FLUX.1 [dev] with the fast FLUX trainer on Replicate
Train the model on a small set of example images so it can recognize and generate new concepts: specific styles, characters, or objects. Training is fast (under 2 minutes), cheap (under $2), and gives you a warm, runnable model plus LoRA weights to download.
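The workflow above boils down to a single API call. A minimal sketch with the official `replicate` Python client, assuming the `ostris/flux-dev-lora-trainer` fast trainer; the input field names, the version placeholder, and the destination are illustrative, so verify them against the trainer's page before running:

```python
import os

# Illustrative training payload; field names follow the fast FLUX trainer's
# published schema but may change -- verify on the trainer's Replicate page.
training_input = {
    "input_images": "https://example.com/style-images.zip",  # zip of example images
    "trigger_word": "MYSTYLE",  # token that invokes the new concept in prompts
    "steps": 1000,
}

def start_training():
    import replicate  # pip install replicate; reads REPLICATE_API_TOKEN
    return replicate.trainings.create(
        # Pin a specific trainer version (copy the hash from its page).
        version="ostris/flux-dev-lora-trainer:<version-id>",
        input=training_input,
        destination="your-username/your-flux-finetune",  # model that receives the weights
    )

# Only call the API when credentials are available.
if os.environ.get("REPLICATE_API_TOKEN"):
    training = start_training()
    print(training.status)
```

When the training finishes, the destination model is warm and runnable, and the trained LoRA weights appear in the training output for download.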
Featured models

black-forest-labs / flux-kontext-dev
Open-weight version of FLUX.1 Kontext

bytedance / seedream-3
A text-to-image model with support for native high-resolution (2K) image generation
bytedance / seedance-1-pro
A pro version of Seedance that offers text-to-video and image-to-video support for 5s or 10s videos, at 480p and 1080p resolution

bytedance / seedance-1-lite
A video generation model that offers text-to-video and image-to-video support for 5s or 10s videos, at 480p and 720p resolution

kwaivgi / kling-v2.1
Use Kling v2.1 to generate 5s and 10s videos in 720p and 1080p resolution from a starting image (image-to-video)

google / veo-3
Sound on: Google’s flagship Veo 3 text-to-video model, with audio

google / imagen-4-ultra
Use this ultra version of Imagen 4 when quality matters more than speed and cost

black-forest-labs / flux-kontext-pro
A state-of-the-art text-based image editing model that transforms images through natural language, delivering high-quality outputs with excellent prompt following and consistent results

black-forest-labs / flux-kontext-max
A premium text-based image editing model that delivers maximum performance and improved typography generation for transforming images through natural language prompts
Official models
Official models are always on, maintained, and have predictable pricing.
I want to…
Generate images
Models that generate images from text prompts
Generate videos
Models that create and edit videos
Edit images
Tools for editing images
Upscale images
Upscaling models that create high-quality images from low-quality images
Generate speech
Convert text to speech
Transcribe speech
Models that convert speech to text
Use LLMs
Models that can understand and generate text
Caption videos
Models that generate text from videos
Make 3D stuff
Models that generate 3D objects, scenes, radiance fields, textures, and multi-views
Restore images
Models that improve or restore images by deblurring, colorizing, and removing noise
Generate music
Models to generate and modify music
Caption images
Models that generate text from images
Make videos with Wan2.1
Generate videos with Wan2.1, the fastest and highest-quality open-source video generation model
Use handy tools
Toolbelt-type models for videos and images
Control image generation
Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.
Extract text from images
Optical character recognition (OCR) and text extraction
Chat with images
Ask language models about images
Sing with voices
Voice-to-voice cloning and musical prosody
Get embeddings
Models that generate embeddings from inputs
Use a face to make images
Make realistic images of people instantly
Remove backgrounds
Models that remove backgrounds from images and videos
Try for free
Get started with these models without adding a credit card. Whether you're making videos, generating images, or upscaling photos, these are great starting points.
Use the FLUX family of models
The FLUX family of text-to-image models from Black Forest Labs
Use official models
Official models are always on, maintained, and have predictable pricing.
Enhance videos
Models that enhance videos with super-resolution, sound effects, motion capture, and other useful production effects
Detect objects
Models that detect or segment objects in images and videos
Use FLUX fine-tunes
Browse the diverse range of fine-tunes the community has custom-trained on Replicate
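Every model in these collections is run through the same client API. A minimal sketch with the official `replicate` Python client; the model slug and input fields here are illustrative assumptions, and each model's API page documents its actual input schema:

```python
import os

# Minimal sketch, assuming the official `replicate` client (pip install
# replicate) and a FLUX text-to-image model; the slug and input keys are
# illustrative -- check the model's API page for its real schema.
MODEL = "black-forest-labs/flux-schnell"  # assumed example slug

def build_input(prompt, aspect_ratio="1:1"):
    # Input keys vary per model; these are common FLUX-style parameters.
    return {"prompt": prompt, "aspect_ratio": aspect_ratio}

def generate(prompt):
    import replicate  # reads REPLICATE_API_TOKEN from the environment
    return replicate.run(MODEL, input=build_input(prompt))

# Only call the API when credentials are available.
if os.environ.get("REPLICATE_API_TOKEN"):
    print(generate("a watercolor fox in the snow"))
```

Swapping in a video, speech, or upscaling model is just a different slug and input dict; the call shape stays the same.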
Popular models
SDXL-Lightning by ByteDance: a fast text-to-image model that makes high-quality images in 4 steps
This is the fastest Flux Dev endpoint in the world; contact us for more at pruna.ai
Return CLIP features for the clip-vit-large-patch14 model
Practical face restoration algorithm for *old photos* or *AI-generated faces*
High resolution image Upscaler and Enhancer. Use at ClarityAI.co. A free Magnific alternative. Twitter/X: @philz1337x
Real-ESRGAN with optional face correction and adjustable upscale
Latest models
GPU-accelerated replay renderer / video data clipper for comma.ai connect's openpilot route data. See the README.
FramePack video generation with image + motion prompt. Based on Stanford's 2025 model.
An enhanced version of sd-interior-design, featuring an improved diffusion model
A text-to-audio (T2A) model that offers voice synthesis, emotional expression, and multilingual capabilities. Designed for real-time applications with low latency
A text-to-audio (T2A) model that offers voice synthesis, emotional expression, and multilingual capabilities. Optimized for high-fidelity applications like voiceovers and audiobooks.
Clone voices to use with Minimax's speech-02-hd and speech-02-turbo
This is the hidream-e1 model accelerated with the pruna optimisation engine.
🎧 Kimi-Audio-7B-Instruct, ASR, audio reasoning, captioning, emotion sensing, and TTS into one universal model 🔊
Balance speed, quality and cost. Ideogram v3 creates images with stunning realism, creative designs, and consistent styles
Turbo is the fastest and cheapest Ideogram v3. v3 creates images with stunning realism, creative designs, and consistent styles
The highest quality Ideogram v3 model. v3 creates images with stunning realism, creative designs, and consistent styles
This is the f-lite model from FAL & Freepik optimised for 2x speedups through pruna
Classifies Pokémon Yellow game screens for automated gameplay (Battle, Menu, Overworld, Dialogue)
Train FLUX.1 [pro] and FLUX 1.1 [pro] Ultra. Upload images to create a custom finetune_id to use with the inference model
A multimodal image generation model that creates high-quality images. You need to bring your own verified OpenAI key to use this model. Your OpenAI account will be charged for usage.
SOTA image-to-3D generator TRELLIS. Equipped with **all image-condition types** you need: (1) single image, (2) multiple images, (3) different images for geometry and texture generation, (4) mesh + images for detail variation (texture painting)
DeepAudio-V1: Towards Multi-Modal Multi-Stage End-to-End Video to Speech and Audio Generation
Enhance Generation Quality of Flow Matching V2A Model via Multi-Step CoT-Like Guidance and Combined Preference Optimization
IMG2IMG for HiDream Full and Dev: does creative variations
Dia 1.6B by Nari Labs: generates realistic dialogue audio from text, including non-verbal cues and voice cloning
hunyuan3d-2 optimised with the pruna toolkit: https://github.com/PrunaAI/pruna
Image Inpainting with Flux.1-dev + ControlNet, by Alimama Team, **BETA** version
Another face swap model? 🧐 Yep, but with indexes. Swap exactly the faces you want by picking their positions. Simple, flexible, and works great on group photos.
Classifies text with a fine-tuned BERT. The model handles batched input texts in a single API call to improve performance and reduce costs.
This is an optimised version of the hidream-l1 model using the pruna ai optimisation toolkit!
Kimi-VL-A3B-Thinking is a multi-modal LLM that can understand text and images, and generate text with thinking processes
Ghiblify your image: ChatGPT-level quality, 10× faster and cheaper.
This is an optimised version of the hidream-full model using the pruna ai optimisation toolkit!