Explore
Featured models

minimax / image-01
Minimax's first image model, with character reference support

topazlabs / video-upscale
Video Upscaling from Topaz Labs

zsxkib / dia
Dia 1.6B by Nari Labs generates realistic dialogue audio from text, including non-verbal cues and voice cloning

fofr / color-matcher
Color match and white balance fixes for images

prunaai / hidream-l1-fast
An optimised version of the hidream-l1 model, built with the Pruna AI optimisation toolkit

meta / llama-4-scout-instruct
A 17 billion parameter model with 16 experts

wavespeedai / wan-2.1-i2v-480p
Accelerated inference for Wan 2.1 14B image to video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

easel / advanced-face-swap
Face swap one or two people into a target image

anthropic / claude-3.7-sonnet
The most intelligent Claude model and the first hybrid reasoning model on the market (claude-3-7-sonnet-20250219)

Fine-tune FLUX
Customize FLUX.1 [dev] with Ostris's AI Toolkit on Replicate. Train the model to recognize and generate new concepts using a small set of example images, for specific styles, characters, or objects. (Generated with davisbrown/flux-half-illustration.)
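Training a fine-tune like this boils down to pointing a trainer at a zip of example images and a trigger word. A minimal sketch using Replicate's Python client; the trainer slug, version placeholder, and input field names here are assumptions for illustration — check the trainer's model page for the exact schema.

```python
# Sketch of starting a FLUX.1 [dev] LoRA fine-tune on Replicate.
# Field names below are illustrative assumptions, not the confirmed schema.

def build_training_input(zip_url: str, trigger_word: str, steps: int = 1000) -> dict:
    """Assemble the input payload for a LoRA fine-tune job."""
    return {
        "input_images": zip_url,       # zip of 10-20 example images
        "trigger_word": trigger_word,  # token that will invoke the new concept
        "steps": steps,                # training steps; more = stronger fit
    }

payload = build_training_input("https://example.com/style.zip", "MYSTYLE")

# Actual call (requires REPLICATE_API_TOKEN; version hash omitted here):
# import replicate
# training = replicate.trainings.create(
#     destination="your-username/flux-my-style",
#     version="ostris/flux-dev-lora-trainer:<version-id>",
#     input=payload,
# )
```

Once training finishes, the resulting model can be run like any other model on Replicate, with the trigger word included in the prompt.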
I want to…
Make videos with Wan2.1
Generate videos with Wan2.1, the fastest and highest quality open-source video generation model.
Generate images
Models that generate images from text prompts
Generate videos
Models that create and edit videos
Caption images
Models that generate text from images
Transcribe speech
Models that convert speech to text
Upscale images
Upscaling models that create high-quality images from low-quality images
Restore images
Models that improve or restore images through deblurring, colorization, and noise removal
Use a face to make images
Make realistic images of people instantly
Edit images
Tools for manipulating images.
Caption videos
Models that generate text from videos
Generate text
Models that can understand and generate text
Use official models
Official models are always on, maintained, and have predictable pricing.
Enhance videos
Models that enhance videos with super-resolution, sound effects, motion capture and other useful production effects.
Generate speech
Convert text to speech
Remove backgrounds
Models that remove backgrounds from images and videos
Use handy tools
Toolbelt-type models for videos and images.
Detect objects
Models that detect or segment objects in images and videos.
Generate music
Models to generate and modify music
Sing with voices
Voice-to-voice cloning and musical prosody
Make 3D stuff
Models that generate 3D objects, scenes, radiance fields, textures and multi-views.
Chat with images
Ask language models about images
Extract text from images
Optical character recognition (OCR) and text extraction
Get embeddings
Models that generate embeddings from inputs
Use the FLUX family of models
The FLUX family of text-to-image models from Black Forest Labs
Use FLUX fine-tunes
Browse the diverse range of fine-tunes the community has custom-trained on Replicate
Control image generation
Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.
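In practice, controlled generation means sending a conditioning image (edge map, depth map, or sketch) alongside the text prompt. A minimal sketch of what such a request input typically looks like; the field names and conditioning types are illustrative assumptions — each ControlNet-style model on Replicate documents its own input schema.

```python
# Sketch of building an input payload for ControlNet-style guided generation.
# Field names ("image", "structure") are assumptions, not a confirmed schema.

def build_controlnet_input(prompt: str, control_image_url: str,
                           conditioning: str = "canny") -> dict:
    """Assemble a prompt plus a conditioning image for guided generation."""
    allowed = {"canny", "depth", "scribble"}  # common conditioning types
    if conditioning not in allowed:
        raise ValueError(f"unknown conditioning type: {conditioning}")
    return {
        "prompt": prompt,
        "image": control_image_url,  # the edge/depth/sketch guide image
        "structure": conditioning,
    }

req = build_controlnet_input("a watercolor cottage", "https://example.com/edges.png")

# Actual call would be something like (requires REPLICATE_API_TOKEN):
# import replicate
# output = replicate.run("<owner>/<controlnet-model>:<version>", input=req)
```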
Popular models
SDXL-Lightning by ByteDance: a fast text-to-image model that makes high-quality images in 4 steps
Generate CLIP (clip-vit-large-patch14) text & image embeddings
Return CLIP features for the clip-vit-large-patch14 model
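Once a CLIP model returns embedding vectors, comparing them is plain vector math. A dependency-free cosine-similarity helper; the commented `replicate.run` call is a sketch, since the exact model slug and output shape depend on the embedding model you pick.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Parallel vectors score 1.0, orthogonal vectors 0.0:
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # → 0.0

# Sketch of fetching real CLIP embeddings (requires REPLICATE_API_TOKEN;
# model slug and version are placeholders, not confirmed identifiers):
# import replicate
# features = replicate.run("<owner>/<clip-model>:<version>", input={"text": "a cat"})
```

Similarity scores near 1.0 indicate semantically close inputs, which is the basis of CLIP-powered search and ranking.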
Latest models
📖 PuLID: Pure and Lightning ID Customization via Contrastive Alignment
PaliGemma 3B, an open VLM by Google, pre-trained with 224×224 input images and 128-token input/output text sequences
A model which generates text in response to an input image and prompt.
Generate image with transparent background
Yi-1.5 is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples
InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view LRMs
BLIP-3 / XGen-MM answers questions about images ({blip3,xgen-mm}-phi3-mini-base-r-v1)
Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant.
Return CLIP features for dfn5b-clip-vit-h-14-384, currently the highest average performance on the OpenCLIP models leaderboard
BLIP3(XGen-MM) is a series of foundational Large Multimodal Models (LMMs) developed by Salesforce AI Research
Transcribe audio using OpenAI's Whisper, with timestamps stabilized by the stable-ts Python package
Use a face to instantly make images. Uses SDXL Lightning checkpoints.
A Cog model that turns minimally formatted plaintext into PDFs (using TeX on the backend)
Dark Sushi Mix 2.25D Model with vae-ft-mse-840000-ema (Text2Img, Img2Img and Inpainting)
DeepSeek LLM, an advanced language model comprising 67 billion parameters. Trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese
A llama-3 based moderation and safeguarding language model
InstantID, ControlNets, more base SDXL models, and ByteDance's latest ⚡️SDXL-Lightning!⚡️
An img2img pipeline that makes an anime-style image of a person. It uses an SD 1.5 model as a base, depth estimation as a ControlNet, and an IP-Adapter model for face consistency.
StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
Robust face restoration algorithm for old photos / AI-generated faces (adapted to work with video inputs)
Just some good ole BeautifulSoup URL-scraping magic. (Some sites don't work because they block scraping, but it's still useful.)
PyTorch implementation of AnimeGAN for fast photo animation
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis
AbsoluteReality V1.8.1 Model (Text2Img, Img2Img and Inpainting)
Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets