Explore
Featured models

nvidia / sana-sprint-1.6b
SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation

black-forest-labs / flux-dev-lora
A version of flux-dev, a text to image model, that supports fast fine-tuned lora inference

wavespeedai / wan-2.1-t2v-480p
Accelerated inference for Wan 2.1 14B text to video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

wavespeedai / wan-2.1-i2v-480p
Accelerated inference for Wan 2.1 14B image to video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

deepseek-ai / deepseek-v3
DeepSeek-V3-0324 is the leading non-reasoning model, a milestone for open source

lucataco / orpheus-3b-0.1-ft
Orpheus 3B - high quality, emotive Text to Speech

kwaivgi / kling-v1.6-pro
Generate 5s and 10s videos in 1080p resolution

fofr / wan2.1-with-lora
Run Wan2.1 14b or 1.3b with a lora

anthropic / claude-3.7-sonnet
The most intelligent Claude model and the first hybrid reasoning model on the market (claude-3-7-sonnet-20250219)
I want to…
Make videos with Wan2.1
Generate videos with Wan2.1, the fastest and highest quality open-source video generation model.
Generate images
Models that generate images from text prompts
Generate videos
Models that create and edit videos
Caption images
Models that generate text from images
Transcribe speech
Models that convert speech to text
Generate text
Models that can understand and generate text
Use a face to make images
Make realistic images of people instantly
Upscale images
Upscaling models that create high-quality images from low-quality images
Use official models
Official models are always on, maintained, and have predictable pricing.
Restore images
Models that improve or restore images by deblurring, colorizing, and removing noise
Enhance videos
Models that enhance videos with super-resolution, sound effects, motion capture and other useful production effects.
Generate speech
Convert text to speech
Caption videos
Models that generate text from videos
Remove backgrounds
Models that remove backgrounds from images and videos
Use handy tools
Toolbelt-type models for videos and images.
Detect objects
Models that detect or segment objects in images and videos.
Generate music
Models to generate and modify music
Sing with voices
Voice-to-voice cloning and musical prosody
Make 3D stuff
Models that generate 3D objects, scenes, radiance fields, textures and multi-views.
Chat with images
Ask language models about images
Extract text from images
Optical character recognition (OCR) and text extraction
Get embeddings
Models that generate embeddings from inputs
Use the FLUX family of models
The FLUX family of text-to-image models from Black Forest Labs
Use FLUX fine-tunes
Browse the diverse range of fine-tunes the community has custom-trained on Replicate
Control image generation
Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.
Edit images
Tools for manipulating images.
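Any model in these collections can be run by name. Below is a minimal sketch using the Replicate Python client with one of the text-to-video models listed above; the input field name (`prompt`) is an assumption here, so check the model's API page for its actual schema.

```python
# Sketch: calling a model from this catalog via the Replicate Python client.
# The model slug is taken from the listing above; the input schema is an
# assumption -- consult the model's API tab for the real field names.
import os

model = "wavespeedai/wan-2.1-t2v-480p"
payload = {"prompt": "a red fox running through snow"}  # assumed field name

if os.environ.get("REPLICATE_API_TOKEN"):
    # Only attempt a real call when credentials are configured.
    import replicate
    output = replicate.run(model, input=payload)
    print(output)
else:
    # No credentials: just show the request that would be sent.
    print(model, payload)
```

The client blocks until the prediction finishes and returns the model's output (typically a URL or list of URLs for media models).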
Popular models
SDXL-Lightning by ByteDance: a fast text-to-image model that makes high-quality images in 4 steps
Generate CLIP (clip-vit-large-patch14) text & image embeddings
Create photos, paintings and avatars for anyone in any style within seconds.
Practical face restoration algorithm for *old photos* or *AI-generated faces*
High-resolution image upscaler and enhancer. Use at ClarityAI.co. A free Magnific alternative. Twitter/X: @philz1337x
Latest models
Mediapipe Blendshape Labeler - Predicts the blend shapes of an image.
Fast FLUX DEV with ControlNet Canny, Depth, Line Art, and Upscaler; use just one controlnet or all of them. LoRAs: HyperFlex, Add Details, and Realism.
SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation
Scaling Diffusion Models for High Resolution Textured 3D Assets Generation
A version of flux-dev, a text to image model, that supports fast fine-tuned lora inference
The fastest image generation model tailored for fine-tuned use
Open-weight inpainting model for editing and extending images. Guidance-distilled from FLUX.1 Fill [pro].
FLUX1.1 [pro] in ultra and raw modes. Images are up to 4 megapixels. Use raw mode for realism.
Faster, better FLUX Pro. Text-to-image model with excellent image quality, prompt adherence, and output diversity.
State-of-the-art image generation with top of the line prompt following, visual quality, image detail and output diversity.
Professional inpainting and outpainting model with state-of-the-art performance. Edit or extend images with natural, seamless results.
Professional edge-guided image generation. Control structure and composition using Canny edge detection.
Professional depth-aware image generation. Edit images while preserving spatial relationships.
An optimized version of sdxl-lightning from ByteDance that is more than 2x faster and 2x cheaper
For the paper "Structured 3D Latents for Scalable and Versatile 3D Generation".
An optimised version of the flux schnell model from Black Forest Labs, compressed with the Pruna tool. We achieve a ~3x speedup over the original model with minimal quality loss.
An optimised version of stable-diffusion by Stability AI that is 3x faster and 3x cheaper.
Transform your portrait photos into any style or setting while preserving your facial identity
Flux.1-dev-Controlnet-Upscaler by www.androcoders.in
Accelerated inference for Wan 2.1 14B text to video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.
Accelerated inference for Wan 2.1 14B text to video with high resolution, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.
Accelerated inference for Wan 2.1 14B image to video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.
Accelerated inference for Wan 2.1 14B image to video with high resolution, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.
Indic Parler-TTS Pretrained is a multilingual Indic extension of Parler-TTS Mini.
State of the art video generation model. Veo 2 can faithfully follow simple and complex instructions, and convincingly simulates real-world physics as well as a wide range of visual styles.
DeepSeek-V3-0324 is the leading non-reasoning model, a milestone for open source
This model generates pose variations of a cartoon character while preserving its identity. Use it to augment the training dataset for any cartoon character created through AI; the augmented dataset can then be used to train a LoRA model.
Best-in-class clothing virtual try on in the wild (non-commercial use only)
Easily create video datasets with auto-captioning for Hunyuan-Video LoRA finetuning