Explore

Fine-tune FLUX fast

Customize FLUX.1 [dev] with the fast FLUX trainer on Replicate

Train the model to recognize and generate new concepts, such as specific styles, characters, or objects, from a small set of example images. It's fast (under 2 minutes), cheap (under $2), and gives you a warm, runnable model plus LoRA weights to download.
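A minimal sketch of that workflow with the replicate Python client, assuming REPLICATE_API_TOKEN is set. The owner, model name, trainer version, hardware SKU, and input URLs below are placeholders; copy the current trainer version and input schema from the trainer's page on Replicate:

```python
import replicate

# Create a destination model to hold the fine-tuned weights
# (owner/name are placeholders).
replicate.models.create(
    owner="your-username",
    name="flux-my-style",
    visibility="private",
    hardware="gpu-t4",  # placeholder hardware SKU
)

# Start training from a zip of example images. The trainer version id
# and input fields are illustrative, not authoritative.
training = replicate.trainings.create(
    version="replicate/fast-flux-trainer:<version-id>",
    input={
        "input_images": "https://example.com/style-images.zip",
        "trigger_word": "MYSTYLE",  # token used to invoke the new concept
    },
    destination="your-username/flux-my-style",
)
print(training.id, training.status)
```

When training finishes, the destination model is runnable immediately, and the LoRA weights can be downloaded from its page.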

Official models

Official models are always on, maintained, and have predictable pricing.

View all official models
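Because official models are always on, you can run them by name alone, without pinning a version hash. A minimal sketch with the replicate Python client (the model slug is an official model; the prompt is illustrative, and REPLICATE_API_TOKEN is assumed to be set):

```python
import replicate

# Official models are referenced as "owner/name"; no version hash needed.
output = replicate.run(
    "black-forest-labs/flux-dev",
    input={"prompt": "an astronaut riding a horse, watercolor"},
)
print(output)
```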

I want to…

Make videos with Wan2.1

Generate videos with Wan2.1, the fastest and highest-quality open-source video generation model.

Restore images

Models that improve or restore images by deblurring, colorizing, and removing noise.

Upscale images

Upscaling models that create high-quality images from low-quality inputs.

Enhance videos

Models that enhance videos with super-resolution, sound effects, motion capture, and other useful production effects.

Detect objects

Models that detect or segment objects in images and videos.

Make 3D stuff

Models that generate 3D objects, scenes, radiance fields, textures and multi-views.

Use FLUX fine-tunes

Browse the diverse range of fine-tunes the community has custom-trained on Replicate.

Control image generation

Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.
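As a sketch of what that looks like in practice: a Canny-edge ControlNet takes a reference image plus a prompt and preserves the reference's structure while restyling it. The version placeholder and input field names below are illustrative; check the chosen model's API page for its actual schema:

```python
import replicate

# Guide generation with edges extracted from a reference image.
with open("reference.png", "rb") as image:
    output = replicate.run(
        "jagilley/controlnet-canny:<version-id>",  # placeholder version
        input={
            "image": image,  # structure source for edge detection
            "prompt": "a cozy cabin in the woods at dusk",
        },
    )
print(output)
```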

Latest models

phixtral-2x2_8 is the first Mixture of Experts (MoE) made with two microsoft/phi-2 models, inspired by the mistralai/Mixtral-8x7B-v0.1 architecture

Updated 1.5K runs

BGE-M3, the first embedding model to support multiple retrieval modes, multilingual retrieval, and multi-granularity retrieval.

Updated 268 runs

MetaVoice-1B: 1.2B parameter base model trained on 100K hours of speech

Updated 12.2K runs

Remove background from image

Updated 166.4K runs

Incredibly fast Whisper using openai/whisper-medium.en, NOT the distil model

Updated 733 runs

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

Updated 4.1K runs

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

Updated 3.5K runs

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

Updated 1.4K runs

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

Updated 739 runs

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

Updated 61 runs

NeverSleep's MiquMaid v1 70B, a Miqu fine-tune, GGUF Q3_K_M quantized.

Updated 14.1K runs

Base version of Mamba 2.8B, a 2.8 billion parameter state space language model

Updated 843 runs

Base version of Mamba 130M, a 130 million parameter state space language model

Updated 145 runs

Base version of Mamba 370M, a 370 million parameter state space language model

Updated 53 runs

Base version of Mamba 790M, a 790 million parameter state space language model

Updated 53 runs

Base version of Mamba 2.8B SlimPajama, a 2.8 billion parameter state space language model

Updated 75 runs

Base version of Mamba 1.4B, a 1.4 billion parameter state space language model

Updated 104 runs

Merge two images, with an optional third for ControlNet.

Updated 6.6K runs

@pharmapsychotic's CLIP-Interrogator, but 3x faster and more accurate. Specialized for SDXL.

Updated 2.2M runs

This is a first model.

Updated 58 runs

A Visual Language Model for GUI Agents

Updated 2.3K runs

Bokeh Prediction, a hybrid bokeh rendering framework that combines a neural renderer with a classical approach. It generates high-resolution, adjustable bokeh effects from a single image and potentially imperfect disparity maps.

Updated 606 runs

AnimateLCM Cartoon3D Model

Updated 1.3K runs

E5 embeddings fine-tuned for instruction following, based on Mistral.

Updated 138 runs

MoE-LLaVA

Updated 1.4M runs

LLaVA v1.6: Large Language and Vision Assistant (Vicuna-13B)

Updated 3.4M runs

LLaVA v1.6: Large Language and Vision Assistant (Mistral-7B)

Updated 4.9M runs

one-shot-talking-face-replicate

Updated 1.8K runs

UNet clothing segmentation

Updated 856 runs

Yi-VL-34B is the first open-source 34B VL model worldwide. It demonstrates exceptional performance, ranking first among all existing open-source models in the latest benchmarks including MMMU and CMMMU.

Updated 308 runs

🖼️ Super fast 1.5B Image Captioning/VQA Multimodal LLM (Image-to-Text) 🖋️

Updated 2.3K runs

High-Quality Image Restoration Following Human Instructions

Updated 12.3K runs

Generates speech from text

Updated 131.1K runs

The Segment Anything Model (SAM) is a powerful and versatile image segmentation model. It leverages a "foundation model" approach, meaning it can be used for various segmentation tasks without needing to be specifically trained for each one.

Updated 333 runs

Source: pipizhao/Pandalyst_13B_V1.0 ✦ Quant: TheBloke/Pandalyst_13B_V1.0-AWQ ✦ Pandalyst: A large language model for mastering data analysis using pandas

Updated 20 runs

A better alternative to SDXL refiners, adding a lot of quality and detail. Can also be used for inpainting or upscaling.

Updated 933.8K runs

Last update: now supports img2img. SDXL Canny ControlNet with LoRA support.

Updated 901.9K runs

VideoCrafter2: Text-to-Video and Image-to-Video Generation and Editing

Updated 96.6K runs

DiffusionLight: Light Probes by Painting a Chrome Ball

Updated 796 runs

Phi-2 by Microsoft

Updated 3.6K runs

A 70 billion parameter Llama tuned for coding and conversation

Updated 32.1K runs

Generate panoramic images with text prompts

Updated 121 runs

Locality-enhanced Projector for Multimodal LLM

Updated 27 runs

A 70 billion parameter Llama tuned for coding with Python

Updated 1.1K runs