Explore

Fine-tune FLUX

Customize FLUX.1 [dev] with Ostris's AI Toolkit on Replicate. Train the model on a small set of example images so it can recognize and generate new concepts, such as specific styles, characters, or objects. (Generated with davisbrown/flux-half-illustration.)
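
A minimal sketch of starting a training run with the Replicate Python client; the trainer slug, version placeholder, and input names below are assumptions, so copy the exact values from the trainer's page before running:

```python
import replicate

# Start a FLUX LoRA training job. The trainer slug, version placeholder, and
# input names here are illustrative; take the exact values from the trainer's
# page on Replicate.
training = replicate.trainings.create(
    version="ostris/flux-dev-lora-trainer:<version-id>",  # replace <version-id>
    input={
        "input_images": "https://example.com/my-style-images.zip",  # zip of example images
        "trigger_word": "MYSTYLE",  # token that activates the new concept in prompts
        "steps": 1000,
    },
    destination="your-username/flux-my-style",  # existing model that receives the trained weights
)
print(training.status)
```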

I want to…

Make videos with Wan2.1

Generate videos with Wan2.1, one of the fastest and highest-quality open-source video generation models.
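
A hedged sketch of generating a clip with the Python client; the model slug and input names are assumptions, so check the model page for the exact identifier and parameters:

```python
import replicate

# Text-to-video with a Wan2.1 model on Replicate. The slug and input names
# are assumptions; look them up on the model page before running.
output = replicate.run(
    "wavespeedai/wan-2.1-t2v-480p",  # assumed slug for a Wan2.1 text-to-video model
    input={
        "prompt": "a paper boat drifting down a rain-soaked street, cinematic lighting",
        "num_frames": 81,  # illustrative parameter; the real inputs may differ
    },
)
# Depending on the client version, output is a URL string or a file-like object.
print(output)
```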

Upscale images

Upscaling models that create high-quality images from low-quality images
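
As an example, a sketch of running an upscaler such as Real-ESRGAN with the Python client (the slug and input names are assumptions; the client uploads local file handles for you):

```python
import replicate

# Upscale a local image with a Real-ESRGAN model on Replicate.
# The slug and input names are assumptions; verify them on the model page.
with open("low_res.png", "rb") as image_file:
    output = replicate.run(
        "nightmareai/real-esrgan",  # assumed slug for an upscaling model
        input={
            "image": image_file,  # local files are uploaded by the client
            "scale": 4,           # illustrative upscaling factor
        },
    )
print(output)  # URL (or file object) of the upscaled image
```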

Restore images

Models that improve or restore images by deblurring, colorizing, and removing noise

Use official models

Official models are always on, maintained, and have predictable pricing.

Enhance videos

Models that enhance videos with super-resolution, sound effects, motion capture, and other useful production effects.

Detect objects

Models that detect or segment objects in images and videos.

Make 3D stuff

Models that generate 3D objects, scenes, radiance fields, textures and multi-views.

Use FLUX fine-tunes

Browse the diverse range of fine-tunes the community has custom-trained on Replicate

Control image generation

Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.
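
The call looks like any other model run, with the conditioning image passed as an extra input. A sketch assuming an edge-conditioned FLUX variant; the slug and input names are assumptions:

```python
import replicate

# Guide generation with an edge map. The model slug and input names are
# assumptions; check the specific control model's page for its exact inputs.
with open("edge_map.png", "rb") as control_file:
    output = replicate.run(
        "black-forest-labs/flux-canny-dev",  # assumed slug for an edge-conditioned FLUX model
        input={
            "prompt": "a mid-century living room, warm afternoon light",
            "control_image": control_file,  # edge map (or depth map / sketch) to follow
        },
    )
print(output)
```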

Latest models

Apollo 3B - An Exploration of Video Understanding in Large Multimodal Models

Updated 109 runs

Video Background Removal

Updated 1.8K runs

Prepare arXiv papers for processing by Large Language Models (LLMs) by converting them into a single, expanded LaTeX file.

Updated 12 runs

Arbitrary-steps Image Super-resolution via Diffusion Inversion

Updated 2.6K runs

Video preprocessing tool for captioning multiple videos using GPT, Claude, or Gemini

Updated 112 runs

A simple tool to split a video into snippets

Updated 118 runs

Create ads for marketing and social media, placing your own company logo on any object you want.

Updated 292 runs

luma/ray

Fast, high-quality text-to-video and image-to-video (also known as Dream Machine)

Updated 26.7K runs

a-r-r-o-w/cogvideox-factory for Mochi-1 LoRA Training

Updated 28 runs

Make realistic images of real people instantly

Updated 860.7K runs

A state-of-the-art text-to-video generation model capable of creating high-quality videos with realistic motion from text descriptions

Updated 1.9K runs

MEMO is a state-of-the-art open-weight model for audio-driven talking video generation.

Updated 695 runs

High-Fidelity Single-Step Diffusion through Dynamic Adversarial Training

Updated 147 runs

A fast image model with wide artistic range and resolutions up to 4096x4096

Updated 130.2K runs

Moondream 0.5B, the world's smallest vision language model

Updated 49 runs

Qwen2.5-Coder-32B-Instruct is a state-of-the-art, open-source large language model (LLM) designed for coding tasks. It is part of the Qwen2.5-Coder series and has 32 billion parameters.

Updated 49 runs

luma/photon-flash

Accelerated variant of Photon prioritizing speed while maintaining quality

Updated 61.3K runs

luma/photon

High-quality image generation model optimized for creative professional workflows and ultra-high fidelity outputs

Updated 310.7K runs

Llama 3.2 is a collection of multilingual large language models (LLMs): pretrained and instruction-tuned generative models in 1B and 3B sizes (text in, text out).

Updated 29 runs

A fork of FLUX PuLID that supports multiple IDs: use it with a depth map and define bounding boxes for each face

Updated 2.4K runs

haiper-ai/haiper-video-2

Generate 4s and 6s videos from a prompt or image

Updated 9.5K runs

FLUX.1 [dev]: Hyper-SD 8 steps + InstantX IP-Adapter + PuLID + depth ControlNet

Updated 221 runs

A version of Mochi 1 (a text-to-video model) that supports fine-tuned LoRA inference

Updated 99 runs

Let Vision Language Models Reason Step-by-Step

Updated 38 runs

Mochi 1 preview is an open video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluation

Updated 2.5K runs

SVDQuant Optimized Flux.Schnell

Updated 30 runs

SmolVLM-Instruct by HuggingFaceTB

Updated 939 runs

AnimateDiff-Lightning: Cross-Model Diffusion Distillation

Updated 46 runs

Jina-CLIP v2: 0.9B multimodal embedding model with 89-language multilingual support, 512x512 image resolution, and Matryoshka representations

Updated 76K runs

SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory

Updated 100 runs

Segment Anything with prompts

Updated 620.9K runs

Convert speech in audio to text

Updated 81.8M runs

Anima Pencil XL v5 Model (Text2Img, Img2Img and Inpainting)

Updated 15.4K runs

Music generation with fine-tuned Stable Audio

Updated 7K runs

Create music with an open-source model

Updated 67.5K runs

DiT-based video generation model for generating high-quality videos in real time

Updated 3K runs

Pencil XL v2 Model (Text2Img, Img2Img and Inpainting)

Updated 4.6K runs

Unlimited XL Model (Text2Img, Img2Img and Inpainting)

Updated 22.3K runs

A model that uses microsoft/Florence-2-large to create masks of watermarked images

Updated 30 runs

xl

Updated 318 runs

Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation

Updated 53K runs

Playground v2.0: A diffusion-based text-to-image generation model trained from scratch by the research team at Playground

Updated 54 runs

Kolors: Effective Training of Diffusion Model for Photorealistic Text-to-Image Synthesis

Updated 77 runs

CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion

Updated 19 runs