Explore

Generated with davisbrown/flux-half-illustration

Fine-tune FLUX

Customize FLUX.1 [dev] with Ostris's AI Toolkit on Replicate. Train the model to recognize and generate new concepts, such as specific styles, characters, or objects, from a small set of example images. (Generated with davisbrown/flux-half-illustration.)
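
As a rough sketch, a fine-tuning job can be started from the Replicate Python client. The trainer version hash, destination model, zip URL, and input parameters below are placeholders and assumptions; check the AI Toolkit trainer's page on Replicate for the exact values.

```python
import replicate  # requires REPLICATE_API_TOKEN in the environment

# Start a FLUX.1 [dev] fine-tune with Ostris's AI Toolkit trainer (sketch).
# NOTE: the version hash, destination, zip URL, and input fields are
# illustrative placeholders; consult the trainer's Replicate page.
training = replicate.trainings.create(
    version="ostris/flux-dev-lora-trainer:<version-hash>",  # placeholder version
    input={
        "input_images": "https://example.com/my-style-images.zip",  # zip of example images
        "trigger_word": "TOK",  # token that will invoke the new concept in prompts
        "steps": 1000,          # assumed training length
    },
    destination="your-username/flux-my-style",  # model that will receive the trained weights
)
print(training.status)
```

Once the training finishes, the destination model can be run like any other model on Replicate, with the trigger word included in the prompt.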

I want to…

Make videos with Wan2.1

Generate videos with Wan2.1, the fastest and highest-quality open-source video generation model.
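
A minimal sketch of text-to-video with the Replicate Python client follows; the model slug and input field names are assumptions, so substitute the real identifier and parameters from the model's page.

```python
import replicate

# Text-to-video with a Wan2.1 model on Replicate (sketch).
# The slug below and the input names are assumptions; use the actual
# model identifier and documented inputs from its Replicate page.
output = replicate.run(
    "wavespeedai/wan-2.1-t2v-480p",  # placeholder slug
    input={"prompt": "a red fox trotting through fresh snow at dawn"},
)
print(output)  # typically a URL or file object pointing to the generated video
```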

Upscale images

Upscaling models that create high-quality images from low-quality ones

Restore images

Models that improve or restore images by deblurring, colorizing, and removing noise

Use official models

Official models are always on, maintained, and have predictable pricing.

Enhance videos

Models that enhance videos with super-resolution, sound effects, motion capture and other useful production effects.

Detect objects

Models that detect or segment objects in images and videos.

Make 3D stuff

Models that generate 3D objects, scenes, radiance fields, textures and multi-views.

Use FLUX fine-tunes

Browse the diverse range of fine-tunes the community has custom-trained on Replicate

Control image generation

Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.
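
A hedged sketch of conditioning generation on a control image with the Replicate client is shown below; the model slug and input field names are illustrative placeholders rather than a specific model's real schema.

```python
import replicate

# Guide generation with an edge, depth, or sketch control image (sketch).
# The model slug and input fields are placeholders; pick a ControlNet-style
# model on Replicate and use its documented inputs.
with open("reference.png", "rb") as control_image:
    output = replicate.run(
        "some-owner/controlnet-canny",  # hypothetical example slug
        input={
            "image": control_image,  # conditioning image (edges, depth map, or sketch)
            "prompt": "a watercolor cottage on a hillside",
        },
    )
print(output)
```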

Latest models

📖 PuLID: Pure and Lightning ID Customization via Contrastive Alignment

Updated 2.6M runs

An example model created from the CLI

Updated 20 runs

PaliGemma 3B, an open VLM by Google, pre-trained with 224×224 input images and 128-token input/output text sequences

Updated 1.3K runs

A model which generates text in response to an input image and prompt.

Updated 1.6M runs

Generate images with transparent backgrounds

Updated 631 runs

Yi-1.5 is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples

Updated 63 runs

InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view LRMs

Updated 258.2K runs

BLIP-3 / XGen-MM: answers questions about images ({blip3,xgen-mm}-phi3-mini-base-r-v1)

Updated 1.2M runs

Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant.

Updated 1.6K runs

Returns CLIP features for dfn5b-clip-vit-h-14-384, currently the highest average performance on the OpenCLIP models leaderboard.

Updated 385 runs

Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant.

Updated 74K runs

Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant.

Updated 6.1K runs

Implementation of the RemBG library

Updated 345 runs

BLIP3(XGen-MM) is a series of foundational Large Multimodal Models (LMMs) developed by Salesforce AI Research

Updated 383 runs

Transcribe audio using OpenAI's Whisper, with stabilized timestamps via the stable-ts Python package.

Updated 132 runs

Use a face to instantly make images. Uses SDXL Lightning checkpoints.

Updated 122.1K runs

Cog to turn minimally formatted plaintext into PDFs (using TeX on the backend)

Updated 100 runs

Dark Sushi Mix 2.25D Model with vae-ft-mse-840000-ema (Text2Img, Img2Img and Inpainting)

Updated 59.2K runs

DeepSeek LLM, an advanced language model comprising 67 billion parameters. Trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese

Updated 455 runs

Turns text into PDF files with TeX

Updated 252 runs

A Llama 3-based moderation and safeguarding language model

Updated 733.3K runs

A fine-tuned model to detect dragons in images.

Updated 32 runs

InstantID. ControlNets. More base SDXL models. And ByteDance's latest ⚡️SDXL-Lightning!⚡️

Updated 286.6K runs

An img2img pipeline that makes an anime-style image of a person. It uses an SD 1.5 model as a base, depth estimation as a ControlNet, and an IP-Adapter model for face consistency.

Updated 117 runs

Consistent Self-Attention for Long-Range Image and Video Generation

Updated 71K runs

Optimized model

Updated 247 runs

StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation

Updated 1K runs

Robust face restoration algorithm for old photos / AI-generated faces (adapted to work with video inputs)

Updated 495 runs

Generate anime-style images

Updated 457 runs

Semantic Segmentation

Updated 1.1M runs

Just some good ole BeautifulSoup URL-scraping magic. (Some sites don't work because they block scraping, but it's still useful.)

Updated 55.1K runs

Tango 2: Use text prompts to make sound effects

Updated 25.2K runs

🗣️ TalkNet-ASD: Detect who is speaking in a video

Updated 95 runs

Transfer a material from an image to a subject

Updated 9K runs

Uses 'Align Your Steps' for faster, higher-quality images

Updated 5K runs

PyTorch implementation of AnimeGAN for fast photo animation

Updated 31.3K runs

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

Updated 2.7K runs

Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis

Updated 1.4K runs

An LLM for Chinese (CN)

Updated 215 runs

Reliberate v3 Model (Text2Img, Img2Img and Inpainting)

Updated 2.2M runs

Deliberate V6 Model (Text2Img, Img2Img and Inpainting)

Updated 11.5K runs

AbsoluteReality V1.8.1 Model (Text2Img, Img2Img and Inpainting)

Updated 79.2K runs

Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets

Updated 52.8K runs