Explore
Featured models
minimax / video-01
Generate 6s videos with prompts or images. (Also known as Hailuo)
black-forest-labs / flux-fill-pro
Professional inpainting and outpainting model with state-of-the-art performance. Edit or extend images with natural, seamless results.
black-forest-labs / flux-1.1-pro-ultra
FLUX1.1 [pro] in ultra and raw modes. Images are up to 4 megapixels. Use raw mode for realism.
black-forest-labs / flux-redux-dev
Open-weight image variation model. Create new versions while preserving key elements of your original.
recraft-ai / recraft-v3
Recraft V3 (code-named red_panda) is a text-to-image model that can generate long passages of text within images and works across a wide range of styles. It is currently state of the art in image generation, as measured by the Text-to-Image Benchmark from Artificial Analysis.
davisbrown / flux-half-illustration
A Flux LoRA that creates half-photo, half-illustration elements; include "in the style of TOK" in the prompt to trigger generation.
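All of the featured models above can be called with Replicate's Python client via replicate.run(). Below is a minimal sketch, assuming input field names (prompt, raw, first_frame_image) inferred from the descriptions; the authoritative input schema lives on each model's API page.

```python
# Minimal sketch of calling the featured models with the Replicate
# Python client (pip install replicate; set REPLICATE_API_TOKEN).
# Input field names are assumptions inferred from the descriptions
# above -- check each model's API page for its real input schema.
import replicate

# FLUX1.1 [pro] ultra: images up to 4 megapixels; raw mode for realism.
ultra_image = replicate.run(
    "black-forest-labs/flux-1.1-pro-ultra",
    input={
        "prompt": "weathered fishing boat at dawn, 35mm film photograph",
        "raw": True,  # assumed name for the raw-mode switch
    },
)

# flux-half-illustration: include the trigger phrase "in the style of TOK"
# in the prompt, per the model description.
half_illustration = replicate.run(
    "davisbrown/flux-half-illustration",
    input={"prompt": "portrait of a cyclist in the style of TOK"},
)

# minimax/video-01 (Hailuo): 6-second videos from a prompt, optionally
# guided by a starting image.
video = replicate.run(
    "minimax/video-01",
    input={
        "prompt": "slow drone shot over a foggy pine forest",
        # "first_frame_image": open("start.png", "rb"),  # assumed field name
    },
)

print(ultra_image, half_illustration, video)
```

replicate.run() waits for the prediction to finish; depending on the client version, outputs come back as URLs or file-like objects.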
I want to…
Generate images
Models that generate images from text prompts
Use a language model
Models that can understand and generate text
Upscale images
Upscaling models that create high-quality images from low-quality images
Caption images
Models that generate text from images
The FLUX family of models
The FLUX family of text-to-image models from Black Forest Labs
Restore images
Models that improve or restore images by deblurring, colorizing, and removing noise
Get embeddings
Models that generate embeddings from inputs
Extract text from images
Optical character recognition (OCR) and text extraction
Transcribe speech
Models that convert speech to text
Use handy tools
Toolbelt-type models for videos and images.
Chat with images
Ask language models about images
Edit images
Tools for manipulating images.
Use a face to make images
Make realistic images of people instantly
Flux fine-tunes
Browse the diverse range of fine-tunes the community has custom-trained on Replicate
Generate music
Models to generate and modify music
Generate videos
Models that create and edit videos
Generate speech
Convert text to speech
Make 3D stuff
Models that generate 3D objects, scenes, radiance fields, textures and multi-views.
Get structured data
Language models that support grammar-based decoding as well as JSON Schema constraints.
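For the "Get structured data" category, a JSON Schema constraint simply means the model's decoded output must conform to a schema you supply. How the schema is passed differs per model, so the sketch below shows only the schema itself plus a validation step; the invoice fields and sample output are made-up placeholders.

```python
# What a "JSON Schema constraint" looks like in practice: a schema the
# model's decoded output must satisfy. How the schema is supplied to a
# given model varies, so this illustrates only the schema side.
import json
from jsonschema import validate  # pip install jsonschema

invoice_schema = {
    "type": "object",
    "properties": {
        "vendor": {"type": "string"},
        "total": {"type": "number"},
        "currency": {"type": "string"},
    },
    "required": ["vendor", "total"],
}

# Placeholder output standing in for a grammar-constrained model response.
model_output = '{"vendor": "Acme Corp", "total": 120.5, "currency": "USD"}'

# Grammar-based decoding should guarantee validity, but validating the
# parsed result is a cheap safety check.
validate(instance=json.loads(model_output), schema=invoice_schema)
```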
Popular models
A simple OCR Model that can easily extract text from an image.
SDXL-Lightning by ByteDance: a fast text-to-image model that makes high-quality images in 4 steps
Fine-Tuned Vision Transformer (ViT) for NSFW Image Classification
A text-to-image generative AI model that creates beautiful images
Latest models
Pushing the Limits of Mathematical Reasoning in Open Language Models - Instruct model
Pushing the Limits of Mathematical Reasoning in Open Language Models - Base model
λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space
Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing with ControlNet
Incredibly fast Whisper using openai/whisper-large-v3, NOT the distil model
Guiding Instruction-based Image Editing via Multimodal Large Language Models
A capable large language model for natural language to SQL generation.
Yuan2.0 is a new-generation LLM developed by IEIT System, with enhanced understanding of semantics, mathematics, reasoning, code, knowledge, and other aspects.
LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation
phixtral-2x2_8 is the first Mixture of Experts (MoE) made with two microsoft/phi-2 models, inspired by the mistralai/Mixtral-8x7B-v0.1 architecture
BGE-M3, the first embedding model to support multiple retrieval modes, multilingual retrieval, and multi-granularity retrieval.
MetaVoice-1B: 1.2B parameter base model trained on 100K hours of speech
Incredibly fast Whisper using openai/whisper-medium.en, NOT the distil model
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
Background removal model developed by BRIA.AI, trained on a carefully selected dataset and available as an open-source model for non-commercial use.
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
NeverSleep's MiquMaid v1 70B Miqu Finetune, GGUF Q3_K_M quantized by NeverSleep.
Base version of Mamba 2.8B, a 2.8 billion parameter state space language model
Base version of Mamba 130M, a 130 million parameter state space language model
Base version of Mamba 370M, a 370 million parameter state space language model
Base version of Mamba 790M, a 790 million parameter state space language model
Base version of Mamba 2.8B SlimPajama, a 2.8 billion parameter state space language model
Base version of Mamba 1.4B, a 1.4 billion parameter state space language model
Bokeh Prediction, a hybrid bokeh rendering framework that combines a neural renderer with a classical approach. It generates high-resolution, adjustable bokeh effects from a single image and potentially imperfect disparity maps.
E5 embeddings fine-tuned for instructions, based on Mistral.
LLaVA v1.6: Large Language and Vision Assistant (Nous-Hermes-2-34B)
LLaVA v1.6: Large Language and Vision Assistant (Vicuna-13B)
LLaVA v1.6: Large Language and Vision Assistant (Mistral-7B)