lucataco / omnigen2

OmniGen2: a powerful and efficient unified multimodal model


Run time and cost

This model costs approximately $0.13 to run on Replicate, or 7 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 130 seconds. The predict time for this model varies significantly based on the inputs.

Readme

Introduction

OmniGen2 is a powerful and efficient unified multimodal model. Unlike OmniGen v1, OmniGen2 features two distinct decoding pathways for text and image modalities, utilizing unshared parameters and a decoupled image tokenizer. OmniGen2 has competitive performance across four primary capabilities:

  • Visual Understanding: Inherits the robust ability to interpret and analyze image content from its Qwen-VL-2.5 foundation.
  • Text-to-Image Generation: Creates high-fidelity and aesthetically pleasing images from textual prompts.
  • Instruction-guided Image Editing: Executes complex, instruction-based image modifications with high precision, achieving state-of-the-art performance among open-source models.
  • In-context Generation: A versatile capability to process and flexibly combine diverse inputs—including humans, reference objects, and scenes—to produce novel and coherent visual outputs.

As an open-source project, OmniGen2 provides a powerful yet resource-efficient foundation for researchers and developers exploring the frontiers of controllable and personalized generative AI.

We will release the training code, dataset, and data construction pipeline soon. Stay tuned!


Demonstration of OmniGen2's overall capabilities.


Demonstration of OmniGen2's image editing capabilities.


Demonstration of OmniGen2's in-context generation capabilities.

💡 Usage Tips

To achieve optimal results with OmniGen2, you can adjust the following key hyperparameters based on your specific use case.

  • text_guidance_scale: Controls how strictly the output adheres to the text prompt (Classifier-Free Guidance).
  • image_guidance_scale: Controls how closely the final image should resemble the input reference image.
    • The Trade-off: A higher value makes the output more faithful to the reference image’s structure and style, but it might ignore parts of your text prompt. A lower value (~1.5) gives the text prompt more influence.
    • Tip: For image editing tasks, we recommend setting it between 1.2 and 2.0. For in-context generation tasks, a higher image_guidance_scale preserves more detail from the input images; we recommend setting it between 2.5 and 3.0.
  • max_pixels: Automatically resizes input images when their total pixel count (width × height) exceeds this limit, while maintaining their aspect ratio. This helps manage performance and memory usage.
    • Tip: The default value is 1024×1024. You can reduce this value if you encounter memory issues.
  • max_input_image_side_length: Maximum side length for input images.
  • negative_prompt: Tells the model what you don’t want to see in the image.
    • Example: blurry, low quality, text, watermark
    • Tip: For the best results, try experimenting with different negative prompts. If you’re not sure, just use the default negative prompt.
  • enable_model_cpu_offload: Reduces VRAM usage by nearly 50% with a negligible impact on speed.
    • This is achieved by offloading the model weights to CPU RAM when they are not in use.
    • See: Model Offloading
  • enable_sequential_cpu_offload: Minimizes VRAM usage to less than 3GB, but at the cost of significantly slower performance.
    • This works by offloading the model in submodules and loading them onto the GPU sequentially as needed.
    • See: CPU Offloading
  • cfg_range_start, cfg_range_end: Define the timestep range where CFG is applied. Per this paper, reducing cfg_range_end can significantly decrease inference time with a negligible impact on quality.
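The max_pixels resizing behavior described above can be sketched as follows. This is an illustrative helper, not the model's actual preprocessing code; it simply shows how an image whose area exceeds the limit can be scaled down by a single factor so the aspect ratio is preserved:

```python
import math

def fit_to_max_pixels(width: int, height: int, max_pixels: int = 1024 * 1024) -> tuple[int, int]:
    """If width * height exceeds max_pixels, scale both sides by the same
    factor so the total area fits within the limit, keeping the aspect ratio."""
    area = width * height
    if area <= max_pixels:
        return width, height  # already within the limit; no resize needed
    scale = math.sqrt(max_pixels / area)
    return max(1, int(width * scale)), max(1, int(height * scale))

# A 2048×2048 input is halved on each side to fit the default 1024×1024 budget.
print(fit_to_max_pixels(2048, 2048))  # → (1024, 1024)
```

Because the scale factor is applied to both dimensions, a non-square input such as 2048×1024 is reduced proportionally rather than cropped.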

Some suggestions for improving generation quality:

  1. Use High-Quality Images
    • Provide clear images, preferably with a resolution greater than 512×512 pixels.
    • Small or blurry inputs will result in low-quality outputs.
  2. Be Specific with Instructions
    • Clearly describe both what to change and how you want it changed.
    • For in-context generation tasks, explicitly state which elements should come from which image. For example, instead of “Add bird to desk”, say “Add the bird from image 1 onto the desk in image 2.”
  3. Prioritize English
    • The model currently performs best with English prompts.
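Putting the tips together, a request to this model might look like the sketch below. The input field names follow the hyperparameters described above, but the exact schema on Replicate may differ, and the guidance-scale values shown are illustrative choices for an in-context generation task, not verified defaults:

```python
# Hypothetical input for an in-context generation run, following the tips above:
# a specific instruction naming which element comes from which image, the
# example negative prompt, and an image_guidance_scale in the 2.5-3.0 range.
inputs = {
    "prompt": "Add the bird from image 1 onto the desk in image 2.",
    "image_guidance_scale": 2.5,  # higher keeps more detail from input images
    "negative_prompt": "blurry, low quality, text, watermark",
    "max_pixels": 1024 * 1024,    # default; lower this if you hit memory limits
}

# With the Replicate Python client installed and REPLICATE_API_TOKEN set,
# the call would look like:
#   import replicate
#   output = replicate.run("lucataco/omnigen2", input=inputs)
print(sorted(inputs))
```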

❤️ Citing Us

If you find this repository or our work useful, please consider giving it a star ⭐ and a citation 🦖; this would be greatly appreciated (the OmniGen2 report will be available as soon as possible):

@article{wu2025omnigen2,
  title={OmniGen2: Exploration to Advanced Multimodal Generation},
  author={Chenyuan Wu and Pengfei Zheng and Ruiran Yan and Shitao Xiao and Xin Luo and Yueze Wang and Wanli Li and Xiyan Jiang and Yexin Liu and Junjie Zhou and Ze Liu and Ziyi Xia and Chaofan Li and Haoge Deng and Jiahao Wang and Kun Luo and Bo Zhang and Defu Lian and Xinlong Wang and Zhongyuan Wang and Tiejun Huang and Zheng Liu},
  journal={arXiv preprint arXiv:2506.18871},
  year={2025}
}

License

This work is licensed under Apache 2.0 license.