# HunyuanVideo API (8-bit Version)
HunyuanVideo is a cutting-edge text-to-video generation model capable of creating high-quality videos from text descriptions. It surpasses many closed-source alternatives in text alignment, motion quality, and overall visual fidelity.
This API provides access to the 8-bit version of the model, which is optimized for less expensive GPUs and offers faster inference than the full HunyuanVideo model.
## Examples
```python
import replicate

output = replicate.run(
    "zurk/hunyuan-video-8bit:main",
    input={
        "prompt": "A cat walks on the grass, realistic style.",
        "negative_prompt": "Ugly",
        "width": 960,
        "height": 544,
        "video_length": 65,
        "embedded_guidance_scale": 6.0,
        "num_inference_steps": 40,
        "seed": 43,
    },
)
```
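Depending on your `replicate` client version, `output` may be a URL string, a list of URLs, or a file-like object. A minimal sketch for saving the video, assuming a single URL string is returned:

```python
import urllib.request

# Assumption: the model returned a single URL string for the video.
# Other client versions may return a list (take the first element)
# or a file-like object (write its .read() bytes to a file).
urllib.request.urlretrieve(output, "output.mp4")
```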
## Parameters
- `prompt` (string, required): Text description of the video you want to generate.
- `negative_prompt` (string, optional): Text describing elements you want to exclude from the video.
- `width` (integer, default: 960): Video width in pixels.
- `height` (integer, default: 544): Video height in pixels.
- `video_length` (integer, default: 65): Number of frames (maximum 129).
- `seed` (integer, optional): Random seed for reproducibility. If not specified, a random seed is used; its value appears in the logs.
- `embedded_guidance_scale` (float, default: 6.0): Scale for embedded guidance during generation.
- `num_inference_steps` (integer, default: 40): Number of denoising steps.
- `flow_shift` (float, default: 7.0): Flow-shift value used for motion control.
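Since `video_length` is measured in frames rather than seconds, it can help to convert a target duration into a frame count. A minimal sketch, assuming an output frame rate of roughly 24 fps (inferred from the 129-frame / ~5.3-second limit noted below; the `frames_for_duration` helper is hypothetical, not part of the API):

```python
FPS = 24  # assumption: ~24 fps output, inferred from 129 frames ~= 5.3 s

def frames_for_duration(seconds: float) -> int:
    """Rough frame count for a target duration (see the 4*n+1 rule below)."""
    return int(seconds * FPS)

print(frames_for_duration(2.7))  # -> 64, near the default video_length of 65
```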
## Limitations
- The maximum video length is 129 frames (approximately 5.3 seconds).
- The `video_length` parameter must follow the formula `4*n+1` (e.g., 17, 21, 25, etc.).
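Because `video_length` must satisfy these constraints, you can snap a desired frame count to the nearest valid value before calling the API. A minimal sketch (the `snap_video_length` helper is hypothetical, not part of this API), assuming the `4*n+1` rule and the 129-frame cap above:

```python
def snap_video_length(frames: int, max_frames: int = 129) -> int:
    """Clamp to [1, max_frames] and round to the nearest 4*n+1 value."""
    frames = max(1, min(frames, max_frames))
    n = round((frames - 1) / 4)       # nearest n such that 4*n+1 ~ frames
    return min(4 * n + 1, max_frames)

assert snap_video_length(65) == 65    # already valid
assert snap_video_length(66) == 65    # rounded to the nearest 4*n+1
assert snap_video_length(200) == 129  # capped at the maximum
```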
## Feedback
If you encounter any issues while using this API, please report them by opening an issue on GitHub Issues. I will address them as soon as possible.
For further details, visit the HunyuanVideo GitHub repository or explore the ComfyUI wrapper nodes for HunyuanVideo.