Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
source_image | string | | Upload the source image; it can be a video (video.mp4) or a picture (picture.png) |
driven_audio | string | | Upload the driving audio; accepts .wav and .mp4 files |
use_enhancer | boolean | False | Use GFPGAN as face enhancer |
pose_style | integer | 0 (max: 45) | Pose style |
expression_scale | number | 1 | A larger value makes the expression motion stronger |
use_eyeblink | boolean | True | Use eye blink |
preprocess | string (enum) | crop (options: crop, resize, full, extcrop, extfull) | Choose how to preprocess the images |
size_of_image | integer (enum) | 256 (options: 256, 512) | Face model resolution |
facerender | string (enum) | facevid2vid (options: facevid2vid, pirender) | Choose the face renderer |
still_mode | boolean | True | Still mode (less head motion; works with preprocess 'full') |
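A minimal sketch of running this model with the inputs above through the Replicate Python client. The model identifier `"owner/model:VERSION_ID"` is a placeholder, not this model's actual reference, and the input filenames are arbitrary examples; substitute the owner/name:version string shown on the model page.

```python
import replicate

# Placeholder identifier -- replace with the real owner/name:version string.
output = replicate.run(
    "owner/model:VERSION_ID",
    input={
        "source_image": open("picture.png", "rb"),   # face image, or a video file
        "driven_audio": open("speech.wav", "rb"),    # .wav or .mp4 audio track
        "use_enhancer": False,                       # GFPGAN face enhancement
        "pose_style": 0,                             # integer in 0..45
        "expression_scale": 1,                       # >1 exaggerates expression motion
        "use_eyeblink": True,
        "preprocess": "crop",                        # crop, resize, full, extcrop, extfull
        "size_of_image": 256,                        # 256 or 512
        "facerender": "facevid2vid",                 # or "pirender"
        "still_mode": True,                          # less head motion; pairs with "full"
    },
)
print(output)
```

Omitted fields fall back to the defaults listed in the table.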
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{"format": "uri", "title": "Output", "type": "string"}
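Per this schema the response is a single URI string pointing at the generated video. A short sketch of fetching it, continuing from the `output` value in the previous example; the local filename `result.mp4` is arbitrary.

```python
import urllib.request

# "output" is the URI string returned by replicate.run() above.
urllib.request.urlretrieve(output, "result.mp4")
```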