
tiger-ai-lab/anyv2v:25352ada

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
| --- | --- | --- | --- |
| video | string | | Input video |
| instruct_pix2pix_prompt | string | turn man into robot | The first step involves using timbrooks/instruct-pix2pix to edit the first frame. Specify the prompt for editing the first frame. |
| editing_prompt | string | a man doing exercises for the body and mind | Describe the input video |
| editing_negative_prompt | string | Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms | Things not to see in the edited video |
| num_inference_steps | integer | 50 | Number of denoising steps (min: 1, max: 500) |
| guidance_scale | number | 9 | Scale for classifier-free guidance (min: 1, max: 20) |
| pnp_f_t | number | 0.2 | Convolution injection value (max: 1) |
| pnp_spatial_attn_t | number | 0.2 | Self-attention injection value (max: 1) |
| pnp_temp_attn_t | number | 0.5 | Temporal attention injection value (max: 1) |
| ddim_init_latents_t_idx | integer | 0 | Index of the starting latent, ranging from 0 to (num_inference_steps - 1) |
| ddim_inversion_steps | integer | 500 | Number of DDIM inversion steps |
| seed | integer | | Random seed. Leave blank to randomize the seed |
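The schema above can be exercised, for example, with the official `replicate` Python client. The sketch below is a hedged illustration, not part of the model's documentation: it assembles an input payload from the documented defaults and range-checks the numeric fields before submission. The minimum values for the `pnp_*` fields are an assumption (the schema lists only a maximum of 1), and the example video URL is a placeholder.

```python
# Sketch: build and sanity-check an input payload for tiger-ai-lab/anyv2v.
# Field names, defaults, and bounds come from the input schema above.

# Documented (min, max) bounds per numeric field.
# Assumption: the pnp_* injection values have an implicit minimum of 0;
# the schema only states "Max: 1" for them.
BOUNDS = {
    "num_inference_steps": (1, 500),
    "guidance_scale": (1, 20),
    "pnp_f_t": (0, 1),
    "pnp_spatial_attn_t": (0, 1),
    "pnp_temp_attn_t": (0, 1),
}

def build_input(video_url, **overrides):
    """Merge overrides onto the schema defaults and range-check them."""
    payload = {
        "video": video_url,
        "instruct_pix2pix_prompt": "turn man into robot",
        "editing_prompt": "a man doing exercises for the body and mind",
        "num_inference_steps": 50,
        "guidance_scale": 9,
        "pnp_f_t": 0.2,
        "pnp_spatial_attn_t": 0.2,
        "pnp_temp_attn_t": 0.5,
        "ddim_init_latents_t_idx": 0,
        "ddim_inversion_steps": 500,
    }
    payload.update(overrides)
    for field, (lo, hi) in BOUNDS.items():
        if not lo <= payload[field] <= hi:
            raise ValueError(f"{field}={payload[field]} outside [{lo}, {hi}]")
    # The starting-latent index must fall in [0, num_inference_steps - 1].
    if not 0 <= payload["ddim_init_latents_t_idx"] < payload["num_inference_steps"]:
        raise ValueError("ddim_init_latents_t_idx out of range")
    return payload

# Placeholder URL; override any field as a keyword argument.
inputs = build_input("https://example.com/input.mp4", guidance_scale=12)

# With a REPLICATE_API_TOKEN set, the model could then be run as:
# import replicate
# output_url = replicate.run("tiger-ai-lab/anyv2v:25352ada", input=inputs)
```

Validating locally keeps a mistyped parameter (say, `guidance_scale=90`) from costing a remote prediction that the API would reject anyway.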

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"format": "uri", "title": "Output", "type": "string"}
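Per this schema, the response is a single string formatted as a URI pointing at the rendered video. A minimal sketch of consuming it, where the URL shown is a placeholder and not real model output:

```python
from urllib.parse import urlparse

# The output schema says the response is one string with format "uri".
output = "https://replicate.delivery/pbxt/example/output.mp4"  # placeholder

# A basic structural check before handing the URL to a downloader.
parsed = urlparse(output)
assert parsed.scheme in ("http", "https") and parsed.netloc

# The file could then be fetched with, e.g.:
# urllib.request.urlretrieve(output, "edited.mp4")
```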