
afiaka87/clip-guided-diffusion:a9650e4b

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

prompt (string)
  Text prompt to use.

init_image (string)
  An image to blend with diffusion before CLIP guidance begins. Uses half as many timesteps.

respace (string enum, default: 250)
  Number of timesteps. Fewer is faster, but less accurate.
  Options: 25, 50, 100, 200, 250, ddim25, ddim50, ddim100, ddim200, ddim250

clip_guidance_scale (integer, default: 1000, max: 2500)
  Scale for the CLIP spherical distance loss. Values will need tinkering for different settings.

tv_scale (number, default: 50, max: 250)
  Scale for a denoising loss that affects the last half of the diffusion process. Typical values are 0, 100, 150, and 200.

range_scale (number, default: 50, max: 250)
  Controls how far outside the RGB range values may go.

sat_scale (number, default: 0, max: 128)
  Controls how much saturation is allowed. Use for ddim. From @nshepperd.

use_augmentations (boolean, default: false)
  Whether to use augmentations during prediction. May help with ddim and respacing <= 100.

use_magnitude (boolean, default: false)
  Use the magnitude of the loss. May help (only) with ddim and respacing <= 100.

seed (integer, default: 0)
  Seed for reproducibility.
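
A minimal sketch of calling this version from Python, assuming the official replicate client is installed and the REPLICATE_API_TOKEN environment variable is set. The prompt and settings below are illustrative placeholders, and the abbreviated version id shown on this page would need to be replaced with the full version hash.

import replicate

# Fields omitted from `input` fall back to the defaults listed above.
# "a9650e4b" is the abbreviated version id from the page header; use the
# full version hash when actually calling the API.
output = replicate.run(
    "afiaka87/clip-guided-diffusion:a9650e4b",
    input={
        "prompt": "an oil painting of a lighthouse at dawn",  # placeholder prompt
        "respace": "ddim100",
        "clip_guidance_scale": 1000,
        "tv_scale": 100,
        "seed": 0,
    },
)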

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output",
  "x-cog-array-type": "iterator"
}
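
The output is an array of URI strings delivered as an iterator, so images can be handled as they are produced. A minimal sketch of saving them to disk, assuming the call above returned plain URL strings as the schema describes and that requests is available; the file names are illustrative.

import requests

# Each output item is a URI string pointing at a generated image.
for i, url in enumerate(output):
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    with open(f"output_{i}.png", "wb") as f:
        f.write(response.content)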