afiaka87/clip-guided-diffusion:840b2ec1
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
prompt | string | | A caption to visualize. |
size | integer (enum) | | Image size. Options: 128, 256, 512. |
clip_guidance_scale | integer | 1000 | Scale for CLIP spherical distance loss. Values will need tinkering for different settings. Max: 2500. |
tv_scale | number | 150.0 | Scale for TV loss. Typical values are 0, 100, 150, and 200. Max: 250.0. |
range_scale | number | 50.0 | Controls how far out of RGB range values may get. Max: 250.0. |
sat_scale | number | 0.0 | Controls how much saturation is allowed. Use for ddim respacing. From @nshepperd. Max: 128.0. |
respace | string (enum) | 250 | Number of timesteps. Options: 25, 50, 100, 200, 250, 500, 1000, ddim25, ddim50, ddim100, ddim200, ddim250, ddim500, ddim1000. |
init_image | string | | An image to blend with diffusion before CLIP guidance begins. Uses half as many timesteps. |
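For concreteness, here is a minimal sketch of invoking this version with the `replicate` Python client, using mostly the defaults above. The `840b2ec1` version id is truncated as shown on this page, so substitute the full version hash; the specific input values are illustrative choices, not recommendations.

```python
import replicate

# NOTE: "840b2ec1" is the truncated version id shown on this page;
# replace it with the full version hash before running.
outputs = replicate.run(
    "afiaka87/clip-guided-diffusion:840b2ec1",
    input={
        "prompt": "an oil painting of a lighthouse at dusk",  # illustrative caption
        "size": 256,                  # one of 128, 256, 512
        "clip_guidance_scale": 1000,  # max 2500
        "tv_scale": 150.0,            # max 250.0
        "range_scale": 50.0,          # max 250.0
        "sat_scale": 0.0,             # max 128.0; intended for ddim respacing
        "respace": "250",             # e.g. "250" or "ddim50"
    },
)
```

Because the output is a Cog iterator (see the output schema below), `outputs` yields results as they are produced rather than returning a single value.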
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
```json
{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "file": {
        "type": "string",
        "format": "uri",
        "x-order": 0
      },
      "text": {
        "type": "string",
        "x-order": 1
      }
    }
  },
  "x-cog-array-type": "iterator"
}
```
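Since `x-cog-array-type: iterator` means items stream back one at a time, a short sketch of consuming them, assuming each item arrives as a plain dict matching the object schema above (the output filename is an arbitrary choice):

```python
import urllib.request

for i, item in enumerate(outputs):
    # Each item has a `file` URI and a `text` string per the schema above.
    print(item["text"], item["file"])
    # Save the image the URI points to (filename is illustrative).
    urllib.request.urlretrieve(item["file"], f"output_{i}.png")
```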