fofr/sd3-explorer:a9f4aebd
Input schema
The fields you can use to run this model with an API. If you don’t give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
prompt | string | | This prompt is ignored when using the triple prompt mode (see below). |
model | string (enum) | sd3_medium_incl_clips_t5xxlfp16.safetensors | Pick whether to use T5-XXL in fp16, fp8 or not at all. We recommend fp16 for this model as it has the best image quality; when running locally we recommend fp8 for lower memory usage. All versions are included here for exploration. Options: sd3_medium_incl_clips.safetensors, sd3_medium_incl_clips_t5xxlfp16.safetensors, sd3_medium_incl_clips_t5xxlfp8.safetensors |
width | integer | 1024 | The width of the image. Best output at around 1 megapixel; the resolution must be divisible by 64. |
height | integer | 1024 | The height of the image. Best output at around 1 megapixel; the resolution must be divisible by 64. |
steps | integer | 28 | The number of steps to run the model for. More steps give a better image but slower generation; best results for this model are around 26 to 36 steps. |
sampler | string (enum) | dpmpp_2m | The sampler to use (used to manage noise). Options: euler, euler_ancestral, heun, heunpp2, dpm_2, dpm_2_ancestral, lms, dpm_fast, dpm_adaptive, dpmpp_2s_ancestral, dpmpp_sde, dpmpp_sde_gpu, dpmpp_2m, dpmpp_2m_sde, dpmpp_2m_sde_gpu, dpmpp_3m_sde, dpmpp_3m_sde_gpu, ddpm, lcm, ddim, uni_pc, uni_pc_bh2 |
scheduler | string (enum) | sgm_uniform | The scheduler to use (used to manage noise; do not use karras). Options: normal, karras, exponential, sgm_uniform, simple, ddim_uniform |
shift | number | 3 (max: 20) | The timestep scheduling shift. Shift values higher than 1.0 are better at managing noise at higher resolutions; try values like 6.0 and 2.0 to experiment with the effect. |
guidance_scale | number | 3.5 (max: 20) | The guidance scale tells the model how similar the output should be to the prompt. We recommend between 3.5 and 4.5; if images look 'burnt', lower the value. |
number_of_images | integer | 1 (min: 1, max: 10) | The number of images to generate |
use_triple_prompt | boolean | False | Whether to use triple prompt mode, passing separate prompts to each of the three text encoders via the triple_prompt_* fields below. |
triple_prompt_clip_g | string | | The prompt that will be passed to just the CLIP-G model. |
triple_prompt_clip_l | string | | The prompt that will be passed to just the CLIP-L model. |
triple_prompt_t5 | string | | The prompt that will be passed to just the T5-XXL model. |
triple_prompt_empty_padding | boolean | True | Whether to add padding for empty prompts. Useful if you only want to pass a prompt to one or two of the three text encoders. Has no effect when all prompts are filled. Disable this for interesting effects. |
negative_prompt | string | | Negative prompts do not really work in SD3; this will simply cause your output image to vary in unpredictable ways. |
negative_conditioning_end | number | 0 (max: 1) | When the negative conditioning should stop being applied. Disabled by default; if you want to try a negative prompt, start with a value of 0.1. |
output_format | string (enum) | webp | Format of the output images. Options: webp, jpg, png |
output_quality | integer | 80 (max: 100) | Quality of the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. |
seed | integer | | Set a seed for reproducibility. Random by default. |
Output schema
The shape of the response you’ll get when you run this model with an API.
```json
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output"
}
```
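
Since the output is an array of URI strings, saving the generated images is just a matter of downloading each URL. A small sketch, assuming `output` is the list returned by the `replicate.run` call above and that `requests` is installed; the filenames are illustrative:

```python
# Minimal sketch: download each generated image from the returned URIs.
import requests

for i, url in enumerate(output):
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    # Extension should match the output_format you requested (webp by default).
    with open(f"output_{i}.webp", "wb") as f:
        f.write(resp.content)
```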