
cloneofsimo/lora_pti:8a752dd3

Input schema

The fields you can use to run this model with an API. If you don’t give a value for a field, its default value will be used.

Field Type Default value Description
instance_data
string
A ZIP file containing your training images (JPG, PNG, etc.; size is not restricted). These images should contain the 'subject' you want the trained model to embed in the output domain, so that it can later generate customized scenes beyond the training images. For best results, use images free of noise and unrelated background objects.
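As a quick sketch of preparing `instance_data` (the folder and file names here are hypothetical), a directory of images can be bundled with Python's standard `zipfile` module:

```python
import zipfile
from pathlib import Path

def zip_training_images(image_dir: str, out_path: str = "instance_data.zip") -> str:
    """Bundle the JPG/PNG images in image_dir into a flat ZIP for upload."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for img in sorted(Path(image_dir).glob("*")):
            if img.suffix.lower() in {".jpg", ".jpeg", ".png"}:
                zf.write(img, arcname=img.name)  # store at the archive root
    return out_path
```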
class_data
string
An optional ZIP file containing training data of class images, corresponding to `class_prompt` above; its purpose is to keep the model generalizable. By default, the pretrained Stable Diffusion model will generate N class images (where N is the `num_class_images` you set) from the `class_prompt` provided. To save time, or to use your own preferred set of class images, you can instead supply them here as a ZIP file.
seed
integer
1337
A seed for reproducible training
resolution
integer
512
The resolution for input images. All the images in the train/validation dataset will be resized to this resolution.
train_text_encoder
boolean
True
Whether to train the text encoder
train_batch_size
integer
1
Batch size (per device) for the training dataloader.
gradient_accumulation_steps
integer
4
Number of update steps to accumulate before performing a backward/update pass.
gradient_checkpointing
boolean
False
Whether or not to use gradient checkpointing to save memory at the expense of a slower backward pass.
scale_lr
boolean
True
Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.
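As a minimal sketch (assuming the linear scaling rule used by diffusers-style training scripts), the scaled learning rate is the base rate multiplied by the factors that make up the effective batch size:

```python
def scaled_lr(base_lr: float, num_gpus: int, grad_accum_steps: int, batch_size: int) -> float:
    """Linear LR scaling: multiply the base rate by the effective batch size,
    i.e. batch_size * grad_accum_steps * num_gpus."""
    return base_lr * batch_size * grad_accum_steps * num_gpus

# With the defaults above on one GPU, learning_rate_unet (0.0001) is scaled by 4:
effective = scaled_lr(0.0001, num_gpus=1, grad_accum_steps=4, batch_size=1)
```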
lr_scheduler
string (enum)
constant

Options:

linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup

The scheduler type to use.
lr_warmup_steps
integer
0
Number of steps for the warmup in the lr scheduler.
use_8bit_adam
boolean
False
Whether or not to use 8-bit Adam from bitsandbytes.
clip_ti_decay
boolean
True
Whether or not to clip the TI decay to be between 0 and 1.
color_jitter
boolean
True
Whether or not to use color jitter.
continue_inversion
boolean
False
Whether or not to continue an inversion.
continue_inversion_lr
number
0.0001
The learning rate for continuing an inversion.
device
string
cuda:0
The device to use. Can be 'cuda' or 'cpu'.
initializer_tokens
string
The tokens to use for the initializer. If not provided, embeddings are randomly initialized from a Gaussian N(0, 0.017^2).
learning_rate_text
number
0.00001
The learning rate for the text encoder.
learning_rate_ti
number
0.0005
The learning rate for the TI.
learning_rate_unet
number
0.0001
The learning rate for the UNet.
lora_rank
integer
4
The rank of the LoRA update matrices.
lr_scheduler_lora
string (enum)
constant

Options:

linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup

The scheduler type to use.
lr_warmup_steps_lora
integer
0
Number of steps for the warmup in the lr scheduler.
max_train_steps_ti
integer
500
The maximum number of training steps for the TI.
max_train_steps_tuning
integer
1000
The maximum number of training steps for the tuning.
perform_inversion
boolean
True
Whether or not to perform an inversion.
placeholder_token_at_data
string
A mapping of the form `token|placeholder` (a string, despite the name): occurrences of `token` in your caption data are replaced with the placeholder tokens during training.
placeholder_tokens
string
<s1>|<s2>
The placeholder tokens to use for the initializer. If not provided, the first tokens of the data will be used.
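The default `<s1>|<s2>` above is a pipe-separated list. A small sketch of how such a spec splits into individual tokens, which can then stand in for the subject in prompts (the prompt text is only an illustration):

```python
def parse_placeholder_tokens(spec: str) -> list[str]:
    """Split a pipe-separated placeholder-token spec into individual tokens."""
    return spec.split("|")

tokens = parse_placeholder_tokens("<s1>|<s2>")
# After training, the concatenated placeholder tokens refer to the subject:
prompt = "a photo of " + "".join(tokens)
```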
save_steps
integer
100
The number of steps between saving checkpoints.
use_face_segmentation_condition
boolean
True
Whether or not to use the face segmentation condition.
use_template
string
object
The template to use for the inversion.
weight_decay_lora
number
0.001
The weight decay for the LoRA parameters.
weight_decay_ti
number
0
The weight decay for the TI.
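Putting the schema together, here is a hedged sketch of calling this version through the Replicate Python client (`pip install replicate`, with `REPLICATE_API_TOKEN` set). The ZIP filename is hypothetical, and any field left out of `input` falls back to the defaults listed above:

```python
# Fields we override explicitly; everything else uses the schema defaults above.
train_inputs = {
    "seed": 1337,
    "resolution": 512,
    "train_text_encoder": True,
    "lora_rank": 4,
    "placeholder_tokens": "<s1>|<s2>",
    "max_train_steps_ti": 500,
    "max_train_steps_tuning": 1000,
}

def run_training(zip_path: str):
    """Invoke the model; requires network access and an API token."""
    import replicate
    with open(zip_path, "rb") as f:
        return replicate.run(
            "cloneofsimo/lora_pti:8a752dd3",  # version as shown above
            input={**train_inputs, "instance_data": f},
        )

# run_training("instance_data.zip") returns a URI string, per the output schema.
```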

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"format": "uri", "title": "Output", "type": "string"}
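Since the output is a single URI string, a minimal standard-library sketch of checking a result against this schema (the example URL shape is hypothetical, for illustration only):

```python
from urllib.parse import urlparse

def looks_like_uri(value) -> bool:
    """Minimal check that a model output matches the schema: a string
    whose parsed form has both a scheme and a network location."""
    if not isinstance(value, str):
        return False
    parsed = urlparse(value)
    return bool(parsed.scheme) and bool(parsed.netloc)

# Hypothetical output URL, shaped like a file-delivery link:
ok = looks_like_uri("https://replicate.delivery/output/trained_model.safetensors")
```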