
replicate/dreambooth:7ed00f4e

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
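
For example, a training run with a handful of these fields might be kicked off like this, using the Replicate Python client. This is only a sketch: the prompts, the "sks" identifier token, and the ZIP path are placeholders, and the version id is the abbreviated one shown above.

    import replicate

    # Hypothetical invocation; any field left out falls back to its default below.
    output = replicate.run(
        "replicate/dreambooth:7ed00f4e",  # abbreviated version id, as shown above
        input={
            "instance_prompt": "a photo of sks person",  # "sks" is a placeholder identifier
            "class_prompt": "a photo of a person",
            "instance_data": open("instance_images.zip", "rb"),  # placeholder path
            "max_train_steps": 2000,
        },
    )
    print(output)  # a URI string, per the output schema at the end of this page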

Each field is listed as name (type, default): description.

instance_prompt (string): The prompt, with an identifier, specifying the instance.
class_prompt (string): The prompt specifying images in the same class as the provided instance images.
instance_data (string): A ZIP file containing the training data of instance images.
class_data (string): A ZIP file containing the training data of class images. Images will be generated if this is not provided.
num_class_images (integer, default 50): Minimum number of class images for the prior preservation loss. If class_data does not contain enough images, additional images will be sampled with class_prompt.
save_sample_prompt (string): The prompt used to generate sample outputs to save.
save_sample_negative_prompt (string): The negative prompt used to generate sample outputs to save.
n_save_sample (integer, default 4): The number of samples to save.
save_guidance_scale (number, default 7.5): Classifier-free guidance scale for the saved samples.
save_infer_steps (integer, default 50): The number of inference steps for the saved samples.
pad_tokens (boolean, default False): Whether to pad tokens to length 77.
with_prior_preservation (boolean, default True): Whether to add the prior preservation loss (see the sketch after this list).
prior_loss_weight (number, default 1): Weight of the prior preservation loss.
seed (integer, default 1337): A seed for reproducible training.
resolution (integer, default 512): The resolution for input images. All images in the train/validation dataset will be resized to this resolution.
center_crop (boolean, default False): Whether to center-crop images before resizing to the target resolution.
train_text_encoder (boolean, default True): Whether to train the text encoder.
train_batch_size (integer, default 1): Batch size (per device) for the training dataloader.
sample_batch_size (integer, default 4): Batch size (per device) for sampling images.
num_train_epochs (integer, default 1): Number of training epochs to perform; ignored when max_train_steps is provided.
max_train_steps (integer, default 2000): Total number of training steps to perform. If provided, overrides num_train_epochs.
gradient_accumulation_steps (integer, default 1): Number of update steps to accumulate before performing a backward/update pass.
gradient_checkpointing (boolean, default False): Whether to use gradient checkpointing to save memory at the expense of a slower backward pass.
learning_rate (number, default 0.000001): Initial learning rate (after the potential warmup period) to use.
scale_lr (boolean, default False): Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size (see the sketch after this list).
lr_scheduler (string enum, default constant): The scheduler type to use. Options: linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup.
lr_warmup_steps (integer, default 0): Number of steps for the warmup in the lr scheduler.
use_8bit_adam (boolean, default False): Whether to use 8-bit Adam from bitsandbytes.
adam_beta1 (number, default 0.9): The beta1 parameter for the Adam optimizer.
adam_beta2 (number, default 0.999): The beta2 parameter for the Adam optimizer.
adam_weight_decay (number, default 0.01): Weight decay for the Adam optimizer.
adam_epsilon (number, default 0.00000001): Epsilon value for the Adam optimizer.
max_grad_norm (number, default 1): Max gradient norm for gradient clipping.
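
The prior-preservation fields (with_prior_preservation, prior_loss_weight, num_class_images) follow the standard DreamBooth recipe: class images generated from class_prompt regularize training so the model keeps its general notion of the class while learning the new instance. A minimal sketch of how the two loss terms are conventionally combined (this is the textbook DreamBooth formulation, not code taken from this model):

    import torch
    import torch.nn.functional as F

    def dreambooth_loss(model_pred, target, prior_loss_weight=1.0):
        # Assumes each batch stacks instance examples first and class examples
        # second, so both tensors split evenly into the two halves.
        pred_instance, pred_prior = torch.chunk(model_pred, 2, dim=0)
        target_instance, target_prior = torch.chunk(target, 2, dim=0)

        instance_loss = F.mse_loss(pred_instance, target_instance)  # learn the subject
        prior_loss = F.mse_loss(pred_prior, target_prior)           # preserve the class
        return instance_loss + prior_loss_weight * prior_loss

The optimizer and scheduler fields likewise map onto a conventional diffusers-style setup: the adam_* values configure AdamW (or the bitsandbytes 8-bit variant when use_8bit_adam is set), lr_scheduler and lr_warmup_steps match the names accepted by diffusers' get_scheduler, and scale_lr applies the multiplier shown in the comment. A sketch under those assumptions, with args standing in for the fields above:

    import torch
    from diffusers.optimization import get_scheduler

    def build_optimizer_and_scheduler(params, args, num_gpus=1):
        lr = args.learning_rate
        if args.scale_lr:
            # Scale the base rate by the total per-update batch size.
            lr *= args.gradient_accumulation_steps * args.train_batch_size * num_gpus

        if args.use_8bit_adam:
            import bitsandbytes as bnb  # optional dependency
            optimizer_cls = bnb.optim.AdamW8bit
        else:
            optimizer_cls = torch.optim.AdamW

        optimizer = optimizer_cls(
            params,
            lr=lr,
            betas=(args.adam_beta1, args.adam_beta2),
            weight_decay=args.adam_weight_decay,
            eps=args.adam_epsilon,
        )
        scheduler = get_scheduler(
            args.lr_scheduler,  # one of the enum values listed above
            optimizer=optimizer,
            num_warmup_steps=args.lr_warmup_steps,
            num_training_steps=args.max_train_steps,
        )
        return optimizer, scheduler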

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"format": "uri", "title": "Output", "type": "string"}
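
In the Python client this arrives as a plain URL string, so the result can be fetched directly. A short sketch, continuing from the run call above and assuming the URI points at a downloadable archive (the output filename is a placeholder):

    import requests

    response = requests.get(output)  # "output" is the URI returned by replicate.run
    response.raise_for_status()
    with open("dreambooth_output.zip", "wb") as f:  # placeholder filename
        f.write(response.content)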