anotherjesse / dreambooth-21

preview release of Dreambooth supporting Stable Diffusion 2.1

  • Public
  • 25 runs

Run anotherjesse/dreambooth-21 with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
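
For example, here is a minimal sketch using the Replicate Python client. The version hash is a placeholder and the ZIP file name is hypothetical; copy the exact call from the Playground tab for your own project.

import replicate

# Requires the REPLICATE_API_TOKEN environment variable to be set.
# "VERSION_HASH" is a placeholder; use the current version shown in the Playground.
output = replicate.run(
    "anotherjesse/dreambooth-21:VERSION_HASH",
    input={
        "instance_prompt": "a photo of a sks dog",
        "class_prompt": "a photo of a dog",
        "instance_data": open("my-training-images.zip", "rb"),  # hypothetical ZIP of subject photos
        "max_train_steps": 2000,
    },
)
print(output)  # a URI string (see the output schema below)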

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

Each field is listed below with its name, type, default value (if any), and description.
pretrained_model
string (enum)
stabilityai/stable-diffusion-2-1-base

Options:

stabilityai/stable-diffusion-2-1-base, stabilityai/stable-diffusion-2-1

Model identifier from huggingface.co/models.
instance_prompt
string
The prompt you use to describe your training images, in the format: `a [identifier] [class noun]`, where `[identifier]` should be a rare token. Relatively short sequences of 1-3 letters work best (e.g. `sks`, `xjy`). `[class noun]` is a coarse class descriptor of the subject (e.g. cat, dog, watch, etc.). For example, your `instance_prompt` can be `a sks dog`, or with some extra description, `a photo of a sks dog`. The trained model will learn to bind the unique identifier to your specific subject in the `instance_data`.
class_prompt
string
The prompt or description of the coarse class of your training images, in the format `a [class noun]`, optionally with some extra description. `class_prompt` is used to alleviate overfitting to your customized images (the trained model should still keep the learned prior so that it can still generate different dogs when the `[identifier]` is not in the prompt). Corresponding to the `instance_prompt` examples above, the `class_prompt` can be `a dog` or `a photo of a dog` (see the example input after this schema).
instance_data
string
A ZIP file containing your training images (JPG, PNG, etc.; size is not restricted). These images contain the 'subject' that you want the trained model to embed in the output domain, so it can later generate customized scenes beyond the training images. For best results, use images without noise or unrelated objects in the background.
class_data
string
An optional ZIP file containing training images for the class. This corresponds to `class_prompt` above, and likewise serves to keep the model generalizable. By default, the pretrained Stable Diffusion model will generate N class images (where N is the `num_class_images` you set) from the `class_prompt` provided. To save time, or to use your own specific set of `class_data`, you can provide them in a ZIP file instead.
num_class_images
integer
50
Minimum number of class images for the prior preservation loss. If not enough images are provided in `class_data`, additional images will be generated using `class_prompt`.
save_sample_prompt
string
The prompt used to generate sample outputs to save.
save_sample_negative_prompt
string
The negative prompt used to generate sample outputs to save.
n_save_sample
integer
4
The number of samples to save.
save_guidance_scale
number
7.5
Classifier-free guidance (CFG) scale for the saved samples.
save_infer_steps
integer
50
The number of inference steps used when generating the saved samples.
pad_tokens
boolean
False
Flag to pad tokens to length 77.
with_prior_preservation
boolean
True
Flag to add prior preservation loss.
prior_loss_weight
number
1
Weight of prior preservation loss.
seed
integer
1337
A seed for reproducible training.
resolution
integer (enum)
512

Options:

512, 768

The resolution for input images. All the images in the train/validation dataset will be resized to this resolution.
center_crop
boolean
False
Whether to center-crop images before resizing them to the target resolution.
train_text_encoder
boolean
True
Whether to train the text encoder.
train_batch_size
integer
1
Batch size (per device) for the training dataloader.
sample_batch_size
integer
4
Batch size (per device) for sampling images.
num_train_epochs
integer
1
Total number of training epochs to perform. Ignored when max_train_steps is provided.
max_train_steps
integer
2000
Total number of training steps to perform. If provided, overrides num_train_epochs.
gradient_accumulation_steps
integer
1
Number of update steps to accumulate before performing a backward/update pass.
gradient_checkpointing
boolean
False
Whether or not to use gradient checkpointing to save memory, at the expense of a slower backward pass.
learning_rate
number
0.000001
Initial learning rate (after the potential warmup period) to use.
scale_lr
boolean
False
Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.
lr_scheduler
string (enum)
constant

Options:

linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup

The learning rate scheduler type to use.
lr_warmup_steps
integer
0
Number of steps for the warmup in the lr scheduler.
use_8bit_adam
boolean
False
Whether or not to use 8-bit Adam from bitsandbytes.
adam_beta1
number
0.9
The beta1 parameter for the Adam optimizer.
adam_beta2
number
0.999
The beta2 parameter for the Adam optimizer.
adam_weight_decay
number
0.01
Weight decay for the Adam optimizer.
adam_epsilon
number
0.00000001
Epsilon value for the Adam optimizer.
max_grad_norm
number
1
Max gradient norm.
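
Putting the key fields together, an example training input might look like the following. This is a sketch: the ZIP file name is hypothetical, and any field you omit falls back to the default listed above.

# Example input for training on photos of a specific dog.
# "sks" is the rare-token identifier, "dog" is the class noun.
training_input = {
    "pretrained_model": "stabilityai/stable-diffusion-2-1-base",
    "instance_prompt": "a photo of a sks dog",
    "class_prompt": "a photo of a dog",
    "instance_data": open("my-dog-photos.zip", "rb"),  # ZIP of subject photos
    "with_prior_preservation": True,   # default; keeps the general "dog" prior
    "num_class_images": 50,            # default; class images are generated if class_data is omitted
    "resolution": 512,                 # 512 for stable-diffusion-2-1-base, 768 for stable-diffusion-2-1
    "max_train_steps": 2000,           # default; overrides num_train_epochs
    "seed": 1337,                      # default; makes the run reproducible
}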

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
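
The output is a single URI string. A minimal sketch of saving whatever that URI points to, assuming it is a downloadable file (for example, the trained model artifacts):

import urllib.request

# 'output' is the URI string returned by replicate.run above.
# The schema only guarantees a URI; this simply saves the referenced file locally.
urllib.request.urlretrieve(output, "dreambooth-21-output")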