Issue training a Flux LoRA of myself
I'm trying to train a LoRA of myself using ai-toolkit, but every sample it generates is the same cat-like creature in a different pose. I understand the creature came from the seed, but why doesn't it change as sampling proceeds? It just stays a cat. The dataset contains ~20 headshot photos of myself (512, 768, and 1024 resolution), each captioned "a picture of jrminty". My config is below. Anyone know what might be causing this? My son's LoRA worked great using the same method.
job: extension
config:
  name: jrminty_lora
  process:
    - type: sd_trainer
      training_folder: output
      device: cuda:0
      trigger_word: jrminty
      network:
        type: lora
        linear: 16
        linear_alpha: 16
      save:
        dtype: float16
        save_every: 250
        max_step_saves_to_keep: 4
        push_to_hub: false
      datasets:
        - folder_path: C:\ai-toolkit\trainer\jrminty
          caption_ext: txt
          caption_dropout_rate: 0.05
          shuffle_tokens: false
          cache_latents_to_disk: true
          resolution:
            - 512
            - 768
            - 1024
      train:
        batch_size: 1
        steps: 2000
        gradient_accumulation_steps: 1
        train_unet: true
        train_text_encoder: false
        gradient_checkpointing: true
        noise_scheduler: flowmatch
        optimizer: adamw8bit
        lr: 0.0001
        ema_config:
          use_ema: true
          ema_decay: 0.99
        dtype: bf16
      model:
        name_or_path: black-forest-labs/FLUX.1-dev
        is_flux: true
        quantize: true
      sample:
        sampler: flowmatch
        sample_every: 250
        width: 1024
        height: 1024
        prompts:
          - jrminty
        neg: ''
        seed: 42
        walk_seed: true
        guidance_scale: 4
        sample_steps: 20
meta:
  name: jrminty_lora
  version: '1.0'
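In case it helps anyone reproduce or rule things out: since the config points the trainer at a folder of image/caption pairs, a common failure mode is images without matching `.txt` captions (or captions missing the trigger word), in which case the LoRA never learns the subject. Here's a small sanity-check sketch I'd run over the dataset folder; the path and trigger word are just taken from the config above, and the helper name is my own.

```python
# Sanity check for a LoRA dataset folder: every image should have a
# matching .txt caption file containing the trigger word.
# Folder path and trigger word are assumptions taken from the config above.
from pathlib import Path

DATASET = Path(r"C:\ai-toolkit\trainer\jrminty")  # folder_path from the config
TRIGGER = "jrminty"                               # trigger_word from the config
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

def check_dataset(dataset: Path, trigger: str) -> list[str]:
    """Return a list of problems found in the dataset folder."""
    problems = []
    if not dataset.exists():
        return [f"dataset folder not found: {dataset}"]
    images = [p for p in dataset.iterdir() if p.suffix.lower() in IMAGE_EXTS]
    if not images:
        problems.append("no images found")
    for img in images:
        caption = img.with_suffix(".txt")
        if not caption.exists():
            problems.append(f"{img.name}: missing caption file")
        elif trigger not in caption.read_text(encoding="utf-8"):
            problems.append(f"{img.name}: caption lacks trigger word '{trigger}'")
    return problems

if __name__ == "__main__":
    for problem in check_dataset(DATASET, TRIGGER):
        print(problem)
```

If this prints nothing, the pairing at least is fine and the issue is elsewhere (sampler, checkpoint, etc.).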