[Editor's Pick] fluxgym training parameters explained

2025-01-30   |   Views: 3320


The fluxgym installation steps are not covered here; you can find them in another article on this site.


If you start training in fluxgym with the default parameters as-is, the resulting model will most likely not match the result you want.


Sample Image Prompts (Separate with new lines)    prompts for the sample images generated during training; usually just fill in the caption keywords of one of your training images.


Sample Image Every N Steps    set this to the number of training images multiplied by Repeat trains per image (default 10).
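For example (hypothetical numbers): with 20 training images and the default 10 repeats, one epoch is 20 x 10 = 200 steps at batch size 1, so generating one sample per epoch corresponds to the following sd-scripts-style setting:

# assumed: 20 images * 10 repeats = 200 steps per epoch at batch size 1
sample_every_n_steps = 200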


--network_dim    32 or 64 for photorealistic subjects, 4-16 for anime.
network_dim is the linear dim and determines model size: the larger the value, the more detail the model captures. Common values are 4-128; at 128, the LoRA file is about 144 MB.
Key point: with 12 GB of VRAM, 16 or 32 is recommended. Keep images around 512px (you can crop screenshots in Photoshop). If other settings throw errors, adjusting just this one is usually enough to get decent results.
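As a rough sketch of the 12 GB suggestion above, using the same config key names as the full parameter set at the end of this article (values are illustrative, not a definitive recipe):

network_dim = 32          # 16 also works on 12 GB; anime subjects can go down to 4-16
resolution = "512,512"    # 512px sources, cropped in Photoshop if needed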


--apply_t5_attn_mask    apply the attention mask to the T5-XXL encoder and the FLUX double blocks. Check this.


--enable_bucket    enable bucketing for multi-aspect-ratio training. Check this.

--fp8_base_unet    check this.

--full_bf16    check this.
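Expressed as config keys (assuming the sd-scripts names that correspond to these checkboxes), the four checked options above would read:

apply_t5_attn_mask = true   # mask T5-XXL padding tokens, also in the FLUX double blocks
enable_bucket = true        # multi-aspect-ratio bucketing
fp8_base_unet = true        # keep the base U-Net in fp8 to save VRAM
full_bf16 = true            # bf16 for gradients as well as weights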


--lr_warmup_steps    learning-rate warmup steps. This is tied to the Expected training steps value shown in step one of the UI; a typical setting is 10% of that.
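A quick worked example with a hypothetical step count: if Expected training steps shows 1600, then 10% warmup gives:

# 1600 expected steps * 0.10 = 160
lr_warmup_steps = 160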


--max_bucket_reso    the largest image dimension among your training material

--min_bucket_reso    the smallest image dimension among your training material
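For instance, if the largest side among your source images is 1024px and the smallest is 512px (hypothetical values):

min_bucket_reso = 512
max_bucket_reso = 1024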


--network_alpha    generally kept in step with --network_dim.
Linear alpha; usually set equal to or smaller than Network Dim. A common pairing is network_dim 128 with network_alpha 64.
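That pairing as config lines; note that in LoRA the update is scaled by alpha/dim, so 64/128 halves the effective strength:

network_dim = 128
network_alpha = 64   # alpha/dim = 0.5 effective scaling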


--resolution    the resolution of your training images, e.g. 512, 768, 1280
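In the config file this is written as a "width,height" string, following the sample parameter set at the end of this article:

resolution = "768,768"   # or "512,512", "1280,1280", matching your material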


--text_encoder_lr    learning rate for the text encoder; generally one tenth of the unet learning rate, i.e. 0.00001

--unet_lr    unet learning rate; the default is 0.0001
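Put together, keeping the one-tenth ratio:

unet_lr = 0.0001            # default
text_encoder_lr = 0.00001   # one tenth of unet_lr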


--log_with    tensorboard

--logging_dir    enable logging and output TensorBoard logs to this directory. Write any folder name; the logs will be recorded under it.
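For example (the folder name is arbitrary):

log_with = "tensorboard"
logging_dir = "./logs"   # TensorBoard event files land here

You can then watch the loss curves with: tensorboard --logdir ./logs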


Below is another expert's full parameter set:
model_train_type = "flux-lora"
pretrained_model_name_or_path = "/mega-models/flux/unet/flux1-dev.safetensors"
ae = "/mega-models/flux/vae/ae.safetensors"
clip_l = "/mega-models/flux/clip/clip_l.safetensors"
t5xxl = "/mega-models/flux/clip/t5xxl_fp16.safetensors"
timestep_sampling = "sigma"
sigmoid_scale = 1
model_prediction_type = "raw"
discrete_flow_shift = 1
loss_type = "l2"
guidance_scale = 1
train_data_dir = "/root/images"
prior_loss_weight = 1
resolution = "768,768"
enable_bucket = true
min_bucket_reso = 256
max_bucket_reso = 2048
bucket_reso_steps = 64
bucket_no_upscale = true
output_name = "Yoko-flux"
output_dir = "/root/ComfyUI/models/loras"
save_model_as = "safetensors"
save_precision = "bf16"
save_every_n_epochs = 2
max_train_epochs = 16
train_batch_size = 1
gradient_checkpointing = true
gradient_accumulation_steps = 1
network_train_unet_only = true
network_train_text_encoder_only = false
learning_rate = 0.0001
unet_lr = 0.0005
text_encoder_lr = 0.00001
lr_scheduler = "cosine_with_restarts"
lr_warmup_steps = 0
lr_scheduler_num_cycles = 1
optimizer_type = "PagedAdamW8bit"
network_module = "networks.lora_flux"
network_dim = 32
network_alpha = 32
log_with = "tensorboard"
logging_dir = "./logs"
caption_extension = ".txt"
shuffle_caption = false
keep_tokens = 0
seed = 1337
clip_skip = 2
mixed_precision = "bf16"
fp8_base = true
no_half_vae = true
sdpa = true
lowram = false
cache_latents = true
cache_latents_to_disk = true
cache_text_encoder_outputs = true
cache_text_encoder_outputs_to_disk = true
persistent_data_loader_workers = true