---
library_name: diffusers
license: other
license_name: flux-1-dev-non-commercial-license
language:
  - ko
base_model:
  - black-forest-labs/FLUX.1-dev
---

# Model Card for jaeyong2/FLUX.1-dev-ko

ํ•œ๊ตญ์–ด ์ ์šฉ ๊ฐ€๋Šฅ์„ฑ ํ…Œ์ŠคํŠธ์šฉ (์›๋ž˜ ๋ชจ๋ธ์— ๋น„ํ•ด์„œ ์ด๋ฏธ์ง€ ์ƒ์„ฑ ์„ฑ๋Šฅ์ด ๋‚ฎ์Œ)

## Model Details

ํ•œ๊ตญ์–ด ์ ์šฉ ๋ฐฉ๋ฒ•

- Built on black-forest-labs/FLUX.1-dev as the base model
- Replaced the CLIP text encoder with Bingsu/clip-vit-large-patch14-ko
- Extended the T5 encoder's vocabulary with Korean tokens and fine-tuned it (trained for 2 hours on 1× A100 due to Colab quota limits)

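The vocabulary-extension step above can be sketched in plain PyTorch. This is a minimal illustration with hypothetical sizes, not the actual training code: it shows how the pretrained embedding rows are kept while new rows for the added Korean tokens are initialized for fine-tuning (with `transformers`, the equivalent one-liner is `model.resize_token_embeddings(len(tokenizer))` after adding tokens to the tokenizer).

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration: T5's default vocab plus added Korean tokens.
OLD_VOCAB, NEW_TOKENS, DIM = 32128, 4000, 64

old_emb = nn.Embedding(OLD_VOCAB, DIM)            # stands in for the pretrained T5 embedding
new_emb = nn.Embedding(OLD_VOCAB + NEW_TOKENS, DIM)

with torch.no_grad():
    new_emb.weight[:OLD_VOCAB] = old_emb.weight   # keep pretrained rows unchanged
    new_emb.weight[OLD_VOCAB:].normal_(std=0.02)  # freshly initialize rows for new Korean tokens
```

Only the new rows start from random initialization; the subsequent fine-tuning adapts them (and the rest of the encoder) to Korean text.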
## Example

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("jaeyong2/FLUX.1-dev-ko", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # saves VRAM by offloading to CPU; remove if you have enough GPU memory

prompt = "ํ•˜๋Š˜ ์œ„๋ฅผ ๋‹ฌ๋ฆฌ๋Š” ๊ณ ์–‘์ด"  # "A cat running across the sky"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("cat.png")
```