---
library_name: diffusers
license: other
license_name: flux-1-dev-non-commercial-license
language:
- ko
base_model:
- black-forest-labs/FLUX.1-dev
---
|
|
|
# Model Card for FLUX.1-dev-ko
|
|
|
A **test** of the feasibility of Korean-language support (image generation quality is lower than the original model's).
|
|
|
|
|
## Model Details
|
|
|
|
|
### How Korean support was added
|
|
|
- Based on black-forest-labs/FLUX.1-dev
- Applies Bingsu/clip-vit-large-patch14-ko as the CLIP text encoder
- Extends the T5 encoder's vocabulary with Korean tokens and fine-tunes it (due to Colab quota limits, trained for about 2 hours on 1× A100)
|
|
|
### Example
|
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("jaeyong2/FLUX.1-dev-ko", torch_dtype=torch.bfloat16)

# Save some VRAM by offloading the model to CPU. Remove this if you have enough GPU memory.
pipe.enable_model_cpu_offload()

prompt = "하늘 위를 달리는 고양이"  # "A cat running across the sky"

image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-dev-ko.png")
```
|
|
|
|
|
<div style="max-width: 350px; margin: 0 auto;">
<img src='https://huggingface.co/jaeyong2/FLUX.1-dev-ko/resolve/main/flux-dev3.png' />
</div>