---
library_name: diffusers
license: other
license_name: flux-1-dev-non-commercial-license
language:
- ko
base_model:
- black-forest-labs/FLUX.1-dev
---
# Model Card for FLUX.1-dev-ko
A **test** of Korean-language support for FLUX.1-dev (image generation quality is lower than the original model).
## Model Details
### How Korean support was added
- Built on black-forest-labs/FLUX.1-dev
- Applies Bingsu/clip-vit-large-patch14-ko
- Extends the T5 encoder's vocabulary with Korean tokens and fine-tunes it (trained for 2 hours on 1× A100 due to Colab quota limits)
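The vocabulary-extension step above can be sketched conceptually: the embedding rows for the original tokens are preserved, and new rows for the added Korean tokens are initialized (here randomly) and then learned during fine-tuning. This is a minimal, dependency-free illustration of the idea, not the actual training code; the function name `extend_embeddings` is made up for this sketch.

```python
import random

def extend_embeddings(old_table, num_new_tokens, dim, seed=0):
    """Return a larger embedding table that preserves the original rows.

    New rows (for the added Korean tokens) are randomly initialized;
    in real training they would be updated by fine-tuning.
    """
    rng = random.Random(seed)
    new_rows = [[rng.gauss(0.0, 0.02) for _ in range(dim)]
                for _ in range(num_new_tokens)]
    return old_table + new_rows

old = [[0.1, 0.2], [0.3, 0.4]]        # 2 original tokens, embedding dim = 2
table = extend_embeddings(old, 3, 2)  # add 3 new Korean tokens
print(len(table))                     # 5 rows total
print(table[:2] == old)               # original rows are unchanged -> True
```

In a real setup with `transformers`, the same effect is achieved by adding tokens to the tokenizer and calling `model.resize_token_embeddings(len(tokenizer))`, which likewise keeps the existing rows and appends newly initialized ones.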
### Example
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("jaeyong2/FLUX.1-dev-ko", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU; remove this if you have enough GPU memory

prompt = "하늘 위를 달리는 고양이"  # "A cat running across the sky"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-dev-ko.png")
```
<div style="max-width: 350px; margin: 0 auto;">
<img src='https://huggingface.co/jaeyong2/FLUX.1-dev-ko/resolve/main/flux-dev3.png' />
</div>