---
library_name: diffusers
license: other
license_name: flux-1-dev-non-commercial-license
language:
- ko
base_model:
- black-forest-labs/FLUX.1-dev
---

# Model Card for FLUX.1-dev-ko

ํ•œ๊ตญ์–ด ์ ์šฉ ๊ฐ€๋Šฅ์„ฑ **ํ…Œ์ŠคํŠธ์šฉ** (์›๋ž˜ ๋ชจ๋ธ์— ๋น„ํ•ด์„œ ์ด๋ฏธ์ง€ ์ƒ์„ฑ ์„ฑ๋Šฅ์ด ๋‚ฎ์Œ)


## Model Details


### ํ•œ๊ตญ์–ด ์ ์šฉ ๋ฐฉ๋ฒ•

- Built on black-forest-labs/FLUX.1-dev as the base model
- Replaces the CLIP text encoder with Bingsu/clip-vit-large-patch14-ko
- Extends the T5 encoder vocabulary with Korean tokens and fine-tunes it (due to Colab quota limits, trained for about 2 hours on 1 x A100); see the sketch below

### Example
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("jaeyong2/FLUX.1-dev-ko", torch_dtype=torch.bfloat16)

# Offload the model to CPU to save VRAM; remove this if you have enough GPU memory.
pipe.enable_model_cpu_offload()

prompt = "ํ•˜๋Š˜ ์œ„๋ฅผ ๋‹ฌ๋ฆฌ๋Š” ๊ณ ์–‘์ด"  # "A cat running across the sky"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-dev-ko.png")  # save the generated image
```


<div style="max-width: 350px; margin: 0 auto;">
<img src='https://huggingface.co/jaeyong2/FLUX.1-dev-ko/resolve/main/flux-dev3.png' />
</div>