jaeyong2 committed · verified · Commit 492cb63 · 1 Parent(s): 8a09e64

Update README.md

Files changed (1): README.md (+21, -1)
README.md CHANGED
@@ -18,4 +18,24 @@ base_model:
### Model Description

Fine-tuned from black-forest-labs/FLUX.1-dev: the CLIP text encoder is replaced with Bingsu/clip-vit-large-patch14-ko, and the T5 encoder's vocabulary is extended with Korean tokens. Due to Colab quota limits, training ran for 2 hours on 1 × A100.
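The Korean vocab extension described above amounts to growing the T5 encoder's token-embedding table while keeping the pretrained rows intact, then training the new rows. A minimal sketch in plain PyTorch, assuming toy sizes (the real T5 vocab has 32,128 entries) and a hypothetical `extend_embedding` helper that is not part of diffusers or transformers:

```python
import torch
import torch.nn as nn

def extend_embedding(old_emb: nn.Embedding, num_new_tokens: int) -> nn.Embedding:
    """Grow a token-embedding table, preserving the pretrained rows.

    Hypothetical helper for illustration; sizes below are toy values.
    """
    old_vocab, dim = old_emb.weight.shape
    new_emb = nn.Embedding(old_vocab + num_new_tokens, dim)
    with torch.no_grad():
        new_emb.weight[:old_vocab] = old_emb.weight   # reuse pretrained rows
        new_emb.weight[old_vocab:].normal_(std=0.02)  # init rows for new Korean tokens
    return new_emb

emb = nn.Embedding(100, 16)           # stand-in for the T5 input embedding
extended = extend_embedding(emb, 8)   # 8 stand-in Korean tokens
assert extended.weight.shape == (108, 16)
assert torch.equal(extended.weight[:100], emb.weight)
```

In practice the same effect is achieved with transformers: `tokenizer.add_tokens(korean_tokens)` followed by `model.resize_token_embeddings(len(tokenizer))`.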

### Example

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("jaeyong2/FLUX.1-dev-ko", torch_dtype=torch.bfloat16)

# Save some VRAM by offloading the model to CPU; remove this if you have enough GPU memory.
pipe.enable_model_cpu_offload()

prompt = "하늘 위를 달리는 고양이"  # "A cat running across the sky"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("output.png")
```