---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-3
- stable-diffusion-3-diffusers
instance_prompt: <leaf microstructure>
widget: []
---

# Stable Diffusion 3 Medium Fine-tuned with Leaf Images

<Gallery />

## Model description

These are LoRA adaptation weights for stabilityai/stable-diffusion-3-medium-diffusers, fine-tuned on images of leaf microstructures.

## Trigger words

The following image was used during fine-tuning with the keyword `<leaf microstructure>`:

![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F623ce1c6b66fedf374859fe7%2FsI_exTnLy6AtOFDX1-7eq.png)

You should use `<leaf microstructure>` to trigger the image generation.

#### How to use

Defining some helper functions:

```python
import os
from datetime import datetime

import torch
from diffusers import DiffusionPipeline
from PIL import Image


def generate_filename(base_name, extension=".png"):
    # Timestamped filenames prevent repeated runs from overwriting earlier images
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{base_name}_{timestamp}{extension}"


def save_image(image, directory, base_name="image_grid"):
    # Save a single PIL image under a timestamped filename
    filename = generate_filename(base_name)
    file_path = os.path.join(directory, filename)
    image.save(file_path)
    print(f"Image saved as {file_path}")


def image_grid(imgs, rows, cols, save=True, save_dir='generated_images', base_name="image_grid",
               save_individual_files=False):
    # Paste equally sized PIL images into a rows x cols grid and optionally save
    # the grid (and each individual tile) to save_dir
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)

    assert len(imgs) == rows * cols

    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols * w, rows * h))

    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
        if save_individual_files:
            save_image(img, save_dir, base_name=base_name + f'_{i}-of-{len(imgs)}_')

    if save and save_dir:
        save_image(grid, save_dir, base_name)

    return grid
```

Model loading and generation pipeline:

```python
repo_id_load = 'lamm-mit/stable-diffusion-3-medium-leaf-inspired'

# Load the base SD3 Medium pipeline in half precision, then attach the LoRA weights
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
pipeline.load_lora_weights(repo_id_load)
pipeline = pipeline.to('cuda')

# Include the trigger token <leaf microstructure> in the prompt
prompt = "a cube in the shape of a <leaf microstructure>"
negative_prompt = ""

num_samples = 3      # images per prompt (grid columns)
num_rows = 3         # batches to generate (grid rows)
n_steps = 75
guidance_scale = 15
all_images = []

for _ in range(num_rows):
    images = pipeline(
        prompt,
        num_inference_steps=n_steps,
        num_images_per_prompt=num_samples,
        guidance_scale=guidance_scale,
        negative_prompt=negative_prompt,
    ).images
    all_images.extend(images)

grid = image_grid(
    all_images,
    num_rows,
    num_samples,
    save_individual_files=True,
    save_dir='generated_images',
    base_name="image_grid",
)
grid
```
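Sampling is stochastic, so repeated runs of the loop above produce different images. If you need reproducible outputs, you can pass a seeded `torch.Generator` to the pipeline call (a minimal sketch reusing the variables defined above; the seed value 42 is arbitrary):

```python
# Fix the random seed so repeated runs yield the same images
generator = torch.Generator(device='cuda').manual_seed(42)

images = pipeline(
    prompt,
    num_inference_steps=n_steps,
    num_images_per_prompt=num_samples,
    guidance_scale=guidance_scale,
    negative_prompt=negative_prompt,
    generator=generator,
).images
```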
![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F623ce1c6b66fedf374859fe7%2Fqk5kRJJmetvhZ0ctltc3z.png%3C%2Fspan%3E)%3C!-- HTML_TAG_END --> |