---
license: apache-2.0
---

This repository hosts the official checkpoints of the micro-budget diffusion models from our work "Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget".

**Paper:** https://arxiv.org/abs/2407.15811


<figure style="text-align: center;">
  <img src="demo.jpg" alt="Generated images of an astronaut riding a horse in eight artistic styles" />
  <figcaption>Prompt: <em>'Image of an astronaut riding a horse in {} style'.</em> Styles: Origami, Pixel art, Line art, Cyberpunk, Van Gogh Starry Night, Animation, Watercolor, Stained glass</figcaption>
</figure>


**Abstract:** As scaling laws in generative AI push performance, they simultaneously concentrate the development of these models among actors with large computational resources. With a focus on text-to-image (T2I) generative models, we aim to unlock this bottleneck by demonstrating very low-cost training of large-scale T2I diffusion transformer models. As the computational cost of transformers increases with the number of patches in each image, we propose randomly masking up to 75% of the image patches during training. We propose a deferred masking strategy that preprocesses all patches using a patch-mixer before masking, thus significantly reducing the performance degradation with masking, making it superior to model downscaling in reducing computational cost. We also incorporate the latest improvements in transformer architecture, such as the use of mixture-of-experts layers, to improve performance and further identify the critical benefit of using synthetic images in micro-budget training. Finally, using only 37M publicly available real and synthetic images, we train a 1.16 billion parameter sparse transformer with only 1,890 USD economical cost and achieve a 12.7 FID in zero-shot generation on the COCO dataset. Notably, our model achieves competitive performance across both automated and human-centric evaluations, as well as high-quality generations, while incurring 118x lower costs than Stable Diffusion models and 14x lower costs than the current state-of-the-art approach, which costs $28,400. We also further investigate the influence of synthetic images on performance and demonstrate that micro-budget training on only synthetic images is sufficient for achieving high-quality data generation.

We provide checkpoints of four pre-trained models. The table below describes each model and its quantitative performance.

| Model description | VAE (channels) | FID ↓ | GenEval score ↑ | Checkpoint filename |
|-------------------|----------------|:-----:|:---------------:|---------------------|
| MicroDiT_XL_2 trained on 22M real images | SDXL-VAE (4-channel) | 12.72 | 0.46 | dit_4_channel_22M_real_only_data.pt |
| MicroDiT_XL_2 trained on 37M images (22M real, 15M synthetic) | SDXL-VAE (4-channel) | **12.66** | 0.46 | dit_4_channel_37M_real_and_synthetic_data.pt |
| MicroDiT_XL_2 trained on 37M images (22M real, 15M synthetic) | Ostris-VAE (16-channel) | 13.04 | 0.40 | dit_16_channel_37M_real_and_synthetic_data.pt |
| MicroDiT_XL_2 trained on 490M synthetic images | SDXL-VAE (4-channel) | 13.26 | **0.52** | dit_4_channel_0.5B_synthetic_data.pt |
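
**Downloading a checkpoint:** Each checkpoint file can be fetched from this repository, for example with the `huggingface_hub` client. The snippet below is a minimal sketch; the `repo_id` is a placeholder that should be replaced with this model card's actual Hub id, and `filename` can be any entry from the table above.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo_id: replace with this model card's Hub id.
ckpt_path_on_local_disk = hf_hub_download(
    repo_id='<org>/<this-repo>',
    filename='dit_4_channel_37M_real_and_synthetic_data.pt',  # any filename from the table above
)
print(ckpt_path_on_local_disk)
```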

**Image generation:** These checkpoints can be used with the official micro_diffusion codebase for image generation. First, install the micro_diffusion code as a Python package: `pip install git+https://github.com/SonyResearch/micro_diffusion.git`

Next, use the following steps to generate images from the final model at 512×512 resolution:
```python
import torch

from micro_diffusion.models.model import create_latent_diffusion

# Build the latent diffusion model and load the DiT weights from a local checkpoint
model = create_latent_diffusion(latent_res=64, in_channels=4, pos_interp_scale=2.0).to('cuda')
model.dit.load_state_dict(torch.load(ckpt_path_on_local_disk))

# Generate four 512x512 images from the same prompt
gen_images = model.generate(prompt=['An elegant squirrel pirate on a ship'] * 4,
                            num_inference_steps=30, guidance_scale=5.0, seed=2024)
```
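
For the 16-channel Ostris-VAE checkpoint, the latent channel count presumably needs to match the checkpoint; the sketch below only changes `in_channels` and is an assumption on our part — consult the micro_diffusion codebase for any additional VAE-selection arguments.

```python
# Hedged sketch: the 16-channel checkpoint presumably requires in_channels=16;
# see the micro_diffusion repository for the exact arguments it expects.
model_16ch = create_latent_diffusion(latent_res=64, in_channels=16, pos_interp_scale=2.0).to('cuda')
model_16ch.dit.load_state_dict(torch.load(ckpt_path_on_local_disk))
```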

**Training pipeline:** All four models are trained with nearly identical training configurations and computational budgets. We progressively train each model from low resolution to high resolution. We first train the model on 256×256 resolution images for 280K steps and then fine-tune the model for 55K steps on 512×512 resolution images. The estimated training time for the end-to-end model on an 8×H100 machine is 2.6 days. Our MicroDiT models by default use a patch-mixer before the backbone transformer architecture. Using the patch-mixer significantly reduces performance degradation with masking while providing a large reduction in training time. We mask 75% of the patches after the patch mixer across both resolutions. After training with masking, we perform a follow-up fine-tuning with a mask ratio of 0 to slightly improve performance.
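
For reference, the schedule described above can be summarized as the following illustrative configuration. This is a sketch only; the field names are ours and do not mirror the micro_diffusion training configs, and the step count of the final unmasked fine-tune is not specified here.

```python
# Illustrative summary of the progressive training schedule described above.
training_stages = [
    {'resolution': 256, 'steps': 280_000, 'patch_mask_ratio': 0.75},  # low-resolution pre-training
    {'resolution': 512, 'steps': 55_000,  'patch_mask_ratio': 0.75},  # high-resolution fine-tuning
    {'resolution': 512, 'steps': None,    'patch_mask_ratio': 0.0},   # short unmasked fine-tune (steps unspecified)
]
```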

The models are released under the Apache 2.0 License.

## BibTeX
```bibtex
@article{Sehwag2024MicroDiT,
  title={Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget},
  author={Sehwag, Vikash and Kong, Xianghao and Li, Jingtao and Spranger, Michael and Lyu, Lingjuan},
  journal={arXiv preprint arXiv:2407.15811},
  year={2024}
}
```