---
license: mit
---
[![Discord](https://img.shields.io/discord/232596713892872193?logo=discord)](https://discord.gg/2JhHVh7CGu)
This is a severely undertrained research network, released as a proof of concept for the architecture. It was trained on ~700 example images for 2000 epochs, reaching a minimum MSE loss of ~0.06. Generation is unconditional (no text conditioning yet; the model simply generates something plausible from the flow objective). This repo is meant only as a demo that a <100M parameter model can achieve strong color balance and low loss on pixel-space diffusion. The next step is scaling up the data.
A semi-custom network based on the following paper: [Simpler Diffusion (SiD2)](https://arxiv.org/abs/2410.19324v1)
This network uses the optimal transport flow matching objective outlined in [Flow Matching for Generative Modeling](https://arxiv.org/abs/2210.02747)
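For reference, the conditional OT flow matching loss pairs a noise sample with a data sample along a straight-line path and regresses the model onto the constant velocity between them. The sketch below is illustrative only; the function and model signatures are assumptions, not the repo's actual code.

```python
import torch

def flow_matching_loss(model, x1):
    """Illustrative conditional OT flow-matching loss (Lipman et al., 2022).
    x1: batch of clean images in [-1, 1]. The model is assumed to predict the
    velocity field v_theta(x_t, t)."""
    b = x1.shape[0]
    x0 = torch.randn_like(x1)                        # noise endpoint
    t = torch.rand(b, device=x1.device).view(-1, 1, 1, 1)
    x_t = (1.0 - t) * x0 + t * x1                    # straight-line (OT) interpolant
    target_v = x1 - x0                               # constant target velocity
    pred_v = model(x_t, t.view(-1))
    return torch.mean((pred_v - target_v) ** 2)      # MSE flow objective
```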
xATGLU layers are used instead of linear layers at the entry of the transformer MLP block: [Expanded Gating Ranges Improve Activation Functions](https://arxiv.org/pdf/2405.20768)
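A minimal sketch of such a layer is below, assuming the expanded gating form `(1 + 2*alpha) * gate(x) - alpha` from the paper with an arctan gate mapped to [0, 1] and a learnable `alpha`. The exact parameterisation and initialisation in this repo may differ.

```python
import math
import torch
import torch.nn as nn

class XATGLU(nn.Module):
    """Sketch of an expanded arctan-gated linear unit (xATGLU) replacing the
    first linear layer of a transformer MLP. Parameterisation is an assumption."""

    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(dim, 2 * hidden_dim)   # value and gate paths
        self.alpha = nn.Parameter(torch.zeros(1))    # learnable gate expansion

    def forward(self, x):
        value, gate = self.proj(x).chunk(2, dim=-1)
        gate01 = torch.arctan(gate) / math.pi + 0.5              # arctan gate in [0, 1]
        gate_exp = (1 + 2 * self.alpha) * gate01 - self.alpha    # expanded gating range
        return value * gate_exp
```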
```python train.py``` trains a new image network on the provided dataset. (Currently the entire dataset is loaded into GPU memory; this is defined in the `preload_dataset` function.)
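An illustrative version of that preload step is below: decode every image once and keep the whole tensor resident on the GPU. The path, resolution, and normalisation are assumptions, not the repo's exact settings.

```python
import glob
import torch
from PIL import Image
import torchvision.transforms.functional as TF

def preload_dataset(path, image_size=32, device="cuda"):
    """Hypothetical dataset preload: stack all images into one on-GPU tensor."""
    images = []
    for fp in sorted(glob.glob(f"{path}/*.png")):
        img = Image.open(fp).convert("RGB").resize((image_size, image_size))
        images.append(TF.to_tensor(img) * 2.0 - 1.0)   # scale to [-1, 1]
    return torch.stack(images).to(device)              # entire dataset stays on-GPU
```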
```python test_sample.py step_1799.safetensors``` runs inference with the chosen checkpoint (here `step_1799.safetensors`). This always generates a sample grid of 16x16 images.
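Sampling from a flow model of this kind amounts to integrating the learned velocity field from noise (t=0) to data (t=1). The Euler sampler below is a sketch under assumed step count, resolution, and model signature; the actual script may use different settings.

```python
import torch

@torch.no_grad()
def sample(model, num_images=256, steps=50, size=(3, 32, 32), device="cuda"):
    """Hypothetical Euler sampler: integrate dx/dt = v_theta(x, t) from t=0 to t=1."""
    x = torch.randn(num_images, *size, device=device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((num_images,), i * dt, device=device)
        x = x + model(x, t) * dt           # Euler step along the velocity field
    return x.clamp(-1, 1)                  # 256 images -> a 16x16 sample grid
```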
![samples](./1.png)
![samples](./2.png)
![samples](./3.png)
![samples](./4.png)