# Diffusion model (Image to Video)
This benchmark suite benchmarks diffusion models on the image-to-video task.
## Setup
### Docker images
```sh
docker build -t mlenergy/leaderboard:diffusion-i2v .
```
### HuggingFace cache directory
The scripts assume the HuggingFace cache directory will be under `/data/leaderboard/hfcache` on the node that runs this benchmark.
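If the cache directory does not exist yet, a minimal sketch for creating and (optionally) pre-populating it is shown below. Setting `HF_HOME` on the host is an assumption that makes the host cache layout match what the container sees at `/root/.cache/huggingface`.
```sh
# Create the cache directory expected by the scripts.
mkdir -p /data/leaderboard/hfcache

# Optional: pre-download a model used in this suite so the first benchmark
# run does not spend time downloading weights.
HF_HOME=/data/leaderboard/hfcache huggingface-cli download ali-vilab/i2vgen-xl
```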
## Benchmarking
### Obtaining one datapoint
The Docker image we've built runs `python scripts/benchmark_one_datapoint.py` as its `ENTRYPOINT`.
```sh
docker run \
--gpus '"device=0"' \
--cap-add SYS_ADMIN \
-v /data/leaderboard/hfcache:/root/.cache/huggingface \
-v $(pwd):/workspace/image-to-video \
mlenergy/leaderboard:diffusion-i2v \
--result-root results \
--batch-size 2 \
--power-limit 300 \
--save-every 5 \
--model ali-vilab/i2vgen-xl \
--dataset-path sharegpt4video/sharegpt4video_100.json \
--add-text-prompt \
--num-frames 16 \
--fps 16 \
--huggingface-token $HF_TOKEN
```
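Assuming the container's working directory is `/workspace/image-to-video`, `--result-root results` resolves inside the bind mount, so the outputs should appear on the host under `$(pwd)/results`. The `--cap-add SYS_ADMIN` flag is there so the benchmark can change the GPU power limit (`--power-limit`) from inside the container.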
### Obtaining all datapoints for a single model
Export your HuggingFace Hub token as the environment variable `$HF_TOKEN`.
Then, run `scripts/benchmark_one_model.py`.
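For example, a minimal sketch (the script's actual options are not listed here; check `--help`):
```sh
# Sketch: export the token, then inspect the sweep script's options.
export HF_TOKEN=hf_xxxxxxxxxxxxxxxx   # your HuggingFace Hub token
python scripts/benchmark_one_model.py --help
```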
### Running the entire suite with Pegasus
You can use [`pegasus`](https://github.com/jaywonchung/pegasus) to run the entire benchmark suite.
Queue and host files are in [`./pegasus`](./pegasus).
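A hedged sketch of launching the suite, assuming Pegasus's queue mode and that it reads the queue and host YAML files from the working directory (see the Pegasus README for the exact subcommands):
```sh
# Assumption: Pegasus is installed (e.g., `cargo install pegasus-ssh`) and is
# run from the directory containing the queue and host files.
cd pegasus
pegasus q   # queue mode: dispatch the benchmark jobs across the listed hosts
```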