Hardware???

#8
by dollarpound - opened

Why does no one find it important to mention hardware requirements? For a model that can be downloaded, isn't that the most important parameter?

FastVideo org

Please check our GitHub repo for hardware requirements: https://github.com/hao-ai-lab/FastVideo

Inference FastHunyuan on a single RTX 4090
We now support NF4 and LLM-INT8 quantized inference using BitsAndBytes for FastHunyuan. With NF4 quantization, inference can be performed on a single RTX 4090 GPU, requiring just 20 GB of VRAM.
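As a rough illustration of why NF4 brings the model within a single RTX 4090's reach, here is a back-of-envelope memory sketch. This is not FastVideo code; the ~13B parameter count for the HunyuanVideo transformer and the bit widths are assumptions, and the estimate covers weights only (activations, the text encoder, and the VAE add to the reported ~20 GB total).

```python
def weight_memory_gib(n_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in GiB for a given quantization width."""
    return n_params * bits_per_param / 8 / 1024**3

# Assumed parameter count for the video transformer (~13B); illustrative only.
N_PARAMS = 13e9

fp16_gib = weight_memory_gib(N_PARAMS, 16)  # full half-precision weights
int8_gib = weight_memory_gib(N_PARAMS, 8)   # LLM-INT8 weights
nf4_gib = weight_memory_gib(N_PARAMS, 4)    # NF4 weights

print(f"fp16: {fp16_gib:.1f} GiB, int8: {int8_gib:.1f} GiB, nf4: {nf4_gib:.1f} GiB")
```

At fp16 the weights alone exceed a 24 GB card, while NF4 cuts them to roughly a quarter, leaving headroom for activations on a single RTX 4090.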

⚡ LoRA Finetune

Minimum Hardware Requirement

40 GB GPU memory each for 2 GPUs with LoRA.
30 GB GPU memory each for 2 GPUs with CPU offload and LoRA.
