Is MMS compatible with PEFT (parameter-efficient fine-tuning)?
You can check out this blog post for a low-resource approach to fine-tuning MMS for ASR: https://huggingface.co/blog/mms_adapters
We only fine-tune the adapter weights (a few million parameters, only a fraction of the full model weights), so I expect 16GB VRAM is more than enough here! Let me know if it's not though - happy to provide some more pointers for reducing memory with MMS fine-tuning if it's still too high!
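As a rough orientation, here is a minimal sketch of the adapter setup step following the approach described in the linked blog post: freeze the full base model, then unfreeze only the adapter weights. The checkpoint name is the public `facebook/mms-1b-all` model; everything else (vocab handling, data pipeline, trainer) is left out for brevity, so treat this as a starting point rather than a full recipe.

```python
from transformers import Wav2Vec2ForCTC

# Load the multilingual MMS checkpoint (~1B params).
# ignore_mismatched_sizes is only needed if you later swap in a new CTC vocab.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/mms-1b-all",
    ignore_mismatched_sizes=True,
)

# Re-initialise the adapter layers so they can be trained for the new language.
model.init_adapter_layers()

# Freeze the whole base model first...
model.freeze_base_model()

# ...then unfreeze only the adapter weights (a few million parameters).
adapter_weights = model._get_adapters()
for param in adapter_weights.values():
    param.requires_grad = True

# Sanity check: only a small fraction of the model should be trainable.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable params: {trainable:,} / {total:,}")
```

Because only the adapters receive gradients, the optimizer states stay small, which is what keeps the fine-tuning run within a 16GB GPU. See the blog post for the full data preparation and Trainer setup.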
I've taken a quick look and it looks like I'm in for a treat. Thank you @sanchit-gandhi!