Please Quantize MiniMaxAI/MiniMax-VL-01
by chilegazelle
Dear colleagues,
First of all, a huge thank you for your work—your contributions to AI optimization are invaluable.
If possible, could MiniMaxAI/MiniMax-VL-01 be quantized? Having a quantized version would accelerate the development of VL models by making inference more accessible, which in turn could increase interest in them.
It would be great to have multiple quantized versions for different hardware, precision levels, and use cases (for example, GGUF for llama.cpp on CPUs and consumer GPUs, and GPTQ or AWQ for GPU serving).
If anyone is willing to take this on, it would be greatly appreciated. Thank you in advance!
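For anyone considering picking this up, here is a minimal sketch of on-the-fly 4-bit loading with bitsandbytes through transformers. It assumes the model loads via `AutoModelForCausalLM` with `trust_remote_code=True`; whether this architecture is actually supported by bitsandbytes (or would be better served by dedicated GGUF/GPTQ/AWQ exports) is untested here, so please treat it as a starting point rather than a verified recipe.

```python
# Sketch only: on-the-fly 4-bit quantization with bitsandbytes via transformers.
# Assumes MiniMax-VL-01 loads through AutoModelForCausalLM with trust_remote_code=True;
# compatibility of this architecture with bitsandbytes has not been verified.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, BitsAndBytesConfig

model_id = "MiniMaxAI/MiniMax-VL-01"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # run compute in bf16
)

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                      # spread layers across available GPUs
    trust_remote_code=True,
)
```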