4-bit OmniQuant quantized version of FuseChat-Gemma-2-9B-Instruct for inference with the Private LLM app.
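For illustration only, the sketch below shows the general idea behind 4-bit group-wise weight quantization with a clipping factor, in the spirit of OmniQuant's learnable weight clipping. The group size, clip value, and function name are assumptions for this example and are not the exact pipeline used to produce this checkpoint.

```python
import torch

def quantize_4bit(weight: torch.Tensor, group_size: int = 128, clip: float = 1.0):
    """Symmetric 4-bit quantization of a 2-D weight matrix, one scale per group.

    `group_size` and `clip` are illustrative defaults; in OmniQuant the clipping
    factor is learned per channel rather than fixed.
    """
    out_features, in_features = weight.shape
    w = weight.reshape(out_features, in_features // group_size, group_size)
    # Per-group maximum magnitude, shrunk by the clipping factor.
    max_val = w.abs().amax(dim=-1, keepdim=True) * clip
    scale = max_val / 7.0                        # symmetric int4 range is [-8, 7]
    q = torch.clamp(torch.round(w / scale), -8, 7)
    dequant = (q * scale).reshape(out_features, in_features)
    return q.to(torch.int8), scale, dequant

# Example: quantize a random weight matrix and check the reconstruction error.
w = torch.randn(4096, 4096)
q, scale, w_hat = quantize_4bit(w)
print((w - w_hat).abs().mean())
```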