mlx-community/QwQ-32B-Preview-3bit
Tags: Text Generation · Transformers · Safetensors · MLX · English · qwen2 · chat · conversational · text-generation-inference · Inference Endpoints · 3-bit
License: apache-2.0
Community
ValueError: [quantize] The requested number of bits 3 is not supported. The supported bits are 2, 4 and 8.
#2 · opened about 2 months ago by sauterne
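The discussion title above reflects MLX's group-quantization constraint: only 2-, 4-, and 8-bit widths are accepted, so requesting 3 bits raises a `ValueError`. A minimal sketch of that validation, assuming only the error text shown above (the helper name is hypothetical, not MLX's API):

```python
# Bit widths MLX group quantization accepts, per the error message above.
SUPPORTED_BITS = {2, 4, 8}

def check_quantize_bits(bits: int) -> int:
    """Hypothetical helper mirroring the ValueError reported in the discussion."""
    if bits not in SUPPORTED_BITS:
        raise ValueError(
            f"[quantize] The requested number of bits {bits} is not "
            f"supported. The supported bits are 2, 4 and 8."
        )
    return bits

check_quantize_bits(4)  # accepted
# check_quantize_bits(3) would raise the ValueError quoted above
```

In practice this means a "-3bit" repo name cannot correspond to a straight MLX quantization; a 4-bit conversion is the nearest supported option.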