davidmeikle / mixtral-8x22b-v0.1-GGUF
Tags: GGUF · Inference Endpoints
mixtral-8x22b-v0.1-GGUF / Q4_0 (branch: main)
1 contributor · History: 1 commit
davidmeikle: Upload folder using huggingface_hub (commit 5b0b965, verified, 9 months ago)
All seven Q4_0 shards are stored via Git LFS, flagged Safe, and were added in the same commit ("Upload folder using huggingface_hub", 9 months ago):

mixtral-8x22b-v0.1.Q4_0-00001-of-00007.gguf   12.5 GB
mixtral-8x22b-v0.1.Q4_0-00002-of-00007.gguf   12.7 GB
mixtral-8x22b-v0.1.Q4_0-00003-of-00007.gguf   12.7 GB
mixtral-8x22b-v0.1.Q4_0-00004-of-00007.gguf   12.7 GB
mixtral-8x22b-v0.1.Q4_0-00005-of-00007.gguf   12.7 GB
mixtral-8x22b-v0.1.Q4_0-00006-of-00007.gguf   12.7 GB
mixtral-8x22b-v0.1.Q4_0-00007-of-00007.gguf   3.62 GB
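
The shards were uploaded with huggingface_hub, and the same library can fetch them back. A minimal sketch, assuming the repository ID davidmeikle/mixtral-8x22b-v0.1-GGUF and the Q4_0/ folder layout shown above; the local_dir value is an arbitrary choice, not something taken from the repository:

```python
# Minimal sketch: download only the Q4_0 split GGUF files from the repository.
# Assumes the repo ID and Q4_0/ folder shown above; local_dir is a hypothetical name.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="davidmeikle/mixtral-8x22b-v0.1-GGUF",
    allow_patterns=["Q4_0/*.gguf"],       # the seven -0000X-of-00007 shards
    local_dir="mixtral-8x22b-v0.1-Q4_0",  # destination directory (arbitrary)
)
print("Downloaded to:", local_path)
```

Tools that understand split GGUF files (for example recent llama.cpp builds) generally only need to be pointed at the first shard, mixtral-8x22b-v0.1.Q4_0-00001-of-00007.gguf; the remaining shards are then picked up automatically from the same directory.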