davidmeikle/mixtral-8x22b-v0.1-GGUF
Branch: main · Path: mixtral-8x22b-v0.1-GGUF / Q6_K
1 contributor · History: 1 commit
Latest commit: davidmeikle · "Upload folder using huggingface_hub" · b9465f6 (verified) · 9 months ago
File                                          Size      LFS   Commit message                        Last modified
mixtral-8x22b-v0.1.Q6_K-00001-of-00010.gguf   12.5 GB   LFS   Upload folder using huggingface_hub   9 months ago
mixtral-8x22b-v0.1.Q6_K-00002-of-00010.gguf   12.6 GB   LFS   Upload folder using huggingface_hub   9 months ago
mixtral-8x22b-v0.1.Q6_K-00003-of-00010.gguf   12.6 GB   LFS   Upload folder using huggingface_hub   9 months ago
mixtral-8x22b-v0.1.Q6_K-00004-of-00010.gguf   12.6 GB   LFS   Upload folder using huggingface_hub   9 months ago
mixtral-8x22b-v0.1.Q6_K-00005-of-00010.gguf   12.6 GB   LFS   Upload folder using huggingface_hub   9 months ago
mixtral-8x22b-v0.1.Q6_K-00006-of-00010.gguf   12.6 GB   LFS   Upload folder using huggingface_hub   9 months ago
mixtral-8x22b-v0.1.Q6_K-00007-of-00010.gguf   12.6 GB   LFS   Upload folder using huggingface_hub   9 months ago
mixtral-8x22b-v0.1.Q6_K-00008-of-00010.gguf   12.6 GB   LFS   Upload folder using huggingface_hub   9 months ago
mixtral-8x22b-v0.1.Q6_K-00009-of-00010.gguf   12.6 GB   LFS   Upload folder using huggingface_hub   9 months ago
mixtral-8x22b-v0.1.Q6_K-00010-of-00010.gguf   2.64 GB   LFS   Upload folder using huggingface_hub   9 months ago
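The Q6_K quantization is split across ten GGUF shards totalling roughly 116 GB. A minimal sketch of fetching just this folder with huggingface_hub (the same library used for the upload); the target directory name is an arbitrary choice:

from huggingface_hub import snapshot_download

# Download only the Q6_K shards of this repository.
# local_dir "mixtral-8x22b-q6k" is a hypothetical destination folder.
local_path = snapshot_download(
    repo_id="davidmeikle/mixtral-8x22b-v0.1-GGUF",
    allow_patterns=["Q6_K/*.gguf"],  # skip other quantizations in the repo
    local_dir="mixtral-8x22b-q6k",
)
print(local_path)

Tools that understand split GGUF files (llama.cpp, for example) can typically be pointed at the first shard, mixtral-8x22b-v0.1.Q6_K-00001-of-00010.gguf, and will locate the remaining parts in the same directory automatically.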