davidmeikle/mixtral-8x22b-v0.1-GGUF
Tags: GGUF · Inference Endpoints
Branch: main
mixtral-8x22b-v0.1-GGUF / Q8_0
1 contributor · History: 1 commit
davidmeikle: Upload folder using huggingface_hub (commit 981c623, verified, 9 months ago)
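The commit message indicates the shards were pushed with the huggingface_hub client. A minimal sketch of such an upload, assuming a local Q8_0/ directory containing the shards and write access to the repo (the exact call the author used is not shown on this page):

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` or HF_TOKEN

# Push every .gguf shard in the local Q8_0/ directory into the repo's Q8_0/ folder.
# The Hub stores large files like these via Git LFS automatically.
api.upload_folder(
    folder_path="Q8_0",                                  # local directory with the 10 shards
    repo_id="davidmeikle/mixtral-8x22b-v0.1-GGUF",
    path_in_repo="Q8_0",
    commit_message="Upload folder using huggingface_hub",
    allow_patterns=["*.gguf"],
)
```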
File                                          Size     Storage  Scan  Last commit
mixtral-8x22b-v0.1.Q8_0-00001-of-00010.gguf   15.9 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
mixtral-8x22b-v0.1.Q8_0-00002-of-00010.gguf   15.4 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
mixtral-8x22b-v0.1.Q8_0-00003-of-00010.gguf   15.4 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
mixtral-8x22b-v0.1.Q8_0-00004-of-00010.gguf   15.4 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
mixtral-8x22b-v0.1.Q8_0-00005-of-00010.gguf   15.4 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
mixtral-8x22b-v0.1.Q8_0-00006-of-00010.gguf   15.4 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
mixtral-8x22b-v0.1.Q8_0-00007-of-00010.gguf   15.4 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
mixtral-8x22b-v0.1.Q8_0-00008-of-00010.gguf   15.4 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
mixtral-8x22b-v0.1.Q8_0-00009-of-00010.gguf   15.4 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
mixtral-8x22b-v0.1.Q8_0-00010-of-00010.gguf   10.3 GB  LFS      Safe  Upload folder using huggingface_hub · 9 months ago
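The Q8_0 quantization is split into ten shards totalling roughly 149 GB. A minimal sketch of fetching and loading them, assuming llama-cpp-python is installed, the machine has enough memory for an 8x22B model at Q8_0, and the shards were produced with llama.cpp's gguf-split tool (the uneven first-shard size suggests this; a raw byte split would instead need to be concatenated before loading). In that case llama.cpp reassembles the model when pointed at the first shard:

```python
from huggingface_hub import snapshot_download
from llama_cpp import Llama

# Download only the Q8_0 shards (roughly 149 GB) into ./mixtral-q8_0/Q8_0/.
local_dir = snapshot_download(
    repo_id="davidmeikle/mixtral-8x22b-v0.1-GGUF",
    allow_patterns=["Q8_0/*.gguf"],
    local_dir="mixtral-q8_0",
)

# llama.cpp follows the -00001-of-00010 naming to locate the remaining shards.
llm = Llama(
    model_path=f"{local_dir}/Q8_0/mixtral-8x22b-v0.1.Q8_0-00001-of-00010.gguf",
    n_ctx=4096,        # context length; adjust to taste
    n_gpu_layers=-1,   # offload all layers if VRAM allows, or 0 for CPU-only
)

out = llm("Mixtral 8x22B is a sparse mixture-of-experts model that", max_tokens=64)
print(out["choices"][0]["text"])
```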