---
pipeline_tag: text-generation
tags:
- llama-2
- chat
- 7b
- GGUF
- meta
- llama
- pytorch
---
This model was converted to GGUF from `meta-llama/Llama-2-7b-chat-hf` and quantized to `Q4_0` using the `llama.cpp` library.
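A minimal sketch of running the quantized model with the `llama.cpp` CLI. The GGUF file name below is an assumption (check the repository's file listing for the actual artifact), and the binary is assumed to be the `main` executable produced by building `llama.cpp` from source:

```shell
# Assumed file name for the Q4_0 weights -- verify against the repo's files.
MODEL=llama-2-7b-chat.Q4_0.gguf

# Run a single chat-formatted prompt with the llama.cpp CLI,
# using Llama-2's [INST] instruction tags.
./main -m "$MODEL" \
  -p "[INST] What is the capital of France? [/INST]" \
  -n 128 \
  --temp 0.7
```

`-n` caps the number of generated tokens and `--temp` sets the sampling temperature; both can be tuned freely since `Q4_0` quantization only affects the stored weights, not the sampling loop.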