MaziyarPanahi
committed
Upload folder using huggingface_hub (#1)
- 74b848e4d2e1d8812ef25d64d148ce5a13e7cdf4b84253bad6738fb07004b8b8 (096d2619c915393508572b9b3a53c7f7a8ddb6a7)
- e40ddc0f6703daa7cc8eeb3e669d25e50f69f9c4fde7529ea59bf1cf920e7b1d (6469619d4391c3e040d1fb10a189bacf218f0cde)
- 9df46c09a94d185753be04083358b4c563e40c8111e61e7a581145b8559bbf3a (a1f3e109c0234107e9010ea33600819ba017a594)
- 51be67c14e1838d4684401b304756ce5b59552ea384da690bfc4a9d1e760ea25 (5301ccddd54bd61054f61a1a8d6b5b60e485fc71)
- c839bc0faa76cd190e7b779894f1e5a2097d64ad0565843d3c8d4ae686ddbe6c (e97de3f2f7f2cf9ef04568835d489caaaeef6e98)
- 43ff51adc4e683555609c9bd0f56b0f530fb075676b43ffd9874af2710ba2506 (9ddf551652bd23ff1dc82bd27161d2739d48fb27)
- .gitattributes +6 -0
- Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF_imatrix.dat +3 -0
- Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q5_K_M.gguf +3 -0
- Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q5_K_S.gguf +3 -0
- Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q6_K.gguf +3 -0
- Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q8_0.gguf +3 -0
- Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.fp16.gguf +3 -0
- README.md +45 -0
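The commit message indicates these files were pushed with `huggingface_hub`. As a minimal, hedged sketch (not part of this commit; the local folder path is illustrative), an upload of this kind could be performed with the library's `upload_folder` API, assuming write access to the target repo and an authenticated token:

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` or the HF_TOKEN env var

# Push every file in a local folder to the model repo in a single commit.
api.upload_folder(
    folder_path="./Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF",  # illustrative local path
    repo_id="MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```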
.gitattributes
CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF_imatrix.dat
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7d5eb7274bda2f0640111dc7bb7fd019c43cd5df68406057518d8d76667bebc
+size 4988146
Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72454d512795eec49e21cdeadc6221446f8a21b9c303be4f1bb2a9247856ca36
+size 5732999520
Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:617d8dae4fdcf5e114dba6cbe35b3288a0a5efaf3ed0ecf9350638fea5b7611b
+size 5599306080
Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48cb5d3d9ed24d7d75ec3c6b947f4c54ec632c8e614900f2084c7fca0992e83b
+size 6596019072
Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:972afa916aa1ff5b67cf5b293ba9b9e7177647907ab18d8f61fa43032362dac7
+size 8540785472
Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.fp16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7d0244e2ac087fe405e2a57797f56032574d345b7537f5de955d9b07ca6ed6e
+size 16068913216
README.md
ADDED
@@ -0,0 +1,45 @@
+---
+base_model: Broyojo/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning
+inference: false
+model_creator: Broyojo
+model_name: Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- text-generation
+---
+# [MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF)
+- Model creator: [Broyojo](https://huggingface.co/Broyojo)
+- Original model: [Broyojo/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning](https://huggingface.co/Broyojo/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning)
+
+## Description
+[MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF) contains GGUF format model files for [Broyojo/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning](https://huggingface.co/Broyojo/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning).
+
+### About GGUF
+
+GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open-source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
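The README above lists several clients and libraries that can load these GGUF files. As a minimal, hedged sketch (not part of the upload itself), one of the quantized files from this repo could be fetched and run locally with `huggingface_hub` and `llama-cpp-python`, assuming both packages are installed and enough RAM or VRAM is available for the chosen quant:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q5_K_M quant listed in the commit above (~5.7 GB).
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning-GGUF",
    filename="Meta-Llama-3.1-8B-Instruct-PRM800K-Reasoning.Q5_K_M.gguf",
)

# Load the GGUF file; n_gpu_layers=-1 offloads all layers to the GPU when one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a process reward model is."}]
)
print(response["choices"][0]["message"]["content"])
```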