Update README.md
README.md CHANGED
@@ -5,7 +5,6 @@ license: apache-2.0
 tags:
 - medical
 - llama-cpp
-- gguf-my-repo
 base_model: mistralai/Mistral-7B-v0.1
 datasets:
 - Open-Orca/OpenOrca
@@ -17,8 +16,8 @@ metrics:
 tag: text-generation
 ---
 
-# miloradg/base-7b-v0.2-Q8_0-GGUF
-This model was converted to GGUF format from [`internistai/base-7b-v0.2`](https://huggingface.co/internistai/base-7b-v0.2) using llama.cpp
+# miloradg/internistai/base-7b-v0.2-Q8_0-GGUF
+This model was converted to GGUF format from [`internistai/base-7b-v0.2`](https://huggingface.co/internistai/base-7b-v0.2) using llama.cpp.
 Refer to the [original model card](https://huggingface.co/internistai/base-7b-v0.2) for more details on the model.
 ## Use with llama.cpp
 
@@ -45,4 +44,4 @@ Note: You can also use this checkpoint directly through the [usage steps](https:
 
 ```
 git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m base-7b-v0.2.Q8_0.gguf -n 128
-```
+```
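The one-line clone-build-run command in the diff above can also be broken into separate steps. This is a minimal sketch, assuming the `base-7b-v0.2.Q8_0.gguf` file has already been downloaded into the working directory; the prompt text is illustrative, not from the model card:

```shell
# Fetch and build llama.cpp (same steps as the README's one-liner)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run inference against the downloaded GGUF checkpoint.
# -m  path to the model file (assumed to sit one directory up)
# -n  number of tokens to generate
# -p  an example prompt (hypothetical, for illustration only)
MODEL=../base-7b-v0.2.Q8_0.gguf
./main -m "$MODEL" -n 128 -p "List common causes of chest pain."
```

Splitting the command this way makes it easier to rebuild (`make`) or swap in a different `-m` checkpoint without re-cloning the repository.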