flyingfishinwater committed
Update README.md

README.md CHANGED

---

# Mistral 7B v0.3

The Mistral 7B v0.3 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. It extends the vocabulary to 32768 tokens and supports function calling.

**Model Intention:** It's a 7B large model for Q&A purposes, but it requires a high-end device to run.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Mistral-7B-Instruct-v0.3.Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Mistral-7B-Instruct-v0.3.Q3_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Model Description:** The Mistral 7B v0.3 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. It extends the vocabulary to 32768 tokens and supports function calling.

**Developer:** [https://mistral.ai/](https://mistral.ai/)

**File Size:** 3520 MB

**Context Length:** 8192 tokens

**Prompt Format:**
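
As a minimal sketch of how this card's fields map onto local inference (assuming the GGUF above has been downloaded and the third-party llama-cpp-python package is installed, neither of which this README prescribes), the example below relies on the chat template embedded in the GGUF rather than a hand-written prompt string:

```python
# Minimal sketch: load the quantized Mistral 7B v0.3 GGUF locally.
# Assumptions (not stated in this README): llama-cpp-python is installed and
# the file from the Model URL above has been saved next to this script.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-Instruct-v0.3.Q3_K_M.gguf",  # file name from the Model URL
    n_ctx=8192,  # matches the card's Context Length
)

# create_chat_completion applies the chat template shipped inside the GGUF
# (falling back to a default if none is present), so no format string is hard-coded.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what function calling lets a model do."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```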

---

# OpenChat 3.6 (0522)

OpenChat is an innovative library of open-source language models, fine-tuned with C-RLFT - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.

**Model Intention:** The Llama-3 based version, OpenChat 3.6 (20240522), outperforms the official Llama 3 8B Instruct, but it requires a high-end device to run.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/openchat-3.6-8b-20240522-Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/openchat-3.6-8b-20240522-Q3_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Developer:** [https://openchat.team/](https://openchat.team/)

**File Size:** 4020 MB

**Context Length:** 8192 tokens

**Prompt Format:**

```
GPT4 Correct User: {{prompt}}<|end_of_turn|>GPT4 Correct Assistant:
```

**Template Name:** Mistral
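
As a minimal, illustrative sketch of applying the prompt format shown above (assuming llama-cpp-python and a local copy of the GGUF from the Model URL, both of which are assumptions rather than requirements of this README):

```python
# Minimal sketch: fill in the OpenChat 3.6 prompt format shown above and
# stop generation at the <|end_of_turn|> token.
# Assumptions (not stated in this README): llama-cpp-python is installed and
# the GGUF from the Model URL has been downloaded to the path below.
from llama_cpp import Llama

TEMPLATE = "GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"

llm = Llama(
    model_path="openchat-3.6-8b-20240522-Q3_K_M.gguf",  # file name from the Model URL
    n_ctx=8192,  # matches the card's Context Length
)

full_prompt = TEMPLATE.format(prompt="What does C-RLFT stand for?")
out = llm(full_prompt, max_tokens=128, stop=["<|end_of_turn|>"])
print(out["choices"][0]["text"])
```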

---

# Phi-3 Vision

The Phi-3 4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model. It is optimized for instruction following and safety. It is good at common sense, language understanding, math, code, long context, and logical reasoning; Phi-3 Mini-4K-Instruct showcases robust, state-of-the-art performance among models with fewer than 13 billion parameters.

**Model Intention:** It's a Microsoft Phi-3 model with visual support. It can understand images as well as text.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Phi-3-mini-4k-instruct-q4.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Phi-3-mini-4k-instruct-q4.gguf?download=true)
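
As a minimal sketch of how the Model URL above maps onto a programmatic download (the huggingface_hub package is an assumption here, not something this README requires):

```python
# Minimal sketch: fetch the quantized Phi-3 GGUF referenced by the Model URL.
# Assumption (not stated in this README): the huggingface_hub package is installed.
from huggingface_hub import hf_hub_download

# repo_id and filename are read directly off the Model URL above.
local_path = hf_hub_download(
    repo_id="flyingfishinwater/good_and_small_models",
    filename="Phi-3-mini-4k-instruct-q4.gguf",
)
print(local_path)  # local cache path of the downloaded model file
```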