Update README.md
README.md CHANGED
@@ -4,6 +4,7 @@ datasets:
 - m-a-p/Code-Feedback
 ---
 
+<img src="https://huggingface.co/Vezora/Agent-7b-v1/resolve/main/Designer.png" width="400" height="500" />
 
 # Model Overview
 
@@ -11,7 +12,6 @@ The base model used for training is `CallComply/openchat-3.5-0106-128k`, which f
 
 The reason for choosing this base model is its long context length and strong performance across every category, particularly coding. This model was trained to output up to 8192 tokens and still inherits the 128k context window of its base model, making it the best open-source generalized agent model.
 
-<img src="https://huggingface.co/Vezora/Agent-7b-v1/resolve/main/Designer.png" width="400" height="500" />
 
 # Additional Information
 
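Below is a minimal usage sketch for the model this README describes. It is not part of the diff above: the repo id `Vezora/Agent-7b-v1` is taken from the image URL, `max_new_tokens=8192` mirrors the stated output limit, and it assumes the repo ships standard `transformers` weights with a chat template.

```python
# Hedged sketch, not part of the commit above.
# Assumptions: Vezora/Agent-7b-v1 loads with standard transformers classes
# and provides a chat template; max_new_tokens=8192 mirrors the README's
# stated output limit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vezora/Agent-7b-v1"  # repo id taken from the image URL in the README

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that parses a CSV file."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The README states the model was trained to emit up to 8192 tokens.
outputs = model.generate(inputs, max_new_tokens=8192)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```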