chore: update README.md content
README.md
CHANGED
@@ -1,15 +1,22 @@
 ---
 base_model: ghost-x/ghost-8b-beta
 language:
-- en
 - vi
+- ko
 - es
 - pt
-- de
-- it
-- fr
-- ko
 - zh
+- fr
+- it
+- de
+- ja
+- ru
+- pl
+- nl
+- hi
+- tr
+- id
+- en
 license: other
 license_name: ghost-open-llms
 license_link: https://ghost-x.org/ghost-open-llms-license
@@ -26,7 +33,7 @@ widget:
 ---
 
 
-<p><img src="
+<p><img src="https://ghost-x.org/docs/models/ghost-8b-beta/images/logo.jpeg" width="40%" align="center" /></p>
 
 A large language model was developed with goals including excellent multilingual support, superior knowledge capabilities and cost efficiency.
 
@@ -179,7 +186,7 @@ For direct use with `transformers`, you can easily get started with the followin
   AutoTokenizer,
 )
 
-base_model = "ghost-x/ghost-8b-beta"
+base_model = "ghost-x/ghost-8b-beta-1608"
 model = AutoModelForCausalLM.from_pretrained(
   base_model,
   torch_dtype=torch.bfloat16,
@@ -210,7 +217,7 @@ For direct use with `transformers`, you can easily get started with the followin
   BitsAndBytesConfig,
 )
 
-base_model = "ghost-x/ghost-8b-beta"
+base_model = "ghost-x/ghost-8b-beta-1608"
 bnb_config = BitsAndBytesConfig(
   load_in_4bit=True,
   bnb_4bit_quant_type="nf4",
@@ -432,7 +439,7 @@ For deployment, we recommend using vLLM. You can enable the long-context capabil
 - Utilize vLLM to deploy your model. For instance, you can set up an openAI-like server using the command:
 
 ```bash
-python -m vllm.entrypoints.openai.api_server --served-model-name ghost-8b-beta --model ghost-x/ghost-8b-beta-1608
+python -m vllm.entrypoints.openai.api_server --served-model-name ghost-8b-beta --model ghost-x/ghost-8b-beta-1608
 ```
 
 - Try it now:
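The vLLM command in the diff serves an OpenAI-compatible API. As a minimal client sketch, assuming vLLM's default port 8000 and the standard `/v1/chat/completions` route (the `build_chat_request` helper is illustrative, not part of the README; the model name must match the `--served-model-name` flag):

```python
import json
import urllib.request


def build_chat_request(prompt: str, host: str = "localhost", port: int = 8000):
    """Build a chat-completions request for the vLLM OpenAI-like server.

    Assumes vLLM's default port (8000) and the OpenAI chat completions
    route; "ghost-8b-beta" matches the --served-model-name in the diff.
    """
    payload = {
        "model": "ghost-8b-beta",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"http://{host}:{port}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


req = build_chat_request("Why is the sky blue?")
# With the server from the diff running, send it with:
# urllib.request.urlopen(req)
```

The response follows the OpenAI schema, so the reply text would be under `choices[0]["message"]["content"]`.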