unsubscribe committed

Upload folder using huggingface_hub

Files changed:
- .gitattributes +9 -0
- README.md +151 -0
- internlm3-8b-instruct-q2_k.gguf +3 -0
- internlm3-8b-instruct-q3_k_m.gguf +3 -0
- internlm3-8b-instruct-q4_0.gguf +3 -0
- internlm3-8b-instruct-q4_k_m.gguf +3 -0
- internlm3-8b-instruct-q5_0.gguf +3 -0
- internlm3-8b-instruct-q5_k_m.gguf +3 -0
- internlm3-8b-instruct-q6_k.gguf +3 -0
- internlm3-8b-instruct-q8_0.gguf +3 -0
- internlm3-8b-instruct.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,12 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+internlm3-8b-instruct-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
+internlm3-8b-instruct-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+internlm3-8b-instruct-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+internlm3-8b-instruct-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+internlm3-8b-instruct-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+internlm3-8b-instruct-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+internlm3-8b-instruct-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+internlm3-8b-instruct-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+internlm3-8b-instruct.gguf filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,151 @@
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# InternLM3-8B-Instruct GGUF Model

## Introduction

The `internlm3-8b-instruct` model in GGUF format can be utilized by [llama.cpp](https://github.com/ggerganov/llama.cpp), a highly popular open-source framework for Large Language Model (LLM) inference, across a variety of hardware platforms, both locally and in the cloud.
This repository offers `internlm3-8b-instruct` models in GGUF format in both half precision (`internlm3-8b-instruct.gguf`) and various low-bit quantized versions: `q2_k`, `q3_k_m`, `q4_0`, `q4_k_m`, `q5_0`, `q5_k_m`, `q6_k`, and `q8_0`.

The subsequent sections present the installation procedure, explain the model download process, and finally illustrate model inference and service deployment through specific examples.
+
## Installation
|
20 |
+
|
21 |
+
We recommend building `llama.cpp` from source. The following code snippet provides an example for the Linux CUDA platform. For instructions on other platforms, please refer to the [official guide](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#build).
|
22 |
+
|
23 |
+
- Step 1: create a conda environment and install cmake
|
24 |
+
|
25 |
+
```shell
|
26 |
+
conda create --name internlm3 python=3.10 -y
|
27 |
+
conda activate internlm3
|
28 |
+
pip install cmake
|
29 |
+
```
|
30 |
+
|
31 |
+
- Step 2: clone the source code and build the project
|
32 |
+
|
33 |
+
```shell
|
34 |
+
git clone --depth=1 https://github.com/ggerganov/llama.cpp.git
|
35 |
+
cd llama.cpp
|
36 |
+
cmake -B build -DGGML_CUDA=ON
|
37 |
+
cmake --build build --config Release -j
|
38 |
+
```
|
39 |
+
|
40 |
+
All the built targets can be found in the sub directory `build/bin`
|
41 |
+
|
42 |
+
In the following sections, we assume that the working directory is at the root directory of `llama.cpp`.
|
43 |
+
|
44 |
+
## Download models
|
45 |
+
|
46 |
+
In the [introduction section](#introduction), we mentioned that this repository includes several models with varying levels of computational precision. You can download the appropriate model based on your requirements.
|
47 |
+
For instance, `internlm3-8b-instruct-fp16.gguf` can be downloaded as below:
|
48 |
+
|
49 |
+
```shell
|
50 |
+
pip install huggingface-hub
|
51 |
+
huggingface-cli download internlm/internlm3-8b-instruct-gguf internlm3-8b-instruct-fp16.gguf --local-dir . --local-dir-use-symlinks False
|
52 |
+
```
|
53 |
+
|
54 |
+
## Inference
|
55 |
+
|
56 |
+
You can use `llama-cli` for conducting inference. For a detailed explanation of `llama-cli`, please refer to [this guide](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
|
57 |
+
|
58 |
+
### chat example
|
59 |
+
|
60 |
+
```shell
|
61 |
+
build/bin/llama-cli \
|
62 |
+
--model internlm3-8b-instruct-fp16.gguf \
|
63 |
+
--predict 512 \
|
64 |
+
--ctx-size 4096 \
|
65 |
+
--gpu-layers 48 \
|
66 |
+
--temp 0.8 \
|
67 |
+
--top-p 0.8 \
|
68 |
+
--top-k 50 \
|
69 |
+
--seed 1024 \
|
70 |
+
--color \
|
71 |
+
--prompt "<|im_start|>system\nYou are an AI assistant whose name is InternLM (书生·浦语).\n- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.<|im_end|>\n" \
|
72 |
+
--interactive \
|
73 |
+
--multiline-input \
|
74 |
+
--conversation \
|
75 |
+
--verbose \
|
76 |
+
--logdir workdir/logdir \
|
77 |
+
--in-prefix "<|im_start|>user\n" \
|
78 |
+
--in-suffix "<|im_end|>\n<|im_start|>assistant\n"
|
79 |
+
```
|
80 |
+
|
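Besides `llama-cli`, the GGUF file can also be driven from Python through the third-party `llama-cpp-python` bindings (`pip install llama-cpp-python`; a separate project, not covered by the guide above). A rough sketch under that assumption, mirroring the sampling settings of the command line and trimming the system prompt for brevity:

```python
from llama_cpp import Llama

# Load the model; n_gpu_layers corresponds to --gpu-layers above.
llm = Llama(
    model_path="internlm3-8b-instruct.gguf",
    n_ctx=4096,
    n_gpu_layers=48,
    seed=1024,
)

# create_chat_completion applies the chat template shipped in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an AI assistant whose name is InternLM (书生·浦语)."},
        {"role": "user", "content": "Hello! Please introduce yourself."},
    ],
    max_tokens=512,
    temperature=0.8,
    top_p=0.8,
    top_k=50,
)
print(out["choices"][0]["message"]["content"])
```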
### Function call example

`llama-cli` example:

```shell
build/bin/llama-cli \
    --model internlm3-8b-instruct.gguf \
    --predict 512 \
    --ctx-size 4096 \
    --gpu-layers 48 \
    --temp 0.8 \
    --top-p 0.8 \
    --top-k 50 \
    --seed 1024 \
    --color \
    --prompt '<|im_start|>system\nYou are InternLM-Chat, a harmless AI assistant.<|im_end|>\n<|im_start|>system name=<|plugin|>[{"name": "get_current_weather", "parameters": {"required": ["location"], "type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string"}}}, "description": "Get the current weather in a given location"}]<|im_end|>\n<|im_start|>user\n' \
    --interactive \
    --multiline-input \
    --conversation \
    --verbose \
    --in-suffix "<|im_end|>\n<|im_start|>assistant\n" \
    --special
```
Conversation results:

```text
<s><|im_start|>system
You are InternLM-Chat, a harmless AI assistant.<|im_end|>
<|im_start|>system name=<|plugin|>[{"name": "get_current_weather", "parameters": {"required": ["location"], "type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string"}}}, "description": "Get the current weather in a given location"}]<|im_end|>
<|im_start|>user

> I want to know today's weather in Shanghai
I need to use the get_current_weather function to get the current weather in Shanghai.<|action_start|><|plugin|>
{"name": "get_current_weather", "parameters": {"location": "Shanghai"}}<|action_end|>32
<|im_end|>

> <|im_start|>environment name=<|plugin|>\n{"temperature": 22}
The current temperature in Shanghai is 22 degrees Celsius.<|im_end|>

>
```
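Note that the tool call is emitted between the `<|action_start|><|plugin|>` and `<|action_end|>` markers, so the calling application is responsible for extracting it, executing the function, and feeding the result back in an `environment` turn. A minimal extraction sketch under that assumption (the regex and helper are illustrative, not part of llama.cpp):

```python
import json
import re

# Matches the JSON payload between the plugin action markers shown above.
ACTION_RE = re.compile(
    r"<\|action_start\|><\|plugin\|>\s*(.*?)\s*<\|action_end\|>", re.DOTALL
)

def extract_tool_call(raw_output: str):
    """Return the parsed tool call as a dict, or None if no tool was called."""
    match = ACTION_RE.search(raw_output)
    return json.loads(match.group(1)) if match else None

raw = (
    "I need to use the get_current_weather function to get the current weather in Shanghai."
    '<|action_start|><|plugin|>\n'
    '{"name": "get_current_weather", "parameters": {"location": "Shanghai"}}<|action_end|>'
)
call = extract_tool_call(raw)
print(call["name"], call["parameters"])  # get_current_weather {'location': 'Shanghai'}
```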
## Serving

`llama.cpp` provides an OpenAI-API-compatible server, `llama-server`. You can deploy `internlm3-8b-instruct.gguf` as a service like this:

```shell
./build/bin/llama-server -m ./internlm3-8b-instruct.gguf -ngl 48
```

On the client side, you can access the service through the OpenAI API:

```python
from openai import OpenAI

client = OpenAI(
    api_key='YOUR_API_KEY',
    base_url='http://localhost:8080/v1'
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Provide three suggestions about time management."},
    ],
    temperature=0.8,
    top_p=0.8
)
print(response)
```
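`llama-server` also supports token streaming through the same OpenAI-compatible endpoint. A small variant of the client above (same assumed address and key) that prints the reply incrementally:

```python
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY', base_url='http://localhost:8080/v1')
model_name = client.models.list().data[0].id

# stream=True yields chunks as the server generates tokens.
stream = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "Provide three suggestions about time management."}],
    temperature=0.8,
    top_p=0.8,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```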
internlm3-8b-instruct-q2_k.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc3ae670c7f8e74b69b6ba4d7c4dc9e7ad123b42215f176200493c09b627354b
+size 3450641600
internlm3-8b-instruct-q3_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37edd5c2afdd3c11d5b7205a453e9cf054b97ede78706fd0bbb4433b0ac3ec0d
+size 4390280384
internlm3-8b-instruct-q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd38d7164071f9ca559e2e099a9a63f3ee9c0a4d7b5e81067349a25e41a7c915
+size 5092613312
internlm3-8b-instruct-q4_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7b10f95f20a5c5a8e6213925c88bc6b02012e41c0c7d7da0b0788c528c0e010
+size 5358623936
internlm3-8b-instruct-q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c1d3b59f10bb3dc940d137f605d69e23c8ed8944db1201eab0620bf2df02d08
+size 6127295680
internlm3-8b-instruct-q5_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:137045fa5d9504762ebdb3029a3780fafc2c2599fd77f99ed71b5124a9dae325
+size 6264331456
internlm3-8b-instruct-q6_k.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa2231958b48cca2a53ad98cc0c4caa354845bbc4db14f2c7eda7981f434fab6
+size 7226645696
internlm3-8b-instruct-q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b6cd620c74ff56aa465d31f30b9e54a0a73defe18dd979de84e82d6ef54174b
+size 9358826688
internlm3-8b-instruct.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a75c3ca0b4feefe73223751301f67fe73caf5f2a08d2c8a7d7f4d914fc207a6a
+size 17612430528