Jeronymous
committed on
Update README.md
README.md CHANGED
@@ -72,7 +72,7 @@ The model architecture and hyperparameters are the same as for [Lucie-7B](https:
 ### Test with ollama

 * Download and install [Ollama](https://ollama.com/download)
-* Download the [GGUF model](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct
+* Download the [GGUF model](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct/resolve/main/Lucie-7B-q4_k_m.gguf)
 * Copy the [`Modelfile`](Modelfile), adapting the path to the GGUF file if necessary (line starting with `FROM`).
 * Run in a shell:
   * `ollama create -f Modelfile Lucie`
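For a quick end-to-end check of this section, the model created above can be queried over Ollama's local REST API. The snippet below is a sketch added for illustration rather than part of the README under review; it assumes Ollama is listening on its default port 11434 and that the model was registered under the name `Lucie` by the `ollama create -f Modelfile Lucie` step.

```python
# Illustrative sketch (assumptions: Ollama runs locally on its default port
# 11434, and the model was registered as "Lucie" via `ollama create`).
import json
import urllib.request

payload = {
    "model": "Lucie",  # name chosen in the `ollama create` step above
    "messages": [{"role": "user", "content": "Hello Lucie"}],
    "stream": False,   # ask for a single JSON object instead of a stream
}

request = urllib.request.Request(
    "http://localhost:11434/api/chat",  # Ollama's chat endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())

print(reply["message"]["content"])  # the assistant's answer
```

The same check can also be done interactively with `ollama run Lucie`.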
@@ -101,7 +101,7 @@ docker run --runtime nvidia --gpus=all \
     -p 8000:8000 \
     --ipc=host \
     vllm/vllm-openai:latest \
-    --model OpenLLM-France/Lucie-7B-Instruct
+    --model OpenLLM-France/Lucie-7B-Instruct
 ```

 #### 2. Test using OpenAI Client in Python
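Before wiring up a client, it can be worth confirming that the container above is actually serving the model. The check below is an illustrative sketch, assuming the `-p 8000:8000` mapping from the command above so that the OpenAI-compatible API is reachable at `http://localhost:8000`; vLLM's server exposes a standard `/v1/models` listing.

```python
# Sanity check (assumption: the vLLM container started above is running
# locally with `-p 8000:8000`, so its API is at http://localhost:8000).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8000/v1/models") as response:
    models = json.loads(response.read())

# One of the listed ids should be "OpenLLM-France/Lucie-7B-Instruct".
for model in models["data"]:
    print(model["id"])
```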
@@ -119,7 +119,7 @@ content = "Hello Lucie"

 # Generate a response
 chat_response = client.chat.completions.create(
-    model="OpenLLM-France/Lucie-7B-Instruct
+    model="OpenLLM-France/Lucie-7B-Instruct",
     messages=[
         {"role": "user", "content": content}
     ],
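Only part of the Python snippet falls inside this hunk; the client construction sits outside the diff context. As a reference, a self-contained version of the call might look like the sketch below, where the `base_url`, the placeholder `api_key`, and the final `print` are assumptions for illustration, matching the vLLM server started in step 1.

```python
# Self-contained sketch of the client-side call shown in the hunk above.
# Assumptions: the vLLM OpenAI-compatible server from step 1 is reachable at
# http://localhost:8000/v1 and no real API key has been configured.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed address of the vLLM server
    api_key="EMPTY",                      # vLLM does not require a key by default
)

content = "Hello Lucie"

# Generate a response
chat_response = client.chat.completions.create(
    model="OpenLLM-France/Lucie-7B-Instruct",
    messages=[
        {"role": "user", "content": content}
    ],
)

print(chat_response.choices[0].message.content)
```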