apepkuss79 committed on
Commit 58e0a77 · verified · 1 Parent(s): 20ff22b

Update README.md

Files changed (1): README.md +4 -6
README.md CHANGED
@@ -33,9 +33,7 @@ language:
 
 ## Run with LlamaEdge
 
-- LlamaEdge version: coming soon
-
-<!-- - LlamaEdge version: [v0.12.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.3)
+- LlamaEdge version: [v0.12.4](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.4)
 
 - Prompt template
 
@@ -45,11 +43,11 @@ language:
 
 ```text
 <s>[INST] {user_message_1} [/INST]{assistant_message_1}</s>[INST] {user_message_2} [/INST]{assistant_message_2}</s>
-``` -->
+```
 
 - Context size: `128000`
 
-<!-- - Run as LlamaEdge service
+- Run as LlamaEdge service
 
 ```bash
 wasmedge --dir .:. --nn-preload default:GGML:AUTO:Mistral-Nemo-Instruct-2407-Q5_K_M.gguf \
@@ -66,7 +64,7 @@ language:
 llama-chat.wasm \
 --prompt-template mistral-instruct \
 --ctx-size 128000
-``` -->
+```
 
 ## Quantized GGUF Models
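The `mistral-instruct` prompt template that this commit un-comments can be rendered programmatically. A minimal Python sketch of the turn layout shown in the README (the `build_mistral_prompt` helper is hypothetical, not part of LlamaEdge; only the template string itself comes from the diff above):

```python
def build_mistral_prompt(turns):
    """Render a chat history into the Mistral instruct template:

        <s>[INST] {user_1} [/INST]{assistant_1}</s>[INST] {user_2} [/INST]...

    `turns` is a list of (user_message, assistant_message) pairs; pass
    None as the assistant message for the final turn awaiting a reply.
    """
    prompt = "<s>"  # single BOS token opens the whole conversation
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            prompt += f"{assistant}</s>"  # EOS closes each completed turn
    return prompt
```

Note that `<s>` appears only once, at the start; subsequent turns begin directly with `[INST]`, matching the template line in the diff.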