tinybiggames committed
Commit 50ddc51 · verified · 1 Parent(s): 5071492

Upload README.md with huggingface_hub

Files changed (1): README.md +32 -48
README.md CHANGED
@@ -1,18 +1,18 @@
 ---
+base_model: microsoft/Phi-3-mini-4k-instruct
 language:
 - en
 license: mit
+license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
+pipeline_tag: text-generation
 tags:
 - nlp
 - code
 - llama-cpp
 - gguf-my-repo
-- LMEngine
-license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
-pipeline_tag: text-generation
 inference:
   parameters:
-    temperature: 0
+    temperature: 0.0
 widget:
 - messages:
   - role: user
@@ -22,59 +22,43 @@ widget:
 # tinybiggames/Phi-3-mini-4k-instruct-Q4_K_M-GGUF
 This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model.
-## Use with tinyBigGAMES's [Inference](https://github.com/tinyBigGAMES) Libraries.
-
-How to configure LMEngine:
-
-```Delphi
-InitConfig(
-  'C:/LLM/gguf', // path to model files
-  -1             // number of GPU layers; -1 uses all available layers
-);
+## Use with llama.cpp
+Install llama.cpp through brew (works on Mac and Linux).
+
+```bash
+brew install llama.cpp
 ```
-
-How to define the model:
-
-```Delphi
-DefineModel('phi-3-mini-4k-instruct.Q4_K_M.gguf',
-  'phi-3-mini-4k-instruct.Q4_K_M', 4000,
-  '<|{role}|>{content}<|end|>',
-  '<|assistant|>');
+Invoke the llama.cpp server or the CLI.
+
+### CLI:
+```bash
+llama-cli --hf-repo tinybiggames/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
+```
+
+### Server:
+```bash
+llama-server --hf-repo tinybiggames/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -c 2048
 ```
 
-How to add a message:
-
-```Delphi
-AddMessage(
-  ROLE_USER,    // role
-  'What is AI?' // content
-);
+Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
+
+Step 1: Clone llama.cpp from GitHub.
+```
+git clone https://github.com/ggerganov/llama.cpp
 ```
-
-`{role}` - will be substituted with the message "role"
-`{content}` - will be substituted with the message "content"
-
-How to do inference:
-
-```Delphi
-var
-  LTokenOutputSpeed: Single;
-  LInputTokens: Int32;
-  LOutputTokens: Int32;
-  LTotalTokens: Int32;
-
-if RunInference('phi-3-mini-4k-instruct.Q4_K_M', 1024) then
-begin
-  GetInferenceStats(nil, @LTokenOutputSpeed, @LInputTokens, @LOutputTokens,
-    @LTotalTokens);
-  PrintLn('', FG_WHITE);
-  PrintLn('Tokens :: Input: %d, Output: %d, Total: %d, Speed: %3.1f t/s',
-    FG_BRIGHTYELLOW, LInputTokens, LOutputTokens, LTotalTokens, LTokenOutputSpeed);
-end
-else
-begin
-  PrintLn('', FG_WHITE);
-  PrintLn('Error: %s', FG_RED, GetError());
-end;
-```
+Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
+```
+cd llama.cpp && LLAMA_CURL=1 make
+```
+
+Step 3: Run inference through the main binary.
+```
+./llama-cli --hf-repo tinybiggames/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
+```
+or
+```
+./llama-server --hf-repo tinybiggames/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -c 2048
+```
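For clarity on the removed LMEngine template: substituting the `AddMessage` example (role `user`, content `'What is AI?'`) into `'<|{role}|>{content}<|end|>'` and appending the `'<|assistant|>'` tag yields the following prompt string (an illustrative expansion, following the substitution rules quoted above):

```
<|user|>What is AI?<|end|><|assistant|>
```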
 
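Once the `llama-server` command from the updated README is running, the model can also be exercised over HTTP. A minimal sketch, assuming llama-server's default bind address and port (`127.0.0.1:8080`) and its OpenAI-compatible `/v1/chat/completions` endpoint:

```bash
# Ask the running llama-server for a chat completion.
# Assumes the default host/port (127.0.0.1:8080); adjust if you pass --host/--port.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "What is AI?"}],
        "temperature": 0.0,
        "max_tokens": 128
      }'
```

Note that the `-c 2048` flag in the Server command caps the context window at 2048 tokens, so prompt and completion together must fit within that budget.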