hellork committed
Commit 7fbcdb7 · verified · 1 Parent(s): 83343ef

Update README.md

Files changed (1):
  1. README.md +22 -1
README.md CHANGED
@@ -39,7 +39,28 @@ llama-cli --hf-repo hellork/Phi-3-mini-128k-instruct-IQ4_NL-GGUF --hf-file phi-3
 llama-server --hf-repo hellork/Phi-3-mini-128k-instruct-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-iq4_nl-imat.gguf -c 2048
 ```
 
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
+ ### The Ship's Computer:
+
+ [whisper_dictation](https://github.com/themanyone/whisper_dictation)
+
+ Interact with this model by speaking to it. Lean, fast, and private: networked speech-to-text, AI image generation, multi-modal voice chat, and voice control of apps, webcam, and sound, all in under 4 GiB of VRAM.
+
+ ```bash
+ git clone -b main --single-branch https://github.com/themanyone/whisper_dictation.git
+ pip install -r whisper_dictation/requirements.txt
+
+ git clone https://github.com/ggerganov/whisper.cpp
+ cd whisper.cpp
+ GGML_CUDA=1 make -j # assuming CUDA is available; see the docs otherwise
+ bash ./models/download-ggml-model.sh tiny.en # fetch the model used below
+ ln -s "$PWD/server" ~/.local/bin/whisper_cpp_server # absolute target, or put it anywhere in $PATH
+
+ whisper_cpp_server -l en -m models/ggml-tiny.en.bin --port 7777
+ cd ../whisper_dictation
+ ./whisper_cpp_client.py
+ ```
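+ To check that the whisper server is up before launching the client, you can hit the /inference endpoint from whisper.cpp's server example (a minimal sketch; samples/jfk.wav ships with the whisper.cpp repo):
+
+ ```bash
+ # transcribe a bundled sample through the running whisper_cpp_server
+ curl http://127.0.0.1:7777/inference \
+   -H "Content-Type: multipart/form-data" \
+   -F file="@samples/jfk.wav" \
+   -F response_format="json"
+ ```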
+ See [the docs](https://github.com/themanyone/whisper_dictation) for tips on integrating with the llama.cpp server, enabling the computer to talk back, drawing AI images, carrying out voice commands, and other features.
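+ As a minimal sketch of that integration (the llama-server command is the one from earlier in this README; it listens on http://127.0.0.1:8080 by default, and the exact client-side option for pointing whisper_dictation at it is covered in its docs):
+
+ ```bash
+ # Terminal 1: serve this model with llama.cpp
+ llama-server --hf-repo hellork/Phi-3-mini-128k-instruct-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-iq4_nl-imat.gguf -c 2048
+
+ # Terminal 2: start the dictation client, configured per the whisper_dictation
+ # docs to send transcribed prompts to the llama-server endpoint above
+ cd whisper_dictation
+ ./whisper_cpp_client.py
+ ```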
+
+ ### Install Llama.cpp via git:
 
 Step 1: Clone llama.cpp from GitHub.
 ```