Update README.md to install llama-cpp-python to be >=0.2.70
README.md
CHANGED
@@ -28,12 +28,12 @@ These files are designed for use with [GGML](https://ggml.ai/) and executors bas
 To get started using one of the GGUF files, you can simply use [llama-cpp-python](https://github.com/abetlen/llama-cpp-python),
 a Python binding for `llama.cpp`.
 
-1. Install `llama-cpp-python` with pip.
+1. Install `llama-cpp-python` `v0.2.70` or later with pip.
    The following command will install a pre-built wheel with basic CPU support.
    For other installation methods, see the [llama-cpp-python installation docs](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#installation).
 
    ```bash
-   pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
+   pip install "llama-cpp-python>=0.2.70" --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
    ```
 
 3. Download one of the GGUF files. In this example,
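Once a `llama-cpp-python` wheel satisfying `>=0.2.70` is installed and a GGUF file has been downloaded, the model can be loaded roughly as in the sketch below. This is a minimal illustration, not part of the diff above; the model path, prompt, and parameter values are placeholders.

```python
# Minimal sketch: load a downloaded GGUF file with llama-cpp-python.
import llama_cpp
from llama_cpp import Llama

print(llama_cpp.__version__)  # should report 0.2.70 or later after the install above

llm = Llama(
    model_path="./model.gguf",  # placeholder: path to the GGUF file you downloaded
    n_ctx=2048,                 # context window size
)

output = llm(
    "Q: Name the planets in the solar system. A: ",  # placeholder prompt
    max_tokens=64,   # cap the number of generated tokens
    echo=False,      # return only the completion, not the prompt
)
print(output["choices"][0]["text"])
```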