Update README.md
# Usage

Currently, to use this model you can rely either on the Hugging Face `transformers` library or on the [BitNet](https://github.com/microsoft/BitNet) library. You can also try the model in the [falcon-1.58bit playground](https://huggingface.co/spaces/tiiuae/falcon3-1.58bit-playground) (7B instruct version only).
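For a sense of why the 1.58-bit format matters, here is some back-of-envelope arithmetic comparing weight storage for a ~7B-parameter model in bf16 versus a ternary encoding. This is illustrative only; real checkpoints add embeddings, norms, and packing overhead:

```python
# Back-of-envelope weight-storage comparison for a ~7B-parameter model.
# Illustrative arithmetic only; real checkpoints carry extra overhead.
params = 7e9
bf16_gb = params * 2 / 1e9        # bf16: 2 bytes per weight
bits_per_ternary = 1.58           # log2(3) ≈ 1.58 bits per {-1, 0, +1} weight
ternary_gb = params * bits_per_ternary / 8 / 1e9
print(f"bf16: {bf16_gb:.1f} GB, ternary: {ternary_gb:.2f} GB")
# → bf16: 14.0 GB, ternary: 1.38 GB
```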
## 🤗 transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon3-7B-Instruct-1.58bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Perform text generation
prompt = "Explain what a 1.58-bit model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## BitNet

```bash
git clone https://github.com/microsoft/BitNet && cd BitNet
pip install -r requirements.txt
python setup_env.py --hf-repo tiiuae/Falcon3-10B-Instruct-1.58bit -q i2_s
python run_inference.py -m models/Falcon3-10B-Instruct-1.58bit/ggml-model-i2_s.gguf -p "You are a helpful assistant" -cnv
```
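For intuition about what a ternary quantization such as `i2_s` stores, here is a minimal sketch of BitNet-style absmean quantization, which maps each weight to {-1, 0, +1} with a shared scale. This is an illustrative toy, not BitNet's actual packing or kernels:

```python
def absmean_ternary_quantize(weights):
    """Map floats to {-1, 0, +1} using an absmean scale (toy sketch)."""
    scale = sum(abs(w) for w in weights) / len(weights)
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

q, s = absmean_ternary_quantize([0.9, -1.2, 0.05, 0.4])
print(q)  # → [1, -1, 0, 1]
# A dequantized approximation of the originals is [x * s for x in q].
```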