MaziyarPanahi committed: Update README.md
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.1")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.1")
```

### Quantized Models

> I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, more specifically, personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone!

- GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/Calme-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.1-GGUF)
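As a rough back-of-envelope sketch of what those bit widths mean in practice, the snippet below estimates the file size of a 7B-parameter model at each quantization level. This is an illustration only: real GGUF files are somewhat larger, since some tensors are kept at higher precision and the quant schemes mix block scales in with the weights.

```python
# Back-of-envelope GGUF size estimate for a 7B-parameter model.
# Illustrative only: actual files are larger because some tensors
# stay at higher precision and quant blocks carry scale metadata.
PARAMS = 7_000_000_000

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate model file size in decimal gigabytes."""
    return PARAMS * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

At 8 bits this works out to roughly 7 GB, and each step down in bit width shaves the footprint proportionally, which is what makes the lower-bit builds practical on commodity hardware.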

## Examples

```