---
license: apache-2.0
datasets:
- grammarly/coedit
language:
- en
tags:
- text-generation-inference
- candle
widget:
- text: >-
    Fix the grammar: When I grow up, I start to understand what he said is
    quite right.
  example_title: Fluency
- text: >-
    Make this text coherent: Their flight is weak. They run quickly through
    the tree canopy.
  example_title: Coherence
- text: >-
    Rewrite to make this easier to understand: A storm surge is what
    forecasters consider a hurricane's most treacherous aspect.
  example_title: Simplification
- text: >-
    Paraphrase this: Do you know where I was born?
  example_title: Paraphrase
- text: >-
    Write this more formally: omg i love that song im listening to it right
    now
  example_title: Formalize
- text: >-
    Write in a more neutral way: The authors' exposé on nutrition studies.
  example_title: Neutralize
---
# Quantized candle weights for the CoEdIT model
Quantized weights of [CoEdIT](https://github.com/vipulraheja/coedit) for inference with [candle](https://github.com/huggingface/candle/tree/main/candle-examples/examples/quantized-t5).
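To try the model, candle's `quantized-t5` example can load these weights from the Hub. A minimal sketch, assuming a checkout of the candle repository; `<model-id>` stands for this repository's Hub id, and the flags follow the `quantized-t5` example and may differ across candle versions:

```shell
# Run one of the widget prompts through the quantized weights.
# <model-id> is this repository's Hub id (an assumption here);
# flags come from candle's quantized-t5 example.
cargo run --example quantized-t5 --release -- \
  --model-id "<model-id>" \
  --prompt "Fix the grammar: When I grow up, I start to understand what he said is quite right." \
  --temperature 0
```

Greedy decoding (`--temperature 0`) is the usual choice for editing tasks like these, where a single corrected output is wanted rather than diverse samples.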
The weights were converted with candle's `tensor-tools` example:
```shell
cargo run --example tensor-tools --release -- quantize \
--quantization q6k \
/path/to/coedit-<version>/model.safetensors \
--out-file model<version>.gguf
```