BIG FAN OF THE READER API
Incredible work by the team! Thanks for the effort, the model is good and the notebook very informative! A quantized version of the model would be even more accessible!
I thought the gains from quantizing a 1.5B-parameter model would be limited. But why not, it's an interesting idea to see what we can get from quantization.
@Svngoku oh, I just noticed you have already taken a shot at it: https://huggingface.co/Svngoku/ReaderLM-v2-Q8_0-GGUF How did it go?
Yes indeed, I quantized the model to 8-bit GGUF and tested it with the same notebook. It works fine, but it still consumes just as much RAM. In terms of time, execution takes about 4 min on an L4 high-RAM runtime (22.5 GB) for 28,984 generated tokens.
Code
```python
# Download the 8-bit GGUF weights (Colab shell command)
!wget https://huggingface.co/Svngoku/ReaderLM-v2-Q8_0-GGUF/resolve/main/readerlm-v2-q8_0.gguf

from vllm import LLM

# Maximum context length; set this to the value used in the official notebook
max_model_len = 256 * 1024

llm = LLM(
    model="/content/readerlm-v2-q8_0.gguf",
    max_model_len=max_model_len,
    tokenizer="jinaai/ReaderLM-v2",  # reuse the original tokenizer with the GGUF weights
)
```
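
For completeness, here is a rough sketch of how generation could then be run with the loaded model. The HTML snippet and instruction text are placeholders (the actual prompt builder lives in the official notebook), and the chat markers assume the Qwen2-style template that ReaderLM-v2 inherits.

```python
from vllm import SamplingParams

# Placeholder HTML; in practice this would be a fetched page
html = "<html><body><h1>Hello</h1><p>World</p></body></html>"

# Qwen2-style chat prompt (assumed template; the notebook builds this for you)
prompt = (
    "<|im_start|>user\n"
    "Extract the main content from the given HTML and convert it to Markdown format.\n\n"
    f"{html}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Greedy decoding; cap the output length to keep the example short
sampling_params = SamplingParams(temperature=0, max_tokens=1024)

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```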