Has this model had its pre-tokenizer fixed?
#10 opened by smcleod
Many Llama 3 quantizations were created with a missing pre-tokenizer type. Has this been fixed in these quants? The warning looks like this:
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
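For reference, you can check a downloaded GGUF yourself rather than waiting for the warning at load time. A minimal sketch using the `gguf` Python package that ships with llama.cpp, assuming the metadata key is `tokenizer.ggml.pre` and with a placeholder file path:

```python
# Check whether a GGUF file carries a pre-tokenizer type.
# Requires: pip install gguf
from gguf import GGUFReader

reader = GGUFReader("model.gguf")  # placeholder path
field = reader.fields.get("tokenizer.ggml.pre")
if field is None:
    print("missing pre-tokenizer type; llama.cpp will fall back to 'default'")
else:
    # string payloads live in the field's data parts as raw bytes
    value = bytes(field.parts[field.data[0]]).decode("utf-8")
    print("pre-tokenizer type:", value)
```

If the key is missing, the fix is to regenerate the GGUF from the original weights with a current llama.cpp conversion script, not to patch the quant in place.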
They stated it in the paper; you can also test on the playground. Oh wait.
smcleod changed discussion status to closed