Please help me - OSError: Can't load tokenizer

by aifinitydigital

Hi all,
I have fine-tuned "unsloth/Llama-3.2-11B-Vision-Instruct" with the "unsloth/Radiology_mini" dataset and uploaded it to Hugging Face from my Colab as below:
# save the LoRA adapters locally
model.save_pretrained("lora_model")
tokenizer.save_pretrained("lora_model")

# merge the adapters into the base weights and push the merged model to the Hub
model.save_pretrained_merged("aifinitydigital/Llama-3.2-11B-Vision-Radiology-3", tokenizer)
model.push_to_hub_merged("aifinitydigital/Llama-3.2-11B-Vision-Radiology-3", tokenizer, save_method = "merged_16bit", token = "hf_xxxxxxyyyyyy")

However, when I try to serve the model with vLLM, I get the error below:
!vllm serve "aifinitydigital/Llama-3.2-11B-Vision-Radiology-3"

OSError: Can't load tokenizer for 'aifinitydigital/Llama-3.2-11B-Vision-Radiology-3'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'aifinitydigital/Llama-3.2-11B-Vision-Radiology-3' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.
WARNING 12-13 11:27:57 arg_utils.py:1023] The model has a long context length (131072). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.

Please help me: what am I missing?

Regards,
Dharani

Mate, it looks like you forgot to save/upload the tokenizer.

https://huggingface.co/aifinitydigital/Llama-3.2-11B-Vision-Radiology-3/tree/main

Compare that with the files in the unsloth upload:

https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct/tree/main

The tokenizer files (tokenizer.json, tokenizer_config.json) and special_tokens_map.json are missing from your repo, which is exactly what the OSError is complaining about.
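The quickest fix is to push the tokenizer you already saved locally into the same repo. A minimal sketch, assuming the lora_model directory from your Colab session still exists and contains the standard tokenizer files:

from transformers import AutoTokenizer

# reload the tokenizer you saved with tokenizer.save_pretrained("lora_model")
tokenizer = AutoTokenizer.from_pretrained("lora_model")

# upload only the tokenizer files to the existing model repo
tokenizer.push_to_hub("aifinitydigital/Llama-3.2-11B-Vision-Radiology-3", token="hf_xxxxxxyyyyyy")  # same write-token placeholder as in your post

As a stopgap, you can also point vLLM at the base model's tokenizer, since merging LoRA weights doesn't change the vocabulary; adding --max-model-len also addresses the OOM warning in your log:

!vllm serve "aifinitydigital/Llama-3.2-11B-Vision-Radiology-3" --tokenizer "unsloth/Llama-3.2-11B-Vision-Instruct" --max-model-len 8192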
