KeyError: 'embed_tokens' when Running vLLM
#2 by CedPei - opened
Hello,
I'm currently working on integrating this model with vLLM and have hit an issue during weight loading. When attempting to load the model, I receive the following error:
param = params_dict[name]
KeyError: 'embed_tokens'
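For context, this error comes from vLLM's weight-loading step, where each tensor name in the checkpoint is looked up in a dict of the model's parameters; a `KeyError` means the checkpoint key (here `embed_tokens`) doesn't match the name the vLLM model implementation expects. A common cause is a missing or extra prefix (e.g. `embed_tokens.weight` vs. `model.embed_tokens.weight`). Below is a hypothetical sketch of a key-remapping workaround I'm considering; the function name and the `model.` prefix are assumptions, not vLLM API:

```python
def remap_checkpoint_keys(state_dict, prefix="model."):
    """Return a copy of state_dict where bare keys get the expected prefix.

    Hypothetical workaround: if the vLLM model registers its parameters
    under 'model.*' but the checkpoint stores them without that prefix,
    renaming the keys before loading avoids the KeyError lookup failure.
    """
    remapped = {}
    for name, tensor in state_dict.items():
        if not name.startswith(prefix):
            name = prefix + name  # add the assumed prefix to bare keys
        remapped[name] = tensor
    return remapped


# Example with dummy values standing in for weight tensors:
ckpt = {"embed_tokens.weight": [0.0], "model.lm_head.weight": [1.0]}
fixed = remap_checkpoint_keys(ckpt)
print(sorted(fixed))
```

I'm not sure whether the mismatch is on the checkpoint side or in the model's `load_weights` name mapping, so pointers on which side to patch would help.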
Any insights or suggestions from the community would be greatly appreciated. Thank you for your support.