The training data is not in ChatML format, and the model won't stop correctly.
#3 opened by imoc
It keeps generating, alternating between "user" and "assistant" turns...
What are your settings like?
To clarify, it was indeed trained on ChatML, but with user and assistant as the roles and name: as a prefix inside the message content. In other words, use the ChatML preset, not ChatML-Names. A sketch of that layout follows below.
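For reference, a minimal sketch of that turn layout (the character names Alice and Bob are placeholders for whatever name: prefix your frontend inserts; they are not from the model card):

```
<|im_start|>user
Alice: Hi, how are you?<|im_end|>
<|im_start|>assistant
Bob:
```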
Got it. It was indeed a config problem: the repo was missing a generation_config.json, so the generator didn't load <|im_end|> as an EOS token. That token is also marked special in tokenizer.json, so it wasn't printed in the output either. After fixing this, the model's general performance, beyond RP, is really good; it correctly answered a bunch of math and logic problems too.
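For anyone hitting the same thing, a minimal sketch of the fix using the standard transformers GenerationConfig API (the repo id is a placeholder, not the actual model):

```python
from transformers import AutoTokenizer, GenerationConfig

model_id = "your-org/your-chatml-model"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Look up the token id of <|im_end|> so generation can stop on it.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")

# Write a generation_config.json that registers <|im_end|> as EOS.
gen_config = GenerationConfig(eos_token_id=im_end_id)
gen_config.save_pretrained(".")  # creates ./generation_config.json

# Note: because <|im_end|> is marked "special" in tokenizer.json,
# decoding with skip_special_tokens=True will not print it.
```

Dropping the resulting generation_config.json into the model repo lets downstream loaders pick up the stop token automatically.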
imoc changed discussion status to closed