How did you fine-tune this model?
According to my tests, Lexi performs the best of all the uncensored models I could find. In particular, it preserves the base Llama model's multilingual ability, for example switching between Simplified Chinese and Traditional Chinese, instead of forgetting how to speak a foreign language the way overfitted models do. Lexi also generalizes well, which makes it suitable for continued fine-tuning (see the sketch at the end of this message).

I sincerely ask you to publish the details and steps, such as the fine-tuning datasets, so that we can transfer Lexi's approach to other model architectures in the future.
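For reference, here is a minimal sketch of what I mean by continued fine-tuning: LoRA adapters on top of the frozen base weights via the peft library. The repo id `Orenguteng/Llama-3-8B-Lexi-Uncensored`, the `my_corpus.jsonl` file, and the hyperparameters are placeholders of mine, not details from this thread:

```python
# Minimal sketch of continued fine-tuning with LoRA adapters (peft).
# The model id, dataset file, and hyperparameters are assumptions, not the author's setup.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_id = "Orenguteng/Llama-3-8B-Lexi-Uncensored"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Low-rank adapters keep the base weights frozen, which helps preserve
# abilities such as multilingual generation while adapting to new data.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Placeholder corpus: any JSONL file with a "text" field works here.
data = load_dataset("json", data_files="my_corpus.jsonl", split="train")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lexi-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           learning_rate=2e-4, num_train_epochs=1,
                           logging_steps=10, bf16=True),
    train_dataset=data,
    # mlm=False makes the collator copy input_ids into labels for causal LM loss
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lexi-lora")  # saves only the small adapter weights
```

Training only the adapters and leaving the base weights untouched is one common way to avoid the catastrophic forgetting I described above.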
Glad you like it; that is the goal, to ensure the model's intelligence is not broken. Unfortunately, my methods and dataset are not public for now. I'm sorry.
Do you plan to publish a paper?