Summary
This is a QLoRA fine-tune of LLaMA 3 Youko, created using a new version of the VNTL dataset. The purpose of this fine-tune is to improve the performance of LLMs at translating Japanese visual novels into English.
Unlike the previous version, this one doesn't include the "chat mode".
Notes
For this new version of VNTL 8B, I've rebuilt and expanded VNTL's dataset from the ground up, and I'm happy to say it performs really well. It outperforms the previous version in both accuracy and stability, making far fewer mistakes even when running at high temperatures (though I still recommend temperature 0 for the best accuracy).
Some major changes in this version:
- Switched to the default LLaMA 3 prompt format, since people had trouble with the custom one
- Added proper support for multi-line translations (the old version only handled single lines)
- Overall better translation accuracy
One thing to note: while the translations are more accurate, they tend to be more literal compared to the previous version.
Sampling Recommendations
For optimal results, it's highly recommended to use neutral sampling parameters (temperature 0 with no repetition penalty) when using this model.
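For reference, here's a minimal sketch of what neutral sampling looks like with the Hugging Face transformers generation API (other backends such as llama.cpp expose equivalent settings under different names; the kwargs below are just one way to express it):

```python
from transformers import GenerationConfig

# Neutral sampling as recommended above: greedy decoding (the temperature-0
# equivalent) and a repetition penalty of 1.0, which disables it.
neutral_sampling = GenerationConfig(
    do_sample=False,         # greedy decoding ~ temperature 0
    repetition_penalty=1.0,  # 1.0 = no repetition penalty
    max_new_tokens=256,      # adjust to taste
)
```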
Training Details
This fine-tune was done using hyperparameters similar to the previous version; the only difference is the dataset, which is brand new. (A rough configuration sketch follows the list below.)
- Rank: 128
- Alpha: 32
- Effective Batch Size: 45
- Warmup Ratio: 0.02
- Learning Rate: 6e-5
- Embedding Learning Rate: 1e-5
- Optimizer: grokadamw
- LR Schedule: cosine
- Weight Decay: 0.01
Train Loss: 0.42
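For anyone trying to reproduce a similar run, here's a rough sketch of how these hyperparameters map onto a standard PEFT + transformers QLoRA setup. The target modules, batch-size split, epoch count, and precision are assumptions not stated above, and the separate embedding learning rate needs a trainer that supports it (it isn't a stock TrainingArguments option), so treat this as an illustration rather than the actual training script:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# QLoRA adapter settings matching the card (target_modules is an assumption).
lora_config = LoraConfig(
    r=128,
    lora_alpha=32,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Effective batch size 45; the 5 x 9 split below is an assumption.
training_args = TrainingArguments(
    output_dir="vntl-8b-qlora",
    per_device_train_batch_size=5,
    gradient_accumulation_steps=9,
    learning_rate=6e-5,
    warmup_ratio=0.02,
    lr_scheduler_type="cosine",
    weight_decay=0.01,
    optim="grokadamw",   # needs a recent transformers plus the grokadamw package; otherwise use adamw_torch
    num_train_epochs=1,  # assumption; not stated in the card
    bf16=True,           # assumption; not stated in the card
)
# Note: the separate 1e-5 embedding learning rate requires a trainer that
# supports per-parameter-group learning rates (e.g. Unsloth's
# embedding_learning_rate); it is not reflected in the arguments above.
```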
Translation Prompt
This fine-tune uses the LLaMA 3 prompt format. Here is an example prompt for translation:
<|begin_of_text|><|start_header_id|>Metadata<|end_header_id|>
[character] Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
[character] Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female<|eot_id|><|start_header_id|>Japanese<|end_header_id|>
[桜乃]: 『……ごめん』<|eot_id|><|start_header_id|>English<|end_header_id|>
[Sakuno]: 『... Sorry.』<|eot_id|><|start_header_id|>Japanese<|end_header_id|>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」<|eot_id|><|start_header_id|>English<|end_header_id|>
[Shingo]: "Nah, I know itβs weird to say this, but Iβm glad you got lost. Youβre so cute, Sakuno, so I was really worried about you."<|eot_id|>
The generated translation for that prompt, with temperature 0, is:
[Shingo]: "Nah, I know itβs weird to say this, but Iβm glad you got lost. Youβre so cute, Sakuno, so I was really worried about you."
Trivia
The Metadata section isn't limited to character information - you can also add trivia and teach the model the correct way to pronounce words it struggles with.
Here's an example:
<|begin_of_text|><|start_header_id|>Metadata<|end_header_id|>
[character] Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
[character] Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
[element] Name: Murasamemaru (叢雨丸) | Type: Quality<|eot_id|><|start_header_id|>Japanese<|end_header_id|>
[桜乃]: 『……ごめん』<|eot_id|><|start_header_id|>English<|end_header_id|>
[Sakuno]: 『... Sorry.』<|eot_id|><|start_header_id|>Japanese<|end_header_id|>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は叢雨丸だから、いろいろ心配しちゃってたんだぞ俺」<|eot_id|><|start_header_id|>English<|end_header_id|>
The generated translation for that prompt, with temperature 0, is:
[Shingo]: "Nah, I know itβs not the best thing to say, but Iβm glad you got lost. Sakunoβs Murasamemaru, so I was really worried about you, you know?"