Update README.md
README.md CHANGED
@@ -18,7 +18,7 @@ Fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma
 - Supports 8k context length
 
 
-# Usage
+# 🎮 Usage
 
 [💬🇮🇹 Try the model on Hugging Face Spaces](https://huggingface.co/spaces/anakin87/gemma-2-2b-neogenesis-ita)
 
@@ -48,7 +48,7 @@ print(outputs[0]["generated_text"][1]["content"])
 
 For more usage examples and applications, refer to the [📓 Kaggle notebook](https://www.kaggle.com/code/anakin87/post-training-gemma-for-italian-and-beyond).
 
-# Evaluation Results
+# 🏆 Evaluation Results
 
 The model was submitted and evaluated in the [Open Ita LLM Leaderboard](https://huggingface.co/spaces/mii-llm/open_ita_llm_leaderboard), the most popular leaderboard for Italian Language Models.
 
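For orientation: the truncated context line in the hunk header above, `print(outputs[0]["generated_text"][1]["content"])`, matches the standard transformers chat-pipeline pattern. A minimal sketch of that usage, assuming the model ID inferred from the Spaces URL (the model card's own snippet is authoritative):

```python
# Minimal usage sketch; the model ID is assumed from the Spaces URL above.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="anakin87/gemma-2-2b-neogenesis-ita",  # assumed ID, not stated in this diff
)

messages = [{"role": "user", "content": "Cos'è l'interesse composto?"}]  # "What is compound interest?"
outputs = pipe(messages, max_new_tokens=256)

# With chat-style input, generated_text holds the full message list, so index 1
# is the assistant reply -- the same indexing as the print line in the hunk header.
print(outputs[0]["generated_text"][1]["content"])
```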
@@ -73,7 +73,7 @@ Training required about 15 hours on a single NVIDIA A6000 GPU (48GB VRAM).
 For comprehensive training details, check out the [📓 Kaggle notebook](https://www.kaggle.com/code/anakin87/post-training-gemma-for-italian-and-beyond).
 
 
-# Training data
+# 🏋️ Training data
 The model was trained primarily on Italian data, with a small portion of English data included.
 
 
@@ -98,5 +98,5 @@ For Direct Preference Optimization
 
 Although the model demonstrates solid Italian fluency and good reasoning capabilities for its small size, it is expected to have limited world knowledge due to its restricted number of parameters. This limitation can be mitigated by pairing it with techniques like Retrieval-Augmented Generation. Check out the [📓 Kaggle notebook](https://www.kaggle.com/code/anakin87/post-training-gemma-for-italian-and-beyond) for an example.
 
-# Safety
+# 🛡️ Safety
 While this model was not specifically fine-tuned for safety, its selective training with the Spectrum technique helps preserve certain safety features from the original model, as emerged in the [qualitative evaluation](https://html-preview.github.io/?url=https://github.com/anakin87/gemma-neogenesis/blob/main/qualitative_evaluation/qualitative_evaluation.html).
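The header of the last hunk references Direct Preference Optimization. As a hedged illustration of what that training step typically looks like, here is a sketch using TRL's DPOTrainer; the dataset name and hyperparameters are placeholders, not taken from this README:

```python
# Hedged DPO sketch with TRL; dataset name and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "google/gemma-2-2b-it"  # the base model this README says was fine-tuned
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A preference dataset with "prompt", "chosen", "rejected" columns (hypothetical name).
train_dataset = load_dataset("your-org/italian-preference-pairs", split="train")

args = DPOConfig(output_dir="gemma-2-2b-dpo", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # supersedes the older `tokenizer` argument in recent TRL
)
trainer.train()
```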
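Line 99 of the README suggests mitigating the 2B model's limited world knowledge with Retrieval-Augmented Generation. A toy sketch of that idea, reusing the chat pipeline from the usage example; the documents and keyword scorer are illustrative stand-ins for a real retriever such as BM25 or embeddings:

```python
# Toy RAG sketch: retrieve context, then ask the model to answer from it.
from transformers import pipeline

pipe = pipeline("text-generation", model="anakin87/gemma-2-2b-neogenesis-ita")  # assumed ID

docs = [
    "Il Colosseo si trova a Roma e fu completato intorno all'80 d.C.",
    "La Mole Antonelliana è il monumento simbolo di Torino.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by word overlap with the query (toy stand-in for BM25).
    words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

question = "In quale città si trova il Colosseo?"  # "In which city is the Colosseum?"
context = "\n".join(retrieve(question))
messages = [{"role": "user", "content": f"Contesto:\n{context}\n\nDomanda: {question}"}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"][1]["content"])
```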
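The Safety line attributes the preserved behavior to Spectrum's selective training, which fine-tunes only a subset of layers and freezes the rest. A conceptual sketch of that freezing step; real Spectrum selects modules by a signal-to-noise analysis of the weights, and the module list below is purely illustrative:

```python
# Conceptual sketch of Spectrum-style selective training: freeze everything,
# then unfreeze only chosen modules. The name fragments below are hypothetical;
# Spectrum derives the real list from per-module signal-to-noise ratios.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

# Hypothetical selection: substrings of parameter names to keep trainable.
trainable_fragments = ["layers.20.mlp", "layers.21.mlp"]

for name, param in model.named_parameters():
    param.requires_grad = any(frag in name for frag in trainable_fragments)

n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {n_trainable:,}")
```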