Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Meltemi-7B-Instruct-v1 - GGUF
- Model creator: https://huggingface.co/ilsp/
- Original model: https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Meltemi-7B-Instruct-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q2_K.gguf) | Q2_K | 2.66GB |
| [Meltemi-7B-Instruct-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ3_XS.gguf) | IQ3_XS | 2.95GB |
| [Meltemi-7B-Instruct-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ3_S.gguf) | IQ3_S | 3.11GB |
| [Meltemi-7B-Instruct-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q3_K_S.gguf) | Q3_K_S | 3.09GB |
| [Meltemi-7B-Instruct-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ3_M.gguf) | IQ3_M | 3.2GB |
| [Meltemi-7B-Instruct-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q3_K.gguf) | Q3_K | 3.42GB |
| [Meltemi-7B-Instruct-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q3_K_M.gguf) | Q3_K_M | 3.42GB |
| [Meltemi-7B-Instruct-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q3_K_L.gguf) | Q3_K_L | 3.7GB |
| [Meltemi-7B-Instruct-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ4_XS.gguf) | IQ4_XS | 3.83GB |
| [Meltemi-7B-Instruct-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_0.gguf) | Q4_0 | 3.98GB |
| [Meltemi-7B-Instruct-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.IQ4_NL.gguf) | IQ4_NL | 4.03GB |
| [Meltemi-7B-Instruct-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_K_S.gguf) | Q4_K_S | 4.01GB |
| [Meltemi-7B-Instruct-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_K.gguf) | Q4_K | 4.22GB |
| [Meltemi-7B-Instruct-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_K_M.gguf) | Q4_K_M | 4.22GB |
| [Meltemi-7B-Instruct-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q4_1.gguf) | Q4_1 | 4.4GB |
| [Meltemi-7B-Instruct-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_0.gguf) | Q5_0 | 4.83GB |
| [Meltemi-7B-Instruct-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_K_S.gguf) | Q5_K_S | 4.83GB |
| [Meltemi-7B-Instruct-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_K.gguf) | Q5_K | 4.95GB |
| [Meltemi-7B-Instruct-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_K_M.gguf) | Q5_K_M | 4.95GB |
| [Meltemi-7B-Instruct-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q5_1.gguf) | Q5_1 | 5.25GB |
| [Meltemi-7B-Instruct-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q6_K.gguf) | Q6_K | 5.72GB |
| [Meltemi-7B-Instruct-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf/blob/main/Meltemi-7B-Instruct-v1.Q8_0.gguf) | Q8_0 | 7.41GB |
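
Any of the files above can be downloaded individually and run with a GGUF-compatible runtime such as llama.cpp. The snippet below is a minimal sketch using the `huggingface_hub` and `llama-cpp-python` packages; the chosen quant (Q4_K_M), context size, and generation settings are illustrative choices, not recommendations from the quantizer or the model authors.

```python
# Minimal sketch: download one quant from this repo and chat with it via llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; quant choice and settings are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1-gguf",
    filename="Meltemi-7B-Instruct-v1.Q4_K_M.gguf",  # any file from the table above works
)

llm = Llama(model_path=gguf_path, n_ctx=8192)  # Meltemi supports an 8192-token context

# Recent llama-cpp-python versions read the chat template stored in the GGUF metadata,
# so plain chat messages can be passed directly.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα."},
        {"role": "user", "content": "Πες μου αν έχεις συνείδηση."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```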




Original model description:
---
license: apache-2.0
language:
- el
- en
tags:
  - finetuned
inference: true
pipeline_tag: text-generation
---

# 🚨 NEWER VERSION AVAILABLE 
## **This model has been superseded by a newer version (v1.5) [here](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1.5)**

# Meltemi Instruct Large Language Model for the Greek language

We present Meltemi-7B-Instruct-v1 Large Language Model (LLM), an instruct fine-tuned version of [Meltemi-7B-v1](https://huggingface.co/ilsp/Meltemi-7B-v1).

# Model Information

- Vocabulary extension of the Mistral-7B tokenizer with Greek tokens
- 8,192-token context length
- Fine-tuned with 100k Greek machine-translated instructions extracted from:
  * [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) (only subsets with permissive licenses)
  * [Evol-Instruct](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
  * [Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
  * A hand-crafted Greek dataset with multi-turn examples steering the instruction-tuned model towards safe and harmless responses
- Our SFT procedure is based on the [Hugging Face finetuning recipes](https://github.com/huggingface/alignment-handbook)


# Instruction format
The prompt format is the same as the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) format and can be applied through the tokenizer's [chat template](https://huggingface.co/docs/transformers/main/chat_templating) functionality as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")
tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")

model.to(device)

messages = [
    {"role": "system", "content": "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη."},
    {"role": "user", "content": "Πες μου αν έχεις συνείδηση."},
]
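# (Rough English gloss: the system prompt tells the model it is "Meltemi", a language model for
#  Greek, and asks for short but sufficiently comprehensive answers given with care, politeness,
#  impartiality, honesty, and respect; the user turn asks "Tell me whether you have consciousness.")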

# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
# <|user|>
# Πες μου αν έχεις συνείδηση.</s>
# <|assistant|>
#

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
input_prompt = tokenizer(prompt, return_tensors='pt').to(device)
outputs = model.generate(input_prompt['input_ids'], max_new_tokens=256, do_sample=True)

print(tokenizer.batch_decode(outputs)[0])
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.
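# (Rough English gloss of the sampled answer: "As an AI language model, I do not have the ability
#  to perceive or experience feelings such as consciousness or awareness. However, I can help you
#  with any questions you may have about artificial intelligence and its applications.")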

messages.extend([
    {"role": "assistant", "content": tokenizer.batch_decode(outputs)[0]},
    {"role": "user", "content": "Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;"}
])
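# (The new user turn asks, roughly: "Do you think people should fear artificial intelligence?")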

# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
# <|user|>
# Πες μου αν έχεις συνείδηση.</s>
# <|assistant|>
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.</s>
# <|user|>
# Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;</s>
# <|assistant|>
#

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
input_prompt = tokenizer(prompt, return_tensors='pt').to(device)
outputs = model.generate(input_prompt['input_ids'], max_new_tokens=256, do_sample=True)

print(tokenizer.batch_decode(outputs)[0])
```

Please make sure that the BOS token is always included in the tokenized prompts. This might not be the default setting in all evaluation or fine-tuning frameworks.
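A quick way to check this in the example above is to inspect the first token id of the tokenized prompt; the sketch below assumes the `tokenizer` and `prompt` variables from the snippet:

```python
# Sanity-check sketch: verify the tokenized prompt begins with the BOS token.
encoded = tokenizer(prompt, return_tensors="pt")
first_id = encoded["input_ids"][0, 0].item()
assert first_id == tokenizer.bos_token_id, (
    "BOS token missing; check the framework's add_special_tokens / add_bos_token settings"
)
```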

# Evaluation

The evaluation suite we created includes 6 test sets. The suite is integrated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness).

Our evaluation suite includes: 
* Four machine-translated versions ([ARC Greek](https://huggingface.co/datasets/ilsp/arc_greek), [Truthful QA Greek](https://huggingface.co/datasets/ilsp/truthful_qa_greek), [HellaSwag Greek](https://huggingface.co/datasets/ilsp/hellaswag_greek), [MMLU Greek](https://huggingface.co/datasets/ilsp/mmlu_greek)) of established English benchmarks for language understanding and reasoning ([ARC Challenge](https://arxiv.org/abs/1803.05457), [Truthful QA](https://arxiv.org/abs/2109.07958), [Hellaswag](https://arxiv.org/abs/1905.07830), [MMLU](https://arxiv.org/abs/2009.03300)). 
* An existing benchmark for question answering in Greek ([Belebele](https://arxiv.org/abs/2308.16884))
* A novel benchmark created by the ILSP team for medical question answering based on the medical exams of [DOATAP](https://www.doatap.gr) ([Medical MCQA](https://huggingface.co/datasets/ilsp/medical_mcqa_greek)).

Our evaluation of Meltemi-7B is performed in a few-shot setting, consistent with the settings of the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Compared to the Mistral 7B base model, our training improves performance across all Greek test sets, with an average gain of **+14.9%**. The results for the Greek test sets are shown in the following table:

|                | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average |
|----------------|----------------|-------------|--------------|------------------|-------------------|---------|---------|
| Mistral 7B     | 29.8%          | 45.0%       | 36.5%        | 27.1%            | 45.8%             | 35%     | 36.5%   |
| Meltemi 7B     | 41.0%          | 63.6%       | 61.6%        | 43.2%            | 52.1%             | 47%     | 51.4%   |
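For reference, a run over one of the Greek test sets with the lm-eval-harness Python API might look like the sketch below (assuming lm-eval >= 0.4). The task name is a placeholder assumption; substitute the names under which the ILSP Greek tasks are registered in your installation, and match the shot count to the table above.

```python
# Assumption-heavy sketch: evaluate the instruct model on one Greek task with lm-eval-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ilsp/Meltemi-7B-Instruct-v1",
    tasks=["mmlu_greek"],  # placeholder task name for the MMLU Greek benchmark
    num_fewshot=5,         # 5-shot, per the table above
)
print(results["results"])
```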


# Ethical Considerations

This model has not been aligned with human preferences and might therefore generate misleading, harmful, or toxic content.


# Acknowledgements

The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community. 


# Citation

```
@misc{voukoutis2024meltemiopenlargelanguage,
      title={Meltemi: The first open Large Language Model for Greek}, 
      author={Leon Voukoutis and Dimitris Roussis and Georgios Paraskevopoulos and Sokratis Sofianopoulos and Prokopis Prokopidis and Vassilis Papavasileiou and Athanasios Katsamanis and Stelios Piperidis and Vassilis Katsouros},
      year={2024},
      eprint={2407.20743},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.20743}, 
}
```