
instructTrans-v2


Introduction

The exaone3-instrucTrans-v2-enko-7.8b model is trained on English→Korean translation datasets on top of exaone-3-7.8B-it, with the goal of translating English instruction datasets into Korean.

Generating Text

This model translates English into Korean. To translate text, use the following Python code:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
  model_name,
  device_map="auto",
  torch_dtype=torch.bfloat16
)

system_prompt="당신은 번역기 입니다. 영어를 한국어로 번역하세요."
sentence = "The aerospace industry is a flower in the field of technology and science."
conversation = [{'role': 'system', 'content': system_prompt},
                {'role': 'user', 'content': sentence}]

inputs = tokenizer.apply_chat_template(
  conversation,
  tokenize=True,
  add_generation_prompt=True,
  return_tensors='pt'
).to("cuda")

outputs = model.generate(inputs, max_new_tokens=4096) # the model was fine-tuned with a sequence length of 8192
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
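
To translate several sentences with the model loaded above, the same chat-template and generation steps can be wrapped in a small helper. The sketch below is illustrative (the translate() helper and the sample sentences are not part of the original card); it reuses the tokenizer, model, and system_prompt defined above.

def translate(sentence: str) -> str:
    # Build the chat prompt with the same system prompt as above.
    conversation = [{'role': 'system', 'content': system_prompt},
                    {'role': 'user', 'content': sentence}]
    input_ids = tokenizer.apply_chat_template(
        conversation,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors='pt'
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=4096)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

for s in ["Good morning.", "How is the weather in Seoul today?"]:
    print(translate(s))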

Inference with vLLM

# Requires a GPU with at least 24 GB of VRAM. With 12 GB of VRAM, you will need to run the model in FP8 mode.
# Example usage (splitting the dataset across two GPUs):
# python vllm_inference.py -gpu_id 0 -split_idx 0 -split_num 2 -dname "nvidia/HelpSteer" -untrans_col 'helpfulness' 'correctness' 'coherence' 'complexity' 'verbosity' > 0.out
# python vllm_inference.py -gpu_id 1 -split_idx 1 -split_num 2 -dname "nvidia/HelpSteer" -untrans_col 'helpfulness' 'correctness' 'coherence' 'complexity' 'verbosity' > 1.out
import os
import argparse
import pandas as pd

from tqdm import tqdm
from typing import List, Dict
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Truncate samples with more than 4096 tokens instead of dropping them, so the dataset size stays the same.
def truncation_func(sample, column_name):
    input_ids = tokenizer(str(sample[column_name]), truncation=True, max_length=4096, add_special_tokens=False).input_ids
    output = tokenizer.decode(input_ids)
    sample[column_name]=output
    return sample

# convert to chat_template
def create_conversation(sample, column_name):
    SYSTEM_PROMPT=f"당신은 번역기 입니다. 영어 문장을 한국어로 번역하세요."
    messages=[
        {"role":"system", "content": SYSTEM_PROMPT},
        {"role":"user", "content":sample[column_name]}
    ]
    text=tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    sample[column_name]=text
    return sample

def load_dataset_preprocess(dataset_name:str, untranslate_column:List, split_num, split_idx, subset=None, num_proc=128) -> Dataset: 
    step = 100//split_num     # split datasets
    if subset:
        dataset = load_dataset(dataset_name, subset, split=f'train[{step*split_idx}%:{step*(split_idx+1)}%]')
    else:
        dataset = load_dataset(dataset_name, split=f'train[{step*split_idx}%:{step*(split_idx+1)}%]')
    print(dataset)
    original_dataset = dataset # To leave columns untranslated
    dataset = dataset.remove_columns(untranslate_column)
    
    for feature in dataset.features:
        dataset = dataset.map(lambda x: truncation_func(x, feature), num_proc=num_proc)
        dataset = dataset.map(lambda x: create_conversation(x,feature), batched=False, num_proc=num_proc)
        
    print("filtered_dataset:", dataset)
    return dataset, original_dataset

def save_dataset(result_dict:Dict, dataset_name, untranslate_column:List, split_idx, subset:str):
    for column in untranslate_column:
        result_dict[column] = original_dataset[column]
    
    df = pd.DataFrame(result_dict)
    output_file_name = dataset_name.split('/')[-1]
    os.makedirs('gen', exist_ok=True)
    if subset:
        save_path = f"gen/{output_file_name}_{subset}_{split_idx}.jsonl"
    else:
        save_path = f"gen/{output_file_name}_{split_idx}.jsonl"
    df.to_json(save_path, lines=True, orient='records', force_ascii=False)

if __name__=="__main__":
    model_name = "Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b"
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    parser = argparse.ArgumentParser(description='load dataset name & split size')
    parser.add_argument('-dname', type=str, default="Magpie-Align/Magpie-Pro-MT-300K-v0.1")
    parser.add_argument('-untrans_col', nargs='+', default=[])
    parser.add_argument('-split_num', type=int, default=4)
    parser.add_argument('-split_idx', type=int, default=0)
    parser.add_argument('-gpu_id', type=int, default=0)
    parser.add_argument('-subset', type=str, default=None)
    parser.add_argument('-num_proc', type=int, default=128)

    args = parser.parse_args()
    os.environ["CUDA_VISIBLE_DEVICES"]=str(args.gpu_id)
    dataset, original_dataset =  load_dataset_preprocess(args.dname,
                                                         args.untrans_col,
                                                         args.split_num,
                                                         args.split_idx,
                                                         args.subset, 
                                                         args.num_proc
                                                        )
    # define model
    sampling_params = SamplingParams(
        temperature=0, 
        max_tokens=8192,
    )
    llm = LLM(
        model=model_name,
        tensor_parallel_size=1,
        gpu_memory_utilization=0.95,
    )
    # inference model
    result_dict = {}
    for feature in  tqdm(dataset.features):
        print(f"'{feature}' column in progress..")
        outputs = llm.generate(dataset[feature], sampling_params)
        result_dict[feature]=[output.outputs[0].text for output in outputs]
        save_dataset(result_dict, args.dname, args.untrans_col, args.split_idx, args.subset)
        print(f"saved to json. column: {feature}")
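
The header comment above mentions running in FP8 mode on a 12 GB GPU. One way this could look, assuming vLLM's built-in FP8 weight quantization (the quantization and max_model_len arguments below are an assumption, not taken from the original script):

# Hypothetical FP8 variant of the LLM construction in vllm_inference.py.
llm = LLM(
    model=model_name,
    tensor_parallel_size=1,
    gpu_memory_utilization=0.95,
    quantization="fp8",   # assumption: on-the-fly FP8 weight quantization to fit ~12 GB of VRAM
    max_model_len=8192,   # assumption: cap the context length to bound the KV-cache size
)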

Result

# EVAL_RESULT (2405_KO_NEWS) (max_new_tokens=512)
"en_ref":"This controversy arose around a new advertisement for the latest iPad Pro that Apple released on YouTube on the 7th. The ad shows musical instruments, statues, cameras, and paints being crushed in a press, followed by the appearance of the iPad Pro in their place. It appears to emphasize the new iPad Pro's artificial intelligence features, advanced display, performance, and thickness. Apple mentioned that the newly unveiled iPad Pro is equipped with the latest 'M4' chip and is the thinnest device in Apple's history. The ad faced immediate backlash upon release, as it graphically depicts objects symbolizing creators being crushed. Critics argue that the imagery could be interpreted as technology trampling on human creators. Some have also voiced concerns that it evokes a situation where creators are losing ground due to AI."
"ko_ref":"이번 논란은 애플이 지난 7일 유튜브에 공개한 신형 아이패드 프로 광고를 둘러싸고 불거졌다. 해당 광고 영상은 악기와 조각상, 카메라, 물감 등을 압착기로 짓누른 뒤 그 자리에 아이패드 프로를 등장시키는 내용이었다. 신형 아이패드 프로의 인공지능 기능들과 진화된 디스플레이와 성능, 두께 등을 강조하기 위한 취지로 풀이된다. 애플은 이번에 공개한 아이패드 프로에 신형 ‘M4’ 칩이 탑재되며 두께는 애플의 역대 제품 중 가장 얇다는 설명도 덧붙였다. 광고는 공개 직후 거센 비판에 직면했다. 창작자를 상징하는 물건이 짓눌려지는 과정을 지나치게 적나라하게 묘사한 점이 문제가 됐다. 기술이 인간 창작자를 짓밟는 모습을 묘사한 것으로 해석될 여지가 있다는 문제의식이다. 인공지능(AI)으로 인해 창작자가 설 자리가 줄어드는 상황을 연상시킨다는 목소리도 나왔다."

"exaone3-InstrucTrans-v2":"이번 논란은 애플이 지난 7일 유튜브에 공개한 최신형 아이패드 프로의 새 광고를 둘러싸고 불거졌다. 이 광고는 악기, 조각상, 카메라, 물감 등이 프레스기에 짓눌리는 장면에 이어 그 자리에 아이패드 프로가 등장하는 장면을 보여준다. 새로운 아이패드 프로의 인공지능 기능, 첨단 디스플레이, 성능, 두께를 강조하는 것으로 보인다. 애플은 이번에 공개된 아이패드 프로에 최신 'M4' 칩이 탑재됐으며, 애플 역사상 가장 얇은 두께를 자랑한다고 언급했다. 이 광고는 공개되자마자 크리에이터를 상징하는 사물들이 짓밟히는 장면을 그래픽으로 표현해 즉각적인 반발에 부딪혔다. 비평가들은 이 이미지가 기술이 인간 크리에이터를 짓밟는 것으로 해석될 수 있다고 주장한다. 일부에서는 AI로 인해 크리에이터들이 설 자리를 잃는 상황을 연상시킨다는 우려의 목소리도 나왔다."
"llama3-InstrucTrans":"이번 논란은 애플이 지난 7일 유튜브에 공개한 최신 아이패드 프로 광고를 중심으로 불거졌다. 이 광고는 악기, 조각상, 카메라, 물감 등을 누르기 시작하는 장면과 함께 그 자리에 아이패드 프로가 등장하는 장면을 보여준다. 이는 새로운 아이패드 프로의 인공지능 기능, 고급 디스플레이, 성능, 두께를 강조하는 것으로 보인다. 애플은 이번에 공개한 아이패드 프로에 최신 'M4' 칩이 탑재됐으며, 애플 역사상 가장 얇은 기기라고 언급했다. 이 광고는 출시하자마자 크리에이터를 상징하는 물건이 파쇄되는 장면이 그대로 그려져 논란이 되고 있다. 비평가들은 이 이미지가 기술이 인간 크리에이터를 짓밟는다는 의미로 해석될 수 있다고 주장한다. 또한 AI로 인해 크리에이터들이 밀리고 있다는 상황을 연상시킨다는 우려의 목소리도 나온다."

Evaluation Results

We selected benchmark datasets for evaluating English→Korean translation performance and ran the evaluation on them.

Evaluation dataset sources

๋ชจ๋ธ ํ‰๊ฐ€๋ฐฉ๋ฒ•

  • Unlike the previous (HF-based) evaluation, this evaluation runs inference with vLLM (common setting: max_new_tokens=512); see the sketch after this list.
  • The detailed evaluation setup otherwise follows the existing instrucTrans results. [link]
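
As a rough illustration of that setup, the snippet below shows how a greedy vLLM pass with max_tokens=512 could be run over an evaluation set. The dataset handling and variable names are assumptions; only the decoding settings (greedy, 512 new tokens) and the prompt format come from this card.

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Hypothetical English source sentences drawn from one of the evaluation sets.
sources = ["The aerospace industry is a flower in the field of technology and science."]

prompts = [
    tokenizer.apply_chat_template(
        [{"role": "system", "content": "당신은 번역기 입니다. 영어를 한국어로 번역하세요."},
         {"role": "user", "content": s}],
        tokenize=False,
        add_generation_prompt=True,
    )
    for s in sources
]

llm = LLM(model=model_name)
params = SamplingParams(temperature=0, max_tokens=512)  # greedy decoding, max_new_tokens=512 as in the card
hypotheses = [out.outputs[0].text for out in llm.generate(prompts, params)]
# The hypotheses are then scored against the Korean references with the card's metric.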

Average

  • Using vLLM for inference, scores are generally lower than with HF generation.

๋ชจ๋ธ ๋ณ„ ์„ฑ๋Šฅ ๋น„๊ต

๋ชจ๋ธ ์ด๋ฆ„ AIHub Flores IWSLT News ํ‰๊ท 
Meta-Llama
meta-llama/Meta-Llama-3-8B-Instruct 0.3075 0.295 2.395 0.17 0.7919
nayohan/llama3-8b-it-translation-general-en-ko-1sent 15.7875 8.09 4.445 4.68 8.2506
nayohan/llama3-instrucTrans-enko-8b 16.3938 9.63 5.405 5.3225 9.1878
nayohan/llama3-8b-it-general-trc313k-enko-8k 14.7225 10.47 4.45 7.555 9.2994
Gemma
Translation-EnKo/gemma-2-2b-it-general1.2m-trc313eval45 13.7775 7.88 3.95 6.105 7.9281
Translation-EnKo/gemma-2-9b-it-general1.2m-trc313eval45 18.9887 13.215 6.28 9.975 12.1147
Translation-EnKo/gukbap-gemma-2-9b-it-general1.2m-trc313eval45 18.405 12.44 6.59 9.64 11.7688
EXAONE
CarrotAI/EXAONE-3.0-7.8B-Instruct-Llamafied-8k 4.9375 4.9 1.58 8.215 4.9081
Translation-EnKo/exaeon3-translation-general-enko-7.8b (private) 17.8275 8.56 2.72 6.31 8.8544
Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b 19.6075 13.46 7.28 11.4425 12.9475

Performance analysis by training dataset

๋ชจ๋ธ ์ด๋ฆ„ AIHub Flores IWSLT News ํ‰๊ท 
Meta-Llama
Meta-Llama-3-8B-Instruct 0.3075 0.295 2.395 0.17 0.7919
llama3-8b-it-general1.2m-en-ko-4k 15.7875 8.09 4.445 4.68 8.2506
llama3-8b-it-general1.2m-trc313k-enko-4k 16.3938 9.63 5.405 5.3225 9.1878
llama3-8b-it-general1.2m-trc313k-enko-8k 14.7225 10.47 4.45 7.555 9.2994
Gemma
gemma-2-2b-it-general1.2m-trc313eval45 13.7775 7.88 3.95 6.105 7.9281
gemma-2-9b-it-general1.2m-trc313eval45 18.9887 13.215 6.28 9.975 12.1147
gukbap-gemma-2-9b-it-general1.2m-trc313eval45 18.405 12.44 6.59 9.64 11.7688
EXAONE
EXAONE-3.0-7.8B-Instruct 4.9375 4.9 1.58 8.215 4.9081
EXAONE-3.0-7.8B-Instruct-general12m (private) 17.8275 8.56 2.72 6.31 8.8544
EXAONE-3.0-7.8B-Instruct-general12m-trc1400k-trc313eval45 19.6075 13.46 7.28 11.4425 12.9475

Citation

@misc{InstrucTrans-v2,
  title={exaone3-instrucTrans-v2-enko-7.8b},
  author={Yohan Na and Suzie Oh and Eunji Kim and Mingyou Sung},
  year={2024},
  url={https://huggingface.co/Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b}
}
@misc{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
@article{exaone-3.0-7.8B-instruct,
  title={EXAONE 3.0 7.8B Instruction Tuned Language Model},
  author={LG AI Research},
  journal={arXiv preprint arXiv:2408.03541},
  year={2024}
}
@article{gemma_2024,
  title={Gemma},
  url={https://www.kaggle.com/m/3301},
  doi={10.34740/KAGGLE/M/3301},
  publisher={Kaggle},
  author={Gemma Team},
  year={2024}
}