---
language: ko
license: mit
metrics:
- perplexity
- accuracy
tags:
- korean
- qwen
- finetuned
datasets:
- kyujinpy/KOpen-platypus
---

# Qwen 2.5 3B Instruction-tuned Model

This model is an instruction-tuned version of Qwen 2.5 3B for recipe recommendation.

## Model Description

- Fine-tuned from: Qwen/Qwen2.5-3B
- Fine-tuning task: Instruction-tuning
- Training data: kyujinpy/KOpen-platypus + recipe data
- Evaluation results: [Add your evaluation metrics]

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_path = "Qwen/Qwen2.5-3B"
adapter_model = "INo0121/qwen2.5_3b_instruction_tuning_241020"

# Load the base model, then attach the fine-tuned adapter
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_model)
tokenizer = AutoTokenizer.from_pretrained(adapter_model)

# Example usage
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.1,  # sampling options belong in generate(), not from_pretrained()
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Limitations and Biases

[Describe any known limitations or biases of your model]

## Training Details

- Training framework: Hugging Face Transformers
- Hyperparameters: [List your key hyperparameters]
- Training hardware: [Describe the hardware used]
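Since the model loads as a PEFT adapter on top of Qwen/Qwen2.5-3B, the run was presumably a LoRA-style fine-tune. The sketch below shows one way such a run could be set up with Transformers and PEFT; the prompt template, LoRA configuration, and every hyperparameter value in it are illustrative assumptions, not the settings actually used for this model.

```python
# Hypothetical training sketch. The LoRA rank, target modules, prompt
# template, and trainer hyperparameters are illustrative assumptions,
# not the values actually used to train this adapter.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model_path = "Qwen/Qwen2.5-3B"
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
model = AutoModelForCausalLM.from_pretrained(
    base_model_path, torch_dtype="auto", device_map="auto"
)

# Wrap the base model with a LoRA adapter (assumed configuration)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Instruction/response pairs; field names assumed to follow the
# Open-Platypus schema (instruction, output)
dataset = load_dataset("kyujinpy/KOpen-platypus", split="train")

def to_text(example):
    # Alpaca-style prompt; the template actually used is not documented
    return {
        "text": f"### Instruction:\n{example['instruction']}\n\n"
                f"### Response:\n{example['output']}"
    }

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = dataset.map(to_text)
tokenized = tokenized.map(tokenize, remove_columns=tokenized.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen2.5-3b-instruction-tuning",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard causal-LM labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("qwen2.5-3b-instruction-tuning")
```

The recipe data mentioned under Model Description is not publicly documented, so the sketch trains on KOpen-platypus alone.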