---
base_model:
- kamruzzaman-asif/qwen2.5_7B_instruct_base_lora_merged_0_15000
- Qwen/Qwen2.5-7B-Instruct
- Qwen/Qwen2.5-7B
- kamruzzaman-asif/qwen2.5_7B_instruct_base_lora_merged_15001_30000
library_name: transformers
tags:
- mergekit
- merge
---

# output-model-directory

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as the base model. TIES computes each fine-tuned model's parameter deltas against the base, trims low-magnitude entries, elects a per-parameter sign by majority, and merges only the deltas that agree with that sign back onto the base.

### Models Merged

The following models were included in the merge:
* [kamruzzaman-asif/qwen2.5_7B_instruct_base_lora_merged_0_15000](https://huggingface.co/kamruzzaman-asif/qwen2.5_7B_instruct_base_lora_merged_0_15000)
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
* [kamruzzaman-asif/qwen2.5_7B_instruct_base_lora_merged_15001_30000](https://huggingface.co/kamruzzaman-asif/qwen2.5_7B_instruct_base_lora_merged_15001_30000)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Qwen/Qwen2.5-7B
dtype: bfloat16
merge_method: ties
models:
  - model: kamruzzaman-asif/qwen2.5_7B_instruct_base_lora_merged_0_15000
    parameters:
      density: 1
      weight: 1
  - model: kamruzzaman-asif/qwen2.5_7B_instruct_base_lora_merged_15001_30000
    parameters:
      density: 1
      weight: 1
  - model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      density: 1
      weight: 1
parameters:
  density: 1
  int8_mask: true
  normalize: true
  weight: 1
tokenizer_source: kamruzzaman-asif/qwen2.5_7B_instruct_base_lora_merged_0_15000
```
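Note that with `density: 1` on every model, no trimming actually occurs, so with equal weights and `normalize: true` this configuration reduces to a sign-elected, normalized average of the three task vectors over the Qwen2.5-7B base.

Assuming `mergekit` is installed (e.g. `pip install mergekit`), saving the configuration above as `config.yml` and running `mergekit-yaml config.yml ./output-model-directory` should reproduce the merge; exact flags may differ across mergekit versions.

### Usage

A minimal sketch of loading the merged model with `transformers`. The repository id below is a placeholder, since this card does not state where the merged weights were published; substitute the local mergekit output directory or the actual Hub repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: replace with the mergekit output directory or Hub repo id.
model_id = "path/to/output-model-directory"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype the merge was performed in
    device_map="auto",
)

# tokenizer_source points at an instruct-derived checkpoint, so the Qwen
# chat template should be available on the tokenizer.
messages = [{"role": "user", "content": "Summarize TIES merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```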