---
license: apache-2.0
language:
- en
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/ORVjYrpzyfKfP4ByOQnpQ.jpeg)

A DPO fine-tune of [mhm-7b-v1.3](https://huggingface.co/h2m/mhm-7b-v1.3) on [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs).
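A DPO fine-tune like this could be reproduced with TRL's `DPOTrainer`; the sketch below is an assumption about the training setup, not the card's actual recipe, and the hyperparameters (`beta`, learning rate, output directory) are illustrative placeholders. Only the model and dataset names come from the card.

```python
# Hedged sketch: DPO fine-tuning h2m/mhm-7b-v1.3 on Intel/orca_dpo_pairs
# using TRL. Hyperparameters here are assumptions, not the card's settings.

def to_dpo_format(row):
    # Map an orca_dpo_pairs row (system/question/chosen/rejected columns)
    # to the prompt/chosen/rejected schema that DPOTrainer expects.
    return {
        "prompt": row["question"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }

if __name__ == "__main__":
    # Heavy imports and the actual run are kept out of module import
    # so the mapping above can be reused without downloading a 7B model.
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import DPOConfig, DPOTrainer

    model_name = "h2m/mhm-7b-v1.3"
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
    dataset = dataset.map(to_dpo_format)

    # beta and learning_rate are illustrative defaults for DPO runs.
    args = DPOConfig(output_dir="mhm-7b-dpo", beta=0.1, learning_rate=5e-6)
    trainer = DPOTrainer(
        model=model,
        args=args,
        train_dataset=dataset,
        processing_class=tokenizer,
    )
    trainer.train()
```

Note that `DPOTrainer` computes the reference model automatically from a frozen copy of `model` when none is passed explicitly.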

Based on Mistral. The base model was created with [dare_ties](https://github.com/cg123/mergekit) using models from the Open LLM Leaderboard; this is the result of three merges involving seven different models.
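A dare_ties merge in mergekit is driven by a YAML config. The card does not name the seven source models, so the repositories, weights, and densities below are hypothetical placeholders that only illustrate the config shape.

```yaml
# Hypothetical mergekit dare_ties config; source models and parameters
# are placeholders, since the card does not list them.
models:
  - model: mistralai/Mistral-7B-v0.1
    # base model: no parameters section
  - model: example-org/model-a
    parameters:
      weight: 0.5
      density: 0.5
  - model: example-org/model-b
    parameters:
      weight: 0.3
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```

With mergekit installed, such a config is typically applied via `mergekit-yaml config.yml ./output-dir`.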

Just an experiment.