---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: mlabonne/AlphaMonarch-7B
datasets: migtissera/Hitchhiker
---

# AlphaHitchhiker-7B (v1)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/bMRgkD7-UJ4b82hB_YUN6.png)

A retrained version is available [here](https://huggingface.co/macadeliccc/AlphaHitchhiker-7B-v2), with roughly a 15% improvement.

Thanks to [migtissera](https://huggingface.co/migtissera) for the [Hitchhiker](https://huggingface.co/datasets/migtissera/Hitchhiker) dataset.

- **Developed by:** macadeliccc
- **License:** apache-2.0
- **Finetuned from model:** mlabonne/AlphaMonarch-7B
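
A minimal inference sketch with `transformers` is shown below. The repo id `macadeliccc/AlphaHitchhiker-7B` and the presence of a chat template in the tokenizer are assumptions; adjust them to the actual checkpoint you download.

```python
# Minimal inference sketch; repo id and chat template are assumptions, not guarantees.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/AlphaHitchhiker-7B"  # assumed repo id for this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Why is the answer to everything 42?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```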

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's [TRL](https://github.com/huggingface/trl) library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
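
For reference, here is a hedged sketch of an Unsloth + TRL SFT run of the kind described above. The LoRA settings, hyperparameters, and dataset text field are illustrative assumptions, not the actual training recipe used for this model.

```python
# Illustrative Unsloth + TRL SFT sketch; all hyperparameters below are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit via Unsloth for faster, memory-efficient finetuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mlabonne/AlphaMonarch-7B",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are illustrative).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("migtissera/Hitchhiker", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed field name; check the dataset schema
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```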

# Quants

- [dagbs/AlphaHitchhiker-7B-GGUF](https://huggingface.co/dagbs/AlphaHitchhiker-7B-GGUF)
- [solidrust/AlphaHitchhiker-7B-AWQ](https://huggingface.co/solidrust/AlphaHitchhiker-7B-AWQ)
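
A minimal sketch of running the GGUF quant locally with `llama-cpp-python` follows; the exact filename inside the repo is an assumption, so check the repo's file list before downloading.

```python
# Sketch: download one GGUF file and run it with llama-cpp-python.
# The filename below is an assumption; pick an actual file from the dagbs repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="dagbs/AlphaHitchhiker-7B-GGUF",
    filename="alphahitchhiker-7b.Q4_K_M.gguf",  # assumed filename
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain the significance of the number 42.", max_tokens=128)
print(out["choices"][0]["text"])
```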