---
base_model: mlabonne/Beagle14-7B
inference: false
language:
- en
license: apache-2.0
model_creator: mlabonne
model_name: Beagle14-7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: "<|system|>

</s>

<|user|>

{prompt}</s>

<|assistant|>

"
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
quantized_by: brittlewis12
---

# Beagle14-7B GGUF

Original model: [Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B)
Model creator: [Maxime Labonne](https://huggingface.co/mlabonne)

This repo contains GGUF format model files for Maxime Labonne’s Beagle14-7B.

Beagle14-7B is a merge of the following models using LazyMergekit:

- [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
- [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Converted using llama.cpp build 1879 (revision [3e5ca79](https://github.com/ggerganov/llama.cpp/commit/3e5ca7931c68152e4ec18d126e9c832dd84914c8)).
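
Any llama.cpp-based runtime can load these files. As a quick local sanity check, a minimal sketch using `huggingface_hub` and `llama-cpp-python` could look like the following; the repo id and quantized filename below are assumptions, so substitute whichever `.gguf` file you actually download from this repo.

```python
# Minimal sketch: download one GGUF file from this repo and run a completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumptions: repo id and filename are illustrative placeholders.
model_path = hf_hub_download(
    repo_id="brittlewis12/Beagle14-7B-GGUF",   # assumed repo id
    filename="beagle14-7b.Q4_K_M.gguf",        # assumed quant filename
)

# Load the model with a modest context window.
llm = Llama(model_path=model_path, n_ctx=2048)

# Zephyr-style prompt (see the template in the next section).
prompt = (
    "<|system|>\nYou are a helpful assistant.</s>\n"
    "<|user|>\nWhat is a model merge?</s>\n"
    "<|assistant|>\n"
)

output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```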

### Prompt template: Zephyr

The Zephyr-style prompt template appears to work well:

```
<|system|>
{{system_message}}</s>
<|user|>
{{prompt}}</s>
<|assistant|>
```
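
If you are scripting against one of these GGUF files, a tiny helper for assembling this template might look like the sketch below; the function name and example text are illustrative, not part of the original card.

```python
def build_zephyr_prompt(prompt: str, system_message: str = "") -> str:
    """Assemble a Zephyr-style prompt string (illustrative helper)."""
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{prompt}</s>\n"
        f"<|assistant|>\n"
    )

# Example usage:
print(build_zephyr_prompt("Summarize what a SLERP model merge is."))
```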

---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today on [TestFlight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---

## Original Model Evaluations

The evaluation was performed by the model’s creator using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous benchmark suite, as reported on mlabonne’s alternative leaderboard, YALL: [Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**Beagle14-7B**](https://huggingface.co/mlabonne/Beagle14-7B)| 44.38| **76.53**| **69.44**| 47.25| **59.4**|
|[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)| 42.75| 72.99| 52.99| 40.94| 52.42|
|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)| 43.67| 73.24| 55.37| 41.76| 53.51|
|[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)| **47.79**| 74.69| 55.92| 44.84| 55.81|
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67|
|[CatMarcoro14-7B-slerp](https://huggingface.co/occultml/CatMarcoro14-7B-slerp)| 45.21| 75.91| 63.81| **47.31**| 58.06|