---
license: mit
datasets:
- CreitinGameplays/merged-data-v2
base_model: 
- HuggingFaceH4/zephyr-7b-beta
- mistral-community/Mistral-7B-v0.2
language:
- en
---
# **ConvAI-9b: A Conversational AI Model**
![img](https://huggingface.co/CreitinGameplays/ConvAI-9b/resolve/main/convai.png)
## **1. Model Details**

* **Model Name:** ConvAI-9b
* **Authors:** CreitinGameplays
* **Date:** April 18th, 2024 

## **2. Model Description**

ConvAI-9b is a fine-tuned conversational AI model with 9 billion parameters. It is based on the following models:

* **Base Model:** [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* **Merged Model:** [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2)

## **3. Training Data**

The model was fine-tuned on a custom dataset of conversations between an AI assistant and a user. Each training example followed this prompt format:

```
<|system|> (system prompt, e.g.: You are a helpful AI language model called ChatGPT, your goal is helping users with their questions) </s> <|user|> (user prompt) </s>
```
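Below is a minimal inference sketch (not part of the original card) showing how a prompt in this format could be assembled and run with 🤗 Transformers. The trailing `<|assistant|>` tag and the generation settings are assumptions and should be checked against the tokenizer's chat template:

```python
# Sketch only: loads ConvAI-9b and builds a prompt in the format documented above.
# The "<|assistant|>" tag is an assumption (zephyr-style template); verify it
# against tokenizer.chat_template before relying on it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CreitinGameplays/ConvAI-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system_prompt = (
    "You are a helpful AI language model called ChatGPT, "
    "your goal is helping users with their questions"
)
user_prompt = "Explain what a language model is in one sentence."

# Prompt layout from section 3; the assistant turn marker is assumed.
prompt = f"<|system|> {system_prompt} </s> <|user|> {user_prompt} </s> <|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```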


## **4. Intended Uses**

ConvAI-9b is intended for use in conversational AI applications, such as:

* Chatbots
* Virtual assistants
* Interactive storytelling
* Educational tools

## **5. Limitations**

* Like any other language model, ConvAI-9b may generate incorrect or misleading responses.
* It may exhibit biases present in the training data.
* The model's performance can be affected by the quality and format of the input text.

## **6. Evaluation**
|Metric    |Value|
|----------|-----|
|ARC       |57.50|
|HellaSwag |80.34|
|TruthfulQA|49.54|
|Winogrande|76.24|

More detailed evaluation results are available [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CreitinGameplays__ConvAI-9b).