---
license: mit
base_model: microsoft/Phi-3-medium-128k-instruct
library_name: adapters
datasets:
- awels/druidai_admin_dataset
language:
- en
widget:
- text: Who are you, Merlin?
tags:
- awels
- druidai
---

# Merlin Model Card

## Model Details
**Model Name:** Merlin

**Model Type:** Transformer-based, built on Microsoft Phi-3 Medium (14b parameters, 128k-token context)
22
+
23
+ **Publisher:** Awels Engineering
24
+
25
+ **License:** MIT
26
+
27
+ **Model Description:**
28
+ Merlin is a sophisticated model designed to help as an AI agent focusing on the Druid AI Conversational platform. It leverages advanced machine learning techniques to provide efficient and accurate solutions. It has been trained on the full docments corpus of Druid 7.14.
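Because Merlin is built on Phi-3 Medium Instruct, prompts should follow Phi-3's chat format. A minimal sketch of that layout (the helper name is illustrative and not part of the released model; in practice, `tokenizer.apply_chat_template` is the safer route):

```python
def build_phi3_prompt(user_message: str) -> str:
    """Format a single-turn prompt using Phi-3's chat markers."""
    return f"<|user|>\n{user_message}<|end|>\n<|assistant|>\n"

prompt = build_phi3_prompt("Who are you, Merlin?")
```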

## Dataset
**Dataset Name:** [awels/druidai_admin_dataset](https://huggingface.co/datasets/awels/druidai_admin_dataset)

**Dataset Source:** Hugging Face Datasets

**Dataset License:** MIT

**Dataset Description:**
The dataset used to train Merlin consists of all the public documents available for the Druid AI Conversational Platform. It is curated to ensure comprehensive coverage of the typical administrative and development scenarios encountered on the Druid AI Platform.

## Training Details

**Training Data:**
The training data consists of 33,000 question-and-answer pairs generated by the [Bonito LLM](https://github.com/BatsResearch/bonito). The dataset is split into three subsets (training, test, and validation) to ensure robust model performance.
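As an illustration of the three-way split described above (the exact proportions used for Merlin are not stated in this card, so the 80/10/10 ratio below is an assumption):

```python
import random

def three_way_split(examples, train_frac=0.8, test_frac=0.1, seed=42):
    """Shuffle and split a list of examples into train/test/validation subsets.

    The 80/10/10 proportions are illustrative, not the card's stated values.
    """
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_test = int(len(shuffled) * test_frac)
    return (
        shuffled[:n_train],                  # training set
        shuffled[n_train:n_train + n_test],  # test set
        shuffled[n_train + n_test:],         # validation set
    )

train, test, validation = three_way_split(list(range(33_000)))
```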

**Training Procedure:**
Merlin was trained using supervised learning with cross-entropy loss and the Adam optimizer. Training ran for 1 epoch with a batch size of 4, a learning rate of 5.0e-06, a cosine learning rate scheduler, and gradient checkpointing for memory efficiency.
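The stated hyperparameters, collected as a plain configuration dict for reference (the keys mirror Hugging Face `TrainingArguments` naming; this is a sketch, not the actual training script):

```python
# Hyperparameters as stated in this card; keys follow the
# Hugging Face TrainingArguments convention, used here as a sketch.
training_config = {
    "num_train_epochs": 1,
    "per_device_train_batch_size": 4,
    "learning_rate": 5.0e-06,
    "lr_scheduler_type": "cosine",
    "gradient_checkpointing": True,
    "optim": "adamw_torch",  # assumption: the card says "Adam"; the exact variant is not stated
}
```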

**Hardware:**
The model was trained on a single NVIDIA H100 SXM graphics card.

**Framework:**
The training was conducted using PyTorch.

## Evaluation

**Evaluation Metrics:**
Merlin was evaluated on the training dataset:

> epoch = 1.0
> total_flos = 124998759GF
> train_loss = 1.8515
> train_runtime = 0:43:52.83
> train_samples_per_second = 9.584
> train_steps_per_second = 2.396

**Performance:**
The model achieved the following results on the evaluation dataset:

> epoch = 1.0
> eval_loss = 1.5167
> eval_runtime = 0:01:56.08
> eval_samples = 5298
> eval_samples_per_second = 52.287
> eval_steps_per_second = 13.076


## Intended Use

**Primary Use Case:**
Merlin is intended to be used locally in an agent swarm, where agents collaborate to solve problems related to the Druid AI Conversational platform.

**Limitations:**
This 14b model is an upscaled version of the 3b model. It achieves a much better loss than the 3b model, so its results should be better.