  results: []
---

# miniclaus-qw1.5B-UNAMGS

Trained with `Magpie-Align/Magpie-Pro-MT-300K-v0.1`
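
For reference, the training data is a public dataset on the Hugging Face Hub. Below is a minimal sketch of pulling it down with the `datasets` library; the `train` split name is an assumption (the usual Hub default).

```python
from datasets import load_dataset

# Magpie-Pro-MT-300K-v0.1 is the multi-turn dataset named above.
# split="train" is an assumption (the usual Hub default split).
ds = load_dataset("Magpie-Align/Magpie-Pro-MT-300K-v0.1", split="train")
print(ds)  # inspect columns and row count
```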

Using MGS & UNA (MLP) on this tiny but powerful model.

![miniclaus-qw1.5B-UNAMGS](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS/resolve/main/miniclaus_qw15-UNAMGS.png)
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

It achieves the following results on the evaluation set:
- Loss: 0.7193

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative mapping to common trainer settings follows the list):
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 1
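
The run itself was built with Axolotl (badge above), so the original config is not reproduced here. As a rough, non-authoritative mapping of the values above onto Hugging Face `TrainingArguments`: `gradient_accumulation_steps=16` is inferred from 8 devices x per-device batch 1 x 16 = 128, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; this is NOT the
# original Axolotl config, just the equivalent HF Trainer settings.
args = TrainingArguments(
    output_dir="miniclaus-qw15-unamgs",  # placeholder path
    per_device_train_batch_size=1,       # train_batch_size: 1
    per_device_eval_batch_size=1,        # 8 GPUs x 1 = total_eval_batch_size 8
    gradient_accumulation_steps=16,      # inferred: 8 GPUs x 1 x 16 = 128 total
    num_train_epochs=1,
    seed=42,
    adam_beta1=0.9,                      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                   # epsilon=1e-08
)
```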

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1641        | 0.0007 | 1    | 0.8514          |
| 0.9246        | 0.0503 | 76   | 0.7921          |
| 0.8791        | 0.1006 | 152  | 0.7727          |
| 0.8507        | 0.1509 | 228  | 0.7611          |
| 0.8376        | 0.2012 | 304  | 0.7534          |
| 0.793         | 0.2515 | 380  | 0.7467          |
| 0.7834        | 0.3018 | 456  | 0.7421          |
| 0.7807        | 0.3521 | 532  | 0.7384          |
| 0.764         | 0.4023 | 608  | 0.7359          |
| 0.7738        | 0.4526 | 684  | 0.7320          |
| 0.7425        | 0.5029 | 760  | 0.7300          |
| 0.7519        | 0.5532 | 836  | 0.7279          |
| 0.7461        | 0.6035 | 912  | 0.7255          |
| 0.7489        | 0.6538 | 988  | 0.7245          |
| 0.7614        | 0.7041 | 1064 | 0.7222          |
| 0.7576        | 0.7544 | 1140 | 0.7222          |
| 0.7303        | 0.8047 | 1216 | 0.7209          |
| 0.7332        | 0.8550 | 1292 | 0.7199          |
| 0.7541        | 0.9053 | 1368 | 0.7202          |
| 0.7369        | 0.9556 | 1444 | 0.7193          |

## Llamacpp imatrix Quantizations of miniclaus-qw1.5B-UNAMGS

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4058">b4058</a> for quantization.
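
Below is a minimal sketch of fetching one of the GGUF files and running it locally with `huggingface_hub` and `llama-cpp-python`. The repo id and filename are hypothetical placeholders, since the exact quant filenames are not listed in this section; substitute the file you actually want.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical repo id and filename: replace with the actual quant repo
# and GGUF file (e.g. a Q4_K_M variant) from the files list.
model_path = hf_hub_download(
    repo_id="your-namespace/miniclaus-qw1.5B-UNAMGS-GGUF",
    filename="miniclaus-qw1.5B-UNAMGS-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
)
print(out["choices"][0]["message"]["content"])
```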