plaguss (HF staff) committed · Commit fbb2575 · verified · 1 parent: 0941063

Update README.md

Files changed (1):
  1. README.md +7 -2
README.md CHANGED
```diff
@@ -6,6 +6,8 @@ tags:
 model-index:
 - name: zephyr-7b-spin-iter1-v0
   results: []
+datasets:
+- argilla/10k_prompts_SPIN_iter1_zephyr_top
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,7 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # zephyr-7b-spin-iter1-v0
 
-This model is a fine-tuned version of [plaguss/zephyr-7b-spin-iter0-v0](https://huggingface.co/plaguss/zephyr-7b-spin-iter0-v0) on the None dataset.
+This model is a fine-tuned version of [argilla/zephyr-7b-spin-iter0-v0](https://huggingface.co/argilla/zephyr-7b-spin-iter0-v0) on the
+[argilla/10k_prompts_SPIN_iter1_zephyr_top](https://huggingface.co/datasets/argilla/10k_prompts_SPIN_iter1_zephyr_top) and
+[argilla/10k_prompts_SPIN_iter0_zephyr_top](https://huggingface.co/datasets/argilla/10k_prompts_SPIN_iter0_zephyr_top) dataset.
+
 It achieves the following results on the evaluation set:
 - Loss: 0.0831
 - Rewards/real: 1.3037
@@ -71,4 +76,4 @@ The following hyperparameters were used during training:
 - Transformers 4.37.0
 - Pytorch 2.1.2+cu121
 - Datasets 2.14.6
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
```
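The substantive change in this commit is the new `datasets:` entry in the card's `---`-delimited YAML front matter, which the Hub reads to link the model to its training datasets. As a rough illustration (not part of the commit, and not how the Hub itself parses cards; real tooling uses a YAML library), a simple list-valued field like `datasets:` can be pulled out of such a header with a few lines of standard-library Python:

```python
# Illustrative sketch only: extracts simple 'key:' / '- item' list fields
# from a model card's YAML front matter. Nested structures such as
# 'model-index:' need a real YAML parser and are out of scope here.
def read_front_matter(card_text: str) -> dict:
    """Split a '---'-delimited header off a Markdown model card and
    collect list-valued fields such as 'datasets:'."""
    lines = card_text.splitlines()
    assert lines and lines[0].strip() == "---", "card must start with front matter"
    end = lines[1:].index("---") + 1           # position of the closing '---'
    fields, current = {}, None
    for line in lines[1:end]:
        if line.startswith("- ") and current:  # list item under the last key
            fields[current].append(line[2:].strip())
        elif line.endswith(":") and not line.startswith(" "):
            current = line[:-1]                # start a new list-valued field
            fields[current] = []
    return fields

card = """---
datasets:
- argilla/10k_prompts_SPIN_iter1_zephyr_top
- argilla/10k_prompts_SPIN_iter0_zephyr_top
---
# zephyr-7b-spin-iter1-v0
"""
print(read_front_matter(card)["datasets"])
# → ['argilla/10k_prompts_SPIN_iter1_zephyr_top', 'argilla/10k_prompts_SPIN_iter0_zephyr_top']
```

Listing both datasets under `datasets:` (the commit only adds the iter1 one) mirrors the updated prose, which names both the iter1 and iter0 SPIN datasets.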