collapse_gemma-2-27b_hs2_replace_iter4_sftsd1

This model is a fine-tuned version of google/gemma-2-27b on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3517
  • Num Input Tokens Seen: 3894680
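A minimal loading sketch for this checkpoint with the Transformers library is shown below; the repository id comes from this card, while the bfloat16 dtype, device placement, and the sample prompt are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id from this model card; dtype and device placement are assumptions.
model_id = "RylanSchaeffer/collapse_gemma-2-27b_hs2_replace_iter4_sftsd1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt only; this card does not specify an intended prompt format.
inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```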

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows this list):

  • learning_rate: 8e-06
  • train_batch_size: 4
  • eval_batch_size: 16
  • seed: 1
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 1
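As a rough reconstruction, these hyperparameters map onto a Transformers TrainingArguments configuration along the following lines; the output directory and bf16 flag are assumptions, and the actual training script is not part of this card.

```python
from transformers import TrainingArguments

# Values taken from the hyperparameter list above; output_dir and bf16 are assumptions.
training_args = TrainingArguments(
    output_dir="collapse_gemma-2-27b_hs2_replace_iter4_sftsd1",  # assumed name
    learning_rate=8e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    seed=1,
    gradient_accumulation_steps=32,   # 4 x 32 = total train batch size of 128
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.05,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                        # assumed, matching the BF16 checkpoint weights
)
```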

Training results

Training Loss   Epoch    Step   Validation Loss   Input Tokens Seen
No log          0        0      1.1282            0
3.7019          0.0699   5      1.0817            269240
3.3923          0.1397   10     1.1760            545652
3.1972          0.2096   15     1.2260            817200
3.0134          0.2795   20     1.2884            1093032
2.8402          0.3493   25     1.3254            1363440
2.7236          0.4192   30     1.3343            1638832
2.4553          0.4891   35     1.3224            1907096
2.6104          0.5590   40     1.3143            2179408
2.6111          0.6288   45     1.3231            2461164
2.5001          0.6987   50     1.3322            2735560
2.4994          0.7686   55     1.3282            3010548
2.5116          0.8384   60     1.3344            3284104
2.4746          0.9083   65     1.3460            3558936
2.4739          0.9782   70     1.3520            3839172
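The validation-loss trajectory from the table can be inspected with a small plotting sketch like the one below; the (tokens, loss) pairs are copied from the table, and the plotting choices are arbitrary.

```python
import matplotlib.pyplot as plt

# (input tokens seen, validation loss) pairs copied from the training results table.
points = [
    (0, 1.1282), (269240, 1.0817), (545652, 1.1760), (817200, 1.2260),
    (1093032, 1.2884), (1363440, 1.3254), (1638832, 1.3343), (1907096, 1.3224),
    (2179408, 1.3143), (2461164, 1.3231), (2735560, 1.3322), (3010548, 1.3282),
    (3284104, 1.3344), (3558936, 1.3460), (3839172, 1.3520),
]

tokens, val_loss = zip(*points)
plt.plot(tokens, val_loss, marker="o")
plt.xlabel("Input tokens seen")
plt.ylabel("Validation loss")
plt.title("collapse_gemma-2-27b_hs2_replace_iter4_sftsd1")
plt.show()
```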

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
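A quick check that a local environment matches these versions might look like the following sketch; the pinned strings are taken from the list above.

```python
import transformers, torch, datasets, tokenizers

# Versions listed in this model card; comparing them locally is just a convenience sketch.
expected = {
    "transformers": "4.44.0",
    "torch": "2.4.0+cu121",
    "datasets": "2.20.0",
    "tokenizers": "0.19.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in expected.items():
    status = "OK" if installed[name] == version else f"got {installed[name]}"
    print(f"{name}: expected {version}, {status}")
```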