limbxy_feet

This model is a fine-tuned version of c14kevincardenas/beit-large-patch16-384-limb on the c14kevincardenas/beta_caller_284_limbxy_feet dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0047
  • RMSE: 0.0683
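
The card does not state a pipeline type for this checkpoint, so the following is only a minimal inference sketch: it assumes the model loads as an image-classification-style BEiT whose logits are the regressed values (consistent with the RMSE metric above), and the image path is a placeholder.

```python
# Minimal sketch, not a confirmed usage example: the card does not document the
# model head or pipeline type. Assumes an image-classification-style model whose
# logits are the regression outputs (e.g. normalized x/y foot coordinates).
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "c14kevincardenas/limbxy_feet"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)
model.eval()

image = Image.open("climber.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits)  # for a regression head, the logits are the predicted values
```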

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 2014
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 250
  • num_epochs: 20.0
  • mixed_precision_training: Native AMP
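
The training script itself is not published here; the block below is only a sketch of how the settings above map onto the transformers TrainingArguments API. The output directory is an assumption, and the Adam betas and epsilon listed above are already the optimizer defaults, so they are not set explicitly.

```python
# Sketch only: reconstructs the hyperparameters listed above with the Trainer API.
# The actual training script for this model is not published in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="limbxy_feet",           # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=2014,
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=20.0,
    fp16=True,                          # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```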

Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE   |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1975        | 1.0   | 47   | 0.1440          | 0.3794 |
| 0.1345        | 2.0   | 94   | 0.0639          | 0.2527 |
| 0.0329        | 3.0   | 141  | 0.0165          | 0.1284 |
| 0.0295        | 4.0   | 188  | 0.0178          | 0.1334 |
| 0.0272        | 5.0   | 235  | 0.0153          | 0.1237 |
| 0.0156        | 6.0   | 282  | 0.0164          | 0.1282 |
| 0.0254        | 7.0   | 329  | 0.0241          | 0.1551 |
| 0.0254        | 8.0   | 376  | 0.0679          | 0.2606 |
| 0.0268        | 9.0   | 423  | 0.0060          | 0.0773 |
| 0.0174        | 10.0  | 470  | 0.0081          | 0.0899 |
| 0.0073        | 11.0  | 517  | 0.0144          | 0.1200 |
| 0.0101        | 12.0  | 564  | 0.0094          | 0.0968 |
| 0.0064        | 13.0  | 611  | 0.0080          | 0.0894 |
| 0.0074        | 14.0  | 658  | 0.0055          | 0.0743 |
| 0.0037        | 15.0  | 705  | 0.0064          | 0.0803 |
| 0.0045        | 16.0  | 752  | 0.0055          | 0.0742 |
| 0.0022        | 17.0  | 799  | 0.0057          | 0.0754 |
| 0.0027        | 18.0  | 846  | 0.0047          | 0.0683 |
| 0.0019        | 19.0  | 893  | 0.0047          | 0.0684 |
| 0.0018        | 20.0  | 940  | 0.0047          | 0.0684 |

Framework versions

  • Transformers 4.45.2
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
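
As a convenience (not part of the original card), the snippet below prints the locally installed versions so they can be compared against this list:

```python
# Prints installed versions so they can be checked against the versions above.
import datasets
import tokenizers
import torch
import transformers

for name, module in [
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```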