SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model fine-tuned from sentence-transformers/all-MiniLM-L6-v2 on the csv dataset. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: csv
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
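The three modules correspond to a BERT encoder, mean pooling over token embeddings, and L2 normalization. Below is a minimal sketch of the same computation using transformers directly, assuming (as is standard for Sentence Transformers checkpoints) that the underlying BertModel weights load from the same repo; in practice, model.encode in the Usage section does all of this for you.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Tam1032/MiniLM6-v2-sport")
bert = AutoModel.from_pretrained("Tam1032/MiniLM6-v2-sport")

encoded = tokenizer(["Real Madrid vs Barcelona"], padding=True,
                    truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (1, seq_len, 384)

# (1) Pooling: mean over non-padding tokens (pooling_mode_mean_tokens=True)
mask = encoded["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit length, so dot product equals cosine similarity
embedding = F.normalize(embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 384])
```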
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Tam1032/MiniLM6-v2-sport")
# Run inference
sentences = [
'Juventus vs Napoli (23h00 ngày 21/9): Không dễ cho chủ nhà.',
'Real Madrid vs Barcelona',
'El Salvador vs Montserrat',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
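Since the model was trained to match short sport queries against article titles, the similarity scores can be used directly for ranking. A small illustrative follow-up, continuing from the snippet above (the query string here is invented for demonstration):

```python
# Rank the candidate sentences for a short query
query_embedding = model.encode(["nhận định Juventus vs Napoli"])  # "preview Juventus vs Napoli"
scores = model.similarity(query_embedding, embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(sentences[best])  # expected: the Juventus vs Napoli title
```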
Evaluation
Metrics
Binary Classification
- Dataset: sport_query_title_dev
- Evaluated with BinaryClassificationEvaluator
Metric | Value |
---|---|
cosine_accuracy | 0.9944 |
cosine_accuracy_threshold | 0.6411 |
cosine_f1 | 0.9943 |
cosine_f1_threshold | 0.6108 |
cosine_precision | 0.9959 |
cosine_recall | 0.9928 |
cosine_ap | 0.9996 |
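These numbers come from BinaryClassificationEvaluator, which sweeps a cosine-similarity threshold over labeled sentence pairs. A hedged sketch of re-running it, using two pairs taken from the evaluation samples shown further below:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("Tam1032/MiniLM6-v2-sport")
evaluator = BinaryClassificationEvaluator(
    sentences1=[
        "Hải Phòng vs CAHN (19h15 ngày 15/9): Điểm tựa sân nhà.",
        "Kuwait vs Jordan 1h15 ngày 20/11 (Vòng loại World Cup 2026).",
    ],
    sentences2=["kết quả Hải Phòng vs CAHN", "Kuwait vs Iraq"],
    labels=[1, 0],  # 1 = query matches the title, 0 = it does not
    name="sport_query_title_dev",
)
print(evaluator(model))  # cosine accuracy, F1, precision, recall, AP, ...
```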
Training Details
Training Dataset
csv
- Dataset: csv
- Size: 19,598 training samples
- Columns: hypothesis, premise, and label
- Approximate statistics based on the first 1000 samples:

 | hypothesis | premise | label |
---|---|---|---|
type | string | string | int |
details | min: 12 tokens, mean: 27.44 tokens, max: 37 tokens | min: 5 tokens, mean: 9.63 tokens, max: 55 tokens | 0: ~50.20%, 1: ~49.80% |
- Samples:
hypothesis | premise | label |
---|---|---|
bóng đá Las Palmas vs Girona, 23h30 ngày 26/10: Trừng phạt chủ nhà. | Las Palmas vs Girona | 1 |
Seattle Sounders vs Houston Dynamo 9h30 ngày 29/9 (Nhà nghề Mỹ 2024). | dự đoán Seattle Sounders vs Houston Dynamo | 1 |
bóng đá Tây Ban Nha vs Đan Mạch, 01h45 ngày 13/10: Khuất phục ‘lính chì’. | bóng đá Tây Ban Nha vs Đan Mạch | 1 |
- Loss: CoSENTLoss with these parameters: {"scale": 20.0, "similarity_fct": "pairwise_cos_sim"}
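CoSENT is a pairwise ranking objective: within a batch it pushes the cosine similarity of every positive pair above that of every negative pair via a log-sum-exp, scaled by scale=20. A rough sketch of the core computation as described in the cited kexue.fm post (simplified; the hypothetical pos_sims and neg_sims hold the pairwise_cos_sim values of the label-1 and label-0 pairs in a batch):

```python
import torch

def cosent_loss(pos_sims: torch.Tensor, neg_sims: torch.Tensor,
                scale: float = 20.0) -> torch.Tensor:
    # scale * (neg - pos) for every (negative, positive) combination
    diffs = scale * (neg_sims.unsqueeze(1) - pos_sims.unsqueeze(0))
    # log(1 + sum(exp(diffs))): the "1" enters as an extra exp(0) term
    return torch.logsumexp(torch.cat([torch.zeros(1), diffs.flatten()]), dim=0)
```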
Evaluation Dataset
csv
- Dataset: csv
- Size: 19,598 evaluation samples
- Columns: hypothesis, premise, and label
- Approximate statistics based on the first 1000 samples:

 | hypothesis | premise | label |
---|---|---|---|
type | string | string | int |
details | min: 12 tokens, mean: 27.15 tokens, max: 40 tokens | min: 4 tokens, mean: 9.55 tokens, max: 40 tokens | 0: ~51.40%, 1: ~48.60% |
- Samples:
hypothesis | premise | label |
---|---|---|
Hải Phòng vs CAHN (19h15 ngày 15/9): Điểm tựa sân nhà. | kết quả Hải Phòng vs CAHN | 1 |
Kuwait vs Jordan 1h15 ngày 20/11 (Vòng loại World Cup 2026). | Kuwait vs Iraq | 0 |
bóng đá Parma vs Empoli 18h30 ngày 27/10 (Serie A 2024/25). | nhận định Parma vs Empoli | 1 |
- Loss: CoSENTLoss with these parameters: {"scale": 20.0, "similarity_fct": "pairwise_cos_sim"}
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- warmup_ratio: 0.1
- fp16: True
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
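For reference, here is a hedged sketch of a training script consistent with these settings. The CSV file paths are assumptions; the files are expected to provide the hypothesis, premise, and label columns described above.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# Hypothetical file names; columns: hypothesis, premise, label
data = load_dataset("csv", data_files={"train": "train.csv", "eval": "eval.csv"})

args = SentenceTransformerTrainingArguments(
    output_dir="MiniLM6-v2-sport",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=data["train"],
    eval_dataset=data["eval"],
    loss=CoSENTLoss(model, scale=20.0),
)
trainer.train()
```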
Training Logs
Epoch | Step | Training Loss | Validation Loss | sport_query_title_dev_cosine_ap |
---|---|---|---|---|
1.0 | 1103 | - | 0.1376 | 0.9991 |
1.4506 | 1600 | 0.3994 | - | - |
2.0 | 2206 | - | 0.0693 | 0.9994 |
2.9012 | 3200 | 0.0442 | - | - |
3.0 | 3309 | - | 0.0534 | 0.9996 |
Framework Versions
- Python: 3.11.7
- Sentence Transformers: 3.3.1
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
CoSENTLoss
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}