bert-tiny fine-tuned on pair_similarity_new_1

This is a sentence-transformers model fine-tuned from prajjwal1/bert-tiny on the pair_similarity_new_1 dataset. It maps sentences and paragraphs to a 128-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: prajjwal1/bert-tiny
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 128 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: pair_similarity_new_1
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 128, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
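
The Pooling module above applies mean pooling over token embeddings. For reference, here is a minimal sketch of the equivalent computation with transformers directly; it assumes the underlying BERT weights and tokenizer can be loaded from this repository, as is typical for Sentence Transformers checkpoints:

import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: the repo exposes the underlying BERT weights and tokenizer.
tokenizer = AutoTokenizer.from_pretrained("Tien09/tiny_bert_ft_sim_score_1")
bert = AutoModel.from_pretrained("Tien09/tiny_bert_ft_sim_score_1")

encoded = tokenizer(["example sentence"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (batch, seq_len, 128)

# Mean pooling: average the token embeddings, masking out padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 128])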

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score_1")
# Run inference
sentences = [
    '2 Cyberse monsters If this card is Link Summoned: You can add 1 "Cynet Fusion" from your Deck to your hand. If a monster(s) is Special Summoned to a zone(s) this card points to (except during the Damage Step): You can target 1 Level 4 or lower Cyberse monster in your GY; Special Summon it, but negate its effects, also you cannot Special Summon monsters from the Extra Deck for the rest of this turn, except Fusion Monsters. You can only use each effect of "Clock Spartoi" once per turn.',
    'You can banish 1 "Virtual World" card from your GY, then target 1 face-up monster on the field; negate its effects until the end of this turn (even if this card leaves the field). You can banish this card from your GY; add 1 "Virtual World" monster from your Deck to your hand, then send 1 card from your hand to the GY. You can only use each effect of "Virtual World Gate - Qinglong" once per turn.',
    'Remove from play 1 "Assault Mode Activate" from your GY. Destroy all monsters you control and Special Summon 1 "/Assault Mode" monster from your GY, ignoring its Summoning conditions. Its effect(s) is negated, and it cannot be Tributed. If it is removed from the field, remove it from play.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 128]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
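
The same embeddings also support semantic search, i.e. ranking a corpus of effect texts against a query. A minimal sketch using the library's util.semantic_search; the query and corpus strings here are illustrative, not from the training data:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score_1")

# Illustrative corpus of effect texts; any list of strings works.
corpus = [
    'You can target 1 monster in your GY; Special Summon it.',
    'Destroy all monsters your opponent controls.',
    'Draw 2 cards, then discard 1 card.',
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode('Special Summon 1 monster from your Graveyard.', convert_to_tensor=True)

# Rank the corpus by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")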

Training Details

Training Dataset

pair_similarity_new_1

  • Dataset: pair_similarity_new_1 at c49380e
  • Size: 8,959 training samples
  • Columns: effect_text, score, and effect_text2
  • Approximate statistics based on the first 1000 samples:
    • effect_text (string): min 9 tokens, mean 73.57 tokens, max 204 tokens
    • score (float): min 0.0, mean 0.74, max 1.0
    • effect_text2 (string): min 4 tokens, mean 73.05 tokens, max 181 tokens
  • Samples:
    • effect_text: When your opponent's monster attacks a face-up Level 4 or lower Toon Monster on your side of the field, you can make the attack a direct attack to your Life Points.
      score: 0.0
      effect_text2: During either player's Main Phase: Special Summon this card as a Normal Monster (Reptile-Type/EARTH/Level 4/ATK 1600/DEF 1800). (This card is also still a Trap Card.)
    • effect_text: When your opponent Special Summons a monster, you can discard 1 card to Special Summon this card from your hand. Your opponent cannot remove cards from play.
      score: 1.0
      effect_text2: Activate this card by discarding 1 monster, then target 1 monster in your GY whose Level is lower than the discarded monster's original Level; Special Summon it and equip it with this card. The equipped monster has its effects negated. You can only activate 1 "Overdone Burial" per turn.
    • effect_text: "Mystical Elf" + "Curtain of the Dark Ones"
      score: 0.0
      effect_text2: A destructive machine discovered in the Ruins of the Ancients.
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
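
As a reproduction aid, here is a hedged sketch of how CoSENTLoss and the column layout above plug into the sentence-transformers v3 training API. The dataset path is a placeholder (the card names pair_similarity_new_1 but not its Hub location), and this is not the original training script:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CoSENTLoss

# Same base checkpoint; mean pooling over bert-tiny's 128-dim hidden states
# reproduces the architecture shown above.
model = SentenceTransformer("prajjwal1/bert-tiny")

# Placeholder path: substitute the actual Hub location of pair_similarity_new_1.
train_dataset = load_dataset("path/to/pair_similarity_new_1", split="train")

# CoSENTLoss with the listed parameters; pairwise_cos_sim and scale=20.0 are
# also the library defaults. It consumes two text columns plus a float score.
loss = CoSENTLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()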
    

Evaluation Dataset

pair_similarity_new_1

  • Dataset: pair_similarity_new_1 at c49380e
  • Size: 1,920 evaluation samples
  • Columns: effect_text, score, and effect_text2
  • Approximate statistics based on the first 1000 samples:
    • effect_text (string): min 6 tokens, mean 72.29 tokens, max 190 tokens
    • score (float): min 0.0, mean 0.74, max 1.0
    • effect_text2 (string): min 5 tokens, mean 72.2 tokens, max 185 tokens
  • Samples:
    • effect_text: 2+ Level 4 monsters This Xyz Summoned card gains 500 ATK x the total Link Rating of Link Monsters linked to this card. You can detach 2 materials from this card, then target 1 Link-4 Cyberse Link Monster in your GY; Special Summon it to your field so it points to this card, also you cannot Special Summon other monsters or attack directly for the rest of this turn.
      score: 1.0
      effect_text2: 3 Level 4 monsters Once per turn, you can also Xyz Summon "Zoodiac Tigermortar" by using 1 "Zoodiac" monster you control with a different name as Xyz Material. (If you used an Xyz Monster, any Xyz Materials attached to it also become Xyz Materials on this card.) This card gains ATK and DEF equal to the ATK and DEF of all "Zoodiac" monsters attached to it as Materials. Once per turn: You can detach 1 Xyz Material from this card, then target 1 Xyz Monster you control and 1 "Zoodiac" monster in your GY; attach that "Zoodiac" monster to that Xyz Monster as Xyz Material.
    • effect_text: 1 Tuner + 1 or more non-Tuner Pendulum Monsters Once per turn: You can target 1 Pendulum Monster on the field or 1 card in the Pendulum Zone; destroy it, and if you do, shuffle 1 card on the field into the Deck. Once per turn: You can Special Summon 1 "Dracoslayer" monster from your Deck in Defense Position, but it cannot be used as a Synchro Material for a Summon.
      score: 0.5
      effect_text2: You can Ritual Summon this card with a "Recipe" card. If this card is Special Summoned: You can target 1 Spell/Trap on the field; destroy it. When a card or effect is activated that targets this card on the field, or when this card is targeted for an attack (Quick Effect): You can Tribute this card and 1 Attack Position monster on either field, and if you do, Special Summon 1 Level 3 or 4 "Nouvelles" Ritual Monster from your hand or Deck. You can only use each effect of "Confiras de Nouvelles" once per turn.
    • effect_text: If you control an Illusion or Spellcaster monster: Add 1 "White Forest" monster from your Deck to your hand. If this card is sent to the GY to activate a monster effect: You can Set this card. You can only use each effect of "Tales of the White Forest" once per turn.
      score: 1.0
      effect_text2: If you control no monsters, you can Special Summon this card (from your hand). You can only use each of the following effects of "Kashtira Fenrir" once per turn. During your Main Phase: You can add 1 "Kashtira" monster from your Deck to your hand. When this card declares an attack, or if your opponent activates a monster effect (except during the Damage Step): You can target 1 face-up card your opponent controls; banish it, face-down.
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
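
A sketch of how these values map onto SentenceTransformerTrainingArguments; output_dir is a placeholder, everything else mirrors the list above:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)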

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss
0.1786 100 4.6579 4.3287
0.3571 200 4.3378 4.2222
0.5357 300 4.2299 4.1919
0.7143 400 4.2124 4.1616
0.8929 500 4.1399 4.1370
1.0714 600 4.2017 4.1200
1.25 700 4.1343 4.1058
1.4286 800 4.0805 4.1072
1.6071 900 4.0843 4.0773
1.7857 1000 4.01 4.0771
1.9643 1100 4.0615 4.0627
2.1429 1200 4.0847 4.0468
2.3214 1300 3.9798 4.0659
2.5 1400 3.9663 4.0551
2.6786 1500 3.9625 4.0335
2.8571 1600 3.9096 4.0306
3.0357 1700 4.0127 4.0105
3.2143 1800 3.9753 4.0077
3.3929 1900 3.8669 4.0188
3.5714 2000 3.8983 4.0174
3.75 2100 3.9077 4.0025
3.9286 2200 3.8346 4.0218
4.1071 2300 3.9618 3.9921
4.2857 2400 3.8631 3.9981
4.4643 2500 3.8456 4.0014
4.6429 2600 3.8655 3.9976
4.8214 2700 3.8248 4.0031
5.0 2800 3.8935 4.0016

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}