SentenceTransformer based on nreimers/MiniLM-L6-H384-uncased
This is a sentence-transformers model finetuned from nreimers/MiniLM-L6-H384-uncased. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: nreimers/MiniLM-L6-H384-uncased
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
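The Pooling block uses mean pooling (pooling_mode_mean_tokens: True): the sentence embedding is the average of the token embeddings over non-padding positions. A minimal sketch of that computation (function name and shapes are illustrative, not part of the library API):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over non-padding positions (illustrative sketch)."""
    # token_embeddings: (batch, seq_len, 384); attention_mask: (batch, seq_len) of 0/1
    mask = attention_mask.unsqueeze(-1).float()       # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)     # zero out padding, sum over tokens
    count = mask.sum(dim=1).clamp(min=1e-9)           # number of real tokens per sentence
    return summed / count                             # (batch, 384) sentence embeddings
```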
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sarwin/rp-embed")
# Run inference
sentences = [
'Computationally efficient fixed complexity LLL algorithm for lattice-reduction-aided multiple-input–multiple-output precoding',
'In multiple-input–multiple-output broadcast channels, lattice reduction (LR) preprocessing technique can significantly improve the precoding performance. Among the existing LR algorithms, the fixed complexity Lenstra–Lenstra–Lovasz (fcLLL) algorithm applying limited number of LLL loops is suitable for the real-time communication system. However, fcLLL algorithm suffers from higher average complexity. Aiming at this problem, a computationally efficient fcLLL (CE-fcLLL) algorithm for LR-aided (LRA) precoding is developed in this study. First, the authors analyse the impact of fcLLL algorithm on the signal-to-noise ratio performance of LRA precoding by a power factor (PF) which is defined to measure the relation of reduced basis and transmit power of LRA precoding. Then, they propose a CE-fcLLL algorithm by designing a new LLL loop and introducing new early termination conditions to reduce redundant and inefficient LR operation in fcLLL algorithm. Finally, they define a PF loss factor to optimise the PF threshold and the number of LLL loops, which can lead to a performance-complexity tradeoff. Simulation results show that the proposed algorithm for LRA precoding can achieve better bit-error-rate performance than the fcLLL algorithm with remarkable complexity savings in the same upper complexity bound.',
'ABSTRACTThe success of the open innovation (OI) paradigm is still debated and literature is searching for its determinants. Although firms’ internal social context is crucial to explain the success or failure of OI practices, such context is still poorly investigated. The aim of the paper is to analyse whether internal social capital (SC), intended as employees’ propensity to interact and work in groups in order to solve innovation issues, mediates the relationship between OI practices and innovation ambidexterity (IA). Results, based on a survey research developed in Finland, Italy and Sweden, suggest that collaborations with different typologies of partners (scientific and business) achieve good results in terms of IA, through the partial mediation of the internal SC.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Training Details
Training Dataset
Unnamed Dataset
- Size: 730,454 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 1000 samples:

 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 4 tokens, mean: 15.97 tokens, max: 48 tokens | min: 18 tokens, mean: 193.95 tokens, max: 512 tokens |
- Samples:

sentence_0 | sentence_1 |
---|---|
E-government in a corporatist, communitarian society: the case of Singapore | Singapore was one of the early adopters of e-government initiatives in keeping with its status as one of the few developed Asian countries and has continued to be at the forefront of developing e-government structures. While crediting the city-state for the speed of its development, observers have critiqued that the republic limits pluralism, which directly affects e-governance initiatives. This article draws on two recent government initiatives, the notions of corporatism and communitarianism and the concept of symmetry and asymmetry in communication to present the e-government and e-governance structures in Singapore. Four factors are presented as critical for the creation of a successful e-government infrastructure: an educated citizenry; adequate technical infrastructures; offering e-services that citizens need; and commitment from top government officials to support the necessary changes with financial resources and leadership. However, to have meaningful e-governance there has to be political plural... |
Multicast routing representation in ad hoc networks using fuzzy Petri nets | In an ad hoc network, each mobile node plays the role of a router and relays packets to final destinations. The network topology of an ad hoc network changes frequently and unpredictable, so that the routing and multicast become extremely challenging. We describe the multicast routing representation using fuzzy Petri net model with the concept of immediately reachable set in wireless ad hoc networks which all nodes equipped with GPS unit. It allows structured representation of network topology, and has a fuzzy reasoning algorithm for finding multicast tree and improves the efficiency of the ad hoc network routing scheme. Therefore when a packet is to be multicast to a group by a multicast source, a heuristic algorithm is used to compute the multicast tree based on the local network topology with a multicast source. Finally, the simulation shows that the percentage of the improvement is more than 15% when compared the IRS method with the original method. |
A Prognosis Tool Based on Fuzzy Anthropometric and Questionnaire Data for Obstructive Sleep Apnea Severity | Obstructive sleep apnea (OSA) are linked to the augmented risk of morbidity and mortality. Although polysomnography is considered a well-established method for diagnosing OSA, it suffers the weakness of time consuming and labor intensive, and requires doctors and attending personnel to conduct an overnight evaluation in sleep laboratories with dedicated systems. This study aims at proposing an efficient diagnosis approach for OSA on the basis of anthropometric and questionnaire data. The proposed approach integrates fuzzy set theory and decision tree to predict OSA patterns. A total of 3343 subjects who were referred for clinical suspicion of OSA (eventually 2869 confirmed with OSA and 474 otherwise) were collected, and then classified by the degree of severity. According to an assessment of experiment results on g-means, our proposed method outperforms other methods such as linear regression, decision tree, back propagation neural network, support vector machine, and learning vector quantization. The proposed method is highly viable and capable of detecting the severity of OSA. It can assist doctors in pre-diagnosis of OSA before running the formal PSG test, thereby enabling the more effective use of medical resources. |
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
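Concretely, such (title, abstract) pairs can be wrapped in a datasets.Dataset and paired with the loss above. A minimal sketch, assuming training starts from the base model; the pair shown is taken from the samples, scale=20.0 matches the listed parameters, and cosine similarity is the library default for this loss:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses

# Illustrative (title, abstract) pair in the same shape as the samples above
train_dataset = Dataset.from_list([
    {"sentence_0": "Multicast routing representation in ad hoc networks using fuzzy Petri nets",
     "sentence_1": "In an ad hoc network, each mobile node plays the role of a router ..."},
])

model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")  # the base model
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)   # in-batch negatives, cos_sim
```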
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 1
- multi_dataset_batch_sampler: round_robin
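Continuing the sketch from the loss section, these non-default values map onto SentenceTransformerTrainingArguments roughly as follows (output_dir is an assumption, not stated in the card):

```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumed; not stated in the card
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```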
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.0110 | 500 | 0.4667 |
0.0219 | 1000 | 0.179 |
0.0329 | 1500 | 0.1543 |
0.0438 | 2000 | 0.1284 |
0.0548 | 2500 | 0.1123 |
0.0657 | 3000 | 0.101 |
0.0767 | 3500 | 0.0989 |
0.0876 | 4000 | 0.0941 |
0.0986 | 4500 | 0.0827 |
0.1095 | 5000 | 0.0874 |
0.1205 | 5500 | 0.0825 |
0.1314 | 6000 | 0.0788 |
0.1424 | 6500 | 0.0728 |
0.1533 | 7000 | 0.0768 |
0.1643 | 7500 | 0.0707 |
0.1752 | 8000 | 0.0691 |
0.1862 | 8500 | 0.0666 |
0.1971 | 9000 | 0.0644 |
0.2081 | 9500 | 0.0615 |
0.2190 | 10000 | 0.0651 |
0.2300 | 10500 | 0.0604 |
0.2409 | 11000 | 0.0595 |
0.2519 | 11500 | 0.0622 |
0.2628 | 12000 | 0.0537 |
0.2738 | 12500 | 0.0564 |
0.2848 | 13000 | 0.0622 |
0.2957 | 13500 | 0.052 |
0.3067 | 14000 | 0.0475 |
0.3176 | 14500 | 0.0569 |
0.3286 | 15000 | 0.0511 |
0.3395 | 15500 | 0.0476 |
0.3505 | 16000 | 0.0498 |
0.3614 | 16500 | 0.0527 |
0.3724 | 17000 | 0.0556 |
0.3833 | 17500 | 0.0495 |
0.3943 | 18000 | 0.0482 |
0.4052 | 18500 | 0.0556 |
0.4162 | 19000 | 0.0454 |
0.4271 | 19500 | 0.0452 |
0.4381 | 20000 | 0.0431 |
0.4490 | 20500 | 0.0462 |
0.4600 | 21000 | 0.0473 |
0.4709 | 21500 | 0.0387 |
0.4819 | 22000 | 0.041 |
0.4928 | 22500 | 0.0472 |
0.5038 | 23000 | 0.0435 |
0.5147 | 23500 | 0.0419 |
0.5257 | 24000 | 0.0395 |
0.5366 | 24500 | 0.043 |
0.5476 | 25000 | 0.0419 |
0.5585 | 25500 | 0.0394 |
0.5695 | 26000 | 0.0403 |
0.5805 | 26500 | 0.0436 |
0.5914 | 27000 | 0.0414 |
0.6024 | 27500 | 0.0418 |
0.6133 | 28000 | 0.0411 |
0.6243 | 28500 | 0.035 |
0.6352 | 29000 | 0.0397 |
0.6462 | 29500 | 0.0392 |
0.6571 | 30000 | 0.0373 |
0.6681 | 30500 | 0.0373 |
0.6790 | 31000 | 0.0363 |
0.6900 | 31500 | 0.0418 |
0.7009 | 32000 | 0.0377 |
0.7119 | 32500 | 0.0321 |
0.7228 | 33000 | 0.0331 |
0.7338 | 33500 | 0.0373 |
0.7447 | 34000 | 0.0342 |
0.7557 | 34500 | 0.0335 |
0.7666 | 35000 | 0.0323 |
0.7776 | 35500 | 0.0362 |
0.7885 | 36000 | 0.0376 |
0.7995 | 36500 | 0.0364 |
0.8104 | 37000 | 0.0396 |
0.8214 | 37500 | 0.0321 |
0.8323 | 38000 | 0.0358 |
0.8433 | 38500 | 0.0299 |
0.8543 | 39000 | 0.0304 |
0.8652 | 39500 | 0.0317 |
0.8762 | 40000 | 0.0334 |
0.8871 | 40500 | 0.0331 |
0.8981 | 41000 | 0.0326 |
0.9090 | 41500 | 0.0325 |
0.9200 | 42000 | 0.0321 |
0.9309 | 42500 | 0.0316 |
0.9419 | 43000 | 0.0321 |
0.9528 | 43500 | 0.0353 |
0.9638 | 44000 | 0.0315 |
0.9747 | 44500 | 0.0326 |
0.9857 | 45000 | 0.031 |
0.9966 | 45500 | 0.0315 |
Framework Versions
- Python: 3.12.2
- Sentence Transformers: 3.0.1
- Transformers: 4.42.3
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
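To approximate this environment, the versions above can be pinned (a sketch; the matching PyTorch 2.3.1+cu121 build is typically installed separately for the right CUDA wheel):

pip install sentence-transformers==3.0.1 transformers==4.42.3 accelerate==0.32.1 datasets==2.20.0 tokenizers==0.19.1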
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}