---
base_model: nreimers/MiniLM-L6-H384-uncased
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:730454
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: Markov chains and performance comparison of switched diversity systems
    sentences:
      - >-
        An algorithm for speaker's lip segmentation and features extraction is
        presented. A color video sequence of speaker's face is acquired, under
        natural lighting conditions and without any particular make-up. First, a
        logarithmic color transform is performed from the RGB to HI (hue,
        intensity) color space. Second, a statistical approach using Markov
        random field modeling determines the red hue prevailing region and
        motion in a spatiotemporal neighborhood. Third, the final label field is
        used to extract ROI (region of interest) and geometrical features.
      - >-
        There are about 90 million high performance mobile phones used in Japan.
        We are now planning to develop new applications of mobile phone to
        support children and elder and disabled people who are out of scope of
        major mobile phone application based on their requirements. We have a
        responsibility to extend the application filed of mobile phone as a
        leading country of ubiquitous life. This paper discusses possibilities
        to realize mobile ad hoc networks using Bluetooth functions equipped on
        a mobile phone. Hierarchical mobile ad hoc networks using Bluetooth in a
        mobile phone are firstly developed as a test platform. The test platform
        proves the possibility of developing mobile ad hoc network by mobile
        phone built-in Bluetooth functions. We demonstrate their capabilities by
        showing results of implementing game applications on the test platform.
        The paper also describes some example applications using mobile ad hoc
        network technologies, which include a location tracking system for
        children on the way to a school and an alarm system for hearing impaired
        people
      - >-
        Switch-and-stay combining (SSC) diversity systems have the advantage of
        offering one of the least complex solutions to mitigating the effect of
        fading. In this paper, we present a Markov chain-based analytical
        framework for the performance analysis of various switching strategies
        used in conjunction with SSC systems. The resulting expressions are
        quite general, and are applicable to dual-branch diversity systems
        operating over a variety of correlated and/or unbalanced fading
        channels. The mathematical formalism is illustrated by some selected
        numerical examples, along with their discussion and interpretation. As a
        result, this paper presents a thorough comparison and highlights the
        main differences and tradeoffs between the various SSC switching
        strategies.
  - source_sentence: >-
      Effect of age on the failure properties of human meniscus: High-speed
      strain mapping of tissue tears.
    sentences:
      - "The knee meniscus is a soft fibrous tissue with a high incidence of injury in older populations. The objective of this study was to determine the effect of age on the failure behavior of human knee meniscus when applying uniaxial tensile loads parallel or perpendicular to the primary circumferential fiber orientation. Two age groups were tested: under 40 and over 65\_years old. We paired high-speed video with digital image correlation to quantify for the first time the planar strains occurring in the tear region at precise time points, including at ultimate tensile stress, when the tissue begins losing load-bearing capacity. On average, older meniscus specimens loaded parallel to the fiber axis had approximately one-third less ultimate tensile strain and absorbed 60% less energy to failure within the tear region than younger specimens (p\_<\_0.05). Older specimens also had significantly reduced strength and material toughness when loaded perpendicular to the fibers (p\_<\_0.05). These age-related changes indicate a loss of collagen fiber extensibility and weakening of the non-fibrous matrix with age. In addition, we found that when loaded perpendicular to the circumferential fibers, tears propagated near the planes of maximum tensile stress and strain. Whereas when loaded parallel to the circumferential fibers, tears propagated oblique to the loading axis, closer to the planes of maximum shear stress and strain. Our experimental results can assist the selection of valid failure criteria for meniscus, and provide insight into the effect of age on the failure mechanisms of soft fibrous tissue."
      - >-
        Objectives: We aimed to identify key demographic risk factors for
        hospital attendance with COVID-19 infection. Design: Community survey
        Setting: The COVID Symptom Tracker mobile application co-developed by
        physicians and scientists at Kings College London, Massachusetts General
        Hospital, Boston and Zoe Global Limited was launched in the UK and US on
        24th and 29th March 2020 respectively. It captured self-reported
        information related to COVID-19 symptoms and testing. Participants:
        2,618,948 users of the COVID Symptom Tracker App. UK (95.7%) and US
        (4.3%) population. Data cut-off for this analysis was 21st April 2020.
        Main outcome measures: Visit to hospital and for those who attended
        hospital, the need for respiratory support in three subgroups (i)
        self-reported COVID-19 infection with classical symptoms (SR-COVID-19),
        (ii) self-reported positive COVID-19 test results (T-COVID-19), and
        (iii) imputed/predicted COVID-19 infection based on symptomatology
        (I-COVID-19). Multivariate logistic regressions for each outcome and
        each subgroup were adjusted for age and gender, with sensitivity
        analyses adjusted for comorbidities. Classical symptoms were defined as
        high fever and persistent cough for several days. Results: Older age and
        all comorbidities tested were found to be associated with increased odds
        of requiring hospital care for COVID-19. Obesity (BMI >30) predicted
        hospital care in all models, with odds ratios (OR) varying from 1.20
        [1.11; 1.31] to 1.40 [1.23; 1.60] across population groups. Pre-existing
        lung disease and diabetes were consistently found to be associated with
        hospital visit with a maximum OR of 1.79 [1.64,1.95] and 1.72 [1.27;
        2.31]) respectively. Findings were similar when assessing the need for
        respiratory support, for which age and male gender played an additional
        role. Conclusions: Being older, obese, diabetic or suffering from
        pre-existing lung, heart or renal disease placed participants at
        increased risk of visiting hospital with COVID-19. It is of utmost
        importance for governments and the scientific and medical communities to
        work together to find evidence-based means of protecting those deemed
        most vulnerable from COVID-19. Trial registration: The App Ethics have
        been approved by KCL ethics Committee REMAS ID 18210, review reference
        LRS-19/20-18210
      - >-
        Social networking sites (SNS) have growing popularity and several sites
        compete with each other. This study examines three models to determine
        how competition between Facebook and other social networking sites may
        affect continuance intention on Facebook. The first model examines the
        relationship between having an account on four different SNSs and its
        impact on Facebook. Twitter users have lower intentions to continue
        using Facebook, Instagram users have higher intentions. The second model
        examines attitudes toward specific alternatives and found that users who
        felt alternatives were attractive have lower intentions to continue
        using Facebook. The third model examined general attitudes about
        alternative attractiveness and attitudes toward switching, this model
        explained a moderate to substantial amount of the variance in
        continuance intention. This study makes important contributions to both
        research and practice.
  - source_sentence: Bayesian duration modeling and learning for speech recognition
    sentences:
      - >-
        Measuring solar irradiance allows for direct maximization of the
        efficiency in photovoltaic power plants. However, devices for solar
        irradiance sensing, such as pyranometers and pyrheliometers, are
        expensive and difficult to calibrate and thus seldom utilized in
        photovoltaic power plants. Indirect methods are instead implemented in
        order to maximize efficiency. This paper proposes a novel approach for
        solar irradiance measurement based on neural networks, which may, in
        turn, be used to maximize efficiency directly. An initial estimate
        suggests the cost of the sensor proposed herein may be price competitive
        with other inexpensive solutions available in the market, making the
        device a good candidate for large deployment in photovoltaic power
        plants. The proposed sensor is implemented through a photovoltaic cell,
        a temperature sensor, and a low-cost microcontroller. The use of a
        microcontroller allows for easy calibration, updates, and enhancement by
        simply adding code libraries. Furthermore, it can be interfaced via
        standard communication means with other control devices, integrated into
        control schemes, and remote-controlled through its embedded web server.
        The proposed approach is validated through experimental prototyping and
        compared against a commercial device.
      - >-
        Form is a framework used to construct tools for analyzing the runtime
        behavior of standalone and distributed software systems. The
        architecture of Form is based on the event broadcast and pipe and filter
        styles. In the implementation of this architecture, execution profiles
        may be generated from standalone or distributed systems. The profile
        data is subsequently broadcast by Form to one or more views. Each view
        is a tool used to support program understanding or other software
        development activities. The authors describe the Form architecture and
        implementation, as well as a tool that was built using Form. This tool
        profiles Java-based distributed systems and generates UML sequence
        diagrams to describe their execution. We also present a case study that
        shows how this tool was used to extract sequence diagrams from a
        three-tiered EJB-based distributed application.
      - >-
        We present Bayesian duration modeling and learning for speech
        recognition under nonstationary speaking rates and noise conditions. In
        this study, the Gaussian, Poisson and gamma distributions are
        investigated, to characterize duration models. The maximum a posteriori
        (MAP) estimate of the gamma duration model is developed. To exploit the
        sequential learning, we adopt the Poisson duration model, incorporated
        with gamma prior density, which belongs to the conjugate prior family.
        When the adaptation data are sequentially observed, the gamma posterior
        density is produced for twofold advantages. One is to determine the
        optimal quasi-Bayes (QB) duration parameter, which can be merged in
        HMM's for speech recognition. The other one is to build the updating
        mechanism of gamma prior statistics for sequential learning. An
        expectation-maximization algorithm is applied to fulfill parameter
        estimation. In the experiments, the proposed Bayesian approaches
        significantly improve the speech recognition performance of Mandarin
        broadcast news. Batch and sequential learning are investigated for MAP
        and QB duration models, respectively.
  - source_sentence: Configurable security for scavenged storage systems
    sentences:
      - >-
        Scavenged storage systems harness unused disk space from individual
        workstations the same way idle CPU cycles are harnessed by desktop grid
        applications like Seti@Home. These systems provide a promising low cost,
        high-performance storage solution in certain high-end computing
        scenarios. However, selecting the security level and designing the
        security mechanisms for such systems is challenging as scavenging idle
        storage opens the door for security threats absent in traditional
        storage systems that use dedicated nodes under a single administrative
        domain. Moreover, increased security often comes at the price of
        performance and scalability. This paper develops a general threat model
        for systems that use scavenged storage, presents the design of a
        protocol that addresses these threats and is optimized for throughput,
        and evaluates the overheads brought by the new security protocol when
        configured to provide a number of different security properties.
      - >-
        Histone methyltransferases are involved in many important biological
        processes, and abnormalities in these enzymes are associated with
        tumorigenesis and progression. Disruptor of telomeric silencing 1-like
        (DOT1L), a key hub in histone lysine methyltransferases, has been
        reported to play an important role in the processes of mixed-lineage
        leukemia (MLL)-rearranged leukemias and validated to be a potential
        therapeutic target. In this study, we identified a novel DOT1L
        inhibitor, DC_L115 (CAS no. 1163729-79-0), by combining structure-based
        virtual screening with biochemical analyses. This potent inhibitor
        DC_L115 shows high inhibitory activity toward DOT1L (IC50 = 1.5 μM).
        Through a process of surface plasmon resonance-based binding assays,
        DC_L115 was founded to bind to DOT1L with a binding affinity of 0.6 μM
        in vitro. Moreover, this compound selectively inhibits MLL-rearranged
        cell proliferation with an IC50 value of 37.1 μM. We further predicted
        the binding modes of DC_L115 through molecular docking anal...
      - >-
        Employing channel state information at the network layer, efficient
        routing protocols for equal-power and optimal-power allocation in a
        multihop network in fading are proposed. The end-to-end outage
        probability from source to destination is used as the optimization
        criterion. The problem of finding the optimal route is investigated
        under either known mean channel state information (CSI) or known
        instantaneous CSI. The analysis shows that the proposed routing strategy
        achieves full diversity order, equal to the total number of nodes in the
        network excluding the destination, only when instantaneous CSI is known
        and used. The optimal routing algorithm requires a centralized
        exhaustive search which leads to an exponential complexity, which is
        infeasible for large networks. An algorithm of polynomial complexity for
        a centralized environment is developed by reducing the search space. A
        distributed approach based on the Bellman-Ford routing algorithm is
        proposed which achieves a good implementation complexity-performance
        trade-off.
  - source_sentence: >-
      Computationally efficient fixed complexity LLL algorithm for
      lattice-reduction-aided multiple-input–multiple-output precoding
    sentences:
      - >-
        ABSTRACTThe success of the open innovation (OI) paradigm is still
        debated and literature is searching for its determinants. Although
        firms’ internal social context is crucial to explain the success or
        failure of OI practices, such context is still poorly investigated. The
        aim of the paper is to analyse whether internal social capital (SC),
        intended as employees’ propensity to interact and work in groups in
        order to solve innovation issues, mediates the relationship between OI
        practices and innovation ambidexterity (IA). Results, based on a survey
        research developed in Finland, Italy and Sweden, suggest that
        collaborations with different typologies of partners (scientific and
        business) achieve good results in terms of IA, through the partial
        mediation of the internal SC.
      - >-
        In multiple-input–multiple-output broadcast channels, lattice reduction
        (LR) preprocessing technique can significantly improve the precoding
        performance. Among the existing LR algorithms, the fixed complexity
        Lenstra–Lenstra–Lovasz (fcLLL) algorithm applying limited number of LLL
        loops is suitable for the real-time communication system. However, fcLLL
        algorithm suffers from higher average complexity. Aiming at this
        problem, a computationally efficient fcLLL (CE-fcLLL) algorithm for
        LR-aided (LRA) precoding is developed in this study. First, the authors
        analyse the impact of fcLLL algorithm on the signal-to-noise ratio
        performance of LRA precoding by a power factor (PF) which is defined to
        measure the relation of reduced basis and transmit power of LRA
        precoding. Then, they propose a CE-fcLLL algorithm by designing a new
        LLL loop and introducing new early termination conditions to reduce
        redundant and inefficient LR operation in fcLLL algorithm. Finally, they
        define a PF loss factor to optimise the PF threshold and the number of
        LLL loops, which can lead to a performance-complexity tradeoff.
        Simulation results show that the proposed algorithm for LRA precoding
        can achieve better bit-error-rate performance than the fcLLL algorithm
        with remarkable complexity savings in the same upper complexity bound.
      - >-
        While multistage switching networks for vector multiprocessors have been
        studied extensively, detailed evaluations of their performance are rare.
        Indeed, analytical models, simulations with pseudo-synthetic loads,
        studies focused on average-value parameters, and measurements of
        networks disconnected from the machine all provide limited information.
        In this paper, instead, we present an in-depth empirical analysis of a
        multistage switching network in a realistic setting: we use hardware
        probes to examine the performance of the omega network of the Cedar
        shared-memory machine executing real applications. The machine is
        configured with 16 vector processors. The analysis suggests that the
        performance of multistage switching networks is limited by traffic
        non-uniformities. We identify two major non-uniformities that degrade
        Cedar's performance and are likely to slow down other networks too. The
        first one is the contention caused by the return messages in a vector
        access as they converge from the memories to one processor port. This
        traffic convergence penalizes vector reads and, more importantly, causes
        tree saturation. The second non-uniformity is the uneven contention
        delays induced by even a relatively fair scheme to resolve message
        collisions. Based on our observations, we argue that intuitive
        optimizations for multistage switching networks may not be
        cost-effective. Instead, we suggest changes to increase the network
        bandwidth at the root of the traffic convergence tree and to delay
        traffic convergence up until the final stages of the network.
---

SentenceTransformer based on nreimers/MiniLM-L6-H384-uncased

This is a sentence-transformers model finetuned from nreimers/MiniLM-L6-H384-uncased. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nreimers/MiniLM-L6-H384-uncased
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
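
These properties can also be checked programmatically. A minimal sketch, assuming the library is installed; "sentence_transformers_model_id" is the same placeholder used in the Usage section below and should be replaced with this model's Hub repository id:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder repository id
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # cosine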

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
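
For reference, the Pooling module above averages token embeddings over non-padding tokens (pooling_mode_mean_tokens: True). The following is a minimal sketch of that step using the plain transformers API; it loads the base checkpoint nreimers/MiniLM-L6-H384-uncased for illustration only, not the fine-tuned weights stored in this repository:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nreimers/MiniLM-L6-H384-uncased")
encoder = AutoModel.from_pretrained("nreimers/MiniLM-L6-H384-uncased")

batch = tokenizer(["An example sentence."], padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state       # (batch, seq_len, 384)

# Mean pooling over non-padding tokens, mirroring the Pooling configuration above
mask = batch["attention_mask"].unsqueeze(-1).float()            # (batch, seq_len, 1)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)                                # torch.Size([1, 384])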

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub ("sentence_transformers_model_id" is a placeholder; replace it with this model's Hub repository id)
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'Computationally efficient fixed complexity LLL algorithm for lattice-reduction-aided multiple-input–multiple-output precoding',
    'In multiple-input–multiple-output broadcast channels, lattice reduction (LR) preprocessing technique can significantly improve the precoding performance. Among the existing LR algorithms, the fixed complexity Lenstra–Lenstra–Lovasz (fcLLL) algorithm applying limited number of LLL loops is suitable for the real-time communication system. However, fcLLL algorithm suffers from higher average complexity. Aiming at this problem, a computationally efficient fcLLL (CE-fcLLL) algorithm for LR-aided (LRA) precoding is developed in this study. First, the authors analyse the impact of fcLLL algorithm on the signal-to-noise ratio performance of LRA precoding by a power factor (PF) which is defined to measure the relation of reduced basis and transmit power of LRA precoding. Then, they propose a CE-fcLLL algorithm by designing a new LLL loop and introducing new early termination conditions to reduce redundant and inefficient LR operation in fcLLL algorithm. Finally, they define a PF loss factor to optimise the PF threshold and the number of LLL loops, which can lead to a performance-complexity tradeoff. Simulation results show that the proposed algorithm for LRA precoding can achieve better bit-error-rate performance than the fcLLL algorithm with remarkable complexity savings in the same upper complexity bound.',
    'ABSTRACTThe success of the open innovation (OI) paradigm is still debated and literature is searching for its determinants. Although firms’ internal social context is crucial to explain the success or failure of OI practices, such context is still poorly investigated. The aim of the paper is to analyse whether internal social capital (SC), intended as employees’ propensity to interact and work in groups in order to solve innovation issues, mediates the relationship between OI practices and innovation ambidexterity (IA). Results, based on a survey research developed in Finland, Italy and Sweden, suggest that collaborations with different typologies of partners (scientific and business) achieve good results in terms of IA, through the partial mediation of the internal SC.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
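
The embeddings also support simple semantic search over a small corpus. A hedged sketch (the corpus strings are taken from the widget examples above, and the model id is the same placeholder as before):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder repository id

corpus = [
    "Switch-and-stay combining (SSC) diversity systems have the advantage of offering one of the least complex solutions to mitigating the effect of fading.",
    "We present Bayesian duration modeling and learning for speech recognition under nonstationary speaking rates and noise conditions.",
]
query = "Markov chains and performance comparison of switched diversity systems"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarities between the query and each corpus entry, shape (1, len(corpus))
scores = model.similarity(query_embedding, corpus_embeddings)
best = int(scores.argmax())
print(corpus[best], float(scores[0][best]))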

Training Details

Training Dataset

Unnamed Dataset

  • Size: 730,454 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 4 tokens, mean: 15.97 tokens, max: 48 tokens
    • sentence_1: string; min: 18 tokens, mean: 193.95 tokens, max: 512 tokens
  • Samples:
    • sentence_0: E-government in a corporatist, communitarian society: the case of Singapore
      sentence_1: Singapore was one of the early adopters of e-government initiatives in keeping with its status as one of the few developed Asian countries and has continued to be at the forefront of developing e-government structures. While crediting the city-state for the speed of its development, observers have critiqued that the republic limits pluralism, which directly affects e-governance initiatives. This article draws on two recent government initiatives, the notions of corporatism and communitarianism and the concept of symmetry and asymmetry in communication to present the e-government and e-governance structures in Singapore. Four factors are presented as critical for the creation of a successful e-government infrastructure: an educated citizenry; adequate technical infrastructures; offering e-services that citizens need; and commitment from top government officials to support the necessary changes with financial resources and leadership. However, to have meaningful e-governance there has to be political plural...
    • sentence_0: Multicast routing representation in ad hoc networks using fuzzy Petri nets
      sentence_1: In an ad hoc network, each mobile node plays the role of a router and relays packets to final destinations. The network topology of an ad hoc network changes frequently and unpredictable, so that the routing and multicast become extremely challenging. We describe the multicast routing representation using fuzzy Petri net model with the concept of immediately reachable set in wireless ad hoc networks which all nodes equipped with GPS unit. It allows structured representation of network topology, and has a fuzzy reasoning algorithm for finding multicast tree and improves the efficiency of the ad hoc network routing scheme. Therefore when a packet is to be multicast to a group by a multicast source, a heuristic algorithm is used to compute the multicast tree based on the local network topology with a multicast source. Finally, the simulation shows that the percentage of the improvement is more than 15% when compared the IRS method with the original method.
    • sentence_0: A Prognosis Tool Based on Fuzzy Anthropometric and Questionnaire Data for Obstructive Sleep Apnea Severity
      sentence_1: Obstructive sleep apnea (OSA) are linked to the augmented risk of morbidity and mortality. Although polysomnography is considered a well-established method for diagnosing OSA, it suffers the weakness of time consuming and labor intensive, and requires doctors and attending personnel to conduct an overnight evaluation in sleep laboratories with dedicated systems. This study aims at proposing an efficient diagnosis approach for OSA on the basis of anthropometric and questionnaire data. The proposed approach integrates fuzzy set theory and decision tree to predict OSA patterns. A total of 3343 subjects who were referred for clinical suspicion of OSA (eventually 2869 confirmed with OSA and 474 otherwise) were collected, and then classified by the degree of severity. According to an assessment of experiment results on g-means, our proposed method outperforms other methods such as linear regression, decision tree, back propagation neural network, support vector machine, and learning vector quantization. The proposed method is highly viable and capable of detecting the severity of OSA. It can assist doctors in pre-diagnosis of OSA before running the formal PSG test, thereby enabling the more effective use of medical resources.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
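
To illustrate how pairs with these columns and this loss fit together in sentence-transformers 3.x, here is a hedged sketch: the two-row in-memory dataset (titles and abstract snippets taken from the widget examples above) stands in for the full 730,454 training pairs, and scale and similarity_fct mirror the parameters listed above.

from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")

# Tiny stand-in for the (title, abstract) pairs; column names mirror sentence_0 / sentence_1
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "Configurable security for scavenged storage systems",
        "Bayesian duration modeling and learning for speech recognition",
    ],
    "sentence_1": [
        "Scavenged storage systems harness unused disk space from individual workstations ...",
        "We present Bayesian duration modeling and learning for speech recognition ...",
    ],
})

# In-batch negatives: every other sentence_1 in a batch serves as a negative for a given pair
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)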
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
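
These non-default values map onto the 3.x training API roughly as follows. A sketch under assumptions: output_dir is hypothetical, defaults are left untouched, and model, loss, and train_dataset are the objects from the sketch in the Training Dataset section.

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",                                               # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()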

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.0110 500 0.4667
0.0219 1000 0.179
0.0329 1500 0.1543
0.0438 2000 0.1284
0.0548 2500 0.1123
0.0657 3000 0.101
0.0767 3500 0.0989
0.0876 4000 0.0941
0.0986 4500 0.0827
0.1095 5000 0.0874
0.1205 5500 0.0825
0.1314 6000 0.0788
0.1424 6500 0.0728
0.1533 7000 0.0768
0.1643 7500 0.0707
0.1752 8000 0.0691
0.1862 8500 0.0666
0.1971 9000 0.0644
0.2081 9500 0.0615
0.2190 10000 0.0651
0.2300 10500 0.0604
0.2409 11000 0.0595
0.2519 11500 0.0622
0.2628 12000 0.0537
0.2738 12500 0.0564
0.2848 13000 0.0622
0.2957 13500 0.052
0.3067 14000 0.0475
0.3176 14500 0.0569
0.3286 15000 0.0511
0.3395 15500 0.0476
0.3505 16000 0.0498
0.3614 16500 0.0527
0.3724 17000 0.0556
0.3833 17500 0.0495
0.3943 18000 0.0482
0.4052 18500 0.0556
0.4162 19000 0.0454
0.4271 19500 0.0452
0.4381 20000 0.0431
0.4490 20500 0.0462
0.4600 21000 0.0473
0.4709 21500 0.0387
0.4819 22000 0.041
0.4928 22500 0.0472
0.5038 23000 0.0435
0.5147 23500 0.0419
0.5257 24000 0.0395
0.5366 24500 0.043
0.5476 25000 0.0419
0.5585 25500 0.0394
0.5695 26000 0.0403
0.5805 26500 0.0436
0.5914 27000 0.0414
0.6024 27500 0.0418
0.6133 28000 0.0411
0.6243 28500 0.035
0.6352 29000 0.0397
0.6462 29500 0.0392
0.6571 30000 0.0373
0.6681 30500 0.0373
0.6790 31000 0.0363
0.6900 31500 0.0418
0.7009 32000 0.0377
0.7119 32500 0.0321
0.7228 33000 0.0331
0.7338 33500 0.0373
0.7447 34000 0.0342
0.7557 34500 0.0335
0.7666 35000 0.0323
0.7776 35500 0.0362
0.7885 36000 0.0376
0.7995 36500 0.0364
0.8104 37000 0.0396
0.8214 37500 0.0321
0.8323 38000 0.0358
0.8433 38500 0.0299
0.8543 39000 0.0304
0.8652 39500 0.0317
0.8762 40000 0.0334
0.8871 40500 0.0331
0.8981 41000 0.0326
0.9090 41500 0.0325
0.9200 42000 0.0321
0.9309 42500 0.0316
0.9419 43000 0.0321
0.9528 43500 0.0353
0.9638 44000 0.0315
0.9747 44500 0.0326
0.9857 45000 0.031
0.9966 45500 0.0315

Framework Versions

  • Python: 3.12.2
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.3
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}