sarwin committed

Commit 5ec6ffd · verified · 1 Parent(s): 0296c1a

Upload 12 files

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "word_embedding_dimension": 384,
+     "pooling_mode_cls_token": false,
+     "pooling_mode_mean_tokens": true,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false,
+     "pooling_mode_weightedmean_tokens": false,
+     "pooling_mode_lasttoken": false,
+     "include_prompt": true
+ }
README.md CHANGED
@@ -1,3 +1,647 @@
  ---
- license: mit
+ base_model: nreimers/MiniLM-L6-H384-uncased
+ datasets: []
+ language: []
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:730454
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: Markov chains and performance comparison of switched diversity systems
+   sentences:
+   - An algorithm for speaker's lip segmentation and features extraction is presented. A color video sequence of speaker's face is acquired, under natural lighting conditions and without any particular make-up. First, a logarithmic color transform is performed from the RGB to HI (hue, intensity) color space. Second, a statistical approach using Markov random field modeling determines the red hue prevailing region and motion in a spatiotemporal neighborhood. Third, the final label field is used to extract ROI (region of interest) and geometrical features.
+   - There are about 90 million high performance mobile phones used in Japan. We are now planning to develop new applications of mobile phone to support children and elder and disabled people who are out of scope of major mobile phone application based on their requirements. We have a responsibility to extend the application filed of mobile phone as a leading country of ubiquitous life. This paper discusses possibilities to realize mobile ad hoc networks using Bluetooth functions equipped on a mobile phone. Hierarchical mobile ad hoc networks using Bluetooth in a mobile phone are firstly developed as a test platform. The test platform proves the possibility of developing mobile ad hoc network by mobile phone built-in Bluetooth functions. We demonstrate their capabilities by showing results of implementing game applications on the test platform. The paper also describes some example applications using mobile ad hoc network technologies, which include a location tracking system for children on the way to a school and an alarm system for hearing impaired people
+   - Switch-and-stay combining (SSC) diversity systems have the advantage of offering one of the least complex solutions to mitigating the effect of fading. In this paper, we present a Markov chain-based analytical framework for the performance analysis of various switching strategies used in conjunction with SSC systems. The resulting expressions are quite general, and are applicable to dual-branch diversity systems operating over a variety of correlated and/or unbalanced fading channels. The mathematical formalism is illustrated by some selected numerical examples, along with their discussion and interpretation. As a result, this paper presents a thorough comparison and highlights the main differences and tradeoffs between the various SSC switching strategies.
+ - source_sentence: 'Effect of age on the failure properties of human meniscus: High-speed strain mapping of tissue tears.'
+   sentences:
+   - 'The knee meniscus is a soft fibrous tissue with a high incidence of injury in older populations. The objective of this study was to determine the effect of age on the failure behavior of human knee meniscus when applying uniaxial tensile loads parallel or perpendicular to the primary circumferential fiber orientation. Two age groups were tested: under 40 and over 65 years old. We paired high-speed video with digital image correlation to quantify for the first time the planar strains occurring in the tear region at precise time points, including at ultimate tensile stress, when the tissue begins losing load-bearing capacity. On average, older meniscus specimens loaded parallel to the fiber axis had approximately one-third less ultimate tensile strain and absorbed 60% less energy to failure within the tear region than younger specimens (p < 0.05). Older specimens also had significantly reduced strength and material toughness when loaded perpendicular to the fibers (p < 0.05). These age-related changes indicate a loss of collagen fiber extensibility and weakening of the non-fibrous matrix with age. In addition, we found that when loaded perpendicular to the circumferential fibers, tears propagated near the planes of maximum tensile stress and strain. Whereas when loaded parallel to the circumferential fibers, tears propagated oblique to the loading axis, closer to the planes of maximum shear stress and strain. Our experimental results can assist the selection of valid failure criteria for meniscus, and provide insight into the effect of age on the failure mechanisms of soft fibrous tissue.'
+   - 'Objectives: We aimed to identify key demographic risk factors for hospital attendance with COVID-19 infection. Design: Community survey Setting: The COVID Symptom Tracker mobile application co-developed by physicians and scientists at Kings College London, Massachusetts General Hospital, Boston and Zoe Global Limited was launched in the UK and US on 24th and 29th March 2020 respectively. It captured self-reported information related to COVID-19 symptoms and testing. Participants: 2,618,948 users of the COVID Symptom Tracker App. UK (95.7%) and US (4.3%) population. Data cut-off for this analysis was 21st April 2020. Main outcome measures: Visit to hospital and for those who attended hospital, the need for respiratory support in three subgroups (i) self-reported COVID-19 infection with classical symptoms (SR-COVID-19), (ii) self-reported positive COVID-19 test results (T-COVID-19), and (iii) imputed/predicted COVID-19 infection based on symptomatology (I-COVID-19). Multivariate logistic regressions for each outcome and each subgroup were adjusted for age and gender, with sensitivity analyses adjusted for comorbidities. Classical symptoms were defined as high fever and persistent cough for several days. Results: Older age and all comorbidities tested were found to be associated with increased odds of requiring hospital care for COVID-19. Obesity (BMI >30) predicted hospital care in all models, with odds ratios (OR) varying from 1.20 [1.11; 1.31] to 1.40 [1.23; 1.60] across population groups. Pre-existing lung disease and diabetes were consistently found to be associated with hospital visit with a maximum OR of 1.79 [1.64,1.95] and 1.72 [1.27; 2.31]) respectively. Findings were similar when assessing the need for respiratory support, for which age and male gender played an additional role. Conclusions: Being older, obese, diabetic or suffering from pre-existing lung, heart or renal disease placed participants at increased risk of visiting hospital with COVID-19. It is of utmost importance for governments and the scientific and medical communities to work together to find evidence-based means of protecting those deemed most vulnerable from COVID-19. Trial registration: The App Ethics have been approved by KCL ethics Committee REMAS ID 18210, review reference LRS-19/20-18210'
+   - Social networking sites (SNS) have growing popularity and several sites compete with each other. This study examines three models to determine how competition between Facebook and other social networking sites may affect continuance intention on Facebook. The first model examines the relationship between having an account on four different SNSs and its impact on Facebook. Twitter users have lower intentions to continue using Facebook, Instagram users have higher intentions. The second model examines attitudes toward specific alternatives and found that users who felt alternatives were attractive have lower intentions to continue using Facebook. The third model examined general attitudes about alternative attractiveness and attitudes toward switching, this model explained a moderate to substantial amount of the variance in continuance intention. This study makes important contributions to both research and practice.
+ - source_sentence: Bayesian duration modeling and learning for speech recognition
+   sentences:
+   - Measuring solar irradiance allows for direct maximization of the efficiency in photovoltaic power plants. However, devices for solar irradiance sensing, such as pyranometers and pyrheliometers, are expensive and difficult to calibrate and thus seldom utilized in photovoltaic power plants. Indirect methods are instead implemented in order to maximize efficiency. This paper proposes a novel approach for solar irradiance measurement based on neural networks, which may, in turn, be used to maximize efficiency directly. An initial estimate suggests the cost of the sensor proposed herein may be price competitive with other inexpensive solutions available in the market, making the device a good candidate for large deployment in photovoltaic power plants. The proposed sensor is implemented through a photovoltaic cell, a temperature sensor, and a low-cost microcontroller. The use of a microcontroller allows for easy calibration, updates, and enhancement by simply adding code libraries. Furthermore, it can be interfaced via standard communication means with other control devices, integrated into control schemes, and remote-controlled through its embedded web server. The proposed approach is validated through experimental prototyping and compared against a commercial device.
+   - Form is a framework used to construct tools for analyzing the runtime behavior of standalone and distributed software systems. The architecture of Form is based on the event broadcast and pipe and filter styles. In the implementation of this architecture, execution profiles may be generated from standalone or distributed systems. The profile data is subsequently broadcast by Form to one or more views. Each view is a tool used to support program understanding or other software development activities. The authors describe the Form architecture and implementation, as well as a tool that was built using Form. This tool profiles Java-based distributed systems and generates UML sequence diagrams to describe their execution. We also present a case study that shows how this tool was used to extract sequence diagrams from a three-tiered EJB-based distributed application.
+   - We present Bayesian duration modeling and learning for speech recognition under nonstationary speaking rates and noise conditions. In this study, the Gaussian, Poisson and gamma distributions are investigated, to characterize duration models. The maximum a posteriori (MAP) estimate of the gamma duration model is developed. To exploit the sequential learning, we adopt the Poisson duration model, incorporated with gamma prior density, which belongs to the conjugate prior family. When the adaptation data are sequentially observed, the gamma posterior density is produced for twofold advantages. One is to determine the optimal quasi-Bayes (QB) duration parameter, which can be merged in HMM's for speech recognition. The other one is to build the updating mechanism of gamma prior statistics for sequential learning. An expectation-maximization algorithm is applied to fulfill parameter estimation. In the experiments, the proposed Bayesian approaches significantly improve the speech recognition performance of Mandarin broadcast news. Batch and sequential learning are investigated for MAP and QB duration models, respectively.
+ - source_sentence: Configurable security for scavenged storage systems
+   sentences:
+   - Scavenged storage systems harness unused disk space from individual workstations the same way idle CPU cycles are harnessed by desktop grid applications like Seti@Home. These systems provide a promising low cost, high-performance storage solution in certain high-end computing scenarios. However, selecting the security level and designing the security mechanisms for such systems is challenging as scavenging idle storage opens the door for security threats absent in traditional storage systems that use dedicated nodes under a single administrative domain. Moreover, increased security often comes at the price of performance and scalability. This paper develops a general threat model for systems that use scavenged storage, presents the design of a protocol that addresses these threats and is optimized for throughput, and evaluates the overheads brought by the new security protocol when configured to provide a number of different security properties.
+   - Histone methyltransferases are involved in many important biological processes, and abnormalities in these enzymes are associated with tumorigenesis and progression. Disruptor of telomeric silencing 1-like (DOT1L), a key hub in histone lysine methyltransferases, has been reported to play an important role in the processes of mixed-lineage leukemia (MLL)-rearranged leukemias and validated to be a potential therapeutic target. In this study, we identified a novel DOT1L inhibitor, DC_L115 (CAS no. 1163729-79-0), by combining structure-based virtual screening with biochemical analyses. This potent inhibitor DC_L115 shows high inhibitory activity toward DOT1L (IC50 = 1.5 μM). Through a process of surface plasmon resonance-based binding assays, DC_L115 was founded to bind to DOT1L with a binding affinity of 0.6 μM in vitro. Moreover, this compound selectively inhibits MLL-rearranged cell proliferation with an IC50 value of 37.1 μM. We further predicted the binding modes of DC_L115 through molecular docking anal...
+   - Employing channel state information at the network layer, efficient routing protocols for equal-power and optimal-power allocation in a multihop network in fading are proposed. The end-to-end outage probability from source to destination is used as the optimization criterion. The problem of finding the optimal route is investigated under either known mean channel state information (CSI) or known instantaneous CSI. The analysis shows that the proposed routing strategy achieves full diversity order, equal to the total number of nodes in the network excluding the destination, only when instantaneous CSI is known and used. The optimal routing algorithm requires a centralized exhaustive search which leads to an exponential complexity, which is infeasible for large networks. An algorithm of polynomial complexity for a centralized environment is developed by reducing the search space. A distributed approach based on the Bellman-Ford routing algorithm is proposed which achieves a good implementation complexity-performance trade-off.
+ - source_sentence: Computationally efficient fixed complexity LLL algorithm for lattice-reduction-aided multiple-input–multiple-output precoding
+   sentences:
+   - ABSTRACTThe success of the open innovation (OI) paradigm is still debated and literature is searching for its determinants. Although firms’ internal social context is crucial to explain the success or failure of OI practices, such context is still poorly investigated. The aim of the paper is to analyse whether internal social capital (SC), intended as employees’ propensity to interact and work in groups in order to solve innovation issues, mediates the relationship between OI practices and innovation ambidexterity (IA). Results, based on a survey research developed in Finland, Italy and Sweden, suggest that collaborations with different typologies of partners (scientific and business) achieve good results in terms of IA, through the partial mediation of the internal SC.
+   - In multiple-input–multiple-output broadcast channels, lattice reduction (LR) preprocessing technique can significantly improve the precoding performance. Among the existing LR algorithms, the fixed complexity Lenstra–Lenstra–Lovasz (fcLLL) algorithm applying limited number of LLL loops is suitable for the real-time communication system. However, fcLLL algorithm suffers from higher average complexity. Aiming at this problem, a computationally efficient fcLLL (CE-fcLLL) algorithm for LR-aided (LRA) precoding is developed in this study. First, the authors analyse the impact of fcLLL algorithm on the signal-to-noise ratio performance of LRA precoding by a power factor (PF) which is defined to measure the relation of reduced basis and transmit power of LRA precoding. Then, they propose a CE-fcLLL algorithm by designing a new LLL loop and introducing new early termination conditions to reduce redundant and inefficient LR operation in fcLLL algorithm. Finally, they define a PF loss factor to optimise the PF threshold and the number of LLL loops, which can lead to a performance-complexity tradeoff. Simulation results show that the proposed algorithm for LRA precoding can achieve better bit-error-rate performance than the fcLLL algorithm with remarkable complexity savings in the same upper complexity bound.
+   - 'While multistage switching networks for vector multiprocessors have been studied extensively, detailed evaluations of their performance are rare. Indeed, analytical models, simulations with pseudo-synthetic loads, studies focused on average-value parameters, and measurements of networks disconnected from the machine all provide limited information. In this paper, instead, we present an in-depth empirical analysis of a multistage switching network in a realistic setting: we use hardware probes to examine the performance of the omega network of the Cedar shared-memory machine executing real applications. The machine is configured with 16 vector processors. The analysis suggests that the performance of multistage switching networks is limited by traffic non-uniformities. We identify two major non-uniformities that degrade Cedar''s performance and are likely to slow down other networks too. The first one is the contention caused by the return messages in a vector access as they converge from the memories to one processor port. This traffic convergence penalizes vector reads and, more importantly, causes tree saturation. The second non-uniformity is the uneven contention delays induced by even a relatively fair scheme to resolve message collisions. Based on our observations, we argue that intuitive optimizations for multistage switching networks may not be cost-effective. Instead, we suggest changes to increase the network bandwidth at the root of the traffic convergence tree and to delay traffic convergence up until the final stages of the network. >'
  ---
+
+ # SentenceTransformer based on nreimers/MiniLM-L6-H384-uncased
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) <!-- at revision 3276f0fac9d818781d7a1327b3ff818fc4e643c0 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
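+
+ The Pooling module above is configured for attention-masked mean pooling (see `1_Pooling/config.json` in this commit). As a minimal sketch of what the two modules compute, assuming the repository also loads as a plain `transformers` BERT checkpoint (the model id is the same placeholder used in the usage section below):
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ model_id = "sentence_transformers_model_id"  # placeholder id, as in the usage example
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ bert = AutoModel.from_pretrained(model_id)
+
+ batch = tokenizer(["An example sentence"], padding=True, truncation=True,
+                   max_length=512, return_tensors="pt")
+ with torch.no_grad():
+     token_embeddings = bert(**batch).last_hidden_state  # (batch, seq_len, 384)
+
+ # Mean pooling: average the token embeddings while masking out padding,
+ # mirroring pooling_mode_mean_tokens=True in the pooling config.
+ mask = batch["attention_mask"].unsqueeze(-1).float()
+ sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
+ print(sentence_embeddings.shape)  # torch.Size([1, 384])
+ ```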
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+     'Computationally efficient fixed complexity LLL algorithm for lattice-reduction-aided multiple-input–multiple-output precoding',
+     'In multiple-input–multiple-output broadcast channels, lattice reduction (LR) preprocessing technique can significantly improve the precoding performance. Among the existing LR algorithms, the fixed complexity Lenstra–Lenstra–Lovasz (fcLLL) algorithm applying limited number of LLL loops is suitable for the real-time communication system. However, fcLLL algorithm suffers from higher average complexity. Aiming at this problem, a computationally efficient fcLLL (CE-fcLLL) algorithm for LR-aided (LRA) precoding is developed in this study. First, the authors analyse the impact of fcLLL algorithm on the signal-to-noise ratio performance of LRA precoding by a power factor (PF) which is defined to measure the relation of reduced basis and transmit power of LRA precoding. Then, they propose a CE-fcLLL algorithm by designing a new LLL loop and introducing new early termination conditions to reduce redundant and inefficient LR operation in fcLLL algorithm. Finally, they define a PF loss factor to optimise the PF threshold and the number of LLL loops, which can lead to a performance-complexity tradeoff. Simulation results show that the proposed algorithm for LRA precoding can achieve better bit-error-rate performance than the fcLLL algorithm with remarkable complexity savings in the same upper complexity bound.',
+     'ABSTRACTThe success of the open innovation (OI) paradigm is still debated and literature is searching for its determinants. Although firms’ internal social context is crucial to explain the success or failure of OI practices, such context is still poorly investigated. The aim of the paper is to analyse whether internal social capital (SC), intended as employees’ propensity to interact and work in groups in order to solve innovation issues, mediates the relationship between OI practices and innovation ambidexterity (IA). Results, based on a survey research developed in Finland, Italy and Sweden, suggest that collaborations with different typologies of partners (scientific and business) achieve good results in terms of IA, through the partial mediation of the internal SC.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
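+
+ Since the model was trained on (title, abstract) pairs, a natural application is retrieval: embed a query title and rank candidate abstracts by cosine similarity. A small sketch under the same placeholder model id; the corpus strings are illustrative only:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id
+
+ query = "Bayesian duration modeling and learning for speech recognition"
+ corpus = [
+     "We present Bayesian duration modeling and learning for speech recognition ...",
+     "Form is a framework used to construct tools for analyzing runtime behavior ...",
+ ]  # in practice, your own collection of abstracts
+
+ # Cosine scores between the query and every corpus entry, shape (1, len(corpus)).
+ scores = model.similarity(model.encode([query]), model.encode(corpus))
+ best = int(scores.argmax())
+ print(corpus[best])
+ ```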
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 730,454 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence_0 | sentence_1 |
+   |:--------|:-----------|:-----------|
+   | type    | string     | string     |
+   | details | <ul><li>min: 4 tokens</li><li>mean: 15.97 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 193.95 tokens</li><li>max: 512 tokens</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 |
+   |:-----------|:-----------|
+   | <code>E-government in a corporatist, communitarian society: the case of Singapore</code> | <code>Singapore was one of the early adopters of e-government initiatives in keeping with its status as one of the few developed Asian countries and has continued to be at the forefront of developing e-government structures. While crediting the city-state for the speed of its development, observers have critiqued that the republic limits pluralism, which directly affects e-governance initiatives. This article draws on two recent government initiatives, the notions of corporatism and communitarianism and the concept of symmetry and asymmetry in communication to present the e-government and e-governance structures in Singapore. Four factors are presented as critical for the creation of a successful e-government infrastructure: an educated citizenry; adequate technical infrastructures; offering e-services that citizens need; and commitment from top government officials to support the necessary changes with financial resources and leadership. However, to have meaningful e-governance there has to be political plural...</code> |
+   | <code>Multicast routing representation in ad hoc networks using fuzzy Petri nets</code> | <code>In an ad hoc network, each mobile node plays the role of a router and relays packets to final destinations. The network topology of an ad hoc network changes frequently and unpredictable, so that the routing and multicast become extremely challenging. We describe the multicast routing representation using fuzzy Petri net model with the concept of immediately reachable set in wireless ad hoc networks which all nodes equipped with GPS unit. It allows structured representation of network topology, and has a fuzzy reasoning algorithm for finding multicast tree and improves the efficiency of the ad hoc network routing scheme. Therefore when a packet is to be multicast to a group by a multicast source, a heuristic algorithm is used to compute the multicast tree based on the local network topology with a multicast source. Finally, the simulation shows that the percentage of the improvement is more than 15% when compared the IRS method with the original method.</code> |
+   | <code>A Prognosis Tool Based on Fuzzy Anthropometric and Questionnaire Data for Obstructive Sleep Apnea Severity</code> | <code>Obstructive sleep apnea (OSA) are linked to the augmented risk of morbidity and mortality. Although polysomnography is considered a well-established method for diagnosing OSA, it suffers the weakness of time consuming and labor intensive, and requires doctors and attending personnel to conduct an overnight evaluation in sleep laboratories with dedicated systems. This study aims at proposing an efficient diagnosis approach for OSA on the basis of anthropometric and questionnaire data. The proposed approach integrates fuzzy set theory and decision tree to predict OSA patterns. A total of 3343 subjects who were referred for clinical suspicion of OSA (eventually 2869 confirmed with OSA and 474 otherwise) were collected, and then classified by the degree of severity. According to an assessment of experiment results on g-means, our proposed method outperforms other methods such as linear regression, decision tree, back propagation neural network, support vector machine, and learning vector quantization. The proposed method is highly viable and capable of detecting the severity of OSA. It can assist doctors in pre-diagnosis of OSA before running the formal PSG test, thereby enabling the more effective use of medical resources.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
+
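+ Concretely, each (sentence_0, sentence_1) pair is a positive; within a batch, every other sentence_1 acts as a negative. A rough, self-contained sketch of the score and loss computation (a simplification, not the library's implementation):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
+     """anchors, positives: (batch, dim) embedding tensors for paired texts."""
+     # All-pairs cosine similarity, scaled by 20.0 as in the parameters above.
+     scores = scale * F.cosine_similarity(anchors.unsqueeze(1), positives.unsqueeze(0), dim=-1)
+     # The matching positive sits on the diagonal; all others are in-batch negatives.
+     labels = torch.arange(scores.size(0), device=scores.device)
+     return F.cross_entropy(scores, labels)
+ ```
+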
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `num_train_epochs`: 1
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: no
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
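+ The non-default values above map onto the Sentence Transformers v3 training API. A hedged sketch of an equivalent setup; `train_dataset` here is a one-row placeholder for the real 730,454-pair dataset:
+
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
+                                    SentenceTransformerTrainingArguments)
+ from sentence_transformers.losses import MultipleNegativesRankingLoss
+ from sentence_transformers.training_args import MultiDatasetBatchSamplers
+
+ # Start from the base checkpoint; mean pooling is added automatically.
+ model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")
+ loss = MultipleNegativesRankingLoss(model, scale=20.0)
+
+ # Placeholder rows; the real dataset has columns sentence_0 / sentence_1.
+ train_dataset = Dataset.from_dict({
+     "sentence_0": ["Configurable security for scavenged storage systems"],
+     "sentence_1": ["Scavenged storage systems harness unused disk space ..."],
+ })
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="./output",
+     num_train_epochs=1,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
+ )
+
+ trainer = SentenceTransformerTrainer(model=model, args=args,
+                                      train_dataset=train_dataset, loss=loss)
+ trainer.train()
+ model.save_pretrained("./output")
+ ```
+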
+ ### Training Logs
+ | Epoch | Step | Training Loss |
+ |:------:|:-----:|:-------------:|
+ | 0.0110 | 500 | 0.4667 |
+ | 0.0219 | 1000 | 0.179 |
+ | 0.0329 | 1500 | 0.1543 |
+ | 0.0438 | 2000 | 0.1284 |
+ | 0.0548 | 2500 | 0.1123 |
+ | 0.0657 | 3000 | 0.101 |
+ | 0.0767 | 3500 | 0.0989 |
+ | 0.0876 | 4000 | 0.0941 |
+ | 0.0986 | 4500 | 0.0827 |
+ | 0.1095 | 5000 | 0.0874 |
+ | 0.1205 | 5500 | 0.0825 |
+ | 0.1314 | 6000 | 0.0788 |
+ | 0.1424 | 6500 | 0.0728 |
+ | 0.1533 | 7000 | 0.0768 |
+ | 0.1643 | 7500 | 0.0707 |
+ | 0.1752 | 8000 | 0.0691 |
+ | 0.1862 | 8500 | 0.0666 |
+ | 0.1971 | 9000 | 0.0644 |
+ | 0.2081 | 9500 | 0.0615 |
+ | 0.2190 | 10000 | 0.0651 |
+ | 0.2300 | 10500 | 0.0604 |
+ | 0.2409 | 11000 | 0.0595 |
+ | 0.2519 | 11500 | 0.0622 |
+ | 0.2628 | 12000 | 0.0537 |
+ | 0.2738 | 12500 | 0.0564 |
+ | 0.2848 | 13000 | 0.0622 |
+ | 0.2957 | 13500 | 0.052 |
+ | 0.3067 | 14000 | 0.0475 |
+ | 0.3176 | 14500 | 0.0569 |
+ | 0.3286 | 15000 | 0.0511 |
+ | 0.3395 | 15500 | 0.0476 |
+ | 0.3505 | 16000 | 0.0498 |
+ | 0.3614 | 16500 | 0.0527 |
+ | 0.3724 | 17000 | 0.0556 |
+ | 0.3833 | 17500 | 0.0495 |
+ | 0.3943 | 18000 | 0.0482 |
+ | 0.4052 | 18500 | 0.0556 |
+ | 0.4162 | 19000 | 0.0454 |
+ | 0.4271 | 19500 | 0.0452 |
+ | 0.4381 | 20000 | 0.0431 |
+ | 0.4490 | 20500 | 0.0462 |
+ | 0.4600 | 21000 | 0.0473 |
+ | 0.4709 | 21500 | 0.0387 |
+ | 0.4819 | 22000 | 0.041 |
+ | 0.4928 | 22500 | 0.0472 |
+ | 0.5038 | 23000 | 0.0435 |
+ | 0.5147 | 23500 | 0.0419 |
+ | 0.5257 | 24000 | 0.0395 |
+ | 0.5366 | 24500 | 0.043 |
+ | 0.5476 | 25000 | 0.0419 |
+ | 0.5585 | 25500 | 0.0394 |
+ | 0.5695 | 26000 | 0.0403 |
+ | 0.5805 | 26500 | 0.0436 |
+ | 0.5914 | 27000 | 0.0414 |
+ | 0.6024 | 27500 | 0.0418 |
+ | 0.6133 | 28000 | 0.0411 |
+ | 0.6243 | 28500 | 0.035 |
+ | 0.6352 | 29000 | 0.0397 |
+ | 0.6462 | 29500 | 0.0392 |
+ | 0.6571 | 30000 | 0.0373 |
+ | 0.6681 | 30500 | 0.0373 |
+ | 0.6790 | 31000 | 0.0363 |
+ | 0.6900 | 31500 | 0.0418 |
+ | 0.7009 | 32000 | 0.0377 |
+ | 0.7119 | 32500 | 0.0321 |
+ | 0.7228 | 33000 | 0.0331 |
+ | 0.7338 | 33500 | 0.0373 |
+ | 0.7447 | 34000 | 0.0342 |
+ | 0.7557 | 34500 | 0.0335 |
+ | 0.7666 | 35000 | 0.0323 |
+ | 0.7776 | 35500 | 0.0362 |
+ | 0.7885 | 36000 | 0.0376 |
+ | 0.7995 | 36500 | 0.0364 |
+ | 0.8104 | 37000 | 0.0396 |
+ | 0.8214 | 37500 | 0.0321 |
+ | 0.8323 | 38000 | 0.0358 |
+ | 0.8433 | 38500 | 0.0299 |
+ | 0.8543 | 39000 | 0.0304 |
+ | 0.8652 | 39500 | 0.0317 |
+ | 0.8762 | 40000 | 0.0334 |
+ | 0.8871 | 40500 | 0.0331 |
+ | 0.8981 | 41000 | 0.0326 |
+ | 0.9090 | 41500 | 0.0325 |
+ | 0.9200 | 42000 | 0.0321 |
+ | 0.9309 | 42500 | 0.0316 |
+ | 0.9419 | 43000 | 0.0321 |
+ | 0.9528 | 43500 | 0.0353 |
+ | 0.9638 | 44000 | 0.0315 |
+ | 0.9747 | 44500 | 0.0326 |
+ | 0.9857 | 45000 | 0.031 |
+ | 0.9966 | 45500 | 0.0315 |
+
+
+ ### Framework Versions
+ - Python: 3.12.2
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.42.3
+ - PyTorch: 2.3.1+cu121
+ - Accelerate: 0.32.1
+ - Datasets: 2.20.0
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+     "_name_or_path": "nreimers/MiniLM-L6-H384-uncased",
+     "architectures": [
+         "BertModel"
+     ],
+     "attention_probs_dropout_prob": 0.1,
+     "classifier_dropout": null,
+     "gradient_checkpointing": false,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.1,
+     "hidden_size": 384,
+     "initializer_range": 0.02,
+     "intermediate_size": 1536,
+     "layer_norm_eps": 1e-12,
+     "max_position_embeddings": 512,
+     "model_type": "bert",
+     "num_attention_heads": 12,
+     "num_hidden_layers": 6,
+     "pad_token_id": 0,
+     "position_embedding_type": "absolute",
+     "torch_dtype": "float32",
+     "transformers_version": "4.42.3",
+     "type_vocab_size": 2,
+     "use_cache": true,
+     "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "__version__": {
+         "sentence_transformers": "3.0.1",
+         "transformers": "4.42.3",
+         "pytorch": "2.3.1+cu121"
+     },
+     "prompts": {},
+     "default_prompt_name": null,
+     "similarity_fn_name": null
+ }
log.txt ADDED
@@ -0,0 +1,93 @@
+ {'loss': 0.4667, 'grad_norm': 13.565043449401855, 'learning_rate': 1.978138964338912e-05, 'epoch': 0.01}
+ {'loss': 0.179, 'grad_norm': 12.648781776428223, 'learning_rate': 1.95623411898712e-05, 'epoch': 0.02}
+ {'loss': 0.1543, 'grad_norm': 7.569096565246582, 'learning_rate': 1.9343292736353283e-05, 'epoch': 0.03}
+ {'loss': 0.1284, 'grad_norm': 8.04305648803711, 'learning_rate': 1.9124244282835366e-05, 'epoch': 0.04}
+ {'loss': 0.1123, 'grad_norm': 1.5785869359970093, 'learning_rate': 1.8905195829317446e-05, 'epoch': 0.05}
+ {'loss': 0.101, 'grad_norm': 11.429397583007812, 'learning_rate': 1.868614737579953e-05, 'epoch': 0.07}
+ {'loss': 0.0989, 'grad_norm': 2.253450393676758, 'learning_rate': 1.846709892228161e-05, 'epoch': 0.08}
+ {'loss': 0.0941, 'grad_norm': 2.223756790161133, 'learning_rate': 1.8248050468763692e-05, 'epoch': 0.09}
+ {'loss': 0.0827, 'grad_norm': 11.125922203063965, 'learning_rate': 1.8029002015245775e-05, 'epoch': 0.1}
+ {'loss': 0.0874, 'grad_norm': 1.5281122922897339, 'learning_rate': 1.7809953561727855e-05, 'epoch': 0.11}
+ {'loss': 0.0825, 'grad_norm': 1.3396228551864624, 'learning_rate': 1.7590905108209938e-05, 'epoch': 0.12}
+ {'loss': 0.0788, 'grad_norm': 3.097214460372925, 'learning_rate': 1.7371856654692018e-05, 'epoch': 0.13}
+ {'loss': 0.0728, 'grad_norm': 0.49445801973342896, 'learning_rate': 1.71528082011741e-05, 'epoch': 0.14}
+ {'loss': 0.0768, 'grad_norm': 4.991362571716309, 'learning_rate': 1.693375974765618e-05, 'epoch': 0.15}
+ {'loss': 0.0707, 'grad_norm': 0.243109792470932, 'learning_rate': 1.6714711294138264e-05, 'epoch': 0.16}
+ {'loss': 0.0691, 'grad_norm': 0.27024760842323303, 'learning_rate': 1.6495662840620347e-05, 'epoch': 0.18}
+ {'loss': 0.0666, 'grad_norm': 7.135988712310791, 'learning_rate': 1.6276614387102427e-05, 'epoch': 0.19}
+ {'loss': 0.0644, 'grad_norm': 12.5233154296875, 'learning_rate': 1.605756593358451e-05, 'epoch': 0.2}
+ {'loss': 0.0615, 'grad_norm': 4.112820148468018, 'learning_rate': 1.5838517480066593e-05, 'epoch': 0.21}
+ {'loss': 0.0651, 'grad_norm': 1.6842459440231323, 'learning_rate': 1.5619469026548676e-05, 'epoch': 0.22}
+ {'loss': 0.0604, 'grad_norm': 12.45095443725586, 'learning_rate': 1.5400420573030756e-05, 'epoch': 0.23}
+ {'loss': 0.0595, 'grad_norm': 2.162442445755005, 'learning_rate': 1.5181372119512839e-05, 'epoch': 0.24}
+ {'loss': 0.0622, 'grad_norm': 1.0790185928344727, 'learning_rate': 1.496232366599492e-05, 'epoch': 0.25}
+ {'loss': 0.0537, 'grad_norm': 3.753148317337036, 'learning_rate': 1.4743275212477002e-05, 'epoch': 0.26}
+ {'loss': 0.0564, 'grad_norm': 5.757555961608887, 'learning_rate': 1.4524226758959083e-05, 'epoch': 0.27}
+ {'loss': 0.0622, 'grad_norm': 5.632264614105225, 'learning_rate': 1.4305178305441165e-05, 'epoch': 0.28}
+ {'loss': 0.052, 'grad_norm': 8.988792419433594, 'learning_rate': 1.4086129851923248e-05, 'epoch': 0.3}
+ {'loss': 0.0475, 'grad_norm': 5.292848587036133, 'learning_rate': 1.386708139840533e-05, 'epoch': 0.31}
+ {'loss': 0.0569, 'grad_norm': 6.716405391693115, 'learning_rate': 1.364803294488741e-05, 'epoch': 0.32}
+ {'loss': 0.0511, 'grad_norm': 0.022643841803073883, 'learning_rate': 1.3428984491369492e-05, 'epoch': 0.33}
+ {'loss': 0.0476, 'grad_norm': 8.440044403076172, 'learning_rate': 1.3209936037851574e-05, 'epoch': 0.34}
+ {'loss': 0.0498, 'grad_norm': 0.30164211988449097, 'learning_rate': 1.2990887584333655e-05, 'epoch': 0.35}
+ {'loss': 0.0527, 'grad_norm': 0.6262193918228149, 'learning_rate': 1.2771839130815738e-05, 'epoch': 0.36}
+ {'loss': 0.0556, 'grad_norm': 0.5987337231636047, 'learning_rate': 1.255279067729782e-05, 'epoch': 0.37}
+ {'loss': 0.0495, 'grad_norm': 0.11035127192735672, 'learning_rate': 1.2333742223779901e-05, 'epoch': 0.38}
+ {'loss': 0.0482, 'grad_norm': 8.444112777709961, 'learning_rate': 1.2114693770261983e-05, 'epoch': 0.39}
+ {'loss': 0.0556, 'grad_norm': 3.3394901752471924, 'learning_rate': 1.1895645316744064e-05, 'epoch': 0.41}
+ {'loss': 0.0454, 'grad_norm': 0.5866299867630005, 'learning_rate': 1.1676596863226145e-05, 'epoch': 0.42}
+ {'loss': 0.0452, 'grad_norm': 0.6568431854248047, 'learning_rate': 1.1457548409708229e-05, 'epoch': 0.43}
+ {'loss': 0.0431, 'grad_norm': 4.396225452423096, 'learning_rate': 1.123849995619031e-05, 'epoch': 0.44}
+ {'loss': 0.0462, 'grad_norm': 1.3214092254638672, 'learning_rate': 1.1019451502672391e-05, 'epoch': 0.45}
+ {'loss': 0.0473, 'grad_norm': 6.367979049682617, 'learning_rate': 1.0800403049154473e-05, 'epoch': 0.46}
+ {'loss': 0.0387, 'grad_norm': 2.694465160369873, 'learning_rate': 1.0581354595636554e-05, 'epoch': 0.47}
+ {'loss': 0.041, 'grad_norm': 1.0588641166687012, 'learning_rate': 1.0362306142118636e-05, 'epoch': 0.48}
+ {'loss': 0.0472, 'grad_norm': 2.1439011096954346, 'learning_rate': 1.0143257688600719e-05, 'epoch': 0.49}
+ {'loss': 0.0435, 'grad_norm': 1.194575548171997, 'learning_rate': 9.9242092350828e-06, 'epoch': 0.5}
+ {'loss': 0.0419, 'grad_norm': 11.476897239685059, 'learning_rate': 9.705160781564884e-06, 'epoch': 0.51}
+ {'loss': 0.0395, 'grad_norm': 8.643529891967773, 'learning_rate': 9.486112328046965e-06, 'epoch': 0.53}
+ {'loss': 0.043, 'grad_norm': 0.6745238900184631, 'learning_rate': 9.267063874529046e-06, 'epoch': 0.54}
+ {'loss': 0.0419, 'grad_norm': 0.9084439873695374, 'learning_rate': 9.048015421011128e-06, 'epoch': 0.55}
+ {'loss': 0.0394, 'grad_norm': 0.5197725892066956, 'learning_rate': 8.82896696749321e-06, 'epoch': 0.56}
+ {'loss': 0.0403, 'grad_norm': 0.03646567091345787, 'learning_rate': 8.609918513975292e-06, 'epoch': 0.57}
+ {'loss': 0.0436, 'grad_norm': 1.5766927003860474, 'learning_rate': 8.390870060457374e-06, 'epoch': 0.58}
+ {'loss': 0.0414, 'grad_norm': 8.600505828857422, 'learning_rate': 8.171821606939455e-06, 'epoch': 0.59}
+ {'loss': 0.0418, 'grad_norm': 0.32232749462127686, 'learning_rate': 7.952773153421538e-06, 'epoch': 0.6}
+ {'loss': 0.0411, 'grad_norm': 1.5211155414581299, 'learning_rate': 7.73372469990362e-06, 'epoch': 0.61}
+ {'loss': 0.035, 'grad_norm': 0.3087010085582733, 'learning_rate': 7.514676246385701e-06, 'epoch': 0.62}
+ {'loss': 0.0397, 'grad_norm': 7.905180931091309, 'learning_rate': 7.295627792867783e-06, 'epoch': 0.64}
+ {'loss': 0.0392, 'grad_norm': 0.3070434331893921, 'learning_rate': 7.076579339349865e-06, 'epoch': 0.65}
+ {'loss': 0.0373, 'grad_norm': 7.915885925292969, 'learning_rate': 6.8575308858319466e-06, 'epoch': 0.66}
+ {'loss': 0.0373, 'grad_norm': 1.2518105506896973, 'learning_rate': 6.638482432314029e-06, 'epoch': 0.67}
+ {'loss': 0.0363, 'grad_norm': 1.4480468034744263, 'learning_rate': 6.41943397879611e-06, 'epoch': 0.68}
+ {'loss': 0.0418, 'grad_norm': 1.203717589378357, 'learning_rate': 6.200385525278192e-06, 'epoch': 0.69}
+ {'loss': 0.0377, 'grad_norm': 1.5048280954360962, 'learning_rate': 5.981337071760274e-06, 'epoch': 0.7}
+ {'loss': 0.0321, 'grad_norm': 0.9017734527587891, 'learning_rate': 5.7622886182423555e-06, 'epoch': 0.71}
+ {'loss': 0.0331, 'grad_norm': 1.6583552360534668, 'learning_rate': 5.543240164724437e-06, 'epoch': 0.72}
+ {'loss': 0.0373, 'grad_norm': 0.6316823959350586, 'learning_rate': 5.324191711206519e-06, 'epoch': 0.73}
+ {'loss': 0.0342, 'grad_norm': 4.767064094543457, 'learning_rate': 5.105143257688601e-06, 'epoch': 0.74}
+ {'loss': 0.0335, 'grad_norm': 0.1754075288772583, 'learning_rate': 4.886094804170683e-06, 'epoch': 0.76}
+ {'loss': 0.0323, 'grad_norm': 2.113138437271118, 'learning_rate': 4.667046350652764e-06, 'epoch': 0.77}
+ {'loss': 0.0362, 'grad_norm': 12.435863494873047, 'learning_rate': 4.447997897134847e-06, 'epoch': 0.78}
+ {'loss': 0.0376, 'grad_norm': 3.4276435375213623, 'learning_rate': 4.228949443616928e-06, 'epoch': 0.79}
+ {'loss': 0.0364, 'grad_norm': 9.459793090820312, 'learning_rate': 4.0099009900990104e-06, 'epoch': 0.8}
+ {'loss': 0.0396, 'grad_norm': 0.425851970911026, 'learning_rate': 3.7908525365810923e-06, 'epoch': 0.81}
+ {'loss': 0.0321, 'grad_norm': 0.4842585623264313, 'learning_rate': 3.5718040830631738e-06, 'epoch': 0.82}
+ {'loss': 0.0358, 'grad_norm': 4.428570747375488, 'learning_rate': 3.3527556295452556e-06, 'epoch': 0.83}
+ {'loss': 0.0299, 'grad_norm': 1.5310533046722412, 'learning_rate': 3.1337071760273375e-06, 'epoch': 0.84}
+ {'loss': 0.0304, 'grad_norm': 0.5274935364723206, 'learning_rate': 2.9146587225094194e-06, 'epoch': 0.85}
+ {'loss': 0.0317, 'grad_norm': 0.3205825090408325, 'learning_rate': 2.695610268991501e-06, 'epoch': 0.87}
+ {'loss': 0.0334, 'grad_norm': 0.3429725468158722, 'learning_rate': 2.476561815473583e-06, 'epoch': 0.88}
+ {'loss': 0.0331, 'grad_norm': 4.579667568206787, 'learning_rate': 2.2575133619556646e-06, 'epoch': 0.89}
+ {'loss': 0.0326, 'grad_norm': 0.6423684358596802, 'learning_rate': 2.038464908437747e-06, 'epoch': 0.9}
+ {'loss': 0.0325, 'grad_norm': 2.7020912170410156, 'learning_rate': 1.8194164549198285e-06, 'epoch': 0.91}
+ {'loss': 0.0321, 'grad_norm': 0.07015173882246017, 'learning_rate': 1.6003680014019102e-06, 'epoch': 0.92}
+ {'loss': 0.0316, 'grad_norm': 3.600149631500244, 'learning_rate': 1.3813195478839918e-06, 'epoch': 0.93}
+ {'loss': 0.0321, 'grad_norm': 1.3229649066925049, 'learning_rate': 1.162271094366074e-06, 'epoch': 0.94}
+ {'loss': 0.0353, 'grad_norm': 0.21412307024002075, 'learning_rate': 9.432226408481557e-07, 'epoch': 0.95}
+ {'loss': 0.0315, 'grad_norm': 0.8051860928535461, 'learning_rate': 7.241741873302376e-07, 'epoch': 0.96}
+ {'loss': 0.0326, 'grad_norm': 0.09261493384838104, 'learning_rate': 5.051257338123193e-07, 'epoch': 0.97}
+ {'loss': 0.031, 'grad_norm': 1.8044675588607788, 'learning_rate': 2.8607728029440114e-07, 'epoch': 0.99}
+ {'loss': 0.0315, 'grad_norm': 5.446718215942383, 'learning_rate': 6.702882677648297e-08, 'epoch': 1.0}
+ {'train_runtime': 19215.0068, 'train_samples_per_second': 38.015, 'train_steps_per_second': 2.376, 'train_loss': 0.056530908753462804, 'epoch': 1.0}
+ 2024-07-20 07:20:41 - Save model to ./output
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a79d917d72dc3e520bf6b01823bc96c4f00cb981ffd5fae6a3621778ac51de78
+ size 90864192
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+     {
+         "idx": 0,
+         "name": "0",
+         "path": "",
+         "type": "sentence_transformers.models.Transformer"
+     },
+     {
+         "idx": 1,
+         "name": "1",
+         "path": "1_Pooling",
+         "type": "sentence_transformers.models.Pooling"
+     }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+     "max_seq_length": 512,
+     "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "cls_token": "[CLS]",
+     "mask_token": "[MASK]",
+     "pad_token": "[PAD]",
+     "sep_token": "[SEP]",
+     "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+     "added_tokens_decoder": {
+         "0": {
+             "content": "[PAD]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "100": {
+             "content": "[UNK]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "101": {
+             "content": "[CLS]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "102": {
+             "content": "[SEP]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "103": {
+             "content": "[MASK]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         }
+     },
+     "clean_up_tokenization_spaces": true,
+     "cls_token": "[CLS]",
+     "do_basic_tokenize": true,
+     "do_lower_case": true,
+     "mask_token": "[MASK]",
+     "model_max_length": 512,
+     "never_split": null,
+     "pad_token": "[PAD]",
+     "sep_token": "[SEP]",
+     "strip_accents": null,
+     "tokenize_chinese_chars": true,
+     "tokenizer_class": "BertTokenizer",
+     "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff