sileod committed on
Commit 8b05176 · verified · 1 Parent(s): e4d59a9

Upload ModernBertForSequenceClassification

Files changed (3)
  1. README.md +199 -0
  2. config.json +329 -0
  3. model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
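The quick-start section above is left as a placeholder by the template. A minimal sketch of what it would likely look like for this checkpoint, assuming the hypothetical `<this-repo-id>` placeholder is replaced with the actual Hub id of this upload, and using the three NLI labels defined by `id2label` in this commit's `config.json`:

```python
# Hedged quick-start sketch (NOT from the model card, which leaves this section
# as a placeholder). Assumptions: "<this-repo-id>" must be replaced with the
# real Hub id of this upload; labels follow id2label in config.json
# (0: entailment, 1: neutral, 2: contradiction).

def classify_nli(premise: str, hypothesis: str, repo_id: str = "<this-repo-id>") -> str:
    # Imports are kept local so the sketch can be loaded without the deps installed.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForSequenceClassification.from_pretrained(repo_id)

    # Encode the premise/hypothesis pair and take the argmax over the 3 logits.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[logits.argmax(dim=-1).item()]

# Example call (requires torch + transformers and the real repo id):
# classify_nli("A soccer game with multiple males playing.",
#              "Some men are playing a sport.")
```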
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,329 @@
+ {
+   "_name_or_path": "answerdotai/ModernBERT-large",
+   "architectures": [
+     "ModernBertForSequenceClassification"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 50281,
+   "classifier_activation": "gelu",
+   "classifier_bias": false,
+   "classifier_dropout": 0.0,
+   "classifier_pooling": "mean",
+   "classifiers_size": [
+     3,
+     2,
+     2,
+     2,
+     2,
+     2,
+     1,
+     2,
+     3,
+     2,
+     2,
+     2,
+     3,
+     3,
+     3,
+     3,
+     3,
+     3,
+     2,
+     2,
+     3,
+     2,
+     2,
+     2,
+     2,
+     2,
+     6,
+     2,
+     2,
+     2,
+     2,
+     2,
+     3,
+     3,
+     3,
+     3,
+     3,
+     3,
+     3,
+     2,
+     2,
+     2,
+     2,
+     3,
+     3,
+     3,
+     3,
+     3,
+     3,
+     3,
+     3,
+     2,
+     2,
+     2,
+     2,
+     2,
+     2,
+     16,
+     100,
+     13,
+     100,
+     8,
+     3,
+     3,
+     2,
+     3,
+     2,
+     4,
+     3,
+     2,
+     3,
+     2,
+     2,
+     2,
+     2,
+     2,
+     3,
+     2,
+     3,
+     2,
+     4,
+     3,
+     3,
+     3,
+     2,
+     3,
+     1,
+     2,
+     2,
+     3,
+     13,
+     2,
+     2,
+     3,
+     2,
+     2,
+     3,
+     3,
+     3,
+     3,
+     2,
+     3,
+     2,
+     3,
+     2,
+     3,
+     2,
+     2,
+     2,
+     2,
+     2,
+     3,
+     4,
+     3,
+     3,
+     2,
+     2,
+     3,
+     3,
+     2,
+     2,
+     2,
+     2,
+     2,
+     4,
+     3,
+     2,
+     2,
+     2,
+     3,
+     3,
+     3,
+     2,
+     3
+   ],
+   "cls_token_id": 50281,
+   "decoder_bias": true,
+   "deterministic_flash_attn": false,
+   "embedding_dropout": 0.0,
+   "eos_token_id": 50282,
+   "global_attn_every_n_layers": 3,
+   "global_rope_theta": 160000.0,
+   "gradient_checkpointing": false,
+   "hidden_activation": "gelu",
+   "hidden_size": 1024,
+   "id2label": {
+     "0": "entailment",
+     "1": "neutral",
+     "2": "contradiction"
+   },
+   "initializer_cutoff_factor": 2.0,
+   "initializer_range": 0.02,
+   "intermediate_size": 2624,
+   "label2id": {
+     "contradiction": 2,
+     "entailment": 0,
+     "neutral": 1
+   },
+   "layer_norm_eps": 1e-05,
+   "local_attention": 128,
+   "local_rope_theta": 10000.0,
+   "max_position_embeddings": 2048,
+   "mlp_bias": false,
+   "mlp_dropout": 0.0,
+   "model_type": "modernbert",
+   "norm_bias": false,
+   "norm_eps": 1e-05,
+   "num_attention_heads": 16,
+   "num_hidden_layers": 28,
+   "pad_token_id": 50283,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "reference_compile": true,
+   "sep_token_id": 50282,
+   "sparse_pred_ignore_index": -100,
+   "sparse_prediction": false,
+   "tasks": [
+     "glue/mnli",
+     "glue/qnli",
+     "glue/rte",
+     "glue/wnli",
+     "glue/mrpc",
+     "glue/qqp",
+     "glue/stsb",
+     "super_glue/boolq",
+     "super_glue/cb",
+     "super_glue/multirc",
+     "super_glue/wic",
+     "super_glue/axg",
+     "anli/a1",
+     "anli/a2",
+     "anli/a3",
+     "sick/label",
+     "sick/entailment_AB",
+     "snli",
+     "scitail/snli_format",
+     "hans",
+     "WANLI",
+     "recast/recast_ner",
+     "recast/recast_sentiment",
+     "recast/recast_verbnet",
+     "recast/recast_megaveridicality",
+     "recast/recast_verbcorner",
+     "recast/recast_kg_relations",
+     "recast/recast_factuality",
+     "recast/recast_puns",
+     "probability_words_nli/reasoning_1hop",
+     "probability_words_nli/usnli",
+     "probability_words_nli/reasoning_2hop",
+     "nan-nli",
+     "nli_fever",
+     "breaking_nli",
+     "conj_nli",
+     "fracas",
+     "dialogue_nli",
+     "mpe",
+     "dnc",
+     "recast_white/fnplus",
+     "recast_white/sprl",
+     "recast_white/dpr",
+     "robust_nli/IS_CS",
+     "robust_nli/LI_LI",
+     "robust_nli/ST_WO",
+     "robust_nli/PI_SP",
+     "robust_nli/PI_CD",
+     "robust_nli/ST_SE",
+     "robust_nli/ST_NE",
+     "robust_nli/ST_LM",
+     "robust_nli_is_sd",
+     "robust_nli_li_ts",
+     "add_one_rte",
+     "paws/labeled_final",
+     "glue/cola",
+     "glue/sst2",
+     "pragmeval/pdtb",
+     "lex_glue/eurlex",
+     "lex_glue/scotus",
+     "lex_glue/ledgar",
+     "lex_glue/unfair_tos",
+     "dynasent/dynabench.dynasent.r1.all/r1",
+     "dynasent/dynabench.dynasent.r2.all/r2",
+     "cycic_classification",
+     "lingnli",
+     "monotonicity-entailment",
+     "scinli",
+     "naturallogic",
+     "dynahate",
+     "syntactic-augmentation-nli",
+     "autotnli",
+     "defeasible-nli/atomic",
+     "defeasible-nli/snli",
+     "help-nli",
+     "nli-veridicality-transitivity",
+     "lonli",
+     "dadc-limit-nli",
+     "folio",
+     "tomi-nli",
+     "puzzte",
+     "temporal-nli",
+     "counterfactually-augmented-snli",
+     "cnli",
+     "boolq-natural-perturbations",
+     "equate",
+     "chaos-mnli-ambiguity",
+     "logiqa-2.0-nli",
+     "mindgames",
+     "ConTRoL-nli",
+     "logical-fallacy",
+     "cladder",
+     "conceptrules_v2",
+     "zero-shot-label-nli",
+     "scone",
+     "monli",
+     "SpaceNLI",
+     "propsegment/nli",
+     "FLD.v2/default",
+     "FLD.v2/star",
+     "SDOH-NLI",
+     "scifact_entailment",
+     "feasibilityQA",
+     "AdjectiveScaleProbe-nli",
+     "resnli",
+     "semantic_fragments_nli",
+     "dataset_train_nli",
+     "nlgraph",
+     "ruletaker",
+     "PARARULE-Plus",
+     "logical-entailment",
+     "nope",
+     "LogicNLI",
+     "contract-nli/contractnli_a/seg",
+     "contract-nli/contractnli_b/full",
+     "nli4ct_semeval2024",
+     "biosift-nli",
+     "SIGA-nli",
+     "FOL-nli",
+     "doc-nli",
+     "mctest-nli",
+     "natural-language-satisfiability",
+     "idioms-nli",
+     "lifecycle-entailment",
+     "MSciNLI",
+     "hover-3way/nli",
+     "seahorse_summarization_evaluation",
+     "missing-item-prediction/contrastive",
+     "Pol_NLI",
+     "synthetic-retrieval-NLI/count",
+     "synthetic-retrieval-NLI/position",
+     "synthetic-retrieval-NLI/binary",
+     "babi_nli",
+     "gen_debiased_nli"
+   ],
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.0.dev0",
+   "vocab_size": 50368
+ }
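The config appears to pair `classifiers_size` with `tasks` one-to-one: each task gets an output head of the listed width (134 entries in each list here), e.g. `glue/mnli` maps to a 3-way head matching the entailment/neutral/contradiction labels. A small sanity-check sketch of that invariant, using a truncated stand-in for the real file (the first three tasks and sizes from this config):

```python
import json

# Hedged sketch: check that each task has a matching classifier-head size.
# The excerpt below is a truncated stand-in for the full config.json;
# swap it for json.load(open("config.json")) against the real file.
config_text = """
{
  "tasks": ["glue/mnli", "glue/qnli", "glue/rte"],
  "classifiers_size": [3, 2, 2]
}
"""
cfg = json.loads(config_text)
assert len(cfg["tasks"]) == len(cfg["classifiers_size"])

# Zip tasks with their head widths; glue/mnli gets a 3-way (NLI) head.
head_sizes = dict(zip(cfg["tasks"], cfg["classifiers_size"]))
print(head_sizes["glue/mnli"])  # 3
```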
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f56738ff6d17e2c4b28eac0969290cc5325be886a7be069ddc2c6d69128f1e2a
+ size 1583355740