hiroki-rad committed on
Commit 7feb2b6 · verified · 1 Parent(s): 0ec8d85
Files changed (1):
  1. README.md +24 -180

README.md CHANGED
@@ -20,6 +20,17 @@ pipeline_tag: text-classification

 ## Model Details
+ A model that classifies [ELYZA-tasks-100](https://huggingface.co/datasets/elyza/ELYZA-tasks-100) tasks: it takes a task's input text and predicts which type of task it is.
+ The task types are as follows (a usage sketch is shown after this section):
+
+ - Knowledge Explanation (知識説明型)
+ - Creative Generation (創作型)
+ - Analytical Reasoning (分析推論型)
+ - Task Solution (課題解決型)
+ - Information Extraction (情報抽出型)
+ - Step-by-Step Calculation (計算・手順型)
+ - Opinion-Perspective (意見・視点型)
+ - Role-Play Response (ロールプレイ型)

 ### Model Description
 
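For orientation, a minimal inference sketch is shown below. It assumes the 🤗 `pipeline` API and the `label2id` mapping published in the Direct Use section further down; the repository id is a placeholder (this card does not state the final model id), and the `cl-tohoku/bert-base-japanese-v3` tokenizer additionally needs `fugashi` and `unidic-lite` installed.

```python
from transformers import pipeline

# Placeholder repository id -- this card does not state the final model id.
MODEL_ID = "<your-account>/elyza-task-classifier"

# label2id exactly as published in the Direct Use section below.
label2id = {
    "Task_Solution": 0,
    "Creative_Generation": 1,
    "Knowledge_Explanation": 2,
    "Analytical_Reasoning": 3,
    "Information_Extraction": 4,
    "Step_by_Step_Calculation": 5,
    "Role_Play_Response": 6,
    "Opinion_Perspective": 7,
}
id2label = {i: name for name, i in label2id.items()}

classify_pipe = pipeline("text-classification", model=MODEL_ID)

# Classify one ELYZA-tasks-100 style instruction.
pred = classify_pipe("日本で一番高い山について簡単に説明してください。")[0]
# The pipeline may return a readable name or "LABEL_<id>", depending on the saved config.
label = pred["label"]
task_type = id2label[int(label.split("_")[-1])] if label.startswith("LABEL_") else label
print(task_type, round(pred["score"], 3))
```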
@@ -35,31 +46,20 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 - **License:** [More Information Needed]
 - **Finetuned from model [optional]:** [cl-tohoku/bert-base-japanese-v3]

- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
 ### Direct Use
-
+ ```python
 from transformers import pipeline

- The model was trained with this label2id; please use this mapping.
- label2id = {'Task_Solution': 0,
- 'Creative_Generation': 1,
- 'Knowledge_Explanation': 2,
- 'Analytical_Reasoning': 3,
- 'Information_Extraction': 4,
- 'Step_by_Step_Calculation': 5,
- 'Role_Play_Response': 6,
- 'Opinion_Perspective': 7}
+ label2id = {
+     'Task_Solution': 0,
+     'Creative_Generation': 1,
+     'Knowledge_Explanation': 2,
+     'Analytical_Reasoning': 3,
+     'Information_Extraction': 4,
+     'Step_by_Step_Calculation': 5,
+     'Role_Play_Response': 6,
+     'Opinion_Perspective': 7
+ }

 def preprocess_text_classification(examples: dict[str, list]) -> BatchEncoding:
     """Modified for batch processing."""
@@ -75,7 +75,7 @@ def preprocess_text_classification(examples: dict[str, list]) -> BatchEncoding:
     encoded_examples["labels"] = [label2id[label] for label in examples["labels"]]
     return encoded_examples

- ## Dataset used
+ # Dataset used for evaluation
 test_data = test_data.to_pandas()
 test_data["labels"] = test_data["labels"].apply(lambda x: label2id[x])
 test_data
@@ -88,13 +88,11 @@ label2id = {label: id for id, label in enumerate(class_label)}
 id2label = {id: label for id, label in enumerate(class_label)}

 results: list[dict[str, float | str]] = []
-
 for i, example in tqdm(enumerate(test_data.itertuples())):
     # Get the model's prediction
     model_prediction = classify_pipe(example.questions)[0]
     # Convert the true label ID to its label name
     true_label = id2label[example.labels]
-
     results.append(
         {
             "example_id": i,
@@ -103,158 +101,4 @@ for i, example in tqdm(enumerate(test_data.itertuples())):
             "true_label": true_label,
         }
     )
+ ```
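The diff shows `preprocess_text_classification` only in fragments. Below is a self-contained sketch of what the batched preprocessing plausibly looks like, assuming a dataset with a `questions` text column and a string `labels` column (both names appear in the card's code) and the `cl-tohoku/bert-base-japanese-v3` tokenizer; the truncation settings are illustrative, not taken from the card.

```python
from datasets import Dataset
from transformers import AutoTokenizer, BatchEncoding

tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-v3")

label2id = {
    "Task_Solution": 0, "Creative_Generation": 1, "Knowledge_Explanation": 2,
    "Analytical_Reasoning": 3, "Information_Extraction": 4,
    "Step_by_Step_Calculation": 5, "Role_Play_Response": 6, "Opinion_Perspective": 7,
}

def preprocess_text_classification(examples: dict[str, list]) -> BatchEncoding:
    """Tokenize a batch of task inputs and attach integer labels."""
    encoded_examples = tokenizer(examples["questions"], truncation=True, max_length=512)
    encoded_examples["labels"] = [label2id[label] for label in examples["labels"]]
    return encoded_examples

# Apply it in batches via datasets.Dataset.map(), as done before training and evaluation.
toy = Dataset.from_dict({
    "questions": ["日本で一番高い山について簡単に説明してください。"],
    "labels": ["Knowledge_Explanation"],
})
encoded = toy.map(preprocess_text_classification, batched=True, remove_columns=["questions"])
print(encoded.column_names)
```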
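The card states the classifier was fine-tuned from `cl-tohoku/bert-base-japanese-v3` but reports no training hyperparameters. The sketch below shows how the `label2id`/`id2label` mappings would typically be wired into a `Trainer` fine-tuning run; every hyperparameter value here is a placeholder, not the card's actual setting, and the toy dataset stands in for the real ELYZA-tasks-100 inputs.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

label2id = {
    "Task_Solution": 0, "Creative_Generation": 1, "Knowledge_Explanation": 2,
    "Analytical_Reasoning": 3, "Information_Extraction": 4,
    "Step_by_Step_Calculation": 5, "Role_Play_Response": 6, "Opinion_Perspective": 7,
}
id2label = {i: name for name, i in label2id.items()}

tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-v3")
model = AutoModelForSequenceClassification.from_pretrained(
    "cl-tohoku/bert-base-japanese-v3",
    num_labels=len(label2id),
    label2id=label2id,
    id2label=id2label,
)

# Toy training data; the real run used the ELYZA-tasks-100 task inputs.
train = Dataset.from_dict({
    "questions": ["日本で一番高い山について簡単に説明してください。"],
    "labels": ["Knowledge_Explanation"],
})

def encode(batch: dict[str, list]):
    enc = tokenizer(batch["questions"], truncation=True, max_length=512)
    enc["labels"] = [label2id[l] for l in batch["labels"]]
    return enc

train = train.map(encode, batched=True, remove_columns=["questions"])

args = TrainingArguments(
    output_dir="bert-elyza-task-classifier",  # placeholder
    num_train_epochs=3,                       # placeholder values, not the
    per_device_train_batch_size=16,           # card's (unreported) settings
    learning_rate=2e-5,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```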
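The evaluation loop collects one record per test example, but the keys between `"example_id"` and `"true_label"` are not shown in the diff. Assuming each record also stores the predicted label under a hypothetical `pred_label` key, the results list could be summarized like this:

```python
from collections import Counter

# Hypothetical record layout -- the diff does not show the prediction keys.
results = [
    {"example_id": 0, "pred_label": "Knowledge_Explanation", "true_label": "Knowledge_Explanation"},
    {"example_id": 1, "pred_label": "Task_Solution", "true_label": "Creative_Generation"},
]

correct = sum(r["pred_label"] == r["true_label"] for r in results)
accuracy = correct / len(results)
print(f"accuracy: {accuracy:.3f} ({correct}/{len(results)})")

# Per-class counts of correct predictions.
per_class = Counter(r["true_label"] for r in results if r["pred_label"] == r["true_label"])
print(per_class)
```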