lewtun HF staff commited on
Commit
8c0d4ae
1 Parent(s): e7143f5

Add evaluation results on the default config of quoref


Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!\
Your model has been evaluated on the default config of the [quoref](https://huggingface.co/datasets/quoref) dataset by @nbroad, using the predictions stored [here](https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-quoref-bbfe943f-1305449897).\
Accept this pull request to see the results displayed on the [Hub leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=quoref).\
Evaluate your model on more datasets [here](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=quoref).

Files changed (1)
  1. README.md +18 -1
README.md CHANGED

```diff
@@ -9,7 +9,24 @@ datasets:
 - duorc
 model-index:
 - name: rob-base-gc1
-  results: []
+  results:
+  - task:
+      type: question-answering
+      name: Question Answering
+    dataset:
+      name: quoref
+      type: quoref
+      config: default
+      split: validation
+    metrics:
+    - name: Exact Match
+      type: exact_match
+      value: 78.403
+      verified: true
+    - name: F1
+      type: f1
+      value: 82.1408
+      verified: true
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
```
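
The YAML added to the README follows the Hub's model-index metadata schema: a list of models, each with `results` entries that pair a task and dataset with a list of metrics. As a rough sketch of that structure (plain Python dicts standing in for the parsed YAML; the helper `metric_values` is illustrative, not part of any Hugging Face library):

```python
# The model-index metadata from this PR, mirrored as a plain Python dict.
model_index = [
    {
        "name": "rob-base-gc1",
        "results": [
            {
                "task": {"type": "question-answering",
                         "name": "Question Answering"},
                "dataset": {"name": "quoref", "type": "quoref",
                            "config": "default", "split": "validation"},
                "metrics": [
                    {"name": "Exact Match", "type": "exact_match",
                     "value": 78.403, "verified": True},
                    {"name": "F1", "type": "f1",
                     "value": 82.1408, "verified": True},
                ],
            }
        ],
    }
]

def metric_values(index):
    """Flatten a model-index list into a {metric_type: value} mapping."""
    return {m["type"]: m["value"]
            for entry in index
            for result in entry["results"]
            for m in result["metrics"]}

print(metric_values(model_index))
# {'exact_match': 78.403, 'f1': 82.1408}
```

Nesting the metrics under a task/dataset pair is what lets the Hub leaderboard group verified results by dataset and config.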