hakunanatasha committed · Commit 9408b89 · 1 Parent: 8463279

fix blurb readme

README.md CHANGED
@@ -2,51 +2,67 @@
 language: en
 license: other
 multilinguality: monolingual
-pretty_name:
+pretty_name: BLURB
 ---
 
 
-# Dataset Card for
+# Dataset Card for BLURB
 
 ## Dataset Description
 
-- **Homepage:**
+- **Homepage:** https://microsoft.github.io/BLURB/tasks.html
 - **Pubmed:** True
-- **Public:**
-- **Tasks:**
+- **Public:** True
+- **Tasks:** Named Entity Recognition
 
-the
-
-Natural Language Processing'
+BLURB is a collection of resources for biomedical natural language processing.
+In general domains, such as newswire and the Web, comprehensive benchmarks and
+leaderboards such as GLUE have greatly accelerated progress in open-domain NLP.
+In biomedicine, however, such resources are ostensibly scarce. In the past,
+there have been a plethora of shared tasks in biomedical NLP, such as
+BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These
+efforts have played a significant role in fueling interest and progress by the
+research community, but they typically focus on individual tasks. The advent of
+neural language models, such as BERT, provides a unifying foundation to leverage
+transfer learning from unlabeled text to support a wide range of NLP
+applications. To accelerate progress in biomedical pretraining strategies and
+task-specific methods, it is thus imperative to create a broad-coverage
+benchmark encompassing diverse biomedical tasks.
+
+Inspired by prior efforts toward this direction (e.g., BLUE), we have created
+BLURB (short for Biomedical Language Understanding and Reasoning Benchmark).
+BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP
+applications, as well as a leaderboard for tracking progress by the community.
+BLURB includes thirteen publicly available datasets in six diverse tasks. To
+avoid placing undue emphasis on tasks with many available datasets, such as
+named entity recognition (NER), BLURB reports the macro average across all tasks
+as the main score. The BLURB leaderboard is model-agnostic. Any system capable
+of producing the test predictions using the same training and development data
+can participate. The main goal of BLURB is to lower the entry barrier in
+biomedical NLP and help accelerate progress in this vitally important field for
+positive societal and human impact.
+
+This implementation contains a subset of 5 tasks as of 2022.10.06, with their original train, dev, and test splits.
 
 ## Citation Information
 
 ```
-@article{
-pages = 138
+@article{gu2021domain,
+  title = {
+    Domain-specific language model pretraining for biomedical natural
+    language processing
+  },
+  author = {
+    Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and
+    Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao,
+    Jianfeng and Poon, Hoifung
+  },
+  year = 2021,
+  journal = {ACM Transactions on Computing for Healthcare (HEALTH)},
+  publisher = {ACM New York, NY},
+  volume = 3,
+  number = 1,
+  pages = {1--23}
 }
 ```
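The updated card says each task ships with its original train, dev, and test splits. A minimal loading sketch follows, assuming the Hugging Face `datasets` library; the repo id `EMBO/BLURB` and the choice of the first config are illustrative assumptions, not confirmed by the card.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library.
# The repo id "EMBO/BLURB" is a hypothetical placeholder; substitute
# the actual id of this repository.
from datasets import get_dataset_config_names, load_dataset

repo_id = "EMBO/BLURB"  # assumption for illustration

# List the available task configs (the card mentions a subset of 5 tasks).
configs = get_dataset_config_names(repo_id)
print(configs)

# Load one task; each config carries its original train/dev/test splits
# (the dev split may appear under the name "validation").
task = load_dataset(repo_id, configs[0])
print(task)              # DatasetDict with the three splits
print(task["train"][0])  # a single example from the training split
```

Querying `get_dataset_config_names` instead of hard-coding a task name avoids breakage if more of the thirteen BLURB datasets are added after 2022.10.06.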