Commit 9408b89 by hakunanatasha (1 parent: 8463279)

fix blurb readme

Files changed (1):
  1. README.md +47 -31
README.md CHANGED
@@ -2,51 +2,67 @@
 language: en
 license: other
 multilinguality: monolingual
- pretty_name: BioASQ Task B
+ pretty_name: BLURB
 ---


- # Dataset Card for BioASQ Task B
+ # Dataset Card for BLURB

 ## Dataset Description

- - **Homepage:** http://participants-area.bioasq.org/datasets/
+ - **Homepage:** https://microsoft.github.io/BLURB/tasks.html
 - **Pubmed:** True
- - **Public:** False
- - **Tasks:** Question Answering
+ - **Public:** True
+ - **Tasks:** Named Entity Recognition

- The BioASQ corpus contains multiple question
- answering tasks annotated by biomedical experts, including yes/no, factoid, list,
- and summary questions. Pertaining to our objective of comparing neural language
- models, we focus on the yes/no questions (Task 7b), and leave the inclusion
- of other tasks to future work. Each question is paired with a reference text
- containing multiple sentences from a PubMed abstract and a yes/no answer. We use
- the official train/dev/test split of 670/75/140 questions.
-
- See 'Domain-Specific Language Model Pretraining for Biomedical
- Natural Language Processing'
+ BLURB is a collection of resources for biomedical natural language processing.
+ In general domains, such as newswire and the Web, comprehensive benchmarks and
+ leaderboards such as GLUE have greatly accelerated progress in open-domain NLP.
+ In biomedicine, however, such resources are ostensibly scarce. In the past,
+ there have been a plethora of shared tasks in biomedical NLP, such as
+ BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These
+ efforts have played a significant role in fueling interest and progress by the
+ research community, but they typically focus on individual tasks. The advent of
+ neural language models, such as BERT, provides a unifying foundation to leverage
+ transfer learning from unlabeled text to support a wide range of NLP
+ applications. To accelerate progress in biomedical pretraining strategies and
+ task-specific methods, it is thus imperative to create a broad-coverage
+ benchmark encompassing diverse biomedical tasks.
+
+ Inspired by prior efforts in this direction (e.g., BLUE), we have created
+ BLURB (short for Biomedical Language Understanding and Reasoning Benchmark).
+ BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP
+ applications, as well as a leaderboard for tracking progress by the community.
+ BLURB includes thirteen publicly available datasets in six diverse tasks. To
+ avoid placing undue emphasis on tasks with many available datasets, such as
+ named entity recognition (NER), BLURB reports the macro average across all tasks
+ as the main score. The BLURB leaderboard is model-agnostic. Any system capable
+ of producing the test predictions using the same training and development data
+ can participate. The main goal of BLURB is to lower the entry barrier in
+ biomedical NLP and help accelerate progress in this vitally important field for
+ positive societal and human impact.
+
+ This implementation contains a subset of 5 tasks as of 2022.10.06, with their original train, dev, and test splits.


 ## Citation Information

 ```
- @article{tsatsaronis2015overview,
-   title = {
-     An overview of the BIOASQ large-scale biomedical semantic indexing and
-     question answering competition
-   },
-   author = {
-     Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
-     and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
-     Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
-     Polychronopoulos, Dimitris and others
-   },
-   year = 2015,
-   journal = {BMC bioinformatics},
-   publisher = {BioMed Central Ltd},
-   volume = 16,
-   number = 1,
-   pages = 138
+ @article{gu2021domain,
+   title = {
+     Domain-specific language model pretraining for biomedical natural
+     language processing
+   },
+   author = {
+     Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and
+     Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao,
+     Jianfeng and Poon, Hoifung
+   },
+   year = 2021,
+   journal = {ACM Transactions on Computing for Healthcare (HEALTH)},
+   publisher = {ACM New York, NY},
+   volume = 3,
+   number = 1,
+   pages = {1--23}
 }
 ```
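
The scoring rule described in the new card (macro average across tasks, not across datasets) is easy to misread, so here is a minimal sketch of the two-level averaging it implies. The task names and scores are hypothetical placeholders, not official results:

```python
# Sketch of BLURB's two-level scoring: datasets are first averaged within
# their task, then the per-task averages are macro-averaged, so a task with
# many datasets (e.g., NER) does not dominate the overall score.
# Dataset names and scores below are hypothetical placeholders.
from collections import defaultdict
from statistics import mean

scores = {
    # (task, dataset): test score
    ("NER", "BC5CDR-chem"): 0.93,
    ("NER", "NCBI-disease"): 0.89,
    ("PICO", "EBM PICO"): 0.74,
    ("QA", "BioASQ"): 0.76,
}

per_task = defaultdict(list)
for (task, _dataset), score in scores.items():
    per_task[task].append(score)

task_avg = {task: mean(vals) for task, vals in per_task.items()}
blurb_score = mean(task_avg.values())  # macro average across tasks

print(task_avg)     # {'NER': 0.91, 'PICO': 0.74, 'QA': 0.76}
print(blurb_score)  # ~0.803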
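
For the five-task subset the new card mentions, loading one task with the Hugging Face `datasets` library would look roughly like the sketch below. The repository id `EMBO/BLURB` and the config name `NCBI-disease` are assumptions for illustration; check the dataset page for the actual identifiers:

```python
# Minimal loading sketch with the Hugging Face `datasets` library.
# ASSUMPTIONS: the repository id "EMBO/BLURB" and the config name
# "NCBI-disease" are illustrative; consult the dataset page for the
# real configuration names of the 5 included tasks.
from datasets import load_dataset

ner = load_dataset("EMBO/BLURB", "NCBI-disease")

print(ner)              # DatasetDict with the original train/validation/test splits
print(ner["train"][0])  # one example, e.g. tokens plus NER tags
```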