---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: extra
    path: data/extra-*
dataset_info:
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence: int64
  - name: langs
    sequence: string
  - name: spans
    sequence: string
  splits:
  - name: train
    num_bytes: 538947
    num_examples: 473
  - name: extra
    num_bytes: 11497
    num_examples: 109
  download_size: 140314
  dataset_size: 550444
license: mit
task_categories:
- token-classification
language:
- ug
size_categories:
- n<1K
---
# Uyghur NER dataset
## Description
This dataset is in [WikiAnn](https://huggingface.co/datasets/wikiann) format. It was assembled from named entities parsed from Wikipedia, Wiktionary and DBpedia. For some words, new case forms were generated with [Apertium-uig](https://github.com/apertium/apertium-uig), and some locations were translated using the Google Translate API.
The dataset is divided into two splits: `train` contains full sentences, while `extra` contains only isolated named entities.
Tags: `O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6)`
## Data example
```
{
'tokens': ['قاراماي', 'شەھىرى', '«مەملىكەت', 'بويىچە', 'مىللەتل…'],
'ner_tags': [5, 0, 0, 0, 0],
'langs': ['ug', 'ug', 'ug', 'ug', 'ug'],
'spans': ['LOC: قاراماي']
}
```
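Since `ner_tags` stores integer ids, a small helper can map them back to label names. This is a minimal sketch (the label list is not shipped with the dataset files; it simply mirrors the tag order given above):
```py
# Hypothetical helper: maps integer ner_tags back to string labels.
# Label order follows the tag list above: O=0 ... I-LOC=6.
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def decode_tags(tag_ids):
    """Convert a sequence of integer tag ids into label strings."""
    return [NER_LABELS[i] for i in tag_ids]

# The ner_tags from the example above decode to ['B-LOC', 'O', 'O', 'O', 'O'],
# matching the 'LOC' span.
print(decode_tags([5, 0, 0, 0, 0]))
```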
## Usage with `datasets` library
```py
from datasets import load_dataset
dataset = load_dataset("codemurt/uyghur_ner_dataset")
```
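As a quick follow-up sketch (the split names come from the YAML header above), you can iterate over both splits and inspect a raw example together with its decoded labels:
```py
from datasets import load_dataset

# Hypothetical label list in the tag order stated in the card.
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

dataset = load_dataset("codemurt/uyghur_ner_dataset")

# `train` holds full sentences, `extra` holds isolated named entities.
for split in ("train", "extra"):
    example = dataset[split][0]
    labels = [NER_LABELS[i] for i in example["ner_tags"]]
    print(split, list(zip(example["tokens"], labels)))
```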