Merge branch 'main' of https://huggingface.co/datasets/IVLLab/MultiDialog into main
README.md CHANGED

- **Point of Contact:** [[email protected]](mailto:[email protected])

## Dataset Description

This dataset includes manually annotated metadata linking audio files to transcriptions, emotions, and other attributes. For access to video files of MultiDialog, download them [here](https://drive.google.com/drive/folders/1RPMwVHU34yX0R_HbxAWmxF2EHy961HA3?usp=sharing).

### Dataset Statistics

|                             | train   | valid_freq | valid_rare | test_freq | test_rare | Total   |
|-----------------------------|---------|------------|------------|-----------|-----------|---------|
| \# dialogues                | 7,011   | 448        | 443        | 450       | 381       | 8,733   |
| \# utterances               | 151,645 | 8,516      | 9,556      | 9,811     | 8,331     | 187,859 |
| avg \# utterances/dialogue  | 21.63   | 19.01      | 21.57      | 21.80     | 21.87     | 21.51   |
| avg length/utterance (s)    | 6.50    | 6.23       | 6.40       | 6.99      | 6.49      | 6.51    |
| avg length/dialogue (min)   | 2.34    | 1.97       | 2.28       | 2.54      | 2.36      | 2.33    |
| total length (hr)           | 273.93  | 14.74      | 17.00      | 19.04     | 15.01     | 339.71  |

### Example Usage

There are 'train', 'test_freq', 'test_rare', 'valid_freq', and 'valid_rare' splits. Below is an example usage.

```python
from datasets import load_dataset

# load one configuration (train, test_freq, test_rare, valid_freq, or valid_rare)
MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq")

# access the first utterance of the split
audio_input = MultiD["valid_freq"][0]["audio"]    # first decoded audio sample
transcription = MultiD["valid_freq"][0]["value"]  # first transcription
```
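
The dataset can also be read in streaming mode, so samples are fetched on the fly instead of downloading the full audio archives first. A minimal sketch, assuming the same `valid_freq` configuration as above:

```python
from datasets import load_dataset

# stream the valid_freq configuration instead of downloading it
MultiD_stream = load_dataset("IVLLab/MultiDialog", "valid_freq", streaming=True)

# iterate lazily over utterances
first = next(iter(MultiD_stream["valid_freq"]))
print(first["value"], first["emotion"])
```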

### Supported Tasks

- `multimodal dialogue generation`: The dataset can be used to train an end-to-end multimodal dialogue generation model.
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR); see the sketch after this list.
- `text-to-speech`: The dataset can also be used to train a model for Text-To-Speech (TTS).
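
As an illustration of the ASR use case, a decoded audio sample can be fed to an off-the-shelf recognizer and compared against the reference transcription in `value`. This is only a sketch; the `openai/whisper-small` checkpoint is an illustrative choice, not part of MultiDialog:

```python
from datasets import load_dataset
from transformers import pipeline

# illustrative pretrained ASR model (any 16 kHz-compatible checkpoint would do)
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq")
sample = MultiD["valid_freq"][0]

# the audio field already provides the decoded waveform and its sampling rate
prediction = asr({"raw": sample["audio"]["array"], "sampling_rate": sample["audio"]["sampling_rate"]})
print("predicted:", prediction["text"])
print("reference:", sample["value"])
```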

### Languages

MultiDialog contains audio and transcription data in English.

### Gold Emotion Dialogue Subset

We provide a gold emotion dialogue subset of the MultiDialog dataset, a more reliable resource for studying emotional dynamics in conversations. We classify dialogues from actors whose emotion accuracy is above 40% as gold emotion dialogues. Please use dialogues from actors with the following ids: a, b, c, e, f, g, i, j, and k.
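
One way to restrict the data to this subset is to filter on the actor id. A rough sketch follows, assuming the trailing letter before `.wav` in `file_name` encodes the actor id (as in the example instance below); this naming convention is an assumption, not a documented guarantee:

```python
from datasets import load_dataset

# actor ids that form the gold emotion dialogue subset
GOLD_ACTORS = {"a", "b", "c", "e", "f", "g", "i", "j", "k"}

MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq")

def is_gold(example):
    # assumption: file names end in '<utterance_id><actor_id>.wav',
    # so the character just before '.wav' is the actor id
    actor_id = example["file_name"].rsplit(".", 1)[0][-1]
    return actor_id in GOLD_ACTORS

gold_subset = MultiD["valid_freq"].filter(is_gold)
```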

## Dataset Structure

### Data Instances

```python
{
  'file_name': 't_ffa55df6-114d-4b36-87a1-7af6b8b63d9b/t_ffa55df6-114d-4b36-87a1-7af6b8b63d9b_0k.wav',
  'conv_id': 't_ffa55df6-114d-4b36-87a1-7af6b8b63d9b',
  'utterance_id': 0,
  'from': 'gpt',
  'audio':
    {
      # in streaming mode 'path' will be 't_152ee99a-fec0-4d37-87a8-b1510a9dc7e5/t_152ee99a-fec0-4d37-87a8-b1510a9dc7e5_0i.wav'
      'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/cache_id/t_152ee99a-fec0-4d37-87a8-b1510a9dc7e5/t_152ee99a-fec0-4d37-87a8-b1510a9dc7e5_0i.wav',
      'array': array([0.0005188 , 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621], dtype=float32),
      'sampling_rate': 16000
    },
  'value': 'Are you a football fan?',
  'emotion': 'Neutral',
  'original_full_path': 'valid_freq/t_ffa55df6-114d-4b36-87a1-7af6b8b63d9b/t_ffa55df6-114d-4b36-87a1-7af6b8b63d9b_0k.wav'
}
```

### Data Fields

* file_name (string) - relative file path to the audio sample in the specific split directory.
* conv_id (string) - unique identifier for each conversation.
* utterance_id (float) - utterance index.
* from (string) - who the message is from (human, gpt).