# Wiki-Talks

The Wiki-Talks dataset is a collection of conversational threads extracted from the talk pages on Wikipedia.
This dataset captures collaborative dialogue, discussion patterns, and consensus-building among Wikipedia contributors.
It is useful for NLP research focused on dialogue, sentiment analysis, and community dynamics.

## Details

### Description

The Wiki-Talks dataset contains discussion threads from Wikipedia’s talk pages.
Each thread includes a conversation title, a main message, and nested replies, capturing the sequence and structure of discussions.
This dataset provides a unique view into how contributors collaborate, negotiate meaning, and resolve conflicts when editing content on Wikipedia.

- **Curated by:** @lflage
- **Language(s) (NLP):** Various languages, reflecting the multilingual nature of Wikipedia (e.g., English, Spanish, German, and many others)
- **License:** CC BY-SA 3.0

### Dataset Sources

- **Repository:** [Wiki-Talks](https://github.com/lflage/wiki-talks)

## Uses

The Wiki-Talks dataset is intended for research and development in NLP fields that require structured conversational data.
Some use cases include:

- **Dialogue Modeling:** Training models for conversational agents that need to capture turn-taking and structured discourse.
- **Sentiment Analysis:** Analyzing language use, tone, and sentiment in collaborative or contentious settings.
- **Toxicity and Moderation:** Developing models for detecting inappropriate content and toxicity in user interactions.
- **Discourse and Pragmatics Research:** Studying patterns of collaboration, argumentation, and consensus-building in online communities.

## Dataset Structure

The Wiki-Talks dataset is organized in a nested format to capture the hierarchical structure of conversations.
The `threads` column contains the nested thread structure. Each thread contains:

- `title` (str): The title of the discussion thread.
- `message` (str): The main message initiating the conversation.
- `replies` (list of reply objects): Each `reply` object contains:
  - `message` (str): The reply message.
  - `replies` (list of nested reply objects): Nested replies within the conversation, capturing deeper levels of response and discussion.

```
thread = {"title": str,
          "message": str,
          "replies": list[reply]}

reply = {"message": str,
         "replies": list[reply]}
```
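
Since replies can nest to arbitrary depth, a small recursive helper is convenient for working with this structure. The sketch below is illustrative only: it assumes a thread is the plain dictionary shown above, and `flatten_thread` and the example thread are not part of the dataset.

```
# Illustrative sketch only: walk a nested thread and collect every message
# in depth-first order. Assumes the dict layout shown above.

def flatten_thread(thread: dict) -> list:
    """Return the opening message and all nested reply messages as a flat list."""
    messages = [thread["message"]]

    def walk(replies: list) -> None:
        for reply in replies:
            messages.append(reply["message"])
            walk(reply.get("replies", []))

    walk(thread.get("replies", []))
    return messages


# Made-up example thread, just to show the traversal order.
example = {
    "title": "Requested move",
    "message": "Should this page be renamed?",
    "replies": [
        {"message": "Support, per naming conventions.", "replies": []},
        {"message": "Oppose.", "replies": [
            {"message": "Could you elaborate?", "replies": []},
        ]},
    ],
}

print(flatten_thread(example))
# ['Should this page be renamed?', 'Support, per naming conventions.',
#  'Oppose.', 'Could you elaborate?']
```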

## Dataset Creation

### Source Data

The data was collected from publicly available Wikipedia talk pages, specifically from the `pages-meta-current` dumps.

#### Data Collection and Processing

Data was selected from Wikipedia's talk pages and processed to structure the threads and replies hierarchically.
The dataset includes only messages from public Wikipedia dumps and was processed using WikiTextParser, lxml, and the Python standard library.
The timestamp in each comment's signature is used to delimit individual comments within a thread.
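
The actual extraction code lives in the linked repository; the snippet below is only a rough, hypothetical illustration of the general idea. It uses `wikitextparser` to read the sections of a talk page and an English-style signature timestamp pattern (an assumption; other language editions format signatures differently) to split each section into comments, and it keeps replies flat rather than reconstructing the full nesting.

```
import re
import wikitextparser as wtp

# Rough, hypothetical sketch only; the real pipeline is in the wiki-talks
# repository. The pattern matches English-Wikipedia signature timestamps
# such as "12:34, 5 June 2020 (UTC)"; other wikis use different formats.
SIGNATURE_TS = re.compile(r"\d{2}:\d{2}, \d{1,2} \w+ \d{4} \(UTC\)")


def split_comments(section_text: str) -> list:
    """Split a talk-page section into comments, treating each signature
    timestamp as the end of a comment."""
    comments, start = [], 0
    for match in SIGNATURE_TS.finditer(section_text):
        comments.append(section_text[start:match.end()].strip())
        start = match.end()
    return [c for c in comments if c]


def extract_threads(talk_page_wikitext: str) -> list:
    """Turn each level-2 section of a talk page into a thread dict
    (replies are kept flat here for brevity)."""
    threads = []
    for section in wtp.parse(talk_page_wikitext).sections:
        if section.level != 2 or section.title is None:
            continue
        comments = split_comments(section.contents)
        if not comments:
            continue
        threads.append({
            "title": section.title.strip(),
            "message": comments[0],
            "replies": [{"message": c, "replies": []} for c in comments[1:]],
        })
    return threads
```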

### Recommendations

When using this dataset, the following should be considered:

- **Bias**: As Wikipedia is user-generated, discussions may reflect certain biases based on the contributors' demographics, interests, or viewpoints.
- **Limitations**: Conversations may sometimes be off-topic or contain incomplete responses due to the open nature of Wikipedia.
- **Technical**: The dataset can be large and memory-intensive due to nested structures; consider streaming methods for processing (see the sketch below).
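
For large dumps, streaming avoids materializing the whole dataset in memory. Below is a minimal sketch with the Hugging Face `datasets` library; the dataset identifier is a placeholder guessed from the repository name, and the split name and per-row layout are likewise assumptions to be replaced with the actual values.

```
from datasets import load_dataset

# Minimal streaming sketch. "lflage/wiki-talks" is a placeholder identifier
# (guessed from the GitHub repository name); replace it with the actual
# dataset ID or point `data_files` at local files. streaming=True yields
# examples lazily instead of loading everything into memory.
ds = load_dataset("lflage/wiki-talks", split="train", streaming=True)

for example in ds.take(3):             # peek at a few rows
    for thread in example["threads"]:  # assumes `threads` is a list of thread dicts
        print(thread["title"])
```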

## Citation

For citation, please refer to the original repository.

## Contribution

Refer to the [repository](https://github.com/lflage/wiki-talks) for instructions on how to contribute and expand the dataset.