---
language:
- en
tags:
- conversational
- dialogue
- response generation
license: apache-2.0
datasets:
- allenai/soda
- allenai/prosocial-dialog
---
# Model Card for 🧑🏻‍🚀COSMO

🧑🏻‍🚀COSMO is a conversation agent with greater generalizability than previous best-performing agents (e.g., GODEL, BlenderBot, DialoGPT): it outperforms them on both in-domain and out-of-domain datasets (e.g., DailyDialog, BlendedSkillTalk). It is trained on two datasets, SODA and ProsocialDialog, and aims in particular to model natural human conversations. It accepts a situation description as input, along with an instruction specifying the role it should play in that situation.

## Model Description

- **Repository:** [Code](https://github.com/skywalker023/sodaverse)
- **Paper:** [SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization](https://arxiv.org/abs/2212.10465)
- **Point of Contact:** [Hyunwoo Kim](mailto:[email protected])

## Model Training

🧑🏻‍🚀COSMO is trained on our two recent datasets: 🥤[SODA](https://huggingface.co/datasets/allenai/soda) and [ProsocialDialog](https://huggingface.co/datasets/allenai/prosocial-dialog).
The backbone model of COSMO is the [lm-adapted T5](https://huggingface.co/google/t5-xl-lm-adapt).
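The lm-adapted T5-XL backbone has roughly 3B parameters, so full-precision inference can be memory-hungry. As a minimal sketch (our suggestion, not part of the original card), the model can be loaded in half precision via the standard `torch_dtype` argument of `from_pretrained`, assuming a CUDA GPU with enough memory:

```python
# Hypothetical memory-saving variant (not from the model card): load the
# ~3B-parameter checkpoint in float16 instead of the default float32.
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "allenai/cosmo-xl", torch_dtype=torch.float16
).to("cuda")
```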
### How to use

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/cosmo-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/cosmo-xl")
# model.to('cuda')  # uncomment to run on a GPU

def set_input(narrative, instruction, dialogue_history):
    # Flatten the dialogue into a single string, then prepend the instruction
    # and the narrative, each separated by the special <sep> token.
    input_text = " <turn> ".join(dialogue_history)

    if instruction != "":
        input_text = instruction + " <sep> " + input_text

    if narrative != "":
        input_text = narrative + " <sep> " + input_text

    return input_text

def generate(narrative, instruction, dialogue_history):
    """
    narrative: description of the situation/context, with the characters included (e.g., "David goes to an amusement park")
    instruction: the perspective/speaker instruction (e.g., "Imagine you are David and speak to his friend Sarah")
    dialogue_history: the previous utterances in the dialogue, as a list of strings
    """
    input_text = set_input(narrative, instruction, dialogue_history)

    inputs = tokenizer([input_text], return_tensors="pt")
    # inputs = inputs.to('cuda')  # uncomment to run on a GPU
    outputs = model.generate(inputs["input_ids"], max_new_tokens=128, temperature=1.0, top_p=0.95, do_sample=True)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

    return response

situation = "Cosmo had a really fun time participating in the EMNLP conference at Abu Dhabi."
instruction = "You are Cosmo and you are talking to a friend."  # you can also leave the instruction empty

dialogue = [
    "Hey, how was your trip to Abu Dhabi?"
]

response = generate(situation, instruction, dialogue)
print(response)
```
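Because `generate()` re-encodes the full dialogue history on every call, you can keep the conversation going by appending each reply to the history before calling it again. A minimal sketch reusing the variables above (the follow-up user utterance is made up for illustration):

```python
# Continue the chat for one more turn: append the model's reply and a new
# user utterance to the history, then generate again.
dialogue.append(response)
dialogue.append("That sounds great! What was your favorite part?")
print(generate(situation, instruction, dialogue))
```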
### Further Details, Social Impacts, Bias, and Limitations

Please refer to our [paper](https://arxiv.org/abs/2212.10465).
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. 2021](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Additional Information

For a brief summary of our paper, please see this [tweet](https://twitter.com/hyunw__kim/status/1605400305126248448).

### Citation

Please cite our work if you find the resources in this repository useful:

```
@article{kim2022soda,
    title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
    author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
    journal={ArXiv},
    year={2022},
    volume={abs/2212.10465}
}
```