# Dataset Card for "databricks-dolly-15k-chatml"

## Dataset Summary

This dataset was created by **Re:cast AI** to transform the existing [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset into a [chatml](https://huggingface.co/docs/transformers/main/en/chat_templating)-friendly format for use in supervised fine-tuning (SFT) of pretrained models.

## Dataset Structure

```python
messages = [
    { "content": "You are an expert Q&A system that is trusted around the world. You always... etc.", "role": "system" },
    { "content": "(Optional) Context information is below.\n----------------\nVirgin Australia, the... etc.", "role": "user" },
    { "content": "Virgin Australia commenced services on 31 August 2000... etc.", "role": "assistant" }
]
```
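
Each row holds one conversation as a list of `role`/`content` dicts in system, user, assistant order. As a quick sanity check (an illustrative snippet; the `messages` column name is assumed from the structure above, see also the Usage section below):

```python
from datasets import load_dataset

dataset = load_dataset("recastai/databricks-dolly-15k-chatml", split="train")

print(dataset.features)           # column names and types
print(dataset[0]["messages"][0])  # first message of the first conversation
```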

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("recastai/databricks-dolly-15k-chatml", split="train")
```
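
For SFT, each conversation can then be rendered into a single chatml-formatted string with a chat-template-aware tokenizer. A minimal sketch, assuming the conversation lives in a `messages` column and using `Qwen/Qwen2-0.5B-Instruct` purely as an example of a tokenizer whose template accepts a system role:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("recastai/databricks-dolly-15k-chatml", split="train")

# Any tokenizer with a chat template that supports system messages works here;
# the model name below is only an illustrative choice.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

# Render the first conversation into one training string.
text = tokenizer.apply_chat_template(dataset[0]["messages"], tokenize=False)
print(text)
```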

## Processing applied to the original dataset

```python
INSTRUCTIONS = """You are an expert Q&A system that is trusted around the world. You always answer the user's query in a helpful and friendly way.
Some rules you always follow:
1. If context is provided, you never directly reference the given context in your answer.
2. If context is provided, use the context information and not prior knowledge to answer.
3. Avoid statements like 'Based on the context, ...' or 'The context information ...' or 'The answer to the user's query...' or anything along those lines.
4. If no context is provided use your internal knowledge to answer."""

# databricks-dolly-15k features:
# - instruction: The user query/question
# - context: (optional) context to use to help the assistant
# - response: The assistant's response to the query/question
#
key_mapping = dict(
    query = "instruction",
    context = "context",
    response = "response"
)

def process_chatml_fn(example, validation=False):
    """
    Processing specific to databricks-dolly-15k into a chat format.
    """
    user_content = (
        "(Optional) Context information is below.\n"
        "----------------\n"
        "{context}\n"
        "----------------\n"
        "Answer the following query.\n"
        "{query}\n"
    )
    assistant_content = "{response}"

    message = [
        {"role": "system", "content": INSTRUCTIONS},
        {"role": "user", "content": user_content.format(context=example[key_mapping['context']], query=example[key_mapping['query']])},
        {"role": "assistant", "content": assistant_content.format(response=example[key_mapping['response']])}
    ]

    return message
```
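
The function above returns the message list itself. One plausible way to apply it to the original dataset with `datasets.map` is sketched below; the `messages` column name and the `map` wrapper are assumptions for illustration, not a confirmed reproduction of the published pipeline:

```python
from datasets import load_dataset

# Load the original Databricks release and convert every row to the chat format.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

chatml = dolly.map(
    lambda example: {"messages": process_chatml_fn(example)},
    remove_columns=dolly.column_names,  # drop instruction/context/response, keep only "messages"
)

print(chatml[0]["messages"][2]["role"])  # -> "assistant"
```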