---
dataset_info:
  features:
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  splits:
  - name: train
    num_bytes: 9599
    num_examples: 6
  download_size: 6420
  dataset_size: 9599
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- en
pretty_name: Ethical Alignment (Seeds)
---
|
# Ethical and Legal Alignment of Large Language Models |
|
Current large language models, both pretrained base models and finetuned "aligned" models, face criticism for being built on unethically and illegally obtained text corpora and instruction datasets. Furthermore, we observe that many "open-source" instruction finetuning datasets effectively 'launder' copyrighted data from pre-existing large language models. To alleviate these concerns during post-training, we introduce fully human-seeded and openly licensed datasets and recipes for the full finetuning and alignment of a pretrained language model.
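
The seed conversations can be inspected with the 🤗 `datasets` library. The sketch below assumes the repository id shown in the license link (`allura-org/ethical-alignment-seeds`) and the `conversations` schema declared in the card metadata above:

```python
# Minimal sketch: load the train split and print the turns of the first
# seed conversation. Repo id taken from the LICENSE link in this card.
from datasets import load_dataset

ds = load_dataset("allura-org/ethical-alignment-seeds", split="train")

# Each example carries a `conversations` list of {"from", "value"} turns,
# per the features declared in the YAML metadata above.
for turn in ds[0]["conversations"]:
    print(f'{turn["from"]}: {turn["value"]}')
```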
|
|
|
## Licensing |
|
All data, documentation, and code herein are licensed under a variation of the Hippocratic License 3.0. Please refer to [LICENSE.md](https://huggingface.co/datasets/allura-org/ethical-alignment-seeds/blob/main/LICENSE.md) for more information.