Update README.md
README.md
CHANGED
@@ -27,7 +27,7 @@ library_name: transformers
Relay is motivated by this question: What does it take to chat with a base LLM?

-Several papers (e.g., [URIAL](https://arxiv.org/abs/2312.01552)) have shown that base models can be used more reliably than expected. At the same time, we also increasingly find that RLHF, and other post-training approaches, may limit the creativity of LLMs.
+Several papers (e.g., [URIAL](https://arxiv.org/abs/2312.01552)) have shown that base models can be used more reliably than expected. At the same time, we also increasingly find that RLHF, and other post-training approaches, may [limit](https://x.com/aidan_mclau/status/1860026205547954474) the creativity of LLMs.

LLMs can be more than smart assistants. In fact, they should have the potential to emulate all sorts of behaviours or patterns found in their pre-training datasets (usually a large chunk of the internet).

Relay is focused on a particular pattern that should be relatively frequent in pre-training datasets: IRC chats. IRC provides a rich context for conversational modeling, combining natural dialogue with command-based interactions. Yet, it remains largely overlooked.
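As a rough sketch of what this idea might look like in practice (not taken from the README; the model ID, nicknames, and sampling settings below are placeholders), an IRC-style log can be used as a plain-text prompt for a base model with the standard `transformers` generation API:

```python
# Hypothetical sketch: prompting a base (non-instruct) model with an IRC-style
# chat log so it continues the conversation as the next message.
# The model ID and nicknames are placeholders, not from the README.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder base model; substitute the actual Relay checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A base model has no chat template; the IRC-log pattern itself carries the structure.
prompt = (
    "<alice> hi everyone\n"
    "<bob> hey alice, what are you working on?\n"
    "<alice> trying to get a base model to chat like it's on IRC\n"
    "<bob>"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
# Print only the newly generated continuation (bob's next message).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The trailing `<bob>` cue is what makes this work as chat: the base model is simply asked to continue a familiar pre-training pattern, so it completes the next IRC message rather than needing an instruction-tuned chat format.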