---
license: apache-2.0
language:
- en
base_model: LatitudeGames/Wayfarer-12B
tags:
- text adventure
- roleplay
- llama-cpp
- gguf-my-repo
---

Triangle104/Wayfarer-12B-Q5_K_M-GGUF

This model was converted to GGUF format from LatitudeGames/Wayfarer-12B using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


Model details:

We’ve heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren’t all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for characters and the journeys they go on.

Similarly, great games need opposition. You must be able to fail and die, and you may even have to start over. This makes games more fun!

However, the vast majority of AI models, through alignment RLHF, have been trained away from darkness, violence, or conflict, preventing them from fulfilling this role. To give our players better options, we decided to train our own model to fix these issues.

Wayfarer is an adventure role-play model specifically trained to give players a challenging and dangerous experience. We thought they would like it, but since releasing it on AI Dungeon, players have reacted even more positively than we expected.

Because they loved it so much, we’ve decided to open-source the model so anyone can experience unforgivingly brutal AI adventures! Anyone can download the model to run locally.

Or if you want to easily try this model for free, you can do so at https://aidungeon.com.

We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Wayfarer was created.

Quantized GGUF weights can be downloaded here.

Model details

Wayfarer 12B was trained on top of the Nemo base model using a two-stage SFT approach: the first stage contained 180K chat-formatted instruct data instances, and the second consisted of a 50/50 mixture of synthetic 8K-context text adventures and roleplay experiences.

How It Was Made

Wayfarer’s text adventure data was generated by simulating playthroughs of published character-creator scenarios from AI Dungeon. Five distinct user archetypes played through each scenario, with character starts varying in faction, location, and so on, producing five unique samples per scenario.

One language model played the role of narrator, with the other playing the user. They were blind to each other’s underlying logic, so the user was actually capable of surprising the narrator with their choices. Each simulation was allowed to run for 8k tokens or until the main character died.

Wayfarer’s general emotional sentiment is one of pessimism: failure is frequent and plot armor does not exist. This serves to counter the positivity bias so deeply ingrained in today's language models.

Inference

The Nemo architecture is known for being sensitive to higher temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.

"temperature": 0.8,
"repetition_penalty": 1.05,
"min_p": 0.025
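As an illustration, here is a minimal Python sketch of how these baseline settings could be packaged into a request for a locally running `llama-server` (the helper name is mine, and note that llama.cpp's server API spells the penalty key `repeat_penalty` rather than `repetition_penalty`):

```python
import json

# Recommended baseline sampling settings from the model card.
SAMPLING = {
    "temperature": 0.8,
    "repeat_penalty": 1.05,  # llama.cpp's server API key for repetition penalty
    "min_p": 0.025,
}

def build_completion_payload(prompt: str, n_predict: int = 256) -> dict:
    """Assemble a JSON payload for llama-server's /completion endpoint."""
    payload = {"prompt": prompt, "n_predict": n_predict}
    payload.update(SAMPLING)
    return payload

payload = build_completion_payload("You peer into the darkness.")
print(json.dumps(payload, indent=2))

# To actually send it (requires a running server, e.g. `llama-server ... -c 2048`):
# import urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:8080/completion",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Nothing stops you from tweaking these values in the payload, of course; they are only the recommended starting point.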

Limitations

Wayfarer was trained exclusively on second-person present tense data (using “you”) in a narrative style. Other styles may work as well but can produce suboptimal results.

Additionally, Wayfarer was trained exclusively on single-turn chat data.

Prompt Format

ChatML was used for both finetuning stages.

<|im_start|>system
You're a masterful storyteller and gamemaster. Write in second person present tense (You are), crafting vivid, engaging narratives with authority and confidence.<|im_end|>
<|im_start|>user
You peer into the darkness.<|im_end|>
<|im_start|>assistant
You have been eaten by a grue.

GAME OVER
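The template above can also be assembled programmatically. A small sketch (the helper name is mine, not from the model card; since Wayfarer was trained on single-turn data, the turn list would normally hold one user message):

```python
def to_chatml(system: str, turns: list[tuple[str, str]]) -> str:
    """Render a system prompt plus (role, text) turns in ChatML,
    leaving the prompt open at the assistant turn for generation."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, text in turns:
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # model continues from here
    return "\n".join(parts)

prompt = to_chatml(
    "You're a masterful storyteller and gamemaster. Write in second person "
    "present tense (You are), crafting vivid, engaging narratives with "
    "authority and confidence.",
    [("user", "You peer into the darkness.")],
)
print(prompt)
```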

Credits

Thanks to Gryphe Padar for collaborating on this finetune with us!


Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Triangle104/Wayfarer-12B-Q5_K_M-GGUF --hf-file wayfarer-12b-q5_k_m.gguf -p "The meaning to life and the universe is"

Server:

llama-server --hf-repo Triangle104/Wayfarer-12B-Q5_K_M-GGUF --hf-file wayfarer-12b-q5_k_m.gguf -c 2048

Note: You can also use this checkpoint directly through the usage steps listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Triangle104/Wayfarer-12B-Q5_K_M-GGUF --hf-file wayfarer-12b-q5_k_m.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Triangle104/Wayfarer-12B-Q5_K_M-GGUF --hf-file wayfarer-12b-q5_k_m.gguf -c 2048