---
license: apache-2.0
---

# StackMix-v0.1

An experimental small-ish dataset I whipped up in an afternoon.

What I did:

  • Curated Stack Overflow and other Stack Exchange sites for good examples that were not GPT-generated, including writing advice and other real-world questions.
  • Duplicated those examples with different prompt formatting, so the model generalizes to both the Alpaca and the official llama2-chat prompt layouts (see the sketch after this list).
  • Added two long-context outliers to make sure long-context behavior holds up (a TV episode script at ~10k tokens, and the first few chapters of 1984 at ~32k tokens).
  • Added one more example that strings the one-shot responses together back to back in a single long context (to teach the model to ignore older parts of the context when appropriate, rather than overfitting on or repeating them).
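
Here's a minimal sketch of what the duplication step might look like. The two templates below are the standard Alpaca layout and the official llama2-chat layout; the helper name and sample QA pair are hypothetical, not taken from the dataset itself.

```python
# Minimal sketch: render one curated QA pair in two prompt layouts.
# The helper and sample data are hypothetical; only the two template
# shapes (standard Alpaca and official llama2-chat) are the point.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{question}\n\n"
    "### Response:\n{answer}"
)

# Official llama2-chat layout, shown here without a system prompt.
LLAMA2_CHAT_TEMPLATE = "<s>[INST] {question} [/INST] {answer} </s>"


def duplicate_example(question: str, answer: str) -> list[str]:
    """Return the same QA pair rendered in both prompt formats."""
    return [
        ALPACA_TEMPLATE.format(question=question, answer=answer),
        LLAMA2_CHAT_TEMPLATE.format(question=question, answer=answer),
    ]


# Each curated example becomes two training samples, one per layout.
samples = duplicate_example(
    "How do I reverse a list in Python?",
    "Use `my_list[::-1]`, or `list(reversed(my_list))`.",
)
```

Training on both renderings of the same content is what lets a single fine-tune respond correctly under either prompt wrapper.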

This comes out to about 60k tokens total.