kalomaze committed (verified)
Commit 7261885 · 1 Parent(s): db07b1a

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -7,5 +7,6 @@ What I did:
  - Curated StackOverflow and other StackExchange subforums for some good examples that were **not** GPT generated, including writing advice & other real world questions.
  - Duplicated said examples with different prompt formatting (so it would generalize to both Alpaca & the official llama2-chat prompt layouts)
  - Two long context outliers to ensure long context works (a TV episode script, 10k tokens, and the first few chapters of 1984, 32k tokens.)
+ - Another example which is a combination of the one shot responses one after the other in a long context (to help teach the model to sometimes ignore older parts of context when appropriate and not overfit/repeat)
 
  This comes out to about ~60k tokens total, give or take.
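For context on the prompt-duplication step described in the diff above, here is a minimal sketch of rendering one curated StackExchange-style pair in both layouts. The template strings and the `duplicate_example` helper are illustrative assumptions, not the dataset's actual build script.

```python
# Sketch (assumed, not the author's script): render one Q&A pair in both
# the Alpaca layout and the official llama2-chat layout.

ALPACA_TEMPLATE = (
    "### Instruction:\n{question}\n\n### Response:\n{answer}"
)

# llama2-chat format without a system prompt.
LLAMA2_CHAT_TEMPLATE = "<s>[INST] {question} [/INST] {answer} </s>"


def duplicate_example(question: str, answer: str) -> list[str]:
    """Return the same curated pair formatted for both prompt layouts."""
    return [
        ALPACA_TEMPLATE.format(question=question, answer=answer),
        LLAMA2_CHAT_TEMPLATE.format(question=question, answer=answer),
    ]


if __name__ == "__main__":
    for text in duplicate_example(
        "How do I keep dialogue from feeling stilted?",
        "Read it aloud and cut anything you would never say in conversation.",
    ):
        print(text, end="\n\n")
```

Emitting each pair twice like this lets the same ~60k-token corpus teach the model to respond under either prompt convention.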