Qwen2-1.5B-stepbasin-books
This model was fine-tuned at a context length of 16,384 tokens.
This is an experiment in long-context text generation (i.e., 6k+ tokens generated) to evaluate if and when generation breaks down. As such, all of the data on which this model was fine-tuned consists of full-length books.
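A minimal usage sketch (not part of the original card), assuming the checkpoint is loaded with Hugging Face transformers under the BEE-spoke-data/Qwen2-1.5B-stepbasin-books repo id; the sampling parameters below are illustrative, not recommended settings:

```python
# Minimal sketch: load the checkpoint and sample a long continuation.
# Assumes a recent `transformers` with native Qwen2 support and enough
# GPU memory for a long context; sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BEE-spoke-data/Qwen2-1.5B-stepbasin-books"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Chapter One\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The model was fine-tuned at a 16384-token context, so long generations
# (6k+ new tokens) are the use case this experiment evaluates.
outputs = model.generate(
    **inputs,
    max_new_tokens=6144,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```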
Details
This model is a fine-tuned version of Qwen/Qwen2-1.5B, trained on the books from https://github.com/stepbasin/books/tree/master/books.
It achieves the following results on the evaluation set:
- Loss: 2.8110
- Accuracy: 0.4298
- Input tokens seen: 44,040,192
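For reference, the reported evaluation loss implies a token-level perplexity of about exp(2.8110) ≈ 16.6; a quick check:

```python
import math

eval_loss = 2.8110  # evaluation loss reported above
print(round(math.exp(eval_loss), 1))  # ~16.6, the implied evaluation perplexity
```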