Article: 🐺🐦‍⬛ LLM Comparison/Test: 25 SOTA LLMs (including QwQ) through 59 MMLU-Pro CS benchmark runs • By wolfram • Dec 4, 2024
Post: 🤏 I trained what is probably the smallest (~600K-parameter) TinyStories model, and it really can write grammatically correct stories: raincandy-u/TinyStories-656K. Try the Space based on this minuscule model: raincandy-u/Story-Teller

Edit: Moreover, the model weights are only 1.31MB in bf16, and can be reduced to the 700KB level with Q8_0 quantization.

Edit: There is now a 1000K-parameter chat model as well: raincandy-u/TinyChat-1776K
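The size figures in the post check out with simple arithmetic. A minimal sketch, assuming the 656K parameter count from the model name, 2 bytes per weight for bf16, and llama.cpp's Q8_0 layout (blocks of 32 int8 weights plus one fp16 scale, i.e. 34 bytes per 32 weights):

```python
# Back-of-the-envelope check of the post's size claims.
# Assumption: ~656K parameters, taken from the model name TinyStories-656K.
PARAMS = 656_000

# bf16 stores 2 bytes per weight.
bf16_bytes = PARAMS * 2

# Q8_0 (llama.cpp convention): each block of 32 weights is stored as
# 32 int8 values + one fp16 scale = 34 bytes, i.e. 1.0625 bytes/weight.
q8_0_bytes = PARAMS * 34 / 32

print(f"bf16: {bf16_bytes / 1e6:.2f} MB")  # ~1.31 MB, matching the post
print(f"Q8_0: {q8_0_bytes / 1e3:.0f} KB")  # ~697 KB, i.e. the "700KB level"
```

The Q8_0 estimate ignores GGUF file headers and any tensors kept at higher precision, so the real file is slightly larger, but the order of magnitude matches the post's numbers.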