---
license: apache-2.0
---

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters.
|
|
|
This repo contains the WASM-compiled 1.7B model, suitable for the [WebLLM](https://llm.mlc.ai/docs/deploy/webllm.html#webllm-runtime) runtime.
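As a rough sketch, the model can be registered with WebLLM as a custom model and queried through its OpenAI-compatible chat API. The repo URL, `model_id`, and WASM filename below are placeholders, not the exact artifact names; check the files in this repo for the real ones.

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Register this repo as a custom WebLLM model. The URLs and model_id are
// placeholders (assumptions); substitute the actual repo path and the
// name of the WASM file shipped here.
const appConfig = {
  model_list: [
    {
      model: "https://huggingface.co/<user>/<this-repo>", // MLC-format weights
      model_id: "SmolLM2-1.7B-Instruct-q4f16_1-MLC",      // hypothetical id
      model_lib:
        "https://huggingface.co/<user>/<this-repo>/resolve/main/SmolLM2-1.7B.wasm", // the compiled WASM
    },
  ],
};

// Downloads the weights and WASM, then initializes the WebGPU runtime.
const engine = await CreateMLCEngine("SmolLM2-1.7B-Instruct-q4f16_1-MLC", {
  appConfig,
});

// OpenAI-style chat completion.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Summarize SmolLM2 in one sentence." }],
});
console.log(reply.choices[0].message.content);
```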
|
|
|
**SmolLM2-1.7B** |
|
|
|
SmolLM2-1.7B demonstrates significant improvements over its predecessor, SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics.
|
Training: Pretrained on 11 trillion tokens using a diverse combination of datasets, including FineWeb-Edu, DCLM, The Stack, and new mathematics and coding datasets.
|
Fine-Tuning: Developed through supervised fine-tuning (SFT) followed by Direct Preference Optimization (DPO) using UltraFeedback.
|
|
|
**Capabilities:** |
|
|
|
Tasks: Supports tasks such as text rewriting, summarization, and function calling (see the sketch after this list).
|
Datasets: The function-calling capability draws on datasets developed by Argilla, such as Synth-APIGen-v0.1.
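For illustration, here is a hedged sketch of a function-calling request, reusing the `engine` from the loading example above. It assumes WebLLM's OpenAI-compatible `tools` parameter works with this model's chat template; `get_weather` is a hypothetical tool, not something shipped with the model.

```typescript
// Hedged sketch: assumes WebLLM's OpenAI-compatible `tools` parameter
// cooperates with this model's chat template; `get_weather` is hypothetical.
const response = await engine.chat.completions.create({
  messages: [{ role: "user", content: "What is the weather in Paris?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
});

// If the model decides to call the tool, the call appears in the
// OpenAI-style tool_calls field instead of plain text content.
console.log(response.choices[0].message.tool_calls);
```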
|
|
|
|
|