The importance of a small-parameter large language model (LLM) lies in its ability to balance performance and efficiency. As LLMs grow increasingly sophisticated, the trade-off between model size and computational resource demands becomes critical. A smaller-parameter model offers significant advantages, such as reduced memory usage, faster inference times, and lower energy consumption, all while retaining a high level of accuracy and contextual understanding. These models are particularly valuable in real-world applications where resources such as processing power and storage are limited, for example on mobile devices, in edge computing, or in low-latency environments.

## Llama 3.2 Chibi 3B

This experimental model is the result of continual pre-training of [Meta's Llama 3.2 3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on a small mixture of Japanese datasets.

## Architecture

[Llama 3.2 3B](https://huggingface.co/meta-llama/Llama-3.2-3B)