---
license: apache-2.0
language:
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- pretrained
---
# Typhoon-7B: Thai Large Language Model (Pretrained)
**Typhoon-7B** is a *pretrained* Thai 🇹🇭 large language model with 7 billion parameters, based on Mistral-7B.
**Typhoon-7B** outperforms all open-source Thai language models available at the time of writing, as evaluated on Thai examination benchmarks, and its instruction-tuned variant achieves the best results on instruction-following tasks. Its Thai performance is also on par with GPT-3.5, while tokenizing Thai text 2.62 times more efficiently.
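As a rough illustration of the tokenization-efficiency claim, the sketch below compares how many tokens the Typhoon tokenizer and GPT-3.5's `cl100k_base` encoding produce for the same Thai sentence. The repo ID `scb10x/typhoon-7b` and the sample sentence are assumptions for illustration, not an official benchmark script:

```python
# Rough sketch: compare Thai token counts between the Typhoon tokenizer
# and GPT-3.5's cl100k_base encoding. Repo ID and sample text are
# illustrative assumptions.
from transformers import AutoTokenizer
import tiktoken

thai_text = "สวัสดีครับ วันนี้อากาศดีมาก"  # "Hello, the weather is very nice today."

typhoon_tok = AutoTokenizer.from_pretrained("scb10x/typhoon-7b")
gpt35_enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5

typhoon_tokens = typhoon_tok.encode(thai_text, add_special_tokens=False)
gpt35_tokens = gpt35_enc.encode(thai_text)

print(f"Typhoon tokens : {len(typhoon_tokens)}")
print(f"GPT-3.5 tokens : {len(gpt35_tokens)}")
print(f"Ratio (GPT-3.5 / Typhoon): {len(gpt35_tokens) / len(typhoon_tokens):.2f}")
```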
**This is not an instruction-tuned model** - it may not follow human instructions without one-/few-shot prompting or instruction fine-tuning. The model has no moderation mechanisms and may generate harmful or inappropriate responses.
The Instruct model (chat model) will be released soon. Registration for the beta version is open at https://opentyphoon.ai/, or follow us at https://twitter.com/opentyphoon for future model releases.
<div align="center">
  <img src="https://storage.googleapis.com/scb10x-ai-lab-public/assets/typhoon_benchmark.png" alt="Typhoon benchmark" width="100%" style="margin-left: auto; margin-right: auto; display: block"/>
</div>
For full details of this model, please read our [paper](https://arxiv.org/abs/2312.13951).
## Model Description
- **Model type**: A 7B pretrained decoder-only model
- **Requirement**: transformers 4.34.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: Apache-2.0 (commercial use permitted)
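A minimal loading and generation sketch with 🤗 Transformers (≥ 4.34.0). The repo ID `scb10x/typhoon-7b`, dtype, and sampling settings are illustrative assumptions; adjust them to your hardware and use case:

```python
# Minimal sketch: load the pretrained model and generate a continuation.
# Assumes the model is hosted as "scb10x/typhoon-7b"; device_map="auto"
# requires the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "scb10x/typhoon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use float16 if bfloat16 is unsupported
    device_map="auto",
)

prompt = "ประเทศไทยมีจังหวัดทั้งหมด"  # "Thailand has a total of ... provinces"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is a base model, it continues text rather than answering instructions; expect completion-style output.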
## Performance on Thai Benchmark
| **Model** | **ONET** | **IC** | **TGAT** | **TPAT-1** | **A-Level** |
|---------------------|----------|--------|----------|------------|-------------|
| Typhoon-7B | 0.379 | 0.393 | 0.700 | 0.414 | 0.324 |
| SeaLLM-7B | 0.342 | 0.256 | 0.589 | 0.336 | 0.305 |
| OpenThaiGPT-beta-7B | 0.180 | 0.278 | 0.411 | 0.319 | 0.243 |
| WangChanGLM | 0.192 | 0.271 | 0.167 | 0.172 | 0.175 |
| SEA-LION-7B | 0.179 | 0.290 | 0.244 | 0.198 | 0.175 |
| Avg. Human | 0.318 | - | 0.472 | 0.406 | - |
## Intended Uses & Limitations
This model is a pretrained base model; it may not follow human instructions without one-/few-shot prompting or instruction fine-tuning. It has no moderation mechanisms and may generate harmful or inappropriate responses.
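One practical workaround for the base model is few-shot prompting: prepend a couple of worked examples so the model continues the pattern. The sketch below is illustrative only; the Q/A format, the example questions, and the repo ID are assumptions, not an official prompt template:

```python
# Few-shot prompting sketch for the base (non-instruction-tuned) model.
# The Q/A format and examples are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "scb10x/typhoon-7b"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Two worked Q/A pairs, then the real question; the model is expected
# to continue the pattern with a short answer.
few_shot_prompt = (
    "Q: What is the capital of France?\n"
    "A: Paris\n\n"
    "Q: What is the capital of Japan?\n"
    "A: Tokyo\n\n"
    "Q: What is the capital of Thailand?\n"
    "A:"
)

inputs = tokenizer(few_shot_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False)
# Print only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```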
## Follow us
https://twitter.com/opentyphoon
## Support / Ask any question
https://discord.gg/CqyBscMFpg
## SCB 10X AI Team
- Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-7B useful for your work, please cite it using:
```bibtex
@article{pipatanakul2023typhoon,
title={Typhoon: Thai Large Language Models},
author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
year={2023},
journal={arXiv preprint arXiv:2312.13951},
url={https://arxiv.org/abs/2312.13951}
}
```
## Contact Us
- General & Collaboration: [email protected], [email protected]
- Technical: [email protected]