zhengr committed
Commit dcb9741 · verified · 1 parent: e835600

Update README.md

Files changed (1)
  1. README.md +6 -0
README.md CHANGED
@@ -111,6 +111,12 @@ model-index:
 
 MixTAO-7Bx2-MoE is a Mixture of Experts (MoE) model.
 This model is mainly used for experiments with large language model techniques; successive, increasingly refined iterations are intended to eventually produce a high-quality large language model.
+
+### 🦒 Colab
+| Colab Link | Info - Model Name |
+| --- | --- |
+| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1y2XmAGrQvVfbgtimTsCBO3tem735q7HZ?usp=sharing) | MixTAO-7Bx2-MoE-v8.1 |
+
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_zhengr__MixTAO-7Bx2-MoE-v8.1).
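
Besides the Colab notebook added above, a minimal local usage sketch may help: it assumes the model follows the standard transformers causal-LM interface, and the repo id and loading options are inferred from the links above rather than stated in this commit.

```python
# Minimal usage sketch (assumed interface): load MixTAO-7Bx2-MoE-v8.1 as a
# standard Hugging Face causal LM and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhengr/MixTAO-7Bx2-MoE-v8.1"  # repo id inferred from the leaderboard details link

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory; adjust for your hardware
    device_map="auto",          # requires `accelerate`; spreads layers across available devices
)

prompt = "Explain what a Mixture of Experts (MoE) model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```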