# Evaluation
This folder contains scripts and results for intermediate evaluation, mostly based on zero-shot prompting performance. Most evaluations are run with EleutherAI's [LM eval harness](https://github.com/EleutherAI/lm-evaluation-harness).
Evaluated models:
- BLOOM (tr11: the `bigscience/bloom` model in 176B / 6B3 / 2B5 / 1B3 / 750M / 350M variants)
- [13B (tr1)](https://github.com/bigscience-workshop/bigscience/blob/master/evaluation/Tr1-13B-harness-eval.json)
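
For reference, a zero-shot run with the harness looks roughly like the sketch below. The model checkpoint, task list, and output path are illustrative, and the CLI entry point and flags shown here match older harness versions; check the harness README for the version you are using.

```shell
# Minimal sketch of a zero-shot evaluation with the LM eval harness.
# `bigscience/bloom-1b3` and the task list are illustrative choices,
# not the exact configuration used for the results in this folder.
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .

python main.py \
    --model hf-causal \
    --model_args pretrained=bigscience/bloom-1b3 \
    --tasks lambada_openai,hellaswag \
    --num_fewshot 0 \
    --output_path results/bloom-1b3.json
```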