Model Description
Optimized Layer Merging (OLM) is a transformer optimization framework implementing automated layer recombination.
OLM creates a Frankenstein's-monster hybrid out of language models by cherry-picking the best-performing layers from different models to build a superior composite. The core mechanism:
- Takes multiple language models as input
- Uses a base model as the foundation
- Iteratively replaces individual layers, evaluating performance on specified datasets
- Keeps the best performing layer at each position based on metrics like perplexity, exact match, and a custom "quality" score
- Builds a fusion model layer-by-layer while maintaining or improving performance
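The greedy loop described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual OLM implementation: models are represented as lists of layer callables, and `evaluate` is a stand-in for a real metric such as perplexity (lower is better).

```python
def evaluate(model, dataset):
    """Stand-in metric: mean squared error over (input, target) pairs.
    A real OLM run would compute perplexity, exact match, or a custom
    quality score on the specified datasets instead."""
    total = 0.0
    for x, y in dataset:
        out = x
        for layer in model:  # each 'layer' is a callable applied in sequence
            out = layer(out)
        total += (out - y) ** 2
    return total / len(dataset)

def olm_merge(base_model, candidate_models, dataset):
    """Greedy layer-by-layer merge: at each position, try every
    candidate model's layer and keep whichever yields the best score."""
    merged = list(base_model)  # start from the base model's layers
    for i in range(len(merged)):
        best_score = evaluate(merged, dataset)
        for cand in candidate_models:
            trial = list(merged)
            trial[i] = cand[i]  # swap in the candidate's layer at position i
            score = evaluate(trial, dataset)
            if score < best_score:  # lower metric wins; keep the swap
                best_score = score
                merged = trial
    return merged
```

With toy "layers" (e.g. a base model whose layers add 1 and a candidate whose layers double), the loop keeps the base layer at positions where the candidate's layer hurts the metric and swaps it in where it helps, producing a hybrid that outperforms either parent on the evaluation data.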