Model Card for oopere/pruned60-llama-3.2-3b

This model is a pruned version of the Llama-3.2-3B model, with 60% of the parameters removed from the MLP layers. The pruning aims for significant computational efficiency gains, at the cost of notable performance degradation across several benchmarks. The model is not intended to be used directly; rather, it is meant to be fine-tuned for specific tasks where it can reach acceptable performance in highly resource-constrained environments.
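Because the checkpoint is meant as a starting point for task-specific adaptation, a parameter-efficient method such as LoRA is one reasonable option. The sketch below is illustrative only, assuming the transformers and peft libraries; the target modules and hyperparameters are placeholder choices, not a recipe from the study.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the pruned checkpoint; FP16 keeps the memory footprint small.
model = AutoModelForCausalLM.from_pretrained(
    "oopere/pruned60-llama-3.2-3b", torch_dtype=torch.float16
)

# Placeholder LoRA configuration -- tune the rank and target modules
# for your task; these values are assumptions, not the study's setup.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumption: attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```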

Model Details

  • Model Type: Pruned version of LLaMA-3.2 using structured pruning
  • Original Model: meta-llama/Llama-3.2-3B
  • Pruning Method: Structured pruning of MLP layers using importance scores based on absolute maximum weights
  • Size Reduction: ~40% overall (from 3.21B to 1.94B parameters), resulting from removing 60% of the MLP neurons
  • Architecture: Same as the original Llama but with reduced MLP layer sizes
  • Precision: FP16 (safetensors)
  • Language(s): Same as the original model
  • License: Same as the original model
  • Developed by: Pere Martra

This model is part of the study "Exploring GLU Expansion Ratios: Structured Pruning in Llama-3.2 Models", which explores structured pruning in GLU-based architectures using the Llama-3.2 1B and 3B variants. The pruning experiments search for expansion ratios that balance performance, computational efficiency, and environmental sustainability. The models in the study were evaluated across multiple benchmarks, including BoolQ, ARC-Easy, and MUSR; moderate pruning ratios largely preserve task performance, while aggressive variants such as this one trade substantial accuracy for efficiency gains.

Performance on Standard Benchmarks

| Benchmark        | Original Model | Pruned Model | Relative Change |
|------------------|----------------|--------------|-----------------|
| ARC-Easy         | 65.19%         | 32.32%       | -50.4%          |
| BoolQ            | 64.16%         | 50.70%       | -21.0%          |
| LAMBADA-OpenAI   | 62.20%         | 6.75%        | -89.1%          |
| LAMBADA-Standard | 53.46%         | 6.37%        | -88.1%          |
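Numbers like these can in principle be reproduced with a standard harness such as EleutherAI's lm-evaluation-harness. The snippet below is a sketch assuming lm_eval (v0.4+) and its hf backend; it is not the exact evaluation setup used in the study.

```python
import lm_eval

# Evaluate the pruned checkpoint on the benchmarks reported above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=oopere/pruned60-llama-3.2-3b,dtype=float16",
    tasks=["arc_easy", "boolq", "lambada_openai", "lambada_standard"],
)
for task, metrics in results["results"].items():
    print(task, metrics)
```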

Key Findings

  • Extreme Performance Drop: Removing 60% of the MLP neurons causes significant degradation across most benchmarks, especially on tasks requiring nuanced reasoning and long-range comprehension.
  • ARC-Easy: Accuracy roughly halves, though the model can still perform basic reasoning tasks at reduced efficacy.
  • BoolQ: Holds up better than the other tasks, suggesting potential for binary classification under strict resource constraints.
  • LAMBADA: Both the OpenAI and Standard variants show steep declines, highlighting the difficulty of handling long-range language completion.

Limitations

  • Severe Impact on Long-Range Dependencies: Performance on tasks like LAMBADA suggests the model is inadequate for understanding and predicting longer sequences.
  • Restricted Usability: Significant performance losses make the model unsuitable for applications requiring high accuracy or nuanced understanding.
  • High Perplexity: Perplexity values are exceptionally high, indicating difficulty in generating coherent language output; a quick way to measure this is sketched below.
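Perplexity is the exponential of the mean per-token negative log-likelihood, so "exceptionally high" values translate directly into low-probability, incoherent continuations. A minimal way to check this on a text of your choice, assuming the transformers library (the sample text here is arbitrary):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "oopere/pruned60-llama-3.2-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean
    # cross-entropy loss over the sequence.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity: {torch.exp(loss).item():.2f}")
```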

Implementation Details

Pruning Method

  • Technique: Structured pruning targeting MLP layers
  • Pruning Ratio: 60% of neurons removed from MLP layers
  • Selection Criteria: Importance scoring based on absolute maximum weights
  • Architecture Specifics: The GLU structure (gate/up/down projections) is maintained during pruning; see the sketch after this list
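The study's actual pruning code is not reproduced here, but the following self-contained sketch illustrates the idea: score each intermediate neuron of a GLU block by the absolute maximum weight across its gate/up projections (one plausible reading of the criterion above), keep the top 40%, and slice all three projections consistently so the GLU wiring stays intact. Names such as GLUMLP and prune_glu_mlp are illustrative, not from the study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GLUMLP(nn.Module):
    """Minimal stand-in for a Llama-style GLU feed-forward block."""
    def __init__(self, hidden_size=3072, intermediate_size=8192):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x):
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

def prune_glu_mlp(mlp: GLUMLP, keep_ratio: float = 0.4) -> None:
    """Remove (1 - keep_ratio) of the intermediate neurons in place."""
    # Importance of each intermediate neuron: the absolute maximum
    # weight across its rows in the gate and up projections.
    scores = torch.maximum(
        mlp.gate_proj.weight.abs().max(dim=1).values,
        mlp.up_proj.weight.abs().max(dim=1).values,
    )
    k = int(scores.numel() * keep_ratio)
    keep = torch.topk(scores, k).indices.sort().values

    # Slice all three projections consistently: rows of gate/up,
    # columns of down, preserving the GLU structure.
    mlp.gate_proj.weight = nn.Parameter(mlp.gate_proj.weight[keep, :].clone())
    mlp.up_proj.weight = nn.Parameter(mlp.up_proj.weight[keep, :].clone())
    mlp.down_proj.weight = nn.Parameter(mlp.down_proj.weight[:, keep].clone())
    mlp.gate_proj.out_features = mlp.up_proj.out_features = k
    mlp.down_proj.in_features = k

mlp = GLUMLP()
before = sum(p.numel() for p in mlp.parameters())
prune_glu_mlp(mlp, keep_ratio=0.4)
after = sum(p.numel() for p in mlp.parameters())
print(f"MLP params: {before:,} -> {after:,}")  # ~60% fewer MLP parameters
```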

Hardware Requirements

  • Reduced memory footprint compared to original model
  • Can run on hardware with roughly 40% less memory than the original model; a quick way to verify the footprint is sketched below
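As a rough check of the footprint, the checkpoint can be loaded in FP16 and inspected directly; a minimal sketch assuming the transformers library:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "oopere/pruned60-llama-3.2-3b", torch_dtype=torch.float16
)
print(f"parameters: {model.num_parameters():,}")               # ~1.94B
print(f"memory: {model.get_memory_footprint() / 1e9:.2f} GB")  # FP16 weights
```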
