|
--- |
|
library_name: transformers |
|
tags: |
|
- code |
|
license: llama2 |
|
--- |
|
|
|
This is a 4-bit quantized version of [CodeLlama-7b](https://huggingface.co/meta-llama/CodeLlama-7b-hf), converted using bitsandbytes. For more information about the model itself, refer to the original model card.
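
Below is a minimal loading sketch, assuming the checkpoint was saved with its bitsandbytes 4-bit quantization config (as `save_pretrained` does for quantized models), so `from_pretrained` restores it automatically. The repository id and the prompt are placeholders, not part of this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; replace with this model's actual Hub id.
model_id = "user/CodeLlama-7b-hf-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# The saved 4-bit bitsandbytes quantization config is picked up from the
# checkpoint; device_map places the quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Simple code-completion example.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```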
|
|
|
## Impact on performance |
|
|
|
The figure below shows the performance of a set of models relative to their RAM requirements. The quantized models deliver performance equivalent to their full-precision counterparts while requiring significantly less RAM.
|
|
|
![constellation](https://i.postimg.cc/PfDgN82g/constellation.png) |