---
language: en
datasets:
- 37 popular Python code repositories
- See princeton-nlp/SWE-bench train split
- See the `make_datasets` documentation on SWE-bench's [GitHub](https://github.com/princeton-nlp/SWE-bench/tree/main/inference/make_datasets) for details on formatting input.
---
# SWE-Llama
The SWE-Llama models are variants of [CodeLlama](https://arxiv.org/abs/2308.12950) fine-tuned on software engineering tasks extracted from real-world GitHub issues and pull requests. They were introduced and evaluated on the SWE-bench benchmark in this [paper](https://arxiv.org/abs/2310.06770).
## Model Details
- **Architecture:** Transformer, based on [CodeLlama](https://arxiv.org/abs/2308.12950) architecture
- **Parameters:** 7 billion for SWE-Llama-7b, 13 billion for SWE-Llama-13b
- **Objective:** Generating patches to resolve GitHub issues, conditioned on issue description and code context
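
The model takes a single text prompt containing the issue description followed by code context and generates a patch. Below is a minimal, hypothetical loading-and-generation sketch; the Hub model id and prompt layout are assumptions here, and the exact input format is documented in the SWE-bench `make_datasets` tooling linked in the metadata above.

```python
# Hypothetical usage sketch. The model id and prompt format below are
# assumptions for illustration; see the SWE-bench `make_datasets` docs for
# the exact input formatting used in the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/SWE-Llama-7b"  # assumed Hub id; a 13b variant also exists
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt: issue description followed by retrieved code context.
prompt = (
    "Issue:\n<issue description here>\n\n"
    "Code context:\n<relevant source files here>\n\n"
    "Patch:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens (the candidate patch).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```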
## Training Data
SWE-Llama was fine-tuned on 19,000 issues and pull requests collected from 37 popular Python code repositories on GitHub, disjoint from those used in SWE-bench.
## Training Procedure
- Fine-tuned only the attention matrices using the LoRA method (a configuration sketch follows this list)
- Trained for 4 epochs with a batch size of 32
- Selected the best checkpoint based on validation perplexity
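
Put together, this corresponds to a standard LoRA fine-tune restricted to the attention projections. Below is a minimal, hypothetical sketch using the `peft` and `transformers` libraries; only the attention-only targets, 4 epochs, and batch size of 32 come from this card, and every other hyperparameter is an illustrative assumption.

```python
# Hypothetical LoRA fine-tuning sketch. Facts from this card: CodeLlama base,
# LoRA on attention matrices only, 4 epochs, batch size 32. Everything else
# (rank, alpha, dropout, accumulation split) is an assumption.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = "codellama/CodeLlama-7b-hf"  # 13b base for SWE-Llama-13b
model = AutoModelForCausalLM.from_pretrained(base_model)

# Restrict LoRA adapters to the attention projection matrices.
lora_config = LoraConfig(
    r=16,                  # assumed rank
    lora_alpha=16,         # assumed scaling
    lora_dropout=0.05,     # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="swe-llama-lora",
    num_train_epochs=4,                 # from the card
    per_device_train_batch_size=4,      # assumed per-device size
    gradient_accumulation_steps=8,      # 4 x 8 = 32, matching the stated batch size
    evaluation_strategy="epoch",        # to pick the best checkpoint by validation perplexity
)
# A Trainer would then be run over the processed issue/PR training data.
```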
## Evaluation Results
When evaluated on the SWE-bench benchmark with "oracle" context retrieval (the files edited by the reference solution are provided as context):
- SWE-Llama-7b achieved a 3.0% issue resolution rate
- SWE-Llama-13b achieved a 4.0% issue resolution rate
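
To score generations on SWE-bench, model outputs are collected into a predictions file and passed to the benchmark's evaluation harness. A minimal sketch of writing such a file is below; the key names follow the SWE-bench repository's documented prediction format at the time of writing and should be checked against the GitHub repo linked above.

```python
# Hypothetical sketch: collecting generated patches into a predictions file
# for the SWE-bench evaluation harness. Key names are taken from the SWE-bench
# docs; verify against the repository, as the interface may change.
import json

predictions = [
    {
        "instance_id": "example__repo-1234",                  # placeholder task instance id
        "model_name_or_path": "princeton-nlp/SWE-Llama-7b",   # assumed Hub id
        "model_patch": "diff --git a/foo.py b/foo.py\n...",   # generated patch text
    },
]

with open("predictions.jsonl", "w") as f:
    for pred in predictions:
        f.write(json.dumps(pred) + "\n")
```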
## BibTeX Entry
```tex
@misc{jimenez2023swebench,
      title={SWE-bench: Can Language Models Resolve Real-World GitHub Issues?},
      author={Carlos E. Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik Narasimhan},
      year={2023},
      eprint={2310.06770},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```