Seeker: Towards Exception Safety Code Generation with Intermediate Language Agents Framework
Abstract
In real-world software development, improper or missing exception handling can severely impact the robustness and reliability of code. Exception handling mechanisms require developers to detect, capture, and manage exceptions to a high standard, but many developers struggle with these tasks, leading to fragile code. This problem is particularly evident in open-source projects and degrades the overall quality of the software ecosystem. To address this challenge, we explore the use of large language models (LLMs) to improve exception handling in code. Through extensive analysis, we identify three key issues: Insensitive Detection of Fragile Code, Inaccurate Capture of Exception Block, and Distorted Handling Solution. These problems are widespread across real-world repositories, suggesting that robust exception handling practices are often overlooked or mishandled. In response, we propose Seeker, a multi-agent framework inspired by the strategies expert developers use for exception handling. Seeker uses five agents (Scanner, Detector, Predator, Ranker, and Handler) to assist LLMs in detecting, capturing, and resolving exceptions more effectively. Our work is the first systematic study of leveraging LLMs to enhance exception handling practices in real development scenarios, providing valuable insights for future improvements in code reliability.
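To make the three issues concrete, consider a minimal Java sketch (hypothetical code written for illustration, not an example from the paper). `loadFragile` exhibits all three failure modes at once, while `loadRobust` shows the kind of handling the framework aims to produce:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileNotFoundException;
import java.io.IOException;

public class ConfigLoader {

    // Fragile: the reader is never closed, the catch clause swallows every
    // exception type (insensitive detection), the try block wraps more code
    // than the handling actually accounts for (inaccurate capture), and
    // returning null discards the failure cause (distorted handling).
    static String loadFragile(String path) {
        try {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            return reader.readLine();
        } catch (Exception e) {
            return null;
        }
    }

    // More robust: try-with-resources closes the reader, the catch clause
    // names a specific exception type, and the handling preserves the
    // failure context for callers instead of hiding it.
    static String loadRobust(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        } catch (FileNotFoundException e) {
            throw new IOException("Config file missing: " + path, e);
        }
    }
}
```

Reading the abstract's agent roles onto this example (an illustrative mapping, not the paper's exact pipeline): the Scanner and Detector would flag `loadFragile` as fragile code, the Predator would localize the statements to wrap in a try block, the Ranker would select suitably specific exception types, and the Handler would synthesize handling logic along the lines of `loadRobust`.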
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- A Code Knowledge Graph-Enhanced System for LLM-Based Fuzz Driver Generation (2024)
- Lingma SWE-GPT: An Open Development-Process-Centric Language Model for Automated Software Improvement (2024)
- ConAIR: Consistency-Augmented Iterative Interaction Framework to Enhance the Reliability of Code Generation (2024)
- On the Adversarial Robustness of Instruction-Tuned Large Language Models for Code (2024)
- Generating executable oracles to check conformance of client code to requirements of JDK Javadocs using LLMs (2024)
- A Real-World Benchmark for Evaluating Fine-Grained Issue Solving Capabilities of Large Language Models (2024)
- Do Advanced Language Models Eliminate the Need for Prompt Engineering in Software Engineering? (2024)