---
license: apache-2.0
task_categories:
- text2text-generation
language:
- en
size_categories:
- n<1K
tags:
- code generation
---

## HumanEvalComm: Benchmarking the Communication Skills of Code Generation for LLMs and LLM Agent
## Dataset Description

HumanEvalComm is a benchmark dataset for evaluating the communication skills of Large Language Models (LLMs) in code generation tasks. It is built upon the widely used [HumanEval benchmark](https://github.com/openai/human-eval) and contains 762 modified problem descriptions derived from the 164 problems in HumanEval. The goal of HumanEvalComm is to evaluate the ability of LLMs to ask clarifying questions when faced with incomplete, inconsistent, or ambiguous requirements in coding problems:

- **Ambiguity**: Statements in the problem description are modified to allow multiple interpretations. For example, "sort the array descendingly" becomes "sort the array (descendingly or ascendingly)".
- **Inconsistency**: Modifications introduce contradictions between the problem description and its examples. For instance, the output of a test example is changed to contradict the textual description.
- **Incompleteness**: Parts of the problem description are removed, so that the model must ask questions to recover the missing content.

Each modified problem description is created by applying one of these clarification types, or a combination of them, and is manually verified to ensure it triggers clarifying questions.

| Clarification Category | *Ambiguity* | *Inconsistency* | *Incompleteness* | **Count** |
|------------------------|:-----------:|:---------------:|:----------------:|:---------:|
| 1a                     | ✔️          |                 |                  | 164       |
| 1c                     |             | ✔️              |                  | 164       |
| 1p                     |             |                 | ✔️               | 164       |
| 2ac                    | ✔️          | ✔️              |                  | 162       |
| 2cp                    |             | ✔️              | ✔️               | 34        |
| 2ap                    | ✔️          |                 | ✔️               | 74        |
| **Total**              | --          | --              | --               | 762       |

*Note*: 2ac (and likewise 2cp and 2ap) has fewer than 164 problems because we strictly apply a combination of two single clarification types and create a new modified problem (e.g., 2ac) only if combining 1a and 1c yields a problem description that differs from both. 2cp and 2ap have even smaller counts because, for a large number of problems, the parts made ambiguous (a) or inconsistent (c) are exactly the parts removed by the incompleteness modification (p).

## Dataset Structure

The fields are the same as in the [HumanEval](https://huggingface.co/datasets/openai/openai_humaneval) benchmark, with the following additional fields:

- **prompt1a**: Coding problem description with the 1a clarification type (Ambiguity)
- **prompt1c**: Coding problem description with the 1c clarification type (Inconsistency)
- **prompt1p**: Coding problem description with the 1p clarification type (Incompleteness)
- **prompt2ac**: Coding problem description with the 2ac clarification type (Ambiguity and Inconsistency)
- **prompt2cp**: Coding problem description with the 2cp clarification type (Inconsistency and Incompleteness)
- **prompt2ap**: Coding problem description with the 2ap clarification type (Ambiguity and Incompleteness)

## Prompt Format

Each task is formatted with a clear instruction and the provided function signature to guide the model's response. The model is prompted over two rounds:

1. Round one:

   ````markdown
   You are an expert software developer who writes high quality code. With below information, please either generate Python3 code (Respond directly with code only with markdown), or ask clarifying questions:

   {code_problem}  (field prompt{1a,1c,1p,2ac,2cp,2ap})
   ````

2. Round two:

   ````markdown
   {code_problem}
   {clarifying questions}
   {answers to clarifying questions}

   Given above conversations, generate Python code directly (Markdown) to solve the coding problem:
   ````

## Usage

You can easily load the dataset using the Hugging Face `datasets` library. See more usage details in our [GitHub repo](https://github.com/jie-jw-wu/human-eval-comm).

```python
from datasets import load_dataset

humanevalcomm = load_dataset("jie-jw-wu/HumanEvalComm", split="test")
```
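Building on the snippet above, the following sketch shows one way to assemble round-one prompts (see the Prompt Format section) from the dataset rows. The helper `build_round_one_prompt` and the choice of the `prompt1a` field are illustrative, not part of the official evaluation harness; see the [GitHub repo](https://github.com/jie-jw-wu/human-eval-comm) for the full pipeline.

```python
from datasets import load_dataset

# Round-one instruction taken from the Prompt Format section above.
INSTRUCTION = (
    "You are an expert software developer who writes high quality code. "
    "With below information, please either generate Python3 code "
    "(Respond directly with code only with markdown), or ask clarifying questions:"
)

def build_round_one_prompt(row, field="prompt1a"):
    """Illustrative helper: prepend the instruction to one modified problem description."""
    return f"{INSTRUCTION}\n\n{row[field]}"

humanevalcomm = load_dataset("jie-jw-wu/HumanEvalComm", split="test")

# Build round-one prompts for the Ambiguity (1a) variant of every problem.
# Note: the combined variants (2ac, 2cp, 2ap) do not exist for every problem
# (see the counts table above), so check those fields before using them.
round_one_prompts = [build_round_one_prompt(row, "prompt1a") for row in humanevalcomm]
print(round_one_prompts[0])
```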
## Citation

```bibtex
@article{wu2024benchmarking,
  title={Benchmarking the Communication Competence of Code Generation for LLMs and LLM Agent},
  author={Wu, Jie JW and Fard, Fatemeh H},
  journal={arXiv preprint arXiv:2406.00215},
  year={2024}
}
```