---
dataset_info:
  features:
    - name: query_id
      dtype: string
    - name: corpus_id
      dtype: string
    - name: score
      dtype: int64
  splits:
    - name: train
      num_bytes: 13644
      num_examples: 561
    - name: valid
      num_bytes: 5413
      num_examples: 226
    - name: test
      num_bytes: 5293
      num_examples: 221
  download_size: 15613
  dataset_size: 24350
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: valid
        path: data/valid-*
      - split: test
        path: data/test-*
---

This is the dataset version used by the COIR evaluation framework. Use the code below to run an evaluation:

```python
import coir
from coir.data_loader import get_tasks
from coir.evaluation import COIR
from coir.models import YourCustomDEModel

model_name = "intfloat/e5-base-v2"

# Load the model
model = YourCustomDEModel(model_name=model_name)

# Get tasks
# Available tasks: ["codetrans-dl", "stackoverflow-qa", "apps", "codefeedback-mt",
# "codefeedback-st", "codetrans-contest", "synthetic-text2sql", "cosqa",
# "codesearchnet", "codesearchnet-ccr"]
tasks = get_tasks(tasks=["codetrans-contest"])

# Initialize evaluation
evaluation = COIR(tasks=tasks, batch_size=128)

# Run evaluation
results = evaluation.run(model, output_folder=f"results/{model_name}")
print(results)
```
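To inspect the splits directly rather than through COIR, the data can also be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the repo ID `user/this-dataset` as a placeholder (substitute this card's actual Hub repository ID):

```python
from datasets import load_dataset

# NOTE: "user/this-dataset" is a placeholder for this dataset's
# actual repository ID on the Hugging Face Hub.
ds = load_dataset("user/this-dataset")  # default config, all splits

# Splits and features per the card above: train/valid/test, each with
# query_id (string), corpus_id (string), score (int64).
print(ds)
print(ds["train"][0])  # e.g. {"query_id": ..., "corpus_id": ..., "score": ...}
```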