hawei and nielsr (HF staff) committed · verified
Commit 0be5dd6 · 1 parent: f6e4188

Add missing metadata (#1)

- Add missing metadata (d2189899bb9c9f57d26e774dd2fef6015484ce99)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +6 -4
README.md

```diff
@@ -74,12 +74,14 @@ model-index:
       value: 0.4029255319148936
       stderr: 0.004471732136513382
       verified: false
+pipeline_tag: text-generation
+library_name: transformers
 ---
+
 # Control-LLM-Llama3.1-8B-OpenCoder8
-This is a fine-tuned model of Llama-3.1-8B-Instruct for coding tasks on OpenCoder SFT dataset.
+This is a fine-tuned model of Llama-3.1-8B-Instruct for coding tasks on OpenCoder SFT dataset described in the paper: [Control LLM: Controlled Evolution for Intelligence Retention in LLM](https://huggingface.co/papers/2501.10979).
 
-## Linked Paper
-This model is associated with the paper: [Control-LLM](https://arxiv.org/abs/2501.10979).
+Code: https://github.com/linkedin/ControlLLM.
 
 ## Linked Open Source code - training, eval and benchmark
 This model is associated with the github: [Control-LLM](https://github.com/linkedin/ControlLLM).
@@ -117,4 +119,4 @@ The table below summarizes evaluation results across coding tasks and original c
 - **MLU**: MMLU (Massive Multitask Language Understanding)
 - **MLUP**: MMLU Pro
 - **O-Avg**: Original Capability - Size Weighted Average across ARC, GPQA, MMLU, and MMLU Pro
-- **Overall**: Combined average across all tasks
+- **Overall**: Combined average across all tasks
```
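For context, the tail of the README front matter after this commit would look roughly like the sketch below (earlier `model-index` fields omitted). `pipeline_tag` tells the Hub which inference widget and task to associate with the model, and `library_name` tells it which library (`transformers`) should be used to load it; the metric lines are copied from the diff above.

```yaml
# Sketch of the end of the README.md YAML front matter after this commit.
# Only pipeline_tag and library_name are new; preceding model-index
# entries are omitted here.
      value: 0.4029255319148936
      stderr: 0.004471732136513382
      verified: false
pipeline_tag: text-generation
library_name: transformers
---
```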