strongpear committed
Commit b51e1ff · verified · 1 Parent(s): ad91b68

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -20,7 +20,7 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-This model is a fine-tuned version of Llama-3.1-8B optimized for question-answering tasks on Wikipedia content using instruction tuning and Chain-of-Thought (CoT) reasoning. The model was trained using the LoRA (Low-Rank Adaptation) method with rank 64 to efficiently adapt the base model while maintaining its core capabilities.
+This model is a fine-tuned version of Llama-3.1-8B optimized for Question-Answering (QA) tasks on VIETNAMESE Wikipedia content using Instruction Tuning and Chain-of-Thought (CoT) reasoning. The model was trained using the LoRA (Low-Rank Adaptation) method with rank 64 to efficiently adapt the base model while maintaining its core capabilities.
 
 ## How to use
 
@@ -36,7 +36,7 @@ from vllm import LLM, SamplingParams
 from vllm.lora.request import LoRARequest
 from huggingface_hub import snapshot_download
 from huggingface_hub.hf_api import HfFolder
-HfFolder.save_token('your token here')
+HfFolder.save_token('your hf token here')
 
 model_id = 'meta-llama/Llama-3.1-8B'
 lora_id = 'strongpear/Llama3.1-8B-QA_CoT-WIKI-Instruct-r64'
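
The "How to use" hunk above stops partway through the snippet. Below is a minimal sketch of how the remainder might look, assuming the adapter is served with vLLM's LoRA support; the adapter name, sampling settings, and prompt wording are illustrative assumptions rather than values taken from the README.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest
from huggingface_hub import snapshot_download
from huggingface_hub.hf_api import HfFolder

HfFolder.save_token('your hf token here')  # token with access to the gated Llama-3.1 weights

model_id = 'meta-llama/Llama-3.1-8B'
lora_id = 'strongpear/Llama3.1-8B-QA_CoT-WIKI-Instruct-r64'

# Resolve the adapter repo to a local directory of LoRA weights.
lora_path = snapshot_download(repo_id=lora_id)

# The adapter was trained at rank 64, so max_lora_rank must be >= 64
# (vLLM's default is 16).
llm = LLM(model=model_id, enable_lora=True, max_lora_rank=64)

sampling_params = SamplingParams(temperature=0.1, max_tokens=512)  # assumed settings

# Prompt format is an assumption; the README's actual template may differ.
# "Câu hỏi: ... Trả lời:" = "Question: ... Answer:" in Vietnamese.
prompts = ["Câu hỏi: Hà Nội là thủ đô của nước nào?\nTrả lời:"]

outputs = llm.generate(
    prompts,
    sampling_params,
    lora_request=LoRARequest("qa_cot_wiki", 1, lora_path),
)
for out in outputs:
    print(out.outputs[0].text)
```

Because the adapter uses rank 64, leaving `max_lora_rank` at vLLM's default would cause the LoRA request to be rejected, which is why it is raised explicitly above.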
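For reference, the model description names LoRA with rank 64 as the adaptation method. A hedged sketch of what such a configuration could look like with the peft library follows; everything other than the rank (alpha, dropout, target modules) is an assumption, since the diff does not show the actual training setup.

```python
from peft import LoraConfig

# r=64 comes from the model card; the remaining values are assumptions
# shown only to illustrate a rank-64 LoRA configuration.
lora_config = LoraConfig(
    r=64,                     # LoRA rank stated in the model description
    lora_alpha=128,           # assumed scaling factor
    lora_dropout=0.05,        # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed projections
    task_type="CAUSAL_LM",
)
```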