---
base_model: akjindal53244/Llama-3.1-Storm-8B
library_name: peft
---

# LoRA fine-tuned Llama-3.1-Storm-8B on CV/resume and job description matching task
![Alt text](LLMF.png)


## Model Details

### Model Description

Tired of sifting through endless resumes and job descriptions? Meet the newest AI talent matchmaker, crafted with [LlamaFactory.AI](https://llamafactory.ai/)! This model delivers comprehensive candidate evaluations by analyzing the relationship between a resume and the job requirements. Given an instruction together with a CV and a job description, it generates a thorough matching analysis complete with a quantitative compatibility score and specific recommendations. Going beyond traditional keyword matching, it evaluates candidate qualifications in context and provides structured insights that help streamline the recruitment process, bringing objective, consistent, and scalable candidate assessment to HR professionals and hiring managers.


- **Developed by:** [llamafactory.ai](https://llamafactory.ai/)
- **Model type:** text generation
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)




## How to Get Started with the Model

```python

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

base_model_name = "akjindal53244/Llama-3.1-Storm-8B"

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load the LoRA adapter
peft_model_id = "LlamaFactoryAI/cv-job-description-matching"
config = PeftConfig.from_pretrained(peft_model_id)  # optional: inspect the adapter configuration
model = PeftModel.from_pretrained(base_model, peft_model_id)

# Use the model
messages = [
    {
        "role": "system",
        "content": """You are an advanced AI model designed to analyze the compatibility between a CV and a job description. You will receive a CV and a job description. Your task is to output a structured JSON format that includes the following:

1. matching_analysis: Analyze the CV against the job description to identify key strengths and gaps.
2. description: Summarize the relevance of the CV to the job description in a few concise sentences.
3. score: Provide a numerical compatibility score (0-100) based on qualifications, skills, and experience.
4. recommendation: Suggest actions for the candidate to improve their match or readiness for the role.

Your output must be in JSON format as follows:
{
  "matching_analysis": "Your detailed analysis here.",
  "description": "A brief summary here.",
  "score": 85,
  "recommendation": "Your suggestions here."
}
""",
    },
    {"role": "user", "content": "<CV> {cv} </CV>\n<job_description> {job_description} </job_description>"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)  # keep the inputs on the same device as the model
outputs = model.generate(inputs, max_new_tokens=512)  # leave enough room for the full JSON response
# Decode only the newly generated tokens, skipping the prompt
generated_text = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(generated_text)
```
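The model is prompted to return a JSON object, but `generated_text` is still plain text that may include surrounding prose or an incomplete object. Below is a minimal parsing sketch, assuming `generated_text` from the snippet above; the field names come from the system prompt's output schema, and the helper `parse_match_result` is illustrative rather than part of the model's API.

```python
import json
import re


def parse_match_result(generated_text: str) -> dict:
    """Extract the first JSON object from the model's completion.

    The prompt asks for the keys matching_analysis, description, score, and
    recommendation. This strips any surrounding prose before parsing;
    json.loads still raises if the JSON itself is malformed or truncated.
    """
    match = re.search(r"\{.*\}", generated_text, flags=re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in the model output")
    return json.loads(match.group(0))


result = parse_match_result(generated_text)
print(result["score"], result["recommendation"])
```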



## Model Card Authors 

[@xbeket](https://huggingface.co/xbeket)
## Model Card Contact

[Discord](https://discord.com/invite/TrPmq2GT2V)
### Framework versions

- PEFT 0.12.0
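
Because the adapter is a standard PEFT LoRA checkpoint, it can optionally be merged into the base model for standalone deployment. A minimal sketch, assuming the `model` and `tokenizer` objects from the usage snippet above; the output directory name is illustrative:

```python
# Merge the LoRA weights into the base model and save a standalone checkpoint
merged_model = model.merge_and_unload()
merged_model.save_pretrained("llama-3.1-storm-8b-cv-matching-merged")
tokenizer.save_pretrained("llama-3.1-storm-8b-cv-matching-merged")
```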