---
license: mit
language:
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
tags:
- bert
- transformers
- text-classification
- pytorch
- fine-tuned
metrics:
- f1
- accuracy
- precision
- recall
library_name: transformers
---

# BERT Job Chatbot

This is a BERT model fine-tuned for classifying job-related queries. It was trained on a custom dataset of job-related questions.

## Usage

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "razaque/bert-job-chatbot"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()

# Tokenize the query; truncate anything longer than BERT's 512-token limit
query = "What are the best software engineering roles?"
inputs = tokenizer(query, return_tensors="pt", truncation=True, max_length=512)

# Inference only, so disable gradient tracking
with torch.no_grad():
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=-1).item()
print(f"Predicted class: {predicted_class}")
```
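To turn the raw class index into a readable label with a confidence score, you can apply a softmax to the logits. The sketch below uses illustrative logits and placeholder label names; with the real model, the logits come from `model(**inputs).logits` and the label mapping (if it was saved at training time) lives in `model.config.id2label`.

```python
import torch

# Illustrative logits, standing in for model(**inputs).logits on one query
logits = torch.tensor([[1.2, 3.4, 0.5]])

# Softmax converts logits into a probability distribution over classes
probs = torch.softmax(logits, dim=-1)
predicted_class = probs.argmax(dim=-1).item()
confidence = probs[0, predicted_class].item()

# Hypothetical placeholder labels; with the real model use model.config.id2label
id2label = {0: "salary", 1: "job_search", 2: "interview"}
print(f"{id2label[predicted_class]} ({confidence:.2%})")
```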