Model Card for PsyCare1.0-Llama3.1-8B


Model Details

PsyCare1.0-Llama3.1-8B is a robust, adaptable conversational model that makes mental health resources more accessible to individuals and to organizations working in this critical domain.

Model Description

PsyCare1.0-Llama3.1-8B is an 8B parameter model developed by RekklesAI to support mental health applications. Designed to provide empathetic and professional responses, the model aims to assist users in addressing a wide range of mental health concerns through natural and meaningful interactions.

Fine-tuned with the Unsloth framework, the model delivers high-quality guidance while remaining resource-efficient to train and run; a sketch of a comparable fine-tuning setup appears below. Its robust conversational capabilities make it a reliable tool for offering mental health advice and fostering better emotional well-being.
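
The exact training recipe has not been published. What follows is a minimal sketch of what a LoRA fine-tune with Unsloth on the Amod counseling dataset might look like; the LoRA rank, target modules, column mapping, and training hyperparameters are illustrative assumptions rather than the actual configuration, and argument names may vary with your trl version.

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit to keep fine-tuning memory-efficient
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are common defaults,
# not the published PsyCare recipe
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The Amod dataset exposes "Context" (user) and "Response" (counselor) columns
dataset = load_dataset("Amod/mental_health_counseling_conversations", split="train")

def to_text(example):
    # Render each pair with the chat template so training matches the
    # inference-time prompt format (assumes the tokenizer ships a chat template)
    messages = [
        {"role": "user", "content": example["Context"]},
        {"role": "assistant", "content": example["Response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()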

PsyCare1.0-Llama3.1-8B empowers organizations and developers to integrate AI into mental health support systems, making mental health resources more accessible and impactful for individuals in need.

  • Developed by: RekklesAI
  • Model type: Natural Language Processing (NLP) model for mental health applications
  • Language(s) (NLP): English
  • License: Apache License 2.0
  • Finetuned from model: meta-llama/Llama-3.1-8B

Inference with vLLM


from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Load the model and tokenizer
llm = LLM(model="RekklesAI/PsyCare1.0-Llama3.1-8B", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RekklesAI/PsyCare1.0-Llama3.1-8B")

# Set sampling parameters for inference
sampling_params = SamplingParams(
    temperature=0.7,  # Controls randomness of responses
    top_p=0.9,        # Nucleus sampling to focus on high-probability tokens
    max_tokens=1024,  # Maximum tokens for generated output
    stop=["<|eot_id|>"]  # Define stop tokens
)

# Define input messages
messages = [
    {"role": "user", "content": "What should I do if I feel overwhelmed with stress at work?"}
]

# Prepare the input prompt using the chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print("Input Prompt:")
print(prompt)  # apply_chat_template with tokenize=False returns a single string

# Generate model output (vLLM accepts a list of prompt strings)
outputs = llm.generate(prompts=[prompt], sampling_params=sampling_params)

# Print the generated response
print("Model Response:")
print(outputs[0].outputs[0].text)
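
If vLLM is not available, the model should also load through plain transformers. The sketch below mirrors the sampling settings above; torch_dtype=torch.bfloat16 matches the released BF16 weights, but the loading options are an assumption, not a published serving configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("RekklesAI/PsyCare1.0-Llama3.1-8B")
model = AutoModelForCausalLM.from_pretrained(
    "RekklesAI/PsyCare1.0-Llama3.1-8B",
    torch_dtype=torch.bfloat16,  # the released weights are BF16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "What should I do if I feel overwhelmed with stress at work?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    inputs,
    max_new_tokens=1024,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))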

Citation

If you use PsyCare1.0-Llama3.1-8B in your research or applications, please cite the following:

Dataset

@misc{Amod_mental_health,
  author = {Amod},
  title = {Mental Health Counseling Conversations Dataset},
  year = {2024},
  url = {https://huggingface.co/datasets/Amod/mental_health_counseling_conversations}
}

Base Model

@misc{meta_llama3.1,
  author = {Meta AI},
  title = {Llama-3.1-8B-Instruct},
  year = {2024},
  url = {https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct}
}

This Model

@misc{PsyCare1.0,
  author = {RekklesAI},
  title = {PsyCare1.0-Llama3.1-8B},
  year = {2025},
  note = {A fine-tuned model for mental health support applications},
  url = {https://huggingface.co/RekklesAI/PsyCare1.0-Llama3.1-8B}
}

Model Card Contact

Issue Reporting: Please use the issue tracker on the model's repository.
