
GGUF Version - Risk Assessment LLaMA Model

Model Overview

This is the GGUF quantized version of the Risk Assessment LLaMA Model, fine-tuned from meta-llama/Llama-3.1-8B-Instruct using the theeseus-ai/RiskClassifier dataset. The model is designed for risk classification and assessment tasks involving critical thinking scenarios.

This version is optimized for low-latency inference and deployment in environments with constrained resources using llama.cpp.

Model Details

  • Base Model: meta-llama/Llama-3.1-8B-Instruct
  • Quantization Format: GGUF
  • Fine-tuned Dataset: theeseus-ai/RiskClassifier
  • Architecture: Transformer-based language model (LLaMA 3.1)
  • Use Case: Risk analysis, classification, and reasoning tasks

Supported Platforms

This GGUF model is compatible with:

  • llama.cpp
  • text-generation-webui
  • ollama
  • GPT4All
  • KoboldAI
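
As a sketch of the ollama route (the file path and model name here are assumptions, not published artifacts), a minimal Modelfile pointing at the downloaded GGUF weights looks like this:

```
# Modelfile -- point ollama at the local GGUF weights (path is an assumption)
FROM ./risk-assessment-gguf-model.gguf

# then:
#   ollama create risk-assessor -f Modelfile
#   ollama run risk-assessor "Analyze this transaction: ..."
```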

Quantization Details

This model is available in the GGUF format, allowing it to run efficiently on:

  • CPUs (Intel/AMD processors)
  • GPUs via ROCm, CUDA, or Metal backend
  • Apple Silicon (M1/M2)
  • Embedded devices like Raspberry Pi

Quantized Sizes Available:

  • Q4_0, Q4_K_M, Q5_0, Q5_K, Q8_0 (choose based on the speed/quality trade-off: smaller quants are faster and lighter, larger ones preserve more accuracy.)
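
As a rough guide to picking a file, GGUF size scales with bits per weight. The bits-per-weight figures below are approximate, commonly cited values (an assumption, not measurements of these particular files), since some tensors are kept at higher precision:

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8.
# APPROX_BPW values are approximations (assumption); real files differ slightly.
PARAMS = 8.03e9  # Llama-3.1-8B parameter count

APPROX_BPW = {
    "Q4_0": 4.5,
    "Q4_K_M": 4.8,
    "Q5_0": 5.5,
    "Q5_K": 5.7,
    "Q8_0": 8.5,
}

def approx_size_gb(quant: str) -> float:
    """Estimated file size in gigabytes for a given quantization."""
    return PARAMS * APPROX_BPW[quant] / 8 / 1e9

for q in APPROX_BPW:
    print(f"{q}: ~{approx_size_gb(q):.1f} GB")
```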

Model Capabilities

The model performs the following tasks:

  • Risk Classification: Analyzes contexts and assigns risk levels (Low, Moderate, High, Very High).
  • Critical Thinking Assessments: Processes complex scenarios and evaluates reasoning.
  • Explanations: Provides justifications for assigned risk levels.
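
Because the model answers in free text, downstream systems usually need to extract the discrete risk level from the output. A minimal sketch (the label set comes from the capabilities above; the parsing logic itself is an assumption, not part of the model):

```python
import re
from typing import Optional

# Checked longest-match first so "Very High" is not misread as "High".
RISK_LEVELS = ["Very High", "High", "Moderate", "Low"]

def parse_risk_level(text: str) -> Optional[str]:
    """Return the first risk label found in model output, or None."""
    for level in RISK_LEVELS:
        if re.search(rf"\b{level}\b", text, flags=re.IGNORECASE):
            return level
    return None

print(parse_risk_level("Risk level: Very High, due to the offshore destination."))
```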

Example Use

Inference with llama.cpp

./main -m risk-assessment-gguf-model.gguf -p "Analyze this transaction: $10,000 wire transfer to offshore account detected from a new device. What is the risk level?"

Inference with Python (llama-cpp-python)

from llama_cpp import Llama

# Load the quantized model (use the .gguf file for your chosen quantization level)
model = Llama(model_path="risk-assessment-gguf-model.gguf")

prompt = "Analyze this transaction: $10,000 wire transfer to offshore account detected from a new device. What is the risk level?"
output = model(prompt, max_tokens=256)
print(output["choices"][0]["text"])
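
Because the base model is the Instruct variant, results are typically better when the prompt follows the Llama 3.1 chat template rather than raw text (llama-cpp-python can also apply the template for you via `create_chat_completion`). A sketch of assembling it by hand, using Meta's published header tokens:

```python
def format_llama3_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the Llama 3.1 chat template."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt(
    "You are a risk assessment assistant. Answer with a risk level and a short justification.",
    "Analyze this transaction: $10,000 wire transfer to offshore account detected from a new device.",
)
# Feed `prompt` to the model as in the example above, e.g.:
# output = model(prompt, max_tokens=256, stop=["<|eot_id|>"])
```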

Applications

  • Fraud detection and transaction monitoring.
  • Automated risk evaluation for compliance and auditing.
  • Decision support systems for cybersecurity.
  • Risk-level assessments in critical scenarios.
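
For transaction monitoring, a common pattern is to route only escalated risk levels to human review. A sketch of that hook (all names here are hypothetical; `assess` is a stub standing in for a real llama-cpp-python call plus label parsing):

```python
# Levels routed to human review rather than auto-cleared.
ESCALATE = {"High", "Very High"}

def assess(transaction: str) -> str:
    # Stub: replace with model(prompt) + risk-label parsing in a real pipeline.
    return "Very High" if "offshore" in transaction else "Low"

def route(transaction: str) -> str:
    """Return 'review' for escalated risk levels, 'auto-clear' otherwise."""
    return "review" if assess(transaction) in ESCALATE else "auto-clear"

print(route("$10,000 wire transfer to offshore account"))  # review
```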

Limitations

  • The model's outputs should be reviewed by domain experts before any decisions are acted on.
  • Performance depends on context length and prompt design.
  • May require further tuning for domain-specific applications.

Evaluation

Metrics:

  • Accuracy on Risk Levels: Evaluated against test cases with labeled risk scores.
  • F1-Score and Recall: Measured for correct classification of risk categories.
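
To reproduce these metrics on your own labeled test cases, accuracy and macro-averaged F1 can be computed without extra dependencies (the toy labels below are illustrative, not the actual evaluation set):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the labeled risk level."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores over the labeled classes."""
    scores = []
    for cls in set(y_true):
        tp = sum(t == p == cls for t, p in zip(y_true, y_pred))
        fp = sum(p == cls and t != cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

y_true = ["Low", "High", "High", "Moderate"]
y_pred = ["Low", "High", "Moderate", "Moderate"]
print(accuracy(y_true, y_pred))  # 0.75
```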

Results:

  • Accuracy: 91.2%
  • F1-Score: 0.89

Ethical Considerations

  • Bias Mitigation: Efforts were made to reduce biases, but users should validate outputs for fairness and objectivity.
  • Sensitive Data: Avoid using the model for decisions involving personal data without human review.


Citation

@misc{riskclassifier2024,
  title={Risk Assessment LLaMA Model (GGUF)},
  author={Theeseus AI},
  year={2024},
  publisher={HuggingFace},
  url={https://huggingface.co/theeseus-ai/RiskClassifier}
}
