Overview

DeepSeek developed and released DeepSeek R1 Distill Llama 8B, a Llama 3.1 8B model fine-tuned on reasoning data distilled from the larger DeepSeek R1 model. The variant is tuned for high-quality text generation, optimized for dialogue, and tailored for information-seeking tasks. It offers a robust balance between model size and performance, making it suitable for demanding conversational AI and research use cases.

The model is designed to deliver accurate, efficient, and safe responses in applications such as customer support, knowledge systems, and research environments.

Variants

No   Variant   Cortex CLI command
1    gguf      cortex run deepseek-r1-distill-llama-8b

Use it with Jan (UI)

  1. Install Jan using Quickstart
  2. Find the model in the Jan Model Hub by searching for:
    cortexso/deepseek-r1-distill-llama-8b
    

Use it with Cortex (CLI)

  1. Install Cortex using Quickstart
  2. Run the model with the following command:
    cortex run deepseek-r1-distill-llama-8b
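
Once the model is running, Cortex serves it through an OpenAI-compatible HTTP API, so it can be queried from any OpenAI client. The snippet below is a minimal Python sketch, not part of the official documentation: the base URL (Cortex's usual local port 39281), the placeholder API key, and the prompt are assumptions you should adjust to match your installation.

    # Minimal sketch: send a chat request to the locally running model through
    # Cortex's OpenAI-compatible endpoint. The base_url and model ID below are
    # assumptions; adjust them to match your local setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://127.0.0.1:39281/v1",  # assumed default Cortex server address
        api_key="not-needed",                  # local server; the key is typically ignored
    )

    response = client.chat.completions.create(
        model="deepseek-r1-distill-llama-8b",
        messages=[
            {"role": "user",
             "content": "Summarize what model distillation means in two sentences."},
        ],
        temperature=0.6,
    )

    print(response.choices[0].message.content)

The same request can be sent with curl or any other OpenAI-compatible client pointed at the local server.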
    

Credits
