Llama-Express.1
Llama-Express.1 is a 1B-parameter model based on Llama 3.2 (1B), fine-tuned on long chain-of-thought datasets. This instruction-tuned, text-only model is optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. Its Llama 3.2 base is reported to outperform many open-source and closed chat models of comparable size on common industry benchmarks.
Use with transformers
Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers.
```python
import torch
from transformers import pipeline

model_id = "prithivMLmods/Llama-Express.1"

# Build a text-generation pipeline; bfloat16 reduces memory use and
# device_map="auto" places the model on the available GPU(s) or CPU.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)

# The pipeline returns the full conversation; the last element is the
# assistant's reply.
print(outputs[0]["generated_text"][-1])
```
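For lower-level control, the same chat can be run with the Auto classes and generate() directly. The sketch below is a minimal example assuming the model ships a chat template, as is standard for Llama 3.2 derivatives; it uses only the documented AutoTokenizer/AutoModelForCausalLM APIs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Llama-Express.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# apply_chat_template formats the messages with the model's chat template
# and appends the generation prompt for the assistant turn.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens (everything after the prompt).
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```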
Intended Use
Multilingual Dialogue:
- Designed for high-quality, multilingual conversations, making it suitable for applications requiring natural, fluid dialogue across languages.
Agentic Retrieval:
- Optimized for retrieval-based tasks where reasoning and contextual chaining are crucial for extracting and summarizing relevant information.
Summarization Tasks:
- Effective at generating concise, accurate summaries of complex and lengthy texts for academic, professional, and casual use cases (see the usage sketch after this list).
Instruction-Following Applications:
- Fine-tuned for tasks requiring adherence to user-provided instructions, making it ideal for automation workflows, content creation, and virtual assistant integrations.
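As a concrete illustration of the summarization and instruction-following use cases, the sketch below reuses the pipe object from the quick-start example above; the article text, system prompt, and generation settings are placeholders, not recommended values.

```python
# Reuses `pipe` from the quick-start example above.
article = """(paste a long article or report here)"""

messages = [
    {"role": "system", "content": "You are a precise assistant that follows instructions exactly."},
    {"role": "user", "content": f"Summarize the following text in three bullet points:\n\n{article}"},
]

outputs = pipe(messages, max_new_tokens=256)

# With chat-style input, generated_text is the message list; the last
# entry is the assistant's reply dict.
print(outputs[0]["generated_text"][-1]["content"])
```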
Limitations
Text-Only Modality:
- As a text-only model, it cannot process multimodal inputs like images, audio, or videos, limiting its versatility in multimedia applications.
Context Length Constraints:
- While optimized for long chain-of-thought reasoning, very large contexts may still degrade performance or be silently truncated (a simple token-budget guard is sketched after this list).
Bias and Ethics:
- The model might reflect biases present in the training datasets, potentially resulting in outputs that could be culturally insensitive or inappropriate.
Performance in Low-Resource Languages:
- While multilingual, its effectiveness may vary across languages, with possible performance drops in underrepresented or low-resource languages.
Dependency on Input Quality:
- The model's output is heavily influenced by the clarity and specificity of the input instructions. Ambiguous or vague prompts may lead to suboptimal results.
Lack of Real-Time Internet Access:
- Without real-time retrieval capabilities, it cannot provide up-to-date information or verify facts against the latest data.
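One simple guard against the context-length limitation is to count tokens before generation and truncate the input to a fixed budget. This is a minimal sketch assuming the tokenizer from the Auto-classes example above; the 4096-token budget and head-keeping strategy are illustrative choices, not recommendations.

```python
def fit_to_context(text: str, tokenizer, max_input_tokens: int = 4096) -> str:
    """Truncate `text` so it fits within `max_input_tokens` tokens.

    Keeps the beginning of the text; adjust the strategy (e.g., keep the
    tail, or chunk and summarize in stages) to suit your use case.
    """
    token_ids = tokenizer.encode(text, add_special_tokens=False)
    if len(token_ids) <= max_input_tokens:
        return text
    return tokenizer.decode(token_ids[:max_input_tokens])
```

Note that the context window is measured in tokens, not characters, so always budget with the same tokenizer you will generate with.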