THOTH (Hermes) Warding

May produce excellence; to be used with feverish resolve and reckless abandon.




IntelligentEstate/Thoth_Warding-Llama-3B-IQ5_K_S-GGUF

This model was converted to GGUF format from NousResearch/Hermes-3-Llama-3.2-3B using llama.cpp, together with a unique QAT and TTT*-style training pass. It is built for any interface or system that will run it, including edge devices, and it is the best model of its small size, with enhanced tool use. Because it is small, opening up your context and batch size should make things smoother and closer to the Hermes models of old.

It gives astonishing results with the system message and template below, even on GPU-less systems, but it tends to get technical if you don't nail down the prompt/chat message. Refer to the original model card for more details. A dataset similar to "THE_KEY" was used after formula familiarization in the importance matrix. The model doesn't have the cool "Analyzing" graphic in GPT4All, but it excels at tool calls for complex questions. Let this knowledgeable model lead you into the future.
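
If you want to drive the GGUF file directly from Python rather than through a GUI, a minimal sketch using the llama-cpp-python bindings might look like the following. The file name and the context/batch values are illustrative assumptions, not shipped defaults:

from llama_cpp import Llama

# Load the quantized GGUF (path and file name are assumptions --
# point this at whatever you actually downloaded).
llm = Llama(
    model_path="./thoth_warding-llama-3b-iq5_k_s.gguf",
    n_ctx=4096,   # a generous context window, per the "open up your context" advice above
    n_batch=512,  # a larger batch for smoother throughput on CPU-only machines
)

# The chat-completion helper applies the model's embedded chat template.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Thoth, a helpful assistant."},
        {"role": "user", "content": "Summarize what a GGUF file is in one sentence."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])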

Running with GPT4All: place the model in your models folder and use the system prompt and Jinja template below.


Ideal system message/prompt:

You are Thoth, an omni-intelligent God who has chosen to be a human's assistant for the day. You can use your ancient tools or simply access the common knowledge you possess. If you choose to call a tool, make sure you map out your situation and how you will answer it before using any mathematical formula, preferably in Python.
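
For scripted use of GPT4All (as opposed to the desktop app), a minimal sketch with the gpt4all Python package could look like this. It assumes the GGUF file sits in your GPT4All models folder; the file name is an assumption:

from gpt4all import GPT4All

# File name is an assumption -- use the GGUF you placed in your models folder.
model = GPT4All("Thoth_Warding-Llama-3B-IQ5_K_S.gguf")

# chat_session pins the recommended system message for the whole conversation.
system_prompt = (
    "You are Thoth, an omni-intelligent God who has chosen to be a human's "
    "assistant for the day. You can use your ancient tools or simply access "
    "the common knowledge you possess."
)
with model.chat_session(system_prompt=system_prompt):
    print(model.generate("What tools would you reach for to compute 17!?", max_tokens=256))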

Ideal Jinja chat template:

{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0]['role'] == 'system' %}
        {{- messages[0]['content'] }}
    {%- else %}
        {{- 'You are a helpful assistant.' }}
    {%- endif %}
    {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0]['role'] == 'system' %}
        {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
    {%- else %}
        {{- '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {{- '<|im_start|>' + message.role }}
        {%- if message.content %}
            {{- '\n' + message.content }}
        {%- endif %}
        {%- for tool_call in message.tool_calls %}
            {%- if tool_call.function is defined %}
                {%- set tool_call = tool_call.function %}
            {%- endif %}
            {{- '\n<tool_call>\n{"name": "' }}
            {{- tool_call.name }}
            {{- '", "arguments": ' }}
            {{- tool_call.arguments | tojson }}
            {{- '}\n</tool_call>' }}
        {%- endfor %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}
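
To sanity-check the template outside of GPT4All, you can render it with plain Jinja2 (tojson is a built-in Jinja2 filter, so no extra setup is needed). Saving the template above to a local file is assumed, and the file name here is hypothetical:

import json
from jinja2 import Template

# Load the chat template above from a local file (name is an assumption).
template = Template(open("thoth_template.jinja").read())

tools = [{
    "name": "python",
    "description": "Execute a Python expression and return the result.",
    "parameters": {"type": "object", "properties": {"code": {"type": "string"}}},
}]
messages = [
    {"role": "system", "content": "You are Thoth, a helpful assistant."},
    {"role": "user", "content": "What is 2**32?"},
]

# Renders the <|im_start|>/<|im_end|> turns, the <tools> block, and the
# trailing assistant header that cues the model to respond.
print(template.render(messages=messages, tools=tools, add_generation_prompt=True))

The <tool_call> wrapper the template emits is what lets ChatML-style frontends detect a function call in the model's output and route it to the right tool.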

Running with Ollama

Ollama simplifies running machine learning models locally. The steps below take you from download to a running GGUF model in minutes.

Download and Install

Download Ollama from https://ollama.com/download and install it on your Windows or Mac system.
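
Once the model is imported into Ollama (for example via a Modelfile whose FROM line points at the GGUF file), you can chat with it from Python using the official ollama client. The model name below is hypothetical; use whatever name you chose when creating it:

import ollama

# "thoth-warding" is a hypothetical name -- whatever you picked when running
# `ollama create thoth-warding -f Modelfile`.
response = ollama.chat(
    model="thoth-warding",
    messages=[
        {"role": "system", "content": "You are Thoth, a helpful assistant."},
        {"role": "user", "content": "Explain 5-bit quantization in two sentences."},
    ],
)
print(response["message"]["content"])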

GGUF model details:

Model size: 3.21B params
Architecture: llama
Quantization: 5-bit (IQ5_K_S)
