Triangle104/internlm3-8b-instruct-Q5_K_S-GGUF
This model was converted to GGUF format from internlm/internlm3-8b-instruct
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
Model details:
InternLM3-8B-Instruct is an open-source 8-billion-parameter instruction model designed for general-purpose usage and advanced reasoning. The model has the following characteristics:
Enhanced performance at reduced cost: state-of-the-art performance on reasoning and knowledge-intensive tasks, surpassing models such as Llama3.1-8B and Qwen2.5-7B. Remarkably, InternLM3 is trained on only 4 trillion high-quality tokens, saving more than 75% of the training cost compared to other LLMs of similar scale.
Deep thinking capability: InternLM3 supports both a deep thinking mode for solving complicated reasoning tasks via long chain-of-thought reasoning and a normal response mode for fluent user interactions.
Performance Evaluation
We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool OpenCompass. The evaluation covered five dimensions of capability: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Visit the OpenCompass leaderboard for detailed evaluation results.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
brew install llama.cpp
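To confirm the binaries are installed and on your PATH, you can print the build version (the exact output depends on your llama.cpp build):
llama-cli --version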
Invoke the llama.cpp server or the CLI.
CLI:
llama-cli --hf-repo Triangle104/internlm3-8b-instruct-Q5_K_S-GGUF --hf-file internlm3-8b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
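For an interactive chat instead of a one-shot completion, llama-cli also has a conversation mode; a minimal sketch with the same repo and file flags, adding -cnv:
llama-cli --hf-repo Triangle104/internlm3-8b-instruct-Q5_K_S-GGUF --hf-file internlm3-8b-instruct-q5_k_s.gguf -cnv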
Server:
llama-server --hf-repo Triangle104/internlm3-8b-instruct-Q5_K_S-GGUF --hf-file internlm3-8b-instruct-q5_k_s.gguf -c 2048
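Once the server is running (it listens on port 8080 by default), you can query its OpenAI-compatible chat endpoint; a minimal sketch using curl, with a hypothetical prompt:
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Give a one-sentence summary of InternLM3."}]}'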
Note: You can also use this checkpoint directly by following the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
Step 3: Run inference through the main binary.
./llama-cli --hf-repo Triangle104/internlm3-8b-instruct-Q5_K_S-GGUF --hf-file internlm3-8b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
or
./llama-server --hf-repo Triangle104/internlm3-8b-instruct-Q5_K_S-GGUF --hf-file internlm3-8b-instruct-q5_k_s.gguf -c 2048
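If you built with GPU support, you can additionally offload model layers to the GPU and raise the context size; a sketch, where -ngl sets the number of layers to offload (a value larger than the model's layer count offloads all of them):
./llama-server --hf-repo Triangle104/internlm3-8b-instruct-Q5_K_S-GGUF --hf-file internlm3-8b-instruct-q5_k_s.gguf -c 4096 -ngl 99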
Base model: internlm/internlm3-8b-instruct