A quantized version of IBM's Granite Guardian 3.1 2B model. Quantization was performed with llama.cpp.

Note: llama.cpp failed to initialize this model in both the Python bindings and llama-server, even with quantized 8B versions from other distributors. LM Studio, however, can run it for inference without issue.
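Since LM Studio exposes an OpenAI-compatible local server (by default on port 1234, configurable in the app), you can query the loaded guardian model over HTTP. The sketch below is a minimal, stdlib-only example; the endpoint URL, the model id string, and the idea of sending the prompt as a plain chat message are assumptions — consult the Granite Guardian docs for the official guardian prompt template.

```python
import json
from urllib import request

# Assumed LM Studio default endpoint; adjust host/port to your setup.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_risk_check_payload(user_prompt: str,
                             model_id: str = "granite-guardian-3.1-2b") -> dict:
    """Build an OpenAI-style chat payload asking the guardian model to
    screen a prompt. The plain-chat format here is a simplification;
    Granite Guardian defines its own prompt template in its docs."""
    return {
        "model": model_id,  # hypothetical id; use the name shown in LM Studio
        "messages": [{"role": "user", "content": user_prompt}],
        "temperature": 0.0,  # deterministic output for classification-style use
    }


def classify(user_prompt: str) -> str:
    """Send the prompt to the local LM Studio server and return the
    model's text response."""
    payload = build_risk_check_payload(user_prompt)
    req = request.Request(
        LM_STUDIO_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `classify("...")` with LM Studio running returns the guardian model's verdict text; parsing that verdict into a yes/no risk label depends on the prompt template you use.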
## Model Summary (from original repository)
Granite Guardian 3.1 2B is a fine-tuned Granite 3.1 2B Instruct model designed to detect risks in prompts and responses. It can help with risk detection along many key dimensions catalogued in the IBM AI Risk Atlas. It is trained on unique data comprising human annotations and synthetic data informed by internal red-teaming. It outperforms other open-source models in the same space on standard benchmarks.
- Developers: IBM Research
- GitHub Repository: ibm-granite/granite-guardian
- Cookbook: Granite Guardian Recipes
- Website: Granite Guardian Docs
- Paper: Granite Guardian
- Release Date: December 18, 2024
- License: Apache 2.0
Model tree for ktoprakucar/granite-guardian-3.1-2b-Q8-GGUF
- Base model: ibm-granite/granite-guardian-3.1-2b