ozyman committed

Commit f292e6a · verified · 1 Parent(s): 1ca0704

Update README.md

Files changed (1): README.md (+14 -1)
README.md CHANGED
@@ -21,7 +21,20 @@ base_model:
 
 ![NAVI Cover](assets/cover.jpg)
 
- NAVI (Verification Intelligence) is a hallucination detection safety model designed primarily for policy alignment verification. It reviews various types of text against documents and policies to identify non-compliant or violating content. Optimized for enterprise applications requiring compliance verification for automated text generation, NAVI supports lengthy and complex documents. To advance policy verification in the open-source community, we release NAVI-small-preview, an open-weights version of the model we have deployed on the platform. NAVI-small-preview is centered specifically on verifying assistant outputs against policy documents. The full solution is accessible via the [NAVI platform and API](https://naviml.com/).
+ NAVI (Verification Intelligence) is a hallucination detection safety model designed primarily for policy alignment verification. It reviews various types of text against documents and policies to identify non-compliant or violating content. Optimized for enterprise applications requiring compliance verification for automated text generation, NAVI supports lengthy and complex documents. To advance policy verification in the open-source community, we release NAVI-small-preview, an open-weights version of the model we have deployed on the platform. NAVI-small-preview is centered specifically on verifying assistant outputs against policy documents. The full solution is accessible via the links below.
+
+ ✨ **Exciting News!** ✨ We are temporarily offering **free API and platform access** to empower developers and researchers to explore NAVI's capabilities! 🚀
+
+ ---
+
+ ## 🌐 **Cross-Links for NAVI's Ecosystem** 🌐
+
+ 1. **🌍 Platform Access:** [NAVI Platform](https://naviml.com) – Dive into NAVI's full capabilities and explore how it ensures policy alignment and compliance.
+ 2. **📜 API Documentation:** [API Docs](https://naviml.mintlify.app/introduction) – Your starting point for integrating NAVI into your applications.
+ 3. **📝 Introductory Blogpost:** [Policy-Driven Safeguards Comparison](https://naviml.com/articles/policy-driven-safeguards-comparison) – A deep dive into the challenges and solutions NAVI addresses.
+ 4. **📊 Public Dataset:** [Policy Alignment Verification Dataset](https://huggingface.co/datasets/nace-ai/policy-alignment-verification-dataset) – Test and benchmark your models with NAVI's open-source dataset.
+
+ ## Performance Overview
 
 The chart below illustrates NAVI's strong performance on the Policy Alignment Verification test set, with the full model achieving an F1 score of 90.4%, outperforming all competitors. NAVI-small-preview also demonstrates impressive results, providing an open-source option with significant improvements over baseline models while maintaining reliable policy alignment verification.
 
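
For readers who want to pull the Policy Alignment Verification benchmark linked in the new README section, the snippet below is a minimal sketch. It only assumes the standard Hugging Face `datasets` library and that the dataset is publicly loadable under the ID shown in the link above; its split names and column layout are whatever the dataset card defines and are not hard-coded here.

```python
# Minimal sketch: download the public benchmark referenced in the README.
# Assumption: the dataset is loadable by the ID from the link above via the
# standard `datasets` library; split/column names are not assumed here.
from datasets import load_dataset

ds = load_dataset("nace-ai/policy-alignment-verification-dataset")

# Inspect the available splits and features before writing any evaluation code.
print(ds)
```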
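The README reports results as F1 on that test set. As a purely illustrative aid, not NAVI's actual evaluation code, the sketch below shows how precision, recall, and F1 are typically computed when policy alignment verification is treated as a binary compliant/violating classification; the label convention and the toy predictions are made up for the example.

```python
# Illustrative scoring only: binary labels (1 = violating / non-compliant,
# 0 = compliant) with the precision/recall/F1 metrics quoted in the README.
# The y_true/y_pred values below are toy data, not NAVI outputs.
from sklearn.metrics import precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # gold labels from a hypothetical test set
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # labels produced by a detector under test

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```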