Create README.md

---
library_name: transformers
base_model: OEvortex/lite-hermes
inference: false
language:
- en
license: mit
tags:
- HelpingAI
- lite
- code
---

### Description

This repository provides GGUF-format model files for [this project](https://huggingface.co/OEvortex/OEvortex/HelpingAI-unvelite), so the model can be run with GGML/llama.cpp-based tools.
Please subscribe to my YouTube channel: [OEvortex](https://youtube.com/@OEvortex).
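
As a quick-start sketch (not an official snippet from this repo), the example below downloads one of the GGUF files with `huggingface_hub` and runs it with `llama-cpp-python`. The `repo_id` and `filename` values are placeholders; substitute the actual `.gguf` file listed under *Files and versions*.

```python
# Minimal sketch: download a GGUF file and run it with llama-cpp-python.
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: repo_id and filename are placeholders for illustration only;
# use the actual .gguf file published in this repository.
model_path = hf_hub_download(
    repo_id="OEvortex/HelpingAI-unvelite-GGUF",
    filename="helpingai-unvelite.q4_k_m.gguf",
)

# GGUF is a single self-contained file, so the path is all the loader needs.
llm = Llama(
    model_path=model_path,
    n_ctx=2048,      # context window
    use_mmap=True,   # GGUF supports mmap-based loading
)

output = llm("Write a short poem about open-source AI.", max_tokens=128)
print(output["choices"][0]["text"])
```

Any of the quantization variants listed further down can be loaded the same way; only the filename changes.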

### GGUF Technical Specifications

GGUF is a file format that builds on the foundation of the earlier GGJT format. It is designed for extensibility and ease of use, and introduces the following features:

**Single-file Deployment:** GGUF models are distributed and loaded from a single file; no external files are required for additional information.

**Extensibility:** New features can be added to GGML-based executors, and new information can be added to models, without breaking compatibility with existing models.

**mmap Compatibility:** GGUF models can be loaded with mmap, enabling fast loading and saving.

**User-Friendly:** Models can be loaded and saved with a small amount of code, regardless of programming language, with no external libraries required.

**Full Information:** A single GGUF file contains all the information needed to load the model; users do not need to supply any additional data.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata) rather than an untyped list of values. New metadata can therefore be added without breaking compatibility with existing models, and a model can carry additional information useful for inference and for identifying it.
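
Because this metadata is stored inside the file itself, it can be inspected without loading the model weights. Below is a minimal sketch using the `gguf` Python package (published from llama.cpp's `gguf-py`); the filename is again a placeholder.

```python
# Minimal sketch: list the key-value metadata and tensors of a GGUF file.
# pip install gguf
from gguf import GGUFReader

reader = GGUFReader("helpingai-unvelite.q4_k_m.gguf")  # placeholder filename

# Each hyperparameter/metadata entry is a typed key-value field.
for name, field in reader.fields.items():
    print(f"{name}: types={field.types}")

# Tensor names, shapes, and quantization types are listed in the header too.
for tensor in reader.tensors:
    print(tensor.name, tensor.shape, tensor.tensor_type)
```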

**Quantization methods:**

| Method | Quantization | Advantages | Trade-offs |
|---|---|---|---|
| q2_k | ~2-bit k-quant | Smallest files | Substantial quality loss |
| q3_k_l | ~3-bit k-quant (large) | Best quality among the 3-bit variants | Larger than q3_k_s/q3_k_m |
| q3_k_m | ~3-bit k-quant (medium) | Good size/quality balance within 3-bit | Noticeable quality loss |
| q3_k_s | ~3-bit k-quant (small) | Very small files | Significant quality loss |
| q4_0 | 4-bit (legacy) | Small and fast | Lower quality than the 4-bit k-quants |
| q4_1 | 4-bit (legacy) | Slightly better quality than q4_0 | Slightly larger and slower than q4_0 |
| q4_k_m | ~4-bit k-quant (medium) | Recommended balance of size and quality | Some quality loss |
| q4_k_s | ~4-bit k-quant (small) | Smaller than q4_k_m | More quality loss than q4_k_m |
| q5_0 | 5-bit (legacy) | Higher quality than 4-bit | Larger files |
| q5_1 | 5-bit (legacy) | Slightly better quality than q5_0 | Larger and slower than q5_0 |
| q5_k_m | ~5-bit k-quant (medium) | Very low quality loss | Noticeably larger than 4-bit files |
| q5_k_s | ~5-bit k-quant (small) | Low quality loss, smaller than q5_k_m | Larger than 4-bit files |
| q6_k | ~6-bit k-quant | Near-original model quality | Large files |
| q8_0 | 8-bit | Minimal quality loss | Largest quantized files |
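
The nominal bit widths above give a rough way to estimate file size before downloading. The sketch below is illustrative only: the parameter count is an assumption, and real GGUF files are somewhat larger because of per-block scales and tensors kept at higher precision.

```python
# Back-of-the-envelope size estimate: parameters * bits-per-weight / 8.
N_PARAMS = 1.1e9  # assumed parameter count, not a measured value for this model

NOMINAL_BITS = {
    "q2_k": 2, "q3_k_s": 3, "q3_k_m": 3, "q3_k_l": 3,
    "q4_0": 4, "q4_1": 4, "q4_k_s": 4, "q4_k_m": 4,
    "q5_0": 5, "q5_1": 5, "q5_k_s": 5, "q5_k_m": 5,
    "q6_k": 6, "q8_0": 8,
}

for method, bits in NOMINAL_BITS.items():
    size_gb = N_PARAMS * bits / 8 / 1e9
    print(f"{method}: ~{size_gb:.2f} GB (lower bound)")
```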