Triangle104 committed
Update README.md

README.md CHANGED
@@ -18,6 +18,69 @@ library_name: transformers

This model was converted to GGUF format from [`Spestly/AwA-1.5B`](https://huggingface.co/Spestly/AwA-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/Spestly/AwA-1.5B) for more details on the model.

---

Model details:

AwA (Answers with Athena) is my portfolio project, showcasing a cutting-edge Chain-of-Thought (CoT) reasoning model. I created AwA to excel in providing detailed, step-by-step answers to complex questions across diverse domains. This model represents my dedication to advancing AI's capability for enhanced comprehension, problem-solving, and knowledge synthesis.

Key Features

- Chain-of-Thought Reasoning: AwA delivers step-by-step breakdowns of solutions, mimicking logical human thought processes.
- Domain Versatility: Performs exceptionally across a wide range of domains, including mathematics, science, literature, and more.
- Adaptive Responses: Adjusts answer depth and complexity based on input queries, catering to both novices and experts.
- Interactive Design: Designed for educational tools, research assistants, and decision-making systems.

Intended Use Cases

- Educational Applications: Supports learning by breaking down complex problems into manageable steps.
- Research Assistance: Generates structured insights and explanations in academic or professional research.
- Decision Support: Enhances understanding in business, engineering, and scientific contexts.
- General Inquiry: Provides coherent, in-depth answers to everyday questions.

- Type: Chain-of-Thought (CoT) Reasoning Model
- Base Architecture: Adapted from [qwen2]
- Parameters: [1.54B]
- Fine-tuning: Specialized fine-tuning on Chain-of-Thought reasoning datasets to enhance step-by-step explanatory capabilities.

Ethical Considerations

- Bias Mitigation: I have taken steps to minimise biases in the training data. However, users are encouraged to cross-verify outputs in sensitive contexts.
- Limitations: May not provide exhaustive answers for niche topics or domains outside its training scope.
- User Responsibility: Designed as an assistive tool, not a replacement for expert human judgment.

Usage

Option A: Local

Using locally with the Transformers library:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="Spestly/AwA-1.5B")
pipe(messages)
```
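
Not part of the original card: a minimal lower-level sketch of the same local usage with `AutoModelForCausalLM` and the tokenizer's chat template (assumed to be present, as is standard for Qwen2-based models). The prompt and sampling settings are illustrative only, not the author's recommendations.

```python
# Hedged sketch (not from the original card): explicit load + chat-template generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Spestly/AwA-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Walk me through solving 37 * 24 step by step."}]
# Build the prompt with the tokenizer's chat template (assumed present for this Qwen2-based model).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling parameters are illustrative placeholders.
output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```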

Option B: API & Space

You can use the AwA HuggingFace space or the AwA API (Coming soon!)

Roadmap

- More AwA model sizes, e.g. 7B and 14B
- Create AwA API via the spestly package

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)
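
The brew command and the llama-cli invocation for this repo's specific quantized file are not included in this excerpt. As a hedged alternative, the GGUF weights can also be loaded from Python via the llama-cpp-python bindings; the repo id and filename pattern below are illustrative placeholders, since the actual quantized filename is not shown here.

```python
# Hedged sketch using llama-cpp-python (pip install llama-cpp-python huggingface_hub).
# Repo id and filename pattern are placeholders; check this repo's file list for the real GGUF name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/AwA-1.5B-GGUF",  # placeholder repo id
    filename="*q4_k_m.gguf",              # placeholder quant filename pattern
    n_ctx=2048,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```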