Spaces: Running on Zero

Daemontatox committed
Update app.py

app.py CHANGED
@@ -30,70 +30,30 @@ MODEL_ID = "Daemontatox/Cogito-Ultima"
 # [Respond]: Present a well-structured and transparent answer, enriched with supporting details as needed.
 # Use these tags as headers in your response to make your thought process easy to follow and aligned with the principle of openness.
 
-DEFAULT_SYSTEM_PROMPT = """You are a highly skilled and meticulous reasoning engine, capable of breaking down complex problems into a series of logical, step-by-step inferences.
-[... removed lines 34-56 were not captured in this diff view ...]
-5. **Step 5: Verify the Inference:** [Briefly explain why this inference is valid.]
-6. **Step 6: Repeat Steps 3-5 as needed...**
-7. **Step N: Synthesize and Conclude:** [Combine the results of your reasoning to reach a conclusion.]
-
-**Final Answer:** [Your concise answer to the original query.]
-
-**Important Considerations:**
-
-* **Context is Key:** Pay close attention to the specific details and nuances of the user's prompt.
-* **Follow Instructions Carefully:** Adhere to any specific instructions or constraints provided in the prompt.
-* **Be Explicit, Not Implicit:** Don't assume the user understands your reasoning unless you explicitly state it.
-
-**Example:**
-
-**User Prompt:** What is the capital of France and when was the Eiffel Tower completed?
-
-**Your Response:**
-
-**Thought Process:**
-
-1. **Step 1: Analyze the Input:** The user is asking for two pieces of information: the capital of France and the completion date of the Eiffel Tower.
-2. **Step 2: Identify Key Information needed:** I need to access my knowledge base about geography and historical landmarks.
-3. **Step 3: Recall Information - Capital of France:** My internal knowledge base indicates that Paris is the capital of France.
-4. **Step 4: Verify the Inference:** Paris is widely recognized and documented as the capital of France.
-5. **Step 5: Recall Information - Eiffel Tower Completion:** My internal knowledge base indicates that the Eiffel Tower was completed in 1889.
-6. **Step 6: Verify the Inference:** Historical records and reliable sources confirm that the Eiffel Tower was completed in 1889.
-7. **Step 7: Synthesize and Conclude:** I have retrieved both pieces of information requested by the user.
-
-**Final Answer:** The capital of France is Paris, and the Eiffel Tower was completed in 1889.
-
-**Adaptability:**
-
-You can adapt this system prompt by:
-
-* **Specifying the domain of expertise:** "You are a highly skilled **mathematical** reasoning engine..." or "You are a meticulous **historical** reasoning engine..."
-* **Adjusting the complexity of reasoning:** For simpler tasks, you might remove the "Consider Multiple Perspectives" section.
-* **Adding specific constraints:** "You must only use information provided in the prompt."
-* **Requesting specific reasoning styles:** "Focus on deductive reasoning" or "Emphasize causal relationships."
-
-By using this detailed system prompt as a foundation and tailoring it to your specific needs, you can significantly improve the LLM's ability to engage in effective Chain of Thought reasoning and provide more accurate and transparent answers. Remember to experiment and refine the prompt based on the performance you observe."""
+DEFAULT_SYSTEM_PROMPT = """You are a highly skilled and meticulous reasoning engine, capable of breaking down complex problems into a series of logical, step-by-step inferences.
+Your primary goal is to arrive at accurate and well-justified conclusions by explicitly showing your thought process.
+
+When you receive a question, follow these steps to provide an accurate and relevant response:
+
+1-Understand the Question: Read carefully to fully comprehend the context and details.
+2-Break Down the Question: Break down the question into more specific sub-questions.
+3-Identify Key Elements: Identify important points and potential sub-questions.
+4-Formulate a Hypothesis: Propose a preliminary idea based on your understanding.
+5-Gather Evidence: Verify information from your knowledge base.
+6-Analyze Consequences: Consider the potential consequences of your response.
+7-Question Your Hypotheses: Consider alternative perspectives to your initial hypothesis.
+8-Consider Alternative Scenarios: Think of innovative solutions or alternative scenarios to solve the problem posed.
+9-Use Analogies and Metaphors: Illustrate your points with analogies and metaphors to make the explanation more intuitive.
+10-Provide Real Examples and Case Studies: Use concrete examples and case studies to make the response more concrete.
+11-Acknowledge Limitations: Recognize the limits of your knowledge and indicate when you cannot provide an accurate response.
+12-Use an Engaging and Accessible Tone: Use an engaging tone and accessible language to make the response more enjoyable to read and understand.
+13-Explain Your Responses: Clearly explain your responses to reinforce transparency and user trust.
+14-Cite Sources: If possible, cite reliable sources from your knowledge base to support your assertions.
+15-Refine Your Response: Clarify your thoughts and improve your reasoning.
+16-Review Your Response: Read through to ensure it is clear, concise, and error-free.
+Then provide the final response.
+Final Answer --> {{answer}}
+"""
 # UI Configuration
 TITLE = "<h1><center>AI Reasoning Assistant</center></h1>"
 PLACEHOLDER = "Ask me anything! I'll think through it step by step."
@@ -211,10 +171,10 @@ def chat_response(
     history: list,
     chat_display: str,
     system_prompt: str,
-    temperature: float = 0.
+    temperature: float = 0.2,
     max_new_tokens: int = 32000,
     top_p: float = 0.8,
-    top_k: int =
+    top_k: int = 45,
     penalty: float = 1.2,
 ):
     """Generate chat responses, keeping tags visible in the output"""
@@ -316,7 +276,7 @@ def main():
         minimum=0,
         maximum=1,
         step=0.1,
-        value=0.
+        value=0.2,
         label="Temperature",
     )
     max_tokens = gr.Slider(
@@ -337,7 +297,7 @@ def main():
         minimum=1,
         maximum=100,
         step=1,
-        value=
+        value=45,
         label="Top-k",
     )
     penalty = gr.Slider(
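The commit only touches the prompt text and the default sampling values (temperature 0.2, top-p 0.8, top-k 45, repetition penalty 1.2, up to 32000 new tokens). As a rough, hypothetical sketch of how defaults like these typically flow into a Hugging Face-style generation call — the helpers `build_generation_config` and `build_messages` are illustrative and do not appear in app.py:

```python
# Illustrative only: assemble the diff's sampling defaults into the kwargs
# dict a transformers `model.generate(**kwargs)` call would consume, and build
# the chat-template message list with the system prompt in front.

DEFAULT_SYSTEM_PROMPT = (
    "You are a highly skilled and meticulous reasoning engine, capable of "
    "breaking down complex problems into a series of logical, step-by-step inferences."
)

def build_generation_config(
    temperature: float = 0.2,
    max_new_tokens: int = 32000,
    top_p: float = 0.8,
    top_k: int = 45,
    penalty: float = 1.2,
) -> dict:
    """Collect the sampling defaults shown in the diff into one kwargs dict."""
    return {
        "do_sample": temperature > 0,  # fall back to greedy decoding at temperature 0
        "temperature": temperature,
        "max_new_tokens": max_new_tokens,
        "top_p": top_p,
        "top_k": top_k,
        "repetition_penalty": penalty,
    }

def build_messages(system_prompt: str, history: list, user_message: str) -> list:
    """System prompt first, then alternating user/assistant turns, then the new query."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_message})
    return messages

cfg = build_generation_config()
msgs = build_messages(DEFAULT_SYSTEM_PROMPT, [], "What is the capital of France?")
print(cfg["top_k"], cfg["temperature"], msgs[0]["role"])  # → 45 0.2 system
```

In the real app these kwargs would be passed to `model.generate` (or a streaming equivalent) after applying the tokenizer's chat template to the message list; keeping the defaults in one place makes it easy to bind them to the Gradio sliders shown in the diff.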