Update README.md
README.md
---
license: apache-2.0
inference: false
---

# SLIM-Q-GEN-PHI-3

<!-- Provide a quick summary of what the model is/does. -->

**slim-q-gen-phi-3** implements a specialized function call for question generation from a context passage, with output in the form of a Python dictionary, e.g.,

`{'question': ['What were earnings per share in the most recent quarter?'] }`

This model is finetuned on top of the phi-3-mini-4k-instruct base.

For fast inference, we recommend the 'quantized tool' version, e.g., [**'slim-q-gen-phi-3-tool'**](https://huggingface.co/llmware/slim-q-gen-phi-3-tool).
## Prompt format:

`function = "generate"`

`params = "{'question', 'boolean', or 'multiple choice'}"`

`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
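As a concrete illustration, here is a minimal sketch of assembling this prompt in Python (the variable values are illustrative):

    function = "generate"
    params = "boolean"
    text = "Tesla stock declined yesterday 8% in premarket trading."

    # follows the template above: "<human> " + {text} + "\n" + "<{function}> " + {params} + "</{function}>" + "\n<bot>:"
    prompt = "<human> " + text + "\n" + "<" + function + "> " + params + "</" + function + ">" + "\n<bot>:"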
<details>
<summary>Transformers Script </summary>

    import ast
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("llmware/slim-q-gen-phi-3")
    tokenizer = AutoTokenizer.from_pretrained("llmware/slim-q-gen-phi-3")

    function = "generate"
    params = "boolean"

    text = "Tesla stock declined yesterday 8% in premarket trading after a poorly-received event in San Francisco yesterday, in which the company indicated a likely shortfall in revenue."
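    # (sketch) the omitted lines assemble the prompt and run generation --
    # assumed standard transformers usage, not necessarily the card's exact code:
    #   prompt = "<human> " + text + "\n" + "<" + function + "> " + params + "</" + function + ">" + "\n<bot>:"
    #   inputs = tokenizer(prompt, return_tensors="pt")
    #   outputs = model.generate(inputs.input_ids, max_new_tokens=200)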
    ...

    print("output only: ", output_only)

`{'llm_response': {'question': ['Did Tesla stock decline more than 8% yesterday?']} }`

    # here's the fun part
    try:
        output_only = ast.literal_eval(llm_string_output)

    ...

</details>
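The `ast.literal_eval` step is what turns the model's string output into a usable Python dictionary; a self-contained version of that pattern (the except branch here is an assumption, for illustration):

    import ast

    llm_string_output = "{'question': ['Did Tesla stock decline more than 8% yesterday?']}"
    try:
        output_only = ast.literal_eval(llm_string_output)  # string -> python dict
    except (ValueError, SyntaxError):
        output_only = {}  # fall back if the model emits a malformed dictionary string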
<details>
<summary>Using as Function Call in LLMWare</summary>

    from llmware.models import ModelCatalog

    slim_model = ModelCatalog().load_model("llmware/slim-q-gen-phi-3")
    response = slim_model.function_call(text, params=["boolean"], function="generate")

    print("llmware - llm_response: ", response)

</details>
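Beyond 'boolean', the prompt format section documents two other params options; the same call pattern should apply (a sketch, with outputs varying by option):

    # per the prompt format: 'question' -> open-ended question(s),
    # 'multiple choice' -> question with answer choices
    response = slim_model.function_call(text, params=["question"], function="generate")
    response = slim_model.function_call(text, params=["multiple choice"], function="generate")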