doberst committed
Commit 07a3bf3 · verified · 1 Parent(s): 4e4aaa8

Update README.md

Files changed (1)
  1. README.md +16 -14
README.md CHANGED
@@ -1,25 +1,25 @@
  ---
- license: cc-by-sa-4.0
+ license: apache-2.0
  inference: false
  ---

- # SLIM-EXTRACT
+ # SLIM-Q-GEN-PHI-3

  <!-- Provide a quick summary of what the model is/does. -->

- **slim-extract** implements a specialized function-calling customizable 'extract' capability that takes as an input a context passage, a customized key, and outputs a python dictionary with key that corresponds to the customized key, with a value consisting of a list of items extracted from the text corresponding to that key, e.g.,
+ **slim-q-gen-phi-3** implements a specialized function-calling question-generation capability that takes a context passage as input and outputs a python dictionary, e.g.,

- &nbsp;&nbsp;&nbsp;&nbsp;`{'universities': ['Berkeley, Stanford, Yale, University of Florida, ...'] }`
+ &nbsp;&nbsp;&nbsp;&nbsp;`{'question': ['What were earnings per share in the most recent quarter?'] }`

- This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which in turn, is a fine-tune of stabilityai/stablelm-3b-4elt.
+ This model is fine-tuned on top of the phi-3-mini-4k-instruct base.

- For fast inference use, we would recommend the 'quantized tool' version, e.g., [**'slim-extract-tool'**](https://huggingface.co/llmware/slim-extract-tool).
+ For fast inference use, we would recommend the 'quantized tool' version, e.g., [**'slim-q-gen-phi-3-tool'**](https://huggingface.co/llmware/slim-q-gen-phi-3-tool).


  ## Prompt format:

- `function = "extract"`
- `params = "{custom key}"`
+ `function = "generate"`
+ `params = "{'question', 'boolean', or 'multiple choice'}"`
  `prompt = "<human> " + {text} + "\n" + `
  &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp;`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
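For reference, the prompt can be assembled directly in python from the format above. A minimal sketch, reusing the example passage that appears later in this card:

    # assemble the prompt exactly per the format documented above
    function = "generate"
    params = "question"   # or "boolean" / "multiple choice"

    text = "Tesla stock declined yesterday 8% in premarket trading after a poorly-received event in San Francisco yesterday, in which the company indicated a likely shortfall in revenue."

    prompt = "<human> " + text + "\n" + f"<{function}> " + params + f"</{function}>" + "\n<bot>:"
    print(prompt)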
 
@@ -27,11 +27,11 @@ For fast inference use, we would recommend the 'quantized tool' version, e.g.,
  <details>
  <summary>Transformers Script </summary>

- model = AutoModelForCausalLM.from_pretrained("llmware/slim-extract")
- tokenizer = AutoTokenizer.from_pretrained("llmware/slim-extract")
+ model = AutoModelForCausalLM.from_pretrained("llmware/slim-q-gen-phi-3")
+ tokenizer = AutoTokenizer.from_pretrained("llmware/slim-q-gen-phi-3")

- function = "extract"
- params = "company"
+ function = "generate"
+ params = "boolean"

  text = "Tesla stock declined yesterday 8% in premarket trading after a poorly-received event in San Francisco yesterday, in which the company indicated a likely shortfall in revenue."
 
@@ -53,6 +53,8 @@ For fast inference use, we would recommend the 'quantized tool' version, e.g.,

  print("output only: ", output_only)

+ `{'llm_response': {'question': ['Did Tesla stock decline more than 8% yesterday?']} }`
+
  # here's the fun part
  try:
  output_only = ast.literal_eval(llm_string_output)
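Since the diff only surfaces the changed fragments of the Transformers script, the following end-to-end sketch stitches the pieces together. The generation settings (greedy decoding, max_new_tokens=200) are assumptions rather than values taken from the model card:

    import ast

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # note: depending on your transformers version, phi-3 models may require trust_remote_code=True
    model = AutoModelForCausalLM.from_pretrained("llmware/slim-q-gen-phi-3")
    tokenizer = AutoTokenizer.from_pretrained("llmware/slim-q-gen-phi-3")

    function = "generate"
    params = "boolean"

    text = ("Tesla stock declined yesterday 8% in premarket trading after a "
            "poorly-received event in San Francisco yesterday, in which the "
            "company indicated a likely shortfall in revenue.")

    # assemble the prompt per the documented format
    prompt = "<human> " + text + "\n" + f"<{function}> " + params + f"</{function}>" + "\n<bot>:"

    inputs = tokenizer(prompt, return_tensors="pt")
    start_of_input = len(inputs.input_ids[0])

    with torch.no_grad():
        outputs = model.generate(
            inputs.input_ids,
            attention_mask=inputs.attention_mask,
            pad_token_id=tokenizer.eos_token_id,
            do_sample=False,        # assumption: greedy decoding
            max_new_tokens=200,     # assumption: enough room for one dict
        )

    # decode only the newly generated tokens, not the prompt
    llm_string_output = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
    print("output only: ", llm_string_output)

    # here's the fun part - recover a python dict from the string output
    try:
        output_only = ast.literal_eval(llm_string_output)
        print("dict output: ", output_only)
    except (ValueError, SyntaxError):
        print("could not parse output as a python dictionary")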
@@ -70,8 +72,8 @@ For fast inference use, we would recommend the 'quantized tool' version, e.g.,
  <summary>Using as Function Call in LLMWare</summary>

  from llmware.models import ModelCatalog
- slim_model = ModelCatalog().load_model("llmware/slim-extract")
- response = slim_model.function_call(text, params=["company"], function="extract")
+ slim_model = ModelCatalog().load_model("llmware/slim-q-gen-phi-3")
+ response = slim_model.function_call(text, params=["boolean"], function="generate")

  print("llmware - llm_response: ", response)
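The same function_call interface can also exercise all three documented question types in turn. A short sketch, assuming only the llmware calls shown in this card:

    from llmware.models import ModelCatalog

    text = ("Tesla stock declined yesterday 8% in premarket trading after a "
            "poorly-received event in San Francisco yesterday, in which the "
            "company indicated a likely shortfall in revenue.")

    slim_model = ModelCatalog().load_model("llmware/slim-q-gen-phi-3")

    # the three question types documented in the prompt format section above
    for question_type in ["question", "boolean", "multiple choice"]:
        response = slim_model.function_call(text, params=[question_type], function="generate")
        print(f"llmware - llm_response ({question_type}): ", response)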
 
 