Overview
Moe-2x7b-QA-Code is an advanced AI model designed by the nextai-team to enhance question answering and code generation capabilities. Building on the foundation of its predecessor, Moe-4x7b-reason-code-qa, this iteration introduces refined mechanisms and expanded training datasets to deliver more precise and contextually relevant responses.
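The "Moe" in the model name refers to a mixture-of-experts architecture, in which a gating network routes each input to a small subset of expert sub-networks. The following is a toy sketch of top-k expert routing only; every name and number here is illustrative and this is not the model's actual implementation:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of gate logits
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def moe_layer(x, experts, gate, top_k=2):
    """Route input x to the top_k experts and mix their outputs."""
    probs = softmax(gate(x))
    ranked = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in ranked)  # renormalise over the chosen experts
    out = [0.0] * len(x)
    for i in ranked:
        y = experts[i](x)
        out = [o + (probs[i] / norm) * v for o, v in zip(out, y)]
    return out

# Two toy "experts": one scales the input, one negates it
experts = [lambda x: [2 * v for v in x], lambda x: [-v for v in x]]
gate = lambda x: [sum(x), -sum(x)]  # toy gating logits
print(moe_layer([1.0, 2.0], experts, gate, top_k=1))  # only the best expert fires
```

With `top_k=1` the layer reduces to picking the single highest-probability expert, which keeps compute close to that of one expert while retaining the capacity of several.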
How to Use

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "nextai-team/Moe-2x7b-QA-Code"  # If you want to test your own model, replace this value with the model directory path

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    device_map="auto",
    model_kwargs={"torch_dtype": torch.float16},
)

def generate_response(query):
    messages = [{"role": "user", "content": query}]
    prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
    return outputs[0]['generated_text']

response = generate_response("How to learn coding? Please provide a step-by-step procedure.")
print(response)
```
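The pipeline call above samples with `temperature`, `top_k`, and `top_p`. As a toy illustration of how nucleus (top-p) filtering narrows the candidate set, this sketch is purely illustrative and is not the `transformers` implementation:

```python
def top_p_filter(probs, top_p=0.95):
    """Keep the smallest set of tokens whose cumulative probability reaches top_p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalise the surviving tokens so they form a distribution again
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# Four toy token probabilities; the rarest token is filtered out at top_p=0.9
print(top_p_filter([0.5, 0.3, 0.15, 0.05], top_p=0.9))
```

Lowering `top_p` (or `temperature`) makes output more deterministic; raising them increases diversity at the cost of occasional off-topic tokens.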
Intended Use
This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to: