---
license: mit
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
language:
- en
tags:
- medical
---


<div align="center">
<h1>
  MedSSS-8B-Policy
</h1>
</div>

<div align="center">
<a href="https://github.com/pixas/MedSSS" target="_blank">GitHub</a> | <a href="" target="_blank">Paper</a>
</div>

# <span>Introduction</span>
**MedSSS-Policy** is the policy model designed for slow-thinking medical reasoning. It performs explicit step-wise reasoning and finalizes its answer at the end of the response.

For more information, visit our GitHub repository: 
[https://github.com/pixas/MedSSS](https://github.com/pixas/MedSSS).




# <span>Usage</span>
We release the policy model as a LoRA adapter, which reduces the memory needed to use it.
Because this LoRA adapter is built on `meta-llama/Llama-3.1-8B-Instruct`, you first need to prepare the base model on your platform.
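For example, both the base model and the adapter can be fetched ahead of time with `huggingface_hub` (a minimal sketch; it assumes you have accepted the Llama 3.1 license and are authenticated with a Hugging Face access token):

```python
from huggingface_hub import snapshot_download

# The base model is gated: accepting the license and logging in
# (e.g., `huggingface-cli login`) are required before downloading.
base_path = snapshot_download("meta-llama/Llama-3.1-8B-Instruct")
adapter_path = snapshot_download("pixas/MedSSS_Policy")
```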
You can deploy it with tools like [vLLM](https://github.com/vllm-project/vllm) or [SGLang](https://github.com/sgl-project/sglang), or perform direct inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model; the policy model is a LoRA adapter applied on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
# The adapter inherits the base model's dtype and device placement.
model = PeftModel.from_pretrained(base_model, "pixas/MedSSS_Policy")
tokenizer = AutoTokenizer.from_pretrained("pixas/MedSSS_Policy")

input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]

# apply_chat_template already inserts the special tokens, so don't add them again.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
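Alternatively, the adapter can be served through vLLM's LoRA support. The sketch below uses vLLM's offline API; the local adapter path and the sampling settings are illustrative assumptions, not values from the MedSSS release:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Enable LoRA support when loading the base model.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", enable_lora=True)
tokenizer = AutoTokenizer.from_pretrained("pixas/MedSSS_Policy")

messages = [{"role": "user", "content": "How to stop a cough?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# LoRARequest takes a name, an integer id, and a local path to the adapter
# (e.g., the directory returned by snapshot_download above).
outputs = llm.generate(
    [prompt],
    SamplingParams(temperature=0.6, max_tokens=2048),
    lora_request=LoRARequest("medsss_policy", 1, "/path/to/MedSSS_Policy"),
)
print(outputs[0].outputs[0].text)
```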

MedSSS-Policy adopts a step-wise reasoning approach, with outputs formatted as:

```
Step 0: Let's break down this problem step by step.
Step 1: ...
[several steps]
Step N: [last reasoning step]\n\nThe answer is {answer}
```
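
Because the final answer always follows the literal `The answer is` marker, downstream code can recover it with a small helper. `extract_answer` below is a hypothetical utility for illustration, not part of the MedSSS codebase:

```python
def extract_answer(response: str) -> str | None:
    """Return the text after the last 'The answer is' marker, or None."""
    marker = "The answer is"
    if marker not in response:
        return None
    return response.rsplit(marker, 1)[1].strip()

# Example: extract_answer("Step 0: ...\n\nThe answer is honey and rest.")
# -> "honey and rest."
```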