A language model with calculator-like functionality

  • Supports calculations with numbers of up to 10 digits
  • Nearly 100% accuracy
  • Uses chain-of-thought (CoT) to calculate, so the calculation process may be lengthy
  • v0.1 only supports addition, subtraction and multiplication
  • Addition supports multiple operands, while subtraction and multiplication currently support only two operands
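Given these limits, it can be convenient to validate a prompt client-side before sending it to the model. A minimal sketch (the `make_prompt` helper is hypothetical, not part of the model or its tokenizer):

```python
def make_prompt(op: str, *nums: int) -> str:
    """Build a prompt like '1234+12345=?' while enforcing the v0.1 limits:
    operands of at most 10 digits; '+' takes two or more operands,
    '-' and '*' exactly two."""
    if op not in ("+", "-", "*"):
        raise ValueError("v0.1 supports only +, - and *")
    if any(len(str(abs(n))) > 10 for n in nums):
        raise ValueError("operands are limited to 10 digits")
    if op in ("-", "*") and len(nums) != 2:
        raise ValueError("subtraction and multiplication take exactly two operands")
    if op == "+" and len(nums) < 2:
        raise ValueError("addition needs at least two operands")
    return op.join(str(n) for n in nums) + "=?"
```

For example, `make_prompt("+", 1234, 12345, 123456)` yields `"1234+12345+123456=?"`.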

Quickstart

from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yuyijiong/llm_calculator_v0.1"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# addition
prompt = "1234+12345+123456=?"

# subtraction
prompt = "1234-12345=?"

# multiplication
prompt = "1234*12345=?"

messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    do_sample=False,
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
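Because the model emits a full CoT trace, the numeric result has to be pulled out of the response. A sketch of one way to do this; the `final answer:` marker is an assumption based on the example trace below, not a documented output contract:

```python
def extract_answer(response: str) -> int:
    """Return the number following the 'final answer:' line of a CoT trace.
    Assumes the trace format shown in this model card's example."""
    for line in response.splitlines():
        if line.strip().startswith("final answer:"):
            return int(line.split(":", 1)[1].strip())
    raise ValueError("no 'final answer:' line found in response")
```

The result can then be checked against ordinary Python arithmetic, e.g. `extract_answer(response) == 1234 + 12345 + 123456`.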

Example

Q: 23*541=?

A:
calculate 23 * 541:
(1, 1) 3 * 1 -> 3 + carry -> 3 -> [3] & carry 0 -> [3]
(1, 2) 3 * 4 -> 12 + carry -> 12 -> [2] & carry 1 -> [20]
(1, 3) 3 * 5 -> 15 + carry -> 16 -> [6] & carry 1 -> [1600]
temp result: 1623
(2, 1) 2 * 1 -> 2 + carry -> 2 -> [2] & carry 0 -> [20]
(2, 2) 2 * 4 -> 8 + carry -> 8 -> [8] & carry 0 -> [800]
(2, 3) 2 * 5 -> 10 + carry -> 10 -> [0] & carry 1 -> [10000]
temp result: 10820
gather temp results: 1623 + 10820

calculate 1623 + 10820:
(1) 3 + 0 + carry -> 3 -> [3] & carry 0
(2) 2 + 2 + carry -> 4 -> [4] & carry 0
(3) 6 + 8 + carry -> 14 -> [4] & carry 1
(4) 1 + 0 + carry -> 2 -> [2] & carry 0
(5) 0 + 1 + carry -> 1 -> [1] & carry 0
gather results: 12443
final answer: 12443
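The trace above is schoolbook long multiplication: one carry-tracking partial product per digit of the first operand, followed by a final addition. A sketch of the same procedure in plain Python (an illustration of the algorithm the trace follows, not the model's actual implementation):

```python
def long_multiply(a: int, b: int):
    """Digit-by-digit multiplication in the same style as the CoT trace:
    build one shifted partial product per digit of `a`, then sum them.
    Returns (partial_products, result)."""
    partials = []
    for i, da in enumerate(reversed(str(a))):   # digits of a, least significant first
        carry, digits = 0, []
        for db in reversed(str(b)):             # digits of b, least significant first
            carry, d = divmod(int(da) * int(db) + carry, 10)
            digits.append(str(d))
        if carry:
            digits.append(str(carry))
        # shift the partial product by the digit's position, as in "temp result"
        partials.append(int("".join(reversed(digits))) * 10 ** i)
    return partials, sum(partials)
```

For 23 * 541 this reproduces the intermediate values in the trace: partial products 1623 and 10820, summing to 12443.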
Model details

  • Base model: Qwen/Qwen2.5-1.5B (fine-tuned)
  • Model size: 1.54B params
  • Tensor type: BF16