---
license: creativeml-openrail-m
---

# Sparrow

**Tiny Vision Language Model**

**A custom model enhanced for educational contexts.** Sparrow is trained on slide-text pairs from machine learning classes. It connects a frozen pre-trained vision encoder (SigLIP) to a frozen language model (Phi-2) through a trainable projector, and is optimized with a language-modeling loss so it can understand and generate educational content tailored to machine learning coursework.
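The exact projector architecture is not published in this card. As a rough illustration, the sketch below shows a common way to implement such a projector (a small MLP mapping vision-encoder patch embeddings into the language model's embedding space, as in LLaVA-style models); the class name, dimensions, and layer count here are assumptions, not the released weights.

```python
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    """Hypothetical two-layer MLP projector (LLaVA-1.5 style).

    Maps frozen SigLIP patch embeddings into the frozen Phi-2
    embedding space; in this setup only the projector is trained.
    """

    def __init__(self, vision_dim: int = 1152, lm_dim: int = 2560):
        super().__init__()
        # 1152 (SigLIP-so400m) and 2560 (Phi-2) are assumed sizes.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, patch_embeds: torch.Tensor) -> torch.Tensor:
        # patch_embeds: (batch, num_patches, vision_dim)
        # returns:      (batch, num_patches, lm_dim) pseudo-token embeddings
        return self.proj(patch_embeds)
```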

A 3B-parameter model built by @Manish using SigLIP, Phi-2, a language-modeling loss, LLaVA data, and a custom training dataset. The model is released for research purposes only; commercial use is not allowed.

Pretraining is complete. If more question-answer pairs are added in the future, LoRA fine-tuning can be applied on top of this model.
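As a minimal sketch of what that fine-tuning step could look like, the snippet below attaches LoRA adapters with the `peft` library. It assumes `peft` is installed and that the custom architecture is compatible with it; the rank, alpha, and target modules are placeholders, not a published recipe.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ManishThota/Sparrow", trust_remote_code=True)

# Hypothetical LoRA config: rank, alpha, and target modules are
# placeholders; Phi-2 attention projections are commonly targeted.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```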

## How to use

### Install dependencies

```bash
pip install transformers  # latest version is ok, but we recommend v4.31.0
pip install -q pillow accelerate einops
```

You can use the following code for model inference. The text-instruction format is similar to LLaVA's.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

torch.set_default_device("cuda")

# Create the model and tokenizer (trust_remote_code loads the custom
# vision-language wrapper shipped with the checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    "ManishThota/Sparrow",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ManishThota/Sparrow", trust_remote_code=True)

# Function to generate the answer
def predict(question, image_path):
    # Build the LLaVA-style prompt; <image> marks where the image
    # tokens are inserted
    text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{question}? ASSISTANT:"
    image = Image.open(image_path)

    input_ids = tokenizer(text, return_tensors='pt').input_ids.to('cuda')
    image_tensor = model.image_preprocess(image)

    # Generate the answer and strip the prompt tokens from the output
    output_ids = model.generate(
        input_ids,
        max_new_tokens=25,
        images=image_tensor,
        use_cache=True)[0]

    return tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip()
```
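For a quick check, you can call `predict` on a local slide image; the file name below is just a placeholder. Note that `predict` appends the question mark for you.

```python
# "slide.png" is a placeholder path to a local image
answer = predict("What does this slide explain", "slide.png")
print(answer)
```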