---
license: creativeml-openrail-m
---

# Sparrow

**Tiny Vision Language Model**

Sparrow is a 3B-parameter model built by @Manish using SigLIP as the vision encoder, Phi-2 as the language model, a standard language-modeling loss, LLaVA data, and a custom training dataset. The model is released for research purposes only; commercial use is not allowed.
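For orientation, the sketch below shows how a LLaVA-style model typically combines these pieces: SigLIP patch features are projected into Phi-2's embedding space and prepended to the text embeddings before the language-modeling loss is applied. The class name, projector shape, and hidden sizes here are illustrative assumptions, not the released implementation.

```python
# Illustrative LLaVA-style wiring of SigLIP + Phi-2 (assumptions, not the released code).
import torch
import torch.nn as nn

class SparrowSketch(nn.Module):
    def __init__(self, vision_encoder, language_model, vision_dim=1152, text_dim=2560):
        super().__init__()
        self.vision_encoder = vision_encoder   # SigLIP vision tower -> (B, N, vision_dim) patch features
        self.language_model = language_model   # Phi-2 causal LM that accepts inputs_embeds
        # Small MLP projector from the vision feature space into the LM embedding space
        self.projector = nn.Sequential(
            nn.Linear(vision_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, pixel_values, text_embeds, labels):
        patch_feats = self.vision_encoder(pixel_values)    # (B, N, vision_dim)
        image_embeds = self.projector(patch_feats)         # (B, N, text_dim)
        # Prepend projected image tokens to the text embeddings
        inputs_embeds = torch.cat([image_embeds, text_embeds], dim=1)
        # Image positions are ignored (-100) by the language-modeling loss
        ignore = torch.full(image_embeds.shape[:2], -100,
                            dtype=labels.dtype, device=labels.device)
        return self.language_model(inputs_embeds=inputs_embeds,
                                   labels=torch.cat([ignore, labels], dim=1))
```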

Pretraining is complete; if we add more question-answer pairs in the future, we can simply apply LoRA fine-tuning on top of this model (a sketch of such a setup is shown at the end of this section).

## How to use

**Install dependencies**

```bash
pip install transformers  # the latest version is fine, but we recommend v4.31.0
pip install -q pillow accelerate einops
```

You can use the following code for model inference. The format of the text instruction is similar to [LLaVA](https://github.com/haotian-liu/LLaVA).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

torch.set_default_device("cuda")

# Create the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "ManishThota/Sparrow",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ManishThota/Sparrow", trust_remote_code=True)

# Set the inputs (LLaVA-style prompt; <image> marks where the image features go)
text = "A chat between a curious user and an artificial intelligence assistant. USER: <image>\nCan you explain the slide? ASSISTANT:"
image = Image.open("images/week_02_page_02")
input_ids = tokenizer(text, return_tensors='pt').input_ids
image_tensor = model.image_preprocess(image)

# Generate the answer
output_ids = model.generate(
    input_ids,
    max_new_tokens=1500,
    images=image_tensor,
    use_cache=True)[0]

print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
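As noted above, additional question-answer pairs could be folded in with LoRA fine-tuning on top of this checkpoint. The snippet below is a minimal sketch of such a setup using the `peft` library; the rank, target module names, and other hyperparameters are illustrative assumptions, not a released recipe.

```python
# Minimal LoRA fine-tuning sketch with `peft` (illustrative only; the target
# modules and hyperparameters are assumptions, not a released recipe).
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "ManishThota/Sparrow",
    torch_dtype=torch.float16,
    trust_remote_code=True)

lora_config = LoraConfig(
    r=16,                    # low-rank dimension
    lora_alpha=32,           # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
# ...train on the new question-answer pairs with your usual Trainer or training loop...
```

Because only the adapter weights are updated, this keeps memory and compute requirements far below those of full fine-tuning.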