---
license: creativeml-openrail-m
language:
- en
metrics:
- bleu
---
<h1 align='center' style='font-size: 36px; font-weight: bold;'>Sparrow</h1>
<h3 align='center' style='font-size: 24px;'>Blazing Fast Tiny Vision Language Model</h3>

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/650c7fbb8ffe1f53bdbe1aec/DTjDSq2yG-5Cqnk6giPFq.jpeg" width="50%" height="auto"/>
</p>

<p align='center' style='font-size: 16px;'>A custom 3B-parameter model tuned for educational contexts: it is trained on slide-text pairs from machine learning classes, connecting a frozen pre-trained vision encoder (SigLIP) to a frozen language model (Phi-2) through a trainable projector, optimized with a language-modeling loss. The result is a model that understands and generates educational content tailored to machine learning education. Built by <a href="https://www.linkedin.com/in/manishkumarthota/">@Manish</a>. The model is released for research purposes only; commercial use is not allowed.</p>
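
For intuition, here is a minimal sketch of what such a vision-to-language projector can look like. The two-layer MLP design and the dimensions are assumptions for illustration (SigLIP-style encoders commonly emit 1152-dim patch embeddings, and Phi-2 uses a 2560-dim hidden state); the actual projector shipped with this checkpoint lives in the repository's remote code.

```python
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    """Hypothetical two-layer MLP that maps frozen vision-encoder
    patch features into the frozen language model's embedding space."""

    def __init__(self, vision_dim: int = 1152, lm_dim: int = 2560):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # (batch, num_patches, vision_dim) -> (batch, num_patches, lm_dim)
        return self.proj(patch_embeddings)
```

Because the vision encoder and Phi-2 stay frozen, only the projector receives gradients, so the language-modeling loss on the slide-text pairs shapes the projection alone.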

## How to use

**Install dependencies**
```bash
pip install transformers # latest version is ok, but we recommend v4.31.0
pip install -q pillow accelerate einops
```
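
If generation misbehaves, it can help to confirm which transformers version is actually installed. This check is just a convenience, not part of the required setup:

```python
import transformers

# The card recommends v4.31.0; newer releases should also work.
print(transformers.__version__)
```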

You can use the following code for model inference. The format of the text instruction is similar to [LLaVA](https://github.com/haotian-liu/LLaVA).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

torch.set_default_device("cuda")

# Create the model (trust_remote_code pulls in the vision tower and projector)
model = AutoModelForCausalLM.from_pretrained(
    "ManishThota/Sparrow",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ManishThota/Sparrow", trust_remote_code=True)

# Generate an answer to a question about an image
def predict(question, image_path):
    # Set inputs: LLaVA-style prompt plus the image
    text = f"USER: <image>\n{question}? ASSISTANT:"
    image = Image.open(image_path)

    input_ids = tokenizer(text, return_tensors='pt').input_ids.to('cuda')
    image_tensor = model.image_preprocess(image)

    # Generate the answer
    output_ids = model.generate(
        input_ids,
        max_new_tokens=25,
        images=image_tensor,
        use_cache=True)[0]

    # Decode only the tokens generated after the prompt
    return tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip()
```
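
A minimal usage sketch; the file name and question below are made up for illustration. Note that `predict()` appends a question mark inside the prompt template, so pass the question without one:

```python
# Hypothetical example: ask about a lecture slide saved locally
answer = predict("What does this slide say about gradient descent", "slide_12.jpg")
print(answer)
```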