Simon Park committed
Commit dd37dac · 1 Parent(s): 85c07d0

update model card

Files changed (1): README.md (+34 -1)
README.md CHANGED
@@ -1,3 +1,36 @@
 ---
-license: mit
+base_model:
+- PrincetonPLI/Eagle-X2-Llama3-8B
+library_name: transformers
+license: cc-by-nc-sa-4.0
+pipeline_tag: image-text-to-text
 ---
+
+# Model Card for Eagle-X2-Llama3-8B-ConsecutiveTableReadout-Mix-160k
+This model follows the adapter-based VLM architecture of [LLaVA](https://github.com/haotian-liu/LLaVA) and [Eagle](https://github.com/NVlabs/EAGLE). It uses [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base LLM, with CLIP-448 (based on [CLIP-336](https://huggingface.co/openai/clip-vit-large-patch14-336)) and [ConvNeXt](https://github.com/facebookresearch/ConvNeXt) as the visual encoders.
+
+## Training Details
+We trained [Eagle-X2-Llama3-8B](https://huggingface.co/PrincetonPLI/Eagle-X2-Llama3-8B) on 160k examples of **Mix** supervision for the Consecutive Table Readout task.
+
+## Citation
+Paper: [Generalizing from SIMPLE to HARD Visual Reasoning](https://arxiv.org/abs/2501.02669)
+```
+@misc{park2025generalizingsimplehardvisual,
+      title={Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs?},
+      author={Simon Park and Abhishek Panigrahi and Yun Cheng and Dingli Yu and Anirudh Goyal and Sanjeev Arora},
+      year={2025},
+      eprint={2501.02669},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV},
+      url={https://arxiv.org/abs/2501.02669},
+}
+```
+
+## Contact
+Simon Park, Princeton University
+
+Abhishek Panigrahi, Princeton University
+
+Yun Cheng, Princeton University
+
+{juhyunp, ap34, yc6206} 'at' princeton 'dot' edu
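
Below is a minimal, untested inference sketch for the model described in the updated card. It assumes the checkpoint loads through the standard `transformers` Auto classes with `trust_remote_code=True` and that the repo id is `PrincetonPLI/Eagle-X2-Llama3-8B-ConsecutiveTableReadout-Mix-160k` (inferred from the card title); since the model follows the Eagle/LLaVA adapter structure, the official Eagle codebase may be required instead, and the image URL and prompt below are placeholders.

```python
# Minimal inference sketch (untested). Assumes the repo exposes standard
# transformers Auto classes via trust_remote_code; the Eagle/LLaVA codebase
# may be required instead for this adapter-based VLM.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

# Assumed repo id, inferred from the model card title.
model_id = "PrincetonPLI/Eagle-X2-Llama3-8B-ConsecutiveTableReadout-Mix-160k"

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(model_id, trust_remote_code=True)

# Placeholder table image and instruction for the Consecutive Table Readout task.
url = "https://example.com/table.png"  # hypothetical image URL
image = Image.open(requests.get(url, stream=True).raw)
prompt = "Read the marked cells of the table in order."

inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```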