---
base_model:
- PrincetonPLI/Eagle-X2-Llama3-8B
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: image-text-to-text
---

# Model Card for Eagle-X2-Llama3-8B-ConsecutiveTableReadout-Mix-160k
This model follows the adapter-based VLM architecture of [LLaVA](https://github.com/haotian-liu/LLaVA) and [Eagle](https://github.com/NVlabs/EAGLE). It uses [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base LLM, and CLIP-448 (based on [CLIP-336](https://huggingface.co/openai/clip-vit-large-patch14-336)) and [ConvNeXt](https://github.com/facebookresearch/ConvNeXt) as the visual encoders.
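
Below is a minimal sketch of fetching the checkpoint from the Hub. The `snapshot_download` call is standard `huggingface_hub` API; the loading step is only sketched in comments because the exact entry point (a LLaVA-style `load_pretrained_model` in the Eagle codebase) is an assumption that should be verified against the [Eagle](https://github.com/NVlabs/EAGLE) repository.

```python
# Minimal sketch: fetch the checkpoint, then load it through the Eagle codebase.
from huggingface_hub import snapshot_download

# Download the full repository snapshot (weights, config, tokenizer files).
model_path = snapshot_download(
    repo_id="PrincetonPLI/Eagle-X2-Llama3-8B-ConsecutiveTableReadout-Mix-160k"
)
print(model_path)

# Loading step (assumption): the Eagle repo follows LLaVA's builder convention,
# roughly as below. Verify the import path and signature against
# https://github.com/NVlabs/EAGLE before relying on it.
#
# from eagle.model.builder import load_pretrained_model
# tokenizer, model, image_processor, context_len = load_pretrained_model(
#     model_path, model_base=None, model_name="eagle"
# )
```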

## Training Details
We trained [Eagle-X2-Llama3-8B](https://huggingface.co/PrincetonPLI/Eagle-X2-Llama3-8B) on 160k examples of **Mix** supervision for the Consecutive Table Readout task.

## Citation
Paper: [Generalizing from SIMPLE to HARD Visual Reasoning](https://arxiv.org/abs/2501.02669)
```
@misc{park2025generalizingsimplehardvisual,
      title={Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs?},
      author={Simon Park and Abhishek Panigrahi and Yun Cheng and Dingli Yu and Anirudh Goyal and Sanjeev Arora},
      year={2025},
      eprint={2501.02669},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.02669},
}
```

## Contact
Simon Park, Princeton University

Abhishek Panigrahi, Princeton University

Yun Cheng, Princeton University

{juhyunp, ap34, yc6206} 'at' princeton 'dot' edu