Training details?
#2
by MonolithFoundation - opened
I see the architecture didn't change at all compared with the original LLaVA-NeXT, so where does the performance gain come from? More data?
We will soon release Ivy-VL2 and a tech report; all data is sourced from open datasets.
I noticed that you have open-sourced multimodal large models. Would you be interested in collaborating to build even stronger and more impactful multimodal large models? Feel free to reach out to me at [email protected].
Yeah, with pleasure. I have GPUs as well. Currently I'm mostly interested in pretraining the VE (vision encoder) and then training the MLLM.
I will send my contact information (WeChat) to your email. Can we get in touch?
Yeah, my pleasure.
Yeah, I have already sent a friend request.