Model
llava-phi-3-mini-pretrain is a LLaVA projector pretrained by XTuner from microsoft/Phi-3-mini-4k-instruct and CLIP-ViT-Large-patch14-336 on the ShareGPT4V-PT dataset.
The fine-tuned LLaVA model is available at xtuner/llava-phi-3-mini.
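XTuner's `xtuner chat` command can load a LLaVA checkpoint together with its base LLM and visual encoder. A minimal sketch of chatting with the fine-tuned model, assuming XTuner is installed and following its usual CLI conventions (the `phi3_chat` template name and the image path are assumptions, not taken from this card):

```shell
# Install XTuner (assumed install target).
pip install -U xtuner

# Chat with the fine-tuned LLaVA model about a local image.
xtuner chat microsoft/Phi-3-mini-4k-instruct \
  --visual-encoder openai/clip-vit-large-patch14-336 \
  --llava xtuner/llava-phi-3-mini \
  --prompt-template phi3_chat \
  --image path/to/example.jpg
```

The first positional argument is the base LLM; `--llava` points at the LLaVA weights, and `--visual-encoder` supplies the CLIP image tower the projector was pretrained against.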
Citation
@misc{2023xtuner,
  title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
  author={XTuner Contributors},
  howpublished={\url{https://github.com/InternLM/xtuner}},
  year={2023}
}