Fine-tune Florence-2 on any task 🔥

Today we release a notebook and a walkthrough blog on fine-tuning Florence-2 on the DocVQA dataset @andito @SkalskiP

Blog: https://huggingface.co/blog
📕 Notebook: https://colab.research.google.com/drive/1hKDrJ5AH_o7I95PtZ9__VlCTNAo1Gjpf?usp=sharing

📖 Florence-2 is a great vision-language model thanks to its massive training dataset and small size! The model requires conditioning through task prefixes, and it isn't fully generalist, so it needs fine-tuning to pick up a new task such as DocVQA 📝

We fine-tuned the model on an A100 (a smaller GPU with a smaller batch size also works) and saw that it picks up new tasks 🥹

See below what it looks like before and after fine-tuning 🤩

Play with the demo here: andito/Florence-2-DocVQA 🏄‍♀️
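Since Florence-2 is conditioned through task prefixes, fine-tuning on a new task amounts to teaching it a new prefix. Here is a minimal sketch of how such prompts are assembled; the `<DocVQA>` prefix and the helper function are illustrative assumptions, not the exact code from the notebook — see the blog and notebook for the real training setup.

```python
# Sketch of Florence-2-style task-prefix conditioning.
# The "<DocVQA>" prefix is assumed from the walkthrough; built-in
# prefixes like "<CAPTION>" come with the base model.

def build_prompt(task_prefix: str, question: str = "") -> str:
    """Every Florence-2 input starts with a task prefix; question-style
    tasks (like DocVQA) append the free-text question after it."""
    return f"{task_prefix}{question}"

# A task the base model already knows zero-shot:
caption_prompt = build_prompt("<CAPTION>")

# The new task learned through fine-tuning on DocVQA:
docvqa_prompt = build_prompt("<DocVQA>", "What is the invoice total?")

print(caption_prompt)  # <CAPTION>
print(docvqa_prompt)   # <DocVQA>What is the invoice total?
```

At inference time this prompt string is passed together with the document image to the model's processor, so the base model and the fine-tuned model differ only in which prefixes they respond to.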