---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---
# <img src="assets/icon.png" width="35" /> ReFocus
This repo contains the model for the paper "ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding".
[**Homepage**](https://zeyofu.github.io/ReFocus/) | [**Paper**](https://huggingface.co/papers/2501.05452) | [**Code**](https://github.com/zeyofu/ReFocus_Code)
# Introduction
![ReFocus teaser](assets/teaser.png)
# ReFocus Finetuning
We follow the [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook/blob/main/md/04.Fine-tuning/FineTuning_Vision.md) for the supervised finetuning experiments.
## Inference with the Finetuned Model
We release our best finetuned ReFocus model, trained with full chain-of-thought data, at this [Hugging Face link](https://huggingface.co/Fiaa/ReFocus).
This model is finetuned from Phi-3.5-vision, and we used the following prompt during evaluation:
```
<|image_1|>\n{question}\nThought:
```
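For reference, here is a minimal inference sketch with `transformers`. It assumes the checkpoint loads through the standard Phi-3.5-vision path (`trust_remote_code=True`) and that the raw evaluation prompt above is passed directly to the processor; the example image path and question are hypothetical.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "Fiaa/ReFocus"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example_table.png")  # hypothetical input image
question = "Which year has the highest revenue?"  # hypothetical question

# The evaluation prompt shown above: image placeholder, question, "Thought:".
prompt = f"<|image_1|>\n{question}\nThought:"
inputs = processor(prompt, [image], return_tensors="pt").to("cuda")

output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```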
To force the model to generate bounding box coordinates for refocusing, you can try this prompt:
```
<|image_1|>\n{question}\nThought: The areas to focus on in the image have bounding box coordinates:
```
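Once the generated thought contains coordinates, a simple way to act on them is to crop the named regions and feed the crops back to the model. The sketch below is an illustration only: the `[x1, y1, x2, y2]` output format and the integer pixel-coordinate convention are assumptions, not guaranteed by the model.

```python
import re
from PIL import Image

def crop_refocus_regions(image, thought):
    """Crop every "[x1, y1, x2, y2]" box found in the generated thought.

    Assumes boxes are emitted as integer pixel coordinates; adjust the
    pattern if the model uses a different format.
    """
    boxes = re.findall(r"\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]", thought)
    return [image.crop(tuple(map(int, b))) for b in boxes]

image = Image.open("example_table.png")  # hypothetical input image
thought = "The areas to focus on in the image have bounding box coordinates: [12, 40, 380, 120]"
crops = crop_refocus_regions(image, thought)  # one PIL crop per detected box
```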