---
license: apache-2.0
datasets:
- eltorio/ROCO-radiology
language:
- en
- fr
base_model:
- HuggingFaceM4/Idefics3-8B-Llama3
pipeline_tag: image-to-text
---
# IDEFICS3_ROCO
![Stage](https://img.shields.io/badge/stage-early%20development-yellow)![License](https://img.shields.io/badge/license-Apache%202.0-blue)![Contributors Welcome](https://img.shields.io/badge/contributors-welcome-brightgreen)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/eltorio/IDEFICS3_ROCO/blob/main/ROCO-idefics3.ipynb)
## A Fine-tuned Radiology-focused Model based on Hugging Face's Idefics3 Model
This repository contains a fine-tuned version of the Hugging Face [Idefics3-8B-Llama3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) model, built on top of the Meta Llama 3.1 8B architecture. Our model, `IDEFICS3_ROCO`, has been fine-tuned on the [Radiology Objects in Context (ROCO)](https://huggingface.co/datasets/eltorio/ROCO-radiology) dataset, a large-scale medical and multimodal imaging collection.
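For quick experimentation, the adapter published here can be loaded on top of the base model with `peft`. The following is a minimal inference sketch, assuming the repository's adapter weights load directly with `PeftModel.from_pretrained`; the image path and prompt are placeholders:

```python
# Minimal inference sketch (assumptions: adapter loads with PeftModel,
# image path is a placeholder). Requires transformers >= 4.45 and peft.
import torch
from PIL import Image
from transformers import AutoProcessor, Idefics3ForConditionalGeneration
from peft import PeftModel

base_id = "HuggingFaceM4/Idefics3-8B-Llama3"
adapter_id = "eltorio/IDEFICS3_ROCO"

processor = AutoProcessor.from_pretrained(base_id)
model = Idefics3ForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

image = Image.open("radiology_image.png")  # placeholder: any local radiology image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this radiology image."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```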
### Model Information
* **Base Model:** Idefics3-8B-Llama3
* **Fine-tuning Dataset:** Radiology Objects in Context (ROCO)
* **License:** Apache-2.0
* **Current Status:** Fine-tuning is currently paused at checkpoint 2000 (out of 12,267) due to the limits of the Colab free-tier T4 GPU quota. Contributions to complete the fine-tuning process are welcome!
### Training Progress Status
* Current checkpoint: 2000/12267 (~16% completed)
* Estimated remaining GPU time: ~57 hours
* Hardware requirements: T4 GPU (16 GB VRAM) or better
* Last update: November 8, 2024
### Fine-tuning Code
The fine-tuning code is available as a Jupyter notebook in this repository on Hugging Face:
* [ROCO-idefics3.ipynb](https://huggingface.co/eltorio/IDEFICS3_ROCO/blob/main/ROCO-idefics3.ipynb)
The [Jupyter notebook](https://colab.research.google.com/#fileId=https%3A//huggingface.co/eltorio/IDEFICS3_ROCO/blob/main/ROCO-idefics3.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/eltorio/IDEFICS3_ROCO/blob/main/ROCO-idefics3.ipynb) contains the code to fine-tune the Idefics3-8B-Llama3 model on the ROCO dataset. Fine-tuning is currently paused at checkpoint 2000 (out of 12,267) due to the limits of the Colab free-tier T4 GPU quota; contributions to complete it are welcome!
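Since the repository ships PEFT adapter weights, the training recipe is an adapter-based fine-tune; on a 16 GB T4 that implies 4-bit quantization of the base model. The sketch below illustrates a typical QLoRA-style setup for this model class; the hyperparameters and target modules are placeholders, not the notebook's actual configuration:

```python
# Illustrative QLoRA-style setup; placeholder hyperparameters, not the
# notebook's actual configuration. Requires transformers, peft, bitsandbytes.
import torch
from transformers import BitsAndBytesConfig, Idefics3ForConditionalGeneration
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # fit the 8B base model on a 16 GB T4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = Idefics3ForConditionalGeneration.from_pretrained(
    "HuggingFaceM4/Idefics3-8B-Llama3",
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(
    r=8,                                      # placeholder rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)    # only adapter weights are trainable
model.print_trainable_parameters()
```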
### Contributions Welcome
If you have the resources to complete the fine-tuning process, we would appreciate your contribution. Please fork this repository, finish the fine-tuning process, and submit a pull request with your updates.
### Citation
If you use this model in your work, please cite the original Idefics3 model and our fine-tuned model:
* [Idefics3-8B-Llama3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3)
* [IDEFICS3_ROCO](https://huggingface.co/eltorio/IDEFICS3_ROCO)
### Contribution Guide
1. **Technical Requirements**
   * Access to a powerful GPU (T4, V100, A100, or equivalent)
* Python environment with PyTorch
* Disk space: ~50GB
2. **Getting Started**
* Fork the repository
   * Resume from checkpoint 2000/12267 (see the resume sketch after this list)
* Follow instructions in [ROCO-idefics3.ipynb](https://huggingface.co/eltorio/IDEFICS3_ROCO/blob/main/ROCO-idefics3.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/eltorio/IDEFICS3_ROCO/blob/main/ROCO-idefics3.ipynb)
3. **Contact**
* For questions: [link to issues/discussions](https://huggingface.co/eltorio/IDEFICS3_ROCO/discussions)
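To resume training rather than restart it, the Hugging Face `Trainer` can restore model, optimizer, and scheduler state from a saved checkpoint directory. A sketch, assuming `model` and `train_dataset` are prepared as in the notebook and checkpoints use the default `checkpoint-<step>` naming:

```python
# Resume sketch: model and train_dataset are assumed to be prepared as in
# ROCO-idefics3.ipynb; argument values are placeholders, not the notebook's.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="IDEFICS3_ROCO",           # same output_dir the original run used
    per_device_train_batch_size=1,        # placeholder; match the notebook
    gradient_accumulation_steps=8,        # placeholder
    num_train_epochs=1,
    save_steps=500,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# Restores optimizer/scheduler state and the global step, then continues:
trainer.train(resume_from_checkpoint="IDEFICS3_ROCO/checkpoint-2000")
```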
### Acknowledgments
This work was made possible by the [Hugging Face Transformers](https://huggingface.co/) library and the [ROCO-radiology dataset](https://huggingface.co/datasets/eltorio/ROCO-radiology).