---
title: YOLO
app_file: demo/hf_demo.py
sdk: gradio
sdk_version: 4.44.0
---
# YOLO: Official Implementation of YOLOv9, YOLOv7
[![Documentation Status](https://readthedocs.org/projects/yolo-docs/badge/?version=latest)](https://yolo-docs.readthedocs.io/en/latest/?badge=latest)
![GitHub License](https://img.shields.io/github/license/WongKinYiu/YOLO)
![WIP](https://img.shields.io/badge/status-WIP-orange)
[![Developer Mode Build & Test](https://github.com/WongKinYiu/YOLO/actions/workflows/develop.yaml/badge.svg)](https://github.com/WongKinYiu/YOLO/actions/workflows/develop.yaml)
[![Deploy Mode Validation & Inference](https://github.com/WongKinYiu/YOLO/actions/workflows/deploy.yaml/badge.svg)](https://github.com/WongKinYiu/YOLO/actions/workflows/deploy.yaml)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/yolov9-learning-what-you-want-to-learn-using/real-time-object-detection-on-coco)](https://paperswithcode.com/sota/real-time-object-detection-on-coco)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)]()
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-green)](https://huggingface.co/spaces/henry000/YOLO)
<!-- > [!IMPORTANT]
> This project is currently a Work In Progress and may undergo significant changes. It is not recommended for use in production environments until further notice. Please check back regularly for updates.
>
> Use of this code is at your own risk and discretion. It is advisable to consult with the project owner before deploying or integrating into any critical systems. -->
Welcome to the official implementation of YOLOv7 and YOLOv9. This repository contains the complete codebase, pre-trained models, and detailed instructions for training and deploying YOLOv9.
## TL;DR
- This is the official YOLO model implementation with an MIT License.
- For quick deployment, install directly from GitHub with pip:
```shell
pip install git+https://github.com/WongKinYiu/YOLO.git
yolo task.data.source=0 # source could be a single file, video, image folder, webcam ID
```
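As the comment above notes, `task.data.source` accepts a single file, a video, an image folder, or a webcam ID. A minimal sketch of how such an argument can be dispatched (a hypothetical helper for illustration, not the repository's actual implementation):

```python
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}
VIDEO_EXTS = {".mp4", ".avi", ".mov", ".mkv"}

def classify_source(source: str) -> str:
    """Classify a task.data.source value into a source type."""
    if source.isdigit():              # e.g. "0" -> webcam device ID
        return "webcam"
    path = Path(source)
    if path.is_dir():                 # folder of images
        return "folder"
    suffix = path.suffix.lower()
    if suffix in VIDEO_EXTS:
        return "video"
    if suffix in IMAGE_EXTS:
        return "image"
    raise ValueError(f"Unrecognized source: {source!r}")

print(classify_source("0"))              # webcam
print(classify_source("demo/clip.mp4"))  # video
```

The real loader would then open the webcam, iterate the folder, or decode the video accordingly.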
## Introduction
- [**YOLOv9**: Learning What You Want to Learn Using Programmable Gradient Information](https://arxiv.org/abs/2402.13616)
- [**YOLOv7**: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors](https://arxiv.org/abs/2207.02696)
## Installation
To get started with YOLOv9's developer mode, we recommend cloning this repository and installing the required dependencies:
```shell
git clone [email protected]:WongKinYiu/YOLO.git
cd YOLO
pip install -r requirements.txt
```
## Features
## Task
These are simple examples. For more customization details, see the [Notebooks](examples) and the **[HOWTO](docs/HOWTO.md)** guide for lower-level modifications.
## Training
To train YOLO on your machine/dataset:
1. Modify the configuration file `yolo/config/dataset/**.yaml` to point to your dataset.
2. Run the training script:
```shell
python yolo/lazy.py task=train dataset=** use_wandb=True
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c weight=False # or more args
```
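The `key=value` arguments above are Hydra-style overrides: a dotted key selects a nested entry in the config tree. A toy sketch of the idea (a simplified stand-in, not Hydra itself):

```python
def apply_override(config: dict, override: str) -> None:
    """Apply one 'a.b.c=value' override to a nested dict in place."""
    key, _, raw = override.partition("=")
    # Naive literal parsing: int, bool, else string.
    if raw.isdigit():
        value = int(raw)
    elif raw in ("True", "False"):
        value = raw == "True"
    else:
        value = raw
    node = config
    parts = key.split(".")
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value

cfg = {"task": {"data": {"batch_size": 16}}, "model": "v9-c"}
apply_override(cfg, "task.data.batch_size=8")
apply_override(cfg, "use_wandb=True")
print(cfg["task"]["data"]["batch_size"])  # 8
```

Hydra additionally handles defaults composition and the `+key=value` syntax for keys not present in the base config.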
### Transfer Learning
To perform transfer learning with YOLOv9:
```shell
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c dataset={dataset_config} device={cpu, mps, cuda}
```
### Inference
To run a model for object detection:
```shell
python yolo/lazy.py  # if cloned from GitHub; task=inference is the default
python yolo/lazy.py task=inference \
    name=AnyNameYouWant \
    device=cpu \
    model=v9-s \
    task.nms.min_confidence=0.1 \
    task.fast_inference=onnx \
    task.data.source=data/toy/images/train \
    +quite=True

# Option reference:
#   device                   cpu, cuda, or mps
#   model                    v9-c, v9-m, or v9-s
#   task.nms.min_confidence  NMS confidence threshold
#   task.fast_inference      onnx, trt, or deploy
#   task.data.source         a file, directory, or webcam ID
#   +quite=True              suppress verbose output (the flag is spelled "quite")

yolo task.data.source={Any Source}  # if installed via pip
yolo task=inference task.data.source={Any}
```
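`task.nms.min_confidence` governs post-processing: detections below the threshold are dropped before non-maximum suppression removes overlapping duplicates. A minimal, framework-free sketch of that step (hypothetical `(x1, y1, x2, y2, score)` box tuples, not the repository's internal representation):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, min_confidence=0.1, iou_threshold=0.5):
    """Filter by confidence, then greedily keep the highest-scoring boxes."""
    boxes = [b for b in boxes if b[4] >= min_confidence]
    boxes.sort(key=lambda b: b[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box[:4], k[:4]) < iou_threshold for k in kept):
            kept.append(box)
    return kept

dets = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.05)]
print(nms(dets))  # duplicate and low-confidence boxes are removed
```

Lowering `min_confidence` keeps more tentative detections at the cost of more false positives.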
### Validation
To validate model performance or generate a JSON file of predictions in COCO format:
```shell
python yolo/lazy.py task=validation
python yolo/lazy.py task=validation dataset=toy
```
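The COCO results format referenced above is a JSON list of detection records, each with `image_id`, `category_id`, a `bbox` in `[x, y, width, height]` pixels, and a `score`. A small sketch of writing such a file (made-up values for illustration):

```python
import json

# One record per detection; bbox is [x, y, width, height] in pixels.
detections = [
    {"image_id": 42, "category_id": 1, "bbox": [10.0, 20.0, 30.0, 40.0], "score": 0.91},
    {"image_id": 42, "category_id": 3, "bbox": [5.0, 5.0, 12.0, 18.0], "score": 0.47},
]

with open("predictions.json", "w") as f:
    json.dump(detections, f)

# A file in this shape can be scored against ground truth
# with pycocotools' COCOeval.
with open("predictions.json") as f:
    loaded = json.load(f)
print(len(loaded))  # 2
```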
## Contributing
Contributions to the YOLO project are welcome! See [CONTRIBUTING](docs/CONTRIBUTING.md) for guidelines on how to contribute.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=WongKinYiu/YOLO&type=Date)](https://star-history.com/#WongKinYiu/YOLO&Date)
## Citations
```
@misc{wang2022yolov7,
      title={YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors},
author={Chien-Yao Wang and Alexey Bochkovskiy and Hong-Yuan Mark Liao},
year={2022},
eprint={2207.02696},
archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{wang2024yolov9,
title={YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information},
author={Chien-Yao Wang and I-Hau Yeh and Hong-Yuan Mark Liao},
year={2024},
eprint={2402.13616},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```