
nanushio committed
Commit f318285 · 1 Parent(s): feb2918

- [MINOR] [CONFIG] [UPDATE] 1. update README.md

Files changed (2)
  1. README copy.md +0 -163
  2. README.md +164 -0
README copy.md DELETED
README.md CHANGED
@@ -11,3 +11,167 @@ license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# COVER

Official code for the [CVPR Workshop 2024] paper *"COVER: A Comprehensive Video Quality Evaluator"*.
Official code, demo, and weights for the [Comprehensive Video Quality Evaluator (COVER)].

# Todo:: update date, Hugging Face model below
- xx xxx, 2024: We upload the weights of [COVER](https://github.com/vztu/COVER/release/Model/COVER.pth) and [COVER++](TobeContinue) to Hugging Face models.
- xx xxx, 2024: We upload the code of [COVER](https://github.com/vztu/COVER).
- 12 Apr, 2024: COVER has been accepted by CVPR Workshop 2024.

# Todo:: update [visitors](link) below
![visitors](https://visitor-badge.laobi.icu/badge?page_id=teowu/TobeContinue) [![](https://img.shields.io/github/stars/vztu/COVER)](https://github.com/vztu/COVER)
[![State-of-the-Art](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/QualityAssessment/COVER)
<a href="https://colab.research.google.com/github/taskswithcode/COVER/blob/master/TWCCOVER.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>

# Todo:: update predicted score for YT-UGC challenge dataset specified by AIS
**COVER** pseudo-labelled quality scores of [YT-UGC](https://www.deepmind.com/open-source/kinetics): [CSV](https://github.com/QualityAssessment/COVER/raw/master/cover_predictions/kinetics_400_1.csv)

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/disentangling-aesthetic-and-technical-effects/video-quality-assessment-on-youtube-ugc)](https://paperswithcode.com/sota/video-quality-assessment-on-youtube-ugc?p=disentangling-aesthetic-and-technical-effects)

## Introduction
# Todo:: Add Introduction here

### The proposed COVER

*This inspires us to*

![Fig](figs/approach.png)

## Install

The repository can be installed via the following commands:

```shell
git clone https://github.com/vztu/COVER
cd COVER
pip install -e .
mkdir pretrained_weights
cd pretrained_weights
wget https://github.com/vztu/COVER/release/Model/COVER.pth
cd ..
```
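
As a quick sanity check that the download succeeded, you can confirm the checkpoint is readable before running any scripts. The snippet below is only a sketch (not part of the official scripts) and assumes the file is a standard PyTorch state dict, possibly wrapped under a `state_dict` key:

```python
# Optional sanity check (not part of the official scripts): confirm the
# downloaded checkpoint is a readable PyTorch state dict.
import torch

ckpt = torch.load("pretrained_weights/COVER.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
n_params = sum(v.numel() for v in state_dict.values() if hasattr(v, "numel"))
print(f"{len(state_dict)} tensors, {n_params / 1e6:.1f}M parameters")
```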

## Evaluation: Judge the Quality of Any Video

### Try on Demos
You can run a single command to judge the quality of the demo videos in comparison with videos in VQA datasets.

```shell
python evaluate_one_video.py -v ./demo/video_1.mp4
```

or

```shell
python evaluate_one_video.py -v ./demo/video_2.mp4
```

Or choose any video you like to predict its quality:

```shell
python evaluate_one_video.py -v $YOUR_SPECIFIED_VIDEO_PATH$
```

### Outputs

#### ITU-Standardized Overall Video Quality Score

The script can directly score the video's overall quality (considering all perspectives).

```shell
python evaluate_one_video.py -v $YOUR_SPECIFIED_VIDEO_PATH$
```

The final output score is averaged among all perspectives.
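
To illustrate what "averaged among all perspectives" means, here is a minimal sketch of that fusion step. The branch names and numbers are made-up placeholders, not the actual output format of `evaluate_one_video.py`:

```python
# Illustrative only: fuse per-perspective scores into a single overall score.
# The branch names and values below are placeholders, not the script's output.
branch_scores = {"semantic": 0.42, "technical": 0.35, "aesthetic": 0.51}

overall = sum(branch_scores.values()) / len(branch_scores)
print(f"overall quality score: {overall:.4f}")
```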

## Evaluate on an Existing Video Dataset

```shell
python evaluate_one_dataset.py -in $YOUR_SPECIFIED_DIR$ -out $OUTPUT_CSV_PATH$
```

## Evaluate on a Set of Unlabelled Videos

```shell
python evaluate_a_set_of_videos.py -in $YOUR_SPECIFIED_DIR$ -out $OUTPUT_CSV_PATH$
```

The results are stored as `.csv` files under `cover_predictions` in your `OUTPUT_CSV_PATH`.

Please feel free to use COVER to pseudo-label your non-quality video datasets.
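
If you want to consume those pseudo-labels downstream, a minimal sketch is below. The file name and the `score` column name are assumptions for illustration; check the header of the CSV the script actually writes:

```python
# Sketch for consuming COVER's pseudo-labels downstream. The file name and
# the "score" column name are assumptions; adjust to the real CSV header.
import pandas as pd

df = pd.read_csv("cover_predictions/my_videos.csv")
df = df.sort_values("score", ascending=False)
print(df.head(10))                                   # ten highest-scored videos
df.to_json("pseudo_labels.json", orient="records")   # re-export for other pipelines
```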

## Data Preparation

We have already converted the labels for the most popular datasets you will need for Blind Video Quality Assessment,
and the download links for the **videos** are as follows:

:book: LSVQ: [Github](https://github.com/baidut/PatchVQ)

:book: KoNViD-1k: [Official Site](http://database.mmsp-kn.de/konvid-1k-database.html)

:book: LIVE-VQC: [Official Site](http://live.ece.utexas.edu/research/LIVEVQC)

:book: YouTube-UGC: [Official Site](https://media.withyoutube.com)

*(Please contact the original authors if the download links are unavailable.)*

After downloading, kindly put them under `../datasets` (or anywhere else), but remember to change the `data_prefix` entries accordingly in the [config file](cover.yml).
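
To double-check the paths after moving the datasets, a small helper like the one below can list every `data_prefix` entry in the config. It is only a sketch: the exact nesting of `cover.yml` is assumed, so adapt the traversal to the file's real layout:

```python
# Sketch: list every `data_prefix` in cover.yml so you can verify the paths
# after moving the datasets. The nesting of cover.yml is assumed, not checked.
import os
import yaml

with open("cover.yml") as f:
    cfg = yaml.safe_load(f)

def find_prefixes(node, trail=""):
    if isinstance(node, dict):
        for key, value in node.items():
            here = f"{trail}.{key}" if trail else key
            if key == "data_prefix":
                print(f"{here}: {value}  (exists: {os.path.isdir(str(value))})")
            else:
                find_prefixes(value, here)
    elif isinstance(node, list):
        for i, item in enumerate(node):
            find_prefixes(item, f"{trail}[{i}]")

find_prefixes(cfg)
```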

# Training: Adapt COVER to your video quality dataset!

Now you can employ ***head-only/end-to-end transfer*** of COVER to get dataset-specific VQA prediction heads.

We still recommend **head-only** transfer. As we have evaluated in the paper, this method has very similar performance to *end-to-end transfer* (usually a 1%~2% difference), but requires **much less** GPU memory, as follows:

```shell
python transfer_learning.py -t $YOUR_SPECIFIED_DATASET_NAME$
```

For existing public datasets, type the following commands for the respective ones:

- `python transfer_learning.py -t val-kv1k` for KoNViD-1k.
- `python transfer_learning.py -t val-ytugc` for YouTube-UGC.
- `python transfer_learning.py -t val-cvd2014` for CVD2014.
- `python transfer_learning.py -t val-livevqc` for LIVE-VQC.

As the backbone is not updated here, the checkpoint saving process only saves the regression heads, with a file size of just `398KB` (compared with the `200+MB` size of the full model). To use it, simply replace the head weights in the official weights [COVER.pth](https://github.com/vztu/COVER/release/Model/COVER.pth).
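
A minimal sketch of that head swap is shown below. It assumes both files are plain state dicts (possibly wrapped under a `state_dict` key) and that head parameters can be identified by the substring `head` in their key names; `my_finetuned_heads.pth` stands in for whatever checkpoint `transfer_learning.py` saves for you:

```python
# Sketch: combine the official backbone with your fine-tuned regression heads.
# Assumes plain state dicts and that head tensors contain "head" in their keys.
import torch

def unwrap(ckpt):
    return ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

full = unwrap(torch.load("pretrained_weights/COVER.pth", map_location="cpu"))
heads = unwrap(torch.load("my_finetuned_heads.pth", map_location="cpu"))

replaced = 0
for key, value in heads.items():
    if "head" in key:            # keep only the regression-head tensors
        full[key] = value
        replaced += 1

torch.save(full, "pretrained_weights/COVER_finetuned_heads.pth")
print(f"replaced {replaced} head tensors")
```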

We also support ***end-to-end*** fine-tuning (by changing `num_epochs: 0` to `num_epochs: 15` in `./cover.yml`). It requires more GPU memory and more storage for the saved weights (full parameters), but results in optimal accuracy.

Fine-tuning curves by the authors can be found here for reference: [Official Curves](https://wandb.ai/timothyhwu/COVER).

## Visualization

### WandB Training and Evaluation Curves

You can monitor your results on WandB!
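
If you log your own runs, a generic WandB pattern looks like the following. The project name and metric keys are placeholders, not the ones used by this repository:

```python
# Generic WandB logging pattern; the project name and metric keys below are
# placeholders, not the ones this repository uses.
import random
import wandb

run = wandb.init(project="cover-finetune", config={"num_epochs": 15, "dataset": "val-kv1k"})
for epoch in range(15):
    # Dummy numbers stand in for your real validation metrics (e.g. SRCC / PLCC).
    srcc = 0.80 + 0.01 * epoch + random.uniform(-0.005, 0.005)
    plcc = 0.82 + 0.01 * epoch + random.uniform(-0.005, 0.005)
    wandb.log({"epoch": epoch, "val/SRCC": srcc, "val/PLCC": plcc})
run.finish()
```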

## Acknowledgement

Thanks to every participant of the subjective studies!

## Citation

Should you find our work interesting and would like to cite it, please feel free to add it to your references!

# Todo, add bibtex of cover below
```bibtex
%cover

```