Phips committed
Commit 3f349ac · 1 Parent(s): 590cae9

Update app.py

Files changed (1):
  app.py +21 -7
app.py CHANGED
@@ -100,17 +100,13 @@ def realesrgan(img, model_name, face_enhance):
             print('If you encounter CUDA out of memory, try to set --tile with a smaller number.')
         else:
             # Save restored image and return it to the output Image component
-            if img_mode == 'RGBA':  # RGBA images should be saved in png format
-                extension = 'jpg'
-            else:
-                extension = 'jpg'
-
+            extension = 'jpg'
             out_filename = f"output_{rnd_string(16)}.{extension}"
             cv2.imwrite(out_filename, output)
             global last_file
             last_file = out_filename
             return out_filename
-
+
 
 def rnd_string(x):
     """Returns a string of 'x' random characters
@@ -176,7 +172,7 @@ def main():
 
         gr.Markdown(
             """# <div align="center"> Upscale image </div>
-        Here I demo my self-trained models. The models with their corresponding infos can be found on [my github repo](https://github.com/phhofm/models).
+        Here I demo some of my self-trained models (only those trained on the SRVGGNet or RRDBNet archs). All my self-trained models can be found on the [openmodeldb](https://openmodeldb.info/?q=Helaman&sort=date-desc) or on [my github repo](https://github.com/phhofm/models).
             """
         )
 
@@ -217,7 +213,25 @@ def main():
     4xLSDIRplusC - upscale a jpg compressed photo 4x
     4xLSDIRplusR - upscale a degraded photo 4x (too strong, best used for interpolation like 4xLSDIRplusN (or C) 75% 4xLSDIRplusR 25% to add little degradation handling to the previous one)
 
+    *Models that I trained that are not featured here, but available on [openmodeldb](https://openmodeldb.info/?q=Helaman&sort=date-desc) or on [github](https://github.com/phhofm/models):*
+    4xNomos8kSCHAT-L - Photo upscaler (handles little bit of jpg compression and blur), [HAT-L](https://github.com/XPixelGroup/HAT) model (good output but very slow since huge)
+    4xNomos8kSCHAT-S - Photo upscaler (handles little bit of jpg compression and blur), [HAT-S](https://github.com/XPixelGroup/HAT) model
+    4xNomos8kSCSRFormer - Photo upscaler (handles little bit of jpg compression and blur), [SRFormer](https://github.com/HVision-NKU/SRFormer) base model (also good and slow since also big model)
+    2xHFA2kAVCOmniSR - Anime frame upscaler that handles AVC (h264) video compression, [OmniSR](https://github.com/Francis0625/Omni-SR) model
+    4xHFA2kAVCSRFormer_light - Anime frame upscaler that handles AVC (h264) video compression, [SRFormer](https://github.com/HVision-NKU/SRFormer) lightweight model
+    2xHFA2kAVCEDSR_M - Anime frame upscaler that handles AVC (h264) video compression, [EDSR-M](https://github.com/LimBee/NTIRE2017) model
+    2xHFA2kAVCCompact - Anime frame upscaler that handles AVC (h264) video compression, [SRVGGNet](https://github.com/xinntao/Real-ESRGAN) (also called Real-ESRGAN Compact) model
+    4xHFA2kLUDVAESwinIR_light - Anime image upscaler that handles various realistic degradations, [SwinIR](https://github.com/JingyunLiang/SwinIR) light model
+    4xHFA2kLUDVAEGRL_small - Anime image upscaler that handles various realistic degradations, [GRL](https://github.com/ofsoundof/GRL-Image-Restoration) small model
+    4xHFA2kLUDVAESRFormer_light - Anime image upscaler that handles various realistic degradations, [SRFormer](https://github.com/HVision-NKU/SRFormer) light model
+    4xLexicaHAT - An AI generated image upscaler, does not handle any degradations, [HAT](https://github.com/XPixelGroup/HAT) base model
+    2xLexicaSwinIR - An AI generated image upscaler, does not handle any degradations, [SwinIR](https://github.com/JingyunLiang/SwinIR) base model
+    2xLexicaRRDBNet - An AI generated image upscaler, does not handle any degradations, RRDBNet base model
+    2xLexicaRRDBNet_Sharp - An AI generated image upscaler with sharper outputs, does not handle any degradations, RRDBNet base model
+    4xHFA2kLUDVAESAFMN - dropped model since there were artifacts on the outputs when training with SAFMN arch
+
     *The following are not models I had trained, but rather interpolations I had created, they are available on my [repo](https://github.com/phhofm/models) and can be tried out locally with chaiNNer:*
+    4xLSDIRplus (4xLSDIRplusC + 4xLSDIRplusR)
     4xLSDIRCompact3 (4xLSDIRCompactC3 + 4xLSDIRCompactR3)
     4xLSDIRCompact2 (4xLSDIRCompactC2 + 4xLSDIRCompactR2)
     4xInt-Ultracri (UltraSharp + Remacri)
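
For context, the deleted branch was dead code: both arms assigned `'jpg'`, even though its own comment notes that RGBA images should be saved as PNG to keep the alpha channel (JPEG has no transparency, so `cv2.imwrite` on a 4-channel image as `.jpg` drops the alpha data). A minimal sketch of what that branch presumably intended — the helper name `pick_extension` is hypothetical, not from the commit:

```python
def pick_extension(img_mode: str) -> str:
    # PNG is lossless and keeps the alpha channel; JPEG does not support
    # transparency, so an RGBA image written as .jpg silently loses alpha.
    return 'png' if img_mode == 'RGBA' else 'jpg'

print(pick_extension('RGBA'))  # png
print(pick_extension('RGB'))   # jpg
```

The commit instead collapses the branch to a single `extension = 'jpg'`, so every output is saved as JPEG regardless of `img_mode`.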