Practical algorithms for general image/video restoration.


Real-ESRGAN

Paper

:book: Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

[Paper]   [Project Page]   [Demo]
Xintao Wang, Liangbin Xie, Chao Dong, Ying Shan
Applied Research Center (ARC), Tencent PCG; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences

Abstract

Though many attempts have been made in blind super-resolution to restore low-resolution images with unknown and complex degradations, they are still far from addressing general real-world degraded images. In this work, we extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data. Specifically, a high-order degradation modeling process is introduced to better simulate complex real-world degradations. We also consider the common ringing and overshoot artifacts in the synthesis process. In addition, we employ a U-Net discriminator with spectral normalization to increase discriminator capability and stabilize the training dynamics. Extensive comparisons have shown its superior visual performance over prior works on various real datasets. We also provide efficient implementations to synthesize training pairs on the fly.
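For intuition only, the sketch below illustrates the high-order idea with a toy second-order pipeline (blur, resize, noise, JPEG applied twice). The function names, kernel sizes, and parameter ranges are illustrative assumptions; they do not reproduce the paper's full degradation model (e.g., the sinc filters used for ringing/overshoot artifacts) or the official training code.

# Toy second-order degradation sketch (illustrative assumptions, not the official pipeline).
import cv2
import numpy as np

def degrade_once(img, rng):
    h, w = img.shape[:2]
    img = cv2.GaussianBlur(img, (21, 21), rng.uniform(0.2, 3.0))           # random blur
    s = rng.uniform(0.3, 1.2)                                              # random resize
    img = cv2.resize(img, (max(1, int(w * s)), max(1, int(h * s))), interpolation=cv2.INTER_LINEAR)
    noise = rng.normal(0.0, rng.uniform(1.0, 15.0), img.shape)             # additive Gaussian noise
    img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    ok, buf = cv2.imencode('.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, int(rng.uniform(30, 95))])  # JPEG
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def synthesize_lq(hq_bgr, scale=4, orders=2, seed=0):
    rng = np.random.default_rng(seed)
    lq = hq_bgr.copy()
    for _ in range(orders):            # "high-order": repeat the classical first-order pipeline
        lq = degrade_once(lq, rng)
    h, w = hq_bgr.shape[:2]
    return cv2.resize(lq, (w // scale, h // scale), interpolation=cv2.INTER_LINEAR)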

BibTeX

@Article{wang2021realesrgan,
    title={Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
    author={Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
    journal={arXiv:2107.xxxxx},
    year={2021}
}

We are cleaning up the training code. It is expected to be finished on 23 or 24 July.


You can download Windows executable files from https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN-ncnn-vulkan.zip

You can simply run the following command:

./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png

Note that it may introduce block artifacts (and also generate slightly different results from the PyTorch implementation), because this executable file first crops the input image into several tiles, then processes them separately, and finally stitches them together (a rough sketch of this tile-and-stitch logic follows below).

This executable file is based on the wonderful ncnn project and realsr-ncnn-vulkan.
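
For reference, the tile-and-stitch logic behind those block artifacts looks roughly like the sketch below. It is an illustration only, not the realesrgan-ncnn-vulkan code; upscale_fn, the tile size, and the padding are placeholder assumptions.

# Illustrative tile-and-stitch upscaling; upscale_fn, tile, and pad are assumptions.
import numpy as np

def upscale_by_tiles(img, upscale_fn, scale=4, tile=256, pad=16):
    h, w = img.shape[:2]
    out = np.zeros((h * scale, w * scale, img.shape[2]), dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # crop a tile with a small padded border to reduce visible seams
            y0, y1 = max(0, y - pad), min(h, y + tile + pad)
            x0, x1 = max(0, x - pad), min(w, x + tile + pad)
            sr = upscale_fn(img[y0:y1, x0:x1])               # upscale one padded tile
            # paste back only the central (unpadded) region of the upscaled tile
            ty, tx = (y - y0) * scale, (x - x0) * scale
            th, tw = min(tile, h - y) * scale, min(tile, w - x) * scale
            out[y * scale:y * scale + th, x * scale:x * scale + tw] = sr[ty:ty + th, tx:tx + tw]
    return out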


:wrench: Dependencies and Installation

Installation

  1. Clone repo

    git clone https://github.com/xinntao/Real-ESRGAN.git
    cd Real-ESRGAN
    
  2. Install dependent packages

    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr
    pip install -r requirements.txt
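
  Optionally, you can run a quick sanity check that the environment imports correctly (this is only a convenience, not part of the official instructions):

    python -c "import torch, basicsr; print(torch.__version__, basicsr.__version__, torch.cuda.is_available())"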
    

:zap: Quick Inference

Download the pre-trained model: RealESRGAN_x4plus.pth

wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models

Inference!

python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs

Results are in the results folder.
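
If you prefer calling the model from Python, the sketch below shows roughly what such an inference looks like. It is not the code of inference_realesrgan.py (which remains the supported entry point); it assumes the RRDBNet architecture from basicsr, that the checkpoint stores weights under a 'params_ema' key (falling back to the raw state dict), and a hypothetical input path.

# Rough Python inference sketch (assumptions noted above).
import os
import cv2
import numpy as np
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
ckpt = torch.load('experiments/pretrained_models/RealESRGAN_x4plus.pth', map_location='cpu')
model.load_state_dict(ckpt['params_ema'] if 'params_ema' in ckpt else ckpt)
model.eval().to(device)

img = cv2.imread('inputs/your_image.png', cv2.IMREAD_COLOR).astype(np.float32) / 255.0   # hypothetical path
inp = torch.from_numpy(img[:, :, ::-1].copy()).permute(2, 0, 1).unsqueeze(0).to(device)  # BGR -> RGB, NCHW
with torch.no_grad():
    out = model(inp).squeeze(0).clamp_(0, 1).permute(1, 2, 0).cpu().numpy()
os.makedirs('results', exist_ok=True)
cv2.imwrite('results/out_x4.png', (out[:, :, ::-1] * 255.0).round().astype(np.uint8))     # RGB -> BGR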

:e-mail: Contact

If you have any questions, please email xintao.wang@outlook.com or xintaowang@tencent.com.