Practical image/video restoration algorithms.


README.md

👀[**Demos**](#-demos-videos) **|** 🚩[**Updates**](#-updates) **|** ⚡[**Usage**](#-quick-inference) **|** 🏰[**Model Zoo**](docs/model_zoo.md) **|** 🔧[Install](#-dependencies-and-installation) **|** 💻[Train](docs/Training.md) **|** ❓[FAQ](docs/FAQ.md) **|** 🎨[Contribution](docs/CONTRIBUTING.md) [![download](https://img.shields.io/github/downloads/xinntao/Real-ESRGAN/total.svg)](https://github.com/xinntao/Real-ESRGAN/releases) [![PyPI](https://img.shields.io/pypi/v/realesrgan)](https://pypi.org/project/realesrgan/) [![Open issue](https://img.shields.io/github/issues/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues) [![Closed issue](https://img.shields.io/github/issues-closed/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues) [![LICENSE](https://img.shields.io/github/license/xinntao/Real-ESRGAN.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE) [![python lint](https://github.com/xinntao/Real-ESRGAN/actions/workflows/pylint.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/pylint.yml) [![Publish-pip](https://github.com/xinntao/Real-ESRGAN/actions/workflows/publish-pip.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/publish-pip.yml)

🔥 AnimeVideo-v3 model (a small model for anime videos). Please see [anime video models] and [comparisons]
🔥 RealESRGAN_x4plus_anime_6B for anime images (a model for anime illustrations). Please see [anime_model]

  1. 💥 Add online demo: Replicate.
  2. Colab Demo for Real-ESRGAN | Colab Demo for Real-ESRGAN (anime videos)
  3. Portable Windows / Linux / MacOS executable files for Intel/AMD/Nvidia GPUs. You can find more information here. The ncnn implementation is in Real-ESRGAN-ncnn-vulkan.
  4. You can watch enhanced anime episodes on Tencent Video.

Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.

🌌 Thanks for your valuable feedback and suggestions. All feedback is collected in feedback.md.


If Real-ESRGAN is helpful, please help to ⭐ this repo or recommend it to your friends 😊
Other recommended projects:
▶️ GFPGAN: A practical algorithm for real-world face restoration
▶️ BasicSR: An open-source image and video restoration toolbox
▶️ facexlib: A collection of useful face-related functions.
▶️ HandyView: A PyQt5-based image viewer that is handy for viewing and comparison
▶️ HandyFigure: Open-source files for paper figures


📖 Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

[Paper]   [YouTube Video]   [Bilibili explanation]   [Poster]   [PPT slides]
Xintao Wang, Liangbin Xie, Chao Dong, Ying Shan
Tencent ARC Lab; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences


🚩 Updates


👀 Demo Videos

Bilibili

YouTube

🔧 Dependencies and Installation

Installation

  1. Clone repo

    git clone https://github.com/xinntao/Real-ESRGAN.git
    cd Real-ESRGAN
    
  2. Install dependent packages

    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr
    # facexlib and gfpgan are for face enhancement
    pip install facexlib
    pip install gfpgan
    pip install -r requirements.txt
    python setup.py develop
    
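After installing, a quick sanity check (a minimal sketch, not part of the official instructions; it only assumes the steps above succeeded) is to import the main inference class and see whether a GPU is visible:

    # Quick post-install check: import the inference helper shipped by this repo
    # and report whether CUDA is available (CPU-only inference also works, just slower).
    import torch
    from realesrgan import RealESRGANer

    print('CUDA available:', torch.cuda.is_available())
    print('RealESRGANer loaded from module:', RealESRGANer.__module__)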

⚡ Quick Inference

There are usually three ways to run inference with Real-ESRGAN.

  1. Online inference
  2. Portable executable files (NCNN)
  3. Python script

Online inference

  1. You can try it on our website: ARC Demo (currently it only supports RealESRGAN_x4plus_anime_6B)
  2. Colab Demo for Real-ESRGAN | Colab Demo for Real-ESRGAN (anime videos).

Portable executable files (NCNN)

You can download Windows / Linux / MacOS executable files for Intel/AMD/Nvidia GPUs.

This executable file is portable and includes all the binaries and models required. No CUDA or PyTorch environment is needed.

You can simply run the following command (this is the Windows example; more information is in the README.md of each executable package):

./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name

We provide the following models:

  1. realesrgan-x4plus (default)
  2. realesrnet-x4plus
  3. realesrgan-x4plus-anime (optimized for anime images, small model size)
  4. realesr-animevideov3 (animation video)

You can use the -n argument for other models, for example, ./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus

Usage of portable executable files

  1. Please refer to Real-ESRGAN-ncnn-vulkan for more details.
  2. Note that it does not support all the functions (such as outscale) of the Python script inference_realesrgan.py.

    Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...
    
    -h                   show this help
    -i input-path        input image path (jpg/png/webp) or directory
    -o output-path       output image path (jpg/png/webp) or directory
    -s scale             upscale ratio (can be 2, 3, 4. default=4)
    -t tile-size         tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
    -m model-path        folder path to the pre-trained models. default=models
    -n model-name        model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
    -g gpu-id            gpu device to use (default=auto) can be 0,1,2 for multi-gpu
    -j load:proc:save    thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
    -x                   enable tta mode
    -f format            output image format (jpg/png/webp, default=ext/png)
    -v                   verbose output
    

Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable first crops the input image into several tiles, processes them separately, and finally stitches them back together.
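If you prefer to call the portable executable from Python (for example, as part of a larger pipeline), a minimal subprocess wrapper can be sketched as below. It only uses the flags documented above; the executable path, folders, and model name are placeholders to adapt to your setup.

    # Hedged sketch: drive realesrgan-ncnn-vulkan from Python via subprocess.
    # Paths and the model name are placeholders; only documented flags are used.
    import subprocess
    from pathlib import Path

    EXE = './realesrgan-ncnn-vulkan.exe'   # use the Linux/MacOS binary name on those systems
    input_dir = Path('inputs')             # directory of images (jpg/png/webp)
    output_dir = Path('results')
    output_dir.mkdir(exist_ok=True)

    # -n selects the model, -s the upscale ratio, -f the output format
    cmd = [EXE, '-i', str(input_dir), '-o', str(output_dir),
           '-n', 'realesrgan-x4plus-anime', '-s', '4', '-f', 'png']
    subprocess.run(cmd, check=True)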

Python script

Usage of python script

  1. You can use the X4 model for an arbitrary output size with the --outscale argument. The program performs a cheap resize operation after the Real-ESRGAN output (see the Python API sketch after the option list below).

    Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...
    
    A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance
    
    -h                   show this help
    -i --input           Input image or folder. Default: inputs
    -o --output          Output folder. Default: results
    -n --model_name      Model name. Default: RealESRGAN_x4plus
    -s, --outscale       The final upsampling scale of the image. Default: 4
    --suffix             Suffix of the restored image. Default: out
    -t, --tile           Tile size, 0 for no tile during testing. Default: 0
    --face_enhance       Whether to use GFPGAN to enhance face. Default: False
    --fp32               Use fp32 precision during inference. Default: fp16 (half precision).
    --ext                Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
    
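For programmatic use, the sketch below mirrors roughly what inference_realesrgan.py does internally for RealESRGAN_x4plus: build the RRDBNet backbone, wrap it in RealESRGANer, and call enhance() with the desired outscale. The file paths are placeholders, and the constructor arguments shown are assumptions based on the script's defaults.

    # Minimal Python API sketch (paths are placeholders; arguments follow the
    # defaults of inference_realesrgan.py for the RealESRGAN_x4plus model).
    import cv2
    from basicsr.archs.rrdbnet_arch import RRDBNet
    from realesrgan import RealESRGANer

    # RealESRGAN_x4plus uses a 23-block RRDBNet backbone with 4x upscaling
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
    upsampler = RealESRGANer(
        scale=4,
        model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',
        model=model,
        tile=0,       # same meaning as --tile (0 = no tiling)
        tile_pad=10,
        pre_pad=0,
        half=True)    # use half=False if you need the --fp32 behaviour

    img = cv2.imread('inputs/your_image.png', cv2.IMREAD_COLOR)  # placeholder input
    output, _ = upsampler.enhance(img, outscale=3.5)             # arbitrary final scale, like --outscale
    cv2.imwrite('results/your_image_out.png', output)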

Inference general images

Download pre-trained models: RealESRGAN_x4plus.pth

wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models

Inference!

python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance

Results are in the results folder

Inference anime images

Pre-trained models: RealESRGAN_x4plus_anime_6B
More details and comparisons with waifu2x are in anime_model.md

# download model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P experiments/pretrained_models
# inference
python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs

Results are in the results folder


BibTeX

@InProceedings{wang2021realesrgan,
    author    = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
    title     = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
    booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
    date      = {2021}
}

📧 Contact

If you have any questions, please email xintao.wang@outlook.com or xintaowang@tencent.com.

🧩 Projects that use Real-ESRGAN

If you develop or use Real-ESRGAN in your projects, you are welcome to let me know.

    GUI

🤗 Acknowledgement

Thanks to all the contributors.