@@ -1,4 +1,8 @@
-# Real-ESRGAN
+<p align="center">
+  <img src="assets/realesrgan_logo.png" height=100>
+</p>
+
+## <div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>

[download](https://github.com/xinntao/Real-ESRGAN/releases)
[PyPI](https://pypi.org/project/realesrgan/)
@@ -8,15 +12,13 @@
[python lint](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/pylint.yml)
[Publish-pip](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/publish-pip.yml)

-[English](README.md) **|** [简体中文](README_CN.md)
-
:fire: :fire: :fire: Add **small video models** for anime videos (**针对动漫视频的小模型**). Please see [anime video models](docs/anime_video_model.md).
1. [Colab Demo](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) for Real-ESRGAN <a href="https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>.
2. [Colab Demo](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) for Real-ESRGAN (**anime videos**) <a href="https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>.
3. Portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/realesrgan-ncnn-vulkan-20211212-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/realesrgan-ncnn-vulkan-20211212-ubuntu.zip) / [macOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/realesrgan-ncnn-vulkan-20211212-macos.zip) **executable files for Intel/AMD/Nvidia GPUs**. You can find more information [here](#Portable-executable-files). The ncnn implementation is in [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan); a usage sketch follows right after this list.
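The sketch below is a rough illustration of how the portable binary is typically invoked. The flags are assumptions based on the Real-ESRGAN-ncnn-vulkan tool and may differ between releases, so check the help output of the executable you downloaded.

```bash
# Hypothetical call of the portable binary (drop the .exe suffix on Linux/macOS).
# -i/-o set the input/output image; -n selects the model (assumed default: realesrgan-x4plus).
./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrgan-x4plus
```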
-Real-ESRGAN aims at developing **Practical Algorithms for General Image Restoration**.<br>
+Real-ESRGAN aims at developing **Practical Algorithms for General Image/Video Restoration**.<br>
We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.

:art: Real-ESRGAN needs your contributions. Any contributions are welcome, such as new features/models/typo fixes/suggestions/maintenance, *etc*. See [CONTRIBUTING.md](CONTRIBUTING.md). All contributors are listed [here](README.md#hugs-acknowledgement).
@@ -25,17 +27,6 @@ We extend the powerful ESRGAN to a practical restoration application (namely, Re

:milky_way: Thanks for your valuable feedback/suggestions. All feedback is collected in [feedback.md](feedback.md).
-:triangular_flag_on_post: **Updates**
-- :white_check_mark: Add small models for anime videos. More details are in [anime video models](docs/anime_video_model.md).
-- :white_check_mark: Add the ncnn implementation [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan).
-- :white_check_mark: Add [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images with much smaller model size. More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md)
-- :white_check_mark: Support finetuning on your own data or paired data (*i.e.*, finetuning ESRGAN). See [here](Training.md#Finetune-Real-ESRGAN-on-your-own-dataset)
-- :white_check_mark: Integrate [GFPGAN](https://github.com/TencentARC/GFPGAN) to support **face enhancement**.
-- :white_check_mark: Integrated to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN). Thanks [@AK391](https://github.com/AK391)
-- :white_check_mark: Support arbitrary scale with `--outscale` (It actually further resizes outputs with `LANCZOS4`). Add *RealESRGAN_x2plus.pth* model.
-- :white_check_mark: [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with **alpha channel**; 3) **gray** images; 4) **16-bit** images.
-- :white_check_mark: The training codes have been released. A detailed guide can be found in [Training.md](Training.md).
-
---

If Real-ESRGAN is helpful for your photos/projects, please :star: this repo or recommend it to your friends. Thanks :blush: <br>
@@ -47,6 +38,51 @@ Other recommended projects:<br>

---
+<!---------------------------------- Updates --------------------------->
+<details open>
+<summary>🚩<b>Updates</b></summary>
+
+- ✅ Add small models for anime videos. More details are in [anime video models](docs/anime_video_model.md).
+- ✅ Add the ncnn implementation [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan).
+- ✅ Add [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images and has a much smaller model size. More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md).
+- ✅ Support finetuning on your own data or paired data (*i.e.*, finetuning ESRGAN). See [here](Training.md#Finetune-Real-ESRGAN-on-your-own-dataset).
+- ✅ Integrate [GFPGAN](https://github.com/TencentARC/GFPGAN) to support **face enhancement**.
+- ✅ Integrate into [Hugging Face Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN). Thanks [@AK391](https://github.com/AK391).
+- ✅ Support arbitrary output scales with `--outscale` (it actually further resizes the outputs with `LANCZOS4` interpolation); see the usage sketch right after this updates block. Add the *RealESRGAN_x2plus.pth* model.
+- ✅ [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with **alpha channel**; 3) **gray** images; 4) **16-bit** images.
+- ✅ The training code has been released. A detailed guide can be found in [Training.md](Training.md).
+
+</details>
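Several bullets above describe options of the Python inference script. The sketch below shows how they might be combined. Apart from `--outscale`, which is named in the list, the flag spellings are assumptions based on the current `inference_realesrgan.py` (older releases may take `--model_path` instead of `-n`), so verify them with `python inference_realesrgan.py -h`.

```bash
# Hypothetical example: upscale everything in inputs/ by 3.5x with the x4 model,
# process the image in 256-pixel tiles to limit GPU memory, and apply GFPGAN face enhancement.
python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --outscale 3.5 --tile 256 --face_enhance
```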
+
+<!---------------------------------- Projects that use Real-ESRGAN --------------------------->
+<details open>
+<summary>🧩<b>Projects that use Real-ESRGAN</b></summary>
+
+    If you develop/use Real-ESRGAN in your projects, you are welcome to let me know 👋
+
+- NCNN-Android: [RealSR-NCNN-Android](https://github.com/tumuyan/RealSR-NCNN-Android) by [tumuyan](https://github.com/tumuyan)
+- VapourSynth: [vs-realesrgan](https://github.com/HolyWu/vs-realesrgan) by [HolyWu](https://github.com/HolyWu)
+- NCNN: [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
+
+ **GUI**
+
+- [Waifu2x-Extension-GUI](https://github.com/AaronFeng753/Waifu2x-Extension-GUI) by [AaronFeng753](https://github.com/AaronFeng753)
+- [Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE) by [Justin62628](https://github.com/Justin62628)
+- [Real-GUI](https://github.com/scifx/Real-GUI) by [scifx](https://github.com/scifx)
+- [Real-ESRGAN_GUI](https://github.com/net2cn/Real-ESRGAN_GUI) by [net2cn](https://github.com/net2cn)
+- [Real-ESRGAN-EGUI](https://github.com/WGzeyu/Real-ESRGAN-EGUI) by [WGzeyu](https://github.com/WGzeyu)
+- [anime_upscaler](https://github.com/shangar21/anime_upscaler) by [shangar21](https://github.com/shangar21)
+
+</details>
+
+<!---------------------------------- Demo videos --------------------------->
+<details open>
+<summary>👀<b>Demo videos</b>👀</summary>
+
+- [Clip from *Havoc in Heaven* (大闹天宫片段)](https://www.bilibili.com/video/BV1ja41117zb)
+
+</details>
+
### :book: Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data
> [[Paper](https://arxiv.org/abs/2107.10833)]   [Project Page]   [[YouTube Video](https://www.youtube.com/watch?v=fxHWoDSSvSc)]   [[B站讲解](https://www.bilibili.com/video/BV1H34y1m7sS/)]   [[Poster](https://xinntao.github.io/projects/RealESRGAN_src/RealESRGAN_poster.pdf)]   [[PPT slides](https://docs.google.com/presentation/d/1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL/edit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]<br>