# Anime Video Models

:white_check_mark: We add small models that are optimized for anime videos :-)

| Models | Scale | Description |
| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- |
| [RealESRGANv2-animevideo-xsx2](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/RealESRGANv2-animevideo-xsx2.pth) | X2 | Anime video model with XS size |
| [RealESRGANv2-animevideo-xsx4](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/RealESRGANv2-animevideo-xsx4.pth) | X4 | Anime video model with XS size |

- [Anime Video Models](#anime-video-models)
  - [How to Use](#how-to-use)
    - [PyTorch Inference](#pytorch-inference)
    - [ncnn Executable File](#ncnn-executable-file)
      - [Step 1: Use ffmpeg to extract frames from video](#step-1-use-ffmpeg-to-extract-frames-from-video)
      - [Step 2: Inference with Real-ESRGAN executable file](#step-2-inference-with-real-esrgan-executable-file)
      - [Step 3: Merge the enhanced frames back into a video](#step-3-merge-the-enhanced-frames-back-into-a-video)
  - [More Demos](#more-demos)

---

The following are some demos (best viewed in full-screen mode).
https://user-images.githubusercontent.com/17445847/145706977-98bc64a4-af27-481c-8abe-c475e15db7ff.MP4

https://user-images.githubusercontent.com/17445847/145707055-6a4b79cb-3d9d-477f-8610-c6be43797133.MP4

https://user-images.githubusercontent.com/17445847/145783523-f4553729-9f03-44a8-a7cc-782aadf67b50.MP4

## How to Use

### PyTorch Inference

```bash
# download model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/RealESRGANv2-animevideo-xsx2.pth -P experiments/pretrained_models
# inference
python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n RealESRGANv2-animevideo-xsx2 -s 2 -v -a --half --suffix outx2
```

### ncnn Executable File

#### Step 1: Use ffmpeg to extract frames from video

```bash
ffmpeg -i onepiece_demo.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_frames/frame%08d.png
```

- Remember to create the folder `tmp_frames` ahead of time.

#### Step 2: Inference with Real-ESRGAN executable file

1. Download the latest portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/realesrgan-ncnn-vulkan-20211212-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/realesrgan-ncnn-vulkan-20211212-ubuntu.zip) / [macOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/realesrgan-ncnn-vulkan-20211212-macos.zip) **executable files for Intel/AMD/Nvidia GPUs**.

1. Taking Windows as an example, run:

    ```bash
    ./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n RealESRGANv2-animevideo-xsx2 -s 2 -f jpg
    ```

    - Remember to create the folder `out_frames` ahead of time.

#### Step 3: Merge the enhanced frames back into a video

1. First obtain the fps of the input video:

    ```bash
    ffmpeg -i onepiece_demo.mp4
    ```

    ```console
    Usage:
    -i input video path
    ```

    You will get output similar to the following screenshot.
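As a sketch of the remaining merge step, the snippet below assembles an ffmpeg command programmatically in Python (the repository's language). The fps value, file names, and the `libx264`/`yuv420p` encoding settings are illustrative assumptions, not part of this repository's scripts; substitute the fps reported by `ffmpeg -i onepiece_demo.mp4` above.

```python
# Sketch: build the ffmpeg command that merges the enhanced frames back into
# a video, copying the audio track from the original input. All paths and the
# fps value below are placeholders (assumptions), not repository defaults.
def build_merge_cmd(fps, frame_pattern, source_video, output_video):
    return [
        "ffmpeg",
        "-r", str(fps),          # frame rate of the extracted frames
        "-i", frame_pattern,     # e.g. out_frames/frame%08d.jpg from Step 2
        "-i", source_video,      # original video, used here as the audio source
        "-map", "0:v:0",         # take the video stream from the frames
        "-map", "1:a:0",         # take the audio stream from the original video
        "-c:a", "copy",          # copy audio without re-encoding
        "-c:v", "libx264",       # common H.264 encoder (assumption)
        "-pix_fmt", "yuv420p",   # widely compatible pixel format
        output_video,
    ]

# Print the command so it can be inspected or run manually
# (it could also be passed directly to subprocess.run).
cmd = build_merge_cmd(23.98, "out_frames/frame%08d.jpg",
                      "onepiece_demo.mp4", "onepiece_out.mp4")
print(" ".join(cmd))
```

Copying the audio stream with `-c:a copy` avoids a lossy re-encode of the soundtrack; only the video frames are encoded again.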