:white_check_mark: We have added small models that are optimized for anime videos :-)
| Models | Scale | Description |
| --- | --- | --- |
| RealESRGANv2-animevideo-xsx2 | X2 | Anime video model with XS size |
| RealESRGANv2-animevideo-xsx4 | X4 | Anime video model with XS size |
The following are some demos (best viewed in full-screen mode).
Download the pretrained model and run inference with the Python script:

```bash
# download model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.3.0/RealESRGANv2-animevideo-xsx2.pth -P experiments/pretrained_models
# inference
python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n RealESRGANv2-animevideo-xsx2 -s 2 -v -a --half --suffix outx2
```
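If you have several clips to process, a plain shell loop over the same command works. This is just a sketch that reuses the flags from the example above; the folder `inputs/video/` and the `outx2` suffix are taken from that example.

```bash
# Sketch: upscale every .mp4 under inputs/video/ with the x2 anime video model,
# reusing the exact flags from the single-video command above.
for f in inputs/video/*.mp4; do
    python inference_realesrgan_video.py -i "$f" -n RealESRGANv2-animevideo-xsx2 -s 2 -v -a --half --suffix outx2
done
```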
For the executable workflow, first use ffmpeg to extract frames from the input video (remember to create the folder `tmp_frames` ahead):

```bash
ffmpeg -i onepiece_demo.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_frames/frame%08d.png
```
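For example, the folder can be created up front and the extraction sanity-checked afterwards (plain shell, nothing Real-ESRGAN-specific):

```bash
mkdir -p tmp_frames      # create the frame folder before running ffmpeg
ls tmp_frames | wc -l    # after extraction: number of frames written
```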
Download the latest portable Windows / Linux / MacOS executable files for Intel/AMD/Nvidia GPU. Taking Windows as an example, run:
```bash
./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n RealESRGANv2-animevideo-xsx2 -s 2 -f jpg
```

Remember to create the folder `out_frames` ahead.
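For example, you can create the output folder and run the X4 variant the same way. Note that the x4 model name passed to the executable is an assumption based on the table above (`RealESRGANv2-animevideo-xsx4`); check the model files shipped with the executable.

```bash
mkdir -p out_frames   # create the output folder ahead
# Assumed x4 model name; verify it against the models shipped with the executable.
./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n RealESRGANv2-animevideo-xsx4 -s 4 -f jpg
```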
To merge the enhanced frames back into a video, first obtain the fps of the input video by running:

```bash
ffmpeg -i onepiece_demo.mp4
```
Usage:
- `-i`: input video path
The output lists the video's stream information; note the fps value (23.98 for this demo, as used in the merge command below).
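If you prefer to read the frame rate programmatically instead of from the log, `ffprobe` (part of the ffmpeg suite) can print it directly. This is a generic ffprobe invocation, not something specific to Real-ESRGAN:

```bash
# Prints the video stream's frame rate as a fraction, e.g. 24000/1001 (~23.98 fps)
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate \
        -of default=noprint_wrappers=1:nokey=1 onepiece_demo.mp4
```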
Then merge the frames into a video:

```bash
ffmpeg -i out_frames/frame%08d.jpg -c:v libx264 -r 23.98 -pix_fmt yuv420p output.mp4
```
Usage:
- `-i`: path pattern of the enhanced frames
- `-c:v`: video encoder (usually we use libx264)
- `-r`: fps; remember to change it to match your input video
- `-pix_fmt`: pixel format of the output video
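A small sketch that plugs the fps into the merge command via a shell variable, so it only has to be changed in one place (23.98 is the demo's value; replace it with your own):

```bash
FPS=23.98   # set this to the fps reported for your input video
ffmpeg -i out_frames/frame%08d.jpg -c:v libx264 -r "$FPS" -pix_fmt yuv420p output.mp4
```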
If you also want to copy the audio from the input video, run:

```bash
ffmpeg -i out_frames/frame%08d.jpg -i onepiece_demo.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 23.98 -pix_fmt yuv420p output_w_audio.mp4
```
Usage:
- `-i`: input paths; here we use two input streams (the enhanced frames and the original video for its audio)
- `-c:v`: video encoder (usually we use libx264)
- `-r`: fps; remember to change it to match your input video
- `-pix_fmt`: pixel format of the output video
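Putting the executable workflow together, here is a rough end-to-end sketch for a single video. The file names, folders, and the 23.98 fps value all come from the example above and should be adapted to your own clip.

```bash
#!/usr/bin/env bash
set -e
INPUT=onepiece_demo.mp4
FPS=23.98                                   # fps of the input video (check with ffmpeg/ffprobe)

mkdir -p tmp_frames out_frames
# 1. Extract frames from the input video
ffmpeg -i "$INPUT" -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_frames/frame%08d.png
# 2. Upscale the frames with the Real-ESRGAN executable
./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n RealESRGANv2-animevideo-xsx2 -s 2 -f jpg
# 3. Merge the enhanced frames and copy the audio from the original video
ffmpeg -i out_frames/frame%08d.jpg -i "$INPUT" -map 0:v:0 -map 1:a:0 -c:a copy \
       -c:v libx264 -r "$FPS" -pix_fmt yuv420p output_w_audio.mp4
```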
Input video for One Piece:
Output video for One Piece:
More comparisons