
add training with one gpu

Xintao committed 3 years ago
parent commit b525d1793b
1 changed file with 30 additions and 0 deletions

Training.md: +30 -0

@@ -114,12 +114,22 @@ You can merge several folders into one meta_info txt. Here is the example:
    CUDA_VISIBLE_DEVICES=0,1,2,3 \
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug
    ```
+
+    Train with **a single GPU**:
+    ```bash
+    python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --debug
+    ```
 1. The formal training. We use four GPUs for training, with the `--auto_resume` argument to automatically resume the training if necessary (a `torchrun` alternative is sketched after this list).
    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3 \
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
    ```

+    Train with **a single GPU**:
+    ```bash
+    python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --auto_resume
+    ```
+
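Note: newer PyTorch releases (1.10+) deprecate `python -m torch.distributed.launch` in favor of `torchrun`. A minimal sketch of the equivalent four-GPU launch, assuming a PyTorch version that ships `torchrun`; all training arguments stay the same:

```bash
# Equivalent multi-GPU launch via torchrun (assumes PyTorch >= 1.10)
CUDA_VISIBLE_DEVICES=0,1,2,3 \
torchrun --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
```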
 ### Train Real-ESRGAN

 1. After the training of Real-ESRNet, you now have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth`. If you need to specify the pre-trained path to a different file, modify the `pretrain_network_g` value in the option file `train_realesrgan_x4plus.yml`.
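    For reference, that value lives in the `path` section of the option file. A minimal sketch following the usual BasicSR option-file convention (the path is simply the default checkpoint named above):

    ```yml
    # path section of options/train_realesrgan_x4plus.yml (illustrative sketch)
    path:
      # generator checkpoint to initialize from; point this at a different file if needed
      pretrain_network_g: experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth
      strict_load_g: true
    ```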
@@ -129,12 +139,22 @@ You can merge several folders into one meta_info txt. Here is the example:
    CUDA_VISIBLE_DEVICES=0,1,2,3 \
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
    ```
+
+    Train with **a single GPU**:
+    ```bash
+    python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --debug
+    ```
 1. The formal training. We use four GPUs for training, with the `--auto_resume` argument to automatically resume the training if necessary.
    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3 \
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume
    ```

+    Train with **a single GPU**:
+    ```bash
+    python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --auto_resume
+    ```
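As an aside, the `--debug` flag used in the first step switches BasicSR into debug mode: the experiment name is prefixed with `debug_` and logging and validation run at very short intervals, so the whole pipeline can be verified within a few iterations (this is the BasicSR convention; the check below is just one way to confirm it took effect):

```bash
# Debug runs write to experiments/debug_<name>/ rather than experiments/<name>/
ls experiments/ | grep debug_
```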
+
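The single-GPU commands above run on the default visible device. On a shared machine you can still pin the run to one specific card with `CUDA_VISIBLE_DEVICES`; a minimal sketch (the device index is an example; pick yours):

```bash
# Pin the single-GPU run to GPU 1
CUDA_VISIBLE_DEVICES=1 \
python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --auto_resume
```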
 ## Finetune Real-ESRGAN on your own dataset

 You can finetune Real-ESRGAN on your own dataset. Typically, the fine-tuning process can be divided into two cases:
@@ -185,6 +205,11 @@ CUDA_VISIBLE_DEVICES=0,1,2,3 \
 python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --launcher pytorch --auto_resume
 ```

+Train with **a single GPU**:
+```bash
+python realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --auto_resume
+```
+
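Since the formal training runs for a long time, you may want to detach it from the terminal. One common shell pattern, not specific to Real-ESRGAN (`train.log` is an arbitrary name):

```bash
# Run fine-tuning in the background and keep stdout/stderr in a log file
nohup python realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --auto_resume > train.log 2>&1 &
# Follow progress
tail -f train.log
```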
 ### Use your own paired data

 You can also finetune Real-ESRGAN with your own paired data. This is closer to fine-tuning the original ESRGAN.
@@ -237,3 +262,8 @@ We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary.
 CUDA_VISIBLE_DEVICES=0,1,2,3 \
 python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --launcher pytorch --auto_resume
 ```
+
+Train with **a single GPU**:
+```bash
+python realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --auto_resume
+```
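If you would rather resume from one specific checkpoint than the latest one `--auto_resume` finds, BasicSR-style option files also accept an explicit `resume_state`. A sketch, assuming the usual convention that states are saved under `experiments/<name>/training_states/` (the experiment name and iteration below are illustrative):

```yml
# In the option file's path section (illustrative; adjust to your experiment)
path:
  resume_state: experiments/finetune_RealESRGANx4plus_pairdata/training_states/100000.state
```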