Deep Idempotent Network for Efficient Single Image Blind Deblurring

IEEE Transactions on Circuits and Systems for Video Technology (TCSVT 2022)


Yuxin Mao1*, Zhexiong Wan1*, Yuchao Dai1#, Xin Yu2

1Northwestern Polytechnical University    2University of Technology Sydney
* denotes equal contribution, # denotes the corresponding author

Abstract


Single image blind deblurring is highly ill-posed as neither the latent sharp image nor the blur kernel is known. Even though considerable progress has been made, several major difficulties remain for blind deblurring, including the trade-off between high-performance deblurring and real-time processing. Moreover, we observe that current single image blind deblurring networks cannot further improve or even stabilize their performance; instead, performance degrades significantly when re-deblurring is repeatedly applied. This implies the limitation of these networks in modeling an ideal deblurring process. In this work, we make two contributions to tackle the above difficulties: (1) We introduce the idempotent constraint into the deblurring framework and present a deep idempotent network to achieve improved blind non-uniform deblurring performance with stable re-deblurring. (2) We propose a simple yet efficient deblurring network with lightweight encoder-decoder units and a recurrent structure that deblurs images in a progressive residual fashion. Extensive experiments on synthetic and realistic datasets demonstrate the superiority of our proposed framework. Remarkably, our proposed network is nearly 6.5× smaller and 6.4× faster than the state-of-the-art while achieving comparable high performance.
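Formally, the idempotent constraint requires the deblurring operator F to satisfy F(F(x)) = F(x) for any blurry input x. During training this can be expressed as a penalty on repeated deblurring (an illustrative L1 form, not necessarily the paper's exact formulation):

    \mathcal{L}_{\mathrm{idem}} = \lVert F(F(x)) - F(x) \rVert_1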


Speed vs Performance



Each circle represents the performance of a model in terms of FPS and PSNR on the GoPro dataset with 1280×720 images using an RTX 2080Ti GPU. The radius of each circle denotes the model's number of parameters. Our method achieves high performance with real-time speed and a small parameter count compared with state-of-the-art blind deblurring methods.

Note: For inference time, we follow the measurement setting of previous methods, which does not enable CUDA synchronization. More details in https://github.com/swz30/MPRNet/issues/83 and https://github.com/chosj95/MIMO-UNet/issues/9.
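For clarity, the two timing protocols differ as sketched below (a minimal PyTorch sketch; model and blurry are placeholders, not code from this repository):

import time
import torch

# Placeholder model and input; any deblurring network can be timed this way.
model = torch.nn.Conv2d(3, 3, 3, padding=1).cuda().eval()
blurry = torch.randn(1, 3, 720, 1280, device="cuda")

with torch.no_grad():
    # Protocol of prior work (used in the comparison above): no explicit
    # CUDA synchronization, so the timer may stop before the GPU finishes.
    start = time.time()
    _ = model(blurry)
    t_async = time.time() - start

    # Synchronized timing: wait for all queued GPU work to complete,
    # giving the true wall-clock latency.
    torch.cuda.synchronize()
    start = time.time()
    _ = model(blurry)
    torch.cuda.synchronize()
    t_sync = time.time() - start

print(f"unsynchronized: {t_async * 1e3:.2f} ms, synchronized: {t_sync * 1e3:.2f} ms")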


Repeated Re-Deblurring



We repeatedly feed the deblurred image back into the network multiple times and report the deblurring results on the GoPro dataset. Our proposed deep idempotent network achieves very stable deblurring results, while the performance of all other state-of-the-art methods decreases as the number of repetitions increases. Note that, to keep the training settings consistent with our results without the idempotent constraint, we re-trained MT-RNN without its multi-temporal data augmentation.
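The re-deblurring protocol itself is simple; below is a minimal sketch of the evaluation loop (model and psnr_fn are hypothetical placeholders for a pretrained deblurring network and a PSNR metric):

import torch

@torch.no_grad()
def repeated_redeblur(model, blurry, sharp, psnr_fn, repeats=5):
    # Feed the network's output back as its input `repeats` times and
    # record PSNR against the ground-truth sharp image after each pass.
    scores = []
    x = blurry
    for _ in range(repeats):
        x = model(x)  # re-deblur the previous output
        scores.append(psnr_fn(x, sharp))
    # An idempotent deblurrer keeps these scores essentially constant.
    return scores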


Network Architecture



The overall framework of our idempotent deblurring network and idempotent constraint. The deblurring network takes blurry images as input and outputs deblurred images via an iterative, recurrent, lightweight encoder-decoder structure. All iterations use the same basic model with shared weights, connected by residual connections. The idempotent loss enforces consistency among the outputs of repeated re-deblurring during the training phase.
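To make the recurrent residual design concrete, here is a minimal PyTorch sketch (the encoder-decoder internals, iteration count, and loss form are illustrative assumptions, not the exact configuration of the paper):

import torch
import torch.nn as nn

class IdempotentDeblurNet(nn.Module):
    # One lightweight encoder-decoder unit applied recurrently with
    # shared weights; each iteration adds a residual correction.
    def __init__(self, iters=6, ch=32):
        super().__init__()
        self.iters = iters
        # Stand-in for the lightweight encoder-decoder unit.
        self.unit = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, blurry):
        x = blurry
        for _ in range(self.iters):  # same weights at every iteration
            x = x + self.unit(x)     # progressive residual deblurring
        return x

def idempotent_loss(model, blurry):
    # Penalize the gap between deblurring once and deblurring twice,
    # pushing the network toward F(F(x)) = F(x). Whether to detach the
    # first pass is a training detail omitted here.
    once = model(blurry)
    twice = model(once)
    return torch.mean(torch.abs(twice - once))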


Progressive Deblurring Results



Like MT-RNN, our model achieves progressive iterative deblurring over multiple iterations, but without their multi-temporal data augmentation in the training process. A sketch of how this progression can be inspected follows below.
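The intermediate iterations can be collected to visualize the progression (assuming the recurrent model sketched in the previous section; max_iters is illustrative):

import torch

@torch.no_grad()
def progressive_outputs(model, blurry, max_iters=6):
    # Return the partially deblurred image after each shared-weight
    # residual step, so the progressive refinement can be visualized.
    outs = []
    x = blurry
    for _ in range(max_iters):
        x = x + model.unit(x)  # one recurrent residual iteration
        outs.append(x.clone())
    return outs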


Thermal Image Deblurring



Visual comparisons on motion-blurred thermal samples using GoPro-pretrained models. Column (a) shows the original blurred thermal images; (b), (c), and (d) show the deblurring results of MT-RNN, Stack(4)-DMPHN, and our method, respectively.


Extension to Deraining and Dehazing


(Figures: deraining and dehazing results.)

Citation


@article{mao2022deepidemdeblur,
    author={Mao, Yuxin and Wan, Zhexiong and Dai, Yuchao and Yu, Xin},
    journal={IEEE Transactions on Circuits and Systems for Video Technology}, 
    title={Deep Idempotent Network for Efficient Single Image Blind Deblurring}, 
    year={2022},
    doi={10.1109/TCSVT.2022.3202361}
}