Auto-Encoded Supervision for Perceptual Image Super-Resolution
Abstract
This work tackles the fidelity objective in perceptual super-resolution (SR). Specifically, we address the shortcomings of the pixel-level L_p loss (L_pix) in GAN-based SR frameworks. Since L_pix is known to trade off against perceptual quality, prior methods often multiply it by a small scale factor or apply low-pass filters. However, this work shows that these workarounds fail to address the fundamental factor that induces blurring. Accordingly, we focus on two points: 1) precisely identifying the subcomponent of L_pix that contributes to blurring, and 2) guiding the network only with the factor that is free from this trade-off. We show that both can be achieved in a surprisingly simple manner, with an Auto-Encoder (AE) pretrained with L_pix. Accordingly, we propose the Auto-Encoded Supervision for Optimal Penalization loss (L_AESOP), a novel loss function that measures distance in the AE space instead of the raw pixel space. Note that the AE space is the space after the decoder, not the bottleneck. By simply substituting L_AESOP for L_pix, we can provide effective reconstruction guidance without compromising perceptual quality. Designed for simplicity, our method enables easy integration into existing SR frameworks. Experimental results verify that AESOP leads to favorable results on the perceptual SR task.
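The core substitution described above can be sketched in a few lines. The snippet below is an illustrative NumPy stand-in, not the paper's implementation: a fixed linear map plays the role of the L_pix-pretrained autoencoder, and the AESOP-style loss compares the two images after a full encode-decode pass (the post-decoder "AE space"), rather than in raw pixel space. All names (`autoencode`, `l_pix`, `l_aesop`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 64, 16                                      # pixel dim, bottleneck dim
W_enc = rng.standard_normal((K, D)) / np.sqrt(D)   # stand-in for the pretrained encoder
W_dec = W_enc.T                                    # stand-in for the pretrained decoder

def autoencode(x):
    """Pass a (flattened) image through the frozen AE: decode(encode(x))."""
    return W_dec @ (W_enc @ x)

def l_pix(sr, hr):
    """Plain pixel-space L1 loss between the SR output and the HR target."""
    return np.abs(sr - hr).mean()

def l_aesop(sr, hr):
    """AESOP-style loss: L1 distance measured after the AE's decoder,
    i.e. in the AE space rather than in raw pixel space."""
    return np.abs(autoencode(sr) - autoencode(hr)).mean()

hr = rng.standard_normal(D)                 # toy high-resolution target
sr = hr + 0.1 * rng.standard_normal(D)      # toy SR output with small error
print(l_pix(sr, hr), l_aesop(sr, hr))       # both are nonnegative scalars
```

In a GAN-based SR pipeline, the idea is that `l_aesop` would replace `l_pix` in the total training objective while the adversarial and perceptual terms stay unchanged.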