As a very common type of video, face videos frequently appear in movies, talk shows, live broadcasts, and other scenarios. Real-world online videos are often plagued by degradations such as blurring and quantization noise, caused by the high compression ratios imposed by communication costs and limited transmission bandwidth. These degradations affect face videos particularly severely because the human visual system is highly sensitive to facial details. Despite significant advances in video face enhancement, current methods still suffer from i) long processing times and ii) inconsistent spatial-temporal visual effects (e.g., flickering). This study proposes a novel and efficient blind video face enhancement method that overcomes these two challenges, restoring high-quality videos from their compressed low-quality versions with an effective deflickering mechanism. In particular, the proposed method builds upon a 3D-VQGAN backbone associated with spatial-temporal codebooks that record high-quality portrait features and residual-based temporal information. We develop a two-stage learning framework for the model. In Stage I, we train the model with a regularizer that mitigates the codebook collapse problem. In Stage II, we train two transformers to look up codes from the codebooks and further update the encoder of low-quality videos. Experiments on the VFHQ-Test dataset demonstrate that our method surpasses current state-of-the-art blind face video restoration and de-flickering methods in both efficiency and effectiveness.
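The codebook lookup at the heart of such a VQGAN-style pipeline amounts to nearest-neighbor quantization of encoder features against a learned codebook. The following minimal NumPy sketch illustrates the basic operation only; the function name `st_lookup` and the feature/codebook shapes are illustrative assumptions, not details from the paper:

```python
import numpy as np

def st_lookup(features, codebook):
    """Quantize each feature vector to its nearest codebook entry.

    features: (N, D) array of encoder features (e.g., flattened
              spatial-temporal tokens). codebook: (K, D) array.
    Returns the quantized features and the selected code indices.
    """
    # Squared Euclidean distance between every feature and every code,
    # computed via the expansion ||f||^2 - 2 f.c + ||c||^2.
    d = (features ** 2).sum(1, keepdims=True) \
        - 2.0 * features @ codebook.T \
        + (codebook ** 2).sum(1)
    idx = d.argmin(axis=1)          # nearest code per feature
    return codebook[idx], idx

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))   # 16 tokens, 8-dim features
codes = rng.standard_normal((32, 8))   # 32-entry codebook
quant, idx = st_lookup(feats, codes)
```

In training, the quantized features replace the encoder output before decoding, with a straight-through gradient estimator; this sketch covers only the lookup itself.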
Network architecture of Stage I. Stage I uses HQ face videos to train the HQ 3D-VQGAN and the spatial and temporal codebooks. (a) illustrates the quantization operation STLookUp performed through the two codebooks in our proposed framework. (b) and (c) show the computation of temporal attention and the motion residual, respectively. (d) We leverage a pre-trained feature network, DINOv2, together with trainable multi-scale discriminator heads to build a more powerful discriminator for stable training.
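The temporal attention in panel (b) can be pictured as scaled dot-product self-attention applied along the time axis at each spatial location. The sketch below is a simplified illustration under assumed shapes (one spatial location, identity Q/K/V projections); real models use learned projections and multiple heads:

```python
import numpy as np

def temporal_attention(x):
    """Scaled dot-product self-attention over the time axis.

    x: (T, D) array -- one spatial location's features across T frames.
    For simplicity, Q = K = V = x (no learned projections).
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                 # (T, T) frame affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # softmax over frames
    return w @ x                                  # temporally mixed features

x = np.random.default_rng(1).standard_normal((5, 8))  # 5 frames, 8-dim
y = temporal_attention(x)
```

Each output frame is thus a convex combination of all frames' features, which is what lets the model propagate information across time and suppress per-frame flicker.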
Network architecture of Stage II. Stage II uses HQ-LQ face video pairs to train the LQ encoder and the LookUp Transformers. The weights of Dh are pre-trained in Stage I and frozen in Stage II.
@article{wang2024efficient,
title={Efficient Video Face Enhancement with Enhanced Spatial-Temporal Consistency},
author={Yutong Wang and Jiajie Teng and Jiajiong Cao and Yuming Li and Chenguang Ma and Hongteng Xu and Dixin Luo},
journal={arXiv preprint arXiv:2411.16468},
year={2024}
}