2019-09-05, 12:00–12:30, Track 1 (Mitxelena)
Video compression algorithms used to stream videos are lossy, and at high compression rates they strongly degrade visual quality. We show how deep neural networks can eliminate compression artefacts and restore lost details.
Video compression algorithms reduce image quality because of their lossy approach to reducing the required bandwidth. This affects commercial streaming services such as Netflix or Amazon Prime Video, but also video conferencing and video surveillance systems. In all these cases it is possible to improve the video quality, both for human viewing and for automatic video analysis, without changing the compression pipeline, through a post-processing step that eliminates the visual artefacts created by the compression algorithms. In this presentation we show how deep convolutional neural networks, implemented in Python using TensorFlow, scikit-learn and SciPy, can be used to reduce compression artefacts and reconstruct missing high-frequency details that were eliminated by the compression algorithm.
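To illustrate the idea, such a post-processing network can be a small residual CNN that takes a decoded frame and predicts a correction to add back. The sketch below, in TensorFlow/Keras, uses illustrative layer sizes that are our own assumptions, not the model presented in the talk.

```python
import numpy as np
import tensorflow as tf

def build_artifact_removal_net(channels=3, filters=32, depth=4):
    """Tiny residual CNN sketch: predicts a correction that is added back
    to the decoded frame. All sizes are illustrative assumptions."""
    frame = tf.keras.Input(shape=(None, None, channels))
    x = frame
    for _ in range(depth):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same",
                                   activation="relu")(x)
    correction = tf.keras.layers.Conv2D(channels, 3, padding="same")(x)
    # Residual connection: the network only has to learn the artefacts,
    # not reproduce the whole frame.
    restored = tf.keras.layers.Add()([frame, correction])
    return tf.keras.Model(frame, restored)

model = build_artifact_removal_net()
decoded = np.random.rand(1, 64, 64, 3).astype("float32")  # stand-in decoded frame
restored = model(decoded)
print(restored.shape)  # (1, 64, 64, 3): same spatial size as the input
```

Because the network is fully convolutional, the same model can be applied to frames of any resolution.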
In particular, we follow an approach based on Generative Adversarial Networks (GANs), which in the scientific literature have achieved extremely high-quality results in image-enhancement tasks. However, obtaining these results typically requires large generators, resulting in high computational costs and processing times, so the method can only be run on GPUs that are usually available on desktop machines.
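The adversarial setup can be sketched as a pair of losses: the generator minimises a pixel-wise content term plus an adversarial term, while the discriminator learns to separate restored frames from originals. The L1 content term and the 1e-3 weighting below are common choices in the literature, assumed here for illustration rather than taken from the talk.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_on_fake, restored, target, adv_weight=1e-3):
    """Non-saturating adversarial loss plus a pixel-wise content loss.
    The L1 term and the weighting are illustrative assumptions."""
    adversarial = bce(tf.ones_like(disc_on_fake), disc_on_fake)
    content = tf.reduce_mean(tf.abs(restored - target))  # L1 content term
    return content + adv_weight * adversarial

def discriminator_loss(disc_on_real, disc_on_fake):
    """Discriminator learns to tell original frames from restored ones."""
    real = bce(tf.ones_like(disc_on_real), disc_on_real)
    fake = bce(tf.zeros_like(disc_on_fake), disc_on_fake)
    return real + fake

# Usage with random stand-in tensors:
restored = tf.random.uniform((2, 32, 32, 3))
target = tf.random.uniform((2, 32, 32, 3))
logits_real = tf.random.normal((2, 1))
logits_fake = tf.random.normal((2, 1))
g = generator_loss(logits_fake, restored, target)
d = discriminator_loss(logits_real, logits_fake)
```

Keeping the adversarial weight small lets the content term dominate early training, a common trick to stabilise GAN-based restoration.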
In this presentation we also show an architecture that reduces the computational cost and can be deployed on mobile devices as well.
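One common way to shrink such a generator, sketched below under our own assumptions rather than as the architecture from the talk, is to replace standard convolutions with depthwise-separable ones (as popularised by MobileNet), which cuts both the parameter count and the multiply-adds substantially.

```python
import tensorflow as tf

def build_mobile_generator(channels=3, filters=16, depth=3):
    """Lightweight generator sketch: depthwise-separable convolutions
    replace standard ones in the trunk. Sizes are illustrative."""
    frame = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(filters, 3, padding="same",
                               activation="relu")(frame)
    for _ in range(depth):
        # A separable conv factorises a KxK conv into a depthwise KxK pass
        # plus a 1x1 pointwise mix, with far fewer weights.
        x = tf.keras.layers.SeparableConv2D(filters, 3, padding="same",
                                            activation="relu")(x)
    out = tf.keras.layers.Conv2D(channels, 3, padding="same")(x)
    return tf.keras.Model(frame, tf.keras.layers.Add()([frame, out]))

small = build_mobile_generator()
print(small.count_params())  # far fewer weights than a standard-conv trunk
```

A model built this way can then be converted with TensorFlow Lite for on-device inference.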
A possible application is improving video conferencing or live streaming. Since in these cases no original uncompressed video stream is available, we report results using a no-reference video quality metric, showing high naturalness and quality even for the efficient networks.
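No-reference metrics score a frame without access to a pristine original. Published metrics such as BRISQUE or NIQE model natural-scene statistics; as a toy stand-in, the sketch below uses the variance of the Laplacian, a much simpler proxy that merely drops when high-frequency detail is lost.

```python
import numpy as np
from scipy import ndimage

def sharpness_score(frame):
    """Toy no-reference proxy: variance of the Laplacian response.
    Real metrics (e.g. BRISQUE, NIQE) are far more sophisticated;
    this only tracks the loss of high frequencies."""
    gray = frame.mean(axis=-1) if frame.ndim == 3 else frame
    return ndimage.laplace(gray.astype("float64")).var()

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = ndimage.gaussian_filter(sharp, sigma=2)  # simulate lost detail
print(sharpness_score(sharp) > sharpness_score(blurred))  # True
```

Such a score can be tracked before and after restoration to check, without a reference stream, that the network is recovering detail rather than smoothing it away.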