paxnation.blogg.se

YUV video sequences

In recent years, emerging video requirements such as 4K, 8K, virtual reality and immersive 360° video keep driving the evolution of video compression standards toward better coding efficiency. The state-of-the-art video coding standard VVC/H.266, similar to its predecessor HEVC/H.265, still uses a block-based hybrid coding architecture. To mitigate visual artifacts and improve coding efficiency, several in-loop filters are included and sequentially applied in VVC, e.g. the deblocking filter (DBF), sample adaptive offset (SAO) and the adaptive loop filter (ALF).
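Since all of these filters operate on decoded YUV frames, it may help to see what that data looks like in practice. Below is a minimal sketch, assuming a raw 8-bit YUV 4:2:0 (I420) sequence, that loads a single frame into separate Y, U and V planes with NumPy; the file name and resolution in the usage comment are placeholders, not values taken from this post.

import numpy as np

def read_yuv420_frame(path, width, height, frame_index=0):
    """Read one 8-bit YUV 4:2:0 (I420) frame from a raw .yuv file.

    Plane sizes: Y is width*height, U and V are each (width/2)*(height/2),
    so one frame occupies width*height*3/2 bytes.
    """
    y_size = width * height
    c_size = (width // 2) * (height // 2)
    frame_size = y_size + 2 * c_size

    with open(path, "rb") as f:
        f.seek(frame_index * frame_size)          # jump to the requested frame
        raw = np.frombuffer(f.read(frame_size), dtype=np.uint8)

    y = raw[:y_size].reshape(height, width)
    u = raw[y_size:y_size + c_size].reshape(height // 2, width // 2)
    v = raw[y_size + c_size:].reshape(height // 2, width // 2)
    return y, u, v

# Hypothetical usage (file name and resolution are made up for illustration):
# y, u, v = read_yuv420_frame("BasketballDrive_1920x1080.yuv", 1920, 1080, frame_index=0)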

Motivated by the promising advances of deep-learning based image/video processing, such as super resolution and image denoising, learning based techniques have also been investigated in the video coding domain. In particular, various convolutional neural network (CNN) based in-loop filters have been proposed to further alleviate coding artifacts and improve coding efficiency on top of the traditional in-loop filters. One early work proposed a variable-filter-size residual learning CNN (VRCNN) to replace the original in-loop filters DBF and SAO in HEVC for intra frames. For more general coding scenarios, a deep residual highway convolutional neural network (RHCNN) based in-loop filter was introduced on top of HEVC for both intra and inter coding modes. During the development of VVC, several CNN based in-loop filters were also proposed. One approach used a hierarchical CNN structure with multiple models trained for different quantization parameters (QPs) and frame types. For example, a variable CNN (VCNN) based in-loop filter was proposed for VVC, which can process video frames coded with different QPs and frame types via a single model.
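To illustrate the single-model idea behind filters like VCNN, here is a minimal PyTorch sketch of my own simplified layout, not the published architecture: a small residual CNN that takes the reconstructed luma plane together with a QP map as a second input channel and predicts a correction that is added back to the input. The layer sizes, the number of blocks and the QP normalization are assumptions for illustration.

import torch
import torch.nn as nn

class QPConditionedLoopFilter(nn.Module):
    """Toy residual in-loop filter conditioned on QP.

    rec : reconstructed luma plane, shape (N, 1, H, W), values in [0, 1]
    qp  : one scalar per sample, broadcast to an (N, 1, H, W) map
    out : filtered luma = rec + predicted residual
    """
    def __init__(self, channels=32, num_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(2, channels, kernel_size=3, padding=1)  # luma + QP map
        self.body = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            )
            for _ in range(num_blocks)
        ])
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, rec, qp):
        # Broadcast the normalized QP to a full-resolution map and stack it with the luma.
        qp_map = qp.view(-1, 1, 1, 1).expand_as(rec)
        x = self.head(torch.cat([rec, qp_map], dim=1))
        for block in self.body:
            x = x + block(x)          # residual blocks
        return rec + self.tail(x)     # predict and add a correction to the input

# Hypothetical usage with random data:
# model = QPConditionedLoopFilter()
# rec = torch.rand(1, 1, 64, 64)        # a 64x64 reconstructed luma patch
# qp = torch.tensor([32.0 / 63.0])      # QP normalized by the maximum VVC QP of 63
# filtered = model(rec, qp)

Feeding QP as an extra input channel is just one way to let a single network cover the full QP range; the published designs differ in how exactly the QP and frame-type information is injected.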