HVS and Contrast Sensitivity to Assess Image Quality

The human visual system (HVS) has many characteristics, such as its dual-pathway organization, in which visual information is transmitted through the ventral and dorsal pathways of the visual cortex. The contrast sensitivity characteristic of the HVS reflects the varying sensitivity of the human eye to different spatial frequencies, a characteristic similar to the widely used spatial attention mechanism and image saliency.

Keywords:
  • no-reference image quality assessment
  • dual-stream networks

1. Introduction

With the rapid development of digital multimedia technology and the proliferation of photographic devices, images have become an important source of human visual information. However, image quality inevitably degrades along the pipeline from acquisition to final presentation to the human visual system. It is therefore meaningful to research image quality assessment (IQA) methods that are highly consistent with human visual perception [1].
Depending on how much information from the original image they use, objective IQA methods can be classified into three categories: full-reference IQA, reduced-reference IQA, and no-reference IQA [2]. No-reference IQA is also called blind IQA (BIQA). Because BIQA methods require no reference-image information and better match practical application scenarios, they have become a focus of research in recent years [3].
Traditional BIQA methods (e.g., NIQE [4], BRISQUE [5], DIIVINE [6], and BIQI [7]) typically extract low-level features from images and then use regression models to map them to quality scores. The extracted features are manually designed and often inadequate to fully characterize image quality. With the development of deep learning, many deep-learning-based BIQA methods (e.g., IQA-CNN [8], DIQaM-NR [9], DIQA [10], HyperIQA [11], DB-CNN [12], and TS-CNN [13]) have been proposed. With their powerful learning abilities, these methods can extract high-level features from distorted images, and their performance is greatly improved over the traditional methods. However, although most existing deep-learning-based IQA methods improve performance by designing network structures with stronger feature-extraction ability, they overlook the important influence of HVS characteristics and the guiding role these characteristics may play.
The goal of BIQA is to judge the degree of image distortion in a way that is highly consistent with human visual perception, so it is natural to combine the characteristics of the human visual system (HVS) with powerful deep learning methods. Moreover, grounding BIQA in HVS characteristics offers a new research perspective on IQA: it can help develop evaluation metrics that better conform to the HVS and provide useful references for understanding how the HVS perceives image degradation, making this a valuable scientific problem.
The HVS has many characteristics, such as the dual-pathway feature [14,15], in which visual information is transmitted through the ventral pathway and dorsal pathway in the visual cortex. The former is involved in image-content recognition and long-term memory and is also known as the "what" pathway; the latter processes the spatial location of objects and is also known as the "where" pathway. Inspired by the ventral and dorsal pathways of the HVS, Simonyan and Zisserman [16] proposed a dual-stream convolutional neural network (CNN) structure and successfully applied it to video action recognition. They used a spatial stream that takes video frames as input to learn scene information and a temporal stream that takes optical-flow images as input to learn object-motion information. Because optical-flow images explicitly describe the motion between video frames, the CNN no longer has to infer object motion implicitly, which simplifies learning and significantly improves accuracy.

The contrast sensitivity characteristic of the HVS reflects the varying sensitivity of the human eye to different spatial frequencies [17]. This characteristic is similar to the widely used spatial attention mechanism [18] and image saliency [19]. Campbell et al. [20] proposed a contrast sensitivity function (CSF) that explicitly models the sensitivity of the HVS to each spatial frequency, and some traditional IQA methods [21,22] use the CSF to weight extracted features and achieve better results.

In addition, when perceiving images, the HVS attends to global and local features simultaneously [23]. This characteristic is particularly important for IQA because the distortion in authentically distorted images is often not uniformly distributed [24]. Some IQA methods [25,26] extract multi-scale features based on this characteristic, and their results show that multi-scale features can effectively improve performance. These HVS characteristics have been applied, directly or indirectly, to computer-vision tasks and have been experimentally proven to be effective.
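To make the contrast sensitivity function concrete, below is a minimal sketch of CSF-based frequency weighting in Python, assuming the classic Mannos–Sakrison form of the CSF; the `cpd_per_cycle` scaling is a hypothetical constant that in practice depends on viewing distance and display resolution:

```python
import numpy as np

def csf_mannos_sakrison(f):
    """Mannos-Sakrison contrast sensitivity function;
    f is spatial frequency in cycles per degree (cpd)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_weight_image(img, cpd_per_cycle=32.0):
    """Weight a grayscale image's frequency content by the CSF.
    cpd_per_cycle converts normalized digital frequency to cpd and
    depends on viewing distance and display resolution (assumed here)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]                 # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]                 # horizontal frequencies
    f = np.sqrt(fx ** 2 + fy ** 2) * cpd_per_cycle  # radial frequency (cpd)
    weights = csf_mannos_sakrison(f)
    weights /= weights.max()                        # peak sensitivity -> 1
    return np.real(np.fft.ifft2(np.fft.fft2(img) * weights))
```

Traditional methods of this kind apply such weights to extracted features rather than reconstructing a weighted image; the reconstruction above simply makes the frequency-dependent attenuation easy to visualize.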

2. Using HVS Dual-Pathway and Contrast Sensitivity to Blindly Assess Image Quality

According to how features are extracted, BIQA methods can be broadly divided into two categories: handcrafted feature-extraction methods and learning-based methods. Handcrafted methods typically extract natural scene statistics (NSS) features from distorted images. Researchers have found that NSS features vary with the degree of distortion, so these features can be mapped to image quality scores through regression models.
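As an illustration of this pipeline, below is a minimal sketch that maps NSS feature vectors to quality scores with a support vector regressor, a common choice in this line of work (e.g., BRISQUE uses an SVR); the feature and score arrays here are random placeholders, not real data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder NSS features (e.g., GGD fit parameters per sub-band)
# and subjective scores (MOS) for 200 hypothetical training images.
X_train = np.random.rand(200, 36)
y_train = np.random.rand(200) * 100

# Epsilon-SVR with an RBF kernel; hyperparameters would be tuned
# (e.g., by cross-validation) in practice.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)

# Predict quality scores for unseen images from their NSS features.
X_test = np.random.rand(10, 36)
predicted_scores = model.predict(X_test)
```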
Early NSS methods extracted features in the transform domain of the image. For example, the BIQI method proposed by Moorthy and Bovik [7] performs a wavelet transform on the distorted image and fits the wavelet decomposition coefficients with a generalized Gaussian distribution (GGD); it first identifies the distortion type and then predicts the quality score conditioned on that type. Later, they extended the features of BIQI to obtain DIIVINE [6], which describes scene statistics more comprehensively by considering correlations across sub-bands, scales, and orientations. The BLIINDS method proposed by Saad et al. [27] applies a discrete cosine transform (DCT) to distorted images, extracts DCT-based contrast and structural features, and maps them to quality scores through a probabilistic prediction model. All of these methods are computationally expensive because they extract features in the transform domain. To avoid transforming the image, many researchers have proposed extracting NSS features directly in the spatial domain. The BRISQUE method proposed by Mittal et al. [5] extracts the locally normalized luminance coefficients of distorted images in the spatial domain and quantifies the loss of "naturalness"; it has very low computational complexity. Building on BRISQUE, Mittal et al. proposed NIQE [4], which fits multivariate Gaussian models (MVGs) to the NSS features of distorted and natural images and defines the distance between the two models as the quality of the distorted image. Handcrafted feature-extraction methods perform well on small databases (such as LIVE [28]), but the designed features capture only low-level image properties and have limited expressive power. Consequently, their performance on large-scale synthetically distorted databases (such as TID2013 [29] and KADID-10k [30]) and authentically distorted databases (such as LIVE Challenge [31]) is relatively poor.
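For concreteness, below is a minimal sketch of the locally normalized luminance (MSCN) coefficients that BRISQUE-style methods compute in the spatial domain; the Gaussian window width and the stabilizing constant follow common practice but are illustrative here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(img, sigma=7/6, c=1.0):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients:
    (I - mu) / (sigma_local + c), with Gaussian-weighted local
    statistics; c stabilizes the division in flat regions."""
    img = np.asarray(img, dtype=np.float64)
    mu = gaussian_filter(img, sigma)                   # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu  # local variance
    sigma_local = np.sqrt(np.maximum(var, 0.0))        # local std
    return (img - mu) / (sigma_local + c)
```

BRISQUE then fits a GGD to these coefficients (and to products of neighboring coefficients) to obtain its feature vector.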
With the successful application of deep learning to other visual tasks [32,33], more and more researchers have applied it to BIQA. Kang et al. [8] were the first to use CNNs for no-reference image quality assessment. To cope with insufficient data, they segmented distorted images into non-overlapping 32 × 32 patches and assigned each patch the quality score of its source image. Bosse et al. [9] proposed DIQaM-NR and WaDIQaM-NR based on VGG [32]. These methods use a deeper CNN to simultaneously predict the quality score and weight of each image patch, and a weighted summation yields the overall image quality score. Kim et al. [33] proposed BIECON, which uses an FR-IQA method to predict the quality scores of distorted image patches, uses these scores as intermediate targets to train the model, and then fine-tunes the model with ground-truth image scores. Kim et al. [10] subsequently proposed DIQA, whose framework is similar to BIECON but uses error maps as intermediate training targets to avoid overfitting. Su et al. [11] proposed HyperIQA for authentically distorted images; it predicts the quality score based on the perceived image content and adds multi-scale features so that the model can capture local distortions. Some researchers have introduced multitask learning into BIQA, training multiple tasks in one model so that they reinforce each other through inter-task correlations. Kang et al. [34] proposed IQA-CNN++, which integrates quality assessment and distortion-type classification and improves classification performance through multitask training. Ma et al. [35] proposed MEON, which performs distortion-type classification and quality score prediction simultaneously; unlike other multitask models, it first pre-trains the distortion-type classification sub-network and then jointly trains the quality-prediction network, and the experimental results show that this pre-training mechanism is effective. Sun et al. [36] proposed GraphIQA, a Distortion Graph Representation (DGR) learning framework. GraphIQA distinguishes distortion types by learning the contrast relationship between different DGRs and infers the ranking distribution of samples from various levels within a DGR; experiments show that it achieves state-of-the-art performance on both synthetic and authentic distortions. Zhu et al. [37] proposed a meta-learning-based NR-IQA method named MetaIQA, which collects a diverse set of NR-IQA tasks for different distortions and employs meta-learning to capture prior knowledge; the quality-prior model is then fine-tuned on a target NR-IQA task, outperforming state-of-the-art methods. Wang and Ma [38] proposed an active learning method that improves NR-IQA models by leveraging group maximum differentiation (gMAD) examples: a DNN-based BIQA model is pre-trained, its weaknesses are identified through gMAD comparisons, and the model is fine-tuned on human-rated images. Li et al. [39] proposed a normalization-based loss function for NR-IQA called "Norm-in-Norm", which normalizes the predicted and subjective quality scores and is defined on the norm of the differences between these normalized values. Theoretical analysis and experimental results show that the embedded normalization stabilizes gradients and leads to faster convergence.

Zhang et al. [40] conducted the first study on the perceptual robustness of NR-IQA models. The study shows that both conventional knowledge-driven NR-IQA models and modern DNN-based methods lack inherent robustness against imperceptible perturbations. Furthermore, the counter-examples generated against one NR-IQA model do not transfer efficiently to falsify other models, revealing design flaws of individual models.
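As an illustration of the idea behind the "Norm-in-Norm" loss, below is a minimal PyTorch sketch: both predicted and subjective scores are mean-centered and scaled by a norm, and the loss is a norm of their difference. The exponents and names here are illustrative, not the paper's exact formulation:

```python
import torch

def norm_in_norm_loss(pred, mos, p=1.0, q=2.0, eps=1e-8):
    """Sketch of a Norm-in-Norm-style loss for score vectors of
    shape (batch,): normalize each vector by centering it and
    dividing by its q-norm, then take the p-norm of the difference."""
    def normalize(x):
        x = x - x.mean()
        return x / (x.norm(p=q) + eps)
    return (normalize(pred) - normalize(mos)).norm(p=p)
```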
In recent years, continual learning has achieved significant success in image classification, and some researchers have applied it to IQA. Zhang et al. [41] formulated continual learning for NR-IQA to handle novel distortions: the model learns from a stream of IQA datasets, preventing catastrophic forgetting while adapting to new data, and experiments show its effectiveness compared to standard BIQA training. Liu et al. [42] proposed a lifelong IQA (LIQA) method that adapts to unseen distortion types by mitigating catastrophic forgetting and learning new knowledge without access to previous training data; it uses a Split-and-Merge distillation strategy to train a single-head network for task-agnostic predictions.

To enhance the model's feature-extraction ability, some researchers have proposed dual-stream CNN structures. Zhang et al. [12] proposed DB-CNN, which uses a VGG-16 pre-trained on ImageNet [43] to extract authentic-distortion features and a CNN pre-trained on the Waterloo Exploration Database [44] and PASCAL VOC 2012 [45] to extract synthetic-distortion features. Yan et al. [13] also proposed a dual-stream method in which the two streams take the distorted image and its gradient image as input, respectively, so that the gradient stream focuses on the details of the distorted image.
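To illustrate the dual-stream idea, below is a minimal PyTorch sketch in the spirit of the gradient-stream design of Yan et al. [13]: one stream processes the distorted image, the other a Sobel-style gradient map, and the pooled features of both streams are fused for quality regression. The architecture is a toy illustration, not the published network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamIQA(nn.Module):
    """Toy dual-stream quality regressor: an image stream plus a
    gradient stream, fused by concatenation before a linear head."""
    def __init__(self, channels=32):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
        self.image_stream = stream()
        self.gradient_stream = stream()
        self.regressor = nn.Linear(2 * channels, 1)

    @staticmethod
    def gradient_map(x):
        # Sobel-style gradient magnitude as the second stream's input.
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                          device=x.device).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)
        gx = F.conv2d(x, kx, padding=1)
        gy = F.conv2d(x, ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    def forward(self, x):  # x: (batch, 1, H, W) grayscale images
        f1 = self.image_stream(x).flatten(1)
        f2 = self.gradient_stream(self.gradient_map(x)).flatten(1)
        return self.regressor(torch.cat([f1, f2], dim=1)).squeeze(1)
```

For example, `DualStreamIQA()(torch.rand(4, 1, 64, 64))` returns one predicted quality score per image in the batch.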
Although the aforementioned deep-learning-based BIQA methods have achieved good results, there is still room for further improvement.

This entry is adapted from the peer-reviewed paper 10.3390/s23104974
