Choi, D.; Wee, J.; Song, S.; Lee, H.; Lim, J.; Bok, K.; Yoo, J. k-NN Query for High-Dimensional Index Using Machine Learning. Encyclopedia. Available online: (accessed on 16 June 2024).
k-NN Query for High-Dimensional Index Using Machine Learning

This entry presents three k-nearest neighbor (k-NN) query optimization techniques for a distributed, in-memory-based, high-dimensional indexing method that speeds up content-based image retrieval: a density-based optimization technique that exploits the data distribution, a cost-based optimization technique that uses query processing cost statistics, and a learning-based optimization technique that applies a deep learning model trained on query logs.

Keywords: query optimization; data distribution; k-NN; high-dimensional index; machine learning

1. Introduction

With the recent development of real-time image processing, technologies for object recognition and object retrieval in images extracted from real-time operating devices, such as closed-circuit television (CCTV), are being actively developed [1][2][3][4][5][6][7]. These technologies can be used in various fields, such as crime prevention, monitoring systems, and traffic information analysis.
Content-based image retrieval (CBIR) retrieves images using features extracted from objects in video images: the extracted features are vectorized, and similarities between the vectors determine which database images are returned. In CBIR, a user specifies a query image, and the system compares its content with that of the database images to find and return the most similar ones. Owing to the development of new technologies, such as artificial intelligence and machine learning, researchers have studied extracting various features from images [8][9][10][11]. Data become increasingly high-dimensional as the features extracted from images become more varied. Therefore, similarity retrieval techniques that operate on high-dimensional data are needed to retrieve and compare similar images or objects. Additionally, the indexing structure for high-dimensional similarity retrieval must be constructed in accordance with the characteristics of high-dimensional data.
Nearest neighbor search (NNS) is the problem of finding the item closest, or most similar, to a given point. Closeness is typically expressed by a dissimilarity function, such as the Euclidean or Manhattan distance. Formally, the nearest-neighbor (NN) search problem is defined as follows: given a set S of points in a space M and a query point q ∈ M, find the point in S closest to q. k-NN search generalizes this problem to finding the k closest points.
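The exact k-NN search just defined can be sketched by brute force, as a minimal illustration; the dataset, query, and choice of Euclidean distance below are assumptions for the example only:

```python
import numpy as np

def knn(S: np.ndarray, q: np.ndarray, k: int) -> np.ndarray:
    """Return the indices of the k points in S closest to query q."""
    dists = np.linalg.norm(S - q, axis=1)   # Euclidean distance from q to every point
    return np.argsort(dists)[:k]            # indices of the k smallest distances

points = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [0.5, 0.2]])
print(knn(points, np.array([0.0, 0.0]), 2))   # → [0 3]
```

This exhaustive scan costs O(n) distance computations per query, which is exactly why the indexing techniques discussed below exist.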
Generally, a distributed k-NN query is processed as follows. First, each distributed node indexes its data points; the k-NN query processing methodology depends on the indexing scheme. When a k-NN query is issued, all nodes are typically requested to process it. Each node generates its k closest data points, the k results from the n nodes are merged, and the final result comprises the k points closest to the query in the merged set.
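The merge step described above can be sketched as follows, assuming each node has already returned its local top-k as (distance, id) pairs; the node results shown are illustrative:

```python
import heapq

def merge_local_results(local_results, k):
    """Merge per-node top-k lists into the global k nearest neighbors."""
    return heapq.nsmallest(k, (pair for node in local_results for pair in node))

node_a = [(0.1, "a1"), (0.4, "a2")]   # k=2 result from node A
node_b = [(0.2, "b1"), (0.9, "b2")]   # k=2 result from node B
print(merge_local_results([node_a, node_b], 2))   # → [(0.1, 'a1'), (0.2, 'b1')]
```

Each node only ships k candidates, so the coordinator merges at most n·k pairs regardless of the total data size.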
Various studies have been conducted recently to address these problems [12][13][14][15][16][17]. A distributed, in-memory-based, high-dimensional indexing technique was proposed to efficiently perform image retrieval using high-dimensional feature data [12]. The authors of [12] utilized big data analysis platforms, such as Spark, to implement distributed, in-memory-based, high-dimensional indexing. In addition, the authors of [13] proposed an M-tree [13] indexing algorithm on Spark. Since all distributed servers participate in query processing [12], the load on every server can increase when there are many retrieval requests from users. In another study, a master/slave model for distributed index query processing was used to perform efficient image retrieval in airport video monitoring systems [14]. The researchers proposed a distributed multi-vantage-point (MVP) tree, based on the MVP tree. However, it had an inherent limitation: loading large-scale, high-dimensional data into memory was difficult, and backtracking operations occurred frequently during k-NN query processing. Backtracking is a common search strategy: when processing a k-NN query in a tree structure, the algorithm explores a node and, if no result is found there, returns to the parent node, repeating this process to generate the query results. In another study, a distributed k-d tree [16] was proposed to enhance the performance of k-NN processing [15]. However, depending on the amount of distributed data, the height of the k-d tree can increase, which increases the search time [15]. Using a k-d tree also results in frequent backtracking operations during k-NN processing [14].
Contrary to conventional hash functions [17], LSH (locality-sensitive hashing) aims to maximize hash collisions when indexing high-dimensional data. It stores similar data in the same bucket to improve the efficiency of the search and indexing. Using random vectors, LSH transforms high-dimensional data into low-dimensional bucket indexes. Following a query request, the system searches for the bucket that contains the query result using the query location, measures the actual distance between the data within the bucket, and performs k-NN query processing. However, in LSH, the k-NN results may include false positives, depending on the index creation parameters. More buckets can be searched to ensure accurate results; however, this increases the search cost because it requires distance comparisons among all the data in the adjacent buckets. Because k-NN query processing involves finding the closest k items, distance-based indexing is efficient as it can pre-calculate the distance values to the items and index them.
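The random-vector bucketing idea behind LSH can be sketched minimally; for a deterministic example, two fixed hyperplanes are used here, whereas real LSH draws many random vectors and typically maintains several hash tables:

```python
import numpy as np

# Two fixed hyperplanes in 2-d for illustration (real LSH uses many random ones).
planes = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

def bucket(v):
    """Map a vector to a bucket id: one sign bit per hyperplane, packed."""
    bits = (planes @ v) > 0
    return int(bits[0]) * 2 + int(bits[1])

# Nearby vectors land in the same bucket; distant ones usually do not.
print(bucket(np.array([1.0, 2.0])))    # → 3
print(bucket(np.array([1.1, 1.9])))    # → 3
print(bucket(np.array([-3.0, -1.0])))  # → 0
```

At query time, only the data in the query's bucket (and possibly adjacent buckets) are compared by exact distance, which is the source of both the speedup and the false positives discussed above.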
The authors of [18] used a combination of convolutional neural networks (CNNs) to classify images and recurrent neural networks (RNNs) to analyze natural language queries: the CNNs exploit deep learning for image content classification, while the RNN model helps users formulate search queries more efficiently. The authors of [19] identified occupied and vacant parking lots using a hybrid deep learning model that combined the strengths of CNNs and long short-term memory (LSTM) networks. The authors of [20] proposed a CBIR system based on multiple deep neural networks, combining CNNs and k-NN methodologies: features were extracted from the user-supplied image with CNNs, and image similarity was calculated with k-NN to return a list of images. The authors of [21] compared the image retrieval performance of various machine learning models, such as support vector machines (SVMs), k-NNs, and CNNs.

2. k-NN Query Optimization for High-Dimensional Index Using Machine Learning

To efficiently perform CBIR, researchers have studied high-dimensional indexing techniques for retrieval, using the high-dimensional feature vectors of objects within images [12][13][14][15][16][22][23][24][25][26].
A method to address the challenges of quickly and efficiently indexing large-scale multimedia data was proposed in [12]. The proposed technique used Spark to build a distributed M-tree, enabling fast and cost-effective multimedia database retrieval. However, each node (or executor) in Spark did not have a designated indexing area; partitioning and indexing followed the data partitioning policy. Consequently, nodes cannot be filtered: all of them must be visited when processing a k-NN query, so the load on every node increases as the number of queries grows. As a result, when the search is concentrated on a single node, the overall query processing time can be delayed until that node returns its result.
A new indexing method to address the scalability issue of k-d trees for k-NN query processing was proposed in [16]. The researchers constructed a distributed k-d tree to index multi-dimensional data [16]. The distributed k-d tree consists of a global k-d tree and local k-d trees, which can serve as masters and slaves at each terminal node of the global k-d tree. The global k-d tree divides the entire data area for processing, and a local k-d tree is constructed for each partitioned area to build the index. The master/slave indexing structure is built and processed to perform distributed processing. Because the distributed k-d tree divides the area, it is easy to identify nodes that do not participate in query processing; therefore, a filtering feature was added in [16], which enabled more efficient query processing. However, a disadvantage of the k-d tree is its frequent backtracking operations. Moreover, because the distributed k-d tree is rebuilt as a set of local k-d trees in addition to the global k-d tree, the query processing time increases as the tree height grows, depending on the data distribution.
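The backtracking behavior discussed above can be illustrated with a minimal 2-d k-d tree; this is a single-machine sketch for intuition, not the distributed structure of [16]. After descending to a leaf, the search must revisit the sibling subtree whenever the splitting plane lies closer than the best distance found so far:

```python
import math

def build(points, depth=0):
    """Build a k-d tree over 2-d points, alternating the split axis by depth."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, q, best=None):
    if node is None:
        return best
    p, axis = node["point"], node["axis"]
    if best is None or math.dist(q, p) < math.dist(q, best):
        best = p
    near, far = ((node["left"], node["right"]) if q[axis] < p[axis]
                 else (node["right"], node["left"]))
    best = nearest(near, q, best)                    # descend toward the query
    if abs(q[axis] - p[axis]) < math.dist(q, best):  # backtrack: the far side may hide a closer point
        best = nearest(far, q, best)
    return best

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))   # → (8, 1)
```

The conditional recursion into the far subtree is the backtracking step: in high dimensions it fires often, which is the performance problem the text describes.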
A distributed MVP tree, distributed-MVP (D-MVP), was implemented to perform high-dimensional indexing for image retrieval in an airport video monitoring system [14]. The MVP tree is an improved version of the VP tree: to make later tree searches more effective, it divides the data using multiple partition points, stores the data in each node, and also stores the distances to the partition points. The D-MVP tree uses a master/slave model for query processing, in which the master node divides the area, monitors the overall system load, and maintains balance. To avoid overloading a slave node when input data are concentrated on it, the master node dynamically adds partition areas by increasing the height of the tree, a method called "hot spot load balancing". However, this family of trees constantly calculates distances at higher nodes to find the terminal nodes that can conduct query processing, which increases the load on the master node and delays processing in real-time query environments.
iDistance [22] is a distance-based indexing technique that represents high-dimensional data as one-dimensional distances and indexes the data in a B+-tree [22][23]. For each data point, the distance to its reference point is calculated in the distance space; key-values are generated in order of proximity to the reference point and indexed in the B+-tree. Because indexing uses only distance, the exact location of an object cannot be determined, and two objects may generate the same key-value. Since iDistance adds a per-partition constant to the distance, there is almost no probability that one data point will be assigned to two reference points; however, if the distance exceeds the constant, an incorrect key-value may be generated, so an appropriate constant must be assigned. Because the index stores only distances to reference points, the exact location of the data cannot be recovered, and k-NN queries are converted to range queries for processing. The distance between the query and each reference point is calculated to compute the key range to search in the B+-tree, and only the data within the search range of the converted query are examined to generate the k-NN result. The initial search range in iDistance is set to 1% of the total index area, which may lead to frequent search range expansions. Therefore, an optimization technique is needed for converting k-NN queries into range queries.
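The key generation described above can be sketched as follows; the reference points and the constant c are illustrative assumptions, with c chosen larger than any within-partition distance:

```python
import math

c = 100.0                                    # partition-separating constant
refs = [(0.0, 0.0), (10.0, 10.0)]            # reference points p_0, p_1

def idistance_key(o):
    """Map point o to the one-dimensional B+-tree key i*c + dist(o, p_i)."""
    i, p = min(enumerate(refs), key=lambda ip: math.dist(o, ip[1]))
    return i * c + math.dist(o, p)

print(idistance_key((1.0, 0.0)))    # → 1.0   (partition 0)
print(idistance_key((10.0, 13.0)))  # → 103.0 (partition 1)
```

A k-NN query with radius r around q then becomes, per partition i, the one-dimensional range [i·c + dist(q, p_i) − r, i·c + dist(q, p_i) + r] on the B+-tree, which is why the choice of initial radius matters so much.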
The authors of [24] proposed an optimization technique based on product quantization. They applied the optimization technique in different similarity search scenarios, such as brute-force, approximate, and compressed-domain. They also presented a near-optimal algorithmic layout for exact and approximate k-nearest neighbor searches on GPU.
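Product quantization itself can be illustrated with a toy example. The hand-made codebooks below are assumptions for the illustration; real systems such as [24] train them with k-means per subspace over many codewords:

```python
import numpy as np

# One tiny codebook per sub-vector (dims 0-1 and dims 2-3 of a 4-d vector).
codebooks = [np.array([[0.0, 0.0], [1.0, 1.0]]),
             np.array([[0.0, 1.0], [2.0, 2.0]])]

def encode(v):
    """Replace each sub-vector with the id of its nearest codeword."""
    subs = v.reshape(2, 2)
    return [int(np.argmin(np.linalg.norm(cb - s, axis=1)))
            for cb, s in zip(codebooks, subs)]

def adc_distance(q, code):
    """Asymmetric distance: exact query sub-vectors vs. quantized data."""
    subs = q.reshape(2, 2)
    return sum(np.linalg.norm(s - cb[ci]) ** 2
               for s, cb, ci in zip(subs, codebooks, code)) ** 0.5

v = np.array([0.9, 1.1, 2.1, 1.9])
print(encode(v))   # → [1, 1]
```

Each vector is stored as a few small codeword ids instead of full floats, so billion-scale collections fit in memory while distances remain cheap to approximate.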
The authors of [25] proposed a distributed image retrieval framework based on locality-sensitive hashing (LSH) on Spark. A distributed K-means-based bag-of-visual-words (BoVW) algorithm and an LSH algorithm were proposed to build LSH index vectors on Spark in parallel; the BoVW algorithm operates on extracted SIFT feature point sets. However, the BoVW step takes more than 95% of the total index construction time, and the effectiveness of the clustering is very uncertain.
The authors of [26] proposed a fast CBIR system using Spark (CBIR-S) that targets large-scale images. It uses a memory-centric distributed storage system called Tachyon to enhance write operations, speeds up retrieval with a parallel k-NN search method based on the MapReduce model on Spark, and exploits Spark's caching mechanism.
A distributed, in-memory, high-dimensional indexing technique was presented in [17]. It aims at efficient CBIR over large-scale, high-dimensional data, using a high-dimensional indexing technique with a master/slave structure; Spark was used to build the index for high-dimensional vector data. A hybrid distributed high-dimensional index was implemented to address the issues of k-d trees and iDistance. Combining the advantages of both indexes, the overall structure is a master/slave structure in which the master is responsible for data distribution and for selecting the slaves that process each query, which reduces the system load; the slave nodes process queries and index the data.
The k-NN algorithm has a wide range of uses because it finds k neighbors for a given value of k. However, it has a drawback: the processing cost increases proportionally to the amount of data or the number of dimensions. Several studies have been conducted to improve the throughput of k-NN [26][27][28].
The “jump method” was proposed to increase the speed of k-NN query processing [28]. To achieve this, the k-means algorithm is applied to the data, and clusters are generated using the formula proposed in the study. When the user provides a value for k, k centers are created, and the k-means algorithm allocates each data point to the nearest center.
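The cluster-then-prune idea can be sketched as follows. This is a simplified illustration rather than the exact formulation of [28]: the plain k-means loop, the deterministic initialization, and the choice to search only the single nearest cluster are all assumptions of the sketch:

```python
import numpy as np

def kmeans(X, init_centers, iters=10):
    """Plain Lloyd k-means with explicit initial centers (for determinism)."""
    centers = init_centers.astype(float)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
        centers = np.array([X[labels == i].mean(axis=0)
                            for i in range(len(centers))])
    return centers, labels

def clustered_knn(X, centers, labels, q, k):
    """Search only the cluster nearest to q instead of the whole dataset."""
    c = np.argmin(np.linalg.norm(centers - q, axis=1))
    members = np.flatnonzero(labels == c)
    dists = np.linalg.norm(X[members] - q, axis=1)
    return members[np.argsort(dists)[:k]]

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])
centers, labels = kmeans(X, X[[0, 3]])
print(clustered_knn(X, centers, labels, np.array([10.0, 10.4]), 2))  # → [3 4]
```

Pruning to one cluster trades a small risk of missing true neighbors near cluster boundaries for a large reduction in distance computations.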
Reference [29] overcomes a limitation of the original k-NN algorithm, which ignores the influence of neighboring points and thereby directly affects localization accuracy. The researchers improved indoor location identification using Wi-Fi received signal strength indicator (RSSI)-based fingerprints. Because the RSSI is highly dependent on the access point (AP) and on the fingerprint learning stage, a DNN is combined with conventional k-NN to resolve this dependency: the DNN classifies the fingerprint data sets, and the candidate locations within a class are then refined by the improved k-NN algorithm to determine the final position. The improved k-NN boosts the weights of the k nearest neighbors according to the number of matching APs. In [30], k-NN and DNN were used to address the detection accuracy of existing intrusion detection systems. Network attack data are classified by performing classification and labeling tasks on the "CICIDS-2017" dataset, which includes network attacks. The k-NN algorithm is used for machine learning, whereas the DNN is used for deep learning, and the outputs of the two methods are compared; the comparison with general k-NN queries demonstrates the superiority of the DNN.


  1. Hu, Y.; Huang, J.; Schwing, A.G. VideoMatch: Matching based video object segmentation. Comput. Vis. ECCV 2018, 2018, 56–73.
  2. Zhao, L.; He, Z.; Cao, W.; Zhao, D. Real-time moving object segmentation and classification from HEVC compressed surveillance video. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 1346–1357.
  3. Joshi, K.A.; Thakore, D.G. A survey on moving object detection and tracking in video surveillance system. J. Soft Comput. Eng. 2012, 2, 44–48.
  4. Cheng, J.; Yuan, Y.; Li, Y.; Wang, J.; Wang, S. Learning to Segment Video Object With Accurate Boundaries. IEEE Trans. Multimed. 2020, 23, 3112–3123.
  5. Matiolański, A.; Maksimova, A.; Dziech, A. CCTV object detection with fuzzy classification and image enhancement. Multimed. Tool. Appl. 2016, 75, 10513–10528.
  6. Kakadiya, R.; Lemos, R.; Mangalan, S.; Pillai, M.; Nikam, S. AI based automatic robbery/theft detection using smart surveillance in banks. In Proceedings of the International Conference on Electronics, Communication and Aerospace Technology, Coimbatore, India, 12–14 June 2019; pp. 201–204.
  7. Sukhia, K.N.; Riaz, M.M.; Ghafoor, A. Content-based retinal image retrieval. IET Image Process. 2019, 13, 1525–1534.
  8. Yu, J.; Liu, H.; Zheng, X. Two-dimensional joint local and nonlocal discriminant analysis-based 2D image feature extraction for deep learning. Neural Comput. Applic. 2020, 32, 6009–6024.
  9. Sharif, U.; Mehmood, Z.; Mahmood, T.; Javid, M.A.; Rehman, A.; Saba, T. Scene analysis and search using local features and support vector machine for effective content-based image retrieval. Artif. Intell. Rev. 2019, 52, 901–925.
  10. Saritha, R.R.; Paul, V.; Kumar, P.G. Content based image retrieval using deep learning process. Cluster Comput. 2019, 22, 4187–4200.
  11. Tadi Bani, N.T.; Fekri-Ershad, S. Content-based image retrieval based on combination of texture and colour information extracted in spatial and frequency domains. Electron. Libr. 2019, 37, 650–666.
  12. Ma, Y.; Liu, D.; Scott, G.; Uhlmann, J.; Shyu, C.R. In-memory distributed indexing for large-scale media data retrieval. In Proceedings of the International Symposium on Multimedia, Taichung, Taiwan, 11–13 December 2017; pp. 232–239.
  13. Skopal, T.; Lokoč, J. New dynamic construction techniques for M-tree. J. Discrete Algor. 2009, 7, 62–77.
  14. Cheng, H.; Yang, W.; Tang, R.; Mao, J.; Luo, Q.; Li, C.; Wang, A. Distributed Indexes Design to Accelerate Similarity based Images Retrieval in Airport Video Monitoring Systems. In Proceedings of the International Conference on Fuzzy Systems and Knowledge Discovery, Zhangjiajie, China, 15–17 August 2015; pp. 1908–1912.
  15. Patwary, M.M.A.; Satish, N.R.; Sundaram, N.; Liu, J.; Sadowski, P.J.; Racah, E.; Byna, S.; Tull, C.; Bhimji, W.; Prabhat; et al. PANDA: Extreme Scale Parallel k-Nearest Neighbor on Distributed Architectures. In Proceedings of the International Parallel and Distributed Processing Symposium, Chicago, IL, USA, 23–27 May 2016; pp. 494–503.
  16. Wei, H.; Du, Y.; Liang, F.; Zhou, C.; Liu, Z.; Yi, J.; Xu, K.; Wu, D. A k-d tree-based algorithm to parallelize kriging interpolation of big spatial data. GISci. Remote Sens. 2015, 52, 40–57.
  17. Lee, H.; Lee, H.; Wee, J.; Song, S.; Kang, T.; Choi, D.; Bok, K. Distance-based high-dimensional index structure for efficient query processing in spark environments. In Proceedings of the ICCC 2020, Busan, Korea, 12–14 November 2020; pp. 321–322.
  18. Yang, M.; He, D.; Fan, M.; Shi, B.; Xue, X.; Li, F.; Huang, J. Dolg: Single-stage image retrieval with deep orthogonal fusion of local and global features. In Proceedings of the IEEE/CVF International conference on Computer Vision, Montreal, BC, USA, 11–17 October 2021; pp. 11772–11781.
  19. Hung, B.T.; Chakrabarti, P. Parking lot occupancy detection using hybrid deep learning CNN-LSTM approach. In Proceedings of the 2nd International Conference on Artificial Intelligence: Advances and Applications: ICAIAA 2021, Jaipur, India, 27–28 March 2021; pp. 501–509.
  20. Hung, B.T.; Pramanik, S. Content-Based Image Retrieval using Multi Deep Neural Networks and K-Nearest Neighbor Approaches. 2023. Available online: (accessed on 10 March 2023).
  21. Yenigalla, S.C.; Srinivas Rao, K.; Ngangbam, P.S. Implementation of content-based image retrieval using artificial neural networks. Hologr. Meets Adv. Manuf. 2023, 15, 10.
  22. Jagadish, H.V.; Ooi, B.C.; Tan, K.L.; Yu, C.; Zhang, R. iDistance: An Adaptive B+-tree based indexing method for nearest neighbor search. ACM Trans. Database Syst. 2005, 30, 364–397.
  23. Huynh, C.V.; Huh, J.H. B+-Tree construction on massive Data with Hadoop. Cluster Comput. 2019, 22, 1011–1021.
  24. Johnson, J.; Douze, M.; Jégou, H. Billion-scale similarity search with GPUs. IEEE Trans. Big Data 2019, 7, 535–547.
  25. Hou, Z.; Huang, C.; Wu, J.; Liu, L. Distributed Image Retrieval Base on LSH Indexing on Spark. In Proceedings of the Big Data and Security: First International Conference, ICBDS 2019, Nanjing, China, 20–22 December 2019; pp. 429–441.
  26. Mezzoudj, S.; Behloul, A.; Seghir, R.; Saadna, Y. A parallel content-based image retrieval system using spark and tachyon frameworks. J. King Saud. Univ. Comput. Inf. Sci. 2021, 33, 141–149.
  27. Yan, Z.; Lin, Y.; Peng, L.; Zhang, W. Harmonia: A high throughput B+-tree for GPUs. In Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, New York, NY, USA, 16–20 February 2019; pp. 133–144.
  28. Vajda, S.; Santosh, K.C. A fast k-nearest neighbor classifier using unsupervised clustering. In Proceedings of the International Conference on Recent Trends in Image Processing and Pattern Recognition, Kingsville, TX, USA, 22–23 December 2022; Springer: Singapore, 2016.
  29. Dai, P.; Yang, Y.; Wang, M.; Yan, R. Combination of DNN and improved KNN for indoor location fingerprinting. Wirel. Commun. Mob. Comput. 2019, 2019, 4283857.
  30. Atefi, K.; Hashim, H.; Kassim, M. Anomaly analysis for the classification purpose of intrusion detection system with K-nearest neighbors and deep neural network. In Proceedings of the IEEE 7th Conference on Systems, Process and Control (ICSPC), Melaka, Malaysia, 13–14 December 2019.
Update Date: 08 Jun 2023