A Vision-Based Micro-Manipulation System: Comparison
Please note this is a comparison between Version 1 by Vytautas Bucinskas and Version 2 by Camila Xu.

Micro-manipulation systems play a critical role in various scientific and industrial applications, enabling precise handling and control of microscopic objects. Precise micro-manipulation is essential, whether it involves the assembly of microelectronic components, the handling of biological samples, or the fabrication of micro-scale structures. However, achieving accurate manipulation at the micro-scale poses unique challenges, primarily due to the minuscule size of the objects involved.

  • object detection
  • micro-manipulation
  • image enhancement

1. Introduction

Micro-manipulation systems play a critical role in various scientific and industrial applications, enabling precise handling and control of microscopic objects. These systems have gained increasing importance in diverse fields such as biotechnology, materials science, and microelectronics, where the need for manipulating microscopic objects is constantly growing [1][2][3]. Precise micro-manipulation is essential, whether it involves the assembly of microelectronic components, the handling of biological samples, or the fabrication of micro-scale structures. However, achieving accurate manipulation at the micro-scale poses unique challenges, primarily due to the minuscule size of the objects involved.
In recent years, the integration of vision-based technologies into micro-manipulation systems has opened new paths for enhancing their capabilities, enabling more efficient and versatile manipulation of micro-scale objects. While micro-manipulation has traditionally relied on manual control or open-loop control systems, these methods are limited in their precision and adaptability. In particular, when dealing with living cells, a key challenge arises in their detection and recognition. Living cells are often immersed in non-transparent growth media, making visual recognition and precise manipulation a difficult task. Additionally, variations in image quality can be influenced by factors such as the nature of the object material, calibration parameters, and ambient light conditions.
Existing systems for biological object manipulation are frequently manually controlled, and many of these systems are tested not with biological specimens, but with polymer or metal micro-objects [4]. User participation is currently required to determine the position of the object and the distance to the object being manipulated. This limitation not only hampers efficiency, but also introduces potential inaccuracies in the manipulation process.

2. Micro-Manipulation Systems

In the field of micro-manipulation, recent developments have contributed to the refinement and advancement of techniques for handling microscopic objects. These developments reflect the ongoing efforts to improve precision, adaptability, and automation in micro-manipulation systems. For example, Riegel et al. explored the possibilities of vision-based manipulation of micro-parts through simulation-based experiments. The study resulted in successful grasp, hold, and release manipulations of micro-parts (400–600 μm size) with a force-sensing resolution of less than 6 μN, even when softness variation was introduced on the micro-object (±20% around the average value) [5]. Chen et al. mechanically stimulated muscle cell structures using a vision-based micro-robotic manipulation system, emphasizing the importance of vision-based force measurement and correction techniques to enhance precision [6]. In the domain of biomedical microelectrode implantation, Qin et al. automated the hooking of flexible microelectrode probes with micro-needles, employing visual guidance and a robotic hooking control system that operated under varying microscope magnifications [7]. Another contribution came from a robotic framework for obtaining single cells from tissue sections, incorporating an attention-mechanism-improved (AMI) tip localization neural network, a transformation matrix for camera-to-robot coordination, and model predictive control (MPC) to enable precise single-cell dissection from tissues, with the error of autonomous single-cell dissection being no more than 0.61 μm [8]. Additionally, a 3D-printed soft robotic hand with computer-vision-based feedback control provided a novel approach to micro-manipulation, offering a remarkable degree of accuracy and precision in micro-scale object manipulation [9].
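The camera-to-robot transformation matrix mentioned above is a recurring ingredient in such frameworks. As a minimal sketch (not the cited authors' implementation), a 2D affine camera-to-robot transform can be estimated by least squares from a handful of matched calibration points; the function names and the toy calibration data below are illustrative assumptions.

```python
import numpy as np

def estimate_camera_to_robot(cam_pts, rob_pts):
    """Estimate a 2D affine camera-to-robot transform by least squares.

    cam_pts, rob_pts: (N, 2) arrays of matched calibration points
    (e.g. a tool tip observed in the image vs. read from the robot
    encoders, N >= 3). Returns a 3x3 homogeneous transform matrix.
    """
    cam = np.asarray(cam_pts, dtype=float)
    rob = np.asarray(rob_pts, dtype=float)
    n = cam.shape[0]
    # Build an [x, y, 1] design matrix and solve for the 2x3 affine
    # parameters in a least-squares sense.
    A = np.hstack([cam, np.ones((n, 1))])
    T, *_ = np.linalg.lstsq(A, rob, rcond=None)   # T has shape (3, 2)
    H = np.eye(3)
    H[:2, :] = T.T                                # pack as homogeneous 3x3
    return H

def camera_to_robot(H, cam_xy):
    """Map one camera-frame point into robot coordinates."""
    x, y = cam_xy
    out = H @ np.array([x, y, 1.0])
    return out[:2]
```

With four corner points of the field of view as calibration pairs, the recovered matrix maps any detected pixel position directly to a robot target; using more than the minimum number of points averages out detection noise.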

3. Positioning Accuracy of Micro-Manipulators

In response to the escalating demand for advanced positioning systems, ball screw-based mechanisms, driven by either servo drives or stepper motors, have emerged as an important choice due to their capacity to deliver exceptional levels of positioning accuracy and repeatability. A pivotal approach to mitigating the transient effects associated with these systems is the formulation of mathematical models, as emphasized in prior works [10][11]. Mathematical modeling encompasses the empirical characterization of the system behavior across a diverse spectrum of operational scenarios, thereby streamlining development efforts and facilitating the integration of contemporary control methodologies. Machine learning techniques are increasingly recognized as a way to attain the utmost precision in system positioning. Bejar et al. discuss the development of a deep reinforcement learning-based neuro-control system for magnetic positioning system applications. Two neuro-controllers were trained to control the X- and Y-axis motions of the positioning system. The training was based on the Q-learning method with an actor-critic architecture, and the parameters were updated using gradient descent techniques to maximize a reward function defined in terms of positioning accuracy. The performance of the control system was verified for different setpoints and working conditions, demonstrating the effectiveness of the proposed method [12]. Another paper addresses the issue of hysteresis nonlinearity in a piezoelectric micro-positioning platform (PMP), which limits its positioning accuracy. It introduces a Krasnosel’skii–Pokrovskii (KP) model to describe the hysteresis behavior, and involves an adaptive linear neural network for real-time model identification.
To compensate for hysteresis, the paper presents a feed-forward control method and a hybrid approach combining iterative learning control and fractional order Proportional–Integral–Derivative (PID) control, which is validated through experiments and significantly enhances control accuracy [13]. Xu et al. introduce a method that combines machine vision and machine learning to determine the correctness of reed positioning and estimate adjusting displacements. The back propagation neural network (BPNN) achieved 100% accuracy in assessing correctness and a measuring precision of ±0.025 mm for displacement estimation, providing an effective solution for improving the manufacturing process of aerophones [14]. Leroux et al. explore the challenge of manually controlling manipulator robots with more degrees of freedom than can be managed through traditional input methods. They propose a solution that combines eye tracking and computer vision to automate robot positioning toward a 3D target point. This approach significantly improves precision, reducing the average error and making it a valuable tool for robot control and rehabilitation engineering [15]. Contemporary scanning probe microscopes, including scanning and tunneling electron microscopes, have increasingly incorporated visual recognition and machine learning techniques for the extraction of intricate data from acquired images [16]. However, the utilization of these data has hitherto not extended to the autonomous localization of target locations, precise selection of measurement positions, and the rectification of inherent inaccuracies.
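To make the PID-based positioning loop concrete, the sketch below shows a standard discrete (integer-order) PID controller driving a toy integrator plant toward a setpoint. This is a generic baseline, not the cited fractional-order PMP controller: the gains and plant model are illustrative assumptions, and the fractional-order variant would generalize the integral and derivative terms to non-integer orders.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains only;
    not the parameters used in the cited PMP study)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt            # accumulated error
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

def simulate(controller, setpoint, steps=500, dt=0.01):
    """Drive a toy integrator plant (position changes at rate u),
    standing in for one axis of a micro-positioning stage."""
    pos = 0.0
    for _ in range(steps):
        u = controller.update(setpoint, pos)
        pos += u * dt
    return pos
```

Running `simulate(PID(10.0, 1.0, 0.05, dt=0.01), 1.0)` converges close to the unit setpoint; in the hybrid scheme described above, the feed-forward hysteresis model and iterative learning loop would correct the residual error this feedback term cannot remove.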

4. Image Enhancement

In the process of capturing microbial cell images, the quality of these images can be influenced by various factors, including fluctuations in illumination [17]. Moreover, microbial images may also be susceptible to distortions arising from camera lenses and exposure time settings. To address the illumination issue, image enhancement techniques are employed as a preprocessing step. Numerous methods for enhancing illumination and contrast have been explored in the literature [18][19][20]. Contrast enhancement is a technique used to improve image quality and reveal subtle details in low-contrast areas. Various intensity adjustments, often determined by users, are used to enhance visual contrast. However, the effectiveness of these adjustments depends on user-defined function coefficients, and different transformations can produce distinct patterns. In the realm of biomedical image enhancement, various techniques have been explored. Shirazi et al. harnessed Wiener filtering and Curvelet transforms to enhance red blood cell images and reduce noise [21]. Plissiti et al. [22] introduced Contrast-Limited Adaptive Histogram Equalization (CLAHE) for detecting cell nuclei boundaries in Pap smear images, optimizing image backgrounds and regions of interest through CLAHE and global thresholding in preprocessing. Rejintal et al. [23] opted for histogram equalization in the preprocessing stage, aiming to enrich contrast in leukemia microscopic images to facilitate cell segmentation and cancer detection. Tyagi et al. [24] turned to histogram equalization for image enhancement, striving to classify normal RBC and poikilocyte cells using Artificial Neural Networks, fueled by a dataset of 100 images from diverse blood samples. Somasekar et al. [25] uncovered the potential of Gamma Equalization (GE) to enhance images.
Additionally, Sparavigana [26] highlighted the versatile use of the Retinex filter in GIMP for enhancing both panoramic radiographic and microscopy images, offering valuable applications in medical, biological, and other scientific imaging. Bhateja et al. [27] proposed an enhanced Multi-scale Retinex (MSR) technique with chromaticity preservation (MSRCP) for enhancing bacterial microscopy images, leading to improved contrast and visibility of bacterial cells, as confirmed by Image Quality Assessment (IQA) parameters. Lin et al. [28] presented a Fuzzy Automatic Contrast Enhancement (FACE) method that utilizes fuzzy clustering to improve image contrast automatically, avoiding visual artifacts and retaining original colors.
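Two of the enhancement techniques surveyed above, histogram equalization and gamma equalization, can be sketched in a few lines each; the NumPy implementation below is a minimal illustration of the underlying intensity remapping for 8-bit grayscale images, not the adaptive (CLAHE) or Retinex variants discussed in the cited works.

```python
import numpy as np

def histogram_equalization(gray):
    """Global histogram equalization: remap intensities through the
    normalized cumulative histogram so a low-contrast image spreads
    over the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # lowest occupied bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]                           # apply lookup table

def gamma_correction(gray, gamma=0.5):
    """Gamma equalization: gamma < 1 brightens mid-tones (useful for
    under-illuminated microscopy frames), gamma > 1 darkens them."""
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[gray]
```

CLAHE differs from the global version by computing such a lookup table per image tile with a clip limit on the histogram, which is what makes it effective on unevenly illuminated cell images.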

5. Visual Recognition at Microscopic Level

Machine-learning- and deep-learning-based computer-assisted solutions offer significant improvements in clinical microbiology research, particularly in image analysis and bacterial species recognition [29]. Ronneberger et al. [30] present U-Net, a convolutional network with a contracting–expanding architecture that leverages heavy data augmentation, showcasing remarkable performance in neuronal structure segmentation and cell tracking tasks, all while maintaining high-speed processing on modern GPUs. Hollandi et al. [31] present nucleAIzer, a deep learning approach that adapts its nucleus-style model through image style transfer, enabling efficient, annotation-free cell nucleus localization across diverse microscopy experiments and providing a user-friendly solution for biological light microscopy. Greenwald et al. [32] present TissueNet, a dataset for segmentation model training, and Mesmer, a deep learning-based algorithm that outperforms previous methods in cell boundary identification, demonstrating adaptability, human-level performance, and application to cell feature extraction and lineage analysis during human gestation. Haberl et al. [33] introduced CDeep3M, a cloud-based deep convolutional neural network solution designed for biomedical image segmentation. The system has been successfully benchmarked on various imaging datasets, demonstrating its accessibility benefits. Lalit et al. [34] introduce EmbedSeg, a method for precise instance segmentation in 2D and 3D biomedical images, offering top-tier performance, open-source availability, and user-friendly tools for broad accessibility. Nitta et al. [35] introduce a groundbreaking technology, intelligent image-activated cell sorting, enabling real-time, automated sorting of cells by combining high-throughput microscopy, data processing, and decision-making for various biological studies. Fujita et al.
[36] present an innovative approach for simultaneous cell detection and segmentation using Mask R-CNN, enhancing detection performance by incorporating focal loss and achieving promising results on benchmark datasets, particularly DSB2018. Whipp et al. [37] present a study in which deep learning models for automated microbial colony counting are developed and evaluated, utilizing the You Only Look Once (YOLO) framework. Sebastián et al. [38] introduced a YOLOv5-based model for automating cell recognition and counting, and compared it to the current segmentation-based U-Net and OpenCV model, achieving high accuracy, precision, recall, and F1 scores. Huang et al. [39] introduce a novel approach that combines contrast enhancement and the YOLOv5 framework for automated yeast cell detection, achieving accuracy and performance exceeding conventional methods, with additional precision through OpenCV for cell contour delineation.
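The classical counterpart to these learned detectors, thresholding followed by blob delineation and counting, remains a common post-processing and baseline step (as in the OpenCV-based contour stages mentioned above). The sketch below is a dependency-free stand-in using 4-connected component labeling rather than OpenCV's contour API; the function name and threshold value are illustrative assumptions.

```python
import numpy as np
from collections import deque

def count_cells(gray, threshold=128):
    """Count bright blobs in a grayscale image by thresholding and
    4-connected component labeling. A deep detector such as YOLO
    replaces the fixed threshold with learned bounding boxes."""
    mask = np.asarray(gray) > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # new component found
                q = deque([(i, j)])             # flood-fill it
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count
```

On clean, well-separated cells this simple pipeline already counts correctly; the deep learning approaches surveyed above earn their complexity on touching cells, uneven illumination, and non-transparent growth media, where fixed thresholds fail.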