Vismanis, O.; Arents, J.; Subačiūtė-Žemaitienė, J.; Bučinskas, V.; Dzedzickis, A.; Patel, B.; Tung, W.; Lin, P.; Greitans, M. A Vision-Based Micro-Manipulation System. Encyclopedia. Available online: https://encyclopedia.pub/entry/53143 (accessed on 06 May 2024).
A Vision-Based Micro-Manipulation System

Micro-manipulation systems play a critical role in various scientific and industrial applications, enabling precise handling and control of microscopic objects. Precise micro-manipulation is essential, whether it involves the assembly of microelectronic components, the handling of biological samples, or the fabrication of micro-scale structures. However, achieving accurate manipulation at the micro-scale poses unique challenges, primarily due to the minuscule size of the objects involved.

Keywords: object detection; micro-manipulation; image enhancement

1. Introduction

Micro-manipulation systems play a critical role in various scientific and industrial applications, enabling precise handling and control of microscopic objects. These systems have gained increasing importance in diverse fields such as biotechnology, materials science, and microelectronics, where the need for manipulating microscopic objects is constantly growing [1][2][3]. Precise micro-manipulation is essential, whether it involves the assembly of microelectronic components, the handling of biological samples, or the fabrication of micro-scale structures. However, achieving accurate manipulation at the micro-scale poses unique challenges, primarily due to the minuscule size of the objects involved.
In recent years, the integration of vision-based technologies into micro-manipulation systems has opened new paths for enhancing their capabilities, enabling more efficient and versatile manipulation of micro-scale objects. While micro-manipulation has traditionally relied on manual control or open-loop control systems, these methods are limited in their precision and adaptability. In particular, when dealing with living cells, a key challenge arises in their detection and recognition. Living cells are often immersed in non-transparent growth media, making visual recognition and precise manipulation a difficult task. Additionally, variations in image quality can be influenced by factors such as the nature of the object material, calibration parameters, and ambient light conditions.
Existing systems for biological object manipulation are frequently manually controlled, and many of these systems are tested not with biological specimens, but with polymer or metal micro-objects [4]. User participation is currently required to determine the position of the object and the distance to the object being manipulated. This limitation not only hampers efficiency, but also introduces potential inaccuracies in the manipulation process.

2. Micro-Manipulation Systems

In the field of micro-manipulation, recent developments have contributed to the refinement and advancement of techniques for handling microscopic objects. These developments reflect the ongoing efforts to improve precision, adaptability, and automation in micro-manipulation systems. For example, Riegel et al. explored the possibilities of vision-based manipulation of micro-parts through simulation-based experiments. The study resulted in successful grasp, hold, and release manipulations of micro-parts (400–600 μm size) with a force-sensing resolution of less than 6 μN, even when softness variation was introduced on the micro-object (±20% around the average value) [5]. Chen et al. mechanically stimulated muscle cell structures using a vision-based micro-robotic manipulation system, emphasizing the importance of vision-based force measurement and correction techniques to enhance precision [6]. In the domain of biomedical microelectrode implantation, Qin et al. automated the hooking of flexible microelectrode probes with micro-needles, employing visual guidance and a robotic hooking control system that operated under varying microscope magnifications [7]. Another contribution came from a robotic framework for obtaining single cells from tissue sections, incorporating an attention mechanism improved (AMI) tip localization neural network, a transformation matrix for camera-to-robot coordination, and model predictive control (MPC) to enable precise single-cell dissection from tissues, with the error of autonomous single-cell dissection being no more than 0.61 μm [8]. Additionally, a 3D-printed soft robotic hand with computer-vision-based feedback control provided a novel approach to micro-manipulation, offering a remarkable degree of accuracy and precision in micro-scale object manipulation [9].
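The camera-to-robot coordinate transformation mentioned above is commonly expressed as a homogeneous transform mapping image pixels to stage coordinates. The following sketch is illustrative only; the matrix values are hypothetical placeholders, not the calibration from [8], where the matrix would be estimated from measured calibration point pairs:

```python
import numpy as np

# Hypothetical 2D homogeneous transform mapping camera pixels to robot
# stage coordinates (rotation + scale + translation). In practice the
# entries are fitted from calibration point pairs, e.g. by least squares.
T = np.array([
    [0.0,   0.01, 5.0],   # robot_x = 0.01 * pixel_y + 5.0 (mm)
    [-0.01, 0.0,  8.0],   # robot_y = -0.01 * pixel_x + 8.0 (mm)
    [0.0,   0.0,  1.0],
])

def camera_to_robot(px, py):
    """Map a camera pixel (px, py) to robot stage coordinates in mm."""
    cam = np.array([px, py, 1.0])   # homogeneous pixel coordinate
    rx, ry, _ = T @ cam
    return rx, ry

print(camera_to_robot(100, 200))
```

Once such a transform is available, a detected object centroid in the image can be converted directly into a motion target for the manipulator.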

3. Positioning Accuracy of Micro-Manipulators

In response to the escalating demand for advanced positioning systems, ball screw-based mechanisms, driven by either servo drives or stepper motors, have emerged as an important choice due to their high positioning accuracy and repeatability. A pivotal approach to mitigating the transient effects associated with these systems is the formulation of mathematical models, as emphasized in prior works [10][11]. Mathematical modeling encompasses the empirical characterization of system behavior across a diverse spectrum of operational scenarios, thereby streamlining development efforts and facilitating the integration of contemporary control methodologies.
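As a toy illustration of such modeling, a positioning axis is often approximated as a second-order mass-spring-damper system and simulated numerically. The parameters below are placeholders, not identified values from [10][11]:

```python
# Second-order axis model: m*x'' + c*x' + k*x = F(t).
# Placeholder parameters; real values come from system identification.
m, c, k = 1.0, 8.0, 100.0   # mass (kg), damping (N*s/m), stiffness (N/m)
dt, steps = 1e-3, 5000      # time step (s), number of steps (5 s total)
x, v = 0.0, 0.0             # position (m), velocity (m/s)
F = 100.0                   # constant force input (N); steady state x = F/k

for _ in range(steps):
    a = (F - c * v - k * x) / m   # Newton's second law
    v += a * dt                   # semi-implicit Euler integration
    x += v * dt

print(round(x, 3))  # settles near F/k = 1.0 m after the transient
```

Sweeping the inputs of such a model over the expected operating range is one way to characterize transient behavior before committing to a controller design.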
Machine learning techniques are increasingly applied to push system positioning toward its precision limits. Bejar et al. discuss the development of a deep reinforcement learning-based neuro-control system for magnetic positioning system applications. Two neuro-controllers were trained to control the X and Y-axis motions of the positioning system. The training was based on the Q-learning method with an actor-critic architecture, and the parameters were updated using gradient descent techniques to maximize a reward function defined in terms of positioning accuracy. The performance of the control system was verified for different setpoints and working conditions, demonstrating the effectiveness of the proposed method [12]. Another paper addresses the issue of hysteresis nonlinearity in a piezoelectric micro-positioning platform (PMP), which limits its positioning accuracy. It introduces a Krasnosel’skii–Pokrovskii (KP) model to describe the hysteresis behavior, and involves an adaptive linear neural network for real-time model identification. To compensate for hysteresis, the paper presents a feed-forward control method and a hybrid approach combining iterative learning control and fractional order Proportional–Integral–Derivative (PID) control, which is validated through experiments and significantly enhances control accuracy [13].
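A minimal discrete PID loop of the kind combined with iterative learning control in [13] can be sketched as follows; the gains and the simple integrator plant are illustrative placeholders, not the tuned fractional-order controller from that work:

```python
# Minimal discrete PID position controller driving a first-order plant.
# Gains and plant dynamics are illustrative, not the values from [13].
kp, ki, kd = 2.0, 0.5, 0.1
dt, setpoint = 0.01, 10.0       # time step (s), target position (um)
pos, integral, prev_err = 0.0, 0.0, setpoint

for _ in range(5000):           # 50 s of simulated time
    err = setpoint - pos
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * derivative   # control effort
    prev_err = err
    pos += u * dt               # simple integrator plant model

print(round(pos, 2))  # converges to the 10.0 um setpoint
```

The integral term removes steady-state error, while the derivative term damps the response; hysteresis compensation schemes such as the KP model in [13] would act as an additional feed-forward correction on top of this loop.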
Xu et al. introduce a method that combines machine vision and machine learning to determine the correctness of reed positioning and estimate adjusting displacements. The back propagation neural network (BPNN) achieved 100% accuracy in assessing correctness and a measuring precision of ±0.025 mm for displacement estimation, providing an effective solution for improving the manufacturing process of aerophones [14]. Leroux et al. explore the challenge of manually controlling manipulator robots with more degrees of freedom than can be managed through traditional input methods. They propose a solution that combines eye tracking and computer vision to automate robot positioning toward a 3D target point. This approach significantly improves precision, reducing the average error and making it a valuable tool for robot control and rehabilitation engineering [15]. Contemporary scanning microscopes, including scanning electron and scanning tunneling microscopes, have increasingly incorporated visual recognition and machine learning techniques for the extraction of intricate data from acquired images [16]. However, these data have not yet been used for autonomous localization of target locations, precise selection of measurement positions, or correction of inherent inaccuracies.

4. Image Enhancement

In the process of capturing microbial cell images, the quality of these images can be influenced by various factors, including fluctuations in illumination [17]. Moreover, microbial images may also be susceptible to distortions arising from camera lenses and exposure time settings. To address the illumination issue, image enhancement techniques are employed as a preprocessing step. Numerous methods for enhancing illumination and contrast have been explored in the literature [18][19][20]. Contrast enhancement is a technique used to improve image quality and reveal subtle details in low-contrast areas. Various intensity adjustments, often determined by users, are used to enhance visual contrast. However, the effectiveness of these adjustments depends on user-defined function coefficients, and different transformations can produce distinct patterns.
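For instance, one of the simplest user-tunable intensity adjustments is the power-law (gamma) transform, of the kind applied by Somasekar et al. [25]; a minimal NumPy sketch:

```python
import numpy as np

def gamma_enhance(img, gamma=0.5):
    """Power-law (gamma) intensity transform for an 8-bit grayscale image.

    gamma < 1 brightens dark regions; gamma > 1 darkens bright ones.
    """
    norm = img.astype(np.float64) / 255.0              # map to [0, 1]
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)

# Tiny synthetic image with values crowded toward the dark end.
dark = np.array([[16, 64], [128, 255]], dtype=np.uint8)
print(gamma_enhance(dark, gamma=0.5))
```

The choice of the gamma coefficient is exactly the kind of user-defined parameter the paragraph above refers to: different values produce visibly different intensity patterns from the same source image.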
In the realm of biomedical image enhancement, various techniques have been explored. Shirazi et al. harnessed Wiener filtering and Curvelet transforms to enhance red blood cell images and reduce noise [21]. Plissiti et al. [22] introduced Contrast-Limited Adaptive Histogram Equalization (CLAHE) for detecting cell nuclei boundaries in Pap smear images, optimizing image backgrounds and regions of interest through CLAHE and global thresholding in preprocessing. Rejintal et al. [23] opted for histogram equalization in the preprocessing stage, aiming to enrich contrast in leukemia microscopic images to facilitate cell segmentation and cancer detection. Tyagi et al. [24] turned to histogram equalization for image enhancement, striving to classify normal RBC and poikilocyte cells using Artificial Neural Networks, fueled by a dataset of 100 images from diverse blood samples. Somasekar et al. [25] uncovered the potential of Gamma Equalization (GE) to enhance images. Additionally, Sparavigana [26] highlighted the versatile use of the Retinex filter in GIMP for enhancing both panoramic radiographic and microscopy images, offering valuable applications in medical, biological, and other scientific imaging. Bhateja et al. [27] proposed an enhanced Multi-scale Retinex (MSR) technique with chromaticity preservation (MSRCP) for enhancing bacterial microscopy images, leading to improved contrast and visibility of bacterial cells, as confirmed by Image Quality Assessment (IQA) parameters. Lin et al. [28] presented a Fuzzy Automatic Contrast Enhancement (FACE) method that utilizes fuzzy clustering to improve image contrast automatically, avoiding visual artifacts and retaining original colors.
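The global histogram equalization used in [23][24] remaps intensities through the image's cumulative distribution so that a narrow intensity band is stretched across the full range; a minimal NumPy sketch (CLAHE, as in [22], additionally tiles the image and clips the histogram, which is not shown here):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)         # intensity lookup table
    return lut[img]

# Low-contrast image: all values crowded in a narrow band.
low = np.array([[100, 101], [102, 103]], dtype=np.uint8)
out = hist_equalize(low)
print(out.min(), out.max())  # intensity range stretched toward 0-255
```

On the four-pixel example the 100–103 band is spread out across most of the 8-bit range, which is the effect exploited to reveal low-contrast cell boundaries before segmentation.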

5. Visual Recognition at Microscopic Level

Machine-learning- and deep-learning-based computer-assisted solutions offer significant improvements in clinical microbiology research, particularly in image analysis and bacterial species recognition [29]. Ronneberger et al. [30] present U-Net, a convolutional network with a contracting–expanding architecture that leverages heavy data augmentation, showcasing remarkable performance in neuronal structure segmentation and cell tracking tasks, all while maintaining high-speed processing on modern GPUs. Hollandi et al. [31] present nucleAIzer, a deep learning approach that adapts its nucleus-style model through image style transfer, enabling efficient, annotation-free cell nucleus localization across diverse microscopy experiments and providing a user-friendly solution for biological light microscopy.
Greenwald et al. [32] present TissueNet, a dataset for segmentation model training, and Mesmer, a deep learning-based algorithm that outperforms previous methods in cell boundary identification, demonstrating adaptability, human-level performance, and application to cell feature extraction and lineage analysis during human gestation. Haberl et al. [33] introduced CDeep3M, a cloud-based deep convolutional neural network solution designed for biomedical image segmentation. The system has been successfully benchmarked on various imaging datasets, demonstrating its accessibility benefits. Lalit et al. [34] introduce EmbedSeg, a method for precise instance segmentation in 2D and 3D biomedical images, offering top-tier performance, open-source availability, and user-friendly tools for broad accessibility. Nitta et al. [35] introduce a groundbreaking technology, intelligent image-activated cell sorting, enabling real-time, automated sorting of cells by combining high-throughput microscopy, data processing, and decision-making for various biological studies.
Fujita et al. [36] present an innovative approach for simultaneous cell detection and segmentation using Mask R-CNN, enhancing detection performance by incorporating focal loss and achieving promising results on benchmark datasets, particularly DSB2018. Whipp et al. [37] present a study in which deep learning models for automated microbial colony counting are developed and evaluated, utilizing the You Only Look Once (YOLO) framework. López Flórez et al. [38] introduced a YOLOv5-based model for automating cell recognition and counting, and compared it to the current segmentation-based U-Net and OpenCV model, achieving high accuracy, precision, recall, and F1 scores. Huang et al. [39] introduce a novel approach that combines contrast enhancement and the YOLOv5 framework for automated yeast cell detection, achieving exceptional accuracy and performance compared to conventional methods, with additional precision through OpenCV for cell contour delineation.
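As a classical point of comparison for the learned detectors and counters above, cells or colonies in a binarized image can also be counted with connected-component labeling; the pure-Python sketch below is a baseline illustration, not the YOLO pipelines from [37][38][39]:

```python
import numpy as np
from collections import deque

def count_cells(binary):
    """Count 4-connected foreground blobs in a binary (0/1) image."""
    visited = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                count += 1                      # new blob found
                q = deque([(i, j)])
                visited[i, j] = True
                while q:                        # BFS flood fill over the blob
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            q.append((ny, nx))
    return count

img = np.array([[1, 1, 0, 0],
                [0, 0, 0, 1],
                [1, 0, 0, 1]])
print(count_cells(img))  # -> 3
```

Such classical baselines fail when cells touch or when illumination is uneven, which is precisely where the contrast-enhancement preprocessing of Section 4 and the learned detectors of this section provide their advantage.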

References

  1. Safari, A.; Akdoğan, E.K. Piezoelectric and Acoustic Materials for Transducer Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008.
  2. Yin, P.; Li, R.; Wang, Z.; Lin, S.; Tian, T.; Zhang, L.; Wu, L.; Zhao, J.; Duan, C.; Huang, P.; et al. Manipulation of a Micro-Object Using Topological Hydrodynamic Tweezers. Phys. Rev. Appl. 2019, 12, 044017.
  3. Kumar, P. Development and Analysis of a Path Planner for Dexterous In-Hand Manipulation of Micro-Objects in 3D. Ph.D. Thesis, Université Bourgogne Franche-Comté, Dahmouche, Besançon, France, 2021.
  4. Zhang, D.; Ren, Y.; Barbot, A.; Seichepine, F.; Lo, B.; Ma, Z.C.; Yang, G.Z. Fabrication and optical manipulation of micro-robots for biomedical applications. Matter 2022, 5, 3135–3160.
  5. Riegel, L.; Hao, G.; Renaud, P. Vision-based micro-manipulations in simulation. Microsyst. Technol. 2021, 27, 3183–3191.
  6. Chen, X.; Shi, Q.; Shimoda, S.; Sun, T.; Wang, H.; Huang, Q.; Fukuda, T. Micro Robotic Manipulation System for the Force Stimulation of Muscle Fiber-like Cell Structure. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 7249–7254.
  7. Qin, F.; Xu, D.; Zhang, D.; Pei, W.; Han, X.; Yu, S. Automated Hooking of Biomedical Microelectrode Guided by Intelligent Microscopic Vision. IEEE/ASME Trans. Mechatron. 2023, 28, 2786–2798.
  8. Zhang, Y.; Guo, X.; Wang, Q.; Wang, F.; Liu, C.; Zhou, M.; Ying, Y. Automated Dissection of Intact Single Cell From Tissue Using Robotic Micromanipulation System. IEEE Robot. Autom. Lett. 2023, 8, 4705–4712.
  9. Zhou, A.; Zhang, Y. Intelligent 3D-Printed Magnetic Micro Soft Robotic Hand with Visual Feedback for Droplet Manipulation. In Proceedings of the 2023 WRC Symposium on Advanced Robotics and Automation (WRC SARA), Beijing, China, 19 August 2023; pp. 200–205.
  10. Nguyen, X.H.; Mau, T.H.; Meyer, I.; Dang, B.L.; Pham, H.P. Improvements of Piezo-Actuated Stick–Slip Micro-Drives: Modeling and Driving Waveform. Coatings 2018, 8, 62.
  11. Najar, F.; Choura, S.; El-Borgi, S.; Abdel-Rahman, E.M.; Nayfeh, A.H. Modeling and design of variable-geometry electrostatic microactuators. J. Micromech. Microeng. 2005, 15, 419–429.
  12. Bejar, E.; Morán, A. Deep reinforcement learning based neuro-control for a two-dimensional magnetic positioning system. In Proceedings of the 2018 4th International Conference on Control, Automation and Robotics (ICCAR), Auckland, New Zealand, 20–23 April 2018; pp. 268–273.
  13. Zhou, M.; Yu, Y.; Zhang, J.; Gao, W. Iterative Learning and Fractional Order PID Hybrid Control for a Piezoelectric Micro-Positioning Platform. IEEE Access 2020, 8, 144654–144664.
  14. Jie, X.; Kailin, Q.; Yuanhao, X.; Weixi, J. Method Combining Machine Vision and Machine Learning for Reed Positioning in Automatic Aerophone Manufacturing. In Proceedings of the 2019 4th International Conference on Robotics and Automation Engineering (ICRAE), Singapore, 22–24 November 2019; pp. 140–147.
  15. Leroux, M.; Raison, M.; Adadja, T.; Achiche, S. Combination of eyetracking and computer vision for robotics control. In Proceedings of the 2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA), Woburn, MA, USA, 11–12 May 2015; pp. 1–6.
  16. Kharin, A.Y. Deep learning for scanning electron microscopy: Synthetic data for the nanoparticles detection. Ultramicroscopy 2020, 219, 113125.
  17. Nahrawi, N.; Mustafa, W.A.; Kanafiah, S.N.A.M.; Jamlos, M.A.; Khairunizam, W. Contrast enhancement approaches on medical microscopic images: A review. In Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019: NUSYS’19, Gambang, Malaysia, 2–3 December 2019; Springer: Singapore, 2021; pp. 715–726.
  18. Qi, Y.; Yang, Z.; Sun, W.; Lou, M.; Lian, J.; Zhao, W.; Deng, X.; Ma, Y. A comprehensive overview of image enhancement techniques. Arch. Comput. Methods Eng. 2022, 29, 583–607.
  19. Dewey, B.E.; Zhao, C.; Reinhold, J.C.; Carass, A.; Fitzgerald, K.C.; Sotirchos, E.S.; Saidha, S.; Oh, J.; Pham, D.L.; Calabresi, P.A.; et al. DeepHarmony: A deep learning approach to contrast harmonization across scanner changes. Magn. Reson. Imaging 2019, 64, 160–170.
  20. Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond brightening low-light images. Int. J. Comput. Vis. 2021, 129, 1013–1037.
  21. Shirazi, S.H.; Umar, A.I.; Haq, N.U.; Naz, S.; Razzak, M.I. Accurate microscopic red blood cell image enhancement and segmentation. In Proceedings of the Bioinformatics and Biomedical Engineering: Third International Conference, IWBBIO 2015, Granada, Spain, 15–17 April 2015; Proceedings, Part I 3. Springer: Cham, Switzerland, 2015; pp. 183–192.
  22. Plissiti, M.E.; Nikou, C.; Charchanti, A. Accurate localization of cell nuclei in Pap smear images using gradient vector flow deformable models. In Proceedings of the International Conference on Bio-Inspired Systems and Signal Processing, Valencia, Spain, 20–23 January 2010; SciTePress: Setúbal, Portugal, 2010; Volume 2, pp. 284–289.
  23. Rejintal, A.; Aswini, N. Image processing based leukemia cancer cell detection. In Proceedings of the 2016 IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 20–21 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 471–474.
  24. Tyagi, M.; Saini, L.M.; Dahyia, N. Detection of poikilocyte cells in iron deficiency anaemia using artificial neural network. In Proceedings of the 2016 International Conference on Computation of Power, Energy Information and Communication (ICCPEIC), Melmaruvathur, India, 20–21 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 108–112.
  25. Somasekar, J.; Reddy, B.E. Contrast-enhanced microscopic imaging of malaria parasites. In Proceedings of the 2014 IEEE International Conference on Computational Intelligence and Computing Research, Coimbatore, India, 18–20 December 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–4.
  26. Sparavigna, A.C. Gimp Retinex for enhancing images from microscopes. Int. J. Sci. 2015, 4, 72–79.
  27. Bhateja, V.; Yadav, A.; Singh, D.; Chauhan, B.K. Multi-scale Retinex with Chromacity Preservation (MSRCP)-Based Contrast Enhancement of Microscopy Images. In Proceedings of the Smart Intelligent Computing and Applications, Volume 2: Proceedings of Fifth International Conference on Smart Computing and Informatics (SCI 2021), Hyderabad, India, 17–18 September 2021; Springer: Singapore, 2022; pp. 313–321.
  28. Lin, P.T.; Lin, B.R. Fuzzy automatic contrast enhancement based on fuzzy C-means clustering in CIELAB color space. In Proceedings of the 2016 12th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA), Auckland, New Zealand, 29–31 August 2016; pp. 1–10.
  29. Kotwal, S.; Rani, P.; Arif, T.; Manhas, J. Machine Learning and Deep Learning Based Hybrid Feature Extraction and Classification Model Using Digital Microscopic Bacterial Images. SN Comput. Sci. 2023, 4, 687.
  30. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Cham, Switzerland, 2015; pp. 234–241.
  31. Hollandi, R.; Szkalisity, A.; Toth, T.; Tasnadi, E.; Molnar, C.; Mathe, B.; Grexa, I.; Molnar, J.; Balind, A.; Gorbe, M.; et al. nucleAIzer: A parameter-free deep learning framework for nucleus segmentation using image style transfer. Cell Syst. 2020, 10, 453–458.
  32. Greenwald, N.F.; Miller, G.; Moen, E.; Kong, A.; Kagel, A.; Dougherty, T.; Fullaway, C.C.; McIntosh, B.J.; Leow, K.X.; Schwartz, M.S.; et al. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nat. Biotechnol. 2022, 40, 555–565.
  33. Haberl, M.G.; Churas, C.; Tindall, L.; Boassa, D.; Phan, S.; Bushong, E.A.; Madany, M.; Akay, R.; Deerinck, T.J.; Peltier, S.T.; et al. CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation. Nat. Methods 2018, 15, 677–680.
  34. Lalit, M.; Tomancak, P.; Jug, F. Embedseg: Embedding-based instance segmentation for biomedical microscopy data. Med. Image Anal. 2022, 81, 102523.
  35. Nitta, N.; Sugimura, T.; Isozaki, A.; Mikami, H.; Hiraki, K.; Sakuma, S.; Iino, T.; Arai, F.; Endo, T.; Fujiwaki, Y.; et al. Intelligent image-activated cell sorting. Cell 2018, 175, 266–276.
  36. Fujita, S.; Han, X.H. Cell Detection and Segmentation in Microscopy Images with Improved Mask R-CNN. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020; pp. 58–70.
  37. Whipp, J.; Dong, A. YOLO-based Deep Learning to Automated Bacterial Colony Counting. In Proceedings of the 2022 IEEE Eighth International Conference on Multimedia Big Data (BigMM), Naples, Italy, 5–7 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 120–124.
  38. López Flórez, S.; González-Briones, A.; Hernández, G.; Ramos, C.; de la Prieta, F. Automatic Cell Counting with YOLOv5: A Fluorescence Microscopy Approach. Int. J. Interact. Multimed. Artif. Intell. 2023, 8.
  39. Huang, Z.J.; Patel, B.; Lu, W.H.; Yang, T.Y.; Tung, W.C.; Bučinskas, V.; Greitans, M.; Wu, Y.W.; Lin, P.T. Yeast cell detection using fuzzy automatic contrast enhancement (FACE) and you only look once (YOLO). Sci. Rep. 2023, 13, 16222.