Brain-Inspired Robot Control

Multifunctional control in real time is a critical target in intelligent robotics. Combined with behavioral flexibility, such control enables real-time robot navigation and adaptation to complex, often changing environments. Multifunctionality is observed across a wide range of living species and behaviors; even seemingly simple organisms such as invertebrates demonstrate multifunctional control. Living systems rely on the ability to shift from one behavior to another, and to vary a specific behavior for successful action under changing environmental conditions. Truly multifunctional control remains a major challenge in robotics. A plausible approach is to develop a methodology that maps multifunctional biological system properties onto simulations, enabling rapid prototyping and real-time simulation of candidate control architectures.

Keywords: synaptic learning; brain functional complexity; neural networks; self-organization; robots

1. Repetitive or Rhythmic Behavior

Hybrid model frameworks combining synaptic plasticity-dependent neural firing with simple biomechanics at speeds faster than real time illustrate how invertebrate learning directly inspires “intelligent” robotics [1][2]. Such frameworks exploit a multifunctional model of Aplysia feeding rhythms, which is capable of repeatedly reproducing three types of behavior: biting, swallowing, and rejecting. These simulate behavioral switching in response to external sensory cues. Model approaches incorporate synaptic learning and neural connectivity in a simple mechanical model of the feeding apparatus [3], with testable hypotheses in the context of robot movement control.
Learning-induced synaptic plasticity in such modular circuitry controls behavioral switching (Figure 1), as recently simulated in biologically inspired model approaches directly exploitable for multifunctional robot control. For the model equations, the reader is referred to [3][4][5][6][7]. This modeling framework can be extended to a variety of scenarios for multifunctional robot movement and rhythm control, and it has several advantages. It allows rapid simulation of multifunctional behavior, and it includes the known functional circuitry and simplified biomechanics of the peripheral anatomy. The direct relationship with the underlying neural circuitry makes it possible both to generate and to test specific neurobiological hypotheses. The relative simplicity of the network (Figure 1) makes it attractive as a basis for robot control. Unlike other artificial neural network architectures, synthetic nervous systems are explainable in terms of structures that directly inform the functional system output [8][9][10][11][12]. Although the connections and trained weights of other artificial neural networks may provide similar control capabilities, such networks, unlike synthetic nervous systems, have to be trained on large datasets. The strength of synthetic nervous systems is that they use a restricted, functionally identified set of biological neuron dynamics, thereby generating robust control without the need for additional training [12]. Learning-inspired neural network robotics includes reactive systems emulating reflexes, neural oscillators that generate movement patterns, and neural networks that filter high-dimensional sensory information [13]. To this end, biologically motivated neural-network-based robot controllers have been proposed, inspired by control structures in the sensory brain, in which information is routed through the network via facilitating dynamic synapses with short-term plasticity [10][11][12][13][14][15]. Learning occurs through long-term synaptic plasticity using temporal difference learning rules, enabling the robot to learn to associate a given movement with the appropriate input conditions. Self-organizing network dynamics [14][16] provide memory representations of the environments that the robot encounters.
Figure 1. Model circuitry, adapted from [3][6][7], for multifunctional robotic movement command/control based on the functional neuroanatomy and synaptic plasticity of Aplysia motor and interneurons.
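The following is a minimal, hypothetical sketch of the switching principle described above: three rate-coded behavior pools (biting, swallowing, rejection) compete through mutual inhibition, and a Hebbian update strengthens the synapses from the currently presented sensory cue to the winning pool. This is not the published Aplysia model; pool names, parameter values, and the learning rule are illustrative assumptions (see [3][4][5][6][7] for the actual equations).

```python
# Hypothetical sketch of cue-driven behavioral switching in a small rate-based
# circuit, loosely in the spirit of the Aplysia-inspired framework discussed
# above (NOT the published model of [3]).
import numpy as np

rng = np.random.default_rng(0)

BEHAVIORS = ["bite", "swallow", "reject"]
N_CUES = 3                      # one external sensory cue per behavior (assumed)

W = rng.uniform(0.1, 0.3, size=(len(BEHAVIORS), N_CUES))   # cue -> behavior weights
inhibition = 1.5                # mutual inhibition strength for winner-take-all
rates = np.zeros(len(BEHAVIORS))
tau, dt, eta = 0.1, 0.01, 0.05  # time constant, step, Hebbian learning rate

def step(cue):
    """One Euler step of the rate dynamics plus a simple Hebbian weight update."""
    global rates, W
    drive = W @ cue - inhibition * (rates.sum() - rates)      # excitation minus lateral inhibition
    rates += dt / tau * (-rates + np.clip(drive, 0.0, None))  # leaky rectified-linear dynamics
    W = np.clip(W + eta * np.outer(rates, cue), 0.0, 1.0)     # co-active cue/behavior strengthens

# Present the "swallow" cue repeatedly; the swallow pool wins the competition,
# and its cue-to-behavior synapse strengthens, so the switch becomes faster.
cue = np.zeros(N_CUES); cue[1] = 1.0
for _ in range(500):
    step(cue)
print("firing rates:", dict(zip(BEHAVIORS, rates.round(3))))
```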

2. Sensorimotor Integration

Recent progress in neuromorphic sensory systems, which mimic biological receptor functions and sensory processing [16][17][18][19], trends toward sensors and sensory systems that communicate through asynchronous digital signals analogous to neural spikes [14]. This improves sensor performance and suggests novel sensory processing principles that exploit spike timing [15], leading to experiments in robotics and human–robot interaction that can change how researchers think the brain processes sensory information. Sensory memory is formed at the earliest stages of neural processing (Figure 2) and underlies perception and interaction of an agent with the environment. It is based on the same plasticity principles as all true learning, is consolidated while perceiving and interacting with the environment, and is therefore a primary source of intelligence in all living species. Transferring such biological concepts into electronic implementations [16][17][18][19] aims at achieving new forms of perceptual intelligence (Figure 2), with the potential to profoundly advance a broad spectrum of applications such as prosthetics, cyborg systems, robotics, artificial intelligence, and control systems.
Figure 2. Biomimetic sensory systems: from biological synapses to artificial neural networks for novel forms of “perceptual intelligence”.
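As an illustration of the spike-based communication principle mentioned above, the sketch below converts an analog tactile reading into asynchronous spike events with a simple leaky integrate-and-fire encoder. All parameter values and the input signal are toy assumptions, not taken from the cited neuromorphic systems [14][15].

```python
# Hedged sketch: encoding an analog sensor signal as asynchronous spike events
# with a leaky integrate-and-fire encoder (simplified neuromorphic principle).
import numpy as np

def lif_encode(signal, dt=1e-3, tau=0.02, threshold=1.0, gain=150.0):
    """Return spike times (s) produced by a leaky integrate-and-fire encoder."""
    v, spikes = 0.0, []
    for i, x in enumerate(signal):
        v += dt * (-v / tau + gain * x)   # leaky integration of the input
        if v >= threshold:                # threshold crossing -> emit a spike event
            spikes.append(i * dt)
            v = 0.0                       # reset after the spike
    return spikes

t = np.arange(0, 0.5, 1e-3)
pressure = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # toy tactile signal
print(f"{len(lif_encode(pressure))} spike events; stronger input -> denser spike timing")
```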
These new, bioinspired systems offer unprecedented opportunities for hardware architecture solutions coupled with artificial intelligence, with the potential to extend the capabilities of current digital systems to psychological attributes such as sensations and emotions. Challenges in this field concern integration levels, energy efficiency, and functionality, which will determine the translational potential of such implementations. Neuronal activity and the development of functionally specific neural networks in the brain are continuously modulated by sensory signals. The somatosensory cortical network [20] in the primate brain refers to a neocortical area that responds primarily to tactile stimulation of skin or hair. In the current state of the art [20][21][22][23], this cortical area is conceptualized as containing a single map of the receptor periphery, connected to a cortical neural network with modular functional architecture and with connectivity that binds functionally distinct neuronal subpopulations from other cortical areas into motor circuit modules at several hierarchical levels. These functional modules display a hierarchy of interleaved circuits connecting, via interneurons in the spinal cord, to visual and auditory sensory areas and to the motor cortex, with feedback loops and bilateral communication with the supraspinal centers [22][23][24]. This enables a ’from-local-to-global’ functional organization [21], a ground condition for self-organization [25][26], with plastic connectivity patterns that correlate with specific behavioral variations such as variations in motor output or grip force. Such plasticity fulfills an important adaptive role and ensures that humans are able to reliably grasp and manipulate objects in the physical world under constantly changing sensory conditions.
Neuroscience-inspired biosensor technology for robot-assisted minimally invasive surgical training [27][28][29][30][31][32] is a currently relevant field of application, with direct clinical, ergonomic, and functional implications and clearly identified advantages over traditional surgical procedures [33][34]. Individual grip force profiling using wireless wearable sensor systems (gloves or glove-like assemblies) for monitoring task skill parameters [27][28][29][30] and their evolution in real time on robotic surgery platforms [30][31][32][35][36][37] permits studying the learning curves [29][30][31] of experienced robotic surgeons, surgeons with experience as robotic platform tableside assistants, surgeons with laparoscopic experience, surgeons without laparoscopic experience, and complete novices. Grip force monitoring in robotic surgery [35][36][37] is a highly useful means of tracking the evolution of the surgeon’s individual force profile during task execution. Multimodal feedback systems may represent a slight advantage over traditional feedback solutions, which are not very effective; however, monitoring the individual grip forces of a surgeon or trainee during robotic task execution through wearable multisensory systems is by far the superior solution. Real-time grip force profiling by such wearable systems can directly help prevent incidents [35][36], because a signal (sound or light) can be sent to the surgeon before the grip force exceeds a critical limit and damage occurs.
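A minimal sketch of the warning principle just described: a wearable grip-force channel is monitored sample by sample, and a signal is raised before the force exceeds a critical limit. The threshold, warning margin, and trend heuristic are illustrative assumptions rather than parameters of the cited systems [35][36].

```python
# Hedged sketch of real-time grip-force monitoring with an early warning,
# as described above. All values are illustrative assumptions.
from collections import deque

CRITICAL_N = 8.0      # assumed critical grip force in newtons
WARN_MARGIN = 0.9     # warn at 90% of the critical limit

def monitor(force_stream, window=5):
    """Yield ('ok' | 'warn' | 'alarm', force) for each incoming sample."""
    recent = deque(maxlen=window)
    for f in force_stream:
        recent.append(f)
        trend = recent[-1] - recent[0]          # rough rise over the window
        if f >= CRITICAL_N:
            yield "alarm", f                    # limit already exceeded
        elif f >= WARN_MARGIN * CRITICAL_N or (trend > 0 and f + trend >= CRITICAL_N):
            yield "warn", f                     # rising toward the limit: signal (sound/light)
        else:
            yield "ok", f

samples = [2.0, 3.5, 5.0, 6.5, 7.4, 7.9, 8.3]   # toy force trace
for state, f in monitor(samples):
    print(f"{f:4.1f} N -> {state}")
```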
Proficiency, or expertise, in the control of a robotic system for minimally invasive surgery is reflected by lower grip forces during task execution, as well as by shorter task execution times [35][36][37]. Grip forces self-organize progressively in a way that is similar to the self-organization of neural oscillations during task learning, and, in surgical human–robot interaction, a self-organizing neural network model was found to reliably account for grip force expertise [38].
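The sketch below illustrates, under toy assumptions, how a small self-organizing map trained on grip-force feature vectors can separate expert from novice profiles via its quantization error, in the spirit of the self-organizing network account referenced above. The map size, features, and data are invented for illustration and do not reproduce the cited studies.

```python
# Toy sketch: a 1-D Kohonen self-organizing map (SOM) trained on grip-force
# feature vectors; higher quantization error flags unfamiliar (novice) profiles.
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_units=16, epochs=50, lr0=0.5, sigma0=2.0):
    """Train a 1-D SOM; returns the codebook (n_units x n_features)."""
    codebook = data[rng.choice(len(data), n_units)]
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = max(sigma0 * (1 - e / epochs), 0.5)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(codebook - x, axis=1))   # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-dist**2 / (2 * sigma**2))                   # neighborhood function
            codebook += lr * h[:, None] * (x - codebook)
    return codebook

def quantization_error(codebook, data):
    return np.mean([np.min(np.linalg.norm(codebook - x, axis=1)) for x in data])

# Toy grip-force "profiles": experts assumed lower and less variable than novices.
expert = rng.normal(3.0, 0.3, size=(200, 4))
novice = rng.normal(6.0, 1.2, size=(200, 4))
som = train_som(expert)
print("QE expert:", round(quantization_error(som, expert), 3),
      "QE novice:", round(quantization_error(som, novice), 3))
```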

3. Movement Planning

To move neural processing models for robotics beyond reactive behavior, the capacity to selectively filter relevant sensory input and to autonomously generate sequences of processing steps is critical, as in cases where a robot has to search for specific visual objects in the environment and then reach for these objects in a specific, instructed serial order [39][40]. In robotic tasks requiring the simultaneous control of object dynamics and of the internal forces exerted by the robot limb(s) to follow a trajectory with the object attached, plasticity and adaptation make it possible to deal with external perturbations acting on the robot–object system. On the basis of mere feedback through the internal dynamics of an object, a robot is, like a human, able to relate to specific objects with a very specific sensorimotor pattern. When the object-specific dynamical patterns are combined with hand coordinates obtained from a camera, dedicated hand–eye coordination self-organizes spontaneously [41][42][43] without any higher-order cognitive control. Robots are currently not capable of any form of genuine cognition. Cognition controls behavior in living brains, where sensing and acting are no longer linked directly to ensure control, as is the case in any current robot, including humanoids. When an action is based on sensory information that is no longer directly available in the processing loop at the time the action is to ensue, the relevant information must be represented in a memory structure, as it is in any living brain. Information for the control of action then becomes abstracted from sensor data through neural memory representations and mechanisms of memory-based decision making [39]. Plastic mechanisms in neural-network-based control architectures (Figure 3) effectively contribute to learning the dynamics of robot–object systems, enabling adaptive corrections and/or offset detection.
Figure 3. Processing model inspired by biological learning [39] for robotic control beyond reactive behavior, with a capacity to selectively filter relevant sensory input and to autonomously generate sequences of processing steps. The illustration shows the dynamic neural network architecture for the control of perceptual learning (L), memory storage of objects/actions, their serial order, and recall (R) through node structures with plasticity-enabled connections. Selectively gated feedback is enabled through computational nodes (outlined in yellow) for the updating of sensory representations (such as offset detection) as a function of changes in input from the environment.
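A minimal, assumed sketch of the serial-order and gating idea summarized in Figure 3: memorized items are recalled one at a time by an ordinal index, and a gate re-opens sensory updating when the currently targeted object disappears from the scene. It abstracts away the dynamic neural field equations of [39]; the class, method, and object names are hypothetical.

```python
# Hypothetical sketch of serial-order recall with gated sensory updating,
# loosely inspired by the architecture in Figure 3 (not the equations of [39]).
class SerialOrderController:
    def __init__(self, instructed_sequence):
        self.memory = list(instructed_sequence)   # learned (L) and stored object/action list
        self.index = 0                            # active ordinal node
        self.gate_open = False                    # selective feedback gate

    def current(self):
        return self.memory[self.index] if self.index < len(self.memory) else None

    def perceive(self, visible_objects):
        """Open the gate (stop and update) only when the current target is missing."""
        self.gate_open = self.current() is not None and self.current() not in visible_objects
        return self.gate_open

    def recall_next(self):
        """Recall (R): advance the ordinal node after the current action completes."""
        self.index += 1
        return self.current()

ctrl = SerialOrderController(["red cup", "green block", "blue ball"])
print(ctrl.current())                      # first instructed target
print(ctrl.perceive(["green block"]))      # target missing -> gate opens, sequence is updated
print(ctrl.recall_next())                  # next instructed target
```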
This architecture (Figure 3) allows for progressive error reduction by incorporating distributed synaptic plasticity according to feedback from actual movements in the given environment. It has been shown previously that such feedback processes are omnipresent in voluntary motor actions of human agents [43], where rapid corrective responses occur even for very small disturbances that approach the natural variability of limb movements. Robot control toward autonomy [44] ultimately implies that the robot generalizes across time and space, is capable of stopping when an element is missing, and updates a planned action sequence autonomously in real time when a scenario suddenly changes. Using biologically plausible neural learning, the flow of behavior generated can give rise to new neural system dynamics through self-organization, without any further control or supervision algorithm(s). In robotic control based on biologically inspired neural network learning, the universal training method is Hebbian synaptic learning. Several variants of the latter are discussed and compared, with detailed equations, in [39]. The neural network dynamics described therein can, in principle, be combined with other network structures that receive reward information as part of their input, in an extended model approach based on known functional dynamics of the mammalian brain. As discussed in previous sections, the orbitofrontal cortex receives converging sensory and reward inputs, which makes the network as a whole capable of acquiring task structure and of supporting reinforcement learning by encoding combinations of sensory and reward events [45][46].
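For concreteness, a hedged sketch of the two learning-rule families mentioned here: a plain Hebbian update and a reward-modulated (three-factor) variant in which the Hebbian term is gated by a reward signal. The rates, reward values, and baseline are toy assumptions; see [39] for the variants actually compared.

```python
# Toy comparison of a plain Hebbian update and a reward-modulated variant.
import numpy as np

rng = np.random.default_rng(2)
n_pre, n_post = 4, 2
W = rng.uniform(0, 0.1, size=(n_post, n_pre))
eta = 0.1

def hebbian_update(W, pre, post):
    """Plain Hebb: strengthen synapses whose pre- and postsynaptic units co-fire."""
    return W + eta * np.outer(post, pre)

def reward_modulated_update(W, pre, post, reward, baseline=0.5):
    """Three-factor rule: the Hebbian term is gated by a reward signal minus a baseline."""
    return W + eta * (reward - baseline) * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0, 0.0])        # presynaptic activity pattern
post = np.array([1.0, 0.0])                 # postsynaptic response
print(np.round(hebbian_update(W, pre, post) - W, 3))
print(np.round(reward_modulated_update(W, pre, post, reward=1.0) - W, 3))   # rewarded trial
```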
Networks that combine sensory and reward inputs in this way possess self-organizing state-encoding dynamics of the type described above, based on the principle of multi-stage decision tasks, in which a human agent or robot has to choose between options and their consequences. Further knowledge from, and interaction between, the fields of cognitive neuroscience and robotics is needed to explore these possibilities.

References

  1. Ren, L.; Li, B.; Wei, G.; Wang, K.; Song, Z.; Wei, Y.; Ren, L.; Liu, Q. Biology and bioinspiration of soft robotics: Actuation, sensing, and system integration. iScience 2021, 24, 103075.
  2. Ni, J.; Wu, L.; Fan, X.; Yang, S.X. Bioinspired Intelligent Algorithm and Its Applications for Mobile Robot Control: A Survey. Comput. Intell. Neurosci. 2016, 2016, 3810903.
  3. Webster-Wood, V.A.; Gill, J.P.; Thomas, P.J.; Chiel, H.J. Control for multifunctionality: Bioinspired control based on feeding in Aplysia californica. Biol. Cybern. 2020, 114, 557–588.
  4. Costa, R.M.; Baxter, D.A.; Byrne, J.H. Computational model of the distributed representation of operant reward memory: Combinatoric engagement of intrinsic and synaptic plasticity mechanisms. Learn. Mem. 2020, 27, 236–249.
  5. Jing, J.; Cropper, E.C.; Hurwitz, I.; Weiss, K.R. The Construction of Movement with Behavior-Specific and Behavior-Independent Modules. J. Neurosci. 2004, 24, 6315–6325.
  6. Jing, J.; Weiss, K.R. Neural Mechanisms of Motor Program Switching in Aplysia. J. Neurosci. 2001, 21, 7349–7362.
  7. Jing, J.; Weiss, K.R. Interneuronal Basis of the Generation of Related but Distinct Motor Programs in Aplysia: Implications for Current Neuronal Models of Vertebrate Intralimb Coordination. J. Neurosci. 2002, 22, 6228–6238.
  8. Hunt, A.; Schmidt, M.; Fischer, M.; Quinn, R. A biologically based neural system coordinates the joints and legs of a tetrapod. Bioinspiration Biomim. 2015, 10, 55004.
  9. Hunt, A.J.; Szczecinski, N.S.; Quinn, R.D. Development and Training of a Neural Controller for Hind Leg Walking in a Dog Robot. Front. Neurorobotics 2017, 11, 18.
  10. Szczecinski, N.S.; Chrzanowski, D.M.; Cofer, D.W.; Terrasi, A.S.; Moore, D.R.; Martin, J.P.; Ritzmann, R.E.; Quinn, R.D. Introducing MantisBot: Hexapod robot controlled by a high-fidelity, real-time neural simulation. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 3875–3881.
  11. Szczecinski, N.S.; Hunt, A.J.; Quinn, R.D. A Functional Subnetwork Approach to Designing Synthetic Nervous Systems That Control Legged Robot Locomotion. Front. Neurorobotics 2017, 11, 37.
  12. Szczecinski, N.S.; Quinn, R.D. Leg-local neural mechanisms for searching and learning enhance robotic locomotion. Biol. Cybern. 2017, 112, 99–112.
  13. Capolei, M.C.; Angelidis, E.; Falotico, E.; Lund, H.H.; Tolu, S. A Biomimetic Control Method Increases the Adaptability of a Humanoid Robot Acting in a Dynamic Environment. Front. Neurorobotics 2019, 13, 70.
  14. Nichols, E.; McDaid, L.J.; Siddique, N. Biologically Inspired SNN for Robot Control. IEEE Trans. Cybern. 2013, 43, 115–128.
  15. Bing, Z.; Meschede, C.; Röhrbein, F.; Huang, K.; Knoll, A.C. A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks. Front. Neurorobotics 2018, 12, 35.
  16. Wan, C.; Chen, G.; Fu, Y.; Wang, M.; Matsuhisa, N.; Pan, S.; Pan, L.; Yang, H.; Wan, Q.; Zhu, L.; et al. An Artificial Sensory Neuron with Tactile Perceptual Learning. Adv. Mater. 2018, 30, e1801291.
  17. Wan, C.; Cai, P.; Guo, X.; Wang, M.; Matsuhisa, N.; Yang, L.; Lv, Z.; Luo, Y.; Loh, X.J.; Chen, X. An artificial sensory neuron with visual-haptic fusion. Nat. Commun. 2020, 11, 4602.
  18. Liu, S.C.; Delbruck, T. Neuromorphic sensory systems. Curr. Opin. Neurobiol. 2010, 20, 288–295.
  19. Wan, C.; Cai, P.; Wang, M.; Qian, Y.; Huang, W.; Chen, X. Artificial Sensory Memory. Adv. Mater. 2020, 32, e1902434.
  20. Wilson, S.; Moore, C. S1 somatotopic brain maps. Scholarpedia 2015, 10, 8574.
  21. Braun, C.; Heinz, U.; Schweizer, R.; Wiech, K.; Birbaumer, N.; Topka, H. Dynamic organization of the somatosensory cortex induced by motor activity. Brain 2001, 124, 2259–2267.
  22. Arber, S. Motor Circuits in Action: Specification, Connectivity, and Function. Neuron 2012, 74, 975–989.
  23. Weiss, T.; Miltner, W.H.; Huonker, R.; Friedel, R.; Schmidt, I.; Taub, E. Rapid functional plasticity of the somatosensory cortex after finger amputation. Exp. Brain Res. 2000, 134, 199–203.
  24. Tripodi, M.; Arber, S. Regulation of motor circuit assembly by spatial and temporal mechanisms. Curr. Opin. Neurobiol. 2012, 22, 615–623.
  25. Dresp-Langley, B. Seven Properties of Self-Organization in the Human Brain. Big Data Cogn. Comput. 2020, 4, 10.
  26. Kohonen, T. Self-Organizing Maps; Springer: New York, NY, USA, 2001.
  27. Dresp-Langley, B. Towards Expert-Based Speed–Precision Control in Early Simulator Training for Novice Surgeons. Information 2018, 9, 316.
  28. Batmaz, A.U.; De Mathelin, M.; Dresp-Langley, B. Getting nowhere fast: Trade-off between speed and precision in training to execute image-guided hand-tool movements. BMC Psychol. 2016, 4, 55.
  29. Batmaz, A.U.; De Mathelin, M.; Dresp-Langley, B. Seeing virtual while acting real: Visual display and strategy effects on the time and precision of eye-hand coordination. PLoS ONE 2017, 12, e0183789.
  30. Batmaz, A.U.; de Mathelin, M.; Dresp-Langley, B. Effects of 2D and 3D image views on hand movement trajectories in the surgeon’s peri-personal space in a computer controlled simulator environment. Cogent Med. 2018, 5, 1426232.
  31. Dresp-Langley, B.; Nageotte, F.; Zanne, P.; De Mathelin, M. Correlating Grip Force Signals from Multiple Sensors Highlights Prehensile Control Strategies in a Complex Task-User System. Bioengineering 2020, 7, 143.
  32. De Mathelin, M.; Nageotte, F.; Zanne, P.; Dresp-Langley, B. Sensors for expert grip force profiling: Towards benchmarking manual control of a robotic device for surgical tool movements. Sensors 2019, 19, 4575.
  33. Staderini, F.; Foppa, C.; Minuzzo, A.; Badii, B.; Qirici, E.; Trallori, G.; Mallardi, B.; Lami, G.; Macrì, G.; Bonanomi, A.; et al. Robotic rectal surgery: State of the art. World J. Gastrointest. Oncol. 2016, 8, 757–771.
  34. Diana, M.; Marescaux, J. Robotic surgery. Br. J. Surg. 2015, 102, e15–e28.
  35. Liu, R.; Nageotte, F.; Zanne, P.; De Mathelin, M.; Dresp-Langley, B. Wearable Wireless Biosensors for Spatiotemporal Grip Force Profiling in Real Time. In Proceedings of the 7th International Electronic Conference on Sensors and Applications, Zürich, Switzerland, 20 November 2020; Volume 2, p. 40.
  36. Liu, R.; Dresp-Langley, B. Making Sense of Thousands of Sensor Data. Electronics 2021, 10, 1391.
  37. Dresp-Langley, B.; Liu, R.; Wandeto, J.M. Surgical task expertise detected by a self-organizing neural network map. arXiv 2021, arXiv:2106.08995.
  38. Liu, R.; Nageotte, F.; Zanne, P.; de Mathelin, M.; Dresp-Langley, B. Deep Reinforcement Learning for the Control of Robotic Manipulation: A Focussed Mini-Review. Robotics 2021, 10, 22.
  39. Tekülve, J.; Fois, A.; Sandamirskaya, Y.; Schöner, G. Autonomous Sequence Generation for a Neural Dynamic Robot: Scene Perception, Serial Order, and Object-Oriented Movement. Front. Neurorobotics 2019, 13.
  40. Scott, S.H.; Cluff, T.; Lowrey, C.R.; Takei, T. Feedback control during voluntary motor actions. Curr. Opin. Neurobiol. 2015, 33, 85–94.
  41. Marques, H.G.; Imtiaz, F.; Iida, F.; Pfeifer, R. Self-organization of reflexive behavior from spontaneous motor activity. Biol. Cybern. 2013, 107, 25–37.
  42. Der, R.; Martius, G. Self-Organized Behavior Generation for Musculoskeletal Robots. Front. Neurorobotics 2017, 11, 8.
  43. Martius, G.; Der, R.; Ay, N. Information Driven Self-Organization of Complex Robotic Behaviors. PLoS ONE 2013, 8, e63400.
  44. Alnajjar, F.; Murase, K. Self-organization of spiking neural network that generates autonomous behavior in a real mobile robot. Int. J. Neural. Syst. 2006, 16, 229–239.
  45. Zhang, Z.; Cheng, Z.; Lin, Z.; Nie, C.; Yang, T. A neural network model for the orbitofrontal cortex and task space acquisition during reinforcement learning. PLoS Comput. Biol. 2018, 14, e1005925.
  46. Weidel, P.; Duarte, R.; Morrison, A. Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks. Front. Comput. Neurosci. 2021, 15, 543872.