Reinforcement Learning for Dual-Arm Robotics
  • Release Date: 2024-10-14
  • reinforcement learning
  • curriculum learning
  • simulation-to-reality transfer
  • dual-arm robots
  • robotic assembly
  • peg-in-hole assembly
Video Introduction

This video is adapted from the article with DOI 10.3390/machines12100682.

Robotic systems are crucial in modern manufacturing. Complex assembly tasks require the collaboration of multiple robots, and their orchestration is challenging due to tight tolerances and precision requirements. In this video, the authors set up two Franka Panda robots to perform a peg-in-hole insertion task with a 1 mm clearance. They structure the control system hierarchically: a central policy trained with reinforcement learning plans the robots' feedback-based trajectories, which are then executed by a low-level impedance controller on each robot. To improve training convergence, the authors employ reverse curriculum learning, a novel approach for such a dual-arm control task, structured iteratively into a minimum-requirements phase and a fine-tuning phase. They also incorporate domain randomization, varying the robots' initial joint configurations to improve generalization. After training, the system is evaluated in simulation, revealing how the curriculum parameters affect the resulting process time and its variance. Finally, the trained model is transferred to the real world, resulting in a slight decrease in task duration. Compared with classical path planning and control, their approach achieves a shorter process time and greater robustness to calibration errors.
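The reverse-curriculum and domain-randomization ideas described above can be summarized in a short sketch. The Python snippet below is only an illustration of the training schedule, not the authors' implementation: the environment, policy, thresholds, and stage counts (PegInHoleEnv, train_with_reverse_curriculum, success_threshold, and so on) are hypothetical placeholders, and the actual policy update would be performed by a reinforcement learning algorithm acting through the low-level impedance controllers.

```python
import numpy as np


class PegInHoleEnv:
    """Toy stand-in for the dual-arm peg-in-hole simulation (illustrative only)."""

    def __init__(self, clearance_mm=1.0):
        self.clearance_mm = clearance_mm
        self.state = None

    def reset(self, start_offset, joint_noise):
        # Domain randomization: perturb the initial joint configuration of
        # both 7-DoF arms to improve generalization (14 joints in total).
        q_init = np.random.uniform(-joint_noise, joint_noise, size=14)
        # start_offset in [0, 1]: 0 = peg already at the hole, 1 = full task.
        self.state = np.concatenate([q_init, [start_offset]])
        return self.state

    def rollout(self, policy):
        # Placeholder for executing the policy through the low-level
        # impedance controllers; success gets harder as the offset grows.
        offset = self.state[-1]
        return float(np.random.rand() > 0.3 * offset)  # 1.0 = successful insertion


def train_with_reverse_curriculum(env, policy, stages=10, episodes_per_stage=200,
                                  success_threshold=0.8, joint_noise=0.05):
    """Two-phase schedule: a minimum-requirements phase that walks the start
    state backwards from the goal, then a fine-tuning phase on the full task."""
    # Phase 1 (minimum requirements): move the start state farther from the
    # goal only once the policy is reliable at the current difficulty.
    for stage in range(stages):
        start_offset = (stage + 1) / stages
        successes = 0.0
        for _ in range(episodes_per_stage):
            env.reset(start_offset, joint_noise)
            successes += env.rollout(policy)
            # ... policy update with an off-the-shelf RL algorithm goes here
        rate = successes / episodes_per_stage
        print(f"stage {stage}: start_offset={start_offset:.2f}, success={rate:.2f}")
        if rate < success_threshold:
            break  # stay at this difficulty instead of advancing blindly

    # Phase 2 (fine-tuning): train on the full task with randomized joints.
    for _ in range(episodes_per_stage):
        env.reset(1.0, joint_noise)
        env.rollout(policy)
        # ... continue policy updates on the hardest setting


if __name__ == "__main__":
    train_with_reverse_curriculum(PegInHoleEnv(), policy=None)
```

The design choice mirrored here is that episodes initially start close to the inserted goal state and are moved backwards toward the full task only once the policy reaches a minimum success rate; a fine-tuning phase then trains on the full task with randomized initial joint configurations.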
