Talk Title: Hybrid Machine Learning for Adaptive Robotic Welding Control
Abstract:
The first commercial use of robotics in manufacturing was for welding operations. Since then, industrial
robots have become increasingly diverse and widespread in automating manufacturing processes. Even though
robotic design and control have progressed rapidly in the past half-century, many welding operations are still performed manually. The skills that trained professionals bring to complex welding scenarios have yet to be replicated by current industrial robots. Existing robotic control cannot adaptively adjust a welding operation in response to a dynamic welding environment, as a skilled human welder can. To enable sophisticated and adaptive robotic control, three elements are needed: perception, prediction, and reaction. Perception can be readily realized through in-situ high-speed cameras, but real-time welding quality prediction (e.g., penetration and back-side bead width) and process control (e.g., adjustment of welding speed and current) to stabilize and maximize welding quality are more difficult. Accurate prediction and real-time reaction rely on effective and efficient processing of the perception data and characterization of this highly dynamic system. Emerging machine learning and deep learning techniques have the potential to realize adaptive robotic control that mirrors human capabilities.
This presentation reports a preliminary study on developing a hybrid Machine Learning (ML) framework for real-time welding quality prediction and adaptive welding speed adjustment in gas tungsten arc welding (GTAW) at a constant current. The hybrid ML framework includes three elements: a Convolutional Neural Network (CNN) for welding quality prediction, a Multilayer Perceptron (MLP) for process modeling, and a Gradient Descent (GD)-based controller. With the CNN, in-situ imaging of the top-side weld pool is analyzed to predict the back-side bead width during active welding control. With the MLP, the effect of welding speed on bead width is quantitatively modeled. Through the trained MLP, a computationally efficient GD algorithm has been developed to adjust the travel speed to achieve the target bead width with full material penetration. Owing to the nature of gradient descent, the controller makes larger speed adjustments when the predicted quality is far from the target and fine-tunes the speed as it approaches the goal. Experimental studies have shown promising results on real-time bead width prediction and adaptive speed adjustment to achieve the target bead width.
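To make the GD-based controller concrete, the following is a minimal sketch of the speed-adjustment step described above. It is not the speaker's implementation: the surrogate function predicted_bead_width, the target width, the learning rate, and all numeric values are illustrative assumptions, and in the actual framework the gradient would be taken through the trained MLP process model rather than this toy relation.

    import math

    # Hypothetical stand-in for the trained MLP process model: maps travel
    # speed (mm/s) to predicted back-side bead width (mm). A simple monotone
    # relation is assumed here (faster travel -> less heat -> narrower bead).
    def predicted_bead_width(speed):
        return 8.0 * math.exp(-0.15 * speed)

    def loss(speed, target_width):
        # Squared error between model-predicted bead width and the target.
        return (predicted_bead_width(speed) - target_width) ** 2

    def adjust_speed(speed, target_width, lr=0.5, eps=1e-4, steps=50):
        # Gradient descent on the loss with respect to travel speed. The step
        # is proportional to the gradient, so corrections are large when the
        # predicted quality is far from the target and shrink to fine-tuning
        # near it, matching the behavior described in the abstract.
        for _ in range(steps):
            grad = (loss(speed + eps, target_width)
                    - loss(speed - eps, target_width)) / (2 * eps)
            speed -= lr * grad
        return speed, predicted_bead_width(speed)

    v, w = adjust_speed(speed=4.0, target_width=5.0)
    print(f"adjusted speed: {v:.2f} mm/s, predicted bead width: {w:.2f} mm")

A central finite difference is used here only to keep the sketch self-contained; with an actual MLP, the gradient of the loss with respect to speed could be obtained directly by backpropagation through the trained network.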
Bio:
Dr. Peng (Edward) Wang joined the Department of Electrical and Computer Engineering at the University
of Kentucky in August 2019. He received his Ph.D. in Mechanical and Aerospace Engineering from Case Western Reserve University in 2017. His research interests are in the areas of stochastic modeling and machine learning for machine condition monitoring and performance prediction, manufacturing process modeling and optimization, and human-robot collaboration. Dr. Wang has published over 30 peer-reviewed papers in journals such as CIRP Annals-Manufacturing Technology, IEEE Transactions on Automation Science and Engineering, and the SME Journal of Manufacturing Systems, and has over 3,000 citations according to Google Scholar. He is the recipient of the Outstanding Young Manufacturing Engineer Award from the Society of Manufacturing Engineers in 2022, the Best Student Paper Award from the IEEE Conference on Automation Science and Engineering (CASE) in 2015, the Outstanding Technical Paper Award from the SME North American Manufacturing Research Conference in 2017, 2020, and 2021, and the Best Paper Award from the CIRP Conference on Manufacturing Systems in 2020. He also received First Prize in the Digital Manufacturing Commons (DMC) Hackathon, organized by DMDII in 2016.