To improve people's everyday lives and help with domestic tasks, robots must be able to handle common objects such as utensils and cleaning tools effectively. Some objects, however, are difficult for robotic hands because of their shape, flexibility, or other physical characteristics. Among these challenging objects are textile cloths, which are routinely used to wipe surfaces, polish windows and mirrors, and mop floors. These are tasks that robots could potentially excel at, but the key lies in enabling them to manipulate cloth efficiently.
Researchers at ETH Zurich recently introduced a computational technique for reconstructing 3D representations of crumpled cloths, a step that could pave the way for more effective robotic cloth-manipulation strategies. The research, presented in a paper posted to arXiv, demonstrates the technique's applicability across cloths with different physical properties, shapes, sizes, and materials.
The research team, including Wenbo Wang, Gen Li, Miguel Zamora, and Stelian Coros, stated in their paper, “Precisely reconstructing and manipulating one crumpled cloth is challenging due to the high dimensionality of the cloth model, as well as the limited observation at self-occluded regions. We leverage recent advancements in single-view human body reconstruction to template-based reconstruct crumpled cloths from top-view depth observations only, using our proposed sim-real registration protocols.”
To reconstruct complete meshes of crumpled cloths, the team employed a model based on graph neural networks (GNNs), algorithms specialized for processing graph-structured data. To train it, they assembled a dataset of more than 120,000 synthetic top-view RGBD images rendered from cloth mesh simulations, along with over 3,000 labeled images of real-world cloths. After training on these datasets, the model could accurately predict the positions and visibility of the cloth's mesh vertices from a top-view observation.
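The paper describes this system at a higher level than code, but the basic idea of a GNN operating on a template cloth mesh can be illustrated with a short, hypothetical sketch. The snippet below (plain PyTorch, not the authors' released code) shows a message-passing network that takes per-vertex input features, for example features sampled from a top-view depth image, together with the edges of a template mesh, and outputs a 3D position and a visibility score for every vertex; all names, dimensions, and design choices here are assumptions made for illustration.

```python
# Hypothetical sketch of a GNN for cloth mesh reconstruction (not the authors' code).
# Input: per-vertex features of a template mesh; output: 3D position + visibility per vertex.
import torch
import torch.nn as nn

class ClothGNN(nn.Module):
    def __init__(self, in_dim=64, hidden=128, layers=3):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden)
        self.msg = nn.ModuleList([nn.Linear(2 * hidden, hidden) for _ in range(layers)])
        self.upd = nn.ModuleList([nn.GRUCell(hidden, hidden) for _ in range(layers)])
        self.pos_head = nn.Linear(hidden, 3)   # predicted 3D vertex position
        self.vis_head = nn.Linear(hidden, 1)   # predicted visibility logit

    def forward(self, x, edges):
        # x: (V, in_dim) per-vertex features; edges: (2, E) edges of the template mesh
        h = torch.relu(self.encode(x))
        src, dst = edges
        for msg, upd in zip(self.msg, self.upd):
            # Compute a message along every edge, sum messages per receiving vertex,
            # then update each vertex state with a GRU cell.
            m = torch.relu(msg(torch.cat([h[src], h[dst]], dim=-1)))
            agg = torch.zeros_like(h).index_add_(0, dst, m)
            h = upd(agg, h)
        return self.pos_head(h), torch.sigmoid(self.vis_head(h))

# Toy usage: 81 vertices of an assumed 9x9 template grid, random features and edges.
model = ClothGNN()
feats = torch.randn(81, 64)
edges = torch.randint(0, 81, (2, 200))
pos, vis = model(feats, edges)   # (81, 3) positions, (81, 1) visibility scores
```

A predicted mesh of this kind is the sort of representation a downstream grasp planner could consume; exactly how the TRTM system extracts image features and registers the template to real depth observations is detailed in the paper and the released code.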
To assess the model's performance, the researchers ran a series of tests, both in simulation and in real-world experiments, with the ABB YuMi, a dual-arm collaborative robot. In both settings, the reconstructed cloth meshes allowed the robot to grasp and manipulate a variety of cloths effectively, using either one arm or both.
“Experiments demonstrate that our template-based reconstruction and target-oriented manipulation (TRTM) system can be applied to daily cloths with similar topologies as our template mesh, but have different shapes, sizes, patterns, and physical properties,” the researchers noted.
The researchers have made their datasets and model code openly available on GitHub. The work could help advance robotics, particularly by improving the capabilities of mobile robots designed to assist with household chores. Such advances may enhance robots' ability to handle various types of cloths, including tablecloths and cleaning cloths, ultimately making them more versatile and useful assistants in daily life.