
Training a robot to identify and pour water
Researchers at Carnegie Mellon University were able to teach a robot to recognise water and pour it into a glass with the help of a horse, a zebra, and artificial intelligence.
Water poses a difficult problem for robots because it is transparent. Robots have learned to pour water before, but earlier methods, such as heating the water and viewing it with a thermal camera, or placing the glass in front of a checkerboard background, don't work well in real-world settings. A simpler approach could one day let robot waiters refill water glasses, robot pharmacists measure and mix medications, and robot gardeners water plants.
Gautham Narasimhan, who earned his master's degree from the Robotics Institute in 2020, worked with a team in the institute's Robots Perceiving and Doing Lab, led by David Held, to apply AI and image translation to solve the problem.
Image translation algorithms use collections of images to train artificial intelligence to convert an image from one style into another, such as turning a photograph into a Monet-style painting or making a picture of a horse look like a zebra. For this study, the team employed a technique called contrastive learning for unpaired image-to-image translation, or CUT for short.
"Just as we can train a model to translate an image of a horse to look like a zebra, we can train a model to translate an image of coloured liquid into an image of transparent liquid," Held said. "We used this approach to give the robot the ability to understand transparent liquids."
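To make the idea concrete, the sketch below shows the core of a CUT-style setup in PyTorch: a generator translates an image, and a patch-based contrastive (PatchNCE-style) loss pushes each patch of the output to match the encoder feature of the patch at the same location in the input while differing from patches elsewhere. The network sizes, single feature layer, and names here are simplifying assumptions for illustration, not the authors' implementation; the full method also trains an adversarial discriminator and draws features from several encoder layers.

```python
# A minimal, illustrative CUT-style sketch in PyTorch. Network sizes and the
# single feature layer are assumptions for brevity, not the authors' setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Tiny encoder-decoder; the real CUT generator is ResNet-based."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x, return_features=False):
        feats = self.encoder(x)            # per-location patch features
        out = self.decoder(feats)
        return (out, feats) if return_features else out

def patch_nce_loss(feat_src, feat_out, num_patches=64, tau=0.07):
    """Each output patch should match the input patch at the same location
    (positive pair) and mismatch patches at other locations (negatives)."""
    b, c, h, w = feat_src.shape
    src = feat_src.flatten(2).transpose(1, 2)   # (B, H*W, C)
    out = feat_out.flatten(2).transpose(1, 2)
    idx = torch.randperm(h * w)[:num_patches]   # sample matching locations
    src = F.normalize(src[:, idx], dim=-1)
    out = F.normalize(out[:, idx], dim=-1)
    logits = torch.bmm(out, src.transpose(1, 2)) / tau   # (B, P, P)
    labels = torch.arange(num_patches, device=logits.device).expand(b, -1)
    return F.cross_entropy(logits.flatten(0, 1), labels.flatten())

# Usage: translate a stand-in "coloured liquid" batch, apply the loss.
G = Generator()
x = torch.randn(2, 3, 64, 64)
y, f_x = G(x, return_features=True)
_, f_y = G(y, return_features=True)      # re-encode the translated output
loss = patch_nce_loss(f_x, f_y)
loss.backward()
```

In the full method, minimising this contrastive loss alongside an adversarial loss lets the generator change the liquid's appearance while preserving the structure of the scene.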
A transparent liquid like water is hard for a robot to see because the way it reflects, refracts, and absorbs light depends on the background. To teach the computer to see different backgrounds through a glass of water, the team played YouTube videos behind a transparent glass during training. Training this way lets the robot pour water against varied backgrounds in the real world.
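The team varied the backgrounds physically, by playing videos behind a real glass. A purely digital way to get a similar diversity, shown below as a hedged sketch rather than the authors' capture setup, is to composite glass images over random video frames; the file paths, foreground mask, and helper names are hypothetical.

```python
# Hypothetical background-randomisation sketch (the authors varied backgrounds
# physically by playing videos behind a real glass; this is a digital analogue).
import random
import cv2
import numpy as np

def random_frame(video_path: str) -> np.ndarray:
    """Grab one frame at a random position in a video file."""
    cap = cv2.VideoCapture(video_path)
    n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, random.randrange(max(n, 1)))
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError(f"could not read a frame from {video_path}")
    return frame

def composite(glass_bgr: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Alpha-blend the glass region (mask values 0..255) over a new background."""
    bg = cv2.resize(background, (glass_bgr.shape[1], glass_bgr.shape[0]))
    alpha = (mask.astype(np.float32) / 255.0)[..., None]
    return (alpha * glass_bgr + (1.0 - alpha) * bg).astype(np.uint8)

# Usage with hypothetical files: a glass photo, its mask, and a background video.
glass = cv2.imread("glass.png")
mask = cv2.imread("glass_mask.png", cv2.IMREAD_GRAYSCALE)
augmented = composite(glass, mask, random_frame("background_video.mp4"))
```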
"Even for humans," Narasimhan noted, "it can sometimes be difficult to precisely define the boundary between water and air."
Using their method, the robot was able to pour water into a glass until it reached a specific height. The team then repeated the experiment with glasses of different shapes and sizes.
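A closed-loop pouring controller of the kind this implies could look like the sketch below: estimate the liquid's surface row from a per-frame segmentation mask, and stop tilting once the target fill level is reached. The `camera`, `robot`, and `segment_liquid` interfaces are illustrative stand-ins, not the authors' actual system.

```python
# Hypothetical pour-to-height loop; `camera`, `robot`, and `segment_liquid`
# are illustrative stand-ins, not the actual system's interfaces.
import numpy as np

def fill_fraction(liquid_mask: np.ndarray, glass_top: int, glass_bottom: int) -> float:
    """Estimate fill level (0..1) from the highest image row containing liquid."""
    rows = np.flatnonzero(liquid_mask.any(axis=1))
    if rows.size == 0:
        return 0.0
    surface = rows.min()                       # image rows grow downward
    return (glass_bottom - surface) / (glass_bottom - glass_top)

def pour_to(target: float, camera, robot, segment_liquid, glass_top: int, glass_bottom: int):
    """Tilt until the estimated fill level reaches `target`, then level off."""
    robot.start_tilting()
    try:
        while fill_fraction(segment_liquid(camera.read()), glass_top, glass_bottom) < target:
            pass                               # a real controller would rate-limit here
    finally:
        robot.stop_tilting()                   # level the container to cut the stream
```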
Future research could build on this approach, Narasimhan said, by varying the lighting conditions, having the robot pour water from one container into another, or estimating not just the height but also the volume of the water.
The study was presented last month in Philadelphia at the IEEE International Conference on Robotics and Automation. According to Narasimhan, the effort has received favourable feedback.
"People in robotics really appreciate it when research works in the real world and not only in simulation," said Narasimhan, now a computer vision engineer at Path Robotics in Columbus, Ohio. "We wanted to do something simple yet impactful."