A new tool gives anyone the ability to train a robot – MIT News

Teaching a robot new skills used to require coding expertise. But a new generation of robots could potentially learn from just about anyone.

Engineers are designing robotic helpers that can “learn from demonstration.” This more natural training strategy enables a person to lead a robot through a task, typically in one of three ways: via remote control, such as operating a joystick to remotely maneuver a robot; by physically moving the robot through the motions; or by performing the task themselves while the robot watches and mimics their actions.

Learning robots are usually trained with just one of these three demonstration approaches. But MIT engineers have now developed a three-in-one training interface that allows a robot to learn a task through any of the three training methods. The interface takes the form of a handheld, sensor-equipped tool that can attach to many common robotic arms. A person can use the attachment to teach a robot to carry out a task by remotely controlling the robot, physically manipulating it, or demonstrating the task themselves, whichever style they prefer or best suits the task.

The MIT team tested the new tool, which they call a “versatile demonstration interface,” on a standard collaborative robotic arm. Volunteers with manufacturing expertise used the interface to perform two manual tasks that are commonly carried out on factory floors.

The researchers say the new interface offers increased training flexibility, which could expand the types of users and “teachers” who interact with robots. It may also enable robots to learn a wider set of skills. For instance, one person could remotely train a robot to handle toxic substances, while further down the production line another person could physically guide the robot through the motions of packaging a product, and at the end of the line someone else could use the attachment to draw a company logo as the robot watches and learns to do the same.

“We are trying to create highly intelligent and skilled teammates that can effectively collaborate with humans to complete complex work,” says Mike Hagenow, a postdoc at MIT in the Department of Aeronautics and Astronautics. “We believe flexible demonstration tools can help far beyond the manufacturing floor, in other domains where we hope to see increased robot adoption, such as home or caregiving settings.”

Hagenow will present a paper describing the new interface at the IEEE Intelligent Robots and Systems (IROS) conference in October. His MIT co-authors are Dimosthenis Kontogiorgos, a postdoc at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); Yanwei Wang PhD ’25, who recently earned a doctorate in electrical engineering and computer science; and Julie Shah, an MIT professor and head of the Department of Aeronautics and Astronautics.

Training together

Shah’s group at MIT designs robots that can collaborate with humans in the workplace, in hospitals, and at home. A main focus of her research is developing systems that enable people to teach robots new tasks or skills “on the job.” Such systems would, for instance, help a factory floor worker quickly and naturally adjust a robot’s maneuvers on the spot, rather than pausing to reprogram the robot’s software from scratch, a skill the worker may not necessarily have.

The team’s new work builds on an emerging strategy in robot learning called “learning from demonstration,” or LfD, in which robots are designed to be trained in more natural, intuitive ways. In surveying the LfD literature, Hagenow and Shah found that existing LfD training methods fall broadly into three main categories: teleoperation, kinesthetic training, and natural teaching.

One training method may work better than the other two for a particular person or task. Shah and Hagenow wondered whether they could design a tool that combines all three methods, to enable a robot to learn more tasks from more people.

“If we could bring together these three different ways someone might want to interact with a robot, it may bring benefits for different tasks and different people,” says Hagenow.

Tasks at hand

With this goal in mind, the team engineered a new versatile demonstration interface (VDI). The interface is a handheld attachment that can fit onto the arm of a typical collaborative robotic arm. The attachment is equipped with a camera and markers that track the tool’s position and movements over time, along with force sensors that measure the amount of pressure applied during a given task.

When the interface is attached to a robot, the entire robot can be controlled remotely, and the interface’s camera records the robot’s movements, which the robot can use as training data to learn the task. Similarly, a person can physically move the robot through a task, with the interface attached. The VDI can also be detached and physically held by a person to perform the desired task; the camera records the VDI’s movements, which the robot can also use to mimic the task when the tool is reattached.
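The key idea described above is that all three capture modes reduce to the same kind of training data: a timed stream of tool poses and contact forces. A minimal sketch of how such demonstrations might be logged into one common format is below; all names and structures here are illustrative assumptions, not the MIT team's actual software.

```python
# Illustrative sketch: recording demonstrations from any of the three modes
# (teleoperation, kinesthetic guidance, natural handheld use) into a single
# trajectory format that a learning-from-demonstration pipeline could consume.
# All class and field names here are hypothetical.
from dataclasses import dataclass, field
from enum import Enum

class DemoMode(Enum):
    TELEOPERATION = "teleoperation"  # joystick drives the robot remotely
    KINESTHETIC = "kinesthetic"      # person physically moves the robot
    NATURAL = "natural"              # person wields the detached tool

@dataclass
class Sample:
    t: float      # timestamp in seconds
    pose: tuple   # tool position (x, y, z), e.g. from the camera and markers
    force: float  # pressure from the force sensor, in newtons

@dataclass
class Demonstration:
    mode: DemoMode
    samples: list = field(default_factory=list)

    def record(self, t: float, pose: tuple, force: float) -> None:
        self.samples.append(Sample(t, pose, force))

    def trajectory(self) -> list:
        # However it was captured, the demo reduces to the same
        # (time, pose, force) stream the robot can learn to imitate.
        return [(s.t, s.pose, s.force) for s in self.samples]

# Example: a short natural-teaching demo of pressing a peg into a hole.
demo = Demonstration(mode=DemoMode.NATURAL)
demo.record(0.0, (0.30, 0.10, 0.20), 0.0)
demo.record(0.5, (0.30, 0.10, 0.12), 4.5)  # contact: force rises as peg seats
demo.record(1.0, (0.30, 0.10, 0.10), 9.0)
```

Because every mode produces the same `(time, pose, force)` stream, a downstream learner never needs to know which of the three methods the teacher chose.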

To test the attachment’s usability, the team brought the interface, attached to a collaborative robotic arm, to a local innovation center where manufacturing experts learn about and test technologies that can improve factory processes. The researchers set up an experiment in which they asked volunteers at the center to use the robot and all three of the interface’s training methods to complete two common manufacturing tasks: press-fitting and molding. In press-fitting, the user trained the robot to press and fit pegs into holes, similar to many fastening tasks. For molding, a volunteer trained the robot to evenly push a rubbery, putty-like substance around the surface of a center rod, similar to some thermomolding tasks.

For each of the two tasks, the volunteers were asked to use each of the three training methods: first teleoperating the robot with a joystick, then kinesthetically manipulating the robot, and finally detaching the attachment from the robot and using it to “naturally” perform the task themselves while the robot recorded the attachment’s force and movements.

The researchers found that the volunteers generally preferred the natural method over kinesthetic training. The users, who were manufacturing experts, also offered scenarios in which each method might have advantages over the others. Teleoperation, for example, may be preferable when training a robot to handle hazardous or toxic substances. Kinesthetic training could help workers adjust the positioning of a robot tasked with moving heavy packages. And natural teaching could be beneficial in demonstrating tasks that involve delicate and precise maneuvers.

“We imagine using our demonstration interface in flexible manufacturing environments in which one robot might assist across a range of tasks that benefit from specific types of demonstrations,” says Hagenow, who plans to refine the attachment’s design based on user feedback and use it in upcoming experiments. “We view this study as demonstrating how greater flexibility in collaborative robots can be achieved through interfaces that expand the ways end users interact with robots during teaching.”

This work was supported, in part, by the Postdoctoral Fellowship Program for Engineering Excellence and the Wallenberg Foundation Postdoctoral Research Fellowship.
