Robotic system zeroes in on objects most relevant for helping humans | MIT News

For a robot, the real world is a lot to take in. Making sense of every data point in a scene can require a huge amount of computational effort and time. Using that information to then decide how best to help a human is an even thornier exercise.

Now, MIT roboticists have a way to cut through the data noise, to help robots focus on the features in a scene that are most relevant for assisting humans.

Their approach, which they aptly dub “Relevance,” enables a robot to use cues in a scene, such as audio and visual information, to determine a human’s objective, and then quickly identify the objects that are most likely to be relevant in fulfilling that objective. The robot then carries out a set of maneuvers to safely offer the relevant objects or actions to the human.

The researchers demonstrated the approach with an experiment that simulated a breakfast buffet. They set up a table with various fruits, drinks, snacks, and tableware, along with a robotic arm outfitted with a microphone and camera. Applying the new Relevance approach, they showed that the robot was able to correctly identify a human’s objective and appropriately assist them in different scenarios.

In one case, the robot took in visual cues of a human reaching for a can of prepared coffee and promptly handed the person milk and a stir stick. In another scenario, the robot picked up on a conversation between two people talking about coffee, and offered them a can of coffee and creamer.

Overall, the robot was able to predict a human’s objective with 90 percent accuracy, and to identify relevant objects with 96 percent accuracy. The method also improved the robot’s safety, reducing the number of collisions by more than 60 percent compared to carrying out the same tasks without applying the new method.

“This approach of enabling relevance could make it much easier for a robot to interact with humans,” says Kamal Youcef-Toumi, professor of mechanical engineering at MIT. “A robot wouldn’t have to ask a human so many questions about what they need. It would just actively take information from the scene to figure out how to help.”

Youcef-Toumi’s group is exploring how robots programmed with Relevance can help in smart manufacturing and warehouse settings, where they envision robots working alongside and intuitively assisting humans.

Youcef-Toumi, along with graduate students Xiaotong Zhang and Dingcheng Huang, will present their new method at the IEEE International Conference on Robotics and Automation (ICRA) in May. The work builds on another paper the group presented at ICRA last year.

Finding focus

The team’s approach is inspired by our own ability to gauge what’s relevant in daily life. Humans can filter out distractions and focus on what’s important thanks to a region of the brain known as the Reticular Activating System (RAS). The RAS is a bundle of neurons in the brainstem that acts subconsciously to prune away unnecessary stimuli, so that a person can consciously perceive the relevant stimuli. The RAS helps to prevent sensory overload, keeping us, for example, from fixating on every single item on a kitchen counter, and instead helping us focus on pouring a cup of coffee.

“The amazing thing is, these groups of neurons filter out everything that is not important, and then the brain focuses on what is relevant at the time,” Youcef-Toumi explains. “That’s basically what our proposition is.”

He and his team developed a robotic system that broadly mimics the RAS’s ability to selectively process and filter information. The approach consists of four main phases. The first is a “watch,” or perception, stage, during which a robot takes in audio and visual cues, for instance from a microphone and camera, that are continuously fed into an AI “toolkit.” This toolkit can include a large language model (LLM) that processes audio conversations to identify keywords and phrases, as well as various algorithms that detect and classify objects, humans, physical actions, and task objectives. The AI toolkit is designed to run continuously in the background, similarly to the subconscious filtering that the brain’s RAS performs.
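To make the watch phase concrete, here is a minimal Python sketch of one tick of such a background loop. The SceneState container and the stand-in functions (detect_objects, extract_keywords, detect_humans) are illustrative assumptions, not interfaces from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class SceneState:
    """Rolling snapshot of the scene, refreshed by the watch loop."""
    objects: list = field(default_factory=list)   # e.g. [("mug", (0.4, 0.2))]
    keywords: list = field(default_factory=list)  # e.g. ["coffee"]
    humans_present: bool = False

def detect_objects(frame):
    """Stand-in for a vision model that detects and classifies objects."""
    return [("mug", (0.4, 0.2)), ("creamer", (0.6, 0.3))]

def extract_keywords(audio_chunk):
    """Stand-in for an LLM pulling key phrases from transcribed speech."""
    return ["coffee"] if audio_chunk else []

def detect_humans(frame):
    """Stand-in for a person detector."""
    return False

def watch_step(frame, audio_chunk, state):
    """One tick of the continuous watch phase: refresh the scene state."""
    state.objects = detect_objects(frame)
    state.keywords = extract_keywords(audio_chunk)
    state.humans_present = detect_humans(frame)
    return state
```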

The second stage is a “trigger check” phase, a periodic check that the system performs to assess whether anything important is happening, such as whether a human is present or not. If a human has stepped into the environment, the system’s third phase kicks in. This phase is the heart of the team’s system, which acts to determine the features in the environment that are most likely relevant to assist the human.
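A trigger check of this kind could be as simple as polling the shared scene state on a timer. This sketch reuses the SceneState from the previous snippet; the one-second polling interval is an assumed value, not taken from the paper:

```python
import time

def run_trigger_check(state, interval_s=1.0):
    """Poll the shared scene state on a fixed schedule and return once
    a human appears in the environment."""
    while not state.humans_present:
        time.sleep(interval_s)
    return state  # a human is present: hand off to the relevance phase
```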

To establish relevance, the researchers developed an algorithm that takes in real-time predictions made by the AI toolkit. For instance, the toolkit’s LLM may pick up the keyword “coffee,” and an action-classifying algorithm may label a person reaching for a cup as having the objective of “making coffee.” The team’s Relevance method would factor in this information to first determine the “class” of objects that have the highest probability of being relevant to the objective of “making coffee.” This might automatically filter out classes such as “fruits” and “snacks” in favor of “mugs” and “creamer.” The algorithm would then further filter within the relevant classes to determine the most relevant “elements.” For instance, based on visual cues of the environment, the system may label a mug that is closest to a person as more relevant, and helpful, than a mug that is farther away.
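This two-stage filtering could be sketched as follows. The hard-coded objective-to-class table and the distance-based ranking are illustrative stand-ins for the probabilistic relevance scores the researchers actually compute:

```python
import math

# Illustrative objective-to-class mapping; an assumption, not from the paper.
RELEVANT_CLASSES = {
    "making coffee": {"mug", "creamer", "coffee can", "stir stick"},
}

def relevance_filter(objective, detected_objects, human_pos):
    """Two-stage filter: keep classes relevant to the objective,
    then rank the remaining elements by proximity to the human."""
    classes = RELEVANT_CLASSES.get(objective, set())
    # Stage 1: class-level filtering (drops e.g. "fruits", "snacks").
    candidates = [(name, pos) for name, pos in detected_objects
                  if name in classes]
    # Stage 2: element-level ranking, here simply by distance to the human.
    candidates.sort(key=lambda obj: math.dist(obj[1], human_pos))
    return candidates

objects = [("mug", (0.2, 0.1)), ("mug", (1.5, 0.9)), ("apple", (0.3, 0.2))]
print(relevance_filter("making coffee", objects, human_pos=(0.0, 0.0)))
# -> the nearest mug ranks first; the apple is filtered out entirely
```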

In the fourth and final phase, the robot takes the identified relevant objects and plans a path to physically access and offer the objects to the human.
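The offer phase might look like the following sketch, where Arm is a hypothetical stub standing in for a real manipulator API and the 0.3-meter standoff offset is an assumed value:

```python
class Arm:
    """Stub for a real manipulator interface; prints instead of moving."""
    def move_to(self, pos): print(f"moving to {pos}")
    def grasp(self, name): print(f"grasping {name}")
    def release(self): print("releasing")

def offer_object(arm, obj_name, obj_pos, human_pos):
    """Move to a relevant object, grasp it, and present it to the human."""
    arm.move_to(obj_pos)                           # approach the object
    arm.grasp(obj_name)
    standoff = (human_pos[0] + 0.3, human_pos[1])  # assumed offer point
    arm.move_to(standoff)                          # present within reach
    arm.release()

offer_object(Arm(), "mug", (0.2, 0.1), (0.0, 0.0))
```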

Helping mode

The researchers tested the new system in experiments that simulated a breakfast buffet. They chose this scenario based on the publicly available Breakfast Actions dataset, which comprises videos and images of typical activities that people perform during breakfast, such as preparing coffee, cooking pancakes, making cereal, and frying eggs. Actions in each video and image are labeled, along with the overall objective (frying eggs, versus making coffee).

Using this dataset, the team tested various algorithms in their AI toolkit, such that, when receiving the actions of a person in a new scene, the algorithms could accurately label and classify the human tasks and objectives, along with the associated relevant objects.

In their experiments, they set up a robotic arm and gripper and instructed the system to assist humans as they approached a table filled with various drinks, snacks, and tableware. They found that when no humans were present, the robot’s AI toolkit operated continuously in the background, labeling and classifying objects on the table.

When, during a trigger check, the robot detected a human, it snapped to attention, turning on its Relevance phase and quickly identifying the objects in the scene that were most likely to be relevant, based on the human’s objective as determined by the AI toolkit.

“Relevance can guide the robot to generate seamless, intelligent, safe, and efficient assistance in a highly dynamic environment,” says co-author Zhang.

Going forward, the team hopes to apply the system to scenarios that resemble workplace and warehouse environments, as well as to other tasks and objectives typically performed in household settings.

“I would want to test this system in my home to see, for instance, if I’m reading the paper, maybe it can bring me coffee. If I’m doing laundry, it can bring me a laundry pod. If I’m doing repair, it can bring me a screwdriver,” Zhang says. “Our vision is to enable human-robot interactions that can be much more natural and fluent.”

This research was made possible by the support and partnership of King Abdulaziz City for Science and Technology (KACST) through the Center for Complex Engineering Systems at MIT and KACST.
