MIT engineers design an aerial microrobot that can fly at the speed of a bumblebee | MIT News

In the future, small flying robots could help search for survivors trapped under rubble after a devastating earthquake. Like real insects, these robots could squeeze through tight spaces that larger robots can't reach, while dodging stationary obstacles and pieces of falling debris.

Until now, aerial microrobots could only fly slowly along smooth trajectories, a far cry from the fast, agile flight of real insects.

MIT researchers have demonstrated aerial microrobots that can fly with speed and agility comparable to their biological counterparts. The joint team designed a new artificial intelligence-based controller for the bug robot that allowed it to follow gymnastic flight paths, such as performing continuous body rolls.

Thanks to a two-part control scheme that combines high performance with computational efficiency, the robot's speed and acceleration increased by approximately 450 percent and 250 percent, respectively, compared to the researchers' best previous demonstrations.

The fast robot was agile enough to complete 10 consecutive somersaults in 11 seconds, even as wind disturbances threatened to push it off course.

The microrobot flips 10 times in 11 seconds.

Source: Courtesy of the Soft and Microrobotics Laboratory

“We want to be able to use these robots in scenarios where more traditional quadcopter robots would struggle to fly, but insects could navigate. Now, thanks to our bio-inspired control framework, our robot's flight performance is comparable to insect flight performance in terms of speed, acceleration, and pitch angle. This is quite an exciting step toward a future goal,” says Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), head of the Soft and Microrobotics Laboratory in the Research Laboratory of Electronics (RLE), and a senior author of a paper on the robot.

Chen is joined on the paper by co-authors Yi-Hsuan Hsiao, an MIT EECS graduate student; Andrea Tagliabue PhD '24; Owen Matteson, a graduate of the Department of Aeronautics and Astronautics (AeroAstro); EECS alumna Suhan Kim; Tong Zhao MEng '23; and senior co-author Jonathan P. How, the Ford Professor of Engineering in AeroAstro and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research appears today in Science Advances.

AI controller

Chen's group has been building robotic insects for more than five years.

They recently developed a more durable version of their tiny robot, a microcassette-sized device that weighs less than a paper clip. The new version has larger flapping wings that allow for more agile movements, driven by a set of soft artificial muscles that beat the wings at extremely high speeds.

However, the controller – the robot's “brain,” which determines its position and tells it where to go – had been manually tuned by a human, limiting the robot's performance.

For the robot to fly quickly and aggressively like a real insect, it needed a more robust controller that could account for uncertainty and quickly perform complex optimizations.

But such a controller would demand too much computation to run in real time, especially given the complex aerodynamics of such a lightweight robot.

To address this challenge, Chen's group teamed up with How's team to develop a two-part AI-based control scheme that provides the reliability needed to perform complex, high-speed maneuvers and the computational efficiency needed for real-time implementation.

“Advances in hardware meant that the controller could do more on the software side, but at the same time, as the controller evolved, more could be done with the hardware. Once Kevin's team demonstrated the new capabilities, we showed that we could leverage them,” says How.

In the first step, the team built a model predictive controller. This powerful type of controller uses a dynamical mathematical model to predict the robot's behavior and plan the optimal sequence of actions for safely following a trajectory.

While computationally intensive, it can plan difficult maneuvers such as mid-air somersaults, quick turns, and aggressive body tilts. This high-performance planner is also designed to respect limits on the force and torque the robot can apply, which is necessary to avoid crashes.

For example, to perform multiple flips in a row, the robot would have to slow down such that its initial conditions are exactly right for it to perform the flip again.

“If little mistakes creep in and you try to repeat that spin 10 times with those little mistakes, the robot will just crash. We need to have solid flight control,” How says.

They use this expert planner to teach a “policy,” based on a deep learning model, to control the robot in real time, in a process called imitation learning. The policy is the robot's decision engine, telling it where and how to fly.

Essentially, imitation learning compresses a powerful controller into a computationally efficient AI model that can run very fast.
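A minimal sketch of that compression step, under stated assumptions: here the “expert” is a simple hand-tuned linear feedback law standing in for the expensive planner, and the distilled policy is a linear least-squares fit rather than the small neural network the paper describes. The idea is the same: query the slow expert offline on many states, then fit a model that reproduces its commands at a fraction of the cost.

```python
import numpy as np

# Imitation-learning sketch (not the paper's actual pipeline): query an
# expensive "expert" controller on many states, then fit a cheap policy
# that mimics it for real-time use.
rng = np.random.default_rng(0)

def expert(state):
    """Stand-in for the slow model-predictive planner:
    maps [position error, velocity] to a thrust-like command."""
    pos_err, vel = state
    return 4.0 * pos_err - 1.2 * vel    # illustrative gains

# 1. Collect a training set of (state, expert action) pairs offline.
states = rng.uniform(-1.0, 1.0, size=(5000, 2))
actions = np.array([expert(s) for s in states])

# 2. Fit a fast policy by regression -- linear least squares here;
#    the paper uses a small deep network, but the principle matches.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    return state @ W                     # cheap enough for real time

# 3. The learned policy reproduces the expert on an unseen state.
test_state = np.array([0.3, -0.5])
print(policy(test_state), expert(test_state))   # the two agree
```

Because the expert here happens to be linear, the regression recovers it essentially exactly; with a real MPC expert and a neural-network policy, the fit is approximate, which is why the training data must cover the aggressive maneuvers the robot will fly.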

The key was cleverly generating enough training data to teach the policy everything it needed to know to perform aggressive maneuvers.

“A strong training method is the secret of this technique,” explains How.

The AI-based policy takes the robot's position as real-time input and outputs control commands, such as thrust and torque.

Insect-like performance

In their experiments, this two-part approach enabled the insect-scale robot to fly 447 percent faster while increasing acceleration by 255 percent. The robot was able to perform 10 somersaults in 11 seconds, and the tiny machine never deviated from its planned trajectory by more than about 4 to 5 centimeters.

“This work shows that soft and micro robots, traditionally limited by speed, can now use advanced control algorithms to achieve agility similar to natural insects and larger robots, opening up new possibilities for multimodal locomotion,” Hsiao says.

The researchers were also able to demonstrate saccade movement, in which an insect tilts very aggressively, flies quickly to a certain position, and then tilts the other way to stop. This rapid acceleration and deceleration helps insects localize themselves and see clearly.

“This biomimicking flight behavior may help us in the future when we start putting cameras and sensors on board the robot,” says Chen.

A major area of future work will be adding sensors and cameras that will allow the microrobots to fly outdoors without being tethered to a complex motion-capture system.

Scientists also want to explore how onboard sensors can help robots avoid collisions or coordinate navigation.

“For the microrobotics community, I hope this paper signals a paradigm shift by showing that we can develop a new control architecture that is both efficient and effective,” says Chen.

“This work is particularly impressive because these robots continue to perform precise flips and fast turns despite the large uncertainties resulting from relatively large manufacturing tolerances in small-scale production, wind gusts exceeding 1 meter per second, and even the power cable wrapping around the robot while performing repeated flips,” says Sarah Bergbreiter, a professor of mechanical engineering at Carnegie Mellon University, who was not involved in the work.

“Although the controller currently operates on an external computer rather than onboard the robot, the authors have demonstrated that similar but less precise control principles can be feasible even with the more limited computation available on an insect-scale robot. This is exciting because it points to future insect-scale robots that will have agility comparable to their biological counterparts,” she adds.

This research is funded in part by the National Science Foundation (NSF), Office of Naval Research, Air Force Office of Scientific Research, MathWorks, and the Zakhartchenko Fellowship.
