Scientists at Duke University have unveiled a breakthrough in robotics that could fundamentally change how machines interact with their environment. The innovative system, called SonicSense, enables robots to interpret their surroundings through acoustic vibrations, marking a significant shift away from traditional vision-based robotic perception.
In robotics, the ability to accurately perceive and interact with objects remains a key challenge. While humans naturally combine multiple senses to understand their environment, robots have relied primarily on visual data, limiting their ability to fully understand and manipulate objects in complex scenarios.
The development of SonicSense is a significant step toward closing this gap. By harnessing acoustic sensing, the new technology allows robots to gather detailed information about objects through physical interaction, much as people instinctively use touch and sound to make sense of their surroundings.
Breaking down SonicSense technology
The innovative system is built around a robotic hand with four fingers, each containing a contact microphone embedded in its fingertip. These specialized sensors capture the vibrations generated during various interactions with objects, such as tapping, grasping, or shaking.
What sets SonicSense apart is its sophisticated approach to acoustic sensing. The contact microphones are specifically designed to filter out ambient noise, ensuring clean data collection during object interactions. As Jiaxun Liu, the study's lead author, explains: "We wanted to create a solution that could work with the complex and varied objects encountered every day, giving robots a much richer ability to 'feel' and understand the world."
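To make the noise-filtering idea concrete, here is a minimal, hypothetical sketch of band-pass filtering a contact-microphone recording in Python. The cutoff frequencies, the SciPy-based Butterworth filter, and the simulated tap-plus-hum signal are all illustrative assumptions, not details of the actual SonicSense pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def filter_contact_mic(signal, fs, low_hz=150.0, high_hz=8000.0):
    """Band-pass a raw contact-microphone recording to suppress
    low-frequency ambient rumble/hum and very high-frequency noise.
    (Illustrative cutoffs; not the paper's actual parameters.)"""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, signal)

# Simulated 0.5 s recording at 44.1 kHz: a decaying "tap" resonance at
# 1200 Hz plus 50 Hz ambient hum leaking into the sensor.
fs = 44100
t = np.arange(int(0.5 * fs)) / fs
tap = np.exp(-40.0 * t) * np.sin(2 * np.pi * 1200 * t)
hum = 0.5 * np.sin(2 * np.pi * 50 * t)
clean = filter_contact_mic(tap + hum, fs)
```

After filtering, the dominant spectral component is the object's tap resonance rather than the ambient hum, which is exactly the property a contact sensor needs before any downstream recognition.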
The system's accessibility is particularly noteworthy. Built from commercially available components, including the same contact microphones musicians use to record guitars, along with 3D-printed parts, the entire setup costs just over 200 USD. This cost-effective approach makes the technology far more accessible for widespread adoption and further development.
Beyond visual recognition
Traditional vision-based robotic systems face numerous limitations, especially with transparent or reflective surfaces and objects with complex geometries. As Professor Boyuan Chen notes: "While vision is essential, sound adds a layer of information that can reveal things the eye might miss."
SonicSense overcomes these limitations through its multi-finger approach and advanced AI integration. The system can identify objects made of different materials, understand complex geometric shapes, and even determine the contents of containers, capabilities that have proved difficult for conventional visual recognition systems.
Working with multiple contact points simultaneously allows for a more comprehensive analysis of objects. By combining data from all four fingers, the system can build detailed 3D reconstructions of objects and accurately determine their material composition. For unfamiliar objects, the system may need up to 20 different interactions to reach a conclusion, but for known items, accurate identification can be achieved in as few as four interactions.
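The interaction counts above suggest an evidence-accumulation loop: each tap contributes a little information, and the robot stops once it is confident. One common way to sketch this, shown below, is a Bayesian belief update over candidate object classes. The class names, likelihood values, and confidence threshold here are invented for illustration; they are not the model described in the paper.

```python
import numpy as np

# Hypothetical object classes; a real system would learn these.
CLASSES = ["ceramic mug", "plastic bottle", "steel can"]

def update_belief(belief, likelihood):
    """One Bayesian update: multiply prior by per-tap likelihood, renormalize."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

def identify(likelihood_per_tap, threshold=0.95, max_taps=20):
    """Tap repeatedly, fusing evidence until one class is confident enough."""
    belief = np.full(len(CLASSES), 1.0 / len(CLASSES))  # uniform prior
    for n, lik in enumerate(likelihood_per_tap[:max_taps], start=1):
        belief = update_belief(belief, np.asarray(lik, dtype=float))
        if belief.max() >= threshold:
            return CLASSES[int(belief.argmax())], n
    return CLASSES[int(belief.argmax())], max_taps

# Each simulated tap weakly favors the steel can (made-up likelihoods);
# a handful of taps is enough to cross the confidence threshold.
taps = [[0.2, 0.2, 0.6]] * 20
label, n_taps = identify(taps)
```

The same loop naturally explains the 4-vs-20 gap in the article: informative taps on a familiar object sharpen the belief quickly, while ambiguous evidence forces more interactions before the threshold is reached.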
Real-world applications and testing
SonicSense's practical applications go far beyond laboratory demonstrations. The system has proved particularly effective in scenarios that traditionally challenge robotic perception. Through systematic testing, the researchers demonstrated its ability to perform complex tasks such as determining the number and shape of dice inside a container, measuring the liquid level in a bottle, and creating accurate 3D reconstructions of objects by exploring their surfaces.
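As a toy illustration of surface exploration, the sketch below accumulates simulated fingertip contact points and summarizes them with an axis-aligned bounding box, a crude stand-in for the detailed 3D reconstruction described above. The coordinates, the object size, and the bounding-box summary are all assumptions made for this example.

```python
import numpy as np

def reconstruct_extent(contact_points):
    """Summarize accumulated contact points with an axis-aligned bounding box.
    (A real reconstruction would fit a surface, not just a box.)"""
    pts = np.asarray(contact_points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

# Simulated fingertip contacts sampled on a 6 x 4 x 10 cm box (meters).
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0, 0.0], [0.06, 0.04, 0.10], size=(30, 3))
lo, hi = reconstruct_extent(pts)
size_cm = (hi - lo) * 100  # approximate object dimensions in cm
```

With only a few dozen touches the estimated extent already approaches the true dimensions from below, which mirrors why more interactions yield progressively better reconstructions.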
These capabilities address real challenges in manufacturing, quality control, and automation. Unlike previous attempts at acoustic sensing, SonicSense's multi-point design and ambient-noise filtering make it particularly well suited to dynamic industrial environments, where rich sensory input is essential for accurate object manipulation and assessment.
The research team is actively expanding SonicSense's capabilities to support interactions with multiple objects simultaneously. "This is just the beginning," says Professor Chen. "In the future, we envision SonicSense being used in more advanced robotic hands with dexterous manipulation skills, enabling robots to perform tasks that require a refined sense of touch."
Work is currently under way to integrate object-tracking algorithms, aimed at enabling robots to navigate and interact with objects in cluttered, dynamic environments. This development, combined with plans to incorporate additional sensing modalities such as pressure and temperature, points toward increasingly sophisticated, human-like manipulation capabilities.
The bottom line
The development of SonicSense marks a significant milestone in robot perception, showing how acoustic sensing can complement visual systems to create more capable and adaptable robots. As the technology evolves, its cost-effective design and broad applicability suggest a future in which robots interact with their environment with unprecedented sophistication, bringing us closer to truly human-like robotic abilities.