Our new Perch model helps conservationists analyze sound faster to protect endangered species, from Hawaiian honeyeaters to coral reefs.
One way scientists protect the health of our planet's wild ecosystems is by using microphones (or underwater hydrophones) to collect vast amounts of sound dense with the vocalizations of birds, frogs, insects, whales, fish and more. These recordings can tell us a lot about the animals found in a given area, as well as many other clues about the health of that ecosystem. However, making sense of this large amount of data remains a huge undertaking.
Today we are sharing an update to Perch, our artificial intelligence model that helps conservationists analyze bioacoustic data. The new model provides state-of-the-art, out-of-the-box bird species predictions that improve on the previous version. It adapts more readily to new environments, especially underwater ones such as coral reefs. It is trained on a wider range of sounds, including mammals, amphibians and anthropogenic noise, for a total of almost twice as much data, drawn from public sources such as Xeno-Canto and iNaturalist. It can untangle complex acoustic scenes spanning thousands or even millions of hours of audio. It is also versatile enough to help answer many different questions, from "how many babies are born" to "how many individual animals live in a given area."
To help scientists protect our planet's ecosystems, we are releasing the new version of Perch as an open source model, available on Kaggle.