Google’s new AI model could someday let you understand and talk to dolphins

zeeforce



  • Google and the Wild Dolphin Project have developed an AI model trained to understand dolphin vocalizations
  • DolphinGemma can run directly on Pixel smartphones
  • It will be open-sourced this summer

For most of human history, our relationship with dolphins has been a one-sided conversation: we talk, they squeak, and we nod like we understand each other before tossing them a fish. But now, Google has a plan to use AI to bridge that divide. Working with Georgia Tech and the Wild Dolphin Project (WDP), Google has created DolphinGemma, a new AI model trained to understand and even generate dolphin chatter.

The WDP has been collecting data on a specific group of wild Atlantic spotted dolphins since 1985. The Bahamas-based pod has provided huge amounts of audio, video, and behavioral notes as the researchers have observed them, documenting every squawk and buzz and trying to piece together what it all means. This treasure trove of audio is now being fed into DolphinGemma, which is based on Google’s open Gemma family of models. DolphinGemma takes dolphin sounds as input, processes them using audio tokenizers like SoundStream, and predicts what vocalization might come next. Imagine autocomplete, but for dolphins.
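That "autocomplete for dolphins" idea can be sketched in miniature. Google has not published DolphinGemma's internals, so the snippet below is purely illustrative: it assumes audio has already been turned into discrete token IDs (the job a tokenizer like SoundStream would do) and uses a toy bigram counter as a stand-in for the model's learned next-token distribution.

```python
from collections import Counter, defaultdict

# Toy next-token predictor over discrete audio tokens.
# In DolphinGemma, a neural tokenizer (e.g. SoundStream) would map raw
# dolphin audio to token IDs and a transformer would predict the next
# token; here a bigram table plays that role. All names and the token
# values are invented for illustration.

def train_bigram(sequences):
    """Count which token tends to follow which."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently observed successor of `token`."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Pretend these are tokenized whistle/buzz sequences from the pod.
corpus = [
    [3, 7, 7, 2, 9],
    [3, 7, 2, 9, 9],
    [3, 7, 7, 2, 9],
]
model = train_bigram(corpus)
print(predict_next(model, 7))  # prints 2, the most common successor of 7
```

The real model replaces the bigram table with a Gemma-based transformer, which can condition on long histories of tokens rather than just the previous one, but the input/output contract is the same: tokens in, predicted next token out.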


