For decades, marine researchers have worked to understand dolphin communication using hydrophones and spectrograms. Now DolphinGemma, an AI-powered language model, is set to enhance these efforts. Developed by Google in collaboration with Dr. Denise Herzing of the Wild Dolphin Project and Dr. Thad Starner of Georgia Tech and Google DeepMind, DolphinGemma analyzes vocalizations from a vast audio library collected over roughly 40 years from a pod of wild Atlantic spotted dolphins in the Bahamas.
The model converts dolphin vocalizations into sequences of discrete audio tokens, then learns to identify patterns and predict the sounds likely to follow, much as autocomplete predicts the next word (see the sketch below). Beyond decoding, the team is building tools for two-way interaction, including underwater CHAT (Cetacean Hearing Augmentation Telemetry) devices that detect specific dolphin sounds and play back synthetic whistles associated with objects or actions. This field season, DolphinGemma will be deployed in the Bahamas, and it may later be released as an open model that researchers can adapt to other dolphin species. If the approach succeeds, it could let us not only study dolphin society but also communicate with dolphins and better understand their exchanges.
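To make the tokenize-and-predict idea concrete, here is a minimal Python sketch. It is not DolphinGemma's pipeline: the real system reportedly pairs a learned neural audio tokenizer (Google's SoundStream) with a transformer, whereas this toy quantizes feature frames against a fixed codebook and fits a simple bigram model. The frame size, codebook size, and the `tokenize`, `fit_bigram`, and `predict_next` helpers are all illustrative assumptions.

```python
import numpy as np

# Toy illustration of the two stages described above, not DolphinGemma's
# actual pipeline: (1) discretize audio features into tokens, and
# (2) predict the next token from context, autocomplete-style.

RNG = np.random.default_rng(0)

def tokenize(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each feature frame (e.g. a spectrogram column) to the index
    of its nearest codebook vector, yielding discrete tokens."""
    # dists has shape (n_frames, n_codes)
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def fit_bigram(tokens: np.ndarray, n_codes: int) -> np.ndarray:
    """Count token-to-token transitions; a crude stand-in for a
    trained language model over the tokenized audio."""
    counts = np.ones((n_codes, n_codes))  # add-one smoothing
    for a, b in zip(tokens[:-1], tokens[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(token: int, transitions: np.ndarray) -> int:
    """'Autocomplete': return the most probable next token."""
    return int(transitions[token].argmax())

# Synthetic data standing in for recorded vocalizations: 1,000 frames
# of 16-dimensional spectral features and a random 64-entry codebook.
features = RNG.normal(size=(1000, 16))
codebook = RNG.normal(size=(64, 16))

tokens = tokenize(features, codebook)
transitions = fit_bigram(tokens, n_codes=64)
print("last token:", tokens[-1],
      "-> predicted next:", predict_next(tokens[-1], transitions))
```

A production system would replace the bigram with a transformer trained on the tokenized corpus, but the interface is the same idea: a sequence of audio tokens goes in, and a probability distribution over the next token comes out.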