
Google’s AI Is Helping To Understand Dolphin Sounds

Google has developed a new AI model called DolphinGemma, designed to help scientists better understand how dolphins communicate. 

Developed in collaboration with the US-based Georgia Institute of Technology and the Wild Dolphin Project, DolphinGemma uses AI to analyze and even generate dolphin-like sounds.

This could lay the groundwork for basic two-way communication between humans and dolphins — an exciting development for both tech and marine biology.

Dolphin Sounds

Dolphins generally live in small groups called pods, typically of up to a dozen animals. In areas where food is abundant, pods can merge into superpods, which may exceed 1,000 dolphins.

Dolphins are highly social and can establish strong bonds within their pods. They interact with each other by using a variety of sounds, including clicks, whistles, and squeaks, which can be heard from kilometers away.

The objective of Google’s AI model is to decode the meaning behind dolphin vocalizations and perhaps even learn how to communicate back.


Dolphin Sound Data

This project builds on nearly 40 years of research from the Florida-based Wild Dolphin Project (WDP), which has been studying a specific group of wild Atlantic spotted dolphins in the Bahamas since 1985.

The WDP researchers have collected a massive, high-quality dataset of underwater audio and video, linked to individual dolphins, their behaviors, and life histories.

Researchers have already identified patterns, such as signature whistles (which function somewhat like names) that mothers use to call their calves, squawk sounds used during conflicts, and click-buzzes used in courtship or while chasing prey.

But analyzing these sounds manually is a slow, painstaking process. That’s where AI comes in.

DolphinGemma

DolphinGemma is a lightweight, efficient language model trained specifically on dolphin vocalizations. It builds on Google’s Gemma family of AI models and is designed to run directly on Pixel smartphones, the same phones researchers use in the field.

The model can identify repeating patterns, cluster similar sound types, and even predict what a dolphin might “say” next, just like a human language model predicts words in a sentence.
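To make the “predict what comes next” idea concrete, here is a minimal sketch, not DolphinGemma itself: it treats a sequence of labeled sound types as tokens and predicts the most likely next one from simple bigram counts, the same principle a large language model applies at far greater scale. The sound labels and the example sequence are hypothetical.

    from collections import Counter, defaultdict

    # Hypothetical tokenized recording: each symbol is a detected sound type.
    sequence = ["whistle", "click", "click", "buzz", "whistle",
                "click", "buzz", "whistle", "click", "click"]

    # Count how often each sound type follows another (bigram statistics).
    transitions = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current][nxt] += 1

    def predict_next(sound):
        """Return the sound type most likely to follow `sound`."""
        followers = transitions[sound]
        return followers.most_common(1)[0][0] if followers else "<unknown>"

    print(predict_next("whistle"))  # -> "click" in this toy data

A real model like DolphinGemma works on audio representations rather than hand-labeled categories, but the predictive principle is the same.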

By recognizing these vocal structures faster and more accurately than before, DolphinGemma gives scientists a clearer picture of how dolphin communication works.

Talking Back: Project CHAT

Alongside DolphinGemma, the team is also testing a tool called CHAT (Cetacean Hearing Augmentation Telemetry), developed with Georgia Tech.

Rather than trying to translate all dolphin sounds, CHAT takes a simpler approach: it uses artificial whistles linked to specific objects that dolphins enjoy, like scarves or seaweed.

The idea is that dolphins, naturally curious, might mimic these sounds to “ask” for the objects. Using a Pixel phone, CHAT can recognize the whistles in real time and notify researchers underwater via bone-conducting headphones.

It’s a clever way to create a shared, symbolic language and explore two-way interaction.
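For a sense of how the whistle-recognition step might work, here is a small sketch under stated assumptions (the article does not describe CHAT’s actual pipeline): it synthesizes an artificial whistle as a frequency chirp, then flags incoming audio whose spectrogram correlates strongly with the stored template. The sample rate, frequency range, and threshold are all illustrative.

    import numpy as np
    from scipy.signal import chirp, spectrogram

    FS = 48_000  # sample rate in Hz (assumed)

    def make_whistle(duration=0.5, f0=8_000, f1=12_000):
        """Synthesize a rising 'artificial whistle' as a linear chirp."""
        t = np.linspace(0, duration, int(FS * duration), endpoint=False)
        return chirp(t, f0=f0, t1=duration, f1=f1)

    def whistle_present(audio, template, threshold=0.6):
        """Flag a match when the two spectrograms correlate strongly."""
        _, _, s_audio = spectrogram(audio, fs=FS)
        _, _, s_tmpl = spectrogram(template, fs=FS)
        n = min(s_audio.shape[1], s_tmpl.shape[1])
        a = s_audio[:, :n].ravel()
        b = s_tmpl[:, :n].ravel()
        score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return score > threshold

    template = make_whistle()
    incoming = template + 0.1 * np.random.randn(template.size)  # noisy copy
    print(whistle_present(incoming, template))  # True for a close match

In practice, a phone-based system would run detection on short sliding windows of microphone input and would need to be far more robust to ocean noise; this sketch only shows the matching idea.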

Wrapping Up

Google plans to release DolphinGemma as an open-source model this summer. While it’s trained on Atlantic spotted dolphins, the AI can potentially be adapted to study other species, like bottlenose or spinner dolphins.

That means researchers worldwide will soon have powerful new tools to explore dolphin communication more deeply.

The dream of interspecies communication isn’t science fiction anymore. Thanks to the combination of AI, marine biology, and communication technology, we are no longer just listening to dolphins; we are beginning to understand them.
