Ocean Records - Visualizing The Marine Soundscape

My Role

Design, Development, Data Analytics, Machine Learning

Team Members

Ananmay Sharan

Timeline

6 weeks | 2025

Industry

Remote Sensing

Our Challenge

Ninety-two percent of global shipping traffic overlaps with blue, fin, and humpback whale ranges, yet ship-strike data is sparse. From the surface we can’t see the ecosystem below - but we can hear it.

Passive acoustic monitoring reveals ecological health and could provide an early warning system for dangerous whale-ship encounters. However, these enormous troves of raw data are difficult to parse or to draw insight from.

Our Goal

From ~80,000 Hours of Raw Recordings

How might we better understand the marine ecosystem and visualize the impact of human activity on marine life?

The Solution

We built an interactive experience and trained a custom machine-learning classifier that transforms raw audio and sparse detections into detailed, second-by-second identifications of marine and human sounds.


See the live experience

Explore from Multi-year to Month to Day view

Each animal has a unique call

However, noise from ships and seal bomb explosions (used for fishing) overlaps with these frequencies, interfering with the animals’ ability to navigate, find food, and communicate.

By monitoring the soundscape, we can better understand the marine ecosystem and the effects of anthrophony on it.
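To make the frequency overlap concrete, here is a minimal sketch that inspects call energy in species-typical bands with a spectrogram. The clip filename and band edges are illustrative approximations, not the project’s measured values.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Hypothetical hydrophone clip; any mono WAV works for this sketch.
rate, audio = wavfile.read("hydrophone_clip.wav")
audio = audio.astype(np.float32)

# A long FFT window gives fine low-frequency resolution, since
# baleen whale calls sit well below 100 Hz.
freqs, times, power = spectrogram(audio, fs=rate, nperseg=8192)

# Approximate bands (Hz): blue and fin whale calls are very low,
# while ship noise is broadband but concentrated below ~1 kHz.
bands = {"blue whale": (10, 40), "fin whale": (15, 30), "ship": (20, 1000)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    print(f"{name}: mean power {power[mask].mean():.3e}")
```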

The Experience

Global View

Toggle across sensors to see four years of seasonal patterns in marine and human sounds, while live shipping tracks for each year appear on the left.

Monthly View

Explore monthly trends and hour-level detections showing where human-made sounds (anthrophony) and marine sounds (biophony) directly overlapped.
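A rough sketch of how that hour-level overlap could be computed from per-second detections (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical per-second detections: timestamp, label, category,
# where category is "biophony" or "anthrophony".
df = pd.read_csv("detections.csv", parse_dates=["timestamp"])

# Mark each hour in which each category was detected at least once.
hourly = (
    df.assign(hour=df["timestamp"].dt.floor("h"))
      .pivot_table(index="hour", columns="category",
                   values="label", aggfunc="count")
      .fillna(0) > 0
)

# Overlap hours: both marine and human sound present.
overlap = hourly["biophony"] & hourly["anthrophony"]
print(overlap.sum(), "hours with direct overlap")
```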

Daily View

See and hear 24 hours underwater in just 14 minutes as a custom ML classifier turns raw audio into second-by-second soundscapes that reveal the beauty of marine sounds and the overlap of human noise.
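For a sense of the compression involved, a quick back-of-the-envelope check:

```python
# 24 hours of recording compressed into a 14-minute playback.
recorded = 24 * 60 * 60      # 86,400 recorded seconds
playback = 14 * 60           # 840 playback seconds
print(recorded / playback)   # ~103 recorded seconds per playback second
```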

The Process

The recordings came from Monterey Bay, CA

Globally, it hosts one of the most diverse marine mammal assemblages by species count and abundance.

From Raw Audio to Detections

We produced detailed detections by merging official labels with a classifier we trained to interpret raw audio second by second.
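A minimal sketch of that merge, assuming the official labels arrive as time intervals while the classifier emits per-second predictions with confidence scores (filenames, column names, and the 0.9 threshold are all hypothetical):

```python
import pandas as pd

# Hypothetical inputs: official annotations as [start, end, label]
# intervals, and classifier output as per-second [timestamp, label, score].
official = pd.read_csv("official_labels.csv", parse_dates=["start", "end"])
model = pd.read_csv("model_predictions.csv", parse_dates=["timestamp"])

# Expand each official interval to per-second rows so both sources
# share the same one-second grid.
rows = [
    pd.DataFrame({"timestamp": pd.date_range(r.start, r.end, freq="1s"),
                  "label": r.label, "source": "official"})
    for r in official.itertuples()
]
expanded = pd.concat(rows, ignore_index=True)

# Keep confident model detections, then prefer the official label
# wherever both sources cover the same second.
model = model[model["score"] >= 0.9].assign(source="model")
merged = (pd.concat([expanded, model[expanded.columns]])
            .drop_duplicates(subset=["timestamp", "label"], keep="first"))
```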

Machine Learning Training Process

Building on Google DeepMind’s Perch model, we trained classifiers capable of second-by-second labeling of Monterey Bay’s marine soundscape.

Perch was initially trained on birdsong, but with few-shot training it outperforms dedicated whale classifiers at identifying marine sounds.
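A rough sketch of that few-shot workflow: embed a handful of labeled windows with the pretrained model, fit a lightweight linear classifier on top, then classify every window of the raw stream. Here `embed_clips` stands in for the actual Perch embedding call, and the window length, sample rate, embedding size, and class names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def embed_clips(clips: np.ndarray) -> np.ndarray:
    """Stand-in for the Perch embedding call. The real model maps
    fixed-length audio windows to fixed-size embedding vectors; random
    features are used here only so the sketch runs end to end."""
    return rng.normal(size=(len(clips), 1280))  # embedding dim assumed

# Few-shot training set: a handful of labeled windows per class
# (class names and the 5 s / 32 kHz window format are illustrative).
labeled_windows = np.zeros((30, 5 * 32000))
labels = np.repeat(["blue_whale", "ship", "seal_bomb"], 10)

# The heavy lifting lives in the pretrained embeddings, so a simple
# linear classifier on top is enough for few-shot adaptation.
clf = LogisticRegression(max_iter=1000).fit(embed_clips(labeled_windows), labels)

# Inference: embed one window per second of raw audio and classify it
# to get second-by-second detections (one minute shown, unbatched).
stream_windows = np.zeros((60, 5 * 32000))
per_second_labels = clf.predict(embed_clips(stream_windows))
```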


Once we had detections, we found patterns and refined the design language.
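One plausible way such seasonal patterns could be surfaced is by resampling the merged per-second detections into monthly counts per label (file and column names hypothetical):

```python
import pandas as pd

# Hypothetical merged detections: one row per detected second.
det = pd.read_csv("merged_detections.csv", parse_dates=["timestamp"])

# Monthly detection counts per label expose seasonal structure
# in biophony versus anthrophony.
monthly = (det.set_index("timestamp")
              .groupby("label")
              .resample("MS")
              .size()
              .unstack("label", fill_value=0))
print(monthly.head())
```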

Data and Tools

Technology Stack

The Impact and the Final Experience in Motion

Video Walkthrough

Audio on for the best experience.

1. Showed how passive acoustic monitoring offers a multi-dimensional look into ocean ecosystem patterns.

2. Created an emotional understanding of anthropogenic noise pollution, even within sanctuary areas.

3. Demonstrated how machine learning models like Perch can scale up detections and unlock the full power of passive acoustic monitoring.

Next Steps

This work could further

Reflection

From data to meaning

Working on Ocean Records made me fall in love with making big, complex data understandable and emotionally resonant. Turning 80,000 hours of underwater audio into clear, interactive insights pushed me to blend design craft with cutting-edge machine learning, training classifiers and shaping visual systems that reveal patterns people can’t see from the surface.

Through collaboration with ML scientists, iterative analysis, and careful storytelling, I saw how design and machine learning bring clarity to complex climate and ecological data.