Echoes of Whale Fall
Machine Learning–Driven Audiovisual Installation
Echoes of Whale Fall is an emotion-driven audiovisual system that uses machine learning as a translation layer to interpret affective signals, transforming deep-sea sounds and imagery into a generative narrative of loss, memory, and rebirth.
Dataset & Source Material
A curated dataset of deep-sea whale footage and acoustic recordings forms the foundation of the system.
These clips are segmented into narrative moments such as emergence, movement, and collective behavior, providing structured input for emotional annotation.
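The segmentation step above can be sketched as a small data model. The schema below is illustrative, not the project's actual one: field names like `clip_id` and `moment` are assumptions, and the boundary values are invented placeholders.

```python
from dataclasses import dataclass

# Hypothetical record for one annotated clip segment.
@dataclass
class Segment:
    clip_id: str
    start_s: float   # segment start within the clip, in seconds
    end_s: float     # segment end, in seconds
    moment: str      # narrative moment: "emergence", "movement", ...

def segment_clip(clip_id: str, boundaries: list[tuple[float, float, str]]) -> list[Segment]:
    """Turn (start, end, moment) triples into Segment records."""
    return [Segment(clip_id, s, e, m) for s, e, m in boundaries]

# Invented example boundaries for a single clip.
segments = segment_clip("whale_042", [
    (0.0, 12.5, "emergence"),
    (12.5, 40.0, "movement"),
    (40.0, 63.0, "collective behavior"),
])
```

Records like these then serve as the units that receive emotional annotations in the mapping stage.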
Signal Processing & Abstraction
Raw audiovisual data is decomposed into multiple layers of information, including motion patterns, intensity shifts, and temporal rhythms. This abstraction translates natural phenomena into analyzable signals suitable for computational interpretation.
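One of these layers, intensity shifts, can be sketched with a frame-wise RMS envelope and its first difference. This is a minimal stand-in for the project's signal pipeline; the frame size and the synthetic test signal are assumptions.

```python
import numpy as np

def intensity_envelope(signal: np.ndarray, frame: int = 1024) -> np.ndarray:
    """Frame-wise RMS energy: a simple 'intensity' layer of the signal."""
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def intensity_shifts(env: np.ndarray) -> np.ndarray:
    """First difference of the envelope: where intensity rises or falls."""
    return np.diff(env)

# Synthetic stand-in for a hydrophone recording: a tone that swells and fades.
t = np.linspace(0, 4, 4 * 8000)
sig = np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 2) ** 2))

env = intensity_envelope(sig)
shifts = intensity_shifts(env)
```

A positive run in `shifts` marks a swell in the recording; the same framing idea extends to motion energy in the video layer.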
Emotional Mapping (ML Layer)
Each segment is annotated using a three-dimensional affective model—arousal, dominance, and valence—creating an emotional dataset.
This structured mapping enables machine learning to operate as a bridge between sensory input and generative output.
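The three-dimensional annotation can be sketched as follows. The specific values are invented for illustration, and the assumption that each axis lives in [-1, 1] is mine, not the project's.

```python
from dataclasses import dataclass

@dataclass
class Affect:
    """Three-dimensional affect annotation; values assumed in [-1, 1]."""
    arousal: float
    dominance: float
    valence: float

# Illustrative hand annotations for narrative moments (values are invented).
annotations = {
    "silence":   Affect(arousal=-0.8, dominance=-0.3, valence=0.0),
    "emergence": Affect(arousal=0.4,  dominance=0.2,  valence=0.6),
    "loss":      Affect(arousal=0.1,  dominance=-0.6, valence=-0.8),
}

def affect_distance(a: Affect, b: Affect) -> float:
    """Euclidean distance in affect space, used to compare emotional states."""
    return ((a.arousal - b.arousal) ** 2
            + (a.dominance - b.dominance) ** 2
            + (a.valence - b.valence) ** 2) ** 0.5
```

A distance measure like this is one simple way the annotated dataset could feed downstream sequencing and generation.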
Temporal Composition
The emotional data drives a non-linear sequencing system, where transitions between states (e.g., silence, emergence, loss) are dynamically arranged.
Rather than a fixed timeline, the narrative evolves through emotional continuity and variation.
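A non-linear sequencer built on emotional continuity could look like the greedy sketch below: each step prefers the affectively nearest next state, with optional random jitter supplying variation. The state vectors and the scheduling rule are assumptions, not the installation's actual scheduler.

```python
import math
import random

# Affect vectors (arousal, dominance, valence) per state; values are invented.
STATES = {
    "silence":   (-0.8, -0.3,  0.0),
    "emergence": ( 0.4,  0.2,  0.6),
    "movement":  ( 0.6,  0.5,  0.3),
    "loss":      ( 0.1, -0.6, -0.8),
}

def compose(start: str, length: int, jitter: float = 0.0, seed: int = 0) -> list[str]:
    """Greedy sequencing: choose the affectively nearest next state,
    perturbed by up to `jitter` of random noise for variation."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        cur = STATES[seq[-1]]
        nxt = min((s for s in STATES if s != seq[-1]),
                  key=lambda s: math.dist(cur, STATES[s]) + rng.uniform(0, jitter))
        seq.append(nxt)
    return seq
```

With `jitter=0` the sequence is fully determined by affective proximity; raising it trades continuity for variation, which mirrors the continuity-and-variation balance described above.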
Generative Visual Output
Machine learning outputs are translated into visual forms, where parameters such as density, motion, and color respond to emotional states. The result is an evolving visual language that reflects the underlying affective structure of the data.
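One possible affect-to-visual mapping is sketched below. The parameter names and the specific mapping rules (arousal drives density and speed, valence shifts hue from cold to warm, dominance scales brightness) are illustrative assumptions.

```python
def affect_to_visual(arousal: float, valence: float, dominance: float) -> dict:
    """Map an affect triple (each in [-1, 1]) to visual parameters.
    Hypothetical mapping: arousal -> density/speed, valence -> hue,
    dominance -> brightness."""
    clamp = lambda x: max(-1.0, min(1.0, x))
    a, v, d = clamp(arousal), clamp(valence), clamp(dominance)
    return {
        "density": (a + 1) / 2,                 # 0 (sparse) .. 1 (dense)
        "speed": 0.1 + 0.9 * (a + 1) / 2,       # particle motion rate
        "hue": 210 - 180 * (v + 1) / 2,         # 210 deg (cold blue) .. 30 deg (warm)
        "brightness": 0.3 + 0.7 * (d + 1) / 2,  # never fully dark
    }
```

Feeding a sequence of affect states through a mapping like this yields the evolving visual language the section describes: each emotional shift produces a correlated shift in the image.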
System Implementation
The system is implemented in Python, integrating audio processing and rule-based generation.
Machine learning here functions not as a predictive engine but as a mediating framework that continuously maps emotional inputs to audiovisual behaviors.
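The mediating loop could be sketched as a class that pulls audiovisual parameters smoothly toward targets set by the current affect. The class name, parameters, and smoothing rule are hypothetical, shown only to illustrate the continuous-mapping idea.

```python
class Mediator:
    """Rule-based mediation layer: consumes a stream of affect values and
    emits audiovisual parameter frames. A sketch under assumed rules."""

    def __init__(self):
        self.state = {"gain": 0.5, "density": 0.5}

    def step(self, arousal: float, valence: float) -> dict:
        # Targets set by the incoming affect (each assumed in [-1, 1]).
        targets = {
            "gain": 0.2 + 0.8 * (arousal + 1) / 2,
            "density": (valence + 1) / 2,
        }
        # Exponential smoothing: move 25% of the way toward each target,
        # so output changes continuously rather than jumping per frame.
        for key, target in targets.items():
            self.state[key] += 0.25 * (target - self.state[key])
        return dict(self.state)
```

Called once per frame, a loop like this keeps sound and image responsive to the emotional signal while avoiding abrupt transitions.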
Harvard GSD 2025