Droplets of Sati
Interactive Meditation Installation
Droplets of Sati explores liquid as a shared interface for nonverbal communication through an interactive two-person meditation system. By converting participants’ breathing rhythms into dripping liquid, the work enables gradual synchronization between bodies while generating a soundscape from narrated memories. Each session leaves a physical trace in liquid material, transforming embodied interaction into a permanent record of shared experience.
Problem
Shared memories are rarely symmetrical. Even when two people agree on facts, they often carry divergent emotional tempos. Most memory-sharing technologies assume negotiation must occur through narrative, where experiences are explained and aligned through words.
Solution
Participants sit across from the liquid interface, wearing respiratory sensors that translate breathing into discrete droplet events. In the initial self-attunement phase, each participant hears only their own droplet rhythm, allowing them to ground attention in personal respiratory tempo through synchronized sound and liquid motion.
Interaction
Participants enter the experience with a shared memory and take part in a guided meditation while their narrated memories create a soundscape. Synchronization emerges as a form of nonverbal negotiation between emotional states, conducted through listening to liquid instead of words.
We explore liquid as a shared interface for memory by manipulating the water surface through digitized bio-signals. This transforms the water surface into a responsive canvas and probes the technical and experiential potential of water as a visual medium.
The interaction concludes in shared silence as participants observe the final state of the liquid surface. Diffused ink and oil form an irreversible material trace shaped by the timing, duration, and convergence of their breathing rhythms. This material residue functions as a shared archive of the interaction, enabling reflection on the negotiated experience through synchronized sensation rather than verbal recollection.
The hardware system comprises three fluidic actuation channels orchestrated by an Arduino Nano R4, selected for its compact form factor and high-resolution analog input support. Physiological input from two participants is captured via custom wearable respiratory belts incorporating stretchable conductive rubber elements, whose resistance varies with chest expansion. Signals are sampled at 100 Hz using the microcontroller’s 14-bit ADC, smoothed with an exponential moving average (α = 0.15), and briefly calibrated at startup to establish adaptive baselines and detection thresholds.
The software architecture is split into two coordinated subsystems: a high-level Narrative Engine that handles semantic analysis and audiovisual generation, and a low-level Biofeedback Controller that manages real-time respiration sensing and hydraulic actuation. This division prioritizes deterministic timing for physiological feedback while reserving large-model inference for session initialization.
MIT Media Lab 2025