
A Listening Field

DRAFT (2026)

(Scroll down for audio examples)

 

A Listening Field is a spatial musical ecology that takes Beethoven’s Heiliger Dankgesang eines Genesenen an die Gottheit (String Quartet Op. 132, Molto Adagio) as its conceptual and harmonic point of departure. Rather than quoting Beethoven’s music or reconstructing its surface features, the work asks a different question: what kind of musical organism does this movement describe? In Op. 132, Beethoven presents convalescence and gratitude not as a triumphant resolution, but as a slow, fragile re-entry into life. The music alternates between luminous, modal stillness and more animated passages marked “Neue Kraft fühlend” (“feeling new strength”). Yet even in those moments of renewed energy, there is restraint and doubt; the piece does not surge forward so much as it carefully tests its own vitality. This restraint — this sense that the music is relearning how to exist — is familiar to anyone who has journeyed through a serious illness. That experience is the primary inspiration for A Listening Field.

 

A Listening Field avoids goal-directed harmonic progression in favor of evolving harmonic states. Each state defines a sonic condition — a harmonic center or adjacency, a cast of humility, illumination, or resignation — rather than a step in a sequence. These states are not ordered in a fixed form. Instead, they emerge within an ecology and influence one another over time through their proximity and harmonic characteristics. I think of such music — sonic entities interacting within a shared behavioral ecology — as "behavioral music". The impetus comes from listening to a forest: here, an ecology of musical species, each identified by its repertoire of expressions, interacting in their different states and through their relations with one another.

 

Harmonic Language and Tuning

The work is grounded in Just Intonation, chosen not for historical reenactment but for its capacity to sustain sound without functional-harmonic pressure. Higher N-limit ratios — such as 11/8, 5/3, 17/12, and carefully inflected versions of each — are used to introduce color and motion without reintroducing functional tonality. Higher harmonic tonalities — such as 23/16, 29/16, and 13/8 — are used to create different colors of harmonic light or to convey local turbulence in the harmonic fields. The work avoids treating the major or minor third as a defining identity. When thirds appear, they are demoted from structural roles and treated instead as gestures or passing regions. This allows harmonic warmth to emerge without collapsing the field into triadic function.

 

Since the Molto Adagio is in F Lydian — with a B-natural rather than B-flat — the relationship of F and C is one of breathing rather than their usual tonic-dominant functionality. They function less as an engine of harmonic tension than as a polarity of respiration and harmonic adjacency. They are represented here with Just triads (1/1, 5/4, 3/2, 9/4) and occasional 17/12 for a glint of color, and they form the foundation of A Listening Field. The Lydian raised 4th is represented with 11/8, which gives it a more acoustically integrated harmony than the harmonically abrasive 12-TET tritone. The 11/8 is used as a chord tone but also as an axis of transposition, where groups of F and C harmonies shift their fundamental frequencies by powers of 11/8. This is done to imbue the piece more prominently with Lydian "11-ness". Another axis of transposition comes from the Molto Adagio's use of d-minor and a-minor chords, which seem to bring an expression of tenderness, humility, vulnerability, and even supplication to the piece. In A Listening Field, their harmonies can be transposed up and down by powers of other ratios — so far, 11/7 and 6/5. Each of these tonal families — F, C, F Lydian, A minor, and D minor — includes variants with octave displacement or higher N-limit ratios in order to expand their color.
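As a sketch of the transposition arithmetic described above — the base frequency for F and the chosen steps here are illustrative assumptions, not the piece's actual tuning data:

```python
# Sketch: just triads transposed along the 11/8 axis (illustrative values only).
F0 = 174.61  # assumed base frequency for F, in Hz; the piece's tuning may differ

triad = [1.0, 5/4, 3/2, 9/4]   # the Just triad degrees named in the text
axis = 11/8                    # the Lydian "11-ness" transposition axis

def transpose(base, ratios, axis, k):
    """Shift a chord's fundamental by axis**k, then spell its just ratios."""
    root = base * axis**k
    return [round(root * r, 2) for r in ratios]

# A few steps along the axis: k = -1, 0, +1
for k in (-1, 0, 1):
    print(k, transpose(F0, triad, axis, k))
```

The same function models the 11/7 and 6/5 axes for the minor-chord families by swapping the `axis` argument.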

 

These sonic entities animate spatially, in flocks. Each flock's "species" carries a harmonic and spectral identity expressed, sometimes boldly and sometimes with nuance, through various states. Their state transitions form their behavior and are usually distinguished by registral spread, pitch N-limit, and expressive touches such as vibrato, tremolo, and gain shaping. These "species" are also related ecologically, in that one group can have an audible effect on another by nudging it into other states, including into or out of silence. The resulting "ecology" — its interactions, evolution, and the little dramas that emerge within it — is meant to feel "natural", as when listening to animals in a forest. That is the intended experience of this music. And though the Beethoven Op. 132 Molto Adagio — a lifelong favorite piece — was the initial inspiration and model, the relations and sonorities defined and extended for A Listening Field take on an evolving life of their own.

 

Improvisation and Performance

Since the field behaviors are real-time and event-oriented (sound events made by each entity are broadcast to all entities to enable potential interactions), they are porous to influences from outside. The piece leverages that architecture to let live improvising performers affect the ecology. Performers interact with the system through companion custom mobile apps. External systems can interpret musical, gestural, or visual expression, associate its salient phenomena with a shared event model, and forward those events into the ecology, where its behavior mechanisms take over.

 

For musical interaction, we use a custom app, run by each performer, that listens to their audio input and analyzes features such as pitch stability, spectral character, intensity, and vibrato. Pitch history is used to assess nearest matches to histograms of states’ tones. Unpitched expressions are used to present “turbulence” to the ecology. These features, however, do not directly trigger musical events; instead, they influence the evolving state of the system. Performers function as catalysts rather than controllers. Their actions are subject to the ecology's behaviors and their probabilities. So they invite change, but the system retains its own momentum and characteristic behaviors. In this way, agency is shared between performers and the sonic entities in the piece.
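A minimal sketch of the pitch-history matching idea — the state names, tone histograms, and overlap score here are hypothetical; the app's actual analysis and scoring are richer:

```python
# Sketch: match a performer's recent pitch history against per-state tone
# histograms (hypothetical data; pitches shown as MIDI-note bins).
from collections import Counter

STATE_TONES = {
    "humility":     Counter({57: 4, 62: 3, 65: 2}),   # a-/d-minor leaning
    "illumination": Counter({65: 4, 71: 3, 72: 2}),   # F Lydian leaning (B natural)
}

def nearest_state(pitch_history):
    """Score each state by histogram overlap with the performer's pitches."""
    hist = Counter(pitch_history)
    def overlap(state_hist):
        return sum(min(hist[p], state_hist[p]) for p in state_hist)
    return max(STATE_TONES, key=lambda s: overlap(STATE_TONES[s]))

print(nearest_state([65, 71, 71, 72]))   # leans toward "illumination"
```

In the piece, a match like this would not trigger a sound directly; it would bias the probabilities of the ecology's own state transitions.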

 

We also use the architecture with custom AR apps for image recognition, and with the accelerometer and AR body tracking for gestural interpretation and spatial expression. Our current interaction focus for A Listening Field is a specific set of paintings, associated with the planet Venus, by Teresa Parod. This explores recognizing their specific color palettes to key events related to luminance, contrast, hue, and saturation.

 

Behavioral Music Software

To realize “behavioral music”, I use a software architecture, data model, and codebase that expresses, implements, and performs these pieces in real time. It is an event-based architecture that takes its key design principle from autonomous agent design — not for goal-oriented problem solving but for emergent expression. The musical entities are spatial, having positions in 3D space, and are typically animate, affecting each other through behaviors that are proximity sensitive. These behaviors are designed to express larger-scale stochastic tendencies as well as more localized mini-dramas, such as emerge in biological communities and the emotions we experience within them.
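As a toy illustration of the proximity-sensitive agent idea — a hypothetical sketch, not the actual Unity/C# implementation, with all names and parameters invented for illustration:

```python
# Sketch: spatial agents that may stochastically nudge nearby agents
# into their own state, a (much simplified) behavioral-music step.
import math
import random

class Entity:
    def __init__(self, pos, state):
        self.pos = pos        # position in 3D space, e.g. (x, y, z)
        self.state = state    # current expressive state name

def step(entities, radius=2.0, p_nudge=0.5):
    """One update pass: each entity may nudge neighbors within `radius`."""
    for a in entities:
        for b in entities:
            if a is b:
                continue
            if math.dist(a.pos, b.pos) < radius and random.random() < p_nudge:
                b.state = a.state   # stochastic, proximity-gated transition

flock = [Entity((0, 0, 0), "illumination"), Entity((1, 0, 0), "silence")]
step(flock)   # the nearby "silence" entity may be drawn into "illumination"
```

The real system layers flocking motion, timing, and weighted scoring on top of this basic proximity test.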

 

The piece can run on mobile phones, laptops, and game systems. Its rendering and spatialization can be realized directly on mobile devices, or its events can be forwarded to an external device for rendering and multichannel spatialization or ambisonic encoding. This latter, distributed model is helpful when scaling the number of objects, their complexity, or the speaker dimensions requires more compute power, memory, or audio interface channels. In that configuration, the software spans two platforms linked via OSC: Unity runs the behavioral object models and their spatial composition, while Max/MSP with Ircam Spat5 handles tone synthesis and spatialization for different speaker geometries and encodings.

 

The work's Unity codebase is approximately 10K lines of code across ~70 C# classes. These classes implement the behavioral object models, their event propagation, timing, and OSC messaging. They run in the Unity scenes and object prefabs that form each piece. Execution is in real time and leverages Unity's 3D spatial foundation and physics engine, as well as third-party plugins such as bird flocking and weighted random number generation. Using a game platform for composition may sound like an odd fit, but it makes every aspect of the work natively spatial and potentially decoupled from the chosen sound-source and dispersion technologies. It also affords a consolidated implementation for release on personal mobile devices — apps for music that is alive. For multichannel environments, though, we typically run on a single laptop. This has worked well for 8 channels (Ircam Forum, NYC and Paris) and 16 channels (Elastic Arts, Chicago). For 30 channels (Jay Pritzker Pavilion, Millennium Park, Chicago) we used two computers communicating over OSC: one for the Unity object models and a second for Max/MSP rendering and spatialization. Binaural and Ambisonic encodings are also done this way. A mobile version of A Listening Field is in development for gestural (accelerometer) interaction and image perception for a set of paintings by Teresa Parod.

 

Data Model

The data/object model for this is very simple. There are three definitional relational tables: 

 

The State table defines the fields’ expressive states, characterized by vociferousness, state duration, and the next state to transition to on expiration.

The Sounds table includes the repertoire of synthesis parameters for each state’s pitches. State-to-Sound records have one-to-many (1:N) cardinality.

The Behavior table defines potential interactions of entities in the field. These cross-entity effects express the state transitions that entities in a particular state can cause in another entity, based on spatial and temporal proximity weights and scoring thresholds.
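A sketch of how the three tables might look in code — the field names and the scoring rule are assumptions for illustration; the real schema and scoring surely differ:

```python
# Sketch of the three-table model with assumed field names.
from dataclasses import dataclass

@dataclass
class State:                      # one expressive state of a field
    name: str
    vociferousness: float         # assumed 0..1 activity level
    duration_s: float             # how long the state lasts
    next_state: str               # transition on expiration

@dataclass
class Sound:                      # a State owns many Sounds (1:N)
    state: str
    ratio: float                  # just-intonation ratio over the group fundamental
    gain: float

@dataclass
class Behavior:                   # cross-entity effect between states
    source_state: str
    target_state: str
    resulting_state: str
    proximity_weight: float       # spatial/temporal closeness weighting
    threshold: float              # score needed to trigger the transition

def triggered(b: Behavior, proximity: float) -> bool:
    """A minimal scoring rule: weighted proximity must clear the threshold."""
    return b.proximity_weight * proximity >= b.threshold

b = Behavior("illumination", "silence", "humility",
             proximity_weight=0.8, threshold=0.5)
print(triggered(b, proximity=0.7))   # 0.56 >= 0.5, so the nudge fires
```

Even this toy version shows why so few behavior records suffice: each record describes a whole class of encounters, not an individual event.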

 

This simple model affords very expressive emergent behavior across a broad range of compositional interests — from nuanced stochastic to completely deterministic, with many emergent sweet spots in between. To give a sense of the ensemble scale, A Listening Field contains 11 flocking groups of 110 entities, across 19 states, expressing ~100 (non-unique) pitches, while interacting through ~10 “behavior” records. Not all states are directly affected by others’ behaviors, so behavior relationships are relatively few compared with the number of sounds associated with each of the several states.

The banner graphic above and the icons below are a detail from an early sketch of one of Teresa Parod's new paintings related to the planet Venus. A Listening Field is being developed as a multichannel and mobile-phone AR music companion to that planned exhibit.

Stereo Captures

These are stereo captures of A Listening Field taken from the iOS version, which uses Unity for spatialization and Chunity ChucK instruments for synthesis. Multichannel versions use Max/MSP for synthesis and Ircam Spat5 for spatialization. The stereo tracks below use different levels of 11/8 transposition of the F-major groups and 11/7 transposition of the a-minor and d-minor groups. These captures were taken at various points over a several-hour session; as with any community, it takes a while for subtle group dynamics to form, and how they do so is always evolving.
