Concepts + Theory

Explore the workings of the music engine in more depth.


Note-level Action

Much current music software works with recorded sound, or "samples." Sampled audio loops – short, digitally recorded pieces of music – can be stacked, tuned, and mixed together. With this sort of software, a user mixes and combines samples into new arrangements of prefab musical elements.

Tone Unit's music engine instead operates at a symbolic level. It analyzes and creates new patterns of notes (like a score or a chart) that are streamed to a software synthesizer during playback.
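As a rough sketch of what "operating at a symbolic level" means in practice (the NoteEvent structure and the synth interface below are illustrative assumptions, not Tone Unit's actual API), the engine's raw material is note data rather than audio:

```python
import time
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """A symbolic note, independent of any recorded sound."""
    beat: float      # position in the bar, in beats
    pitch: int       # MIDI note number (60 = middle C)
    duration: float  # length in beats
    velocity: int    # loudness, 0-127

# A one-bar pattern expressed as data the engine can analyze and transform.
pattern = [
    NoteEvent(0.0, 60, 0.5, 100),
    NoteEvent(1.0, 64, 0.5, 90),
    NoteEvent(2.0, 67, 0.5, 90),
    NoteEvent(3.0, 72, 1.0, 110),
]

def play(pattern, synth, bpm=120):
    """Stream note events to a software synthesizer in real time."""
    seconds_per_beat = 60.0 / bpm
    start = time.monotonic()
    for note in sorted(pattern, key=lambda n: n.beat):
        # Sleep until this note's metric position comes around.
        time.sleep(max(0.0, start + note.beat * seconds_per_beat - time.monotonic()))
        synth.note_on(note.pitch, note.velocity)  # `synth` stands in for any synth object
```

Because the pattern is data rather than recorded sound, it can be transposed, re-voiced, or recombined before it ever reaches the synthesizer.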

Chromat actively engages the rhythmic and harmonic patterns of particular melodies by searching for common "background" shapes from which the actual patterns might have evolved.

"Evolve" in this sense refers to the largely unconscious process by which musicians and composers create musical patterns as elaborations of more basic, archetypal musical figures. 


Musical Structures

The consistent meter of much groove-based music provides a regular grid on which nested grammatical relations map onto simple mathematical structures (such as cellular automata).

The algorithmic music engine behind Chromat exploits this nested mapping of musical grammar onto simple mathematical structures to analyze and generate musical material without resorting to explicit compositional strategies.
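As a hedged sketch of the kind of mapping meant here (the rule number and grid size are illustrative, not the engine's actual algorithm), an elementary cellular automaton can be read directly as note onsets on a 16-step metric grid:

```python
def step(cells, rule=90):
    """Advance one generation of an elementary cellular automaton."""
    out = []
    for i in range(len(cells)):
        # Each cell's next state is looked up from its left/self/right neighborhood.
        neighborhood = (cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % len(cells)]
        out.append((rule >> neighborhood) & 1)
    return out

row = [0] * 16
row[7] = 1  # seed a single onset mid-bar

for bar in range(4):
    print("".join("x" if c else "." for c in row))  # x = note onset, . = rest
    row = step(row)
```

Each printed row is one bar of rhythm; successive generations vary the pattern while keeping a lawful relation to the seed, with no explicit compositional rules involved.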

More details on the background of the software music engine can be found in this draft paper and here (though the algorithms described there are distinct from the approach used in Chromat and Replayer).


Real-Time Music Adaptability

In Chromat, the user controls the real-time musical output through a color interface. The relative weight of the RGB values in the user-selected color determines the weight of each respective musical input, varying the musical output as the color selection changes.
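A minimal sketch of that mapping, assuming a simple normalization (the function name and the three-input setup are hypothetical):

```python
def color_to_weights(r, g, b):
    """Map an RGB color (0-255 per channel) to normalized weights for three inputs."""
    total = r + g + b
    if total == 0:
        return (1 / 3, 1 / 3, 1 / 3)  # black: weight all three inputs equally
    return (r / total, g / total, b / total)

# An orange selection leans the blended output toward the first ("red") input.
print(color_to_weights(255, 128, 0))  # ~(0.67, 0.33, 0.00)
```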

Many potential applications exist for this kind of musical morphing. A producer or composer may want to extend and interbreed short musical grooves without laboriously grinding out variations by hand. Or a producer, musician, or artist may want to make game music or a sound installation, where the full range of musical results cannot be known in advance.


Multi-user Music and Social Networks

The music software engine is particularly suited to multi-user scenarios, because it automatically works out real-time relationships between musical elements that would otherwise require either special musical abilities (as in an improvising jazz ensemble) or entirely predictable material (as in a fixed musical score).

This approach points toward musical settings where more than one person introduces and steers musical content, much as occurs with speech during normal conversation. Music as a collective activity (similar to speech) becomes more possible when software can supervise musical well-formedness while allowing the participants to engage on a more intuitive level.
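One concrete (and deliberately simplified) sense of "supervising well-formedness": snapping each participant's raw input to a shared key and metric grid, so that untrained gestures still land on musically valid points. The scale and grid below are assumptions made for the sketch:

```python
MAJOR_PCS = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the shared key (C major)

def snap(pitch, time, grid=0.25):
    """Quantize a raw pitch/time pair onto the scale and the metric grid."""
    octave, pc = divmod(pitch, 12)
    nearest = min(MAJOR_PCS, key=lambda s: abs(s - pc))  # closest in-key pitch class
    return octave * 12 + nearest, round(time / grid) * grid

print(snap(61, 0.37))  # (60, 0.25): C# off the grid becomes C on the grid
```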

Within a multi-user domain, individual music streams can algorithmically compete, reorganize, and shape each other, based on real-time musical analysis and stochastic musical output. Collective user or avatar actions can create evolving musical textures, which are in turn fed back in as inputs to the next generation of musical morphing. Over time, a music library could evolve as musical elements recombine and meet with the approval or disapproval of the participants. The multi-user setting acts as a musical ecosystem.
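A sketch of one generation of such an ecosystem, under loudly labeled assumptions (the crossover scheme and the approval tallies are invented for illustration):

```python
import random

def recombine(a, b):
    """Splice two step patterns at a random crossover point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def next_generation(library, approvals, size=8):
    """Breed a new library, weighting parent selection by participant approval."""
    parents = random.choices(library, weights=approvals, k=2 * size)
    return [recombine(parents[2 * i], parents[2 * i + 1]) for i in range(size)]

library = [[random.randint(0, 1) for _ in range(16)] for _ in range(8)]  # 16-step patterns
approvals = [1, 1, 3, 1, 5, 1, 1, 2]  # e.g. tallied from participants' reactions
library = next_generation(library, approvals)  # well-liked material propagates
```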

Social groups themselves could form around shared aesthetics in the direct manipulation of musical style, adding new dimensions to social networking applications. Musical tags could broadcast a continuously adapting musical identity for each user as a real-time function of location, other users, or any other parameter.


Musical Mnemonics

Melody has a unique function as a mnemonic. A recognizable stream of music can be a memorable way to "tag" a particular gesture or user interaction. When a programmatic state changes – movement on a screen, for example – the stream can change along with it. Families of related music could be created for related gestures. The ability to hybridize musical tags raises the possibility of automatically generated tags for different combinations of users or elements.
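A toy sketch of hybridizable tags (the hashing scheme and the pitch set are assumptions made for the example): each identifier gets a repeatable motif, and a combination of identifiers gets an audibly related blend:

```python
import hashlib

PITCHES = [60, 62, 64, 67, 69, 72]  # a pentatonic pitch set keeps any tag consonant

def tag_motif(name, length=6):
    """Derive a repeatable short motif from an identifier."""
    digest = hashlib.sha256(name.encode()).digest()
    return [PITCHES[b % len(PITCHES)] for b in digest[:length]]

def hybridize(tag_a, tag_b):
    """Alternate pitches from two tags to tag their combination."""
    return [pair[i % 2] for i, pair in enumerate(zip(tag_a, tag_b))]

alice, bob = tag_motif("alice"), tag_motif("bob")
print(hybridize(alice, bob))  # a third motif, audibly related to both parents
```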


Data Auralization

Music streams could be mapped onto various time-series or alert-based data. The music would mutate in meaningful ways to convey changes in each monitored data stream. This could be useful in situations where visual attention is unavailable, impractical, or even dangerous (driving, walking). Automatically adaptable music could allow data to be monitored in multiple dimensions without the active attention that visual monitoring requires, or serve to draw attention to visual data on an as-needed basis.
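A minimal auralization sketch, assuming an invented threshold scheme (the ranges, pitches, and tempos below are illustrative): a monitored value maps to pitch, and crossing an alert threshold raises the tempo so the change is audible without looking:

```python
def auralize(value, low, high, alert=None):
    """Map a data value to (MIDI pitch, tempo) for a background music stream."""
    fraction = max(0.0, min(1.0, (value - low) / (high - low)))
    pitch = int(48 + fraction * 24)                              # spread the range over two octaves
    tempo = 140 if alert is not None and value >= alert else 90  # urgency as tempo
    return pitch, tempo

for reading in [0.2, 0.5, 0.95]:  # e.g. successive samples from a sensor stream
    print(auralize(reading, low=0.0, high=1.0, alert=0.9))
# (52, 90) -> (60, 90) -> (70, 140): the last reading trips the alert tempo
```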