Roger Dannenberg

Professor

Office: 7003

Email: rbd@cs.cmu.edu

My work is focused on various aspects of computer music, a field that poses many challenges for computer science. A central problem in computer music is expressive control, that is, the detailed control of timing, gesture, nuance, and tone quality that is essential to music. This problem has many facets, resulting in a variety of research directions. The Computer Music Project has developed new languages, development tools for real-time systems, synthesis techniques, and music understanding systems. This research is more than intrinsically interesting: it can shed light on related problems in real-time systems, multimedia, human-computer interaction, and artificial intelligence. Moreover, new possibilities of control and interaction in music are changing the very nature of music composition, performance, and aesthetics.

One research example is the development of new languages for expressing temporal behavior. One of these is Nyquist, a language that provides a single abstraction mechanism for the seemingly different notions of "note," "instrument," and "musical score." Nyquist gives composers an elegant, uniform notation that spans the range from low-level digital signal processing to high-level music composition. Nyquist is not intended for interactive real-time sound generation, but concepts from Nyquist have been incorporated into other systems, including one of our own named Aura.
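To make the unified-abstraction idea concrete, here is a minimal sketch in Python (not Nyquist itself, whose syntax is Lisp-like): if a "behavior" is simply a function from a start time to timed events, then a single note, an instrument-like transformation, and a score assembled from other behaviors all share one type and compose freely. All of the names below are invented for illustration and do not reflect Nyquist's actual primitives.

    # A behavior is a function: start time -> list of (time, pitch, duration)
    # events. One abstraction then covers notes, transformations, and scores.
    # All names here are hypothetical; this is not Nyquist's API.

    def note(pitch, dur):
        """A single note is a behavior."""
        def behavior(start):
            return [(start, pitch, dur)]
        return behavior

    def seq(*parts):
        """A 'score' is also a behavior: its parts play one after another."""
        def behavior(start):
            events, t = [], start
            for part in parts:
                evs = part(t)
                events.extend(evs)
                t = max(time + dur for (time, _, dur) in evs)
            return events
        return behavior

    def transpose(interval, part):
        """An 'instrument-like' transformation is a behavior, too."""
        def behavior(start):
            return [(t, p + interval, d) for (t, p, d) in part(start)]
        return behavior

    # Because everything shares one type, the pieces nest freely:
    melody = seq(note(60, 1.0), note(62, 0.5), note(64, 0.5))
    piece = seq(melody, transpose(12, melody))  # melody, then an octave up
    print(piece(0.0))

The point of such a design is compositionality: because a score and a note have the same type, any operator written for one applies equally to the other.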

Expressive control of musical tones is another topic of research. A violin is expressive because many parameters are under continuous control by the player, including bow pressure, finger and bow positions, and bow velocity, and these give rise to variations in the resulting sound. My colleagues and I have developed a synthesis technique, spectral interpolation, which allows us to synthesize tones whose spectra vary over time. Spectral interpolation has been used to accurately synthesize a variety of instruments. In the future, we will use this technique to give composers and performers more intuitive control over synthesized sound.
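As a rough illustration of the core idea, the Python sketch below synthesizes a tone additively while crossfading its harmonic amplitudes between two stored spectra under a time-varying control signal. The spectra, the control curve, and all parameter values are made-up placeholders standing in for measured instrument data, not the published technique itself.

    import numpy as np

    SR = 44100                       # sample rate in Hz
    F0 = 220.0                       # fundamental frequency in Hz
    DUR = 2.0                        # duration in seconds
    t = np.arange(int(SR * DUR)) / SR

    # Two hypothetical harmonic spectra (amplitudes of harmonics 1..5):
    soft = np.array([1.0, 0.30, 0.10, 0.05, 0.02])
    bright = np.array([1.0, 0.80, 0.60, 0.50, 0.40])

    # A control parameter in [0, 1], standing in for something like bow
    # pressure: it swells up and back down over the course of the note.
    control = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / DUR))

    # Additive synthesis with per-harmonic amplitudes interpolated
    # between the two spectra at every sample:
    signal = np.zeros_like(t)
    for k, (a_soft, a_bright) in enumerate(zip(soft, bright), start=1):
        amp = (1.0 - control) * a_soft + control * a_bright
        signal += amp * np.sin(2.0 * np.pi * k * F0 * t)

    signal /= np.abs(signal).max()   # normalize to avoid clipping

The appeal of the approach is economy: a handful of stored spectra plus a small number of continuous controls can stand in for a much larger space of instrument timbres.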

The main focus of my current research is to develop artificial computer musicians that can perform live with humans, especially in steady-beat or popular music. This task is difficult because even small synchronization errors are obvious to listeners, performers often improvise, the global structure and other details of a performance are not always known in advance, and even the choice of notes and rhythms may be left to the performer.
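To give a flavor of the synchronization problem, here is a toy Python sketch of one generic correction scheme: the artificial musician keeps running estimates of beat period and phase, and nudges both toward each beat it hears from the human, much like a phase-locked loop. This illustrates the problem rather than the actual system described above; all names and constants are invented.

    def track_beats(onsets, period=0.5, alpha=0.3, beta=0.1):
        """Predict upcoming beat times from heard onset times (seconds).

        period: initial guess at seconds per beat.
        alpha:  how strongly a timing error corrects the predicted phase.
        beta:   how strongly a timing error corrects the tempo estimate.
        """
        predicted = onsets[0] + period     # first prediction from first beat
        predictions = [predicted]
        for onset in onsets[1:]:
            error = onset - predicted      # late onset gives a positive error
            period += beta * error         # adjust the tempo estimate
            predicted += alpha * error + period  # re-phase, then step a beat
            predictions.append(predicted)
        return predictions

    # A performer playing a steady 0.55 s beat against our 0.5 s guess:
    human = [0.0, 0.55, 1.10, 1.65, 2.20]
    print(track_beats(human, period=0.5))

Even this toy version shows why the task is hard: the predictions converge only gradually, and any residual phase error is immediately audible against a steady beat.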