Vol. 28 Issue 4 Reviews
The Second International Symposium on Computer Music Modeling and Retrieval (CMMR 2004)

Aalborg University Esbjerg, Esbjerg, Denmark, 26-29 May 2004.

Reviewed by Marcus Pearce and David Meredith
London, UK

Introduction
The Second International Symposium on Computer Music Modeling and Retrieval (CMMR 2004) was held at Aalborg University Esbjerg on the southwest coast of Denmark on May 26–29, 2004. The symposium was chaired in an effective but informal style by Uffe Kock Wiil, with Stefania Serafin responsible for the proceedings and Richard Kronland-Martinet chairing the program committee. Mr. Wiil and Ms. Serafin are both at Aalborg University Esbjerg, while Mr. Kronland-Martinet is at the Centre National de la Recherche Scientifique, Laboratoire de Mécanique et d'Acoustique (CNRS-LMA), Marseille, France.

The goal of this symposium series is to provide an opportunity to meet and interact with peers concerned with the cross-influence of the technological and the creative in computer music. The interdisciplinary nature of the field was reflected in the organization of the conference into session topics in areas such as the synthesis of instrument timbres, music analysis, music information retrieval, and computer music composition. This diversity was also reflected in the opening and closing keynote speeches given, respectively, by Cort Lippe (State University of New York at Buffalo), who spoke about tools for real-time interaction of traditional and computer instrumentalists, and Marc Leman, who summarized some of the research carried out at the Institute for Psychoacoustics and Electronic Music at the University of Ghent in Belgium.

Over three days, 25 papers were presented, and the 20-page limit for each means that the conference proceedings (to be published in the Springer Lecture Notes in Computer Science series) are generally complete in their presentation of background and discussion of results. To complement the technical presentations in the conference sessions, CMMR 2004 also included a panel session on computer music composition and three concerts of computer music.

Music Information Retrieval
One issue that recurred during the conference was how to define a “ground truth” for the evaluation of proposed measures of perceived musical features. Many of the presentations at CMMR 2004 concerned the development of systems and techniques for Music Information Retrieval (MIR), and in this context, Stephan Baumann from the German Research Centre for Artificial Intelligence illustrated one method of obtaining a ground truth. The presentation concerned a system for musical artist recommendation based on clustering, using Self-Organizing Maps, of artist reviews obtained from the Amazon Web site. The improvement in recommendation quality obtained with a modified scheme for weighting terms extracted from the reviews was demonstrated by comparing the recommendations of the presented system with those of another system based on user playlists.
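In outline, text-based artist recommendation of this kind can be sketched as follows. The sketch substitutes plain TF-IDF term weighting and cosine similarity for Mr. Baumann's modified weighting scheme and Self-Organizing Map clustering, and the review texts are invented, so it illustrates only the general approach rather than the presented system.

```python
# A minimal sketch of text-based artist recommendation, assuming one
# (concatenated) review document per artist. Plain TF-IDF weighting and
# cosine similarity stand in here for the modified term-weighting scheme
# and Self-Organizing Map clustering used in the presented system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical review texts (placeholders, not real data).
reviews = {
    "Artist A": "warm analog synths and downtempo grooves",
    "Artist B": "distorted guitars and aggressive drumming",
    "Artist C": "ambient textures and slowly evolving synth pads",
}

artists = list(reviews)
tfidf = TfidfVectorizer(stop_words="english")
weights = tfidf.fit_transform(reviews[a] for a in artists)

def recommend(artist, k=2):
    """Return the k artists whose reviews are most similar to `artist`'s."""
    i = artists.index(artist)
    sims = cosine_similarity(weights[i], weights).ravel()
    ranked = sims.argsort()[::-1]
    return [artists[j] for j in ranked if j != i][:k]

print(recommend("Artist A"))
```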

Music Analysis
Another topic represented at the conference was music analysis, where the ground truth is often indicated in the score. For example, Elaine Chew from the University of Southern California presented work on separating voices in symbolic representations of polyphonic music. The approach described depends on assumptions derived from David Huron’s perceptual principles for voice leading. First, a piece is segmented into "contigs" at boundary points where the number of voices changes. In a second stage, starting from contigs with the maximal number of voices present, fragments in adjacent contigs are ordered by pitch height and reconnected according to pitch proximity. The implemented algorithm was tested on contrapuntal music composed by J. S. Bach and yielded promising results.
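The segmentation and reconnection steps can be illustrated in simplified form. The sketch below is not Ms. Chew's exact algorithm: it merely finds the points where the number of sounding notes changes and pairs the voice fragments (lists of notes) of adjacent contigs in order of pitch height, following the pitch-proximity assumption.

```python
# A simplified sketch of the contig idea, not the published algorithm.
from collections import namedtuple

Note = namedtuple("Note", "onset offset pitch")  # times in beats, pitch as MIDI number

def sounding(notes, t):
    """Notes sounding at time t."""
    return [n for n in notes if n.onset <= t < n.offset]

def contig_boundaries(notes):
    """Times at which the number of simultaneously sounding notes changes."""
    times = sorted({n.onset for n in notes} | {n.offset for n in notes})
    counts = [len(sounding(notes, t)) for t in times]
    return [t for t, prev, cur in zip(times[1:], counts, counts[1:]) if cur != prev]

def connect(left_fragments, right_fragments):
    """Pair voice fragments of adjacent contigs by pitch height
    (highest with highest, and so on), following pitch proximity."""
    left = sorted(left_fragments, key=lambda frag: -frag[-1].pitch)
    right = sorted(right_fragments, key=lambda frag: -frag[0].pitch)
    return list(zip(left, right))
```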

Other presentations were concerned with the analysis of performances rather than compositions. Maarten Grachten from Pompeu Fabra University in Barcelona, for example, gave an interesting presentation on the automatic annotation of live jazz saxophone performances. This approach uses edit operations such as insertion, deletion, consolidation, fragmentation, and transformation, with a cost function parameterized to control the influence of pitch, duration, and onset for each operation. An evolutionary approach was adopted to optimize the parameter values on test sets of hand-annotated performances. While the optimized parameters significantly improved annotation performance over random settings, the solutions obtained did not converge to a single optimum across disjoint test sets.
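The underlying alignment can be illustrated with a standard dynamic-programming edit distance. The sketch below includes only insertion, deletion, and transformation (substitution); the consolidation and fragmentation operations, which align one note to several, are omitted. The weights are the kind of parameters the evolutionary search would tune, and their default values here are arbitrary.

```python
# A minimal parameterized edit distance between a score and a performance,
# each given as a sequence of (pitch, duration, onset) triples.

def transform_cost(a, b, w_pitch=1.0, w_dur=0.5, w_onset=0.5):
    """Cost of matching score note a to performed note b."""
    return (w_pitch * abs(a[0] - b[0])
            + w_dur * abs(a[1] - b[1])
            + w_onset * abs(a[2] - b[2]))

def edit_distance(score, performance, ins_cost=1.0, del_cost=1.0, **weights):
    n, m = len(score), len(performance)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * del_cost
    for j in range(1, m + 1):
        d[0][j] = j * ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + del_cost,        # score note not played
                d[i][j - 1] + ins_cost,        # extra performed note
                d[i - 1][j - 1] + transform_cost(
                    score[i - 1], performance[j - 1], **weights),
            )
    return d[n][m]
```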

Sound Synthesis and Timbre
As well as research concerning symbolic representations of music, many of the presentations at CMMR 2004 were concerned with the processing and manipulation of digital audio. One strand of this research concerned sound synthesis and the modeling of instrument timbre. Julien Bensa, from the Université Pierre et Marie Curie in Paris, presented a cognitive approach to modeling piano timbre. The goal of this research was to associate objective parameters of acoustic synthesis models of piano timbre with subjective experiential descriptions obtained in a free association task carried out with pianists. In another presentation, Philippe Guillemain, from CNRS-LMA, analyzed the relationship between two control parameters of a physical model of the clarinet (blowing pressure and reed aperture) and four objective timbre descriptors of the generated audio signal. Both of these presentations were concerned with the association of intuitively meaningful domains (e.g., natural language descriptions of sound and physical properties of instruments) with low-level audio descriptors. In his keynote speech, Marc Leman also discussed the problem of associating subjectively perceived musical qualities with objective acoustic descriptors of audio signals in the context of MIR.
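By way of illustration, one objective descriptor commonly used in timbre studies of this kind is the spectral centroid, often associated with perceived "brightness"; whether it was among the descriptors used in these particular studies is not indicated here. A minimal sketch for a single frame of audio:

```python
# Spectral centroid of one audio frame, a standard low-level timbre
# descriptor (given here only as a generic example of such a descriptor).
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Spectral centroid (in Hz) of one windowed audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
```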

Audio-to-Score Transcription and Feature Extraction
One topic represented strongly at CMMR 2004 was audio-to-score transcription and, in particular, the extraction of features for MIR. Indeed, about a third of the papers presented were concerned in one way or another with the extraction of perceived structure from musical audio signals. Rui Paiva, from the Universidade de Coimbra, Portugal, presented a method for extracting the melody from a polyphonic musical audio signal based on a model of the cochlea. Meinard Müller described a system developed at the University of Bonn, Germany, which automatically synchronizes a score or MIDI file of a work with an audio file. The system works by matching the source MIDI file with another MIDI file extracted from the waveform data using novelty curves for note onset detection and multi-rate filter banks in combination with note templates for pitch extraction. Kristoffer Jensen, from the University of Copenhagen, Denmark, also used a novelty measure, in this case extracted from a self-similarity matrix calculated from a beat histogram ("rhythmogram"), to automatically identify segment boundaries in rock and techno music.
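A novelty measure of this kind is commonly computed by sliding a checkerboard kernel along the diagonal of the self-similarity matrix; the sketch below illustrates that general idea. A generic frame-wise feature matrix stands in for the rhythmogram, which is not computed here, and the kernel size is an arbitrary choice.

```python
# A minimal sketch of novelty-based segmentation from a self-similarity
# matrix; peaks of the novelty curve suggest segment boundaries.
import numpy as np

def self_similarity(features):
    """Cosine self-similarity matrix of a (frames x dims) feature matrix."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)
    return unit @ unit.T

def novelty_curve(ssm, kernel_size=16):
    """Correlate a checkerboard kernel along the diagonal of the
    self-similarity matrix (kernel_size should be even)."""
    half = kernel_size // 2
    sign = np.ones((kernel_size, kernel_size))
    sign[:half, half:] = -1   # penalize similarity across the boundary
    sign[half:, :half] = -1
    novelty = np.zeros(len(ssm))
    for i in range(half, len(ssm) - half):
        window = ssm[i - half:i + half, i - half:i + half]
        novelty[i] = np.sum(window * sign)
    return novelty
```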

A Dichotomy between "Composers" and "Scientists"
Another theme of the conference was the perceived cultural and methodological dichotomy between "composers" and "scientists." The "composers" seemed to be far more sensitive to this dichotomy than the "scientists." One of the 90-minute sessions was devoted to an open panel discussion of this issue, chaired by the composer Lars Graugaard (Aalborg University Esbjerg). Cort Lippe opened the discussion by proposing that today's electroacoustic composers are parasites who derive their techniques from the results of research in fields such as signal processing. Harvey Thornburg (Center for Computer Research in Music and Acoustics, Stanford University, USA) responded to this by claiming that he had on several occasions been motivated by the creative needs of composers to develop new signal processing techniques. Two of the panelists, Juraj Kojs (a composer from the University of Virginia, USA) and Stefania Serafin, insisted that their collaborative relationship was definitely symbiotic rather than parasitic.

David Meredith (City University, London) then asked the panel whether they ever found that their creativity was paralyzed rather than enhanced by the almost limitless power of the computer-based tools available to them, which essentially afford an infinite range of possibilities. Leonello Tarabella (Consiglio Nazionale delle Ricerche, Pisa) suggested that someone who is bewildered by the technology should probably not be engaged in computer-based composition! Lars Graugaard explained that he avoided this problem by always being motivated in his compositions by an initial idea of the sound structure he desired, which he would then attempt to realize using technology.

Conclusions
The conference attendance of CMMR 2004 was relatively small, numbering not many more than the conference organizers and those presenting or performing. This prompts one to ask whether there is a need for another conference on the computational modeling of music. Conferences such as DAFX (International Conference on Digital Audio Effects) cover the signal processing end of the research field, ISMIR (International Symposium on Music Information Retrieval) covers engineering approaches and practical applications such as MIR, ICMC (International Computer Music Conference) covers computer music, ICMPC (International Conference on Music Perception and Cognition) covers cognitive modeling of music, and ICMAI (International Conference on Music and Artificial Intelligence) covers the abstract modeling of music.

Perhaps the only criticism of CMMR 2004 in terms of its scope is that it tended strongly toward the practical rather than the theoretical end of the spectrum. However, possibly because of its small size, the atmosphere at CMMR 2004 was particularly relaxed and friendly, and the interdisciplinary research presented prompted interesting discussion from a range of different viewpoints. If the bias toward practical applications could be redressed by the inclusion of more theoretical modeling and cognitive scientific research, CMMR would fill a niche as an intimate forum for the presentation of interdisciplinary research in modeling music.