Vol. 29 Issue 2 Reviews
International Computer Music Conference 2004: Papers
University of Miami, Coral Gables, Florida, USA; 1-6 November 2004. Proceedings of the International Computer Music Conference (ICMC 2004), 1-6 November, University of Miami; Tzanetakis, G., Essl, G., and Leider, C. (Coordinators); available from The International Computer Music Association, 2040 Polk Street, Suite 330, San Francisco, California 94019, USA; Web www.computermusic.org/.
The 30th International Computer Music Conference (ICMC) took place at the University of Miami’s Phillip and Patricia Frost School of Music in Coral Gables, Florida, “The City Beautiful,” an apt counterpoint to the locations that many delegates flew in from at this time of the year. The event coincided with the US elections, and that added to the mix.
Paper Chairs George Tzanetakis and Georg Essl note in the introduction to the Proceedings: “ICMC 2004 received a total of 184 submissions… All acceptance decisions were made based on the scores provided by the reviewers. 119 submissions were accepted as papers, 15 as posters, 10 as demos, 13 as studio reports, and three as round table discussions” (p. viii).
After ICMC 2000, the move to reviewing full papers rather than abstracts might have reduced the number of submissions. However, the quality of material has improved as a result. The smaller number of submissions might also be due to new, competing events, such as NIME (New Interfaces for Musical Expression) and ISMIR (the International Symposium on Music Information Retrieval). Given this, perhaps future ICMCs need a sharper focus?
The main venues at the university were well suited to hosting the event. A recent departure from protocol was that paper sessions were run without chairpersons, causing some hiccups with timing. My review is based on the many sessions I was able to attend, but is largely focused on the published proceedings—significant bedtime reading at 749 pages.
The upside of any ICMC is the wonderful compression of thought, contact with recent developments in academic computer music, being able to dip into unfamiliar areas of research, and meeting with a like-minded community. A limitation is that the event is entwined with academic legitimacy, so it feels introverted and dislocated from wider music communities. Further, as technology has improved, the balance of “computer” and “music” may have tilted in favor of the computer at the expense of music, or at least a physical and kinetic understanding of what music is. There may simply be two communities here, with the ritualized division of concerts and technical papers hindering a synthesis between engineers and musicians/composers.
The conference theme was “Expanded Horizons,” the idea being to focus on “music and research that addresses new musical interfaces, aesthetics, and ways in which our field can grow by bringing its music and technology to new audiences.” I am sure many members of our community ponder these issues. However, the responses presented, on balance, tended to be “plus ça change, plus c’est la même chose.”
Bringing its music and technology to new audiences? Let me digress. This might involve more papers touching on some of the drivers that make music work socially apart from intellect: sex, religion, or tribalism, for example. It might also involve starting from where one’s audience is, a scary thought that arises when flicking through Jessica Williams’ “50 Facts that Should Change the World” (The Disinformation Company, 2004). She notes that 26 million people voted in the 2001 UK general election, but more than 32 million votes were cast in the first series of “Pop Idol,” and more than seventy percent of the world’s population has never heard a dial tone. Apart from this, the non-academic software and composition/production community in popular music has developed a significant audience, and there are few bridges between the ICMA and this community.
Without seriously exploring some of these bridges at the conference, I began to wonder if we were all playing a form of Hermann Hesse’s “Glass Bead Game,” cultivating our own Joseph Knechts. Recall that even protégé Knecht eventually saw the limitation of the separation of Castalia from the outside world, and the need to bridge it to make a working whole. In a more worrying moment, Jonathan Swift’s satire Gulliver’s Travels (1726) also sprang to mind. On the island of Laputa, music was made through abstract equations that resulted in three-hour concerts without breaks that stunned Gulliver with noise; and the division between science and humanity in the Grand Academy of Lagado allowed scientists to invent impractical devices that were never finished, while starvation in the general population was widespread.
Enough said. Given the large number of offerings at ICMC 2004, I will attempt to highlight general areas of discussion based on the division of papers in the proceedings, rather than mention every paper individually. Categorizing material is always difficult, and usually results in some quibbles. However, the allocations at this event were generally well considered.
The History-Aesthetics area was covered in a small session that included only four papers. Material ranged from documenting and preserving electroacoustic music by Mexican and Latin American composers (Manuel Rocha Iturbide and Ricardo Dal Farra), to individual projects on electronic opera (Momilani Ramstrum) and research into trends in the synthesis of the singing voice (Anastasia Georgaki). Perhaps this area of the conference needs to be expanded in future ICMCs to promote wider discussion and reflection.
Computer Music Software and Practice was dealt with in four parts, although half of the sessions included only two papers. Some work discussed updates for existing software: Gem for PD (Johannes M. Zmoelnig), and updates for the Java music specification language (Nick Didkovsky). Additional contributions included “Copy-synth-patch: a tool for visual instrument design” by Mikael Laurson and Vesa Norilo, and “Comp-i: a system for visual exploration and editing of MIDI datasets” by Reiko Miyazaki, Issei Fujishiro, and Rumi Hiraga. Also of interest were “Sound and musical representation: the acousmographe software” by Yann Geslin and Adrien Lefevre, “AROOOGA: An audio search engine for the world wide web” by Ian Knopke, and “An internet browser plug-in for real-time sound synthesis using pure data” by Marcos Alonso, Guenter Geiger, and Sergi Jorda.
Perception/Psychoacoustics began with Stephen Horenstein’s “Understanding supersaturation: a musical phenomenon affecting perceived time,” and Andrew Horner and James Beauchamp’s “A search for best error metrics to predict discrimination of original and spectrally altered musical instrument sounds.” Erling Tind and Kristoffer Jensen offered “Phase models to control roughness in additive synthesis,” and Satoshi Oishi and Shuji Hashimoto presented “Pitch perception of time-varying notched noise.” A team from the Center for Computer Research in Music and Acoustics (CCRMA, Stanford) put forward two papers: “Simulation of networked ensemble performance with varying time delays: characterization of ensemble accuracy,” and “Loudness-based display and analysis applied to artificial reverberation.”
We could not have an ICMC without a session on Algorithmic Composition. Cellular automata applications were a main focus, including Dale Millen’s “An interactive cellular automata music application in Cocoa,” Peter Beyls’ “Cellular automata mapping procedures,” and “Cellular automata in MIDI-based computer music” by Dave Burraston, Ernest Edmonds, Dan Livingstone, and Eduardo Miranda. The emerging area of intelligent agents was examined in Mikhail Malt’s “Khorwa, a musical experience with autonomous agents,” and “An object-oriented model of the Xenakis sieve for algorithmic pitch, rhythm, and parameter generation” by Christopher Ariza rounded out the session.
A small session on Latency included four papers, providing an excellent overview of recent work. A smaller session on Sonification included some curiosities, updates, and gems that were well worth attending.
The first Interactive Systems session included six papers, and ran well over time. Much of the work echoed the sentiments of the conference theme at many levels. Robert Gluck’s “Sounds of a community: an interactive sound installation” provided the opening, followed by diverse material ranging from emergent behavior, machine cognition, and proximal interaction to an interactive installation for children with autism. A second Interactive Systems session was smaller, but equally diverse. Topics included “Interactive performance with wireless PDAs” by Graham McAllister, Michael Alcorn, and Philip Strain, and “Teabox: a sensor data interface system” by Timothy Place and Jesse Allison.
Computer Music Languages and Environments included only four papers, two concentrating on graphic interfaces for audio. Ge Wang and Perry Cook updated their recent work from Singapore (ICMC 2003) in “The AUDICLE: a context-sensitive, on-the-fly audio programming environ/mentality,” and Pedro Kroger offered “Csoundxml: a meta-language in XML for sound synthesis.”
Education might have attracted a larger range of work, given the conference theme. Many papers here concentrated on enhancing specialist knowledge within the academy, or on technical extensions. Topics included “MATCONCAT: an application for exploring concatenative sound synthesis using MATLAB” (Bob Sturm), “Signals and systems using MATLAB: an effective application for exploring and teaching media signal processing” (Bob Sturm and Jerry Gibson), and “Understanding the mathematics of the frequency transform: an interactive tutorial for computer musicians” (R. Gerard Pietrusko and Richard Boulanger). Broadening the session were “Learning to play the flute with an anthropomorphic robot” (Jorge Solis, Massimo Bergamasco, Chida Keisuke, Isoda Shuzo, and Atsuo Takanishi), and “Pocket gamelan: a J2ME environment for just intonation” (Greg Schiemer, Kenny Sabir, and Mark Havryliv).
The Diffusion session was of interest because of the range of offerings as well as some of the pragmatic outcomes. This included “WONDER—a software interface for the application of wave field synthesis in electronic music and interactive sound installations” by Marije Baalman and Daniel Plewe; “Implementation of a highly diffusing 2-D digital waveguide mesh with a quadratic residue diffuser” by Kyogu Lee and Julius Smith; “M2 diffusion—the live diffusion of sound in space” by Adrian Moore, Dave Moore, and James Mooney; “Composition for ubiquitous responsive sound environments” by Dan Livingstone and Eduardo Miranda; and “Three approaches to the dynamic multi-channel spatialization of stereo signals” by Christopher Keyes.
Synthesis was the largest individual session with nine papers: “Spectral tuning” by Eric Lyon; “Filter design using second-order peaking and shelving sections” by Jonathan Abel and David Berners; “A physically informed model of a musical toy: the singing tube” by Stefania Serafin and Juraj Kojs; “A comparison between local search and genetic algorithm methods for wavetable matching” by Simon Wun, Andrew Horner, and Lydia Ayers; “Synthesizing timbre tremolos and flutter tonguing on wind instruments” by Lydia Ayers; “Timbre representation of a single musical instrument” by Hugo de Paula, Mauricio Loureiro, and Hani Yehia; “Loudness scaling in a digital synthesis library” by Jesse Guessford, Hans Kaper, and Sever Tipei; “ATS: a system for sound analysis transformation and synthesis based on a sinusoidal plus critical-band noise model and psychoacoustics” by Juan Pampin; and “Low-dimensional parameter mapping using spectral envelopes” by Miller Puckette.
For the technically inclined, the session on Alignment, Segmentation, and Decomposition covered the following topics: “A hierarchical approach to onset detection” (Emir Kapanci and Avi Pfeffer); “Robust polyphonic MIDI score following with hidden Markov models” (Diemo Schwarz, Nicola Orio, and Norbert Schnell); “Signal decomposition by means of classification of spectral peaks” (Axel Roebel, Miroslav Zivanovic, and Xavier Rodet); “Improving score to audio alignment: percussion alignment and precise onset estimation” (Xavier Rodet, Joseph Escribe, and Sebastien Durigon); “Audio segmentation by singular value clustering” (Shlomo Dubnov and Ted Apel); and “Real-time temporal segmentation of note objects in music signals” (Paul Brossier, Juan Pablo Bello, and Mark D. Plumbley).
Statistical Models and Parameter Spaces included Ali Taylan Cemgil’s “Polyphonic pitch identification and Bayesian inference,” Shlomo Dubnov’s “Spectral anticipations,” David Gerhard and Daryl Hepting’s “Cross-modal parametric composition,” and Christopher Raphael’s “Aligning polyphonic musical scores with audio using a latent tempo process.” A similarly small session on Symbolic Processing covered “Learning expressive performance rules in jazz” (Rafael Ramirez and Amaury Hazan), “Intelligent scripting in ENP using PWconstraints” (Mika Kuuskankare and Mikael Laurson), and “Work toward a model of parallelism” (Ralph G. W. Smeenk).
I expected the session on Gestural and Haptic Interfaces to be more extensive, given the theme of the conference. Although containing only five papers, it turned out to be one of the highlights of the conference, including the ICMA/Journal of New Music Research 2004 prize-winning paper on “Digitizing North Indian performance” by Ajay Kapur, Philip Davidson, Perry Cook, Peter Driessen, and Andrew Schloss. Additional presentations included: “Re-coupling: the uBlotar synthesis instrument and the sHowl speaker-interface controller” by Van Stiefel, Dan Trueman, and Perry Cook; “The Shamanic object as a model for new multimedia computer performance interfaces” by Matthew Burtner; “A new Beatbug: revisions, simplifications, and new directions” by Roberto Aimi and Diana Young; and “Recognition, analysis and performance with expressive conducting gestures” by Paul Kolesnik and Marcelo Wanderley.
The rise of NIME does not seem to have dampened enthusiasm for the Music Information Retrieval area, with many papers coming from Asia. The substantial session included: “A new efficient approach to query by humming” by Leon Fu and Xiangyang Xue; “Musical genre classification by instrumental features” by Jiajun Zhu, Xiangyang Xue, and Hong Lu; “Annotated music for retrieval, reproduction, and sharing” by Keiji Hirata, Shu Matsuda, Katsuhiko Kaji, and Katashi Nagao; “Instrument recognition beyond separate notes—indexing continuous recordings” by Arie Livshin and Xavier Rodet; “Towards timbre recognition of percussive sounds” by Adam Tindale, Ajay Kapur, and Ichiro Fujinaga; and “Feature extraction and database design for music software” by Stephen Travis Pope, Frode Holm, and Alexandre Kouznetsov.
Compositional Systems was a small session, and some of the papers could easily have slotted into other sessions. They included: “SDIF sound description data representation and manipulation in computer assisted composition” by Jean Bresson and Carlos Agon; “The transformation engine” by Bruno Degazio; “Evolving decentralized musical instruments using genetic algorithms” by Assaf K. Talmudi; and “A fuzzy logic model for compositional approaches to audio-visual media” by Rodrigo Cadiz.
The Composition Practice session was a mixed bag, ranging from systems that are not currently very practical to many that are used professionally. The papers included: “Andante: Composition and performance with mobile musical agents” by Leo Kazuhiro Ueda and Fabio Kon; “Interactive paths through tree music” by Judith Shatin; “Musical time in visual space” by Brian Evans; “Rock music: Granular and stochastic synthesis based on the Matanuska Glacier” by Mara Helmuth and Teresa Davis; “‘Voice Networks’—Exploring the human voice as a creative medium for musical collaboration” by Gil Weinberg; “The Technophobe and the Madman: An Internet2 distributed musical” by Robert Rowe and Neil Rolnick; and “Re-realizing Philippe Boesmans’ Daydreams: A performative approach to live electro-acoustic music” by Robert Esler.
Music Analysis seems to have attracted more than the usual share of attention this year, divided into a number of sessions. Applied to tonal music, the opening session included: “Predicting reinforcement of pitch sequences via LSTM and TD” by Judy Franklin; “Automatic rag classification using spectrally derived tone profiles” by Parag Chordia; “Automatic generation of grouping structure based on the GTTM” by Masatoshi Hamanaka, Keiji Hirata, and Satoshi Tojo; “Stochastic estimation of BSF” by Tuukka Ilomaki and Yki Kortesniemi; and “Statistical description models for melody analysis and characterization” by Pedro J. Ponce de Leon and Jose M. Inesta. A second session included topics ranging from “Adaptive high-level classification of vocal gestures within a networked sound instrument” (Jason Freeman, C. Ramakrishnan, Kristjan Varnik, Max Neuhaus, Phil Burk, and David Birchfield) and “Mapping spectral frames to pitch with the support vector machine” (Andrew Schmeder) to “Spectromorphology hits Hollywood: Black Hawk Down—A case study” (Paul Rudy).
A third Music Analysis session included seven papers, one of the largest sets. The range of work makes it worth mentioning each paper here: “Optimal filtering of an instrument sound in a mixed recording given approximate pitch prior” by Adiel Ben Shalom and Shlomo Dubnov; “Real-time pitched/unpitched separation of monophonic timbre components” by Joseph Sarlo; “Automatic discovery of right hand fingering in guitar accompaniment” by Ernesto Trajano, Marcio Dahia, Hugo Santana, and Geber Ramalho; “Inductive logic programming and music” by Rafael Ramirez; “Path difference learning for guitar fingering problem” by Aleksander Radisavljevic and Peter Driessen; “Spectral characteristics of the musical iced tea can” by Bob Sturm and Stephen Travis Pope; and “Contour hierarchies, tied parameters, sound models and music” by Lonce Wyse.
There were four Demo sessions, many well supported with papers in the proceedings. Most were technically focused, demonstrating audio or music applications under development. Topics ranged from SuperCollider (Michael Leahy) and an agent-based distributed interactive composition environment (Michael Spicer) to the Findsounds system (Stephen Rice and Stephen Bailey). The state of Linux as a mature digital audio workstation (Ivica Bukvic), synthesis by interactive learning (Michael Clarke, Ashley Watkins, Mathew Adkins, and Mark Bokowiec), sound analysis and processing with Audiosculpt 2 (Niels Bogaards, Axel Roebel, and Xavier Rodet), ATS user interfaces (Juan Pampin, Oscar Pablo Di Liscia, William ‘Pete’ Moss, and Alex Norman), and loop-based composition with RTcmix and HULA (John Gibson) were also covered.
Posters are difficult to review, as in many cases the texts in the Proceedings do not do justice to the onsite explanations. There was no clear pattern or dominant subject, with topics ranging from music education, algorithmic composition, and interaction to technical subjects in physical modeling, cellular automata, timbral theory, visualization, automatic genre classification, and synthetic databases in melodic retrieval testing. The quality of material varied, and a few of these might have been better suited to paper presentations.
The studio reports included submissions from: Danish Institute of Electronic Music—DIEM (Denmark), Interdisciplinary Centre for Scientific Research in Music at the University of Leeds (UK), Centre for Digital Music, Queen Mary University of London (UK), Georgia Tech (USA), Peabody Conservatory of Music (USA), School of Music at Ball State University (USA), Music Technology, Faculty of Music at McGill University (Canada), Center for Computer Research in Music and Acoustics—CCRMA (USA), University of Cincinnati (USA), Aalborg University (Denmark), Institut National de l’Audiovisuel-Groupe de Recherches Musicales—INA-GRM (France), and the Audiovisual Institute of the Pompeu Fabra University of Barcelona (Spain). There were many refreshing ideas and approaches presented, exploring different strengths and adaptations to local conditions.
Apart from my earlier comments on the energetic although introverted nature of our community, the quality of work and the byways/tangents within it reflect a collective imagination that one could happily get lost and revel in. I just hope this is not a reflection of Dostoevsky’s quip in The House of the Dead (1860) that, “Man is a creature that can get used to anything, and I think that is the best definition of him.”
It is an enormous undertaking to hold an event such as the ICMC, for which organizers give of their time well above and beyond the call of duty. On behalf of all those who attended the event, I wish to thank and congratulate Conference Chair Colby Leider and the organizing committee who contributed to mounting this 30th event. Your efforts and professionalism made this a pleasure to attend.