Vol. 24 Issue 4 Reviews
The 108th Audio Engineering Society Convention
Palais des Congrès, Paris, France, 19-22 February 2000
Reviewed by Takebumi Itagaki (London, UK)
The 108th Audio Engineering Society (AES) Convention, a biannual event held alternately in Europe and North America, took place this past February in Paris. The convention comprised a large exhibition of audio and broadcasting equipment, paper sessions, technical and standards committee meetings, student activities, and workshops. As I mainly participated in the paper sessions, my review concentrates on these, in particular Audio Coding; Music Instrument Acoustics and Electronic Music Technology; and Signal Processing.
In the audio industry, the AES is best known as an authority on standards, such as the AES/EBU (European Broadcasting Union) digital connector and communication protocols. Coinciding with the Technical Committee meeting for Audio Coding and the workshop on "MPEG-4 Version 2," the Audio Coding paper sessions also included a few papers on MPEG-4 applications. Riitta Väänänen from Helsinki presented a paper on "Synthetic Audio Tools in MPEG-4 Standard." These tools include specifications for sound coding based on structured, parametric descriptions of algorithmic synthesis methods, along with wavetable synthesis and speech synthesis from text data. A German group from the Fraunhofer Institute for Integrated Circuits and DSP Solutions presented "Real-Time Implementation of the MPEG-4 Low-Delay Advanced Audio Coding Algorithm (AAC-LD) on Motorola DSP56300." To some Computer Music Journal readers, the Fraunhofer Institute will be known for its MP3 compression algorithm, used in sound tools such as Digidesign Pro Tools or Syntrillium Cool Edit. The paper discusses the selection of a fixed-point DSP platform and describes the implementation and performance of a functional real-time AAC-LD codec on the DSP56300. The authors argue that for non-speech signals the algorithm delivers better performance at lower bit rates than specialized speech codecs. Markus Erne and George Moschytz from the Eidgenössische Technische Hochschule in Zürich presented their paper on "A Bit-Allocation Scheme for an Embedded and Signal-Adaptive Audio Coder." Their bit-allocation scheme for a wavelet-based audio coder is based on embedded zero-tree wavelet (EZW) coding, which has mainly been used in video compression. They claim that the scheme additionally enables increased audio quality owing to non-integer quantization estimation.
The AES seemed to be opening its doors to computer/electroacoustic music researchers by programming a special paper session, "Music Instrument Acoustics and Electronic Music Technology." The content of the session, however, was rather a mixed bag compared to sessions of the International Computer Music Conference (ICMC). One of the most remarkable papers of this session was presented by Jason Flaks of Gibson Guitar Corp., "Global Musical Instrument Communication Standard (GMICS): An Integrated Digital Audio and Control Communication Specification for Instruments." Since the introduction of MIDI in the early 1980s, there have been several attempts to replace the 8-bit serial standard, such as ZIPI (see Computer Music Journal 18/4). GMICS is based on the 100-Mbit Ethernet physical layer with RJ-45 connectors and provides low-latency digital audio and control signals for instruments in live performance. The audio signal layer is capable of up to 16 channels in 32-bit format at a 96 kHz sampling rate. One notable feature is the allocation of phantom power (24 V DC, delivering 500 mA to 1 A) to unused connections of the RJ-45 units under the IEEE 802.3 standard. After the presentation, there was an exchange of opinions on the phantom power feature, regarding noise and its effect on the signal. I hope that GMICS will be demonstrated at a forthcoming ICMC, where a more critical audience, including disgruntled MIDI users, will be present. Another interesting paper in the session was "Extraction of Physical and Expressive Parameters for Model-Based Sound Synthesis of the Classical Guitar," presented by Cumhur Erkut from the Helsinki University of Technology. His group has presented a series of papers on physical-modeling sound synthesis of plucked strings, including at recent ICMCs. This paper describes a revision of the calibration process of a classical guitar model and extends it to parameter extraction for various performance techniques.
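A quick back-of-the-envelope check (my own calculation, not part of the GMICS paper) shows why the specified audio layer fits comfortably on a 100 Mbit Ethernet link:

```python
# Raw payload estimate for the GMICS audio layer as specified:
# 16 channels of 32-bit samples at a 96 kHz sampling rate.
channels = 16
bits_per_sample = 32
sample_rate_hz = 96_000

payload_bits_per_s = channels * bits_per_sample * sample_rate_hz
print(payload_bits_per_s / 1e6)    # 49.152 Mbit/s of raw audio payload

# Even before framing overhead, that is under half the 100 Mbit/s
# link capacity, leaving headroom for control messages and framing.
print(payload_bits_per_s / 100e6)  # 0.49152 link utilization
```

The remaining capacity is what allows the control signal layer to share the same cable as the audio.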
Coinciding with an exhibition of the Super Audio Compact Disc (SACD) by Sony and Philips, a few papers discussed this and related subjects such as delta-sigma digital-to-analog converters (DACs) and Pulse Width Modulation (PWM) signal processing. James Angus of the University of York presented a paper entitled "Direct Digital Processing of Super Audio CD Signals," which turned out to be a good introduction to the signal format of SACD. Andrew Floros of the University of Patras reported "On the Nature of Digital Audio PWM Distortions." Using mathematical models of a Pulse Code Modulation (PCM)-to-PWM/PWM-to-PCM mapper, Mr. Floros discussed the nature of PWM-induced distortions.
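For readers unfamiliar with PWM audio, the following minimal sketch (my own illustration, not the models from the paper) shows the basic idea of a PCM-to-PWM mapping: each sample's amplitude becomes a pulse width, quantized to a finite pulse clock. The quantization of widths to the clock grid is one error source in such mappings.

```python
# Minimal sketch of a uniform PCM-to-PWM mapping (illustrative only).
# Each PCM sample in [-1, 1] becomes a duty cycle in [0, 1], expressed
# as an integer number of pulse-clock slots per sample period.

def pcm_to_pwm(samples, slots_per_period=64):
    """Map PCM samples to per-period pulse widths (in clock slots)."""
    widths = []
    for x in samples:
        duty = (x + 1.0) / 2.0  # map [-1, 1] -> [0, 1]
        widths.append(round(duty * slots_per_period))
    return widths

def pwm_to_pcm(widths, slots_per_period=64):
    """Inverse mapping; round-trip error is the width-quantization noise."""
    return [2.0 * w / slots_per_period - 1.0 for w in widths]

samples = [0.0, 0.5, -0.25, 1.0]
widths = pcm_to_pwm(samples)
print(widths)              # [32, 48, 24, 64]
print(pwm_to_pcm(widths))  # [0.0, 0.5, -0.25, 1.0]
```

These particular samples happen to land exactly on the 64-slot grid; arbitrary sample values do not, and the paper's concern is the nonlinear distortion such mappings introduce.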
In the Signal Processing paper session, Jean-Michel Raczinski and Gérard Marino from the Centre d'Etudes Mathématique et Automatique Musicales (CEMAMu) presented a paper entitled "A Flexible Architecture for Real-Time Sound Synthesis and Digital Signal Processing." The paper describes an implementation of real-time sound synthesis and distributed filter banks on Field Programmable Gate Arrays (FPGAs). They claim that 800 sine oscillators, running at a 48 kHz sampling rate and implemented by wavetable synthesis with linear interpolation, can be placed on an FPGA PCI plug-in card controlled from the UPIC system (developed at CEMAMu). Unfortunately, they neglected to make any direct comparisons with previously published additive synthesis systems such as Cor Jansen's "Sine Circuitu" (see the 1991 and 1992 Proceedings of the ICMC) or the Durham Music Technology Group's "160 Transputer Network" (see Computer Music Journal 21/4). The FPGA card itself seems to be quite efficient in terms of power consumption, with good PC interface features. Another DSP paper, "Frequency Warped Signal Processing for Audio Applications," was presented by Matti Karjalainen of the Helsinki University of Technology (Finland). Using a "frequency-warped" signal processing method, Mr. Karjalainen demonstrated that DSP algorithms can be designed and implemented in a way directly relevant to human auditory perception.
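To make the oscillator technique concrete, here is a minimal software sketch (my own, with arbitrary table size and names, not the CEMAMu implementation) of a single wavetable sine oscillator with linear interpolation, the operation the FPGA card replicates 800 times over in hardware:

```python
import math

# One wavetable sine oscillator: phase accumulation plus linear
# interpolation between adjacent table entries.
TABLE_SIZE = 4096
SAMPLE_RATE = 48_000
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(freq_hz, num_samples):
    """Generate num_samples of a sine at freq_hz from the wavetable."""
    phase = 0.0
    increment = freq_hz * TABLE_SIZE / SAMPLE_RATE  # table entries per sample
    out = []
    for _ in range(num_samples):
        i = int(phase)
        frac = phase - i
        a = table[i]
        b = table[(i + 1) % TABLE_SIZE]         # wrap at the table end
        out.append(a + frac * (b - a))          # linear interpolation
        phase = (phase + increment) % TABLE_SIZE
    return out

# A 440 Hz tone; an additive synthesizer sums many such oscillators.
tone = oscillator(440.0, 48)
```

In hardware, each oscillator reduces to an adder for the phase, a table lookup, and a multiply-add for the interpolation, which is what makes packing hundreds of them onto one FPGA plausible.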
As part of the AES Convention in Paris, there were several Technical Tours, unfortunately scheduled in parallel with the paper sessions. The facilities open for visits included Radio France, Canal Plus, the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), and Plus XXX Studios. In addition, there were a number of Special Events, including two rather refreshing lunchtime concerts and an evening organ recital at the Madeleine Church performed by Graham Blyth.
After the AES Convention in Los Angeles in September 2000, the lineup includes Amsterdam (May 2001), New York (September 2001), and Munich (May 2002). Information on these events will be posted on the AES website (http://www.aes.org/) and in the Journal of the Audio Engineering Society.