Vol. 24 Issue 2 Reviews

1999 Columbia University Interactive Arts Festival
6-9 April 1999, New York, New York USA
I. Movement and Sound, Merce Cunningham Dance Theater
In Movement Study II, Wayne Siegel employs the DIEM (Danish Institute of Electroacoustic Music) Digital Dance Suit, developed by Mr. Siegel and Jens Jacobsen, as an elegant and transparent interface between a solo dancer, Mata Sakka, and the computer. Resistance sensors are applied to each of the dancer's joints, and their data is sent wirelessly to a Max patch, which translates her movements into synthesized music. This direct interface affords the dancer total freedom of movement while allowing the computer to precisely document and utilize the choreography. The work is divided into three sections. Slow, charged movements open the piece, accompanied by droning pitches explicitly controlled by the opening and closing of the dancer's limbs. In the middle section, as Ms. Sakka's movements grow quick and precise, an energetic groove is triggered by any movement and falls suddenly silent when her movement ceases. Mirroring the opening, a slow and expansive mood returns in the final section as low, foreboding pedal tones accompany the compelling choreography. Mr. Siegel's music is direct and communicative, but his exclusive use of patches from a synthesizer module produces a stagnant sonic world. The expressive opening and closing sections would have benefited from the exploration of more subtle timbral differences.
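The two behaviors described above, joint-flex data shaping a drone in the outer sections and sheer movement gating a groove in the middle, can be sketched in miniature. This is a hypothetical Python illustration of the mapping idea, not Mr. Siegel's actual Max patch; the function names, pitch range, and motion threshold are all assumptions.

```python
def drone_pitch(flex, low=36, high=60):
    """Map a normalized joint-flex value (0.0 = limb closed, 1.0 = open)
    to a MIDI note number: opening a limb raises the drone.
    The 36-60 range is an invented example."""
    flex = max(0.0, min(1.0, flex))  # clamp out-of-range sensor values
    return round(low + flex * (high - low))

def groove_gate(samples, threshold=0.05):
    """Middle-section behavior: the groove sounds only while the dancer
    moves. `samples` is a recent window of flex readings; any spread
    above `threshold` counts as movement."""
    if len(samples) < 2:
        return False
    motion = max(samples) - min(samples)
    return motion > threshold
```

A held pose yields a flat window of readings and silence; any gesture re-opens the gate, matching the sudden starts and stops the review describes.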
Ms. Sakka, as choreographer and performer of this work, was clearly very comfortable with the idiosyncrasies of the Dance Suit. She executed her choreography in a manner reflecting her apparent ease with the technological aspects of the piece, and demonstrated an intuitive sense of how even the most subtle details of her movements would translate into synthesized sound.

The second piece on the program, Song for the Living/Dance for the Dead, was a triple collaboration between composer Russell Pinkston, choreographer Mata Sakka, and video artist Anita Pantin. The music comprises speech and nature-sound samples, processed to varying degrees to produce a rich palette of clearly recognizable sounds contrasted with intriguingly distorted versions of the same materials. While in some passages the music is simply sequenced by Max, most often it is controlled by the movements of an ensemble of eight dancers. To facilitate this interaction, the work utilizes an interface system in two parts.
The main components of the interface are two MIDI Dance Floors. These are two large mats with pressure-sensitive triggers embedded along both sides. The floors interact with a Max patch, either triggering a sample or indicating the beginning of a new section. At the opening, two dancers enter the stage and work their way down the floor, each step eliciting another sound. Later, two dancers alternately leaping on and off of the two dance floors trigger contrasting "cool" and "hot" text and sounds.
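The floor's dual role, playing a sample or advancing the piece to a new section, amounts to a small event router. The sketch below is a hypothetical Python rendering of that routing logic, not Mr. Pinkston's Max patch; trigger IDs and the sample-naming scheme are invented for illustration.

```python
def floor_event(trigger_id, section_triggers, current_section):
    """Route one pressure-trigger hit from a MIDI Dance Floor.
    Triggers listed in `section_triggers` advance the section counter;
    all others play the sample mapped to that spot on the floor.
    Returns (new_section, sample_to_play_or_None)."""
    if trigger_id in section_triggers:
        return current_section + 1, None
    return current_section, f"sample_{trigger_id}"
```

A step on an ordinary trigger leaves the form untouched and emits a sound; a step on a designated trigger silently moves the piece forward.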
During an extended, intensely rhythmic middle section, Mr. Pinkston utilized a second interface, the Very Nervous System (VNS) video tracking system which, by "watching" the dancers, enables them to influence the music through the character of their movements. The camera's view is divided into two fields, and, driven by an energetic percussion sequence, the dancers' frenetic movements on either side of a divided field of vision trigger various sonic events which augment the already dense texture.
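The divided field of vision the review describes can be suggested with a frame-differencing sketch: motion energy is summed separately for the left and right halves of the camera image, and each half fires its own sonic events when its energy crosses a threshold. This is an assumed, minimal Python model of the idea, not the actual Very Nervous System implementation.

```python
def field_activity(frame, prev):
    """Compare two grayscale frames (lists of rows of 0-255 ints) and
    return summed pixel change for the (left, right) halves."""
    width = len(frame[0])
    left = right = 0
    for row, prev_row in zip(frame, prev):
        for x in range(width):
            change = abs(row[x] - prev_row[x])
            if x < width // 2:
                left += change
            else:
                right += change
    return left, right

def triggered_events(frame, prev, threshold=100):
    """Report which halves of the divided field fire an event."""
    left, right = field_activity(frame, prev)
    return {"left": left > threshold, "right": right > threshold}
```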
While the use of the video tracking system was effective and unobtrusive, the two large MIDI Dance Floors created a significant obstacle for the choreographer and dancers. The obvious sequential relationship between a dancer's foot on the floor and the resulting sample playback created a predictable scenario.

Orizzonte degli eventi was composed by Marco Cardini, Leonello Tarabella, and Massimo Magrini, with additional technical coordination by Giuseppe Scapellato. It is divided into four sections which are dramatically distinct in their methods of interaction and in the resulting music and video. Nonetheless, the piece is unified through the use of wireless tracking devices developed at the Computing Center of the University of Pisa/National Council of Research (CNUCE/CNR) to explore the translation of hand movements into sound and video.
Mr. Tarabella performed the first section on the Virtual Piano, a tracking system which follows the movement of his hands above an imaginary keyboard. As the audience watched his hands dance up and down in the air, the Virtual Piano produced an impressively accurate simulation of a piano performance. Next, the lights were extinguished except for one ultraviolet light illuminating the white-gloved hands of Mr. Magrini. As he sculpted broad lines and gestures in the open space, thin bars of light cascaded across a video screen, accompanied by terse, synthesized music.
Mr. Tarabella also performed with the Twin Towers, an instrument much like a Theremin. By means of an infrared motion-tracking system, it produces synthesized and sampled sounds in sympathy with hand motions above the instrument. In the final section, Mr. Magrini again utilized hand gestures to manipulate video and sound. This video tracking system followed the movements of his brightly painted hands, as powerful but controlled gestures translated into a pulsing, emphatic music with colorful, richly textured streaks on the digital canvas.
Each of these interactive systems is a creative and viable means of using the hands to interactively sculpt sound and video. However, with a striking lack of thematic or gestural development and little formal interest, the music of Orizzonte degli eventi is far less sophisticated than the interactive systems used to create it.
Matthew Suttor's multimedia theater piece, Sarrasine, based on Balzac's work of the same title, comprises eight sections. The original work is set in the 1830s at a party in Paris, as two lovers discuss the tale of Sarrasine, a frightful old man. Though the narrative is difficult to discern on first listening, the major themes of Mr. Suttor's new work are conveyed through a delightfully overwhelming barrage of sound and video, enhanced and enriched by the live performance of the composer himself. This collage, collected from across centuries, dispels any notion of chronological time and geographical place and challenges the audience to weigh the world of Balzac against this new Sarrasine. The sonic world of the opera is a blend of recorded speech (Mr. Suttor's voice reading from Balzac's text), recorded and processed mbira, harpsichord, flutes, and other electroacoustic samples. Despite their disparate features and implications, the composer successfully forges a unique soundworld which at once suggests the 19th century of Balzac as well as our own time. Though the composer remained mute throughout the performance, allowing the recorded music and text to speak for him, his reserved and precise performance complemented the active video.
The images on video are often taken from paintings of earlier times, including one recurring image of Botticelli's "The Birth of Venus." At one point Mr. Suttor assumed the pose of Venus, and the striking resemblance was as provocative as it was amusing. At another moment, he stood such that a portion of the video was projected onto his body as well as onto the screen. In this union of video and live performance, the screen contained footage of his writing and drawing as prerecorded images were superimposed upon it. As the audience watched the composer literally step in and out of the video, they were compelled to draw parallels and conclusions regarding the juxtaposition of the two distinct yet successfully merged worlds of Balzac's and Mr. Suttor's Sarrasine. The piece makes use of two interactive systems developed at the Studio for Electronic Instrumental Music (STEIM) in Amsterdam: BigEye (video to MIDI) and Image/ine (real-time control of video processing). Mr. Suttor seamlessly integrates technology and live performance in a manner clearly driven by his artistic aims. Consequently, he has created a truly cohesive, engaging whole.
II. Multimedia Interactive Works, Miller Theater
Since three of the four works on the second program of the Columbia Interactive Arts Festival utilized real-time computer-generated video, a massive screen was stretched across the shallow stage, adding a cinematic aspect to the concert experience.
The first piece, Lemma II, began the evening with a kaleidoscopic, fascinating integration of technology, musicianship, and visual pyrotechnics. The work combines four live musicians with real-time computer processing of sound and complex computer-animated images. Of the four performers in the piece, two played at Miller Theatre (New York) while the other two were connected via an ISDN connection from Intel Corporation's Oregon headquarters. These long-distance performances usually strike me as gimmicky, but I was pleasantly surprised by this one. Not only did the performers communicate musically with each other from opposite sides of the continent, but aspects of the performances from both locations were analyzed in real time and used to control various facets of the piece such as instrumental timbre and the appearance and movement of computer-animated images.
The beginning of Lemma II serves as a demonstration of how the musicians and computers interacted. The music began with drummer Steven Schick and pianist Anthony Davis (New York) trading single-note gestures with Vanessa Tomlinson and Scott Walton (Oregon). With each attack a bright sphere of color burst and faded like fireworks on the massive video screen hanging above the performers in both locations. Each performer's note appeared on the screen as a unique color, and it was very clear that the size and "lifespan" of each burst was directly related to the loudness of the note's attack. From these initial fireworks, the piece grew more musically and visually interesting as it progressed, passing through a myriad of sonic and visual spaces, producing an incredibly satisfying multimedia tour de force. My only critique is that the sonic "places" visited didn't generally seem to vary as strikingly as the visual ones.
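The attack-to-fireworks mapping just described, a fixed color per performer with size and lifespan scaled to loudness, might be sketched like this. The scalings and the color argument are invented for illustration; the review reports only the qualitative relationships, not the actual parameters of the Lemma II software.

```python
def burst_params(velocity, color):
    """Map one attack to the parameters of a color burst.
    `velocity` is MIDI-style loudness (0-127); `color` is the RGB
    triple fixed for that performer. The linear scalings below are
    assumptions chosen to mirror the reported behavior: louder
    attacks yield bigger spheres that fade more slowly."""
    size = 10 + velocity           # on-screen radius (arbitrary units)
    lifespan = 0.5 + velocity / 64 # fade time in seconds
    return {"color": color, "size": size, "lifespan": lifespan}
```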
Lemma II was created by collaborators Vibeke Sorensen (visual artist), Rand Steiger (composer), researchers Miller Puckette and Mark Danks, and performers George Lewis, Steven Schick, Anthony Davis, Vanessa Tomlinson, Michael Dessen, Harry Castle, and Scott Walton. Mr. Puckette's PD software analyzed audio input and transformed this information into control data which it then passed to computers in both performance locations. The control data was also used to manipulate 3-D graphic images using Mr. Danks' GEM software. Given all its technical complexity, the joy of Lemma II was that in performance the technology performed well and contributed to the realization of the work in integral and artistic ways. I found it to be the most satisfying work of the entire festival and consider it a significant achievement by its creators.

The second piece on the concert was Ping Bang, a work for MIBURI synthesizer suit, additional electronic sounds, video, and computer animation. The work is a collaboration between composer Saburo Hirano and video artist Shinsuke Ina, and was performed by Hanachi Otani.
In Ping Bang the performer wears the MIBURI, described by the composer as "Yamaha's new wearable physical modeling synthesizer." It consists of a specially designed suit laden with motion sensors, a small microphone, and two hand grips with multiple buttons on each. Information from the microphone, the performer's motions, and her button manipulations create all of the sounds, and can also affect the presentation of computer-animated images. This is managed as follows: first, the sensors and buttons output data which are mapped to physical modeling synthesis parameters. This synthesized music constitutes the "solo" instrument of the performer herself. Meanwhile, the same output is also sent to a computer running Max, which generates "accompaniment" music using the GCM (Globally Coupled Map) algorithm. In addition, Max also generates computer image display commands, which are relayed to a second computer controlling the graphics.
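For readers unfamiliar with the accompaniment algorithm named above: a Globally Coupled Map iterates a population of chaotic maps, each one pulled toward the mean of the whole population. A minimal Python version of one iteration follows; the logistic-family map and the parameter values are textbook choices for illustration, and nothing here is taken from Mr. Hirano's actual Max implementation.

```python
def gcm_step(states, eps=0.3, a=1.7):
    """One iteration of a Globally Coupled Map.
    Each element i follows x'[i] = (1 - eps) * f(x[i]) + eps * mean(f),
    where f(x) = 1 - a*x^2 is a logistic-family chaotic map and `eps`
    is the coupling strength (0 = independent maps, 1 = full sync)."""
    f = [1 - a * x * x for x in states]
    mean = sum(f) / len(f)
    return [(1 - eps) * fx + eps * mean for fx in f]
```

With the coupling at zero each voice wanders independently; as it rises the ensemble falls into partially synchronized clusters, which is what makes the algorithm attractive as a generator of "accompaniment" texture.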
The result was a highly integrated multimedia piece in which Ms. Otani's body movements clearly affected the musical texture. Filters opened and closed as she lifted and lowered her arms, for instance. The music itself was quite rhythmic, and at times seemed sonically akin to the noisy textures of Nine Inch Nails: highly distorted screams from Ms. Otani sailed over blocks of harmony and percussive ostinati. The opening minutes were quite intriguing, and I enjoyed Ping Bang's direct, timbrally sharp approach. However, the piece seemed a bit long: the second half reached a plateau and then remained there. The computer graphics of Ping Bang alternated amongst a small group of images, creating a strict focus that paralleled the regularity of the music's rhythms. The primary images were photos of a statue of the Buddha and an exploding atomic bomb. At first I thought this choice too obvious, but then I became interested in the minimalist approach to their manipulation. They were presented repeatedly, focusing on different sections of the images each time, with varying positions on the screen and different image-processing techniques. The shifting combinations were akin to the signal processing of the music.

Third on the concert was I am Dying, an elegantly conceived work for amplified mandolin with signal processing and computer music on tape, composed by Brad Garton and performed by Terry Pender. The piece consists of a traditionally notated mandolin part accompanied by pre-composed computer sounds and real-time signal processing by the RTCmix music synthesis/processing software package, written in large part by Mr. Garton himself.
The pre-composed accompaniment consists of both abstract material and concrète sounds from New York City, recognizable in varying degrees. All of the synthetic musical elements were created using RTCmix to render abstract gestures from the 'raw' material of the environmental sounds. In performance, as Mr. Pender played the mandolin, the software processed his performance in real time using a variety of delays and other effects.
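The most elementary of the real-time effects mentioned, a feedback delay, can be illustrated with a few lines of Python. This is a generic sketch of the technique itself, not RTCmix code; sample values and parameters are invented.

```python
def feedback_delay(signal, delay_samples, feedback=0.5):
    """Apply a simple feedback delay line to a list of samples:
    each output sample is the input plus an attenuated copy of the
    output from `delay_samples` samples earlier, so a single attack
    spawns a decaying train of echoes."""
    out = []
    for i, x in enumerate(signal):
        delayed = out[i - delay_samples] if i >= delay_samples else 0.0
        out.append(x + feedback * delayed)
    return out
```

Fed an impulse, the line returns echoes at the delay interval, each scaled by the feedback factor, which is the behavior heard when a plucked mandolin note trails off in repetitions.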
Clear and immediately comprehensible, utterly without pretension, this piece is a seamless combination of traditional performance and new technology, creating an uncluttered, unified experience. After the full-throttle energy of the concert's first two pieces, this one provided an enjoyable respite.

The evening concluded with a quite experimental experience entitled Coney Island. It is a product of the Machine Child Ensemble (MCE), a group centered at the National Center for Supercomputing Applications in Illinois: Robin Bargar (concepts, engineering), Insook Choi (composer), Alex Betts (engineer), and Juhan Sonin (aesthetic engineer). The concept of the work is that it should be something quite different from a traditional concert, in which the audience is completely passive. Instead, one member of MCE (in this case, Mr. Bargar) acts with the assistance of multiple audience members in a group exploration of a virtual environment of graphics and sound.
Technically, Coney Island presents the results of investigations into real-time virtual reality and how this may best be experienced as an integrated artistic work. Mr. Bargar, onstage, introduced the concept of the piece and asked for audience volunteers, who then manned four MIDI drum sets distributed around the auditorium. Meanwhile, he took up a control wand which he used to guide the group through the Coney Island environment. This virtual world consists of several scenes of dynamic activity, with varying degrees of abstraction from physical reality. Mr. Bargar controlled the general movement through the environment, though in each scene the volunteers could stimulate or alter the visual and sonic activity by means of their drum pads.
The computer sounds and graphics were output in real time by a network of five computers backstage. The heart of the computational system is ScoreGraph, an application designed to integrate multiple dynamic simulations with sound synthesis and 3-D graphical display under interactive performance control. Unfortunately, MCE had some problems with their network, and as a result the piece suffered. Many of the graphics looked unfinished (wire frames only, or with no surface textures), and the audio portion of the piece would more properly be described as ambient sound design than as composition. Moreover, the actions of the audience members seemed to influence the environment only in trivial ways, while Mr. Bargar's controller directed everything else, which is not an ideal form of interaction. While I was assured that it had worked much more successfully at its premiere performance, on this occasion, at least, Coney Island didn't quite live up to its intriguing ambitions.
III. Technologies in Jazz, Rock, and Improvisatory Works, The Kitchen
For the first work of the Jazz, Rock, and Improvised Music concert, Joshua Fried presented his Radio Wonderland. Before Mr. Fried entered the stage, the audience had the opportunity to ponder his sonic devices, the Musical Shoes: four shoes mounted upside down on stands, used to trigger electronics when struck with drumsticks. Occasionally manipulating a mixer to control how snippets of a portable radio's output were edited, Mr. Fried explored the sonic world of triggering these samples and recombining them into a collage. The Musical Shoes then did their "dance of samples" as the composer hammered away with his drumsticks. The visual and dramatic element of watching him trigger his samples in such a visceral manner added to the juxtaposition of vocal and musical snippets.
The Freight Elevator Quartet, joined by video artist Mark McNamara, presented a fresh outlook on technology's influence on live performance, this time in the realm of beat-oriented electronica. Although heavily dependent on samples, the piece The Revolution Will Be Streamed also depends on the live coordination of the efforts of each performer. Rachael Finn's processed cello, Paul Feuer's keyboard sounds, and the results of R. Luke DuBois' real-time signal processing were mixed by Stephen Krieger, who also manipulated drum loops and other instruments.
The Freight Elevator Quartet takes advantage of the relationship between the exciting elements of a live performance and the sonic wizardry of real-time shaping and manipulation. Their performance distilled and blew apart ambient sounds, jungle and down-tempo beats, experimental sonic collages, and Mr. McNamara's video images. Along with references to the themes of exploration, technology, and humanity, the video engaged the audience directly: a video camera captured the audience as background for visual manipulation using Image/ine. This gave the audience the role-blurring experience of watching themselves taking part in the performance by watching the performance. With such a wide array of stimuli, the experience tested the sensory capabilities of the audience in a manner that was engaging, powerful, and thoroughly fresh.

Closing the first half of the evening, the group What Is It Like To Be A Bat? showcased their work She said-She said, Can You Sing Sermonette with Me. Composer-performers Kitty Brazelton, Dafna Naphtali, and Danny Tunick, using an expanded rock instrumentation of electric guitar and bass, keyboards, computer-triggered/manipulated samples, and drum kit, moved between episodes of different styles, relying on a succession of sharp contrasts as the main organizational device. High-voltage interruptions were followed by sections of gentle a cappella vocal combinations, a contrast made more jarring by the starkness of the transitions. Although highly structured, the work allowed the performers to draw from their own musical backgrounds and interests, making the eclectic stylistic changes seem natural and comfortable. This demonstrated the principle of "unity in diversity" that is emblematic of the information society of today's technological age.
enter,activity, performed by the Terry Pender Virtual Orchestra, consisted of Mr. Pender's guitar sounds manipulated and processed by three other participants. These actions were produced independently of the guitarist, the model for interaction being that of a freely organized conversation among the participants. Although the audience could see what Mr. Pender was doing, it was harder to discern what his companions were doing until the results came directly out of the sound system. Perhaps that was the point: the audience "encountered" the conversation as a wash of sounds with occasional outbursts that owed their element of surprise in part to the "virtual" nature of the experience. Brad Garton operated various real-time signal-processing techniques using RTCmix, while R. Luke DuBois worked with various Max/MSP signal-processing interfaces he has developed, and Doug Geers managed the use of rack-mounted effects processors. For the initial part of the piece, the disparity between what was seen and heard was quite strong, since the "actual" guitar sounds were hidden amongst the swirling sonic environment emanating from the signal handlers. Erratic chirps and rumbles contrasted with the slowly evolving surroundings, while short repetitive gestures acted as sonic signposts for the audience to refresh their ears. At the end, the visual and sonic aspects of the performance came together as the audience experienced "actual" guitar chords as Mr. Pender played them, accompanied by a subtle wash of harmonies.
In the next piece, the electroacoustic duo Interface featured live performers in conjunction with real-time signal processing and sample triggering via Max/MSP, along with live video projection. The free-flowing, improvisatory music focused on the dialogue between Curtis Bahn's double bass and Dan Trueman's electric violin. The digital audio setup makes extensive use of various sensors on the instruments that interpret changes in the instrument's physical orientation as control data for triggering and signal processing. The improvisation took an organic shape, beginning with a sparse musical texture whose droplets and rumbles behaved as a counterpoint to Nick Fortunato's processed video images of clouds. Both Mr. Bahn and Mr. Trueman made use of extended techniques, often drawing unusual natural sounds from their instruments. The texture thickened as extended, active sections of the piece used the electronics to accentuate the wild exertions of the performers, juxtaposed with ominous, processed video imagery. Ultimately, the pent-up energy generated by the instrumentalists' sonic expressionism dissipated to a passage of consonance and harmonicity.
Eve Beglarian and Kathleen Supove of Twisted Tutu closed the evening with a set of songs drawing heavily on the tradition of music theater. Ms. Beglarian's wide-ranging vocals were matched throughout by Ms. Supove's skillful keyboard accompaniment. The duo injected the atmosphere of each song with the possibilities made available through the use of keyboards, sequencers, and signal processors, allowing them to move easily back and forth from the stylistic worlds of jazz, lounge, synth pop, and electronic rock. Their songs are tightly composed, drawing on a blend of wit and self-conscious sentimentality. The performance combined a sense of lightness and familiarity with a stylish take on each of the genres the duo evoked.