Vol. 40 Issue 1 Reviews
The Seventh KYMA International Sound Symposium (KISS)

The 7th Kyma International Sound Symposium (KISS), August 9-12, 2015, Montana State University, Bozeman, Montana, U.S.A. Information about the conference is available at http://kiss2015.symbolicsound.com/.

Reviewed by Silvia Matheus
Berkeley, California, USA

KISS was expertly organized by filmmaker Theo Lipfert, who is the coordinator of the M.F.A. Program in Science & Natural History Filmmaking at Montana State University. The conference was sponsored by the Montana State University School of Film and Photography and School of Music, along with the Symbolic Sound Corporation and Friends of KISS2015.

The theme of this year’s symposium was “Picturing Sound,” as explained on the conference website: “Montana State University and Symbolic Sound invite you to reflect on what it means to picture sound (or to sound a picture). It may suggest a way of visualizing audio signals; it may bring to mind the visual cues a live performer gives the audience to help them parse the structure of the music; it may suggest creating sound for picture or creating moving images in response to sound; it may suggest generating both sound and image from the same data stream; it may suggest new ways of performing live cinema or creating sound art with a visual component.”

Lectures and demonstrations, which focused on algorithmic composition, performance practice, alternative controllers, and synthesis and sound design using Kyma, were presented each morning and afternoon. Concerts were presented in the evenings. Afterward, the participants dined together in a beautiful Victorian-style house with a spectacular garden.

The participants included students from Montana State University’s Science and Natural History Filmmaking program, as well as filmmakers, composers, engineers, researchers, and software developers from around the world, including China, Brazil, Germany, Greece, South Korea, the U.K., and the U.S.A. The participants had the opportunity to work closely with SoundProof, the ensemble in residence. SoundProof (Patricia Strange, violin; Stephen Ruppenthal, trumpet and flugelhorn; and Brian Belet, viola, double bass, and Kyma processing) performed challenging works using digital technology, exploring the creative and interactive potential in the convergence of sound, music, and text.

The symposium started with a Kyma 7 master class by Carla Scaletti in which she discussed time-varying control parameters, useful models, and control patterns. She also touched on how to identify these patterns in Kyma’s Sound Library. The master class was followed by a lab demonstration and an early evening concert in the Black Box Theatre. The theater was pitch black except for the brightly colored chairs. During the performances, movies and still images were projected onto a large screen placed seven feet above the floor. The stage lighting was well designed and executed. Though each piece had its own particular stage setup, there were no noticeable interruptions between performances.

The concerts emphasized themes including science, nature, and human interaction with technology. Each piece involved the interplay of acoustic and digital instruments and moving images. The use of film contrasted greatly between pieces because of differing film aesthetics and genres, which included abstraction, surrealist fantasy, futuristic imagery, and animation. Kyma’s Pacarana and Paca were used as sound generators, controllers, and processors. It is impressive and telling that although Kyma was used exclusively, the sound and style of each piece was distinct.

The performers used a large variety of controllers, including the iPad, Wacom tablet, Microsoft Xbox 360 Kinect, EEG Neuroheadset, AudioCubes (modular audiovisual instruments), Monome, Soundplane (Madrona Labs), and Leap Motion. They connected to Kyma via MIDI, Open Sound Control (OSC), and PacaConnect (Delora).
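For readers unfamiliar with OSC, the wire format behind such connections is straightforward: an address pattern, a type-tag string, and the arguments, each null-padded to a four-byte boundary. The following minimal Python sketch encodes a single-float message; the address “/1/fader” and the value are invented for illustration, and a real project would typically use an OSC library rather than hand-encoding packets:

```python
import struct

def osc_message(address, value):
    """Encode a one-float OSC message: address pattern, ",f" type-tag
    string, and a big-endian 32-bit float, each part null-padded to a
    multiple of 4 bytes per the OSC 1.0 specification."""
    def pad(b):
        # OSC strings are null-terminated, then padded to 4-byte alignment.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

# The resulting bytes would be sent over UDP to the host running Kyma, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, (host, port))
msg = osc_message("/1/fader", 0.5)
```

The packet for this example is 20 bytes: a 12-byte padded address, a 4-byte padded type-tag string, and a 4-byte float.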
Two compositions were particularly impressive because of the interaction between the players and audience. #CarbonFeed, by Jon Bellona and John Park, used the sonification of Twitter feeds, with images created in real time during the performance. The audience could participate by tweeting to #carbonfeed or #kiss2015. To keep pace with the growth of online activity, the number of data centers has grown correspondingly; they now account for 2 percent of electricity consumption in the U.S.A. Bellona estimated that a simple Google search generates about 0.2 grams of CO2.

Frontier, by Paul Turowski, used game software designed for a variable number of performers and improvisers. It captures audio via microphones and uses the audio to control avatars in a virtual, digital world. Turowski refers to his piece as a reactive score modeled on video game procedures. The design is still evolving, which led Turowski to solicit feedback from his colleagues at KISS. This piece represents the next stage in works that use three-dimensional environments, score animation, and interaction with multiple players. It is a logical continuation of earlier work by people such as Mick Grierson, Doug A. Bowman, Richard A. Bartle, Michel Chion, and Lee Spector.

Carla Scaletti’s keynote speech, “Picturing Sounds (and First Contact),” was exhilarating, highly informative, and well received. Using information about sound capture, transmission, and transformation, she addressed the primary reasons for picturing sounds, such as visualization, preservation, mapping, and translating pictures to sound. Scaletti proposed that picturing sounds includes the visualization and preservation of sounds, as well as the synthesis of sound from pictures and the act of drawing itself.

She began with an historical review of sound and image documentation and visualization. The earliest known device for recording sounds is the phonautograph (1859), which transferred sound waves to paper. Alexander Graham Bell also used transducers to write sound on paper. The photophone transmitted speech via a light beam (presaging the popular "talk on a light beam" science fair project of the 1960s). Finally, there was the rapid development of the Gramophone, improved versions of the phonograph, and the dictaphone.

To demonstrate early attempts to synthesize sound from pictures, Scaletti mentioned the work of Daphne Oram and her Oramics machine (circa 1960). This was a waveform generator and is considered the first controllable synthesizer. Oram’s dream was to build an oscilloscope in reverse. To explain the concept of “sound over matter,” Scaletti showed videos of the singing water bowl from the Han dynasty, and the well-known Chladni patterns, images formed by a vibrating plate set in motion by bowing it.

On the subject of the relationship between sound and picture, Scaletti spoke about the work of Carl Haber and Earl Cornell, Berkeley Lab researchers, who recovered and restored recordings made 128 years ago. They created high resolution digital images of the grooves on the original discs and cylinders and modeled the motion of a stylus moving through them, generating an audio signal.

Scaletti also spoke about recent work in opto-acoustics, such as a visual microphone that can detect audio by sensing microscopic vibrations between video frames, and microphones that use laser beams to detect vibrations. Schlieren pressure-gradient photographs provide another example of the optical representation of audio. We can anticipate even more activity in this field, now that high-speed cameras are capable of capturing images at rates up to 100 billion frames per second.
Scaletti concluded by drawing connections between sound and image processing technology. With new developments like computational photography, panorama stitching, light-field cameras, camera arrays, analog HDR photography, and “optical sound,” where each pixel functions like a microphone, the future looks very exciting. Scaletti’s talk ably guided us through a richly nuanced historical perspective on sound and image documentation and visualization.

A total of 16 papers were presented at the conference. The subjects included instrument design, interactive composition, sonification, mapping, cognitive audio mixing, language communication, and performance. In his paper entitled “Kyma: Creative Environment for Musical Interface Design,” Klyoung Lee demonstrated how he designed an interactive musical environment using a Monome and Kyma communicating over bidirectional OSC. He concluded his talk with a demonstration of sound design for strings that gave him a high level of control and expressivity over the piece. Mei-ling Lee talked about her experience as a composer and how she was inspired to write To Heaven, a piece for voices and Kyma dedicated to the memory of the Sandy Hook Elementary School children who were murdered. Franz Danksagmüller, in “Rhetorical Figures, Sonifications, and Elements of Programmusik in Live Sound Tracks for Silent Movies,” described his experience working with silent films and his fascination with the works of Sergej Eisenstein. His analyses of program music and studies of medieval modes and Baroque rhetorical figures helped shape the way he composes and performs with silent film. Simon Hutchinson shared his thoughts about composing interactive works that integrate acoustic instruments into hybrid electronic works using Kyma in his paper entitled “NOPera and Tiger & Dragon Pieces: Thoughts on Composing for Kyma and Traditional Instruments.”

In their presentation “What Do You Mean? A Journey in Visual and Auditory Symbolism,” İlker Işıkyakar and E. Zoe Schutzman discussed their experience working together on a multi-layered, synesthetically driven project that involves real-time audiovisual interaction with Kyma and AudioCube controllers. They discovered new (and rediscovered old) ideas based on their own interaction and involvement with sounds and images as visual and auditory representations of language, and on human beings’ need for communication.

Another keynote talk, “Technological and Cognitive Techniques for Mixing and Mastering Music,” was given by Greg Hunter. As the title suggests, Hunter discussed how to master sounds using compressors, filters, limiters, and distortion to create sonic vibrancy. He also discussed how to listen when mastering: with our inner and outer “ears,” with an awareness of speaker placement, and with physical presence and engagement with sound.

In a paper entitled “Ideas and Techniques Behind Carbonfeed, an Interactive Internet Composition,” Jon Bellona discussed the carbon impact of digital content, the evolution of #Carbonfeed as an installation and composition, and methods for composing with random input from sources like Twitter. He attempted to show how one might harness a continuously fluctuating medium, while sharing resources and code in order to get Twitter’s API to work with Kyma.

Michael Wittgraf, in his paper entitled “Controlled Feedback,” spoke about the use of feedback as source material for composition and performance. He demonstrated how different types of feedback shaped his performance, and how he used equalization, filtering, gating, microphone placement, input and output levels, and pitch shifting in Kyma to control the feedback.

Brian Belet used his composition Still Harmless[Bass]ically to test Kyma’s algorithms in live performance. The ‘test’ entailed a live performance wherein everything needed to work as planned without the aid of a computer operator. Belet is interested in algorithms that process signals from his bass to control Kyma’s data input. He presented an informative demo session that included a subset of algorithms that he found most successful in this context.

Belet was also part of the informal presentation “Successful (and not so successful) Live Performance Paradigms using Kyma with the Ensemble SoundProof,” along with Stephen Ruppenthal and Patricia Strange. The presenters discussed effective strategies and approaches that composers may want to consider when working with an ensemble such as SoundProof.

“A Theory of Electronic Instruments” was the title of the third keynote talk, given by composer, historian, and writer Joel Chadabe. He asked: With a historical perspective in mind, knowing that original electronic instruments are often inseparable from the music they produce, what are the possibilities of designing new instruments? He proposed that new instruments can emerge from a taxonomy, from deterministic to interactive, of instrumental behaviors, focusing on performance and taking into account general principles such as the connection between controls and variables.

Paul Turowski’s paper “Cybernetic Traces: Video Games as Dynamic Musical Scores” examined various aspects of the use of video games as models for reactive musical scores. He focused on how his ideas developed while discussing his motivations and inspirations.
In her presentation “The Relationship between Cinematic Design and Performative Actions in Interactive Music Performance,” Chi Wang provided an overall perspective on Chinese calligraphy and the importance of the movements of the strokes on paper, and explained how these movements shaped her gestural performance using a pen on a Wacom tablet.

In his paper entitled “The Cinematics of Musical Performance with Data-driven Instruments,” Jeffrey Stolet introduced different types of musical performances and the importance of performance rituals to experiencing and delivering music. He presented a classification of different music performance styles, and described audience expectation and response to these performance styles, providing examples of composers who carefully design their performances in order to underscore various types of musical communication.

Scott Miller in his talk “The Orrery Beneath Returning to Unknown Worlds,” explained how he utilized a mechanical model of the solar system to structure his composition Returning to Unknown Worlds. This framework proved to be flexible and reliable for improvisation in works that incorporate dynamic audio processing, variables, and other unpredictable elements. In his talk he related how he conceived his ideas and built an interactive system using this model.

In their presentation “Generative Algorithms in AQULAQUTAQU,” Madison Heying and Kristin Erickson related their implementation of three generative algorithms in Kyma. The first involved multi-dimensional cellular automata based on John Horton Conway’s Game of Life. The second is essentially a polyphonic sequencer based around the algorithmic model of the Koch curve, and the third is a Kyma sound translation of a Mandelbrot set. They also discussed the use of Lindenmayer systems and rewriting algorithms to generate text.
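As a point of reference, the update rule underlying Conway’s Game of Life, which Heying and Erickson generalized to multiple dimensions, can be sketched in a few lines of Python. This is a plain two-dimensional version on a wrap-around grid for illustration only; their Kyma implementation is not reproduced here:

```python
# Minimal one-generation step of Conway's Game of Life on a toroidal
# (wrap-around) grid. Illustrative sketch, not the AQULAQUTAQU code.

def life_step(grid):
    """Advance a 2-D grid (list of lists of 0/1) by one generation."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight wrap-around neighbors.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # A live cell survives with 2 or 3 neighbors; a dead cell
            # is born with exactly 3.
            nxt[r][c] = 1 if (n == 3 or (n == 2 and grid[r][c])) else 0
    return nxt
```

In a musical context, each generation of the grid can be scanned and its live cells mapped to note or synthesis parameters, which is what makes cellular automata attractive as generative sequencers.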

In addition to the concerts and paper sessions, there were lab demonstrations by Scaletti, John Mantegna, and Klyoung Lee on subjects such as using the LinnStrument with 3-D note expression, the EEG Neuroheadset, and the Soundplane, as well as a master class given by Scaletti in which she outlined various Kyma 7 features, such as Multigrain and the Gallery, helpful interactive menus, and a more efficient interface design. For more information about Kyma 7, see Barton McLean’s review in Volume 39, Number 3 (Fall 2015) of this journal.

Like previous KISS conferences, this one offered a variety of informative and exciting presentations, papers, workshops, and concerts. The beautiful surroundings, inspiring theme, social concerns, and great tools made for a memorable symposium, one that took stock of the past while casting a glance toward future developments.