Music

Navid Navab centers the poly-temporal, inter-agential, inter-cultural, in-situ, and material dimensions of the sonic event, and, where possible, steers away from fixed media and digital grids.

[2008-2014] Navab performs with software that he developed over several years as an AI+music researcher and gestural-sound composer, in close dialogue with the machine-improvisation group at IRCAM (Centre Pompidou, Paris), matralab (Montreal), and the Topological Media Lab (Montreal). These “stylistic reinjection” and machine-improvisation environments learn, in real time, characteristic features of a musician's style on multiple time-scales. After this machine-learning stage, they reinject the musician's material in several different ways, building a semantics-level representation of the session that allows smart recombination and transformation of the material in real time. The resulting machine co-improvisations, which today fall under the umbrella of AI music, are further shaped, orchestrated, and sculpted by Navab on the spot, who uses his gestures to musically surf the improvising machine's agency.
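
Navab's environments themselves are not published, but the mechanism this paragraph describes is well documented in IRCAM's OMax family of stylistic-reinjection improvisers, which is built on the factor oracle automaton. The sketch below is a minimal, hypothetical illustration (names such as `FactorOracle`, `reinject`, and `p_jump` are invented for the example): it assumes the musician's stream has already been segmented into symbolic units (pitches, gestures, spectral classes), builds the oracle online, then navigates it, occasionally jumping along suffix links so the output recombines the learned material while preserving local continuity.

```python
import random

class FactorOracle:
    """Online factor-oracle construction (Allauzen, Crochemore, Raffinot 1999),
    the automaton underlying OMax-style stylistic reinjection."""
    def __init__(self):
        self.trans = [{}]   # trans[i][symbol] -> next state (forward + shortcut links)
        self.sfx = [-1]     # suffix links; state 0 points nowhere

    def add(self, sym):
        """Extend the oracle by one symbol of the live stream."""
        new = len(self.trans)
        self.trans.append({})
        self.trans[new - 1][sym] = new          # forward transition
        k = self.sfx[new - 1]
        while k > -1 and sym not in self.trans[k]:
            self.trans[k][sym] = new            # shortcut links enable recombination
            k = self.sfx[k]
        self.sfx.append(0 if k == -1 else self.trans[k][sym])

def reinject(oracle, seq, length, p_jump=0.25, seed=None):
    """Walk the oracle: mostly continue the original sequence linearly, but
    with probability p_jump follow a suffix link to a past state that shares
    context, so the continuation splices in material from elsewhere."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(length):
        if rng.random() < p_jump and oracle.sfx[state] > 0:
            state = oracle.sfx[state]           # jump to a shared-context point
        if state >= len(seq):                   # reached the end: wrap around
            state = 0
        out.append(seq[state])                  # emit the symbol following `state`
        state += 1
    return out

# Toy usage: letters stand in for quantized musical units.
oracle, phrase = FactorOracle(), list("abaababcabab")
for s in phrase:
    oracle.add(s)
print("".join(reinject(oracle, phrase, 24, seed=1)))
```

A real system layers on what the paragraph mentions: learning on multiple time-scales, descriptor-based symbolization of audio, and gestural control over jump probability, region, and transformation of the reinjected material.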

[2012-2022] Navab’s gestural sound environments, on the other hand, dive deeper into live-sampled and live-improvised microsound via descriptor- and gesture-driven granular synthesis and physical-modeling synthesis.
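
As a rough illustration of gesture-driven granular synthesis (not Navab's engines, whose descriptor-driven and physical-modeling layers are omitted here), the numpy sketch below assumes a gesture stream already normalized to [0, 1] (say, tracked hand height); each control value places one Hann-windowed grain's read position inside a live-sampled buffer. The name `granular_scrub` and the parameter defaults are invented for the example.

```python
import numpy as np

def granular_scrub(source, gesture, sr=48000, grain_ms=60.0,
                   density=80.0, jitter_ms=8.0, seed=0):
    """Gesture-driven granular sketch: each value in `gesture` ([0, 1])
    steers the read head inside `source`; Hann-windowed grains are
    overlap-added at `density` grains per second, with slight positional
    jitter to avoid comb-filter artifacts."""
    glen = int(sr * grain_ms / 1000)          # grain length in samples
    hop = int(sr / density)                   # spacing between grain onsets
    jit = int(sr * jitter_ms / 1000)
    win = np.hanning(glen)
    rng = np.random.default_rng(seed)
    out = np.zeros(len(gesture) * hop + glen)
    max_pos = len(source) - glen
    for i, g in enumerate(gesture):           # one control value per grain
        pos = int(np.clip(g, 0.0, 1.0) * max_pos) + rng.integers(-jit, jit + 1)
        pos = int(np.clip(pos, 0, max_pos))
        out[i * hop : i * hop + glen] += source[pos : pos + glen] * win
    return out

# Toy usage: a sine buffer stands in for a live sample; a slow sweep
# stands in for a tracked gesture moving through the buffer.
sr = 48000
source = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
gesture = np.linspace(0.0, 1.0, 400)
audio = granular_scrub(source, gesture, sr=sr)
```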

[2018-present] As a post-schizophonic composer-performer, Navab is moving further away from computer music and idealizations of nature, towards immediate, material, and robotic engagement with the transductive dynamics of sonic formation within metastable energetic assemblages. Navab’s current music focuses on the creation of inter-agential comprovisation environments, innovating relational-realist strategies for structured improvisation across material, human, and machinic agencies.


Navid Navab | Maya Kuroki | Rainer Wiens

2019-2021; keywords: gestural microsound, musique actuelle, live improvisation

Navid Navab (gesturally driven live-sampled granular synthesis) + Maya Kuroki (extended vocals) + Rainer Wiens (mbiras and prepared guitar).

What unites these three artists is a fearless approach to improvisation and an original sound. Vocal improviser Maya Kuroki’s theatrical takes, and Navid Navab, whose mysterious rope-and-pulley-activated electronics provided extremely convincing wingbeats […] The music was incredibly textured and precise.

Exclaim! 2019

Selected concerts:

Suoni Per Il Popolo Festival, Montreal, May 2021
Improvisation Festival, Guelph, Ontario, Sept 2021
No Hay Banda concert series, La Sala Rossa, Montreal, Nov 2019

VIDEO: The videos here were recorded and mixed live (both audio and video) at the Suoni Per Il Popolo Festival.


Music Without Humans
George Lewis | Navid Navab | Michael Young

2014, AI+music machine-improvisation, stylistic modeling, software-driven piano, spatialized sound

In June 2014, the Suoni per il Popolo festival hosted the world premiere of the first all-machine live-improvising trio of musical improvising computers, featuring Navab’s machine-comprovisation software environment (co-developed with Sandeep Bhagwati over three years at matralab and IRCAM, Paris) together with George Lewis’s “Voyager” and Michael Young’s “Prosthesis” software improvisers.

The continuous audio excerpt provided here was recorded live at this special one-time concert, which took place in the CIRMMT Multimedia Room at McGill University. After a first note was played on the piano, the three improvising systems began listening to, learning from, and responding to each other, leading to the emergence of complex self-referential musical structures, silence, chaos, and order that evolved without human intervention.


Circle of Sleep [Debut: live AI+Music]

keywords: AI+music, live improv, stylistic modeling, software-driven prepared piano, spatialized 8.8 sound

Premiere: Western Front, Vancouver, Jan 2012

Debut concert of the Circle of Sleep project at the Western Front in Vancouver. Navab uses AI (stylistic modeling), real-time spatial sound instruments, and a software-driven prepared piano to create improvised duos between the computer and the performers.

Navab’s “stylistic modeling” and machine-improvisation environments learn, in real time, characteristic features of the other musician's style on multiple time-scales. After this machine-learning stage, they reinject the musician's material in several different ways, building a semantics-level representation of the session that allows smart recombination and transformation of the material in real time. These machine co-improvisations, which today fall under the umbrella of AI music, are further shaped, orchestrated, sculpted, and spatialized by Navab on the spot, musically surfing the agency of the machine.

The videos included here are excerpts from free improvisations during the Circle of Sleep concert series, recorded live at the Western Front art centre, Vancouver, Jan 21, 2012.

Circle of Sleep is an overnight concert, a meditation on sleep and dreaming that’s also a marathon of cutting-edge Canadian electronic music. Featuring nine artists from ambient music, new media, conceptual art, and improvisation, Circle of Sleep guides the audience through dream states from 10pm until dawn.

Montreal’s Navid Navab and Sandeep Bhagwati are at the leading edge of research into performer-interactive digital art and the melding of composed and improvised music through technology. Navab has developed performance software that creates real-time ‘comprovisation’ with a live performer, dialoguing with the live musician’s ideas in polyphonic surround sound. Veteran local improvisers Nikita Carter (saxophone) and Mei Han (zheng) perform in this digital oracle’s Vancouver debut.

As the night turns toward dawn, Circle of Sleep features artists from the ambient scene to draw the audience deeper into the immersive experience, maybe even into inspired dreaming. Navid Navab presents his own ambient ‘media alchemy.’

Mei Han: Zheng (Chinese Zither) / Nikita Carter (formerly known as C. Cooke): Saxophone / Cheryl l'Hirondelle: Voice
Navid Navab: AI-music (stylistic modeling driving real-time sound instruments) + software-driven prepared player piano


TranSenses: Gestural Sound Compositions

movement-driven (camera tracking) granular synthesis
Tokyo, 2016

Prepared-piano samples are transformed live through Navab’s granular-synthesis engines, modulated by the movement data of choreographer Akiko Kitamura.


Navab | Carter | Wiens

2012, live improvised musique actuelle, AI+music, stylistic modeling

Navid Navab :: Real-time Sound Instruments (machine-improvisation, live stylistic modeling driving concatenative synthesis)
Nikita Carter :: Saxophones
Rainer Wiens :: Prepared Guitar + Mbira

As in the Circle of Sleep sessions above, Navab’s “stylistic modeling” environments learn the other musicians' styles in real time, on multiple time-scales, and reinject their material back into the session; the resulting machine co-improvisations are shaped, orchestrated, sculpted, and spatialized by Navab on the spot, musically surfing the agency of the machine.

Improvisation (Excerpts)
Recorded Dec 6, 2012, at matralab, Montreal
Mixed by Navid Navab


F O L D S: Gestural Sound Compositions

Movement-driven (camera tracking) granular synthesis and physical modeling, Montreal, 2016

Responsive and lively soundscapes accumulate memory under choreographed gestures. F O L D S unfolds on perceptual ground: the body enters a dialogue with its own visuo-sonic images, either recognizable or permuted, until an illusory landscape emerges.


Audio Mosaicing Etude #1 for Lori Freedman

August 2011
for Improvised Bass Clarinet (Lori Freedman)
and Realtime Automatic Orchestration via Audio-Mosaicing (Navid Navab).

Realtime orchestration, or audio mosaicing, using corpus-based concatenative sound synthesis:

Corpus-based concatenative sound synthesis uses a large collection of source sounds, segmented into grains or units, and a unit selection algorithm that finds the sequence of units that best fits the sound or phrase to be synthesized. The selection is performed by matching the units' descriptors against a target stream of characteristics extracted live from the soloist. By modifying the algorithm and the source sounds on the fly, it is possible to move from predictable audio mosaicing to realtime orchestration or accompaniment.
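
As a toy sketch of the unit-selection step just described (assuming descriptors such as pitch, loudness, and spectral centroid have already been extracted and normalized; the function name is illustrative), each frame of the soloist's descriptor stream is greedily matched to its nearest unit in the corpus:

```python
import numpy as np

def select_units(corpus, target):
    """Greedy unit selection for audio mosaicing.
    corpus: (n_units, n_features)  descriptor vectors of the segmented grains
    target: (n_frames, n_features) descriptor stream analyzed from the soloist
    Returns, for each target frame, the index of the best-matching unit."""
    dists = np.linalg.norm(target[:, None, :] - corpus[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy usage with random 3-D descriptors, e.g. [pitch, loudness, centroid]:
rng = np.random.default_rng(0)
corpus = rng.normal(size=(500, 3))     # pre-analyzed source-sound corpus
target = rng.normal(size=(20, 3))      # live soloist descriptor frames
order = select_units(corpus, target)   # grain indices to splice, frame by frame
```

Production systems of this kind (for example IRCAM's CataRT) additionally weight each descriptor dimension and add a concatenation cost so that consecutive units join smoothly; swapping the corpus or reweighting the descriptors on the fly is what moves the result between mosaicing and orchestration, as the paragraph notes.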


Beijing Opera: Navid Navab + Lan Tung

2012, live improvised musique actuelle, AI+music, stylistic modeling

Recorded in April 2012 at matralab, Montreal

Lan Tung (erhu/vocals) & Navid Navab (realtime sound instruments: stylistic modeling)

Playing a dynamic role in the Canadian music scene, Lan Tung is an erhu performer, composer, producer, and administrator. Originally from Taiwan, she incorporates Chinese music into contemporary expressions in her work. Lan Tung uses vocal techniques inspired by Beijing Opera but sung in "fake" Chinese: the words are made-up sounds that resemble Chinese and are not meant to be understood.

Tracks 1-4: Navid Navab uses AI+music software that he co-developed at matralab/IRCAM. The experimental principle for the session was to explore machine improvisation and stylistic modeling via software environments that learn, in real time, formal features of a musician's style and play along with her interactively. The machine co-improvisations are further shaped, orchestrated, and sculpted by Navab on the spot.

Tracks 5-6: In June 2012, Navab and Tung were joined by Michael Menegon on prepared guitar for a live set at Mardi Spaghetti, Montreal’s famous underground musique actuelle and improvisation concert series.