Musical improvisation with AI
Session 1. Music Improvisation with AI.
The first event explores Music Improvisation with AI and will debate these questions: What is improvisation? What does it take to be a good improviser? Can a machine algorithm complement human improvisation? What approaches are being taken to develop machine improvisation? How do musicians feel about combining their creativity with that of a machine? What are the next steps for systems that support musicians in improvisation? What are the most advanced systems in this area of music technology?
Moderator · Ramon López de Mántaras
Mark d’Inverno & Matthew Yee King featuring ‘AI Musician’. (live performance)
Sara Sithi-Amnuai featuring ‘An Exploration of 3 Japanese Stories’, performed with ‘Nami’, a glove interface.
Augusto Sarti & Clara Borrelli featuring ‘Virtual Duo’.
A talk on the complexity of the human creative process and music improvisation, human–machine interaction in improvisation, whether an AI can emulate the spontaneity of jazz, and more.
An Exploration of 3 Japanese Stories (World Premiere)
An audiovisual poem using Nami, an interactive glove developed by the author. Sara Sithi-Amnuai for ARTIFICIA 2021.
Mark d’Inverno is a researcher and academic at the Department of Computing at Goldsmiths, University of London. He has mostly investigated the role Artificial Intelligence plays in learning and creativity practices. How do creativity, learning, and AI relate to each other? What are the possibilities and limits for AI to support, challenge, and provoke human creativity? Mark has published more than 200 articles and has authored and edited several books, including “Computers and Creativity” with Jon McCormack. He has led a range of research projects funded by EPSRC, AHRC, and Wellcome in the UK, as well as the EU. Mark is a critically acclaimed jazz pianist and, over the last four decades, has led a variety of bands across a wide range of music, including the Mark d’Inverno Quintet.
Matthew Yee-King is a British electronic musician, percussionist, and researcher based in London, performing music as Yee-King. He is known for bringing an education in science and genetics into music, including his celebrated 2001 drill ‘n’ bass release SuperUser on Rephlex Records, his work with Finn Peters making music from brainwaves, and his doctoral work applying Artificial Intelligence techniques to automatic synthesizer programming. “Goodnight Toby”, a track from SuperUser, was listed among the top 100 greatest IDM tracks by FACT magazine. As of 2020, he is a lecturer in the Department of Computing at Goldsmiths, University of London.
Sara Sithi-Amnuai is a professional musician, composer, and creative technologist based in Los Angeles, California. She received her MFA in Performance & Composition from the California Institute of the Arts, and a BA in Ethnomusicology (Jazz Trumpet) with a Music Industry minor from UCLA. Sithi-Amnuai’s recent work focuses on the intersection of identity, improvisation, and live performance, exploring the interaction between the performer’s body and their instrument through gesture and sound. Her latest work is Nami, a custom-built glove interface designed for live musical performance, developed using gesture recognition tools and community research. Sithi-Amnuai is a member of the Pan Afrikan People’s Arkestra, an artist-in-residence with Poieto, and a Japanese American Cultural & Community Center (JACCC) and Sustainable Little Tokyo (SLT) Nikkei Music Reclamation Project Fellow. She is also a recipient of the 2019 ASCAP Foundation Johnny Mandel Prize and Herb Alpert Young Jazz Composer Award, and the 2018 BMI Future Jazz Master Scholarship.
Augusto Sarti is Full Professor of Music and Acoustic Engineering at the Politecnico di Milano. He is a co-founder of the Image and Sound Processing Group of the Politecnico di Milano and of its first laboratory, the Image and Sound Processing Lab (Milano Leonardo Campus). He established two more laboratories: the Sound and Music Computing Lab and the Musical Acoustics Lab (Cremona campus). He is a Senior Member of the IEEE, an elected member of the Audio and Acoustics Signal Processing Technical Committee (AASP-TC), Senior Area Editor of IEEE Signal Processing Letters, and Associate Editor of IEEE/ACM Transactions on Audio, Speech, and Language Processing. Within EURASIP, he is the founder and chairman of the Special Area Team (SAT) on “Acoustic, Speech and Music Signal Processing” (ASMSP). He is also a founding member of the European Acoustics Association (EAA) Technical Committee on Audio Signal Processing.
Clara Borrelli was born in Pisa, Italy, in 1992. She received her B.Sc. degree from the University of Pisa in 2014 and her M.Sc. in Computer Science and Engineering from the Politecnico di Milano in 2018. She is currently a PhD student in Information Technology working with the Image and Sound Processing Lab of the Politecnico di Milano, Italy. Her research interests concern the application of machine learning and deep learning to sound and music computing and music information retrieval tasks. Her current research activity is on modeling and measuring the emotional impact of music.