An Exploration of 3 Japanese Stories (Premiere)
“An Exploration of 3 Japanese Stories” is a piece that explores the reclamation of my Nikkei (Japanese American) heritage through sound, figurative gesture, and storytelling, using Wekinator, a machine learning program created by Rebecca Fiebrink. I was interested in how I could approach the art-making process differently through AI systems, and specifically in how they could connect to my cultural body. The themes and gestures were inspired by my upbringing and by some of the stories (“Tsuru No Ongaeshi,” “Urashima Taro,” and “Orihime & Hikoboshi”) that my mother told me during my childhood. Only in the past couple of years did I begin to notice how these stories manifest themselves in aspects of daily life and have interwoven themselves into the cultural and historical fabric of Little Tokyo, a Japantown in Downtown Los Angeles.
Prior to World War II, there were 43 Japantowns within the US (mostly on the West Coast); sadly, only 3 remain, all within California. Major contributors to that decline were World War II and the Japanese American internment camps, followed by gentrification, which together drove social pressure on Japanese communities to embrace America and distance themselves from Japanese culture over the following decades. Since then, Nikkei people like myself have struggled to find a strong sense of cultural identity – we don’t feel entirely American, nor Japanese – but have gradually been establishing our own Nikkei culture and music through community-led initiatives like “Sustainable Little Tokyo,” which develops and promotes Nikkei culture in Little Tokyo, Downtown Los Angeles.
In this piece, a majority of the music creation was done through a custom glove interface I built called Nami. “Nami” means “wave” in Japanese, and in a musical context it embodies life experiences flowing through each “beat.” This idea stuck with me, and I imagined how my own embodied experiences and the life experiences of other Nikkei community members could be at the center of a musical instrument’s design – its sound, gestural language, and movement – facilitated through AI/machine learning. Nami is designed for live electro-acoustic performance and improvisation, and as a tool to extend my own multicultural background – primarily drawing from and contributing to the augmented trumpet, Nikkei, African American music, performer-composer, and gestural repertoires. The Nami iteration used for this piece utilizes a force-sensitive resistor (FSR), flex sensors, buttons, Hall-effect sensors, and photoresistors. Most importantly, Nami is designed to be culture-general and flexible, valuing cross-cultural exploration and accommodating a variety of cultural gestural languages rather than imposing a culture-specific framework or gestural language.
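For readers curious about the underlying plumbing: Wekinator receives feature vectors as Open Sound Control (OSC) messages, by default at the address “/wek/inputs” on UDP port 6448. The sketch below shows, in Python using only the standard library, how glove sensor readings might be normalized and streamed to Wekinator. The sensor names, ADC ranges, and raw readings here are purely hypothetical illustrations, not Nami’s actual firmware or calibration values.

```python
import socket
import struct

def osc_message(address: str, values: list) -> bytes:
    """Encode a minimal OSC message: null-padded address string,
    null-padded type-tag string, then big-endian 32-bit floats."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)
    tags = "," + "f" * len(values)
    msg = pad(address.encode()) + pad(tags.encode())
    for v in values:
        msg += struct.pack(">f", v)
    return msg

def normalize(raw: int, lo: int, hi: int) -> float:
    """Scale a raw sensor reading into a 0.0–1.0 feature, clamped."""
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

# Hypothetical raw 10-bit ADC readings from four glove sensors.
raw_readings = {"fsr": 512, "flex": 700, "hall": 300, "photo": 900}
# Hypothetical per-sensor calibration ranges.
ranges = {"fsr": (0, 1023), "flex": (200, 900),
          "hall": (0, 1023), "photo": (100, 1000)}

features = [normalize(raw_readings[k], *ranges[k])
            for k in ("fsr", "flex", "hall", "photo")]
packet = osc_message("/wek/inputs", features)

# Wekinator listens for "/wek/inputs" on UDP port 6448 by default.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 6448))
```

In a performance setup, a loop like this would run continuously at the glove’s sampling rate, and Wekinator’s trained model would send its classifications or continuous outputs onward (by default on “/wek/outputs”) to the sound engine controlling the drum machine and effects.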
The piece is split into three parts: Part 1 – Tsuru No Ongaeshi, Part 2 – Urashima Taro, and Part 3 – Orihime and Hikoboshi. Tsuru No Ongaeshi is the story of the Crane Wife and is set in the Japanese American Cultural & Community Center’s James Irvine Japanese Garden; I saw a clear parallel between the Crane Wife and the garden’s representation of the hardships and sacrifices that the Issei (first-generation Japanese Americans) went through for the people they cared for. Orihime and Hikoboshi is the story of the star-crossed lovers and their wishes – a wishing tree currently set up in the Japanese Village in Little Tokyo carries on this tradition. For this piece, I developed a specific set of figurative gestures inspired by the stories themselves and by choreographer Michio Ito’s movement scales. I used a combination of pre-recorded processed sounds (on trumpet, sheng, voice, and other objects), acoustic sounds, and live improvisation. The gestures were not only storytelling tools: they also controlled a drum machine I built that played the pre-recorded sounds, as well as the effects and processing applied to the live improvised sounds. The final video shows a blend of gestures and significant Little Tokyo locations.
In terms of the score and the sound worlds, I was inspired by the poetic descriptions of the six stages of boiling water that I learned from Shingon Buddhist priest and master artist Hirokazu Kosaka, in which I saw a deep connection to improvisation. As an improviser, I find that the ability to fluidly shift and react through moments is like the way water shifts and transforms. These six stages of boiling water are integral to the Japanese tea ceremony and garden, and I was especially inspired by three stages Kosaka described: the “squeaking from the kama (Japanese iron pot or kettle),” the “sound of waves hitting the rocks,” and “the sound of the wind in the pine forest.” My hope is that this piece might inspire others to consider AI/machine learning tools in the context of their own reclamations and cultural bodies, and that when we design or create AI systems, frameworks, instruments, pieces, and the like, we deeply consider, encourage, listen to, and uplift historically underrepresented voices in this space.
Sara Sithi-Amnuai is a professional musician, composer, and creative technologist based in Los Angeles, California. She received her MFA in Performance & Composition from the California Institute of the Arts, and a BA in Ethnomusicology (Jazz Trumpet) with a Music Industry minor from UCLA. Sithi-Amnuai’s recent work focuses on the intersection of identity, improvisation, and the live-performance interaction between the performer’s body and their instrument through gesture and sound. Her latest work is Nami, a custom-built glove interface designed for live musical performance utilizing gesture recognition tools and community research. Sithi-Amnuai is a member of the Pan Afrikan People’s Arkestra, an artist-in-residence with Poieto, and a Japanese American Cultural & Community Center (JACCC) and Sustainable Little Tokyo (SLT) Nikkei Music Reclamation Project Fellow. She is also a recipient of the 2019 ASCAP Foundation Johnny Mandel Prize, the Herb Alpert Young Jazz Composer Award, and the 2018 BMI Future Jazz Master Scholarship.
Sara participated in the first online session of Artificia 2021: “Musical improvisation with AI” (watch video).