Improvise+=Chain: Listening to the Ensemble Improvisation of an Autoregressive Generative Model
This work was exhibited at ‘MUSES EX MACHINA’ by the TOKUI Nao Computational Creativity Lab, Keio University, from November 14, 2022 to January 15, 2023, at NTT Intercommunication Center [ICC], Tokyo.
Concept of this Work
This work is an audiovisual installation of an autonomous musical performance driven by artificial intelligence, designed to let the audience explore the differences between human musicians and AI-based virtual musicians.
Using a transformer decoder, we developed a four-track (melody, bass, chords and accompaniment, and drums) symbolic music generation model, which generates each track in real time to create an endless chain of phrases.
3D visuals and LED lights represent the attention information computed inside the model. By visualizing attention, the only channel through which the virtual musicians can "communicate", whereas human players interact through many modalities during a jam session, the work invites viewers to consider the differences between humans and artificial intelligence in musical improvisation.
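As an illustrative sketch only (not the installation's actual code), the "endless chain" of phrases can be thought of as an autoregressive loop in which each generated phrase is fed back as the conditioning context for the next model call; the `generate_chain` helper and the dummy step function below are hypothetical stand-ins for the real transformer decoder:

```python
from typing import Callable, List

# The four tracks described above (names assumed for illustration).
TRACKS = ["melody", "bass", "chords_accompaniment", "drums"]

def generate_chain(step: Callable[[List[int]], List[int]],
                   seed: List[int], n_phrases: int) -> List[List[int]]:
    """Chain phrases autoregressively: each generated phrase becomes
    the conditioning context (prompt) for the next generation step."""
    phrases: List[List[int]] = []
    context = seed
    for _ in range(n_phrases):
        phrase = step(context)   # one model call per phrase
        phrases.append(phrase)
        context = phrase         # feed the output back as the next prompt
    return phrases

# Toy stand-in for the transformer decoder: shifts every token by 1,
# so the feedback loop is easy to trace.
dummy_step = lambda ctx: [t + 1 for t in ctx]

chain = generate_chain(dummy_step, [0, 0, 0, 0], 3)
# chain == [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]
```

In the installation this loop runs continuously, which is what produces the impression of an unending improvisation rather than a fixed-length piece.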
We also developed a Max/MSP user interface for installers that serves as a monitor of the values used for LED control and of the attention weights.
Paper
Published as a demo paper at NIME2023.
Kobayashi, A., Nishikado, R., & Tokui, N. (2023). Improvise+=Chain: Listening to the Ensemble Improvisation of an Autoregressive Generative Model. Proceedings of the International Conference on New Interfaces for Musical Expression, 633--636. https://doi.org/10.5281/zenodo.11189329
Contributors
- Concept Design / Research & Development: Atsuya Kobayashi
- Visualization: Ryo Simon
- Filming:


