Exploiting domain structure for music ML tasks
Dr Cătălina Cangea
Learning to represent and generate music is a highly relevant task for the machine learning field. This data domain is ideal for density estimation tasks and exhibits many interesting properties, such as long-term dependencies and patterns residing at various scales. More crucially, music generation is a main pillar of creative AI, which complements the ML community's efforts in pushing scientific progress and gives us the amazing opportunity to assist artists in their creative process!
Existing state-of-the-art approaches to symbolic music generation [1, 2] operate on and produce sequences of tokens. However, the music domain contains structure at several scales: local (e.g. chords, arpeggios, multiple voices played together in a fugue) and long-term/global (e.g. ABA/ternary form, the key a piece is written in, repeating patterns/motifs). Relatively few studies (e.g. [4, 5, 6, 7, 8]) have investigated explicit representations of structure for modelling and generating music, and each of these works has certain limitations. For instance, one approach (see Fig. 2) turns each piece into a single graph by connecting all notes with temporal links, which does not capture the fundamental music-theoretical dependencies and insights that composers most likely used when writing the piece. Others, such as the beat-based Pop Music Transformer, look at rhythm-specific encodings only. Perhaps the closest modelling strategy to the one intended for this project is MELONS, where the authors encode various musical relationships between the bars of a score; however, it would be easier to work with more general graph representations of music, built up from first principles such as the circle of fifths or the tonality of a piece. For example, one could slide a window over the score, compute relationships (a graph) within each local window, and relate windows to one another at discrete points in the score; a sketch of this idea follows (there are endless ways to think about it!).
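To make the sliding-window idea concrete, below is a minimal sketch of one possible encoding, assuming the notes have already been extracted from a symbolic score (e.g. with MusPy or music21). All names here (`Note`, `fifths_distance`, `window_graphs`) and the specific choices (windows measured in beats, edges between notes whose pitch classes sit next to each other on the circle of fifths, consecutive windows linked through their most frequent pitch class) are illustrative only, not taken from any of the cited works:

```python
from collections import Counter
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Note:
    onset: float  # onset time in beats
    pitch: int    # MIDI pitch number

def fifths_distance(pc_a: int, pc_b: int) -> int:
    """Distance between two pitch classes on the circle of fifths (0..6)."""
    d = (pc_a * 7 - pc_b * 7) % 12
    return min(d, 12 - d)

def window_graphs(notes, window_beats=4.0, max_dist=1):
    """Partition notes into fixed-length windows; build one edge list per
    window, connecting notes at most `max_dist` apart on the circle of
    fifths, plus weighted edges between consecutive windows."""
    windows = {}
    for i, n in enumerate(notes):
        windows.setdefault(int(n.onset // window_beats), []).append(i)

    local_edges = {
        w: [(i, j) for i, j in combinations(idxs, 2)
            if fifths_distance(notes[i].pitch % 12, notes[j].pitch % 12) <= max_dist]
        for w, idxs in windows.items()
    }

    # Link consecutive windows through their modal (most common) pitch class,
    # a crude stand-in for each window's local tonal centre.
    centres = {w: Counter(notes[i].pitch % 12 for i in idxs).most_common(1)[0][0]
               for w, idxs in windows.items()}
    global_edges = [(w, w + 1, fifths_distance(centres[w], centres[w + 1]))
                    for w in sorted(windows) if w + 1 in windows]
    return local_edges, global_edges
```

Each `local_edges[w]` could then serve as one relational unit for a model, with `global_edges` tying the windows together at the piece level.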
This project aims to study the effects of leveraging music-theoretical graph representations in ML models. The goal is to encode symbolic music sequences in a principled manner that reflects the composition process and the underlying structure to a greater extent than previous work. This encoding would then be passed as (additional) input to a model [1, 2, 9]. Finally, the effects on model performance would be studied in classification and/or generation scenarios (TBD based on time constraints).
1. Download dataset(s), choose task(s)
2. Become familiar with basic music theory concepts and design one or more ways of encoding the structure in symbolic music (see Appendix A.2 for a description of event-based MIDI representations; see Chordify in Resources; see Wiki; a starting point is sketched after this list)
3. Set up model codebase and obtain baseline performance on chosen tasks
4. Add relational structure to the model and find suitable ways to encode it (one simple option is sketched after this list)
5. Interpret the new results and investigate changes in model processing (e.g. visualising attention in layers, emerging patterns)
6. Open-source the code, to allow researchers to preprocess their own music data and build more graph-based ML models for music tasks!
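As a starting point for step 2, the following sketch exercises the two tools mentioned in the Resources: MusPy's event-based encoding (cf. Appendix A.2) and music21's `chordify()`, which collapses all voices into a single chordal reduction. The MIDI path `piece.mid` is a placeholder; the Bach chorale is one of the scores bundled with music21:

```python
import muspy
from music21 import corpus

# Event-based MIDI representation: a 1-D array of event tokens
# (note-on / note-off / time-shift / velocity).
music = muspy.read_midi("piece.mid")  # placeholder path
events = muspy.to_event_representation(music)
print(events[:10])

# Harmonic structure: chordify() merges all voices into chords, a
# convenient source of nodes for a music-theoretical graph.
score = corpus.parse("bach/bwv66.6")
reduction = score.chordify()
for chord in reduction.recurse().getElementsByClass("Chord"):
    print(chord.offset, chord.pitchedCommonName)
```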
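For steps 4 and 5, one simple (purely illustrative) option is to inject the graph into a Transformer-style model by adding its adjacency matrix as a bias on the self-attention logits; returning the attention weights also enables the visualisation suggested in step 5. The function below is a hypothetical single-head sketch in PyTorch, not the mechanism of any cited model:

```python
import torch
import torch.nn.functional as F

def graph_biased_attention(x, adj, bias_strength=1.0):
    """Single-head self-attention whose logits are shifted additively
    wherever two tokens are connected in the music graph.
    x: (seq, dim) token embeddings; adj: (seq, seq) 0/1 adjacency."""
    d = x.size(-1)
    logits = (x @ x.transpose(-2, -1)) / d ** 0.5
    logits = logits + bias_strength * adj  # favour graph-connected pairs
    attn = F.softmax(logits, dim=-1)
    return attn @ x, attn                  # keep weights for inspection

# Toy usage: 8 tokens, 16-dim embeddings, ring-shaped graph.
x = torch.randn(8, 16)
adj = torch.roll(torch.eye(8), 1, dims=1) + torch.roll(torch.eye(8), -1, dims=1)
out, attn = graph_biased_attention(x, adj)
# e.g. plt.imshow(attn.detach()) to visualise the biased attention pattern
```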
Resources
- Perceiver AR codebase (to be released soon)
- MusPy - A toolkit for symbolic music generation
- Chordify - music21 Documentation
- A structural way to encode music
- Ternary form
- StructureNet: Inducing Structure in Generated Melodies
- Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions
- MELONS: generating melody with long-term structure using transformers and structure graph