---
library_name: symupe
license: cc-by-nc-sa-4.0
datasets:
- SyMuPe/PERiScoPe
tags:
- music
- piano
- midi
- expressive-performance
- transformer
- MLM
---

# SyMuPe: MLM baseline

**MLM-base** is a Transformer-based masked language modeling baseline for expressive piano performance rendering.

Introduced in the paper [**SyMuPe: Affective and Controllable Symbolic Music Performance**](https://arxiv.org/abs/2511.03425).

- **GitHub:** https://github.com/ilya16/SyMuPe
- **Website:** https://ilya16.github.io/SyMuPe
- **Dataset:** https://huggingface.co/datasets/SyMuPe/PERiScoPe

## Architecture

- **Type:** Transformer encoder
- **Objective:** Masked Performance Modeling (MLM)
- **Inputs:**
  - **Score features (y):** `Pitch`, `Position`, `PositionShift`, `Duration`
  - **Performance features (x):** `Velocity`, `TimeShift`, `TimeDuration`, `TimeDurationSustain`
  - **Conditioning (c_s):** `Velocity` and `Tempo` score tokens for controlling dynamics and tempo
- **Outputs:** Categorical distributions over the masked performance tokens.
- **Training:** Trained for 300,000 iterations on the [PERiScoPe v1.0](https://huggingface.co/datasets/SyMuPe/PERiScoPe) dataset, as described in the paper.

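The masked-performance-modeling objective can be illustrated with a small, self-contained sketch. This is plain Python, independent of the `symupe` internals: the feature names follow the lists above, but the concrete values, the masking ratio, and the `mask_performance` helper are illustrative assumptions, not the library's API.

```python
import random

MASK = "<mask>"

# One aligned note event = score features (always visible) + performance features.
score = [
    {"Pitch": 60, "Position": 0, "PositionShift": 0, "Duration": 4},
    {"Pitch": 64, "Position": 4, "PositionShift": 4, "Duration": 4},
    {"Pitch": 67, "Position": 8, "PositionShift": 4, "Duration": 8},
]
performance = [
    {"Velocity": 72, "TimeShift": 0.00, "TimeDuration": 0.48, "TimeDurationSustain": 0.48},
    {"Velocity": 80, "TimeShift": 0.51, "TimeDuration": 0.45, "TimeDurationSustain": 0.60},
    {"Velocity": 85, "TimeShift": 0.49, "TimeDuration": 0.90, "TimeDurationSustain": 1.10},
]

def mask_performance(perf, ratio=0.5, rng=None):
    """Replace a random subset of performance tokens with MASK.

    Score tokens stay fully visible; the model is trained to predict
    the original values at the masked performance positions.
    """
    rng = rng or random.Random(0)
    masked, targets = [], {}
    for i, event in enumerate(perf):
        out = dict(event)
        for name in event:
            if rng.random() < ratio:
                targets[(i, name)] = event[name]  # ground truth for the loss
                out[name] = MASK
        masked.append(out)
    return masked, targets

masked_perf, targets = mask_performance(performance)
# Every masked slot has a recoverable ground-truth target.
assert all(masked_perf[i][name] == MASK for (i, name) in targets)
```

At inference time the same idea runs in reverse: all performance tokens start masked (optionally except the conditioning tokens), and the model's categorical outputs fill them in.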
## Quick Start

Before using this model, ensure the `symupe` library is installed:

```shell
pip install -U symupe
```

Use the following code to render performances:

```python
import torch
from symupe import AutoGenerator

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build the generator by loading the model and tokenizer directly from the Hub
generator = AutoGenerator.from_pretrained("SyMuPe/MLM-base", device=device)
# model, tokenizer = generator.model, generator.tokenizer

# Perform a score MIDI file (tokenization is handled internally)
gen_results = generator.perform_score(
    "score.mid",
    use_score_context=True,
    num_samples=8,
    seed=23,
)
# gen_results[i] is a PerformanceRenderingResult(...) containing:
# - score_midi, score_seq, gen_seq, perf_seq, perf_midi, perf_midi_sus

# Save the performed MIDI files
generator.save_performances(gen_results, out_dir="samples/mlm")
```

## License

The model weights are distributed under the [CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license.

## Citation

If you use the model, please cite the paper:

```bibtex
@inproceedings{borovik2025symupe,
  title     = {{SyMuPe: Affective and Controllable Symbolic Music Performance}},
  author    = {Borovik, Ilya and Gavrilev, Dmitrii and Viro, Vladimir},
  year      = {2025},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  pages     = {10699--10708},
  doi       = {10.1145/3746027.3755871}
}
```