GenerationMania is an experimental framework for generating keysounded rhythm action game stages (for example, BMS charts) from sequence files (pairs of audio samples and timestamps), featured in the paper "GenerationMania: Learning to Semantically Choreograph".
This repository contains the code used to produce the dataset and results in the paper. You can also try the "mix two charts" option to create your very own chart.
The main entry script provides utilities to build the dataset from your own BMS data, train the models, evaluate them, and generate new charts. Alternatively, you can download our pre-generated dataset from the link in the Pretrained dataset section.
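For illustration, the information a sequence file carries boils down to pairs like the following (a hypothetical sketch; the actual on-disk format is the one produced and consumed by the entry script):

```python
# Each entry pairs an audio sample with the time (in seconds) it plays.
# Sample names and timestamps here are made up.
sequence = [
    ("kick.ogg",   0.00),
    ("piano1.ogg", 0.50),
    ("snare.ogg",  1.00),
]
```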
We are working on supporting other formats of input/output files, and other features. Stay tuned!
Please email me with any issues: zhiyulin [@at@] gatech (.dot.) edu
- Tensorflow-gpu won't import unless you have CUDA 9.0 installed and set as the default CUDA (symlinked to `cuda/`). We may fall back to plain `tensorflow` (no GPU acceleration) later.
- Currently, BMS files with audio file references pointing to a subfolder are not yet supported. As a workaround, remove all folder prefixes in the BMS sound file referencing section (`#WAVxx`), for example `foo/bar/piano1.ogg => piano1.ogg`, and move the audio files to the root folder where `foo.BMS` lies; see the sketch below.
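Below is a minimal sketch of that fix-up, assuming the BMS file and its audio subfolders live in the same tree. BMS files are commonly Shift-JIS encoded; the function name and all paths here are illustrative, not part of this repository.

```python
import os
import re
import shutil

def flatten_wav_refs(bms_path):
    """Strip folder prefixes from #WAVxx lines and move the audio files
    next to the .bms file (e.g. foo/bar/piano1.ogg -> piano1.ogg)."""
    root = os.path.dirname(bms_path) or "."
    with open(bms_path, encoding="shift-jis", errors="ignore") as f:
        lines = f.readlines()
    fixed = []
    for line in lines:
        m = re.match(r"(#WAV\w\w\s+)(\S.*)", line.strip(), re.IGNORECASE)
        if m and ("/" in m.group(2) or "\\" in m.group(2)):
            ref = m.group(2).replace("\\", "/")
            base = os.path.basename(ref)
            src = os.path.join(root, ref)
            if os.path.exists(src):
                shutil.move(src, os.path.join(root, base))
            fixed.append(m.group(1) + base + "\n")
        else:
            fixed.append(line)
    with open(bms_path, "w", encoding="shift-jis", errors="ignore") as f:
        f.writelines(fixed)
```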
To start, do the following:
- Install CUDA 9.0.
- Read DEPENDENCIES if you encounter any other import error.
- Run `pip install pipenv`, then `(sudo) pipenv install`. Depending on your environment, try with or without `sudo`.
- Run `pipenv shell`, then `python3 main.py`. The first startup will take a while; `matplotlib` may have to initialize its font.
- Follow the on-screen prompts.
We have generated a pretrained dataset in case you would like to skip building it from raw BMS data. This dataset is generated from the BOF2011 dataset.
Download it from here:
https://drive.google.com/file/d/1gn5Rlt1KeD-si139AXBnEV9bLpDf8TwK/view?usp=sharing
After the download finishes, unzip it, put all the extracted contents in `traindata/`, and go directly to the training step.
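For example (a sketch; `pretrained_dataset.zip` is a placeholder for whatever filename the download gives you):

```python
import zipfile

# Extract the downloaded archive so its contents land in traindata/.
with zipfile.ZipFile("pretrained_dataset.zip") as archive:
    archive.extractall("traindata/")
```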
As a proof of concept, we provide a chart blending function. It takes the audio event sequence from one chart, and the relational summary and challenge model from another. This yields a brand-new half-and-half chart for you to play.
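Conceptually, the blend works like the sketch below (every name here is hypothetical; the real logic lives behind option 6.3 of the entry script):

```python
# Hypothetical sketch: audio events come from the Mix Base chart, while the
# Mix With chart supplies the summary and the trained challenge model that
# decides which events become playable notes.
def blend(mix_base_chart, mix_with_chart, challenge_model):
    blended = []
    for event in mix_base_chart.audio_events:  # (sample, timestamp) pairs
        playable = challenge_model.predict(event, mix_with_chart.relational_summary)
        blended.append((event, playable))
    return blended
```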
Follow these steps.
- Get some BMS
- You **do not have to** use the charts in the dataset. The BMS of Fighters website contains a wide variety of them; alternatively, search for `BMS Starter Pack` for some easier ones.
- Download the pretrained models and put them in place, or train them from scratch.
- Run 2.1 through 2.4, 4.1 through 4.2 (takes a long time), and 6.1 if you want to train from scratch.
- 3.1 is not necessary since the model is already embedded.
- Run 6.3.
- You will need to specify a Mix Base, which provides the audio events, and a Mix With, which provides the other information.
- Provide the folder containing the BMS you wish to use, as a relative path.
- A minority of BMS files will not work out of the box; see Known Issues. You will have to fix the file structure yourself: follow the BMS format tutorial provided in the paper, or just use another BMS file.
- Wait for a while.
- You will get a `gen_<your MIX BASE BMS Title>` folder under the root of this script.
- Play this!
- We used `bmson`, a JSON-based, web-friendly chart format, for output (see the sketch after this list). Visit https://bemuse.ninja/, select start game, and select either mode (keyboard mode omits one column).
- Look for the bottom-left button saying "Play Custom BMS", and drag the folder you just generated onto it.
- You should see a new entry in the song selection list. Click this entry, and the stage should start.
- Have fun!!! Let me know what you think of the generated charts by sending me an e-mail :)
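As mentioned above, a bmson chart is plain JSON. Here is roughly what the structure looks like, written as the Python dict we would serialize (field names follow the bmson spec; the values are made up):

```python
import json

# A minimal keysounded chart: each sound channel ties an audio sample to the
# notes that trigger it. x = lane (0 for background), y = pulse position,
# l = length, c = continuation flag.
chart = {
    "version": "1.0.0",
    "info": {"title": "Example", "init_bpm": 140.0, "resolution": 240},
    "sound_channels": [
        {"name": "piano1.ogg", "notes": [{"x": 1, "y": 240, "l": 0, "c": False}]},
    ],
}
print(json.dumps(chart, indent=2))
```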
If you use this framework or dataset in your research, please consider citing it via the following BibTeX:
@article{lin2018generationmania,
title={GenerationMania: Learning to Semantically Choreograph},
author={Lin, Zhiyu and Xiao, Kyle and Riedl, Mark},
journal={arXiv preprint arXiv:1806.11170},
year={2018}
}
Q: I have no GPU but torch wants to use it anyway and the program crashes. (On different environments the error log can look a little different, but it is always about the GPU, CUDA, or drivers.)
A: The feedforward model was trained on a GPU, so the checkpoint has to be remapped to the CPU when loading. Apply this snippet at L251/L253 of model_feedforward.py:

`torch.load('<originally there>', map_location={'cuda:0': 'cpu'})`
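Equivalently, you can remap every stored tensor to the CPU in one go (a minimal sketch; `checkpoint.pt` stands in for whatever path `torch.load` already receives):

```python
import torch

# Map all CUDA storages in the checkpoint onto the CPU.
state = torch.load("checkpoint.pt", map_location="cpu")
```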