This repo is an example of auralisation of CNNs, demonstrated at ISMIR 2015.
auralise.py: includes all required functions for deconvolution.
example.py: includes the whole code - just clone the repo and run it with `python example.py`.
You might need to use an older version of Keras (e.g. ver 0.3.x).
src_songs: includes the three songs that I used in my blog posts.
Load the weights that you want to auralise. I use `W = load_weights()` to load the weights of my Keras model, but it can be anything else. `W` is a list of the convnet's weights. (TODO: more details)
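A minimal sketch of what such a `load_weights()` could look like, assuming the architecture and weights were saved with Keras; the file names and the use of `model_from_json` here are assumptions, not necessarily what example.py does.

```python
# A hedged sketch of load_weights(); paths and helper names are placeholders.
from keras.models import model_from_json

def load_weights(arch_path='model_architecture.json',
                 weight_path='model_weights.h5'):
    """Return the convnet weights as a list of numpy arrays."""
    with open(arch_path) as f:
        model = model_from_json(f.read())
    model.load_weights(weight_path)
    # get_weights() yields kernels and biases, layer by layer.
    return model.get_weights()

W = load_weights()  # e.g. W[0]: first conv kernel, W[1]: its bias
```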
Then load the source files and compute their STFT. I use librosa.
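For example, with librosa it could look roughly like this; the file name, sampling rate, and STFT parameters below are placeholders rather than the settings used in example.py.

```python
import numpy as np
import librosa

# Placeholder file and parameters; adjust to match your own setup.
src, sr = librosa.load('src_songs/some_song.mp3', sr=11025, mono=True)
SRC = librosa.stft(src, n_fft=512, hop_length=256)  # complex STFT
mag, phase = np.abs(SRC), np.angle(SRC)             # keep the phase for resynthesis
```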
Then deconvolve them with `get_deconve_mask`.
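A rough sketch of this last step is below. The argument list of `get_deconve_mask` is an assumption (check auralise.py for the real signature), and re-attaching the original phase, inverting the STFT, and writing the result with soundfile is just one simple way to turn the deconvolved spectrogram back into audio.

```python
import numpy as np
import librosa
import soundfile as sf
from auralise import get_deconve_mask  # from this repo

# Assumed call; the real signature lives in auralise.py.
deconved_mag = get_deconve_mask(W, SRC)

# Re-attach the original phase and invert the STFT to get an audio signal.
deconved = deconved_mag * np.exp(1j * phase)
y = librosa.istft(deconved, hop_length=256)
sf.write('auralised.wav', y, sr)
```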
Please cite this paper, or simply:
@inproceedings{choi2015auralisation,
  title={Auralisation of Deep Convolutional Neural Networks: Listening to Learned Features},
  author={Choi, Keunwoo and Kim, Jeonghee and Fazekas, George and Sandler, Mark},
  booktitle={International Society for Music Information Retrieval (ISMIR), Late-Breaking/Demo Session, New York, USA},
  year={2015},
  organization={International Society for Music Information Retrieval}
}
- The second blog post has a more extensive demo. A detailed description will follow after paper submission.
- The first blog post explains my ISMIR 2015 Late-Breaking session paper.
- Keras, librosa, [Matt's deconvolution paper](http://arxiv.org/abs/1311.2901), Naver Labs