The main challenge of any supervised deep learning method is obtaining labeled data for training. Fortunately, my team and I labeled hundreds of hours of recordings of vocal communication between mice. These recordings came from three different mouse strains of various ages and social states. The labeled data was then used to train a hybrid CNN-LSTM deep learning model that extracts USVs from audio recordings and outperforms other models. The model is integrated into a user-friendly GUI that allows manual as well as automatic labeling of USVs and provides an easy way to browse and analyze them.
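For intuition, here is a minimal sketch of what a hybrid CNN-LSTM detector for USV segments can look like: a CNN front end extracts local time-frequency features from the spectrogram, and a bidirectional LSTM models temporal context before a per-frame classification head. This is an illustrative PyTorch approximation only, not the published HybridMouse implementation (which ships as a MATLAB app); the layer sizes and the 64-bin spectrogram input are assumptions made for the example.

```python
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    """Illustrative hybrid CNN-LSTM for per-frame USV detection (not the HybridMouse code)."""
    def __init__(self, n_freq_bins=64, hidden_size=128):
        super().__init__()
        # CNN front end: learns local time-frequency features of the spectrogram
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),          # pool along frequency only, keep time resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        feat_dim = 32 * (n_freq_bins // 4)  # channels x remaining frequency bins
        # Bidirectional LSTM: models temporal context across spectrogram frames
        self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True, bidirectional=True)
        # Per-frame classifier: USV vs. background
        self.head = nn.Linear(2 * hidden_size, 1)

    def forward(self, spec):
        # spec: (batch, 1, n_freq_bins, n_time_frames)
        x = self.cnn(spec)                     # (batch, 32, n_freq_bins // 4, T)
        x = x.permute(0, 3, 1, 2).flatten(2)   # (batch, T, feat_dim)
        x, _ = self.lstm(x)                    # (batch, T, 2 * hidden_size)
        return torch.sigmoid(self.head(x))     # per-frame USV probability

# Example: score a spectrogram chunk (assumed 64 frequency bins x 100 time frames)
model = HybridCNNLSTM()
probs = model(torch.randn(1, 1, 64, 100))      # -> shape (1, 100, 1)
```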
To get started, download the app and follow the instructions in the manual:
Manual: https://abrasive-lightyear-f6a.notion.site/HybridMouse-18a2b4e869ea492f843a03e9046842b2
App file: https://github.com/gutzcha/HybridMouse/blob/master/HybridMouse_app.zip
You may also download the sample files to play around with the app.
Please consider citing the main article:
@ARTICLE{10.3389/fnbeh.2021.810590,
  AUTHOR={Goussha, Yizhaq and Bar, Kfir and Netser, Shai and Cohen, Lior and Hel-Or, Yacov and Wagner, Shlomo},
  TITLE={HybridMouse: A Hybrid Convolutional-Recurrent Neural Network-Based Model for Identification of Mouse Ultrasonic Vocalizations},
  JOURNAL={Frontiers in Behavioral Neuroscience},
  VOLUME={15},
  YEAR={2022},
  URL={https://www.frontiersin.org/article/10.3389/fnbeh.2021.810590},
  DOI={10.3389/fnbeh.2021.810590},
  ISSN={1662-5153}
}