
Discrimination_of_reflected_Sound_Signal

Machine learning (ML) has advanced rapidly in the acoustics and signal processing domain in recent years, with convincing results. Statistical ML techniques detect patterns in data, which helps to identify convoluted relationships between features and to discriminate, or classify, based on those features. Binary classification is the ML technique commonly used to discriminate between two class labels. This project provides an ML-based solution for discriminating sound signals reflected from two different objects. First, the reflected time signals are pre-processed to build the dataset. Second, a quadratic time-frequency representation (QTFR) of each reflected signal is generated and features are extracted from it. Then four ML classification models, namely K-Nearest Neighbors, Random Forest, Logistic Regression, and Decision Trees, are trained and used for prediction to realize the binary classifier, or discriminator. Finally, the best-performing classifier in terms of accuracy, the Random Forest classifier, is saved for the final implementation.
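The pipeline above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the synthetic echo generator, the sampling rate, and the choice of a spectrogram (a simple quadratic time-frequency representation) as the QTFR are all assumptions made for the example, and the feature extraction is reduced to per-bin mean magnitudes.

```python
# Hedged sketch of the described pipeline: synthetic "reflected" signals,
# spectrogram-based features, four classifiers, and saving the best model.
import numpy as np
from scipy.signal import spectrogram
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
import joblib

rng = np.random.default_rng(0)
fs = 8000  # sampling rate in Hz (assumed for this illustration)

def synth_echo(freq, n=512):
    """Toy stand-in for a reflected time signal: a noisy tone."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(n)

# Two object classes, assumed here to produce echoes with different
# dominant frequencies (purely for demonstration).
signals = [synth_echo(f) for f in [500] * 100 + [1500] * 100]
labels = np.array([0] * 100 + [1] * 100)

def qtfr_features(sig):
    """Time-averaged spectrogram magnitude per frequency bin."""
    _, _, Sxx = spectrogram(sig, fs=fs, nperseg=64)
    return Sxx.mean(axis=1)

X = np.array([qtfr_features(s) for s in signals])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)

# The four classifiers named in the project description.
models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}

# Keep whichever model scores highest on the held-out split.
best_name = max(scores, key=scores.get)
joblib.dump(models[best_name], "best_discriminator.joblib")
print(best_name, scores)
```

The saved model can later be reloaded with `joblib.load("best_discriminator.joblib")` and applied to the QTFR features of a new reflected signal.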

A user-friendly GUI is also provided, which runs the same implementation code on the backend.