Added Emotion Based Music Player Project #911

Closed
17 changes: 17 additions & 0 deletions Emotion based music player/Dataset/README.md
@@ -0,0 +1,17 @@
<h2>Emotion Based Music Player</h2>

### Goal 🎯
The objective of the emotion-based music player project is to create an intelligent system that detects and analyzes users' emotions in real-time through techniques like facial recognition, voice analysis, or biosensors. Based on the detected emotional state, the player automatically curates and adjusts music playlists to enhance the user's mood and provide a personalized listening experience. The system aims to reduce the burden of manual song selection, adapt to emotional changes dynamically, and offer privacy-conscious and culturally relevant music suggestions, while giving users the flexibility to override or customize the music based on their preferences.

### Model(s) used for the Web App 🧮

The models and technologies used in the emotion-based music player project include:

1. Pretrained Keras Model (model.h5): A deep learning model, likely a Convolutional Neural Network (CNN), is loaded to predict emotions based on processed facial landmarks and hand movements.

2. Mediapipe Library: Mediapipe is used for extracting facial landmarks and hand landmarks, which serve as input features for emotion recognition. It captures key points from the user's face and hands for emotion detection.

3. Streamlit and WebRTC: Used for the web interface and real-time video streaming, capturing the user's face through a web camera for emotion recognition.

4. Putting it together, the project uses computer vision (Mediapipe) to extract facial and hand landmark data from the video stream and deep learning (Keras) to predict the emotion, which then drives the music recommendation (a minimal loading-and-inference sketch follows below).
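For orientation, here is a minimal sketch of the loading-and-inference step described above. It assumes the bundled model.h5 and labels.npy files and a flattened landmark vector built the same way as in Web apps/music.py later in this PR:

```python
import numpy as np
from keras.models import load_model

# Pretrained emotion classifier and its class names, both included in this PR.
model = load_model("model.h5")
labels = np.load("labels.npy")

def predict_emotion(landmark_vector):
    # landmark_vector: flattened face + hand landmark offsets, shape (1, n_features).
    probs = model.predict(landmark_vector)
    return labels[np.argmax(probs)]  # emotion name string stored in labels.npy
```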

Binary file added Emotion based music player/Dataset/emotion.npy
Binary file not shown.
Binary file added Emotion based music player/Dataset/labels.npy
Binary file not shown.
Binary file added Emotion based music player/Images/Capture.png
Binary file not shown.
Binary file added Emotion based music player/Images/Information.png
Binary file not shown.
Binary file added Emotion based music player/Images/Output.png
Binary file not shown.
Binary file added Emotion based music player/Images/emotion.jpg
Binary file not shown.
Binary file added Emotion based music player/Images/open page.png
Binary file not shown.
49 changes: 49 additions & 0 deletions Emotion based music player/Model/README.md
@@ -0,0 +1,49 @@
Project Title: Emotion-Based Music Player
🎯 Goal
---The main goal of this project is to create a web application that recommends music based on the user's emotions. This is achieved by using a model that classifies different emotions.

🧵 Dataset
---The dataset used in this project consists of emotion.npy and labels.npy, NumPy (.npy) array files. In the web app, labels.npy supplies the emotion class names that the model's predictions index into, and emotion.npy stores the most recently detected emotion.

🧾 Description
---This project utilizes MediaPipe, Keras, OpenCV, and Streamlit to build a web application. The application captures webcam input to detect emotions and recommend music accordingly. The project is explained in detail in a video linked in the README.
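As a rough illustration of that flow, here is a minimal, hypothetical Streamlit + streamlit-webrtc skeleton of the webcam capture loop (class and key names are illustrative; the full implementation is in Web apps/music.py later in this PR):

```python
import av
import streamlit as st
from streamlit_webrtc import webrtc_streamer

class FrameProcessor:
    def recv(self, frame):
        img = frame.to_ndarray(format="bgr24")  # webcam frame as a BGR NumPy array
        # ... run landmark extraction and emotion prediction on `img` here ...
        return av.VideoFrame.from_ndarray(img, format="bgr24")

st.header("Emotion Based Music Recommender")
webrtc_streamer(key="capture", video_processor_factory=FrameProcessor)
```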

🧮 What I Had Done!
---Developed a model to classify emotions, reusing code from a live emoji project.
---Created a web application using Streamlit and Streamlit-webrtc for webcam capture.
---Integrated the emotion classification model into the web application for music recommendation.

🚀 Models Implemented
1. Pretrained Deep Learning Model (model.h5):

-->Likely a Convolutional Neural Network (CNN) used for emotion recognition, especially in processing facial and hand landmarks for detecting user emotions.
-->The model is loaded using Keras' load_model function, indicating it's a neural network trained on emotion-labeled data.<br>
2. Mediapipe's Holistic and Hands Models:

-->Mediapipe Holistic: Used for detecting key facial and body landmarks.
-->Mediapipe Hands: Used for detecting hand landmarks to infer gestures that may also be used for emotion recognition (a landmark-extraction sketch follows below).
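For reference, here is a minimal sketch of turning Holistic landmarks into the flat feature vector the classifier expects, mirroring the preprocessing in Web apps/music.py (the relative-offset landmark indices are the ones used there):

```python
import cv2
import numpy as np
import mediapipe as mp

holis = mp.solutions.holistic.Holistic()

def landmarks_to_features(bgr_frame):
    res = holis.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    feats = []
    if res.face_landmarks:
        # Face landmark coordinates relative to landmark 1, as in music.py.
        for p in res.face_landmarks.landmark:
            feats += [p.x - res.face_landmarks.landmark[1].x,
                      p.y - res.face_landmarks.landmark[1].y]
    for hand in (res.left_hand_landmarks, res.right_hand_landmarks):
        if hand:
            # Hand landmark coordinates relative to landmark 8 (index fingertip).
            for p in hand.landmark:
                feats += [p.x - hand.landmark[8].x, p.y - hand.landmark[8].y]
        else:
            feats += [0.0] * 42  # 21 hand landmarks x 2 coordinates, zero-padded
    return np.array(feats).reshape(1, -1)
```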

📚 Libraries Needed
MediaPipe
Keras
OpenCV
Streamlit
Streamlit-webrtc

📊 Exploratory Data Analysis Results
Exploratory Data Analysis (EDA) involved examining the distribution of the dataset, visualizing sample images, and understanding the different classes of facial expressions. The dataset was split into training and validation sets so the model's performance could be evaluated effectively.
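As a quick, hypothetical way to inspect the bundled .npy files before training or running the app (file names come from the Dataset folder; the printed shapes depend on how the arrays were saved):

```python
import numpy as np

labels = np.load("labels.npy")
emotion = np.load("emotion.npy")

print("labels.npy:", labels.shape, labels.dtype)    # emotion class names used by the model
print("emotion.npy:", emotion.shape, emotion.dtype) # last detected emotion written by the app
```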

📈 Performance of the Models based on the Accuracy Scores
---Final Accuracy = 58.33%, Validation Accuracy = 54.99%


📢 Conclusion
---The emotion-based music player successfully integrates deep learning and computer vision techniques to create a personalized, emotion-driven music experience. By leveraging facial expression and hand gesture recognition through Mediapipe, combined with a pretrained deep learning model, the system can detect the user's emotional state in real-time. This allows for dynamic music recommendations that adapt to the user's mood, enhancing the listening experience.

The project demonstrates how artificial intelligence can transform user interaction with media, making it more intuitive, personalized, and engaging. With future improvements, such as more advanced emotion recognition and enhanced music recommendations, this system could revolutionize how users interact with digital content, making it more emotionally responsive and contextually aware.

✒️ Your Signature
Nadipudi Shanmukhi satya
github : https://github.com/shanmukhi-developer<br>
linkedin : https://www.linkedin.com/in/nadipudi-shanmukhi-satya-6904a0242/<br>
Binary file added Emotion based music player/Model/model.h5
Binary file not shown.
13 changes: 13 additions & 0 deletions Emotion based music player/Requirements.txt
@@ -0,0 +1,13 @@
*Requirements for Running the Project*

Python 3.x
Python libraries:
1. streamlit
2. streamlit-webrtc
3. opencv-python
4. mediapipe
5. keras
6. numpy

-->A pre-trained Keras model (model.h5) and a NumPy labels file (labels.npy), both included in the project.
-->A webcam to capture live video input (a quick load check is sketched below).
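As an optional sanity check that the dependencies and bundled files are in place, a short, hypothetical snippet (assumes it is run from the folder containing model.h5 and labels.npy):

```python
# Verify that the required libraries import and the bundled files load.
import cv2
import mediapipe
import numpy as np
import streamlit
import streamlit_webrtc
from keras.models import load_model

model = load_model("model.h5")
labels = np.load("labels.npy")
print("OK:", model.input_shape, "input,", len(labels), "emotion classes")
```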
17 changes: 17 additions & 0 deletions Emotion based music player/Web apps/README.md
@@ -0,0 +1,17 @@
<h2>Emotion Based Music Player</h2>
Goal 🎯
---The main goal of this project is to create a web application that recommends music based on the user's emotions. This is achieved by using a model that classifies different emotions.

Model(s) used for the Web App 🧮
1. Pretrained Deep Learning Model (model.h5):

-->Likely a Convolutional Neural Network (CNN) used for emotion recognition, especially in processing facial and hand landmarks for detecting user emotions.
-->The model is loaded using Keras' load_model function, indicating it's a neural network trained on emotion-labeled data.<br>
2. Mediapipe's Holistic and Hands Models:

-->Mediapipe Holistic: Used for detecting key facial and body landmarks.
-->Mediapipe Hands: Used for detecting hand landmarks to infer gestures that may also be used for emotion recognition.


Signature ✒️
Nadipudi Shanmukhi satya
102 changes: 102 additions & 0 deletions Emotion based music player/Web apps/music.py
@@ -0,0 +1,102 @@
import streamlit as st
from streamlit_webrtc import webrtc_streamer
import av
import cv2
import numpy as np
import mediapipe as mp
from keras.models import load_model
import webbrowser

model = load_model("model.h5")
label = np.load("labels.npy")
holistic = mp.solutions.holistic
hands = mp.solutions.hands
holis = holistic.Holistic()
drawing = mp.solutions.drawing_utils

st.header("Emotion Based Music Recommender")

if "run" not in st.session_state:
st.session_state["run"] = "true"

try:
emotion = np.load("emotion.npy")[0]
except:
emotion=""

if not(emotion):
st.session_state["run"] = "true"
else:
st.session_state["run"] = "false"

class EmotionProcessor:
    def recv(self, frame):
        frm = frame.to_ndarray(format="bgr24")

        ##############################
        frm = cv2.flip(frm, 1)

        res = holis.process(cv2.cvtColor(frm, cv2.COLOR_BGR2RGB))

        lst = []

        if res.face_landmarks:
            # Face landmark coordinates relative to landmark 1.
            for i in res.face_landmarks.landmark:
                lst.append(i.x - res.face_landmarks.landmark[1].x)
                lst.append(i.y - res.face_landmarks.landmark[1].y)

            if res.left_hand_landmarks:
                for i in res.left_hand_landmarks.landmark:
                    lst.append(i.x - res.left_hand_landmarks.landmark[8].x)
                    lst.append(i.y - res.left_hand_landmarks.landmark[8].y)
            else:
                # No left hand detected: zero-pad (21 landmarks x 2 coordinates).
                for i in range(42):
                    lst.append(0.0)

            if res.right_hand_landmarks:
                for i in res.right_hand_landmarks.landmark:
                    lst.append(i.x - res.right_hand_landmarks.landmark[8].x)
                    lst.append(i.y - res.right_hand_landmarks.landmark[8].y)
            else:
                for i in range(42):
                    lst.append(0.0)

            lst = np.array(lst).reshape(1, -1)

            # Classify the landmark vector and overlay the predicted emotion on the frame.
            pred = label[np.argmax(model.predict(lst))]

            print(pred)
            cv2.putText(frm, pred, (50, 50), cv2.FONT_ITALIC, 1, (255, 0, 0), 2)

            np.save("emotion.npy", np.array([pred]))

        drawing.draw_landmarks(frm, res.face_landmarks, holistic.FACEMESH_TESSELATION,
                               landmark_drawing_spec=drawing.DrawingSpec(color=(0, 0, 255), thickness=-1, circle_radius=1),
                               connection_drawing_spec=drawing.DrawingSpec(thickness=1))
        drawing.draw_landmarks(frm, res.left_hand_landmarks, hands.HAND_CONNECTIONS)
        drawing.draw_landmarks(frm, res.right_hand_landmarks, hands.HAND_CONNECTIONS)

        ##############################

        return av.VideoFrame.from_ndarray(frm, format="bgr24")

lang = st.text_input("Language")
singer = st.text_input("singer")
choose = st.text_input("Select")
if lang and singer and st.session_state["run"] != "false":
    webrtc_streamer(key="key", desired_playing_state=True,
                    video_processor_factory=EmotionProcessor)

btn = st.button("Recommend me ")

if btn:
    if not emotion:
        st.warning("Please let me capture your emotion first")
        st.session_state["run"] = "true"
    else:
        if choose == "youtube":
            webbrowser.open(f"https://www.youtube.com/results?search_query={lang}+{emotion}+song+{singer}")
        else:
            webbrowser.open(f"https://open.spotify.com/search/{lang}%20{emotion}%20songs%20{singer}")
        # Clear the stored emotion so a fresh one is captured for the next request.
        np.save("emotion.npy", np.array([""]))
        st.session_state["run"] = "false"
8 changes: 8 additions & 0 deletions Emotion based music player/Web apps/tempCodeRunnerFile.py
@@ -0,0 +1,8 @@
import streamlit as st
from streamlit_webrtc import webrtc_streamer
import av
import cv2
import numpy as np
import mediapipe as mp
from keras.models import load_model
import webbrowser