Emotion Based Music Recommendation System
Micah Mariam Joseph
Department of Engineering
Amity University
Dubai, United Arab Emirates
micahJ@amitydubai.ae
Diya Treessa Varghese
Department of Engineering
Amity University
Dubai, United Arab Emirates
diyaV@amitydubai.ae
Lipsa Sadath
Department of Engineering
Amity University
Dubai, United Arab Emirates
lsadath@amityuniversity.ae
Ved Prakash Mishra
Department of Engineering
Amity University
Dubai, United Arab Emirates
vmishra@amityuniversity.ae
Abstract— People often find it difficult to decide which music to listen to from large collections. A variety of recommendation frameworks have been developed for topics such as music, festivals, and celebrations, some of which depend on the user's mood. The main goal of our music recommendation system is to offer suggestions that match the user's preferences. Understanding the user's current facial expression allows us to estimate their emotional state, since humans frequently convey their intentions through facial expressions. More than 60% of users have, at some point, felt that their playlists contain so many songs that they are unable to choose one to play. A recommendation system can help the user decide which music to listen to and thereby reduce this stress. This work studies how to track and match the user's mood using a face detection mechanism, saving the user the time spent searching for music. Emotion detection is performed using deep learning, a well-known approach in the facial recognition arena; a convolutional neural network (CNN) algorithm is used for the facial recognition. We use Streamlit, an open-source application framework, to turn the model into a web application. The user's expression is captured through a webcam, and songs that match his or her mood are then suggested and played.
Keywords— emotion recognition, convolutional neural network, Streamlit.
I. INTRODUCTION
Nowadays, music services make vast amounts of music easily accessible. People are constantly attempting to improve music organization and search management in order to ease the difficulty of selection and make discovering new music easier. Recommendation systems are becoming increasingly common, allowing users to choose suitable music for any circumstance. Music recommendations can be used in a range of situations, including music therapy, sports, studying, relaxing, and supporting mental and physical activity [1]. However, in terms of personalization and emotion-driven recommendations, there is still a gap. Music has a massive influence on humans: it affects mood, relaxation, and mental and physical work, and it helps relieve stress. Music therapy can be used in a variety of clinical contexts and practices to help people feel better. In this project, we create a web application that recommends music based on emotions. Emotions influence how people live and interact with one another; at times it can seem that we are controlled by them. The emotion we are experiencing at any given moment affects the decisions we make, the actions we undertake, and the impressions we form. Neutral, angry, disgust, fear, glad, sad, and surprise are the seven primary universal emotions, and the look on a person's face can reveal them. This study presents a method for detecting these basic universal emotions from frontal facial expressions. After implementing the facial recognition machine learning model, we turn it into a web application using Streamlit. Emotion detection is performed using deep learning, a well-known approach in the pattern recognition arena. The Keras library is used together with the Convolutional Neural Network (CNN) algorithm. A CNN is an artificial neural network used in machine learning; among other things, CNNs can be used to detect objects, perform facial recognition, and process images [2].
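As a concrete illustration of the approach described above, the sketch below shows a minimal Keras CNN of the kind commonly used for seven-class facial emotion recognition on 48x48 grayscale face crops (as in the FER-2013 dataset). The exact architecture, layer sizes, and training data used in this work are not specified in this section, so every detail below is an illustrative assumption rather than the authors' implementation.

```python
# Minimal sketch of a CNN emotion classifier in Keras.
# Assumptions: 48x48 grayscale face crops and seven emotion classes;
# the paper's exact architecture and dataset are not given here.
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # neutral, angry, disgust, fear, glad, sad, surprise

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=NUM_CLASSES):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # regularization against overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage:
# model = build_emotion_cnn()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=30)
```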
II. LITERATURE REVIEW
Humans frequently convey their emotions in a variety of ways, such as hand gestures, voice, and tone, but they do so mostly through facial expressions. An expert can determine the emotions another person is experiencing by observing or examining them. Nevertheless, as technology advances, machines are attempting to become smarter and to operate in an increasingly human-like way. By training a computer on human emotions, the machine becomes capable of performing analysis and reacting like a human. By enabling precise expression patterns with improved competence and error-free emotion calculation, data mining can help machines discover and act more like humans. A music player that depends on emotions takes less time to find music that the user can resonate with. People typically have a lot of music on their playlists, which makes it difficult to choose an appropriate song. Random music does not make the user feel better, so with the aid of this technology, users can have songs played automatically based on their mood [3]. The webcam records the user's image, and the pictures are stored. The system records the user's varied expressions to assess their emotions and select the apt music. The ability to read a person's mood from their expression is important. A webcam is used to capture the facial expressions, and this input can be used, among other things, to extract data from which a person's attitude can be inferred. Songs are then generated using the emotion inferred from this input, which removes the tedious job of manually classifying songs into various lists. The main objective of the Facial Expression Based Music Recommender is to scan and analyze this data and then suggest music in line with the user's mood [4]. A sketch of this capture and recommendation loop is given after this paragraph.
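The workflow just described (capture a frame from the webcam, infer the emotion, then pick songs for that emotion) can be sketched as below. The trained model file name, the OpenCV Haar cascade face detector, the class label order, and the emotion-to-playlist mapping are all assumptions made for illustration; they are not details taken from this paper.

```python
# Sketch of the capture -> emotion -> recommendation loop described above.
# The model path, the Haar cascade detector, and the playlist mapping are
# illustrative assumptions only.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Class order must match the order used when training the model (assumption).
EMOTIONS = ["neutral", "angry", "disgust", "fear", "glad", "sad", "surprise"]
PLAYLISTS = {e: f"playlists/{e}.m3u" for e in EMOTIONS}  # hypothetical mapping

model = load_model("emotion_cnn.h5")  # assumed path to the trained CNN
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def recommend_from_webcam():
    cam = cv2.VideoCapture(0)          # open the default webcam
    ok, frame = cam.read()             # grab a single frame
    cam.release()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                                  # first detected face
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1))[0]   # emotion probabilities
    emotion = EMOTIONS[int(np.argmax(probs))]
    return emotion, PLAYLISTS[emotion]
```

In a Streamlit front end such as the one mentioned in the introduction, the image could instead be supplied by a widget such as st.camera_input and the recommended playlist rendered with st.write; the exact user interface used by the authors is not described in this section.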
By utilizing image processing, we have developed an
emotion-based music system that would allow the user to