Facial Emotion Recognition System Using CNN for Song Mapping
DOI: https://doi.org/10.3126/injet.v1i2.66709

Abstract
This project introduces a system that integrates Facial Emotion Recognition (FER) with music mapping to enhance human-computer interaction. A custom Convolutional Neural Network (CNN), trained on the FER2013 dataset, classifies emotions into four categories (happy, sad, neutral, and angry), using Haar Cascade for face detection and grayscale conversion to prepare images for CNN input. The custom CNN achieves a testing accuracy of 77.23%, surpassing established models such as VGG16 (55.87%) and ResNet50 (62.76%). The system identifies emotions quickly and recommends songs from the "278k Emotion Labeled Spotify Songs" playlist, with the goal of improving user satisfaction. The comparison with VGG16 and ResNet50 highlights the strengths of the custom architecture and suggests promising directions for FER-based music recommendation systems.
License
This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use.