The human voice is remarkably versatile and conveys a wide range of emotions. Emotion in speech offers extra insight into human behavior: by analyzing it, we can better understand people's motives, whether they are unhappy customers or cheering fans. Humans can readily determine a speaker's emotion, but emotion recognition through machine learning remains an open research area. We begin our study of emotion in speech by detecting a single emotion. Specifically, we investigate the classification of happiness, sadness, anger, and other related emotions in speech samples. In our analysis, we start by describing the data used. We then discuss our methodology, investigating which algorithms best select features relevant to predicting emotion, and we compare multiple machine learning models for classifying emotion.
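The two-stage pipeline described above, selecting informative features and then classifying with a learned model, might be sketched as follows. This is purely illustrative and not the book's actual method: the toy feature vectors, the spread-based selection rule, and the nearest-centroid classifier are all assumptions chosen to keep the example self-contained; real systems would use acoustic features (pitch, energy, spectral coefficients) and a full machine learning library.

```python
# Illustrative sketch only: pick the feature whose per-emotion class means
# differ the most, then classify new samples with a nearest-centroid rule.
# All data and names here are hypothetical.
from statistics import mean

# Toy samples: (feature vector, emotion label).
data = [
    ([0.9, 0.2], "angry"), ([0.8, 0.5], "angry"),
    ([0.1, 0.4], "sad"),   ([0.2, 0.1], "sad"),
]

def select_feature(samples):
    """Return the index of the feature whose class means are most spread out."""
    n_features = len(samples[0][0])
    def spread(i):
        by_label = {}
        for feats, label in samples:
            by_label.setdefault(label, []).append(feats[i])
        means = [mean(vals) for vals in by_label.values()]
        return max(means) - min(means)
    return max(range(n_features), key=spread)

def train_centroids(samples, idx):
    """Compute the mean of the selected feature for each emotion."""
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats[idx])
    return {label: mean(vals) for label, vals in by_label.items()}

def classify(centroids, feats, idx):
    """Assign the emotion whose centroid is nearest to the selected feature."""
    return min(centroids, key=lambda lab: abs(feats[idx] - centroids[lab]))

idx = select_feature(data)                    # feature 0 separates the classes best
centroids = train_centroids(data, idx)
print(classify(centroids, [0.85, 0.3], idx))  # prints "angry"
```

In practice the feature-selection step would score many acoustic features at once, and the classifier would be a model such as an SVM or a neural network, but the division of labor is the same.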
About the Author
Faiz ul haque Zeya (b. 1975) received his MS in Computer Science (AI) from the University of Tulsa, OK, USA. He started working on AI during his BE and then took AI courses during his MS. He has taught AI for seven years at different universities in Pakistan. His work from his BE project, cited in the book "Agents and computational autonomy", is held in around 400 libraries worldwide. He is the CEO of Transys, which develops software agents.