An attention-based ensemble model for emotion recognition in conversation
Emotion Recognition in Conversation (ERC) has become a major area of interest in Natural Language Processing (NLP). The task aims to accurately classify the emotions conveyed by speakers during a conversation. For this purpose, it is essential to capture not only the literal meaning of words but also the relationships and interactions between different modalities. Most state-of-the-art approaches focus primarily on textual data to analyze speaker dependencies and contextual information within a conversation.
This limitation restricts their ability to fully exploit information from multiple modalities, such as text and audio. Moreover, current ERC models struggle with class imbalance, performing poorly when recognizing minority-class emotions. To address these challenges, we propose an attention-based cross-modal ensemble framework for ERC.
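The abstract does not specify the framework's internals, but the cross-modal idea can be illustrated generically: one modality (here text) forms the attention queries while another (here audio) supplies keys and values, so each utterance representation becomes a weighted mix of audio frames. This is a minimal scaled dot-product sketch in NumPy, not the authors' architecture; all shapes and names are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text, audio):
    """Text features attend over audio features (a generic sketch).

    text:  (T, d) query features for T utterance tokens.
    audio: (A, d) key/value features for A audio frames.
    Returns (T, d) audio-aware text representations.
    """
    d_k = text.shape[-1]
    scores = text @ audio.T / np.sqrt(d_k)  # (T, A) similarity scores
    weights = softmax(scores, axis=-1)      # attention over audio frames
    return weights @ audio                  # weighted sum of audio features

rng = np.random.default_rng(0)
text_feats = rng.normal(size=(5, 16))   # 5 utterance tokens (hypothetical)
audio_feats = rng.normal(size=(8, 16))  # 8 audio frames (hypothetical)
fused = cross_modal_attention(text_feats, audio_feats)
print(fused.shape)  # (5, 16)
```

In practice such a block would sit inside a trained network (with learned query/key/value projections); the sketch only shows the attention mechanics that let textual context draw on acoustic cues.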
This framework addresses both the under-utilization of multi-modal information and class imbalance in emotion classification. Experiments are conducted on the benchmark MELD dataset to evaluate the performance of the proposed model. Our model achieves a weighted average accuracy (WAA) of 64.33% and a weighted F1 score (WF1) of 62.09% for ERC, outperforming state-of-the-art approaches.
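For reference, weighted F1 averages per-class F1 scores by class support (true-label frequency), which is why it is the standard metric on imbalanced ERC datasets such as MELD. The following is a generic sketch of that computation (equivalent to scikit-learn's `average='weighted'`), not the authors' evaluation code; the toy labels are invented for illustration.

```python
import numpy as np

def weighted_f1(y_true, y_pred, labels):
    # Per-class F1, weighted by each class's share of the true labels.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    total, score = len(y_true), 0.0
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (np.sum(y_true == c) / total) * f1
    return score

# Toy 3-class example with an imbalanced label distribution.
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 0, 2]
wf1 = weighted_f1(y_true, y_pred, labels=[0, 1, 2])
print(round(wf1, 4))  # 0.6667
```

Because the weighting follows the true-label distribution, majority classes dominate the score; reporting WF1 alongside WAA, as the paper does, gives a fuller picture on skewed emotion datasets.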
School
- Loughborough University, London
Source
- IEEE ICPRS 2024 - 14th International Conference on Pattern Recognition Systems
Publisher
- IEEE
Version
- AM (Accepted Manuscript)
Rights holder
- © IEEE
Publisher statement
- Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Acceptance date
- 2024-04-26
Copyright date
- 2024
ISBN
- 9798350375657
Publisher version
Language
- en