Multimodal music emotion recognition method based on multi-source data fusion

Published online: pp. 187–194. https://doi.org/10.1504/IJRIS.2024.139838

To address the low recognition accuracy and long recognition time of traditional multimodal music emotion recognition methods, a multimodal music emotion recognition method based on multi-source data fusion is proposed. First, a multimodal music emotion model is built. TF-IDF is then used to extract emotion features from the lyric modality, and Mel-frequency cepstral coefficients (MFCCs) are used to extract emotion features from the audio modality. After preprocessing, the extracted lyric and audio features are fused, and the fusion result is used to compute a song's probability distribution over the emotion space. The emotion category with the highest probability is taken as the emotion category of the music, completing the multimodal music emotion recognition. Simulation results show that the proposed method achieves higher accuracy and shorter recognition time than traditional multimodal music emotion recognition methods.
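The pipeline the abstract describes can be illustrated with a short sketch. The Python code below is a hedged illustration only, not the authors' implementation: it stands in scikit-learn's TfidfVectorizer for the lyric modality, librosa MFCCs for the audio modality, feature-level concatenation for the multi-source fusion step, and multinomial logistic regression (a softmax classifier) to produce the probability distribution over an assumed four-class emotion space. The lyrics, synthetic audio, labels, and emotion names are toy placeholders.

```python
# Minimal sketch of the described pipeline, under assumed details:
# TF-IDF lyric features + mean MFCC audio features -> concatenation fusion
# -> softmax probabilities over emotion classes -> argmax as the predicted emotion.
import numpy as np
import librosa
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["happy", "sad", "calm", "angry"]  # assumed emotion space

# --- Toy corpus: placeholder lyrics, synthetic audio clips, and labels ---
rng = np.random.default_rng(0)
lyrics = [
    "sunshine dancing smile joy",   # happy
    "tears lonely night goodbye",   # sad
    "quiet river gentle breeze",    # calm
    "fire rage scream broken",      # angry
] * 4
labels = np.array([0, 1, 2, 3] * 4)
sr = 22050  # sample rate; each clip is one second of noise scaled by class
audio = [rng.standard_normal(sr) * (0.1 + 0.2 * lab) for lab in labels]

# --- Lyric modality: TF-IDF feature vectors ---
tfidf = TfidfVectorizer()
X_lyric = tfidf.fit_transform(lyrics).toarray()

# --- Audio modality: one mean MFCC vector per clip ---
X_audio = np.stack(
    [librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1) for y in audio]
)

# --- Multi-source fusion: feature-level concatenation of both modalities ---
X_fused = np.hstack([X_lyric, X_audio])

# --- Softmax classifier: probability distribution over the emotion space ---
clf = LogisticRegression(max_iter=1000).fit(X_fused, labels)
proba = clf.predict_proba(X_fused[:1])[0]
print({e: round(p, 3) for e, p in zip(EMOTIONS, proba)})
print("predicted emotion:", EMOTIONS[int(np.argmax(proba))])
```

Concatenation is only one possible fusion strategy; the paper's fusion step may weight or transform the modalities differently before computing the emotion-space distribution.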

Keywords

multi-source data fusion, multimodal music, emotion recognition, term frequency-inverse document frequency, TF-IDF, Mel-frequency cepstral coefficient, MFCC