LDR  02921nas a2200373 4500
008  d
653 10 $a Analysis models
653 10 $a Arts computing
653 10 $a Audio acoustics
653 10 $a Audio features
653 10 $a Digital humanities
653 10 $a Digital humanity
653 10 $a Intangible cultural heritages
653 10 $a Modal analysis
653 10 $a Multi-modal
653 10 $a Multimodal sentiment analyse
653 10 $a Music
653 10 $a Neural networks
653 10 $a Neural-networks
653 10 $a sentiment analysis
653 10 $a Spectra features
653 10 $a Spectrograms
653 10 $a Spectrographs
653 10 $a Spectrum analysis
653 10 $a Intangible cultural heritage
653 10 $a multimodal sentiment analysis
653 10 $a neural network
100 1  $a Tao Fan
700 1  $a Hao Wang
245 00 $a Multimodal sentiment analysis of intangible cultural heritage songs with strengthened audio features-guided attention
856    $u https://www.scopus.com/inward/record.uri?eid=2-s2.0-85138347518&doi=10.1177%2f01655515221114454&partnerID=40&md5=3e9b97624720dd1472a96bd8f87f520d
520 3  $a Intangible cultural heritage (ICH) songs convey folk lives and stories from different communities and nations through touching melodies and lyrics, which are rich in sentiment. Current research on song sentiment analysis relies mainly on lyrics, audio, or lyric-audio combinations. Recent studies have shown that deep spectrum features extracted from spectrograms generated from the audio perform well in several speech-based tasks, yet studies that incorporate spectrum features into multimodal sentiment analysis of songs remain scarce. Hence, we propose to combine the audio, lyric and spectrogram in a tri-modal fusion to conduct multimodal sentiment analysis of ICH songs. In addition, the correlations and interactions between different modalities have not been fully considered. We therefore propose a multimodal song sentiment analysis model (MSSAM) with a strengthened audio features-guided attention (SAFGA) mechanism, which learns intra- and inter-modal information effectively. First, we obtain strengthened audio features by fusing acoustic and spectrum features. The strengthened audio features are then used, through SAFGA, to guide the distribution of attention weights over the words of the lyric, so that the model focuses on sentiment-bearing words related to the sentiment of the strengthened audio features, capturing modal interactions and complementary information. We take two world-level ICH listings, Jingju (京剧) and Kunqu (昆曲), as examples and build sentiment analysis datasets for them. We compare the proposed model with state-of-the-art baselines on the Jingju and Kunqu datasets, and the experimental results demonstrate its superiority.
022    $a 0165-5515 (ISSN)
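
Illustrative note: the abstract describes attention over lyric words guided by a "strengthened" audio representation obtained by fusing acoustic and deep spectrum features. The following minimal PyTorch sketch shows only that general idea; every module name, dimension, and the fusion/classifier detail is an assumption made here for illustration, not the authors' MSSAM/SAFGA implementation.

    # Minimal sketch of audio-guided attention over lyric word embeddings.
    # Assumptions (not from the paper): feature dimensions, tanh fusion,
    # dot-product attention scores, and a simple linear classifier.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AudioGuidedAttention(nn.Module):
        def __init__(self, acoustic_dim=128, spectrum_dim=128,
                     word_dim=300, hidden_dim=256, num_classes=2):
            super().__init__()
            # Fuse acoustic and deep spectrum features into one
            # "strengthened" audio representation.
            self.audio_fusion = nn.Linear(acoustic_dim + spectrum_dim, hidden_dim)
            # Project lyric word embeddings into the same space.
            self.word_proj = nn.Linear(word_dim, hidden_dim)
            self.classifier = nn.Linear(hidden_dim * 2, num_classes)

        def forward(self, acoustic, spectrum, words, word_mask):
            # acoustic: (B, A), spectrum: (B, S), words: (B, T, W),
            # word_mask: (B, T) with 1 for real tokens, 0 for padding.
            audio = torch.tanh(self.audio_fusion(torch.cat([acoustic, spectrum], dim=-1)))  # (B, H)
            keys = torch.tanh(self.word_proj(words))                                        # (B, T, H)
            # The fused audio vector acts as the query that scores each lyric word.
            scores = torch.bmm(keys, audio.unsqueeze(-1)).squeeze(-1)                       # (B, T)
            scores = scores.masked_fill(word_mask == 0, float("-inf"))
            weights = F.softmax(scores, dim=-1)                                             # attention over words
            lyric = torch.bmm(weights.unsqueeze(1), keys).squeeze(1)                        # (B, H)
            # Combine the audio-guided lyric summary with the audio features.
            return self.classifier(torch.cat([audio, lyric], dim=-1))

    # Example call with random tensors (batch of 4 songs, 20 lyric tokens):
    # model = AudioGuidedAttention()
    # logits = model(torch.randn(4, 128), torch.randn(4, 128),
    #                torch.randn(4, 20, 300), torch.ones(4, 20))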