008       2022
653       Digital humanities
653       Image classification
653       Multimodal Classification
100 1     F. Tao
700 1     W. Hao
700 1     L. Yueyan
700 1     D. Sanhong
245 00    Classifying Images of Intangible Cultural Heritages with Multimodal Fusion
856       https://www.scopus.com/inward/record.uri?eid=2-s2.0-85130441432&doi=10.11925%2finfotech.2096-3467.2021.0911&partnerID=40&md5=40d47b33d12e0a4ab78f48469e5b9ba5
300       329-337
490 0     v. 6
520 3     [Objective] This paper proposes a new method that combines images and textual descriptions, aiming to improve the classification of Intangible Cultural Heritage (ICH) images. [Methods] We built the new model with multimodal fusion; it includes a fine-tuned deep pre-trained model for extracting visual semantic features, a BERT model for extracting textual features, a fusion layer for concatenating the visual and textual features, and an output layer for predicting labels. [Results] We evaluated the proposed model on the national ICH project of New Year Prints, classifying Mianzhu Prints, Taohuawu Prints, Yangjiabu Prints, and Yangliuqing Prints. We found that fine-tuning the convolutional layers strengthened the visual semantic features of the ICH images, and the F1 value for classification reached 72.028%. Compared with the baseline models, our method yielded the best results, with an F1 value of 77.574%. [Limitations] The proposed model was only tested on New Year Prints and needs to be extended to more ICH projects in the future. [Conclusions] Adding textual description features can improve the performance of ICH image classification, and fine-tuning the convolutional layers of the deep pre-trained image model can improve the extraction of visual semantic features.
022       2096-3467 (ISSN)
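The abstract describes a concatenation-based fusion of pre-trained CNN image features and BERT text features followed by a classification head over the four New Year Print styles. The sketch below illustrates that general architecture only; it assumes PyTorch, torchvision, and Hugging Face Transformers, and the specific choices (ResNet-50 backbone, bert-base-chinese, which convolutional layers are fine-tuned, hidden sizes, dropout) are illustrative assumptions, not the configuration reported in the paper.

# Minimal sketch of a concatenation-based multimodal fusion classifier.
# Backbone choices and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models
from transformers import BertModel, BertTokenizer


class MultimodalFusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 4, text_model_name: str = "bert-base-chinese"):
        super().__init__()
        # Visual branch: pre-trained CNN with its classification head removed.
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.visual_encoder = nn.Sequential(*list(resnet.children())[:-1])  # -> (B, 2048, 1, 1)
        # Fine-tune only the last convolutional block (index 7 = layer4); freezing
        # the earlier layers is an assumption that mirrors the fine-tuning idea.
        for name, param in self.visual_encoder.named_parameters():
            param.requires_grad = name.startswith("7.")
        # Textual branch: BERT encoder for the ICH text descriptions.
        self.text_encoder = BertModel.from_pretrained(text_model_name)
        # Fusion + output: concatenate visual (2048-d) and textual (768-d) features,
        # then predict the label with a small classification head.
        self.classifier = nn.Sequential(
            nn.Linear(2048 + self.text_encoder.config.hidden_size, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, images, input_ids, attention_mask):
        visual = self.visual_encoder(images).flatten(1)                       # (B, 2048)
        text = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).pooler_output  # (B, 768)
        fused = torch.cat([visual, text], dim=1)                              # (B, 2816)
        return self.classifier(fused)                                         # logits over the print styles


if __name__ == "__main__":
    # Toy forward pass; the Chinese description is a made-up example.
    tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
    model = MultimodalFusionClassifier()
    enc = tokenizer(["杨柳青年画，题材为门神"], padding=True, return_tensors="pt")
    logits = model(torch.randn(1, 3, 224, 224), enc["input_ids"], enc["attention_mask"])
    print(logits.shape)  # torch.Size([1, 4])

Concatenation is the simplest fusion strategy consistent with the abstract's description of a fusion layer that joins visual and textual features before the output layer.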