Author
Keywords
Abstract

Intangible Cultural Heritage (ICH), comprising a variety of immaterial manifestations, bears witness to human creativity and wisdom across long histories. The rapid development of digital technologies has accelerated the documentation of ICH, generating a vast amount of heterogeneous yet fragmented data. To address this fragmentation, existing studies mainly adopt knowledge graphs (KGs), which provide rich knowledge representation. However, most KGs are text-based and text-derived; they cannot supply related images or support downstream multimodal tasks, which also makes it harder for the public, especially those without prior ICH knowledge, to form a visual perception of ICH and comprehend it fully. To address this gap, taking the Chinese nation-level ICH list as an example, we propose constructing a large-scale and comprehensive Multimodal Knowledge Graph (CICHMKG) that combines text and image entities from multiple data sources, and we present a practical construction framework. In addition, to select representative images for ICH entities, we propose a method that combines a denoising algorithm (CNIFA) with a series of selection criteria, exploiting global and local visual features of images and textual features of their captions. Extensive empirical experiments demonstrate its effectiveness. Finally, we construct the CICHMKG, consisting of 1,774,005 triples, and visualize it to facilitate interaction and help the public explore ICH in depth.
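
The abstract does not detail CNIFA or the selection criteria, so the sketch below is only a hypothetical illustration of the general idea it describes: ranking candidate images for an ICH entity by combining a global visual score with an image-caption textual score. The function names, the visual prototype, and the weight alpha are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's CNIFA): rank candidate images for an
# ICH entity by mixing global visual agreement with caption-text relevance.
# Feature extractors are abstracted as callables returning NumPy vectors.
from typing import Callable, List, Tuple
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_images(
    entity_text: str,
    candidates: List[Tuple[np.ndarray, str]],   # (image feature, caption) pairs
    text_encoder: Callable[[str], np.ndarray],  # assumed textual feature extractor
    prototype: np.ndarray,                      # assumed "clean" visual prototype for the entity
    alpha: float = 0.6,                         # assumed weight between modalities
) -> List[int]:
    """Return candidate indices sorted from most to least representative."""
    entity_vec = text_encoder(entity_text)
    scores = []
    for img_vec, caption in candidates:
        visual = cosine(img_vec, prototype)                   # global visual agreement
        textual = cosine(text_encoder(caption), entity_vec)   # caption relevance to the entity
        scores.append(alpha * visual + (1 - alpha) * textual)
    return list(np.argsort(scores)[::-1])
```

In such a scheme, low-scoring candidates would be treated as noise and discarded, while the top-ranked images would be linked to the ICH entity in the multimodal knowledge graph.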

Year of Publication
2023
Journal
Heritage Science
Volume
11
Issue
1
Date Published
2023
Publication Language
English
ISBN-ISSN
2050-7445 (ISSN)
URL
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85160098451&doi=10.1186%2fs40494-023-00927-2&partnerID=40&md5=99b2b8c49466764e0ef181fb5080cbf5
DOI
10.1186/s40494-023-00927-2
Alternate Journal
Herit. Sci.