Author | |
Keywords |
|
Abstract |
The integration of cultural preservation into smart city initiatives has become increasingly vital as urban systems seek to balance technological advancement with heritage sustainability. This paper presents a computer vision-based framework for recognizing and digitally modeling the intricate skill motions involved in intangible cultural heritage (ICH), such as paper cutting, calligraphy, and embroidery. The system combines pose estimation, action segmentation, and graph-based temporal modeling to capture fine-grained spatial-temporal patterns of craft demonstrations. A dedicated ICH dataset, recorded from authentic workshops, is used to train a hybrid neural network architecture combining 2D CNNs with temporal graph convolutional networks. The recognized motion sequences are reconstructed into digital avatars for use in virtual exhibitions, educational platforms, and cultural simulation systems within smart city infrastructures. Experimental results demonstrate significant improvements over baseline models in recognition accuracy and motion fidelity. This framework provides a foundation for the intelligent integration of ICH into digital urban services, supporting long-term cultural dissemination in future smart societies. |
Proceedings title
Proceedings of SPIE - The International Society for Optical Engineering
|
URL |
https://www.scopus.com/inward/record.uri?eid=2-s2.0-105010641339&doi=10.1117%2F12.3073363&partnerID=40&md5=7e1eb0cfad3d5b7af10ef5a69f0bd22b
|
DOI |
10.1117/12.3073363
|