Title:    Physics-based keyframe selection for human motion summarization
Authors:  Athanasios Voulodimos; Ioannis Rallis; Nikolaos Doulamis
Journal:  Multimedia Tools and Applications, vol. 79, pp. 3243-3259
DOI:      10.1007/s11042-018-6935-z
ISSN:     1380-7501
URL:      https://www.scopus.com/inward/record.uri?eid=2-s2.0-85058075491&doi=10.1007%2fs11042-018-6935-z&partnerID=40&md5=0879679b72998cfafbb3ce1217177ca0
Keywords: Compact representation; Dance Analysis; Intangible cultural heritages; K-means clustering; Key frame selection; Keyframe selection; Kinematics; Motion capture data; Motion summarization; Semantic information; Semantics

Abstract: Analysis of human motion is a field of research that attracts significant interest because of its wide range of application domains. Intangible Cultural Heritage (ICH), including the performing arts and in particular dance, is one of the domains where such research is especially useful and challenging. Effective keyframe selection from motion sequences can provide an abstract and compact representation of the semantic information encoded therein, enabling useful functionality such as fast browsing, matching, and indexing of ICH content. The availability of powerful 3D motion capture sensors, along with the fact that video summarization techniques are not always applicable to the particular case of dance movement, creates the need for effective and efficient keyframe selection techniques for 3D human motion capture sequences. In this paper, we introduce two techniques: a “time-independent” method based on the k-means++ clustering algorithm for extracting prominent representative instances of a dance, and a physics-based technique that creates temporal summaries of the sequence at different levels of detail. The proposed methods are evaluated on two dance motion datasets and show promising results.
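As an illustration of the “time-independent” technique named in the abstract, the sketch below selects keyframes by k-means++ clustering of motion capture frames and keeps the frame nearest each centroid. This is a minimal sketch under assumptions, not the authors' pipeline: the flattened joint-position feature representation, the cluster count n_keyframes, and the use of scikit-learn's KMeans are illustrative choices.

    # Minimal sketch: keyframe selection via k-means++ clustering.
    # NOT the paper's exact method; features and cluster count are
    # illustrative assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    def select_keyframes(frames: np.ndarray, n_keyframes: int = 8) -> np.ndarray:
        """frames: (n_frames, n_joints * 3) flattened joint positions.
        Returns indices of the frames closest to each k-means++ centroid,
        in temporal order."""
        km = KMeans(n_clusters=n_keyframes, init="k-means++",
                    n_init=10, random_state=0).fit(frames)
        # Distance of every frame to every centroid; keep the nearest
        # frame per cluster as its representative keyframe.
        dists = km.transform(frames)          # shape (n_frames, n_keyframes)
        keyframe_idx = dists.argmin(axis=0)   # one frame index per cluster
        return np.sort(keyframe_idx)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        mocap = rng.normal(size=(500, 25 * 3))  # stand-in for a real capture
        print(select_keyframes(mocap))

Keeping the frame nearest each centroid (rather than the centroid itself) guarantees that every summary pose is a real captured pose; the clustering deliberately ignores temporal order, which is what makes the method “time-independent.”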