Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib (2017). Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(3), pp. 430-443. ISSN 0162-8828
PDF (1MB)
Official URL: http://doi.org/10.1109/TPAMI.2016.2557785
Related URL: http://dx.doi.org/10.1109/TPAMI.2016.2557785
Abstract
A video is understood by users in terms of the entities present in it. Entity Discovery is the task of building an appearance model for each entity (e.g., a person) and finding all of its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models of TC at the tracklet level. We extend the Chinese Restaurant Process (CRP) to TC-CRP, and further to the Temporally Coherent Chinese Restaurant Franchise (TC-CRF), to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data such as scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity and entity coverage. Unlike existing approaches, the proposed methods can perform online tracklet clustering on streaming videos, and can automatically reject false tracklets. Finally, we discuss entity-driven video summarization, where temporal segments of the video are selected based on the discovered entities to create a semantically meaningful summary.
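To make the temporal-coherence idea concrete, below is a minimal, illustrative Python sketch, not the paper's TC-CRP/TC-CRF inference. It assigns tracklet feature vectors (in temporal order) to entity clusters using CRP-style "rich get richer" weights, adds a bias toward the cluster of the immediately preceding tracklet, and reserves mass `alpha` for opening a new entity. The cosine-similarity threshold, the bias weight `tc_bias`, and the greedy (rather than sampled) assignment are all simplifying assumptions made here for brevity.

```python
# Illustrative sketch only: greedy, CRP-style tracklet clustering with a
# temporal-coherence bias. NOT the paper's TC-CRP/TC-CRF Gibbs sampler;
# sim_thresh, tc_bias and the greedy choice are assumed simplifications.
import numpy as np

def cluster_tracklets(features, alpha=1.0, tc_bias=2.0, sim_thresh=0.5):
    """features: (T, d) array, one feature vector per tracklet, in time order."""
    centers, counts, labels = [], [], []
    for x in features:
        scores = []
        for k, (c, n) in enumerate(zip(centers, counts)):
            sim = float(x @ c / (np.linalg.norm(x) * np.linalg.norm(c) + 1e-9))
            if sim < sim_thresh:              # appearance too dissimilar to this entity
                scores.append(0.0)
                continue
            prior = n + (tc_bias if labels and labels[-1] == k else 0.0)
            scores.append(prior * sim)        # CRP prior x crude likelihood proxy
        scores.append(alpha)                  # mass for opening a new entity ("new table")
        k = int(np.argmax(scores))            # greedy choice; a sampler would draw instead
        if k == len(centers):                 # open a new entity cluster
            centers.append(x.astype(float).copy())
            counts.append(1)
        else:                                 # update running mean of the entity's appearance
            counts[k] += 1
            centers[k] += (x - centers[k]) / counts[k]
        labels.append(k)
    return labels

# Toy usage: 6 tracklets from 2 entities, temporally contiguous runs.
feats = np.array([[1, 0], [0.9, 0.1], [0.95, 0.05], [0, 1], [0.1, 0.9], [1, 0]], float)
print(cluster_tracklets(feats))   # e.g. [0, 0, 0, 1, 1, 0]
```

The temporal-coherence bonus is what keeps temporally neighboring tracklets in the same cluster even when their appearance features are noisy; setting `tc_bias=0` reduces the sketch to a plain CRP-like assignment.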
| Item Type: | Article |
|---|---|
| Source: | Copyright of this article belongs to IEEE |
| ID Code: | 127716 |
| Deposited On: | 13 Oct 2022 11:03 |
| Last Modified: | 13 Oct 2022 11:03 |