
Measuring Semantic Similarity between Object Motions in Video Data (비디오 데이터의 객체 움직임간 의미 유사성 측정)

Author(s)
조미영
Issued Date
2007
Abstract
During the last decade, video retrieval technology has been based mainly on content. However, semantic-based video retrieval has become increasingly necessary, especially for users who can only express queries in natural language, and it has therefore attracted the attention of many researchers. Since motion is the most important semantic information for representing video events, a significant amount of event-understanding research has been carried out in various application domains.
One major goal of this study is to extract feature semantics from motion automatically and to support semantic-based motion indexing, retrieval, and management. Most current approaches to activity recognition define models for specific activity types suited to a particular domain and build procedural recognizers from dynamic models of the periodic patterns of human movement; such approaches depend heavily on the robustness of the underlying tracking.
Spatio-temporal relations are the basis for many of the selections users make when they formulate queries for semantic-based motion retrieval. Although such query languages use natural-language-like terms, the formal definitions of these relations rarely reflect the language people would use when communicating with each other. To bridge the gap between the computational models of spatio-temporal relations and people's use of motion verbs in natural language, a model of these spatio-temporal relations was calibrated against motion verbs.
In much existing research, retrieval based on spatio-temporal relations amounts to similar-trajectory retrieval; it is content-based rather than semantic-based. In this dissertation, I therefore propose a novel approach to motion recognition from the perspective of semantic meaning. The issue is addressed through a hierarchical model that explains how human language interacts with motion, and the approach is evaluated using a trajectory distance based on spatial relations to distinguish conceptual similarity, as sketched below.
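The abstract does not give the exact distance formulation, so the following is only a minimal sketch of one plausible reading: each clip is reduced to a per-frame sequence of directional spatial relations between two tracked objects, and two clips are compared with a normalized edit distance over those relation sequences. The function and variable names are illustrative, not taken from the dissertation.

```python
# Minimal sketch (not the dissertation's actual algorithm): compare two motion
# clips by the directional spatial relations between a pair of tracked objects.
import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def spatial_relations(traj_a, traj_b):
    """Map per-frame centroid pairs to the compass relation of object B w.r.t. object A."""
    relations = []
    for (xa, ya), (xb, yb) in zip(traj_a, traj_b):
        angle = math.degrees(math.atan2(yb - ya, xb - xa)) % 360
        relations.append(DIRECTIONS[int((angle + 22.5) // 45) % 8])
    return relations

def edit_distance(s, t):
    """Levenshtein distance between two relation sequences."""
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        curr = [i]
        for j, b in enumerate(t, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (a != b)))
        prev = curr
    return prev[-1]

def relation_trajectory_distance(clip1, clip2):
    """Normalized edit distance in [0, 1]; smaller means more similar motions."""
    r1 = spatial_relations(*clip1)
    r2 = spatial_relations(*clip2)
    return edit_distance(r1, r2) / max(len(r1), len(r2), 1)

# Usage: object B approaches a stationary object A from the east vs. from the west.
clip_east = ([(0, 0)] * 4, [(4, 0), (3, 0), (2, 0), (1, 0)])
clip_west = ([(0, 0)] * 4, [(-4, 0), (-3, 0), (-2, 0), (-1, 0)])
print(relation_trajectory_distance(clip_east, clip_west))  # 1.0: opposite approach directions
```

Under this reading, two clips that realize the same motion verb (e.g., "approach from the east") produce similar relation sequences and a small distance, while conceptually different motions diverge even when the raw trajectories have comparable shapes.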
In the experiments and applications, I apply the proposed approach to the semantic recognition of motions and to trajectory retrieval, and obtain satisfactory results. Extending the proposed motion-verb model with a richer vocabulary of motion verbs, so as to further narrow the gap between high-level semantics and low-level video features, is left for future work.
Alternative Author(s)
Cho Mi Young
Affiliation
College of Electronics and Information Engineering, Department of Computer Science
Department
General Graduate School, Department of Computer Science
Awarded Date
2008-02
Degree
Doctor
Publisher
Graduate School, Chosun University
Citation
조미영 (Cho Mi Young). (2007). 비디오 데이터의 객체 움직임간 의미 유사성 측정 [Measuring Semantic Similarity between Object Motions in Video Data].
Type
Dissertation
URI
https://oak.chosun.ac.kr/handle/2020.oak/7088
http://chosun.dcollection.net/common/orgView/200000236096
Appears in Collections:
General Graduate School > 4. Theses(Ph.D)
Authorize & License
  • Authorize: Open
  • Embargo: 2008-02-19