
August 2005
Master's Thesis

Semantic Video Event Description Assisted by Building Domain Knowledge

Graduate School of Chosun University
Department of Computer Science
Dan Song

Semantic Video Event Description Assisted by Building Domain Knowledge

August 2005

Graduate School of Chosun University
Department of Computer Science
Dan Song

Semantic Video Event Description Assisted by Building Domain Knowledge

Adviser: Prof. Kim Pan-Koo

This thesis is submitted in application for the degree of Master of Science.

April 2005

Graduate School of Chosun University
Department of Computer Science
Dan Song

This certifies that the master's thesis of Dan Song is approved.

Committee Chair: Professor, Chosun University
Committee Member: Professor, Chosun University
Committee Member: Professor, Chosun University

May 2005

Graduate School of Chosun University

Contents

1. Introduction
2. Related Work
3. Video Analysis Based on Domain Knowledge
   3.1 Overview
   3.2 Video Shot Detection for Domain Knowledge
   3.3 Event Representation using MPEG-7 High Level Descriptors
   3.4 Video Object Description with Low Level Features
4. Domain Knowledge Ontology Infrastructure Building
   4.1 Building Object Ontology in Domain Knowledge
   4.2 Concept Definitions in Domain Knowledge
   4.3 Video Event Representation
   4.4 Retrieval
5. Conclusions
References

Figure Contents

Figure 1. MPEG-7 Tools Application for a Video Description
Figure 2. Framework for Video Event Understanding
Figure 3. Video Modeling and Representation
Figure 4. Histogram Difference and Billiard Game's Shot Detection
Figure 5. MPEG-7 Framework
Figure 6. Conceptual Abstractions of a Video Shot Content
Figure 7. Key-frame's Moving Object Trajectory
Figure 8. Moving Object Trajectory Recording
Figure 9. Object Ontology Structure
Figure 10. Motion Detection in Key-frames
Figure 11. Key-Frames Annotation of Specific Event "The Three Ball"
Figure 12. Video Research by Semantic Content

Table Contents

Table 1. Lexicon MPEG-7 XML file
Table 2. Low-level Features in MPEG-7 XML file
Table 3. SVQL Query

ABSTRACT

Semantic Video Event Description Assisted by Building Domain Knowledge

Dan Song

Adviser: Prof. Kim Pan-Koo, Ph.D.

Department of Computer Science, Graduate School of Chosun University

The MPEG-7 visual standard under development specifies content-based descriptors that allow users or agents (or search engines) to measure similarity in images or video based on visual criteria, and can be used to efficiently identify, filter, or browse images or video based on visual content. More specifically, MPEG-7 specifies color, texture, object shape, global motion, or object motion features for this purpose. This paper outlines the aim, methodologies, and broad details of the MPEG-7 standard development for video event description. Beyond the assistance of the MPEG-7 tools, we also put forward in this paper a novel method for video event analysis and description based on Domain Knowledge. Semantic concepts in the context of the video event are described in one specific domain enriched with qualitative attributes of the semantic objects, multimedia processing approaches, and domain-independent factors: low-level features (pixel color, motion vectors, and spatio-temporal relationships). In order to apply large-scale semantic knowledge to vision problems effectively, catering to the naive user's retrieval and indexing with semantic (human) language, a few major issues are resolved in this paper. Firstly, how can we get the semantic shot for the specific Domain Knowledge? A previously existing algorithm has been adopted to solve this problem. Secondly, what visual observables should be collected? This is usually dependent on the problem domain; here, we consider one shot of a billiard game clip as the specific Domain Knowledge. Thirdly, how can these observables be translated into the semantic representation? We expose that issue from two aspects:

Firstly, video event representation using MPEG-7 high-level descriptors, which are defined in the MPEG-7 XML files. Secondly, video object motion analysis with the help of the MPEG-7 low-level descriptors (video object motion detection and moving trajectory analysis). In addition, the most important contribution of this work is exploiting the video object ontology to map MPEG-7's high-level descriptors to low-level feature descriptors which have been defined in the MPEG-7 logical structure.

1. Introduction

Nowadays, the rapid increase of the available amount of multimedia information has revealed an urgent need for developing intelligent methods for understanding and managing the conveyed information. To face such challenges, developing faster hardware or more sophisticated algorithms has become insufficient; rather, a deeper understanding of the information at the semantic level is required [1]. This results in a growing demand for efficient methods for extracting semantic information from such content. Although new multimedia standards, such as MPEG-4 and MPEG-7 [2], provide the needed functionalities to manipulate and transmit objects and metadata, their extraction, and most importantly their extraction at a semantic level, is out of the scope of the standards and is left to the content developer.

Extraction of low-level features and object recognition are important phases in developing multimedia database management systems [3].

We have seen some significant results in the literature recently, with successful implementation of several prototypes [4]. However, the lack of precise models and formats for object and system representation and the high complexity of multimedia processing algorithms make the development of fully automatic semantic multimedia analysis and management systems a challenging task. Correspondingly, at the low level, the features of moving objects have received more and more attention, because moving objects refer to semantic real-world entity definitions that are used to denote a coherent spatial region and can be automatically computed from the continuity of spatial low-level features. The difficulty, often mentioned as the semantic gap, lies in capturing concepts mapped onto a set of image and low-level features that can be automatically extracted from the raw video data. The use of Domain Knowledge is probably the only way by which higher-level semantics can be incorporated into techniques that capture the semantic concepts. Although there have been some works on domain application, they are restricted by this gap [5,6]. So, in this paper, a novel method for video event analysis and description based on specific Domain Knowledge is proposed.

The remainder of the paper is organized as follows. Section 2, the related work, introduces the MPEG-7 fundamental knowledge. Section 3 presents the video analysis based on Domain Knowledge; this section also introduces the video event representation using MPEG-7 high-level descriptors and shows the video object description with low-level features. Domain Knowledge ontology infrastructure building, experimental results, and query explanation are demonstrated in Section 4. After these comprehensive explanations, we conclude in Section 5.

2. Related Work

MPEG-7, formally named "Multimedia Content Description Interface," is the standard that describes multimedia content so users can search, browse, and retrieve that content more efficiently and effectively than they could using today's mainly text-based search engines. It's a standard for describing the features of multimedia content. It provides the world's most comprehensive set of audiovisual descriptions. These descriptions are based on the catalog (title, creator, rights), semantic (the who, what, when, where information), and structural (the color histogram, a measurement of the amount of color associated with an image, or the timbre of a recorded instrument) features of the AV content, and are leveraged on the AV data representation defined by MPEG-1, -2, and -4. MPEG-7 also uses XML Schema as the language of choice for content description.

However, MPEG-7 won't standardize the (automatic) extraction of AV descriptions/features. Nor will it specify the search engine (or any other program) that can make use of the description. It's left up to the creativity and innovation of search engine companies to manipulate and massage the MPEG-7-described content into search indexes that can be used by their browser and retrieval tools. Typical query examples enabled by MPEG-7 include:

• Audio: I want to search for songs by humming or whistling a tune or, using an excerpt of Pavarotti's voice, get a list of Pavarotti's records and video clips in which Pavarotti sings or simply makes an appearance.

• Graphics: Sketch a few lines on a screen and get a set of images containing similar graphics, logos, and ideograms.

• Image: Define objects, including color patches or textures, and get examples from which you select items to compose your image. Or check if your company logo was advertised on a TV channel as contracted.

• Video: Allow mobile phone access to video clips of goals scored in a soccer game, or automatically search and retrieve any unusual movements from surveillance videos.

In addition, it's important to note that MPEG-7 addresses many different applications in many different environments. This means that it needs to provide a flexible and extensible framework for describing audiovisual data. MPEG-7 defines a library of methods and tools for many types of multimedia applications. It standardizes:

• A set of descriptors: A descriptor (D) is a representation of a feature that defines the syntax and semantics of the feature representation.

• A set of description schemes: A description scheme (DS) specifies the structure and semantics of the relationships between its components, which may be both descriptors and description schemes.

• A language that specifies description schemes (and possibly descriptors), the Description Definition Language (DDL): It also allows for the extension and modification of existing description schemes. MPEG-7 adopted XML Schema Language as the MPEG-7 DDL. However, the DDL requires some specific extensions to XML Schema Language to satisfy all the requirements of MPEG-7. These extensions are currently being discussed through liaison activities between MPEG and W3C, the group standardizing XML.

• One or more ways (textual, binary) to encode descriptions: A coded description is one that's been encoded to fulfill relevant requirements such as compression efficiency, error resilience, and random access.

In MPEG-7 the main tools for describing the structure of AV content are segment entities, segment features, and segment relations. In Figure 1 you can see an example of a structural description of a video sequence that's described using such tools. The description is made up of three video segments, two moving regions, one segment relation, and several two-segment decompositions.

The root video segment corresponds to the entire video sequence. The remaining two video segments are formed from a segment decomposition of the root segment. The last two video segments are then related by the segment relation "Before". With a further decomposition of one of these segments, you obtain two moving regions that correspond to the objects "player", "cue ball", and "object ball". How do they interact with each other? What does the MPEG-7 DDL syntax look like? Let's look at how this paper takes advantage of the MPEG-7 syntax, extended with XML files, for scheme description.
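To make the structural description above concrete, the following is a minimal sketch, in Python with ElementTree, of how the Figure 1 layout (a root video segment, a temporal decomposition into two related segments, and a further decomposition into moving regions) could be assembled as MPEG-7-style XML. The element names are simplified from the MPEG-7 conventions cited in the text and are illustrative, not the thesis's actual description files.

```python
# Illustrative sketch only: simplified MPEG-7-style elements for the Figure 1
# structure (root VideoSegment, temporal decomposition, "Before" relation,
# and moving regions). Not the thesis's actual description file.
import xml.etree.ElementTree as ET

root = ET.Element("VideoSegment", id="root")                  # entire video sequence
decomp = ET.SubElement(root, "TemporalDecomposition")
seg1 = ET.SubElement(decomp, "VideoSegment", id="seg1")
seg2 = ET.SubElement(decomp, "VideoSegment", id="seg2")

# The two child segments are related by the segment relation "Before".
ET.SubElement(root, "Relation", type="Before", source="seg1", target="seg2")

# Further decomposition of one segment into moving regions.
spatio = ET.SubElement(seg2, "SpatioTemporalDecomposition")
for label in ("player", "cue ball"):
    region = ET.SubElement(spatio, "MovingRegion")
    ET.SubElement(region, "TextAnnotation").text = label

print(ET.tostring(root, encoding="unicode"))
```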

Figure 1. MPEG-7 Tools Application for a Video Description

3. Video Analysis Based on Domain Knowledge

3.1 Overview

Currently, object and event recognition algorithms are not mature enough to scale up to such a large problem domain, as this would require huge libraries of object and event representations and associated algorithms [7, 8].

Furthermore, the semantic meaning of many events can only be recognized in context. So, our approach is to couple video analysis with a structured semantic knowledge base. The use of Domain Knowledge is probably the only way by which higher-level semantics can be incorporated into techniques that capture the semantic concepts.

Moreover, building Domain Knowledge for video object detection is the preferred method for mapping domain-dependent concepts onto domain-independent low-level features.

Figure 2. Framework for Video Event Understanding

Content-based analysis of multimedia (events as such) requires methods that automatically segment video sequences and key frames into image areas corresponding to salient objects (e.g. ball, player, spectator, table, etc.), track these objects in time, and provide a flexible framework for object recognition, indexing, retrieval, and further analysis of their relative motion and interactions. Semantic concepts within the context of the examined domain are defined in an ontology, enriched with qualitative attributes of the semantic objects (spatial relations, temporal relations, etc.), multimedia processing methods (motion clustering, etc.), and numerical data or low-level features generated via training.

The proposed approach, by exploiting the Domain Knowledge modeled in the ontology, enables the recognition of the underlying semantics of the examined video. The overall framework of the retrieval system, shown in Figure 2, addresses some important issues mentioned in this paper. One is obtaining the domain and representing its knowledge with the MPEG-7 low-level descriptors combined with high-level (intermediate-level) descriptors.

And the most important issue is the video object ontology building. Its significant contribution is to realize the knowledge combination and reconstruction; in other words, it realizes the mapping of the low-level features to the semantic level. The domain-independent, primitive classes comprising the analysis ontology serve as attachment points allowing the integration of the two ontologies. Practically, each domain ontology comprises a specific instantiation of the multimedia analysis ontology providing the corresponding motion model, restrictions, etc., as will be demonstrated in more detail in the next section.

3.2 Video Shot Detection for Domain Knowledge

Video is a structured medium in which actions and events in time and space convey stories, so a video program (raw video data) must be viewed as a document, not a non-structured sequence of frames. Figure 3 shows the modeling and presentation of one video clip of a billiard game program. The hierarchical levels of video stream abstraction, in decreasing degree of granularity, are the following:

[Figure 3 depicts the hierarchy clip, scene/story, shot, and frame, with example attributes from the billiard game clip: CLIP (index CL001, category SPORT, sub-category Billiard, date 08/18/04, duration 00:40:30, number of stories ...); SCENE/STORY (index SC001, name competition, duration 00:01:56, frame start 0000, frame end 3490, number of shots K, event William's turn, key words William, ThreeBall); SHOT (index SH002, name Three Ball, duration 00:00:30, frame start 0400, frame end 0632, camera still); FRAME (e.g. frames 500 and 516).]

Figure 3. Video Modeling and Representation

From Figure 3 we can see that the fourth layer, video shots, which are directly related to video structure and content, are the basic units used for accessing video: a sequence of frames recorded contiguously that represents a continuous action in time or space. We consider one shot that contains a series of actions expressing one kind of meaningful event in the video as one piece of Domain Knowledge. To obtain it and facilitate our research, an automatic shot detection technique proposed for adaptive video coding applications [9] is adopted. We focus on video shot detection on compressed MPEG video of the billiard game.
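As a rough illustration of this hierarchy, the sketch below lays out the clip, scene/story, and shot levels as plain data structures, using the example attributes shown in Figure 3; the class and field names are illustrative assumptions, not taken from the thesis's implementation.

```python
# Illustrative data structures for the clip -> scene/story -> shot hierarchy
# of Figure 3. Example values come from the figure; the class layout itself
# is an assumption made for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Shot:
    index: str            # e.g. "SH002"
    name: str             # e.g. "Three Ball"
    frame_start: int      # e.g. 400
    frame_end: int        # e.g. 632
    camera: str = "still"

@dataclass
class Scene:
    index: str            # e.g. "SC001"
    name: str             # e.g. "competition"
    event: str            # e.g. "William's turn"
    shots: List[Shot] = field(default_factory=list)

@dataclass
class Clip:
    index: str            # e.g. "CL001"
    category: str         # e.g. "SPORT"
    sub_category: str     # e.g. "Billiard"
    scenes: List[Scene] = field(default_factory=list)

billiard_clip = Clip("CL001", "SPORT", "Billiard",
                     scenes=[Scene("SC001", "competition", "William's turn",
                                   shots=[Shot("SH002", "Three Ball", 400, 632)])])
```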

Since there are three frame types (I, P, and B) in an MPEG bit stream, we first propose a technique to detect the scene cuts occurring on I frames; the shot boundaries obtained on the I frames are then refined by detecting the scene cuts occurring on P and B frames. For I frames, the block-based DCT is used directly as:

$$F(u,v)=\frac{c_u\,c_v}{4}\sum_{x=0}^{7}\sum_{y=0}^{7} I(x,y)\cos\frac{(2x+1)u\pi}{16}\cos\frac{(2y+1)v\pi}{16}$$

where

$$c_u,\,c_v=\begin{cases}\dfrac{1}{\sqrt{2}}, & \text{for } u,v=0\\ 1, & \text{otherwise.}\end{cases}$$

One finds that the dc image [consisting only of the dc coefficient (u = v = 0) for each block] is a spatially reduced version of the I frame. For an MPEG video bit stream, a sequence of dc images can be constructed by decoding only the dc coefficients of I frames, since dc images retain most of the essential global information of image components.

Yeo and Liu have proposed a novel technique for detecting shot cuts on the basis of the dc images of an MPEG bit stream [10], in which the shot cut detection threshold is determined by analyzing the difference between the highest and second highest histogram difference in the sliding window. In this article, an automatic dc-based technique is proposed which adapts the threshold for shot cut detection to the activities of various videos. The color histogram differences (HD) among successive I frames of an MPEG bit stream can be calculated on the basis of their dc images as

$$HD(j,\,j-1)=\sum_{k=0}^{M}\left[H_j(k)-H_{j-1}(k)\right]^2$$

where $H_j(k)$ denotes the dc-based color histogram of the $j$th I frame, $H_{j-1}(k)$ indicates the dc-based color histogram of the $(j-1)$th I frame, and $k$ is one of the $M$ potential color components. The temporal relationships among successive I frames in an MPEG bit stream are then classified into two opposite classes according to their color histogram differences and an optimal threshold $T_c$.

The optimal threshold Tc can be determined automatically by using the fast searching technique given in Ref. [10]. The video frames (including the I, P, and B frames) between two successive scene cuts are taken as one video shot. Figure 4 shows the shot we have detected using the algorithm mentioned above. The shot contains a series of I frames.
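The cut-detection rule just described can be sketched as follows. This is a simplified illustration that assumes the dc images have already been decoded from the I frames; the adaptive threshold search of Ref. [10] is replaced by a fixed threshold value passed in by the caller.

```python
# Simplified sketch of dc-based shot cut detection: compute HD(j, j-1) between
# the color histograms of successive I-frame dc images and report a cut where
# HD exceeds a threshold. Assumes dc images are already extracted; the adaptive
# threshold of Ref. [10] is replaced here by a fixed t_c for illustration.
import numpy as np

def color_histogram(dc_image: np.ndarray, bins: int = 64) -> np.ndarray:
    """Histogram of one dc image, given as an array of color-component values."""
    hist, _ = np.histogram(dc_image, bins=bins, range=(0, 256))
    return hist.astype(np.float64)

def detect_shot_cuts(dc_images, t_c: float):
    """Return the indices j of I frames where HD(j, j-1) > t_c.
    Frames between two successive cuts form one video shot."""
    cuts = []
    prev_hist = color_histogram(dc_images[0])
    for j in range(1, len(dc_images)):
        hist = color_histogram(dc_images[j])
        hd = float(np.sum((hist - prev_hist) ** 2))   # HD(j, j-1) as defined above
        if hd > t_c:
            cuts.append(j)
        prev_hist = hist
    return cuts
```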

Figure 4. (b) and (c): the RGB histogram and color histogram difference of three snapshots (a); (d): Billiard Game's shot detection

3.3 Event Representation using MPEG-7 High Level Descriptors

A video event has semantic meanings that can be expressed in human language; it contains feature elements, camera motion, time, key frame, annotation, and object set elements, among others. Our task is to analyze and describe the event by taking advantage of those elements in the specific domain.

MPEG-7 is a new standard for describing the content of multimedia data [11]. MPEG-7 is a means of attaching meta-data to multimedia content. It offers a comprehensive set of audiovisual description tools including meta-data elements and their structures and relationships defined by the standard in the form of Descriptors and Description Schemes.

Figure 5. MPEG-7 Framework

Figure 5 shows the relationship among the different MPEG-7 elements introduced above. The DDL allows the definition of the MPEG-7 description tools, both Descriptors and Description Schemes, providing the means for structuring the Ds into DSs [12]. The DDL also allows the extension of particular DSs for specific applications. The description tools are instantiated as descriptions in textual format (XML) thanks to the DDL (based on XML Schema). A binary format of descriptions is obtained by means of the BiM defined in the Systems part. This kind of organizing mechanism facilitates the analysis of our instance, the "Billiard Game". We use the semantic (high-level) descriptors to define the components in the video shot of our example.

Figure 6 illustrates possible conceptual aspects and abstractions of a specific instance (image: "billiard_Shot_I01.jpg") of a video shot content.

[Figure 6 links the Event DS "The Three Ball" with a Semantic Time DS ("01.07.03", "time of"), a Semantic Location DS ("The Billiard Room", "locate of"), an AgentObject DS (man player "William"), and Object DSs describing the billiard table, the cue ball, object ball 1, and object ball 2.]

Figure 6. Conceptual Abstractions of a Video Shot Content

Table 1. Lexicon MPEG-7 XML file


The Structure DSs and Semantic DSs can be related by a set of links allowing the shot content to be described on the basis of both content structure and semantic structures together. The links relate different semantic concepts to the instances within the shot content described by the segments.

Furthermore, most of the MPEG-7 Content Description and Content Management DSs are linked together and, in practice, are also often included within each other in the MPEG-7 descriptions. In our instance, the Structure DS is the event "The Three Ball", and it is related to three kinds of DSs, "Semantic Time", "Semantic Location", and "Object", with their corresponding semantic concept definitions.

We can make use of the MPEG-7 descriptions to do the annotation work both at the semantic level and with the low-level features. MPEG-7 instances are XML documents that conform to a particular MPEG-7 schema, as expressed in the DDL, and that describe visual content.

Creating a standard description is an appropriate way to address semantic-level concepts based on the detected video shot. An example of a lexicon MPEG-7 XML file is shown in Table 1. This file defines the event and the objects with their attributes, and describes the semantic logical structure. The set of lexicon entries is dependent on the summarization application, and can be easily modified and imported into the annotation tool.
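Table 1's XML itself is not reproduced above. As a hedged sketch of what such a lexicon might contain, the snippet below lays out the Figure 6 concepts (event, agent, objects, semantic time, and place) using simplified MPEG-7 Semantic DS style element names; the exact tag names and structure in the thesis's file may differ.

```python
# Hedged sketch of a lexicon-style semantic description for the "The Three Ball"
# event, built from the concepts shown in Figure 6. Element names are simplified
# MPEG-7 Semantic DS style labels, not the thesis's exact schema.
import xml.etree.ElementTree as ET

semantic = ET.Element("Semantic")

event = ET.SubElement(semantic, "Event", id="the_three_ball")
ET.SubElement(event, "Label").text = "The Three Ball"

agent = ET.SubElement(semantic, "AgentObject", id="william")
ET.SubElement(agent, "Label").text = "man player William"

for name in ("billiard table", "cue ball", "object ball 1", "object ball 2"):
    obj = ET.SubElement(semantic, "Object")
    ET.SubElement(obj, "Label").text = name

ET.SubElement(semantic, "SemanticTime").text = "01.07.03"
ET.SubElement(semantic, "SemanticPlace").text = "The Billiard Room"

print(ET.tostring(semantic, encoding="unicode"))
```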

3.4 Video Object Description with Low Level Features

The previous sections analyzed the video event's logical structure and schema using the MPEG-7 high-level descriptors from the point of view of human meanings. However, what the video database stores is "raw video data", such as the video objects' dominant colors, the motion parameters describing the spatio-temporal relationships between moving objects, and their trajectories. So, analyzing video object motion becomes the challenging task of this section.

According to the characteristics of the billiard game, the trajectory is the preferred research object. We try to use the objects' motion trajectories to express the event meanings whose definitions are described in the MPEG-7 high-level descriptors. The first issue is that we apply the DCT algorithm to pick up the key-frames in the shot that was detected before.

As Figure 7 below shows, the key-frames are obtained. These four key-frames present the whole process of the event "The Three Ball": the cue ball touches object ball 1 (the gray ball), bounces back in the northwest direction, then comes back to hit object ball 2 (the red ball). The 3-D plot in Figure 8 is defined as a series of spatio-temporal localizations of one of the object's representative points. Therefore, the core information of a trajectory is a list of points, each of which has both a spatial and a temporal location.

In order to describe the spatial localization of the points, a spatial coordinate system must be specified. In fact, this shot records about 60 points from the 20 frames; each 20 points represent one ball's trajectory. Here, we also use the MPEG-7 XML file to describe the video objects' moving state.

[Figure 7 shows four key-frames (1)-(4), each annotated with the directions high (North), low (South), left (West), and right (East) and the moving object trajectory.]

Figure 7. Key-frame's Moving Object Trajectory

[Figure 8 plots the recorded trajectory points against the spatial coordinates x, y and time t.]

Figure 8. Moving Object Trajectory Recording

The motion trajectory records the moving path of a specific object. It is a high-level feature associated with a moving region. As mentioned before, in Table 2 we annotate the shot of a video, which includes 16 frames, as "the three ball" and annotate a rectangular region in key frame 514. In MPEG-7, each video shot is defined as a Video Segment, where the shot start time (shown in the Thh:mm:ss:nnF500 format) and duration are given and the annotations are described. Furthermore, the embedded <SpatioTemporalDecomposition> tag allows us to specify the region location and the corresponding text annotation in a key frame. In this table, the key frame is the 514th frame of the video sequence.

The annotated region is specified by the <SpatialLocator> tag. It is identified by a polygon whose n vertex coordinates are recorded in the order <x0, y0>, <x1, y1>, …, <xn-1, yn-1> after the <CoordsI> tag. For multiple regions in a key frame, the system needs to repeat the section between <StillRegion> and its closing tag inside the <SpatialDecomposition> section.
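Table 2's XML is not reproduced above; the sketch below assumes a simplified layout built from the tag names just mentioned (a <SpatialDecomposition> containing one <StillRegion> per annotated region, each with a <SpatialLocator> holding the polygon vertices) and shows how those vertices could be read back. The thesis's file uses a <CoordsI> tag; a plain <Coords> element is used here for illustration.

```python
# Hedged sketch: parse key-frame region annotations laid out with the tag names
# discussed above and recover each region's polygon vertices. The sample XML is
# illustrative only (the thesis's Table 2 file is not reproduced here).
import xml.etree.ElementTree as ET

SAMPLE = """
<SpatialDecomposition>
  <StillRegion id="cue_ball">
    <SpatialLocator><Coords>120 80 140 80 140 100 120 100</Coords></SpatialLocator>
  </StillRegion>
  <StillRegion id="object_ball_1">
    <SpatialLocator><Coords>200 60 220 60 220 80 200 80</Coords></SpatialLocator>
  </StillRegion>
</SpatialDecomposition>
"""

def region_polygons(xml_text: str):
    """Yield (region_id, [(x0, y0), (x1, y1), ...]) for each StillRegion."""
    root = ET.fromstring(xml_text)
    for region in root.iter("StillRegion"):
        coords = region.find("SpatialLocator/Coords")
        if coords is None:
            continue
        values = [int(v) for v in coords.text.split()]
        yield region.get("id"), list(zip(values[0::2], values[1::2]))

for region_id, polygon in region_polygons(SAMPLE):
    print(region_id, polygon)
```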

Table 2. Low-level Features in MPEG-7 XML file

4. Domain Knowledge Ontology Infrastructure Building

We have already described the video content both from the semantic level and from the low-level features. However, the MPEG-7 formalism does not allow reasoning mechanisms because it is defined solely in XML Schema. Although XML Schema is suitable for expressing the syntax, structural, cardinality, and typing constraints required by MPEG-7, it displays some limits when we need to extract implicit information. This is due to the fact that there is neither formal semantics nor inference capabilities associated with the elements of an XML document. In this setting, the MPEG-7 standard is extended by an ontology for the Domain Knowledge representation, allowing some reasoning mechanisms over the MPEG-7 descriptions.

4.1 Building Object Ontology in Domain Knowledge

The annotation should be able to range from high-level semantic concepts to low-level feature descriptions. Mapping the low-level features automatically extracted from the resulting moving objects to high-level concepts using an ontology in a specific Domain Knowledge, combined with a relevance feedback mechanism, is the main contribution of this part. In this study, ontologies [13, 14] are employed to facilitate the annotation work using semantically meaningful concepts (semantic objects).

[Figure 9 shows the object ontology: intermediate-level (keyword) descriptors such as dominant color (most dominant color, second dominant color), shape {slightly oblong, round}, foreground/background, background position (vertical axis {high, middle, low}, horizontal axis {left, middle, right}), and foreground motion (x-speed {low, medium, high}, y-speed {low, medium, high}, x-direction {right→left, left→right}, y-direction {top→low, low→top}), each linked to a low-level MPEG-7 descriptor: the dominant color set in the Luv color space DCOL = {(Li, ui, vi, Pi)}, the eccentricity value of the contour-based shape descriptor, and the motion trajectory MT = {(xi, yi, ti)} using "Integrated" coordinates.]

Figure 9. Object Ontology Structure: the Correspondence Between Low-level MPEG-7 Descriptors and High-level Descriptors

The object ontology is presented in Figure 9, where the possible intermediate-level descriptors and descriptor values are shown. Each intermediate-level descriptor value is mapped to an appropriate range of values of the corresponding low-level, arithmetic descriptor. With the exception of color (e.g. "black") and direction (e.g. "low→high") descriptor values, the value ranges for every low-level descriptor are chosen so that the resulting intervals are equally populated.

This is pursued so as to prevent an intermediate-level descriptor value from being associated with a plurality of spatio-temporal objects in the database, since this would render it useless in restricting a query to the potentially most relevant ones. Overlapping, up to a point, of adjacent value ranges is used to introduce a degree of fuzziness to the descriptor values; for example, both "slightly oblong" and "moderately oblong" values may be used to describe a single object. Regarding color, a correspondence between the 11 basic colors used as color descriptor values and the values of the HSV color space is heuristically defined.

More accurate correspondences based on psycho visual findings are possible; this is however beyond the scope of this work. Regarding the direction of motion, the mapping between values for the descriptors "x direction", "y direction" and the MPEG-7 Motion Trajectory descriptor is based on the sign of the cumulative displacement of the foreground spatio-temporal objects.
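The two mappings just described (equally populated, slightly overlapping value ranges for arithmetic descriptors, and direction keywords from the sign of the cumulative displacement) can be sketched as follows; the label names, overlap fraction, and quantile scheme are illustrative assumptions rather than the thesis's exact parameters.

```python
# Hedged sketch of mapping low-level trajectory values to intermediate-level
# descriptor values: quantile-based, equally populated ranges with a small
# overlap for fuzziness, and x/y direction keywords from the sign of the
# cumulative displacement. Labels and the overlap fraction are illustrative.
import numpy as np

def equally_populated_ranges(values, labels=("low", "medium", "high"), overlap=0.05):
    """Split observed descriptor values into len(labels) equally populated,
    slightly overlapping intervals; returns {label: (lo, hi)}."""
    edges = np.quantile(values, np.linspace(0.0, 1.0, len(labels) + 1))
    pad = overlap * (edges[-1] - edges[0])
    return {lab: (edges[i] - pad, edges[i + 1] + pad) for i, lab in enumerate(labels)}

def direction_keywords(trajectory):
    """trajectory: sequence of (x, y, t) points; returns (x-direction, y-direction)
    keywords based on the sign of the cumulative displacement."""
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    x_dir = "left->right" if dx >= 0 else "right->left"
    y_dir = "low->top" if dy >= 0 else "top->low"
    return x_dir, y_dir
```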

4.2 Concept Definitions in Domain Knowledge

Through this domain ontology, we map the low-level features to high-level concepts using spatio-temporal relations between objects. Moreover, in order to give specific annotations to frames, we defined some semantic concepts with clear and typical characteristics to describe shot events.

• Class Object: the subclasses and instances of the superclass "Object" are all video objects that can be detected through the analysis process. Each object instance is related to the appropriate feature instances by the hasFeature property. Each object can be related to one or more other objects through a set of predefined spatial properties.

• Class feature: the subclasses and instances of the superclass "feature" are the low-level features of the multimedia content associated with each object. In our video shot domain, the features cover low-level background features such as color and moving object direction, each of which has a close relation with the object detected throughout the whole process of shot analysis; the high-level semantics associated with foreground features, such as motion activity and status, emphasize their intrinsic, meaningful description of certain video events.

• Class featureParameter: denotes the actual qualitative descriptions of each corresponding feature. It is subclassed according to the defined superclass "featureParameter", including ColorfeatureParameter, MotionfeatureParameter, PositionfeatureParameter, and DirectionfeatureParameter.

• Spatial Relations: approach, touch, disjoint. These three spatial descriptors are used to describe the relations between objects within each individual frame, and to describe the differences and relations between two adjacent frames.

• Temporal Relations: before, meet, after, starts, completes. Together with the three spatial descriptors, they describe the whole process of the moving video objects, emphasize the semantics and relations between shots, and produce meaningful metadata in the backend.

• Actions: beforeShooting, cue ball hitting, object ball hitting, finishShooting, change player. We defined five actions to describe the semantic event in one shot; they can be used as keyword annotations of free text, as well as to index the key frame instead of annotating all the I-frames. A minimal sketch of these definitions is given below.
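The following is a minimal sketch, with illustrative names only (the thesis's actual ontology encoding is not reproduced), of how the classes and relation vocabularies above fit together: objects linked to features through hasFeature, features carrying qualitative featureParameter values, and the predefined spatial, temporal, and action vocabularies.

```python
# Minimal illustrative sketch of the class and relation definitions above.
# Names are illustrative; this is not the thesis's actual ontology encoding.
from dataclasses import dataclass, field
from typing import Dict, List

SPATIAL_RELATIONS = {"approach", "touch", "disjoint"}
TEMPORAL_RELATIONS = {"before", "meet", "after", "starts", "completes"}
ACTIONS = {"beforeShooting", "cue ball hitting", "object ball hitting",
           "finishShooting", "change player"}

@dataclass
class FeatureParameter:
    kind: str        # e.g. "Color", "Motion", "Position", "Direction"
    value: str       # qualitative description, e.g. "white", "high", "left->right"

@dataclass
class Feature:
    name: str        # e.g. "dominant color", "x-speed"
    parameter: FeatureParameter

@dataclass
class VideoObject:
    name: str                                    # instance of the superclass "Object"
    has_feature: List[Feature] = field(default_factory=list)
    spatial_relations: Dict[str, str] = field(default_factory=dict)  # other object -> relation

cue_ball = VideoObject(
    "cue ball",
    has_feature=[Feature("dominant color", FeatureParameter("Color", "white"))],
    spatial_relations={"object ball 1": "touch"},
)
```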

The developed domain ontology mainly focuses on the representation of the semantics in each detected shot and its frames. As a consequence, we can simply pick up the key-frames from the shot depository and annotate them sequentially and semantically according to the inner presentation of the moving objects. To facilitate the explanation, we consider one shot of the billiard game put forward above as our Domain Knowledge. Because one shot usually has a semantic meaning and contains a sequence of frames, we call one shot an Event. The event we are going to describe and analyze is named "The Three Ball". So, the following section shows the application to "the three ball" billiard game video shots.

4.3 Video Event Representation

The proposed approach was tested on one specific Domain Knowledge: the Billiard Game shots. In this domain, an appropriate domain ontology was defined. In this case, Figure 10 is the result of the video object's motion trajectory detection through the object ontology's knowledge restructuring for mapping the MPEG-7 high-level descriptors to the low-level features.

For each video object created by applying the segmentation algorithm to a collection of video shots, MPEG-7 low-level descriptors were calculated and the mapping between them and the intermediate-level descriptors defined by the object ontology was performed. Subsequently, the object ontology was used to define, using the available intermediate-level descriptors, semantic objects. Querying using these definitions resulted in initial results produced by excluding the majority of spatio-temporal objects in the database.

Figure 10. Motion Detection in Key-frames

[Figure 11 shows the key-frames annotated with the objects cue ball, object ball 1, and object ball 2 and the action sequence before shooting, hitting cue ball, hitting object ball, complete shooting, for the event "The Three Ball" participated in by William.]

Figure 11. Key-Frames Annotation of Specific Event "The Three Ball"

Figure 11 shows the results of the annotation of the key-frames of the video event. These descriptors are presented in the XML description output from the ontology modeling described above and then parsed by the XML parser. All of the keywords, not only those extracted from the clip and scene in the video storage but also the annotations from the ontology modeling, are integrated into the keyword database to serve the user's retrieval.

4.4 Retrieval

This retrieval model is designed based on SVQL (Semantic Views Query Language). SVQL is a high-level query language that allows users to express their requirements following the Semantic Views Model in a concise, abstract, and precise way.

Moreover, SVQL is designed particularly to make possible the retrieval of audiovisual data described by the MPEG-7 standard. SVQL allows the users to formulate high-level queries on top of MPEG-7 descriptions without getting involved in the implementation details of MPEG-7.

Compared to traditional database query languages such as SQL, OQL, and XQuery, SVQL has the advantage of being specially designed for the retrieval of audiovisual data based on a semantic model of user’s requirements. Query languages such as SQL, OQL, and XQuery are designed to fit specific structures of data.

However, these languages are not suited to express the abstract level of the Semantic Views Model. Using XQuery and XML Schema is an advantageous method for the "implementation" of the Semantic Views Model and the Semantic Views Query Language on top of MPEG-7 instances. Nevertheless, from a conceptual point of view, XQuery does not allow the user to directly express his/her abstract requirements following the Semantic Views Model.

Table 3. SVQL Query

LET
    $semanticViews := semanticViews("Video data Repository"),
    $newsItem := newsItem($semanticViews),
    $fact := fact($semanticViews),
    $shot := shot($semanticViews),
    $videoSegment := videoSegment($semanticViews)
WHERE
    match(getDescription($fact, Event), event("the three ball")) AND
    match(getDescription($shot, Person), person("man_player")) AND
    greaterThan(getDescription($videoSegment, shot), shot("3")) AND
    corresponds($videoSegment, $shot)
RETURN
    $videoSegment

As can be observed in the example, the query is based on a "LET-WHERE-RETURN" structure [16]. This type of query syntax, referred to as "keyword oriented syntax" [17], is used in the most well-known query languages, such as SQL, OQL, and XQuery, and is a familiar mode of query expression for specialized end users and application programmers.

As can be seen in the above query, the LET clause contains two types of expressions. In the first expression, the SemanticViews() function is called with the name of the file containing the MPEG-7 instances to be retrieved; this function creates the Semantic Views Document corresponding to the MPEG-7 file. In the next series of expressions, a set of functions is called to get the different required BasicViewEntities of the Semantic Views Document and to assign them to a set of variables. Each function name indicates the type of the BasicViewEntity, e.g. NewsItem, Shot, and VideoSegment.

The WHERE clause also contains two different types of expressions:

The first four expressions represent a set of conditions that should be held on the BasicViewEntities of different Views. These conditions are expressed via a set of ViewOperators: here match(), and greaterThan() are used.

The last expression represents the condition concerning the InterViewRelation of BasicViewEntities via the corresponds() operator. It determines which of the cited BasicViewEntities correspond to each other. This expression is used if more than one View is used in the query. Finally, the RETURN clause identifies a variable containing a BasicViewEntity to be returned to the user. In the above query, the variable containing the VideoSegment is returned.

Figure 12. Video Research by Semantic Content

The querying interface based on SVQL is shown in Figure 12. We provide a friendly, visually rich interface that allows the user to interactively query the database, browse the results, and view the selected video clips. In order to make sure the user can get exact results, we place some conceptual restrictions on the video retrieval interface: we provide the possible concepts to facilitate the user's choice and two text boxes for the user's own input. The search engine is responsible for searching the database according to the parameters provided by the user.

5. Conclusions

Our paper was mainly designed for video processing, video analysis, and video event presentation. From the video shot and key-frame detection to the description of the internal moving objects, and from the MPEG-7 descriptor definitions for the low-level features and high-level semantics to the video object ontology, we gave a comprehensive and clear exposition. We combined the traditional algorithms with our own methods, put forward for application to our specific examples.

In this paper, not only did we analyze MPEG-7 in one specific domain comprehensively, focusing on a few specific visual dimensions such as color, motion, and spatial information, but we also made a number of improvements and novel contributions based on that. We chiefly take advantage of the XML files to perform knowledge enrichment on top of the primary information extracted from the video, which can be considered an extended format of the MPEG-7 files, since the MPEG-7 formalism does not allow reasoning mechanisms.

This enriched data representation helps users search for multimedia video content more efficiently. Although XML Schema is suitable for expressing the syntax, structural, cardinality, and typing constraints required by MPEG-7, it displays some limits when we need to extract implicit information. This is due to the fact that there is neither formal semantics nor inference capabilities associated with the elements of an XML document. So, we proposed a novel method for video event analysis and description on the foundation of the Domain Knowledge object ontology: the MPEG-7 standard is extended by an ontology for the human knowledge representation, allowing some reasoning mechanisms over the MPEG-7 descriptions. The most important contribution is that it helps us overcome the gap between the low-level features and the high-level semantics by combining these two aspects in the most efficient and flexible (expressive) manner.

The proposed approach aims at formulating a domain-specific analysis model facilitating video event retrieval and video object motion detection. In the last part, we designed a friendly retrieval interface for the users based on the SVQL query language.

Our future work includes the enhancement of the domain ontology with a more complicated model representation, and we will try to use more complex spatio-temporal relationship inference rules to analyze the moving features. We will also do more technical work to strengthen our retrieval function. Extending the modeling scheme with more concepts and conceptual relationships, and testing our system on other categories of video such as documentaries, more complex sports (like football), or movies, are also part of our further research.

References

1. S.-F. Chang. The holy grail of content-based media analysis. IEEE Multimedia, 9(2):6-10, Apr-Jun 2002.

2. S.-F. Chang, T. Sikora, and A. Puri. Overview of the MPEG-7 standard. IEEE Trans. on Circuits and Systems for Video Technology, 11(6):688-695, June 2001.

3. A. Yoshitaka and T. Ichikawa. A survey on content-based retrieval for multimedia databases. IEEE Transactions on Knowledge and Data Engineering, 11(1):81-93, Jan-Feb 1999.

4. P. Salembier and F. Marques. Region-based representations of image and video: segmentation tools for multimedia services. IEEE Trans. Circuits and Systems for Video Technology, 9(8):1147-1169, December 1999.

5. S.-F. Chang, T. Sikora, and A. Puri, "Overview of the MPEG-7 standard," IEEE Trans. Circuits Syst. Video Technol., vol. 11, pp. 688-695, June 2001.

6. W. Al-Khatib, Y. F. Day, A. Ghafoor, and P. B. Berra, "Semantic modeling and knowledge representation in multimedia databases," IEEE Trans. Knowledge Data Eng., vol. 11, pp. 64-80, Jan-Feb 1999.

7. R. Nelson and A. Selinger. Learning 3D recognition models for general objects from unlabeled imagery: an experiment in intelligent brute force. In Proceedings of ICPR, volume 1, pages 18, 2000.

8. C. Town and D. Sinclair. A self-referential perceptual inference framework for video interpretation. In Proceedings of the International Conference on Vision Systems, volume 26, pp. 54-67, 2003.

9. J. Fan, D. K. Y. Yau, W. G. Aref, and A. Rezgui, "Adaptive motion-compensated video coding scheme towards content-based bit rate allocation," J. Electron. Imaging: 521-533, 2000.

10. B. L. Yeo and B. Liu, "Rapid scene change detection on compressed video," IEEE Trans. Circuits Syst. Video Technol. 5, 533-544, 1995.

11. T. Sikora. The MPEG-7 Visual standard for content description - an overview. IEEE Trans. on Circuits and Systems for Video Technology, special issue on MPEG-7, 696-702, 2001.

12. Multimedia content description interface - part 8: extraction and use of MPEG-7 descriptors. ISO/IEC, 2002.

13. P. Martin and P. W. Eklund, "Knowledge retrieval and the World Wide Web," IEEE Intell. Syst., vol. 15, pp. 18-25, May-June 2000.

14. A. T. Schreiber, B. Dubbeldam, J. Wielemaker, and B. Wielinga, "Ontology-based photo annotation," IEEE Intell. Syst., vol. 16, pp. 66-74, May-June 2001.

15. M. K. Smith, C. Welty, and D. L. McGuinness, "OWL Web Ontology Language Guide," W3C Candidate Recommendation, February 2004.

16. Book chapter: N. Day, N. Fatemi, O. Abou Khaled. Search and Browsing. In: Introduction to MPEG-7: Multimedia Content Description Interface, B. S. Manjunath, Philippe Salembier, Thomas Sikora (eds.), Wiley, England, ISBN 0-471-48678-7, pp. 335-352, 2002.

17. Cotton, P. and Robie, J. The W3C XML Query Working Group. XML Europe 2001, Berlin, Germany, May 2001.

License Agreement for the Use of the Work

Department: Computer Science | Student ID: 20037087 | Program: Master's

Name: Korean: 송단 | Chinese: 宋丹 | English: Dan Song

Address: Room 638, Women's Dormitory, Chosun University

Contact: E-MAIL: star_sd821@hotmail.com

Thesis title (Korean): 도메인 지식 구축에 의한 의미적 비디오 이벤트 표현
Thesis title (English): Semantic Video Event Description Assisted by Building Domain Knowledge

I hereby permit and agree that Chosun University may use the work I have authored, as listed above, under the following conditions.

- Conditions -

1. Reproduction of the work, storage on memory devices, and transmission are permitted for the purpose of building a database of the work and making it publicly available over information and communication networks, including the Internet.

2. Changes to the editorial format within the scope necessary for the above purpose are permitted; however, changes to the content of the work are prohibited.

3. Reproduction, storage, or transmission of the distributed or transmitted work for commercial purposes is prohibited.

4. The period of use of the work shall be five years; if no separate notice is given within three months of the end of this period, the period of use shall be extended automatically.

5. If the copyright of the work is transferred to another person or its publication is permitted, the university shall be notified within one month.

6. Chosun University assumes no legal responsibility for infringements of rights by third parties arising from the work after permission for its use has been granted.

7. Provision of the work to institutions affiliated with the university by agreement, and transmission and output of the work via information and communication networks such as the Internet, are permitted.

May 2005. Author: (signature or seal)

To the President of Chosun University
