
IEIE Transactions on Smart Processing and Computing

Efficient Object-based Image Retrieval Method using Color Features from Salient Regions

Jaehyun An1, Sang Hwa Lee2, and Nam Ik Cho2

1 LG Electronics, Seoul, Republic of Korea (jhahn@ispl.snu.ac.kr)

2 Department of Electrical and Computer Engineering, INMC, Seoul National University, Seoul, Korea ({lsh529, nicho}@snu.ac.kr)

* Corresponding Author: Nam Ik Cho

Received March 21, 2017; Revised May 2, 2017; Accepted June 12, 2017; Published August 30, 2017 * Regular Paper

Abstract: This paper presents an efficient object-based color image retrieval algorithm that is suitable for the classification and retrieval of images from small to mid-scale datasets, such as images in PCs, tablets, phones, and cameras. The proposed method first finds salient regions by using regional feature vectors, and also finds several dominant colors in each region. Then, each salient region is partitioned into small sub-blocks, each of which is assigned 1 or 0 according to the number of pixels corresponding to a dominant color in the sub-block. This gives a binary map for the dominant color, and the process is repeated for a predefined number of dominant colors. The result is several binary maps, each of which corresponds to a dominant color in a salient region. Hence, the binary maps represent the spatial distribution of the dominant colors in the salient region, and the union (OR operation) of the maps can describe the approximate shapes of salient objects. Also proposed in this paper is a matching method that uses these binary maps and needs very few computations, because most operations are binary. Experiments on widely used color image databases show that the proposed method performs better than state-of-the-art and previous color-based methods.

Keywords: Object-based color image retrieval, Dominant color, Saliency, Spatial distribution

1. Introduction

As we take photos using digital cameras and mobile devices, we may sometimes wish that the photos were automatically and intelligently categorized on our own devices without sending them to a computer. The need for searching and matching images on devices connected to the Internet has also been increasing. In this respect, fast content-based image retrieval (CBIR) algorithms are important in many applications for modern consumer devices, such as mobile cameras, smart phones, and smart TVs.

CBIR systems have long been researched and developed for many applications, which are well summarized in [1]. Historically, color histograms and filter outputs were first used to describe color and texture features for image retrieval and relevance feedback [29]. More recently, bag of features (BOF) methods, including scale invariant feature transform (SIFT) and speeded-up robust features (SURF) descriptors [2], were developed for image retrieval and matching. The BOF approaches generally yield better retrieval or matching results than color and filter bank approaches. However, they are sometimes worse than color-based retrieval methods when there are insufficient feature points in the images. Many other kinds of features, including SIFT/SURF, have since been developed and studied for image matching applications [27]. For image retrieval, deep feature aggregation methods were proposed that exploit pre-trained convolutional neural networks [28]. However, since extraction and exploitation of these features require a lot of computation and memory, they are not yet suitable for applications in consumer devices. Hence, we focus on the use of color again, for simple yet efficient description and matching of image content.

Since color is an important feature in representing and describing images, there have been many algorithms and MPEG-7 standards for color-based image retrieval [12-14]. In most color-based methods, the color histograms of the overall image regions are compared [15-17]. Also, dominant color regions have been indexed so that their MPEG-7 dominant colors can be matched [18]. However, histogram matching often finds unrelated images, because the histograms of whole images or segmented regions do not reflect the spatial information of colors and/or object shapes. Since color histograms usually lose the spatial distribution of colors, some approaches consider spatial relations of colors in addition to the histogram [19-23]. Pass and Zabih [19] and Pass et al. [20] modeled the density of each color, called color coherence vectors, to consider the spatial information of colors. Their methods improved retrieval performance, but did not sufficiently reflect spatial variations of color distribution. Mandal et al. exploited the moments of color histograms and modeled wavelet sub-images as a Gaussian distribution for spatial information of colors [21]. This method handled color information and spatial distribution separately; thus, it was important to determine the tradeoff between color and spatial information for similarity matching. Chan and Chen used spatial features of color and the mean value of each color component [23]. This method shows good retrieval performance, but it is sometimes influenced by object shifts.

This paper focuses on the development of a simple color-based image descriptor that captures the dominant colors of salient objects and their approximate shapes. Using the dominant color from the salient region was already tried in previous work [4]. However, while that previous work focused only on color similarity in the salient regions, this paper additionally measures the similarity of saliencies when comparing images. Specifically, we define several binary maps that describe the salient objects' dominant colors and their approximate shapes. We first extract the salient regions in a given image by using a regional feature vector [3]. Then, several dominant colors are selected for each salient region, and the spatial distribution of each dominant color is encoded as a binary map. Hence, each binary map represents the distribution of a dominant color, which is one of the main image features. Also, the logical OR of the binary maps describes the approximate shapes of salient objects or regions. We also propose a matching method for this descriptor based on simple binary operations. The experiments show that the proposed method performs better than classical and more recent color image retrieval methods, because it considers both the color distribution and the rough shapes of the salient objects.

2. Proposed Method

This paper proposes an object-based image retrieval method using color features in salient regions. First, we segment the object regions of interest by exploiting saliency maps. Then, the dominant colors in the salient regions are extracted, and the corresponding spatial binary maps are generated. Finally, we propose a matching method that needs only simple computations.

2.1 Detection of Salient Objects

For object-based image retrieval, it is important to locate the objects of interest so that features from the background do not unduly affect the retrieval results. Since the object of interest usually appears as, or belongs to, a salient region, we find the salient regions and define them as the objects of interest. In this paper, we find the saliency of each pixel by using the algorithm proposed by Jiang et al. [3]. Then, a fixed threshold is applied to the saliencies (the threshold is set to 0.3 in this paper), which yields the salient region segmentation. Each of the resulting connected binary regions is cropped into an image block, which we call an object-centered image (OCI). For scale invariance, the object area is finally normalized to a fixed size. Fig. 1 illustrates an example of salient object regions: Figs. 1(a) and (b) show the query images and the pixel saliencies produced with the method of Jiang et al. [3], respectively, and Fig. 1(c) shows the OCIs for the queries. The OCI is the main content or object to be retrieved.
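The saliency detector of [3] is beyond the scope of this paper, but the OCI extraction step itself is simple. The following is a minimal Python/OpenCV sketch of it, assuming a saliency map normalized to [0, 1] is already available; the function name and the small-area guard are illustrative choices, not part of the paper.

```python
import cv2
import numpy as np

def extract_ocis(image, saliency, thresh=0.3, out_size=128, min_area=100):
    """Threshold the saliency map, crop each connected salient region,
    and normalize it to a fixed size (an object-centered image, OCI)."""
    mask = (saliency > thresh).astype(np.uint8)
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    ocis = []
    for i in range(1, num):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:                      # assumed guard against specks
            continue
        crop = image[y:y + h, x:x + w]
        ocis.append(cv2.resize(crop, (out_size, out_size)))
    return ocis
```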

2.2 Dominant Color Extraction

Color appearance is prone to change with capturing devices and environments, especially with lighting conditions. Hence, it is important to choose a color space and a quantization scheme that are robust to radiometric variations. Specifically, we exploit the hue-saturation-value (HSV) color space, which is known to be quite invariant to various radiometric changes. The color histogram bins are divided into 10 H bins, 10 S bins, and 5 V bins. The V component is considered separately from H and S, since it is not related to the chromatic components [5]. The proposed method selects the pixels p satisfying the following condition:

Fig. 1. Results of OCIs: (a) query images, (b) saliency maps produced by Jiang et al. [3], (c) OCIs produced by the proposed method. The OCIs are normalized to 128×128.

$$\{\, p \mid I_S(p) > T_S \ \text{and}\ I_V(p) > T_V \,\} \tag{1}$$

where $T_S$ is the minimal S component for a color to be chromatic, and $T_V$ is the minimal V component. The colors that satisfy the condition in Eq. (1) are quantized into 10×10 H-S bins. Conversely, if the colors do not satisfy the condition in Eq. (1), V is quantized into 5 bins. In summary, the total number of histogram bins is 105.
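As a sketch, the 105-bin histogram described above can be computed as follows. The thresholds T_S and T_V are not given numerically in the paper, so the defaults below are assumptions; H, S, and V are taken to be pre-scaled to [0, 1).

```python
import numpy as np

def hsv_histogram(hsv, t_s=0.1, t_v=0.1):
    """hsv: (N, 3) array of pixels, each channel scaled to [0, 1)."""
    h, s, v = hsv[:, 0], hsv[:, 1], hsv[:, 2]
    hist = np.zeros(105)
    chromatic = (s > t_s) & (v > t_v)            # condition of Eq. (1)
    # Chromatic pixels: 10x10 H-S bins (indices 0..99).
    hs_bin = np.minimum((h[chromatic] * 10).astype(int), 9) * 10 \
           + np.minimum((s[chromatic] * 10).astype(int), 9)
    np.add.at(hist, hs_bin, 1)
    # Achromatic pixels: 5 V bins (indices 100..104).
    v_bin = 100 + np.minimum((v[~chromatic] * 5).astype(int), 4)
    np.add.at(hist, v_bin, 1)
    return hist
```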

From the HSV color histogram of the OCI described above, dominant colors are extracted. According to the experimental observations, a salient region usually contains a small number of colors, and thus, an entire color histogram is not necessary. We chose dominant colors based on color distribution and maximal frequencies in the histogram bins. It needs to be noted that the dominant colors are usually influenced by the design of color space quantization or clustering. As an example of the dominant color extraction algorithm, MPEG-7 [12] uses the generalized Lloyd algorithm (GLA) [6] to extract and cluster dominant colors. However, this method needs a lot of computation due to the GLA characteristics. Hence, we propose a new method for dominant color extraction that is based on the observation that only a few dominant colors are sufficient to describe most OCIs. To be precise, since most OCIs contain object areas and small background regions, quantizing the colors into a small number does not much affect visual appearance. Specifically, the color histogram of an OCI has many bins with zero or very low frequencies. Hence, we first detect the zero-bins, which will be denoted as zb, and merge the color in the n-th bin (cn) that lies between the zero-bins into a dominant color

cluster $\mathbf{d}_r^{zb}$ as

$$\mathbf{d}_r^{zb} = \{\, \mathbf{c}_n \mid zb_i < n < zb_{i+1} \,\} \tag{2}$$

where $\mathbf{c}_n = (h_n, s_n, v_n)$ is the n-th color in the HSV color histogram. For these clusters, we measure the similarity in colors between the clusters by using the algorithm from Smith and Chang [7], where the similarity between any two colors $\mathbf{c}_i = (h_i, s_i, v_i)$ and $\mathbf{c}_j = (h_j, s_j, v_j)$ is given by

$$a_{ij} = 1 - \frac{1}{\sqrt{5}}\left[ (v_i - v_j)^2 + (s_i \cos h_i - s_j \cos h_j)^2 + (s_i \sin h_i - s_j \sin h_j)^2 \right]^{1/2}$$

If the similarity is larger than a certain threshold (0.8 in this paper), the two clusters are merged into a single cluster. The final k-th dominant color so obtained is denoted as $\mathbf{d}_k$. Fig. 2 shows an example of dominant color extraction for a given OCI.
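A sketch of this two-stage extraction follows: bins lying between consecutive zero-bins are grouped into clusters per Eq. (2), and clusters whose colors are similar under the Smith-Chang measure are merged. Representing each cluster by its highest-frequency bin and the greedy merge order are assumptions made for illustration.

```python
import numpy as np

def color_similarity(ci, cj):
    """Smith-Chang similarity between two HSV colors (h in radians,
    s and v in [0, 1]); 1.0 means identical."""
    hi, si, vi = ci
    hj, sj, vj = cj
    d2 = (vi - vj) ** 2 \
       + (si * np.cos(hi) - sj * np.cos(hj)) ** 2 \
       + (si * np.sin(hi) - sj * np.sin(hj)) ** 2
    return 1.0 - np.sqrt(d2 / 5.0)

def dominant_colors(hist, bin_color, sim_thresh=0.8):
    """hist: 1-D color histogram; bin_color(n): (h, s, v) of bin n."""
    # Stage 1 (Eq. (2)): group consecutive non-zero bins between zero-bins.
    clusters, current = [], []
    for n, count in enumerate(hist):
        if count > 0:
            current.append(n)
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    # Stage 2: merge clusters whose representative colors are similar.
    merged = []
    for bins in clusters:
        rep = bin_color(max(bins, key=lambda n: hist[n]))  # assumed representative
        for m in merged:
            if color_similarity(m["rep"], rep) > sim_thresh:
                m["bins"] += bins
                break
        else:
            merged.append({"rep": rep, "bins": list(bins)})
    return merged  # each entry is one final dominant color d_k
```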

2.3 Spatial Binary Map and Matching

Even when two OCIs have the same dominant colors, they do not necessarily have high similarity, because different content often has similar color statistics. Hence, it is important to include the spatial relationship of dominant colors and/or some shape information in the matched features. For this, we exploit the spatial distributions of dominant colors in the OCI. The main idea is that we define several binary maps that describe the distribution of colors, instead of histogram matching. A combination of the binary maps may also describe the rough shape of an object. To be specific, we define a binary image $\mathbf{B}_k$ in which a pixel is 1 when it corresponds to dominant color $\mathbf{d}_k$. Then, binary image $\mathbf{B}_k$ is partitioned into L×L sub-blocks, and a binary spatial map $\mathbf{M}_k(l)$ is generated as

$$\mathbf{M}_k(l) = \begin{cases} 1, & N_k(l)/N(l) > \gamma \\ 0, & \text{otherwise} \end{cases} \tag{3}$$

where l is the sub-block index, N(l) is the total number of pixels in the l-th sub-block, and N_k(l) is the number of pixels with dominant color $\mathbf{d}_k$. The threshold parameter γ is set to 0.3 in all of the experiments. Note that it is difficult to match the spatial distribution of the dominant color in a pixel-wise manner; the sub-block structure of the spatial map reduces both the errors and the computational cost. Fig. 3 shows the binary spatial maps for the dominant colors in Fig. 2.

Fig. 2. Example of dominant color extraction: (a) extracted salient region (OCI), (b) dominant colors {c_n, n=1,2,…,10} in the HSV color histogram, (c) dominant color clusters between the zero-bins {d_r^{zb}, r=1,2,3,4} obtained using Eq. (2), (d) final dominant color clusters {d_k, k=1,2,3}.

Fig. 3. Spatial maps corresponding to (a) {c_n}, n=1,2,3,4, …
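A minimal sketch of Eq. (3), assuming the OCI dimensions divide into L×L sub-blocks (any remainder pixels are simply dropped here):

```python
import numpy as np

def spatial_map(b_k, L=8, gamma=0.3):
    """b_k: (H, W) binary image marking pixels of one dominant color d_k.
    Returns the (L, L) binary spatial map M_k of Eq. (3)."""
    H, W = b_k.shape
    bh, bw = H // L, W // L
    # Group the image into L x L blocks of size bh x bw, then average per block.
    blocks = b_k[:L * bh, :L * bw].reshape(L, bh, L, bw)
    fraction = blocks.mean(axis=(1, 3))          # N_k(l) / N(l) per sub-block
    return (fraction > gamma).astype(np.uint8)   # M_k(l)
```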

For matching the OCIs, we need to define a similarity measure for the dominant colors’ statistics and their spatial distributions. To be specific, we first test whether a pair of OCIs is worth comparing. That is, we do not compare the maps if their coincident region is too small. The criterion for passing this test is defined as

$$\frac{1}{N_c}\, d\!\left(\mathbf{M}_k^{Q}, \mathbf{M}_k^{DB}\right) \ge \tau \tag{4}$$

where $N_c$ is the normalization factor (the number of pixels in the cluster for $\mathbf{d}_k$), k is the index of dominant color $\mathbf{d}_k$, $\mathbf{M}_k^{Q}$ is the binary map of $\mathbf{d}_k$ for the query image, as defined in Eq. (3), $\mathbf{M}_k^{DB}$ is that of the database image, and the measure d(·,·) is a logical AND operation between the binary maps, defined as

$$d\!\left(\mathbf{M}_k^{Q}, \mathbf{M}_k^{DB}\right) = \sum_{l=1}^{L \times L} \mathbf{M}_k^{Q}(l) \wedge \mathbf{M}_k^{DB}(l) \tag{5}$$

Finally, the matching score between the query and the database image is calculated as

$$S(Q, DB) = \sum_k \left\{ \lambda\, \omega_k^{s} + (1-\lambda)\, \omega_k^{c} \right\} \times d\!\left(\mathbf{M}_k^{Q}, \mathbf{M}_k^{DB}\right) \tag{6}$$

where $\omega_k^{s}$ is the similarity of saliency, defined as

$$\omega_k^{s} = \exp\!\left(-\frac{\left(1 - s_k^{Q}\right)^2}{\sigma_s^2}\right) \times \exp\!\left(-\frac{\left(1 - s_k^{DB}\right)^2}{\sigma_s^2}\right) \tag{7}$$

where $s_k^{Q}$ ($s_k^{DB}$) is the average saliency value corresponding to $\mathbf{d}_k$ in the query (database) image, and $\omega_k^{c}$ represents the color similarity, defined as

$$\omega_k^{c} = \exp\!\left(-\frac{\left\| \mathbf{m}_k^{Q} - \mathbf{m}_k^{DB} \right\|^2}{\sigma_c^2}\right) \tag{8}$$

where $\mathbf{m}_k^{Q}$ ($\mathbf{m}_k^{DB}$) is a 3-D vector of the average HSV pixel values within the same histogram bin. The parameter λ balances the saliency similarity against the color similarity, so features in the background can also contribute to matching when λ is small. Hence, if we want to retrieve an object with consideration for the background or the neighborhood of the object, parameter λ is set small.
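A sketch of the whole matching stage under Eqs. (4)-(8) is given below. It assumes the dominant colors of the query and database OCIs have already been put into correspondence under a shared key k (the paper matches them through their color clusters); the dict layout and field names are illustrative, not the paper's data structures.

```python
import numpy as np

def map_overlap(m_q, m_db):
    """Eq. (5): logical AND of two binary maps, then a popcount."""
    return int(np.logical_and(m_q, m_db).sum())

def match_score(query, db, tau=0.2, lam=0.6, sigma_s=0.7, sigma_c=0.3):
    """query, db: dicts keyed by dominant color k; each value holds
    'map' (LxL binary map), 'n_c' (normalization factor of Eq. (4)),
    's' (mean saliency), and 'm' (mean HSV vector, a 3-D numpy array)."""
    score = 0.0
    for k in set(query) & set(db):
        q, d = query[k], db[k]
        overlap = map_overlap(q["map"], d["map"])
        if overlap / q["n_c"] < tau:                  # Eq. (4): reject weak overlaps
            continue
        w_s = np.exp(-(1.0 - q["s"]) ** 2 / sigma_s ** 2) \
            * np.exp(-(1.0 - d["s"]) ** 2 / sigma_s ** 2)             # Eq. (7)
        w_c = np.exp(-np.sum((q["m"] - d["m"]) ** 2) / sigma_c ** 2)  # Eq. (8)
        score += (lam * w_s + (1.0 - lam) * w_c) * overlap            # Eq. (6)
    return score
```

Since the maps are binary, the inner loop reduces to AND and popcount operations, which is what makes the matching cheap on consumer hardware.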

3. Experimental Results

We tested our algorithm on various color image databases. In the experiments, the size of the OCI is normalized to 128×128, and the number of initial dominant color clusters d^{zb} is 4. The OCI is partitioned into 8×8 sub-blocks for the spatial distribution map, and parameter τ is set to 0.2. The parameters in the matching score equation are set as follows: σ_c = 0.3, σ_s = 0.7, and λ = 0.6.

The proposed algorithm was compared with state-of-the-art methods and with some of the conventional dominant color-based methods using two Corel datasets. One of the datasets is the Corel-1k dataset, which contains 1,000 images categorized into 10 classes. The other dataset is Corel-10k, which has 10,800 images in 80 classes [11]. Some of the image classes have objects with different colors, such as buses and flowers. Hence, for these classes, we additionally divided the categories into sub-categories according to the colors.

Fig. 4 compares the linear block algorithm (LBA) [8], the saliency map method [10], and the proposed method on the Corel-1k dataset. Each sub-figure shows the top 20 retrieval results for the query shown at the top left. We also compared the proposed method with an MPEG-7 dominant color descriptor [24, 25] and a perceptual color-based method [9] on the Corel-10k dataset. Some results are shown in Fig. 5, which demonstrates that the proposed algorithm retrieves similar images better than the other algorithms.

To evaluate retrieval performance objectively, we employed several measures: average retrieval rate (ARR), average normalized modified retrieval rank (ANMRR) [13], mean average precision (MAP), and P(10) [26]. A higher ARR and a lower ANMRR mean better retrieval results. MAP is one of the most widely used metrics in content-based image retrieval, combining precision and recall in a single value. P(10) is the precision over the first 10 retrieved images. The best value for both MAP and P(10) is 1. We compared these measures with 79 queries on the Corel-1k dataset and 174 queries on the Corel-10k dataset. We also tested the retrieval performance of the proposed binary spatial map (BSM) against LBA [8] and a perceptual color-based method (PCM) [9]. As shown in Tables 1 and 2, the proposed BSM improves retrieval performance and produces the best results. Note that the conventional methods use only the dominant colors in the salient areas and their distributions, whereas our method additionally considers the shape of the saliency. Hence, it is believed that including this structure information in image matching brings better retrieval results.
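For reference, a minimal sketch of two of these measures, P(10) and the average precision underlying MAP, assuming a ranked result list with binary relevance flags:

```python
def precision_at(relevant, n=10):
    """P(n): fraction of relevant items among the first n results."""
    top = relevant[:n]
    return sum(top) / float(len(top))

def average_precision(relevant, num_relevant):
    """AP of one ranked list; MAP is its mean over all queries.
    relevant: binary flags of the ranked results; num_relevant: total
    number of database images relevant to the query."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(relevant, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / num_relevant if num_relevant else 0.0

# Example: 7 of the top 10 results are relevant, so P(10) = 0.7.
print(precision_at([1, 1, 0, 1, 1, 0, 1, 1, 0, 1]))
```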

Fig. 6 shows an application to video frame retrieval, which finds similar objects within randomly selected video frames. For this experiment, three movies were used; a 30-minute segment from the middle of each movie served as the video database, with frames sampled every second. The query images were randomly captured while the video was displayed on a monitor. The proposed object-based image retrieval algorithm finds the same or similar objects across all video frames, even from different scenes. As shown in Fig. 6, images with the same object are well retrieved even when they are not from the same shot, i.e., the same objects in different backgrounds are also found. This is because the dominant colors are mainly extracted from the object of interest, and image matching reflects the object content very well.

From these experiments, we believe the proposed color-based image retrieval method can be applied to web image searches, video frame detection, and object retrieval. Furthermore, since the proposed method can be implemented with simple operations, most of which are binary, it is also suitable for mobile devices and smart TVs. Specifically, the proposed algorithm retrieves more than 100 VGA images per second on a smart phone. In contrast, BOF-based image search algorithms require so much computation and memory that they may be suitable only for servers or powerful PCs. In the future, we will combine the proposed method with the BOF approach to improve retrieval accuracy.

4. Conclusions

We have proposed an object-based color image retrieval algorithm based on a description of the color distributions around salient objects. We first extract the salient regions based on color contrast, and then find several dominant colors for each region. The spatial distribution of each dominant color is described as a binary map: each salient region is partitioned into small sub-blocks, and each sub-block is assigned 1 or 0 according to the number of pixels that have the dominant color. When the binary maps for every dominant color are combined via OR, they also describe the rough shapes of objects. In summary, the proposed binary maps describe the shapes as well as the dominant colors of objects.

Fig. 4. Examples of the retrieved images from the Corel-1k database, using (a) LBA [8], (b) the saliency-based method [10], (c) the proposed method. The top-left image (in the blue box) is the query, and each sub-figure shows the top 20 retrieved results (from left to right and from top to bottom).

Table 1. Comparison of objective measures for 79 queries on the images in the Corel-1K dataset.

Method            ARR     ANMRR   MAP     P(10)
LBA [8]           0.4722  0.4562  0.4956  0.8172
LBA with BSM      0.4886  0.4535  0.5715  0.7779
PCM [9]           0.5648  0.3516  0.6493  0.8527
Proposed method   0.6207  0.3132  0.7062  0.8682

Table 2. Comparison of objective measures for 174 queries on the images in the Corel-10K dataset.

Method            ARR     ANMRR   MAP     P(10)
LBA [8]           0.3515  0.5828  0.3683  0.5724
LBA with BSM      0.2948  0.6387  0.3302  0.6822
Proposed method   0.5028  0.4258  0.5337  0.8770

Fig. 5. Examples of retrieved images from the Corel-10k database using (a) an MPEG-7 dominant color descriptor [24, 25], (b) the perceptual color-based method [9], (c) the proposed method. The top-left image of each result is the query, and each sub-figure shows the top 20 retrieved results (from left to right and from top to bottom).

Fig. 6. Experiments for video frame retrieval on three sampled movies: (a) Monsters Inc. (2001), (b) Frozen (2013), (c) Her (2013). The top-left image of each result is a query frame, and each sub-figure shows the top 15 retrieval results (from left to right and from top to bottom).

For the proposed description, a matching method is also proposed that needs very simple computations. Hence, it is suitable for object retrieval on the web and in smart TVs, video-on-demand services, and mobile devices.

Acknowledgement

This research was supported by the Projects for Research and Development of Police Science and Technology under the Center for Research and Development of Police Science and Technology and the Korean National Police Agency (PAC000001).

References

[1] R. Datta, D. Joshi, J. Li, and J. Z. Wang, “Image retrieval: Ideas, influences, and trends of the new age,” ACM Computing Surveys, vol. 40, no. 2, pp. 1-60, Apr. 2008.

[2] Y. Zhang, Z. Jia, and T. Chen, “Image retrieval with geometry-preserving visual phrases,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Colorado, USA, pp. 809-816, Jun. 2011.

[3] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li, “Salient Object Detection: A Discriminative Regional Feature Integration Approach,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, Portland, Oregon, pp. 2083-2090, Jun. 2013.

[4] J. An, S. H. Lee, and N. I. Cho, “Content-based image retrieval using color features of salient regions,” in Proc. IEEE International Conf. on Image Proc. (ICIP), 2014.

[5] P. Perez, C. Hue, J. Vermaak, and M. Gangnet, “Color-based probabilistic tracking,” in European Conference on Computer Vision, Copenhagen, Denmark, pp. 661-675, May 2002.

[6] S. P. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129-137, Mar. 1982.

[7] J. R. Smith and S.-F. Chang, “VisualSEEk: a fully automated content-based image query system,” in Proc. the Fourth ACM International Conference on Multimedia, Boston, USA, pp. 87-98, Nov. 1996.

[8] N.-C. Yang, W.-H. Chang, C.-M. Kuo, and T.-H. Li, “A fast MPEG-7 dominant color extraction with new similarity measure for image retrieval,” Journal of Visual Communication and Image Representation, vol. 19, no. 2, pp. 92-105, Feb. 2008.

[9] S. Kiranyaz, M. Birinci, and M. Gabbouj, “Perceptual color descriptor based on spatial distribution: A top-down approach,” Image and Vision Computing, vol. 28, no. 8, pp. 1309-1326, Aug. 2010.

[10] A. Talib, M. Mahmuddin, H. Husni, and L. E. George, “A weighted dominant color descriptor for content-based image retrieval,” Journal of Visual Communication and Image Representation, vol. 24, no. 3, pp. 345-360, Jan. 2013.

[11] W. Bian and D. Tao, “Biased discriminant Euclidean embedding for content-based image retrieval,” IEEE Transactions on Image Processing, vol. 19, no. 2, pp. 545-554, Feb. 2010.

[12] A. Yamada, M. Pickering, S. Jeannin, and L. C. Jens, “MPEG-7 visual part of experimentation model version 9.0-part 3 dominant color,” ISO/IEC JTC1/SC29/WG11/N3914, Pisa, 2001.

[13] Gao Li-chun and Xu Ye-quang, “Image retrieval based on relevance feedback using blocks weighted dominant colors in MPEG-7,” Journal of Computer Applications, vol. 31, no. 6, pp. 1549-1551, Jun. 2011.

[14] R. Min and H. D. Cheng, “Effective image retrieval using dominant color descriptor and fuzzy support vector machine,” Pattern Recognition, vol. 42, no. 1, pp. 147-157, Jan. 2009.

[15] M. Swain and D. Ballard, “Color indexing,” International Journal of Computer Vision, vol. 7, no. 1, pp. 11-32, Nov. 1991.

[16] M. Stricker and A. Dimai, “Color indexing with weak spatial constraints,” in Proc. Symp. on Electronic Imaging: Science and Technology, Storage and Retrieval for Image and Video Databases IV, pp. 29-41, 1996.

[17] R. Brunelli and O. Mich, “Histograms analysis for image retrieval,” Pattern Recognition, vol. 34, no. 8, pp. 1625-1637, Aug. 2001.

[18] K. C. Ravishankar, B. G. Prasad, S. K. Gupta, and K. K. Biswas, “Dominant color region-based indexing technique for CBIR,” in Proc. the International Conference on Image Analysis and Processing (ICIAP), Venice, Italy, pp. 887-892, Sep. 1999.

[19] G. Pass and R. Zabih, “Histogram refinement for content-based image retrieval,” in Proc. IEEE Workshop on Applications of Computer Vision, FL, USA, pp. 96-102, Dec. 1996.

[20] G. Pass, R. Zabih, and J. Miller, “Comparing images using color coherence vectors,” in Proc. the ACM Multimedia Conference, Boston, USA, pp. 65-73, Nov. 1996.

[21] M. K. Mandal, T. Aboulnasr, and S. Panchanathan, "Image Indexing Using Moments and Wavelets," IEEE Transactions on Consumer Electronics, vol. 42, no. 3, pp. 557-565, Aug. 1996.

[22] Zhang Lei, Lin Fuzong, and Zhang Bo, “A CBIR method based on color-spatial feature,” in Proc. IEEE Region 10 Conference, Cheju, Korea, pp. 166-169, Sep. 1999.

[23] Y. K. Chan, and C. Y. Chen, “Image retrieval system based on color-complexity and color-spatial features,” Journal of Systems and Software, vol. 71, no.1-2, pp. 65-70, Nov. 2004.

[24] B. S. Manjunath, J. R. Ohm, V. V. Vasudevan, and A. Yamada, “Color and texture descriptors,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 6, pp. 703-715, June 2001.

[25] L.-M. Po and K.-M. Wong, “A new palette histogram similarity measure for MPEG-7 dominant color descriptor,” in Proc. International Conf. on Image Proc., pp. 1533-1536, Oct. 2004.

[27] H. Lee, S. Jeon, I. Yoon, and J. Paik, “Recent advances in feature detectors and descriptors: A survey,” IEIE Transactions on Smart Processing and Computing, vol. 5, no. 3, June 2016.

[28] D.-J. Jeong, S. Choo, W. Seo, and N. I. Cho, “Regional deep feature aggregation for image retrieval,” in Proc. IEEE International Conf. on Acoustics, Speech and Signal Processing (ICASSP), 2017.

[29] J. W. Kwak and N. I. Cho, “Relevance feedback in content-based image retrieval system by selective region growing in the feature space,” Signal Processing: Image Communication, vol. 18, no. 9, pp. 787-799, 2003.

Jaehyun An received the B.S. degree in electronics engineering from Ewha Womans University in 2009, and the M.S. and Ph.D. degrees in electrical engineering from Seoul National University (SNU), Seoul, Korea, in 2011 and 2015, respectively. She is currently with LG Electronics. Her research interests include image processing and computer vision.

Sang Hwa Lee received the B.S. and M.S. degrees in electronics engineering and the Ph.D. degree in electrical engineering from Seoul National University, Seoul, South Korea, in 1994, 1996, and 2000, respectively. He has been a research professor at Seoul National University since 2005. His majors are image processing and computer vision. His main research topics include stereo camera systems, MRF modeling, image restoration and enhancement, and pattern recognition.

Nam Ik Cho received the B.S., M.S., and Ph.D. degrees from Seoul National University, Seoul, Korea, in 1986, 1988, and 1992, respectively. From 1991 to 1993, he was a Research Associate at the Engineering Research Center for Advanced Control and Instrumentation, Seoul National University. From 1994 to 1998, he was with the University of Seoul, Seoul, Korea, as an Assistant Professor of Electrical Engineering. He joined the School of Electrical Engineering, Seoul National University, in 1999, where he is currently a Professor. His research interests include image processing, video processing, and adaptive filtering.
