
Attribution-NonCommercial-NoDerivs 2.0 Korea. Users may freely copy, distribute, transmit, display, perform, and broadcast this work under the following conditions: when reusing or redistributing the work, the applied license terms must be clearly indicated; these conditions do not apply if separate permission is obtained from the copyright holder. Users' rights under copyright law are not affected by the above. This is an easy-to-read summary of the Legal Code. Disclaimer. Attribution: you must credit the original author. NonCommercial: you may not use this work for commercial purposes. NoDerivs: you may not alter, transform, or build upon this work.


Master’s Thesis of Landscape Architecture

Inter-comparison of image fusion products against in-situ spectral measurements over a heterogeneous rice paddy landscape

지표면이 이질적인 농경지를 대상으로 영상합성자료와 현장스펙트럼측정 간의 상호비교
(Inter-comparison of image fusion data and in-situ spectral measurements over croplands with a heterogeneous land surface)

February, 2019

Graduate School of Seoul National University

Department of Landscape Architecture and Rural Systems Engineering, Landscape Architecture Major


Inter-comparison of image fusion products against in-situ spectral measurements over a heterogeneous rice paddy landscape

Under the Direction of Adviser, Prof. Youngryel Ryu

Submitting a master's thesis of Landscape Architecture

February, 2019

Graduate School of Seoul National University

Department of Landscape Architecture and Rural Systems Engineering, Landscape Architecture Major

Kong, Juwon

Confirming the master's thesis

Written by Kong, Juwon

February, 2019

Chair _________________ (Seal)

Vice Chair _________________ (Seal)

Examiner _________________ (Seal)


Abstract

Advances in high spatiotemporal resolution imagery, which is important for monitoring ecosystem dynamics across scales, demand accuracy assessment. Previous studies evaluating fusion products have lacked the pixel-level in-situ spectral measurements needed for systematic evaluation in a heterogeneous landscape. We conducted an inter-comparison of four existing image fusion products with systematically collected in-situ spectral measurements over a heterogeneous rice paddy landscape during the whole growing season. The four image fusion products were the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), the flexible spatiotemporal data fusion (FSDAF) method, satellite data integration (STAIR), and the CubeSat enabled spatiotemporal enhancement method (CESTEM). We tested the fusion normalized difference vegetation index (NDVI) products in terms of spatial patterns and temporal variations. The fusion NDVI products showed moderate to strong linear relationships with ground NDVI and negative bias (R2 range 0.73 to 0.93, bias up to -11%). Although the fusion NDVI products captured the NDVI variation, ground NDVI had a larger amplitude than the fusion NDVI products. The results indicated that positive bias of the input data against the in-situ measurements caused overestimation on the mixed land cover type, and negative bias of the input data against the in-situ measurements caused underestimation on the homogeneous rice paddy cover type. Furthermore, the fusion NDVI products could underestimate the variations in vegetation activity. The temporal dependency of ESTARFM, FSDAF, and STAIR was not clearly detected on the mixed land cover type. By comparing the fusion NDVI products to ground NDVI, the strength of each fusion product was confirmed. We expect that our in-situ spectral measurements could be useful for quantifying uncertainty in image fusion products, improving fine resolution cropland mapping and monitoring, and choosing a fusion algorithm depending on the research purpose.

Keywords: image fusion, in-situ spectral measurement, inter-comparison, high spatiotemporal resolution image, accuracy assessment


Table of Contents

Abstract
Table of Contents
List of Figures
List of Tables
1. Introduction
2. Method
  2.1 Study Site
  2.2 Image Fusion Algorithms
  2.3 Satellite Data
  2.4 Ground Measurements
  2.5 Evaluation
3. Result
  3.1 Comparison of Satellite NDVI Products Against Ground NDVI
  3.2 Comparison of Fusion NDVI Products Against Ground NDVI
  3.3 Time-series of NDVI on Each Plot
4. Discussion
  4.1 Confirming the Strengths of Each Fusion Product with Ground Measurements
  4.2 Evaluation of Input Data
  4.3 How Does the Bias of Input Data Against Ground Measurements Affect the Final Fusion Products?
  4.4 Does Time Lag between Input-pair Date and Predicted Data Always Affect the Fusion Performance?
  4.5 Fusion NDVI Products for Detecting Vegetation Activity
  4.6 Dataset for Accuracy Assessment
5. Conclusion
References
Supplementary Materials
국문 초록 (Abstract in Korean)


List of Figures

Figure 1. Overview of the study site
Figure 2. Sampling strategy and distribution of plots 1 to 15 on the study site
Figure 3. Simple linear regression analysis of (a) Landsat-8 NDVI and (b) Sentinel-2 NDVI against ground NDVI, and (c) MODIS NDVI against mean ground NDVI; (d) scatter plot of MODIS NDVI against ground NDVI values within the MODIS pixel
Figure 4. Simple linear regression analysis of (a) ESTARFM NDVI, (b) FSDAF NDVI, (c) STAIR NDVI, and (d) CESTEM NDVI against ground NDVI
Figure 5. Scatter plot of fusion product NDVI against ground NDVI
Figure 6. Time series of NDVI across 15 plots (Ground, Landsat-8, ESTARFM, FSDAF, CESTEM, and STAIR)


List of Tables

Table 1. Description of the image fusion products in this paper
Table 2. Description of the satellite products in this study
Table 3. Description of sampling tools: Jaz spectrometer and cosine corrector
Table 4. Day-of-year (DOY) of in-situ spectral measurements and growth stage of rice on the relevant DOY
Table 5. Amplitude of NDVI during the whole growing season
Table 6. Statistical analysis at each plot during the whole growing season


1. Introduction

Advances in high spatiotemporal resolution imagery, which is important for monitoring ecosystem dynamics across scales, demand not only applications in ecosystem monitoring but also accuracy assessment. To produce high spatiotemporal resolution images, previous studies developed image fusion algorithms that blend fine spatial resolution images with high temporal resolution images (Gao et al. 2006; Hilker et al. 2009a; Luo et al. 2018; Zhu et al. 2010; Zhu et al. 2016), as well as products from constellations of CubeSats (Houborg and McCabe 2018a). However, the uncertainties of fusion products have not been fully revealed.

Quantifying the uncertainties of fusion products requires a systematically measured dataset. Unfortunately, there are not yet standard datasets or standard methods to assess fusion products (Zhu et al. 2018). Walker et al. (2012) suggested that the uncertainties of spatiotemporal fusion products could be related to the reference satellite data used in the fusion, or to the spatial or temporal gap-filling stage, which can only be filled with another dataset. In addition, fusion products, which generally have a higher spatial resolution than the input data, can show more detail than the input data; however, the higher spatial resolution does not guarantee the accuracy of fusion products, because bias can be introduced by the spatial resolution enhancement. As a result, the systematic evaluation of image fusion products requires an unbiased dataset.

Previous efforts to quantify uncertainties in fusion products were essentially evaluations of similarity to satellite images, not to ground measurements. The scope of evaluation studies that quantified the factors affecting the performance of image fusion products ranges from the spatial to the temporal domain: land cover types (Emelyanova et al. 2013; Olexa and Lawrence 2014), the length of the temporal interval between the base pair and the simulation date (Fu et al. 2015; Olexa and Lawrence 2014), and the sequence of blending images and deriving vegetation indices (Jarihani et al. 2014). Furthermore, evaluation studies have shown the accuracy of fusion products in monitoring ecosystem dynamics: seasonal variation of NDVI (Singh 2012), dryland forest phenology (Walker et al. 2012), and vegetation indices and land surface temperature in semi-arid regions (Kim and Hogue 2012). These studies evaluated fusion products by matching them with satellite images; only a few used ground measurements to assess fusion products (Aragon et al. 2018; Kim and Hogue 2012). However, the evaluation of satellite-like products needs both in-situ measured data and finer spatial resolution satellite data (Morisette et al. 2002; Song et al. 2001). Thus, the systematic evaluation of fused products needs ground measurements.

Pixel-level ground measurements are more reliable for evaluating fusion products than non-pixel-level ground measurements. Unlike non-pixel-size ground measurements, which differ from the size of the image pixel, ground measurements at pixel size can reduce potential errors from ground point-to-pixel inconsistency (Frederic et al. 2013; Xu et al. 2018). Morisette et al. (2002) and Song et al. (2001) stressed that the evaluation of satellite products needs in-situ measured datasets that establish extrapolation relationships between the point measurements and the area covered by the imagery. Consequently, pixel-level ground measurements fit the systematic evaluation of fusion products.

To monitor ecosystem dynamics with fusion products throughout a year, it is important to test image fusion products under both non-clear sky conditions and heterogeneous landscapes. In the evaluation of fusion products, changes in landscape-scale heterogeneity are important to consider in pixel-based analysis (Bisquert et al. 2015). Fu et al. (2015) suggested that the evaluation of fusion products over cropland is challenging, especially during the growing season. Temporal changes in spatial heterogeneity occur over cropland (Ding et al. 2014). Moreover, the limited number of satellite images due to cloudy conditions (Ju and Roy 2008) demands that fusion products interpolate time-series data. Rice cropland meets both requirements: sub-pixel heterogeneity due to human management, and non-clear sky days due to summer precipitation.

High spatiotemporal resolution images have been generated both by image fusion algorithms that blend multiple satellite images and by methods that use constellations of CubeSat data. The enhanced STARFM (ESTARFM), which is based on the spatial and temporal adaptive reflectance fusion model (STARFM) algorithm (Gao et al. 2006), improved the accuracy of predicted fine-resolution reflectance and spatial details in heterogeneous landscapes (Zhu et al. 2010). Zhu et al. (2016) overcame the limitation on the number of required input images and the challenges of heterogeneous landscapes with abrupt land cover type changes with the Flexible Spatiotemporal DAta Fusion (FSDAF) method. SaTellite dAta IntegRation (STAIR), presented by Luo et al. (2018), produces cloud-free and gap-free fusion products at a daily step. The CubeSat Enabled Spatio-Temporal Enhancement Method (CESTEM) overcomes the challenges of using a CubeSat constellation, namely low radiometric quality and cross-sensor inconsistencies (Houborg and McCabe 2018a). Each fusion product has strengths for monitoring ecosystem dynamics; however, the fusion products (ESTARFM, FSDAF, STAIR, and CESTEM) had not yet been evaluated against pixel-size ground measurements.

The objective of this study was to evaluate the performance of four fusion products (ESTARFM, FSDAF, STAIR, and CESTEM) using ground measurements and satellite images over a rice paddy. For that, we (i) evaluated the four products against pixel-size ground measurements, (ii) evaluated the seasonal patterns of the fusion NDVI products in time series against ground measurements, and (iii) analyzed the factors that caused differences in the spatial and temporal patterns of the fusion products. After analyzing the spatial heterogeneity and seasonal variation of NDVI based on the ground measurements and the finer satellite images, the uncertainties of the fused products were quantified at the pixel level.


2. Method

2.1 Study Site

The study was conducted in a rice paddy in Cheorwon, Republic of Korea (CRK; 38.2013°N, 127.2507°E), which is registered in the Korea Flux Network (KoFlux) (Fig. 1). The predominant species, Oryza sativa L. ssp. japonica, was cultivated at the study site with a water depth of about 5 cm. The site has a temperate monsoon climate with a mean annual temperature of 10.2°C and mean annual precipitation of 1390.8 mm (Korea Meteorological Administration). Because of the intensive summer precipitation, clouds often appear during the rice growing season from June to August (Korea Meteorological Administration). Temporal changes in spatial heterogeneity occurred at the study site across the different rice growth stages (Table 4). The study site, the Cheorwon rice paddy, therefore meets both requirements for evaluating fusion products: frequently cloudy conditions and a heterogeneous landscape. We targeted a study site covering the area of one MODIS 250 m pixel. An unsupervised clustering method (K-means) applied to CubeSat images (3.5 m spatial resolution) classified the study site into vegetation cover (rice and vegetation on the ridges between rice paddies; over 76%) and non-vegetation cover (roads, buildings, and ditches; under 24%) at peak NDVI.
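The vegetation fraction reported above can, in principle, be reproduced with a standard K-means clustering of a CubeSat scene. The sketch below is a minimal illustration; the file name, band order, NDVI-based feature, and two-cluster setup are our assumptions rather than the exact workflow of the thesis.

```python
# Minimal sketch: estimate the vegetation fraction inside the MODIS pixel by
# K-means clustering of CubeSat NDVI. File name, band order (B, G, R, NIR),
# and the two-cluster setup are illustrative assumptions, not the thesis code.
import numpy as np
import rasterio
from sklearn.cluster import KMeans

with rasterio.open("cubesat_scene_peak_season.tif") as src:  # hypothetical file
    red = src.read(3).astype("float32")
    nir = src.read(4).astype("float32")

ndvi = (nir - red) / (nir + red + 1e-9)

# Cluster pixels into two groups using NDVI as the only feature
labels = KMeans(n_clusters=2, random_state=0).fit_predict(ndvi.reshape(-1, 1))
labels = labels.reshape(ndvi.shape)

# Call the cluster with the higher mean NDVI "vegetation"
veg_cluster = int(np.argmax([ndvi[labels == k].mean() for k in (0, 1)]))
veg_fraction = (labels == veg_cluster).mean()
print(f"Vegetation cover fraction: {veg_fraction:.2%}")  # ~76% at peak NDVI in the thesis
```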


Figure 1. Study site: (a) location of the study site (red arrow) on the Korean Peninsula (Cheorwon-gun, Republic of Korea); (b) rice paddy landscape image acquired on 5 September 2017 (the general land cover type is rice paddy: over 76%)

2.2 Image Fusion Algorithms

We selected algorithms that enable us to build time series of images with fine spatial resolution. The CubeSat constellation provides high spatial resolution products, but only recent data with a limited set of bands (blue, green, red, and near-infrared); on the other hand, the blending methods can produce not only past data but also products with shortwave infrared and thermal bands. Therefore, taking advantage of both the CubeSat and the blending methods enables us to build time series of images with fine spatial resolution. The fusion algorithms, which differ in type and resolution, are summarized in Table 1. All fused images were resampled to 30 m spatial resolution, which is fine enough to detect changes at the canopy level.

• ESTARFM: weight function-based. It introduces a conversion coefficient to improve STARFM's accuracy in heterogeneous areas (Zhu et al. 2010).

• FSDAF: hybrid method (unmixing-based and weight function-based). Its strengths are the minimal input data requirement (one pair of fine- and coarse-resolution images plus one coarse-resolution image acquired on the prediction date), performance in heterogeneous landscapes, and prediction of both gradual change and land cover type change (Zhu et al. 2016).
• STAIR: weight function-based, using all available Landsat and MODIS products. It is a generic and fully automated method that generates cloud-/gap-free products at a daily step (Luo et al. 2018). STAIR systematically integrates any number of Landsat-MODIS image pairs for missing-pixel imputation and automatically determines the weight of each pair image for the target date.

• CESTEM: machine learning-based, using CubeSat, Landsat, and MODIS products. CESTEM is spectrally consistent with Landsat-8 atmospherically corrected surface reflectance while keeping CubeSat spatial resolution (3.5 m) and temporal frequency. CESTEM uses a multi-scale machine-learning technique for radiometric correction of CubeSat images against Landsat and MODIS (Houborg and McCabe 2018a).

Table 1. The description of image fusion products in this paper. The types of fusion algorithms follow the categories in Zhu et al. (2018)

Algorithm | Type | Spatial resolution | Temporal resolution
ESTARFM | Weight function-based | 30 m | Depends on MODIS data (daily)
FSDAF | Hybrid method | 30 m | Depends on MODIS data (daily)
STAIR | Weight function-based | 30 m | Gap-free (daily)
CESTEM | Machine learning-based | 3 m → 30 m | CubeSat revisit time (daily)


2.3 Satellite Data

Satellite products with different spatial and temporal resolutions were used to produce the fusion products. Table 2 summarizes the satellite products used in this study, both as fusion inputs and for the evaluation of the fusion products, together with the data dates (DOY 100 to 300 in 2017; Supplementary 1 and 2). For the ESTARFM and FSDAF products, MODIS daily surface reflectance (MOD09GQ) and Landsat 8 OLI (Landsat 8 OLI/TIRS C1 Level-2) data were used for the data fusion. To compare with the ground-visit data, the daily MODIS reflectance data (MOD09GQ) were used rather than the 8-day MODIS reflectance data (MOD09Q1), which represent observations over an 8-day period. For the CESTEM product, the PlanetScope Ortho Tile product (Planet team 2018), Landsat-8 OLI products, and nadir BRDF-adjusted MODIS daily surface reflectance at 500 m resolution (version 6 MCD43A4 product) were used. For the STAIR product, all available Landsat scenes from all platforms (Landsat 7 Enhanced Thematic Mapper Plus and Landsat 8 OLI) and MODIS (version 6 MCD43A4 product) were used. Sentinel-2 surface reflectance products were atmospherically corrected with Sen2Cor (version 2.5.5) using SNAP (version 6.0, SEOM). For evaluating the fusion products, the selected satellite images were MODIS (MOD09GQ), Landsat-8, and Sentinel-2. To match the spatial resolution between the satellite images and the fusion products, we resampled the satellite images to 30 m spatial resolution. To align the satellite images and the fusion products, all data were registered and re-projected to the same map projection (WGS 84 / UTM zone 52N).
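The resampling and reprojection step described above can be sketched with the rasterio library; the file names and the choice of bilinear resampling are our illustrative assumptions (EPSG:32652 is the standard code for WGS 84 / UTM zone 52N).

```python
# Minimal sketch: resample a raster to a common 30 m grid in WGS 84 / UTM 52N
# so that satellite images and fusion products align pixel-to-pixel.
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

dst_crs = "EPSG:32652"  # WGS 84 / UTM zone 52N

with rasterio.open("sentinel2_ndvi.tif") as src:  # hypothetical input file
    transform, width, height = calculate_default_transform(
        src.crs, dst_crs, src.width, src.height, *src.bounds, resolution=30.0
    )
    profile = src.profile.copy()
    profile.update(crs=dst_crs, transform=transform, width=width, height=height)

    with rasterio.open("sentinel2_ndvi_30m_utm52n.tif", "w", **profile) as dst:
        reproject(
            source=rasterio.band(src, 1),
            destination=rasterio.band(dst, 1),
            src_transform=src.transform,
            src_crs=src.crs,
            dst_transform=transform,
            dst_crs=dst_crs,
            resampling=Resampling.bilinear,  # assumed; the thesis does not state the method
        )
```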

Table 2. The description of satellite products in this study

Satellite | Spatial resolution | Temporal resolution | Reference
Landsat-8 OLI | 30 m | 16 days | https://eros.usgs.gov
Sentinel-2 A/B | 10 m | 5 days (10 days) | https://sentinel.esa.int
CubeSat | 3-4 m | Daily | https://www.planet.com
MODIS | 250 m / 500 m | Daily | https://lpdaac.usgs.gov

2.4 Ground Measurements

Sampling tools

We collected the in-situ spectral measurements with a spectrometer and supplementary equipment that set the sensing height to match the target sampling area (10 m). A Jaz spectrometer (Ocean Optics, Dunedin, FL, USA) with a cosine-corrected fore optic collected the data. According to Liu et al. (2017), the signal within a field of view of 72° contributes 90% of the total hemispherical irradiance for hemispherical measurements made with a cosine-corrected fore optic. Therefore, we adjusted the sensing height with bar-structured equipment to match the 10 m sampling area.
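For reference, a simple footprint geometry check (our own back-of-the-envelope calculation, assuming a nadir-pointing sensor over a flat canopy and a circular footprint bounded by the 72° field of view) links the sensing height h to the footprint diameter d:

\[ d = 2h\tan(36^\circ) \;\Rightarrow\; h = \frac{d}{2\tan(36^\circ)} \approx \frac{10\ \mathrm{m}}{2\times 0.7265} \approx 6.9\ \mathrm{m} \]

That is, a sensing height of roughly 7 m above the canopy yields a footprint on the order of the 10 m sampling area.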


Table 3. Description of sampling tools: Jaz spectrometer and cosine corrector (FWHM: full width at half maximum, OD: outside dimension)

Jaz spectrometer
Wavelength | Integration time | Optical resolution
350-1000 nm | 870 μs to 65 s | 1 nm FWHM

Cosine corrector: CC-3-UV-T optical diffuser (Spectralon)
Wavelength range | Dimensions (OD) | Diffuser diameter | Field of view
200-2500 nm | 6.35 mm | 3900 μm | 180°

Sampling strategy

In-situ spectral measurements were collected at selected sampling points that represent the study site. Frederic et al. (2013) used the concept of the elementary sampling unit (ESU) for the validation of medium spatial resolution satellite images. Following the ESU concept, we set 60 ESUs at optimal locations representing the target pixel sizes. Each ESU represents an approximately 10 m x 10 m pixel; 4 ESUs represent one Landsat pixel (30 m x 30 m); and the 60 ESUs together represent one MODIS pixel (250 m x 250 m) (Fig. 2 (a)). The 60 ESUs were evenly distributed over 15 plots; each plot consisted of 4 ESUs (30 m x 30 m), the same size as a fusion product pixel. In particular, plots 3, 4, and 6 were located on mixed land cover, a mixture of road, building, ditch, and rice paddy. To maintain the quality of the in-situ spectral measurements, we collected spectral measurements 6 to 10 times at each ESU.

Considering the satellite overpass times of around 10:00 to 12:30 (Landsat-8, MODIS, and Sentinel-2 in this study), all ground measurements were made between 9:00 and 13:00 local time (UTC+9). To measure within a pixel, each point was positioned using a GPS device (MONTANA 650TK, Garmin, Switzerland).
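For reference, a minimal sketch of how the replicate scans might be aggregated to ESU- and plot-level NDVI; the file and column names and the simple averaging are our assumptions, since the thesis does not give the aggregation code.

```python
# Minimal sketch: aggregate replicate spectrometer scans to ESU-level NDVI,
# then average the 4 ESUs of each 30 m plot. Column names are assumptions.
import pandas as pd

scans = pd.read_csv("jaz_ndvi_scans.csv")  # hypothetical file with columns:
# doy, plot, esu, scan_id, ndvi            # (6-10 scans per ESU per visit)

esu_ndvi = scans.groupby(["doy", "plot", "esu"], as_index=False)["ndvi"].mean()
plot_ndvi = esu_ndvi.groupby(["doy", "plot"], as_index=False)["ndvi"].mean()
# plot_ndvi: one value per 30 m plot per DOY, comparable to a fusion-product pixel
print(plot_ndvi.head())
```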

Figure 2. Sampling strategy and distribution of plots 1 to 15 on the study site: (a) the location of the ESUs within a Landsat-8 pixel, (b) ESUs over a high spatial resolution satellite image (source: Google Maps)

Table 4. Day-of-year (DOY) of in-situ spectral measurements and growth stage of rice on relevant DOY.

DOY | 155 | 187 | 194 | 206 | 215 | 223 | 239 | 248 | 291
Growth stage | Tillering | Jointing | Booting | Heading | Milking | Maturity | Maturity | Maturity | After harvest

Sampled data converted to Landsat-like wavelength bands and NDVI

To evaluate the fusion NDVI product values, the in-situ spectral measurements were converted to satellite-like band data. All fusion products in this study target Landsat-like band data. Therefore, we converted the Jaz spectrometer data to Landsat-8 Operational Land Imager (OLI)-like band data (red band: 636-673 nm, near-infrared (NIR) band: 851-879 nm) using the spectral response function (SRF) of OLI (Barsi et al. 2014). We also converted the in-situ spectral measurements to Sentinel-2 MSI-like band data using the Sentinel-2 MSI SRFs (band 4 (red), band 8 (NIR), and band 8a (NIR)). Compared to surface reflectance, NDVI, which is a band-ratio formulation (Tucker 1979), is less sensitive to solar geometry or sun-viewing angle (Feng et al. 2002). Hilker et al. (2009b) showed a stronger correlation between observed NDVI and fusion product NDVI than between the individual reflectance bands and the fusion product bands. Following these previous studies, we calculated NDVI from the satellite-like band data as NDVI = (NIR - Red) / (NIR + Red) (Eq. (1)). We compared the converted Landsat-8-like NDVI values to the converted Sentinel-2-like NDVI values using both band 8 and band 8a. The test showed a tight linear relationship (R2 > 0.997) and small bias (-0.6% using Sentinel-2-like band 8 and 0.7% using Sentinel-2-like band 8a). As a result, we selected band 8, which matches the spatial resolution of the Sentinel-2 red band (band 4, 10 m spatial resolution).
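A minimal sketch of the SRF convolution described above; the file names, wavelength grids, and the treatment of the Jaz output as a reflectance spectrum are our assumptions.

```python
# Minimal sketch: convolve a measured reflectance spectrum with the Landsat-8
# OLI spectral response functions (SRFs) to get band-equivalent reflectance,
# then compute NDVI (Eq. 1). File names and grids are illustrative assumptions.
import numpy as np

def band_reflectance(wl_nm, refl, srf_wl_nm, srf):
    """SRF-weighted average reflectance for one band."""
    srf_on_grid = np.interp(wl_nm, srf_wl_nm, srf, left=0.0, right=0.0)
    return np.trapz(refl * srf_on_grid, wl_nm) / np.trapz(srf_on_grid, wl_nm)

# wl, refl: Jaz spectrum (350-1000 nm); red/nir SRFs: OLI bands 4 and 5
wl, refl = np.loadtxt("jaz_spectrum.txt", unpack=True)          # hypothetical
red_wl, red_srf = np.loadtxt("oli_band4_srf.txt", unpack=True)  # ~636-673 nm
nir_wl, nir_srf = np.loadtxt("oli_band5_srf.txt", unpack=True)  # ~851-879 nm

red = band_reflectance(wl, refl, red_wl, red_srf)
nir = band_reflectance(wl, refl, nir_wl, nir_srf)
ndvi = (nir - red) / (nir + red)   # Eq. (1)
print(f"Landsat-8-like ground NDVI: {ndvi:.3f}")
```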


2.5 Evaluation

Simple linear regression analysis was used to evaluate the fusion product NDVI values against the ground NDVI values. The statistical indicators used in this study were the correlation coefficient (R, Eq. (2)), bias (Eq. (3)), mean absolute difference (MAD), and root-mean-square error (RMSE, Eq. (4)), following evaluation studies that compared fused images with reference satellite images (Chen et al. 2015; Zhu et al. 2018).

\[ \rho(A,B) = \frac{E[(A-\mu_A)(B-\mu_B)]}{\sigma_A\,\sigma_B} \qquad \text{Eq. (2)} \]

\[ \mathrm{bias} = E(A-B) \qquad \text{Eq. (3)} \]

\[ \mathrm{MSE}(A) = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\bigl[B(i,j)-A(i,j)\bigr]^2, \qquad \mathrm{RMSE}(A) = \sqrt{\mathrm{MSE}(A)} \qquad \text{Eq. (4)} \]

(A = fusion NDVI products or satellite NDVI products; B = ground NDVI; E = expectation (mean); σ_A, σ_B = the standard deviations of A and B)

For the simple linear regression analysis, we calculated the coefficient of determination (R2) using R. Although Xie et al. (2018) revealed the uncertainty of input data in blended fusion products through the variance of MAD, we used the bias of the input satellite NDVI and fusion NDVI products against ground NDVI. Temporal variation within the same fusion product is distinguished by colored points. We constructed time series of NDVI across the 15 plots to demonstrate the seasonal variation. All NDVI data derived from the ground measurements, fusion products, and satellite products were marked by day-of-year (DOY). Simple linear regression analysis (R2, bias) of the fusion products against ground NDVI at each plot was also conducted. The dates of the data used for the evaluation are listed in Supplementary 1, Supplementary 2, and Supplementary 3.
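The statistics above (Eqs. 2-4) can be computed with a few lines of code; the sketch below is a minimal illustration, and the normalization of the relative bias (rbias) by the mean ground NDVI is our assumption of how the percentage values were obtained.

```python
# Minimal sketch of the evaluation statistics: correlation (Eq. 2), bias
# (Eq. 3), MAD, and RMSE (Eq. 4) between fusion NDVI (A) and ground NDVI (B).
import numpy as np

def evaluate(fusion_ndvi, ground_ndvi):
    a = np.asarray(fusion_ndvi, dtype=float)
    b = np.asarray(ground_ndvi, dtype=float)
    r = np.corrcoef(a, b)[0, 1]            # Eq. (2)
    bias = np.mean(a - b)                  # Eq. (3); negative = underestimation
    mad = np.mean(np.abs(a - b))
    rmse = np.sqrt(np.mean((b - a) ** 2))  # Eq. (4)
    return {
        "R2": r ** 2,
        "bias_%": 100.0 * bias / np.mean(b),  # relative bias (assumed normalization)
        "MAD": mad,
        "RMSE": rmse,
    }

# Example with made-up numbers (not thesis data)
print(evaluate([0.45, 0.71, 0.88, 0.20], [0.50, 0.75, 0.93, 0.22]))
```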

3. Result

3.1 Comparison of Satellite NDVI Products Against Ground NDVI

Simple linear regression analysis of the satellite images against the in-situ measurements showed strong linear relationships and negative bias (R2 > 0.91, bias up to -18%). Landsat-8 NDVI values showed a high correlation (R2 = 0.93, bias = -4%) with the ground NDVI values (Figure 3 (a)). MODIS NDVI also showed a high correlation (R2 = 0.95, bias = -7%) with the mean ground NDVI covering the whole MODIS pixel over the season (Figure 3 (c)). However, the Sentinel-2 NDVI values showed a strong negative bias (rbias = -18%) against the ground NDVI values (Figure 3 (b)). The NDVI values in the individual plots ranged from -86% to 53% difference from the MODIS NDVI (Figure 3 (d)). Within the MODIS pixel, MODIS NDVI showed negative bias against the ground NDVI values on the rice paddy land cover (Figure 3 (d), lined circle) and positive bias against the ground NDVI values on the mixed land cover (Figure 3 (d), dashed circle). In the case of plot 5 (rice paddy land cover), MODIS NDVI showed both negative and positive bias against the plot NDVI (Figure 3 (d), yellow circle).

Figure 3. Simple linear regression analysis of (a) Landsat-8 NDVI and (b) Sentinel-2 NDVI against ground NDVI, and (c) MODIS NDVI against mean ground NDVI; (d) scatter plot of MODIS NDVI against ground NDVI values within the MODIS pixel (plot 5: yellow dots)

3.2 Comparison of Fusion NDVI Products Against Ground NDVI

Simple linear regression analysis of the fusion NDVI products against the in-situ NDVI showed moderate to strong linear relationships and negative bias (R2 range 0.73 to 0.93, bias up to -11%). ESTARFM NDVI showed a moderate linear relationship and negative bias against ground NDVI (R2 = 0.73, bias = -8%) (Figure 4 (a)); FSDAF NDVI showed a moderate linear relationship and a small negative bias against ground NDVI (R2 = 0.74, bias = -4%) (Figure 4 (b)); STAIR NDVI showed a high linear relationship and a small negative bias against ground NDVI (R2 = 0.83, bias = -2%) (Figure 4 (c)); and CESTEM NDVI showed the highest correlation with ground NDVI (R2 = 0.93, bias = -1.6%) (Figure 4 (d)). The blended fusion NDVI products (ESTARFM NDVI, FSDAF NDVI, and STAIR NDVI) showed negative bias against ground NDVI on the rice paddy cover and positive bias against ground NDVI on the mixed land cover (Figure 5 (a)-(c)). CESTEM NDVI showed a marginal bias against ground NDVI (Figure 5 (d)). In the case of plot 5 (rice paddy land cover), the fusion NDVI products showed both negative and positive bias against the plot NDVI (Figure 5 (d), yellow dots). This result was also shown in Supplementary 4 and Supplementary 5, which compare the fusion NDVI products to ground NDVI with the same number of data points on the same dates. The amplitude of ground NDVI (~0.714) was best captured by STAIR NDVI (~0.709), followed by CESTEM NDVI (~0.691), ESTARFM (~0.647), and FSDAF (~0.638) (Table 5).


Figure 4. Simple linear regression analysis of (a) ESTARFM NDVI, (b) FSDAF NDVI, (c) STAIR NDVI, and (d) CESTEM NDVI against ground NDVI (ground NDVI dates distinguished by color)


Figure 5. Scatter plot of fusion product NDVI against ground NDVI (rice paddy land cover type: red dots; mixed land cover type: green dots; plot 5: yellow dots)

Table 5. The amplitude of NDVI during the whole growing season (the difference between the maximum NDVI (peak NDVI) and the minimum NDVI (bottom NDVI))

NDVI products | Ground NDVI | ESTARFM NDVI | FSDAF NDVI | STAIR NDVI | CESTEM NDVI
Amplitude of NDVI | 0.714 | 0.647 | 0.638 | 0.709 | 0.691
Peak NDVI | 0.939 | 0.871 | 0.898 | 0.927 | 0.882
Bottom NDVI | 0.224 | 0.224 | 0.260 | 0.218 | 0.191


3.3 Time-series of NDVI on Each Plot

The fusion products showed high linear relationships and negative bias (mean R2 > 0.81, bias up to -10.8%) against each plot's NDVI during the whole growing season, but the biases varied from negative to positive across plots. On the mixed land cover plots, the fusion NDVI products generally showed positive bias against the plot NDVI (Table 6, bold). In contrast, on the rice paddy land cover plots, the fusion NDVI products generally showed negative bias against the plot NDVI (Table 6). The fusion NDVI products showed positive bias against the NDVI of plot 5, which is located on rice paddy land cover (Table 6, underlined).

The time series of NDVI across the 15 plots show not only the seasonal NDVI variation but also the amplitude of the NDVI variation (Figure 6). At the start of the growing season, FSDAF NDVI and ESTARFM NDVI had lower values than STAIR NDVI and CESTEM NDVI. In plot 5, ESTARFM NDVI, FSDAF NDVI, and STAIR NDVI mismatched the ground NDVI around DOY 240, while CESTEM correctly captured the harvest event. Compared to the ground peak NDVI, the fusion products had a smaller peak NDVI on the rice paddy cover plots and a larger peak NDVI on the mixed land cover plots. Furthermore, the amplitude of the fusion NDVI products was larger than that of the ground NDVI on the mixed land cover plots (Figure 6).


Table 6. Statistical analysis at each plot during the whole growing season

Plot | ESTARFM R2 | ESTARFM rbias | FSDAF R2 | FSDAF rbias | STAIR R2 | STAIR rbias | CESTEM R2 | CESTEM rbias
Plot 1 | 0.91 | -17.7% | 0.94 | -12.6% | 0.97 | -8.0% | 0.98 | -4.8%
Plot 2 | 0.97 | -10.8% | 0.96 | -12.1% | 0.96 | -3.4% | 0.97 | -4.1%
Plot 3 | 0.75 | 11.7% | 0.80 | 23.1% | 0.63 | 10.5% | 0.94 | 10.2%
Plot 4 | 0.89 | 8.8% | 0.94 | 12.8% | 0.92 | 6.7% | 0.73 | 6.9%
Plot 5 | 0.51 | 3.5% | 0.71 | 3.2% | 0.61 | 9.8% | 0.86 | 6.6%
Plot 6 | 0.70 | -15.9% | 0.83 | 2.3% | 0.92 | 4.2% | 0.97 | 3.7%
Plot 7 | 0.89 | -14.6% | 0.95 | -10.7% | 0.98 | -7.6% | 0.98 | -4.8%
Plot 8 | 0.83 | -18.0% | 0.84 | -17.1% | 0.92 | -9.9% | 0.98 | -9.1%
Plot 9 | 0.91 | -9.5% | 0.80 | -8.8% | 0.92 | 0.01% | 0.99 | -2.6%
Plot 10 | 0.89 | -13.5% | 0.92 | -13.0% | 0.97 | -3.8% | 0.98 | -3.6%
Plot 11 | 0.52 | -32.3% | 0.70 | -19.9% | 0.80 | -16.6% | 0.96 | -7.0%
Plot 12 | 0.62 | -23.7% | 0.63 | -10.6% | 0.81 | -6.3% | 0.99 | -2.8%
Plot 13 | 0.88 | -13.4% | 0.93 | -7.5% | 0.98 | -2.4% | 0.95 | -2.2%
Plot 14 | 0.94 | -7.2% | 0.92 | -7.0% | 0.97 | -2.1% | 0.95 | -0.2%
Plot 15 | 0.99 | -10.6% | 0.95 | -7.5% | 0.97 | -4.0% | 0.96 | -3.6%
Mean | 0.81 | -10.8% | 0.85 | -5.7% | 0.89 | -2.2% | 0.95 | -1.2%


Figure 6. Time series of NDVI across 15 plots (Ground, Landsat-8, ESTARFM, FSDAF, CESTEM, and STAIR)


4. Discussion

4.1 Confirming the Strengths of Each Fusion Product with Ground Measurements

The inter-comparison of the fusion NDVI products against ground NDVI supports the strengths of each fusion product. We inter-compared the accuracy of two of the fusion products (ESTARFM and FSDAF) against ground NDVI because both ESTARFM and FSDAF used the same input data. In general, FSDAF NDVI agreed better with ground NDVI (R2 = 0.75, bias = -4%) than ESTARFM NDVI did (R2 = 0.73, bias = -8%) (Figure 4 (a), (b)). Moreover, FSDAF NDVI clearly outperformed ESTARFM NDVI on the mixed land cover plot (Table 6, plot 6), where the heterogeneity arises from the mixture of rice paddy, road, and ditch, and on the plot where drastic changes occurred (Table 6, plot 5), where the rice paddies were harvested between the input-pair dates. These results support the strength of FSDAF in heterogeneous landscapes and in predicting drastic vegetation change with minimal input data (Zhu et al. 2016). STAIR used all available Landsat data (Landsat-7 and Landsat-8) and daily MODIS data (MCD43A4) to produce cloud-/gap-free data (Luo et al. 2018). The amplitude of STAIR NDVI, i.e., the difference between the maximum and minimum NDVI during the whole growing season, was the most similar to that of ground NDVI (Table 5). Hence, the well-estimated temporal fluctuation of STAIR NDVI confirms the strength of the cloud-/gap-free STAIR product. CESTEM products, which are mainly estimated from CubeSat images (Planet team 2018), overcome the low radiometric quality of CubeSat images (Houborg and McCabe 2018a). CESTEM products have already proven to be a promising dataset for land surface monitoring, but an evaluation against pixel-size ground measurements did not exist (Aragon et al. 2018; Houborg and McCabe 2018b). CESTEM NDVI showed higher agreement with the Landsat-8-like ground NDVI (R2 = 0.93, bias = -1.6%) than Landsat-8 itself did (Figure 3 (a) and Figure 4 (d)). Therefore, this high agreement with ground NDVI supports the consistency of the CESTEM product with Landsat-8 surface reflectance.

4.2 Evaluation of Input Data

The satellite images used as input data for the fusion products showed good agreement with ground NDVI (R2 > 0.93, bias up to -7%) (Figure 3). Landsat-8 images, whose spatial and spectral resolution the fusion products target, were used as input data for the fusion products (Houborg and McCabe 2018a; Luo et al. 2018; Zhu et al. 2010; Zhu et al. 2016). Landsat-8 NDVI agreed well with ground NDVI (R2 = 0.93, bias = -4%) (Figure 3 (a)). In addition, the mean ground NDVI over the study site showed high agreement with the MODIS NDVI covering the whole study site (R2 = 0.95, bias = -7%) (Figure 3 (c)). Despite the high linear relationship (R2 = 0.91) between Sentinel-2 NDVI and ground NDVI, Sentinel-2 NDVI was negatively biased (up to -18%) against ground NDVI (Figure 3 (b)). Therefore, we excluded Sentinel-2 NDVI from the evaluation of the fusion products.

4.3 How Does the Bias of Input Data Against Ground Measurements Affect the Final Fusion Products?

The uncertainties of the fusion products caused by the input data differed between plots with positive and negative bias against the in-situ measurements. Our extensive in-situ spectral dataset at the landscape scale enabled us to test the propagation of uncertainty from the input data into the final fusion NDVI estimates. MODIS NDVI showed little bias and a high correlation (bias = -7%, R2 = 0.95) with the in-situ data covering the whole MODIS pixel over the season (Figure 3 (c)). However, we found that the NDVI values in the individual plots ranged from -86% to 53% difference from the MODIS NDVI value, which suggests substantial landscape heterogeneity within the 250 m MODIS pixel (Figure 3 (d)). The blending fusion algorithms derive the magnitudes of the 30 m resolution pixel values from the MODIS NDVI value (Zhu et al. 2018). If the plot-level NDVI is biased relative to the MODIS NDVI value, the blending fusion algorithm will therefore introduce biases into the fused NDVI estimates. We found that during the peak growing season, the NDVI values in the mixed land cover plots (Figure 2, plots 3, 4, and 6) were overestimated (Figure 5 (a)-(c), dashed circles). Within the MODIS pixel, rice paddy accounted for 76% of the area (Figure 1). Therefore, the MODIS NDVI values, which mostly represent the dynamics of the rice canopy, must be higher than the actual NDVI values in the three mixed land cover plots (dashed circle in Figure 3). We also found that during the peak growing season, the NDVI values in the rice paddy land cover plots were underestimated (Figure 5 (a)-(c), lined circle). The MODIS NDVI value was negatively biased against the ground NDVI (Figure 3 (d), lined circle); this negatively biased MODIS NDVI accounts for the underestimated fusion NDVI on the rice paddy land cover plots. The fusion NDVI products on plot 5, which is located in homogeneous rice paddy land cover (Figure 2), were underestimated during the peak season but overestimated during the harvest period (Figure 5, yellow dots; Figure 6, plot 5). The harvested area was 5.5% of the total rice paddy land cover, equal to 3.7% of the total study site, which is not enough to be reflected in the MODIS NDVI (Figure 3 (d), yellow dots). The MODIS NDVI, positively biased against the ground NDVI, caused the overestimation of the fusion NDVI products. Therefore, when plot 5 became temporally heterogeneous compared to the dominant temporal behavior of the MODIS NDVI, the blended fusion NDVI products (ESTARFM NDVI, FSDAF NDVI, and STAIR NDVI) could not detect the harvest event around DOY 240 (Figure 6, plot 5).
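The mechanism described above can be illustrated with a toy area-weighted mixture; all NDVI values below are invented for illustration and are not thesis data.

```python
# Toy illustration: the 250 m MODIS pixel NDVI is an area-weighted mixture,
# so it sits above the true NDVI of mixed-cover plots and below that of the
# purest rice plots, pushing blended 30 m estimates in those directions.
rice_fraction, mixed_fraction = 0.76, 0.24     # land cover fractions (Figure 1)
ndvi_rice_plot, ndvi_mixed_plot = 0.90, 0.50   # hypothetical peak-season values

modis_ndvi = rice_fraction * ndvi_rice_plot + mixed_fraction * ndvi_mixed_plot
print(f"MODIS-scale NDVI: {modis_ndvi:.2f}")                        # ~0.80
print(f"vs mixed plot: {modis_ndvi - ndvi_mixed_plot:+.2f}")        # positive -> overestimation
print(f"vs rice plot:  {modis_ndvi - ndvi_rice_plot:+.2f}")         # negative -> underestimation
```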


4.4 Does Time Lag between Input-pair Date and Predicted Data Always Affect the Fusion Performance?

Temporal gap-filling affected the performance of the fusion products less on spatially heterogeneous plots than on spatially homogeneous plots. The performance of the ESTARFM and FSDAF fusion products against the in-situ measurements generally improved when the time lag between the input-pair date (marked by the Landsat points) and the predicted date decreased (Figure 6). In other words, a longer temporal distance between the input-pair date and the predicted date lowers the prediction performance (Gao et al. 2006; Olexa and Lawrence 2014; Walker et al. 2012; Zhu et al. 2010). This temporal distance between the input-pair date and the predicted date causes uncertainty in the temporal gap-filling of fusion products (Fu et al. 2015). However, the time lag between the input-pair date and the predicted date affected the performance of the blended fusion NDVI products (ESTARFM, FSDAF, and STAIR) against ground NDVI less when the heterogeneity occurred at the subplot scale (< 30 m) (Figure 2, plots 3, 4, and 6; Figure 6). Therefore, on the mixed land cover plots, the temporal condition of MODIS affects the blended fusion NDVI products more than the spatial condition from the Landsat images.
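A minimal sketch of the time-lag variable discussed here, using the Landsat-8 DOYs listed in Supplementary 1 as the input-pair dates and the ground-visit DOYs from Table 4 as the prediction dates:

```python
# Minimal sketch: time lag between each prediction date and the nearest
# Landsat input-pair date. DOY lists follow Supplementary 1 and Table 4.
landsat_pair_doys = [94, 134, 174, 206, 286, 302]                 # Landsat-8 OLI
prediction_doys = [155, 187, 194, 206, 215, 223, 239, 248, 291]   # ground visits

for doy in prediction_doys:
    lag = min(abs(doy - pair) for pair in landsat_pair_doys)
    print(f"DOY {doy}: {lag} days from the nearest Landsat input pair")
```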


4.5 Fusion NDVI Products for Detecting Vegetation Activity

Our in-situ measurements provide the insight that the fusion NDVI products underestimated the seasonal variation of NDVI. Compared to ground NDVI, the fusion NDVI products underestimated the amplitude of NDVI, i.e., the difference between the peak NDVI and the bottom NDVI (Table 5). This underestimation mainly derives from the difference between the peak ground NDVI and the peak fusion product NDVI (Table 5). As discussed in Section 4.3, the bias of the input data against the ground measurements caused the underestimation of the peak fusion product NDVI. The seasonal variation of in-situ NDVI detects the variations in vegetation activity (Kim et al. 2019; Ryu et al. 2010; Ryu et al. 2014). Therefore, compared to ground NDVI, the fusion NDVI products could underestimate the variations in vegetation activity.

4.6 Dataset for Accuracy Assessment

We generated a spatiotemporally comprehensive surface reflectance dataset that can be used to evaluate fusion products. Indeed, a recent review paper (Zhu et al. 2018) identified the lack of a standard dataset for evaluating spatiotemporal fusion as one of the major challenges in accuracy assessment. The Harmonized Landsat and Sentinel-2 dataset (Claverie et al. 2018) could be an option for evaluating fusion products. Still, there is the possibility of bias between satellite images and ground measurements (Figure 3 (b)). As a result, fusion products need ground measurements to validate the relationship between the fusion products and the satellite images (Frederic et al. 2013; Morisette et al. 2002). Our in-situ spectral measurements showed good agreement with the medium resolution satellite products at the pixel level (Figure 2 and Figure 3). Therefore, in further studies, these in-situ spectral measurements could be used as an in-situ validation dataset for fusion products.


5. Conclusion

In this study, we reported an inter-comparison of image fusion products against in-situ spectral measurements over a heterogeneous, cloudy rice paddy landscape during the whole growing season in 2017. The evaluation of the fusion products against pixel-level ground measurements revealed that: 1) the fusion NDVI products on the mixed land cover were overestimated due to the positive bias of the input data against ground NDVI; 2) the fusion NDVI products on the homogeneous rice paddy cover type were underestimated due to the negative bias of the input data against ground NDVI; 3) the temporal dependence of the blended fusion NDVI products was not clear on the mixed land cover type; and 4) the fusion NDVI products could underestimate the variations in vegetation activity. Moreover, 5) the strength of each fusion product was confirmed by the inter-comparison of fusion products, and 6) we expect that our in-situ spectral measurements could be useful for quantifying uncertainty in image fusion products, improving fine resolution cropland mapping and monitoring, and choosing a fusion algorithm depending on the research purpose.


References

Aragon, B., Houborg, R., Tu, K., Fisher, B.J., & McCabe, M. (2018). CubeSats Enable High Spatiotemporal Retrievals of Crop-Water Use for Precision Agriculture. Remote Sensing, 10

Barsi, J., Lee, K., Kvaran, G., Markham, B., & Pedelty, J. (2014). The Spectral Response of the Landsat-8 Operational Land Imager. Remote Sensing, 6, 10232

Bisquert, M., Bordogna, G., Begue, A., Candiani, G., Teisseire, M., & Poncelet, P. (2015). A Simple Fusion Method for Image Time Series Based on the Estimation of Image Temporal Validity. Remote Sensing, 7, 704-724

Chen, B., Huang, B., & Xu, B. (2015). Comparison of Spatiotemporal Fusion Models: A Review. Remote Sensing, 7, 1798-1835

Claverie, M., Ju, J., Masek, J.G., Dungan, J.L., Vermote, E.F., Roger, J.-C., Skakun, S.V., & Justice, C. (2018). The Harmonized Landsat and Sentinel-2 surface reflectance data set. Remote Sensing of Environment, 219, 145-161

Ding, Y., Zhao, K., Zheng, X., & Jiang, T. (2014). Temporal dynamics of spatial heterogeneity over cropland quantified by time-series NDVI, near infrared and red reflectance of Landsat 8 OLI imagery. International Journal of Applied Earth Observation and Geoinformation, 30, 139-145

Emelyanova, I.V., McVicar, T.R., Van Niel, T.G., Li, L.T., & van Dijk, A.I.J.M. (2013). Assessing the accuracy of blending Landsat–MODIS surface reflectances in two landscapes with contrasting spatial and temporal dynamics: A framework for algorithm selection. Remote Sensing of Environment, 133, 193-209

Feng, G., Yufang, J., Schaaf, C.B., & Strahler, A.H. (2002). Bidirectional NDVI and atmospherically resistant BRDF inversion for vegetation canopy. Ieee Transactions on Geoscience and Remote Sensing, 40, 1269-1278

Frederic, B., Weiss, M., Allard, D., Garrigue, S., Leroy, M., Jeanjean, H., Fernandes, R., Myneni, R., Privette, J., Bohbot, H., Bosseno, R., Dedieu, G., Di Bella, C., Duchemin, B., España, M., Gond, V., Fa Gu, X., Guyon, D., Lelong, C., & Vintila, R. (2013). VALERI: a network of sites and methodology for the validation of medium spatial resolution land products.

Fu, D.J., Zhang, L.F., Chen, H., Wang, J., Sun, X.J., & Wu, T.X. (2015). Assessing the Effect of Temporal Interval Length on the Blending of Landsat-MODIS Surface Reflectance for Different Land Cover Types in Southwestern Continental United States. ISPRS International Journal of Geo-Information, 4, 2542-2560

Gao, F., Masek, J., Schwaller, M., & Hall, F. (2006). On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. Ieee Transactions on Geoscience and Remote Sensing, 44, 2207-2218

Hilker, T., Wulder, M.A., Coops, N.C., Linke, J., McDermid, G., Masek, J.G., Gao, F., & White, J.C. (2009a). A new data fusion model for high spatial- and temporal-resolution mapping of forest disturbance based on Landsat and MODIS. Remote Sensing of Environment, 113, 1613-1627

Hilker, T., Wulder, M.A., Coops, N.C., Seitz, N., White, J.C., Gao, F., Masek, J.G., & Stenhouse, G. (2009b). Generation of dense time series synthetic Landsat data through data blending with MODIS using a spatial and temporal adaptive reflectance fusion model. Remote Sensing of Environment, 113, 1988-1999

Houborg, R., & McCabe, M.F. (2018a). A Cubesat enabled Spatio-Temporal Enhancement Method (CESTEM) utilizing Planet, Landsat and MODIS data. Remote Sensing of Environment, 209, 211-226

Houborg, R., & McCabe, M.F. (2018b). Daily Retrieval of NDVI and LAI at 3 m Resolution via the Fusion of CubeSat, Landsat, and MODIS Data. Remote Sensing, 10, 890

Jarihani, A.A., McVicar, T.R., Van Niel, T.G., Emelyanova, I.V., Callow, J.N., & Johansen, K. (2014). Blending Landsat and MODIS Data to Generate Multispectral Indices: A Comparison of "Index-then-Blend" and "Blend-then-Index" Approaches. Remote Sensing, 6, 9213-9238

Ju, J.C., & Roy, D.P. (2008). The availability of cloud-free Landsat ETM plus data over the conterminous United States and globally. Remote Sensing of Environment, 112, 1196-1211


Kim, J., & Hogue, T.S. (2012). Evaluation and sensitivity testing of a coupled Landsat-MODIS downscaling method for land surface temperature and vegetation indices in semi-arid regions. Journal of Applied Remote Sensing, 6, 17

Kim, J., Ryu, Y., Jiang, C., & Hwang, Y. (2019). Continuous observation of vegetation canopy dynamics using an integrated low-cost, near-surface remote sensing system. Agricultural and Forest Meteorology, 264, 164-177

Liu, X., Liu, L., Hu, J., & Du, S. (2017). Modeling the Footprint and Equivalent Radiance Transfer Path Length for Tower-Based Hemispherical Observations of Chlorophyll Fluorescence. Sensors (Basel), 17

Luo, Y., Guan, K., & Peng, J. (2018). STAIR: A generic and fully-automated method to fuse multiple sources of optical satellite data to generate a high-resolution, daily and cloud-/gap-free surface reflectance product. Remote Sensing of Environment, 214, 87-99

Morisette, J.T., Privette, J.L., & Justice, C.O. (2002). A framework for the validation of MODIS Land products. Remote Sensing of Environment, 83, 77-96

Olexa, E.M., & Lawrence, R.L. (2014). Performance and effects of land cover type on synthetic surface reflectance data and NDVI estimates for assessment and monitoring of semi-arid rangeland. International Journal of Applied Earth Observation and Geoinformation, 30, 30-41

Ryu, Y., Baldocchi, D.D., Verfaillie, J., Ma, S., Falk, M., Ruiz-Mercado, I., Hehn, T., & Sonnentag, O. (2010). Testing the performance of a novel spectral reflectance sensor, built with light emitting diodes (LEDs), to monitor ecosystem metabolism, structure and function. Agricultural and Forest Meteorology, 150, 1597-1606

Ryu, Y., Lee, G., Jeon, S., Song, Y., & Kimm, H. (2014). Monitoring multi-layer canopy spring phenology of temperate deciduous and evergreen forests using low-cost spectral sensors. Remote Sensing of Environment, 149, 227-238

Singh, D. (2012). Evaluation of long-term NDVI time series derived from Landsat data through blending with MODIS data. Atmosfera, 25, 43-63


Song, C., Woodcock, C.E., Seto, K.C., Lenney, M.P., & Macomber, S.A. (2001). Classification and Change Detection Using Landsat TM Data: When and How to Correct Atmospheric Effects? Remote Sensing of Environment, 75, 230-244

Planet Team (2018). Planet Satellite Imagery Products [WWW Document]. https://www.planet.com/products/planet-imagery/

Tucker, C.J. (1979). Red and photographic infrared linear combinations for monitoring vegetation. Remote Sensing of Environment, 8, 127-150

Walker, J.J., de Beurs, K.M., Wynne, R.H., & Gao, F. (2012). Evaluation of Landsat and MODIS data fusion products for analysis of dryland forest phenology. Remote Sensing of Environment, 117, 381-393

Xie, D., Gao, F., Sun, L., & Anderson, M. (2018). Improving Spatial-Temporal Data Fusion by Choosing Optimal Input Image Pairs. Remote Sensing, 10, 1142

Xu, B., Li, J., Park, T., Liu, Q., Zeng, Y., Yin, G., Zhao, J., Fan, W., Yang, L., Knyazikhin, Y., & Myneni, R.B. (2018). An integrated method for validating long-term leaf area index products using global networks of site-based measurements. Remote Sensing of Environment, 209, 134-151

Zhu, X., Cai, F., Tian, J., & Williams, T. (2018). Spatiotemporal Fusion of Multisource Remote Sensing Data: Literature Survey, Taxonomy, Principles, Applications, and Future Directions. Remote Sensing, 10, 527

Zhu, X.L., Chen, J., Gao, F., Chen, X.H., & Masek, J.G. (2010). An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sensing of Environment, 114, 2610-2623

Zhu, X.L., Helmer, E.H., Gao, F., Liu, D.S., Chen, J., & Lefsky, M.A. (2016). A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sensing of Environment, 172, 165-177


Supplementary Materials

Supplementary 1. Data day-of-year (DOY) for the time series of NDVI (except DOY 268 (Sentinel-2B), all Sentinel-2 data were observed by Sentinel-2A)

Data (number of data): day of year
Ground (9): 155 187 194 206 215 223 239 248 291
FSDAF (66): 92 97 100 102 103 106 109 111 113 116 117 120 121 122 123 127 131 134 139 140 141 146 152 153 154 155 156 159 161 162 163 166 168 169 186 187 195 200 213 223 243 244 245 246 256 257 259 263 264 268 269 271 272 275 278 280 286 290 293 294 296 297 298 300 303 304
CESTEM (60): 92 98 101 102 110 111 112 113 114 119 120 122 126 138 139 141 142 144 147 154 155 161 163 167 168 169 171 174 178 185 187 195 206 215 223 224 238 244 247 258 259 261 263 265 268 269 271 275 277 278 287 290 293 294 296 300 301 303 304
STAIR (214): 91 to 304 (daily data, no data gap in STAIR)
ESTARFM (65): 92 97 100 102 103 106 109 111 113 116 117 120 121 122 123 127 131 134 139 140 141 146 152 153 154 155 156 159 161 162 163 166 168 169 186 187 195 200 213 223 243 244 245 246 256 257 259 263 264 268 269 271 272 275 278 280 290 293 294 296 297 298 300 303 304
MODIS (73) (MOD09GQ): 92 93 97 100 102 103 106 109 111 113 116 117 120 121 122 123 126 127 131 134 139 140 141 146 147 152 153 154 155 156 159 161 162 163 166 168 169 173 186 187 195 200 206 213 223 243 244 245 246 256 257 259 263 264 268 269 271 272 275 278 280 286 287 290 293 294 296 297 298 300 301 303 304
Landsat-8 OLI (6): 94 134 174 206 286 302
Sentinel-2 (8): 103 113 123 154 173 243 268 293

Supplementary 2. Day-of-year in the comparison of satellite products against ground NDVI (underlined date: data were replaced with another date within ±3 days / '*': no data or cloud contaminated within ±3 days)

Ground | MODIS (MOD09GQ) | MODIS (MYD09GQ) | Landsat-8 | Sentinel-2
155 | 155 | 155 | * | 154
187 | 187 | 187 | * | *
194 | 195 | * | * | *
206 | 206 | * | 206 | *
215 | 213 | * | * | *
223 | 223 | * | * | *
239 | 243 | 238 | * | 243
248 | 246 | 246 | * | *
291 | 290 | 290 | 286 | 293

Supplementary 3. Day-of-year in the comparison of fusion products against ground NDVI (Underlined date: data was replaced to another data within +- 3 days / ‘*’: no data or cloud contaminated within +-3 days)

Ground | FSDAF | CESTEM | STAIR | ESTARFM
155 | 155 | 155 | 155 | 155
187 | 187 | 187 | 187 | 187
194 | 195 | * | 194 | 195
206 | 206 | 206 | 206 | *
215 | 213 | * | 215 | 213
223 | 223 | 223 | 223 | 223
239 | 243 | 238 | 239 | 243
248 | 246 | 247 | 248 | 246
291 | 290 | 290 | 291 | 290


Supplementary 4. Simple linear regression analysis of (a) ESTARFM NDVI, (b) FSDAF NDVI, (c) STAIR NDVI, and (d) CESTEM NDVI against ground NDVI (ground NDVI dates distinguished by color) with the same number of data points on the same dates


Supplementary 5. Scatter plot of fusion product NDVI against ground NDVI (rice paddy land cover type: red dots; mixed land cover type: green dots; plot 5: yellow dots) with the same number of data points on the same dates


국문 초록 (Abstract in Korean)

지표면이 이질적인 농경지를 대상으로 영상합성자료와 현장스펙트럼측정 간의 상호비교

Juwon Kong (공주원)
Department of Landscape Architecture and Rural Systems Engineering, Landscape Architecture Major

Along with the technological advances in high spatiotemporal resolution imagery, which is important for observing ecosystem dynamics at various scales, accuracy assessment of such imagery is required. Previous studies evaluating image fusion products have lacked the pixel-level in-situ spectral measurements suitable for systematic evaluation over landscapes with a heterogeneous land surface. In this study, four existing image fusion products (ESTARFM, FSDAF, STAIR, and CESTEM) were inter-compared with in-situ spectral measurements over a rice paddy landscape with a heterogeneous land surface during the growing season. The NDVI (normalized difference vegetation index) of the fusion products was examined with respect to its spatial and temporal variation. The fusion NDVI showed moderate to high linearity against the ground NDVI, and the values were biased in the negative direction (0.73 < R2 < 0.93, -8% < bias < -1.6%). The fusion NDVI products captured the NDVI variability but predicted a lower variability than the ground NDVI. From these results, we concluded that when the input data are biased relative to the field data, the effect on the fusion products differs according to the land cover classification. In addition, the fusion NDVI products could underestimate the variability of vegetation activity. Unlike previous findings, there were land cover types for which the temporal dependency of ESTARFM, FSDAF, and STAIR on the input data was not clearly observed. The strengths of each fusion NDVI product were confirmed through comparison with the ground NDVI. The in-situ spectral data used in this study are expected to be useful for assessing the uncertainty of image fusion products, improving image products suitable for cropland monitoring, and selecting a fusion algorithm appropriate to the researcher's purpose.

Keywords: image fusion, in-situ spectral measurement, inter-comparison, high spatiotemporal resolution imagery, accuracy assessment, normalized difference vegetation index
Student number: 2017-26230


Acknowledgements

First, I would like to thank my advisor, Prof. Youngryel Ryu of the Department of Landscape Architecture and Rural Systems Engineering at Seoul National University. Whenever I asked Prof. Ryu about my writing, he provided not only writing guidelines but also thoughtful comments on my research. He led me to the world of science and to the scientific network for the next step of my career. I also thank Prof. Xiaolin Zhu, Dr. Rasmus Houborg, and Prof. Kaiyu Guan for providing the fusion products for this study.

The chair and members of the committee, Prof. Dongkun Lee and Prof. Heeyeun Yoon, encouraged me with insightful comments and questions throughout 2018. I would also like to acknowledge Dr. Huang Yan as the second reader of my research results, and I am gratefully indebted to her for her comments on this study. I also want to thank all my current and previous lab members, Dr. Benjamin Dechant, Soyoun Kim, Jongmin Kim, Yourum Huang, Jeehwan Bae, Kaige Yang, Dr. Chongya Jiang, Yulin Yan, Jane Lee, and Yunah Lee, for constructive discussions and fieldwork during the past two years.

Last, and most importantly, I must express gratitude to my parents, Dr. Dukam Kong and Dr. Haesun Song, and my sister, Hyunwook Kong, for their support through prayers and encouragement. With their love and guidance, I could accomplish every step of my life as a son and a student.

This research was funded by the National Research Foundation of Korea (NRF-2016M1A5A1901789).
