
Attribution-NonCommercial-NoDerivs 2.0 Korea

You are free to copy, distribute, transmit, display, perform, and broadcast this work, under the following conditions:

- For any reuse or distribution, you must make clear to others the license terms applied to this work.
- Any of these conditions can be waived if you obtain separate permission from the copyright holder.

Your rights under copyright law are not affected by the above.

This is a human-readable summary of the license (Legal Code).

Disclaimer

Attribution. You must attribute the work to the original author.

NonCommercial. You may not use this work for commercial purposes.

NoDerivs. You may not alter, transform, or build upon this work.

February 2016, Master's Thesis

Prediction of Minimum DNBR in a Reactor Core Using Cascaded Fuzzy Neural Networks

Graduate School of Chosun University
Department of Nuclear Engineering
김 동 영 (Dong Yeong Kim)

Prediction of Minimum DNBR in a Reactor Core Using Cascaded Fuzzy Neural Networks

CFNN을 이용한 원자로 노심 내 최소 DNBR 예측

February 25, 2016

Graduate School of Chosun University
Department of Nuclear Engineering

Prediction of Minimum DNBR in a Reactor Core Using Cascaded Fuzzy Neural Networks

Advisor: Professor 나 만 균 (Man Gyun Na)

This thesis is submitted in application for the Master's degree in Nuclear Engineering.

October 2015

Graduate School of Chosun University
Department of Nuclear Engineering
김 동 영 (Dong Yeong Kim)

This is to certify that the Master's thesis of 김 동 영 is approved.

Committee Chair: Professor 송 종 순, Chosun University (signature)
Committee Member: Professor 나 만 균, Chosun University (signature)
Committee Member: Dr. 김 창 회, Korea Atomic Energy Research Institute (signature)

November 2015

CONTENTS

List of Tables ........................................................ ⅰ
List of Figures ....................................................... ⅱ
Abstract .............................................................. ⅲ

Ⅰ. Introduction ...................................................... 1

Ⅱ. Reactor Protection and Monitoring Systems ......................... 3

Ⅲ. Cascaded Fuzzy Neural Networks .................................... 8
   A. CFNN Methodology ................................................ 8
   B. Optimization of the CFNN ....................................... 15

Ⅳ. Application to the Minimum DNBR Prediction ....................... 19

Ⅴ. Conclusions and Further Study .................................... 32

References ........................................................... 33

List of Tables

Table 1. Ranges of input and output signals .......................... 16
Table 2. DNBR calculation results by the CFNN model .................. 16
Table 3. DNBR calculation results by FNN and FSVR models ............. 20
Table 4. DNBR calculation results by CFNN models ..................... 20
Table 5. Comparison of the CFNN model and COLSS ...................... 24
Table 6. Measurement errors for DNBR calculations .................... 25

List of Figures

Fig. 1. CPC, COLSS, and SAFDL ......................................... 4
Fig. 2. CPC/COLSS monitored variables ................................. 5
Fig. 3. CPC inputs & outputs .......................................... 6
Fig. 4. COLSS inputs & outputs ........................................ 7
Fig. 5. Cascaded Fuzzy Neural Network (CFNN) model .................... 5
Fig. 6. Fuzzy Neural Network (FNN) of the first and second stages ..... 8
Fig. 7. Data structure for developing CFNN models .................... 10
Fig. 8. Trend of the r(g) values according to the stage number ....... 11
Fig. 9. Optimization procedure of the CFNN model ..................... 14
Fig. 10. Minimum DNBR values versus each input data .................. 17
Fig. 11. RMS error versus the stage number of the CFNN for comparison of the optimization methods (positive ASI) ... 21
Fig. 12. RMS error versus the stage number of the CFNN for comparison of the optimization methods (negative ASI) ... 21
Fig. 13. RMS error versus the stage number of the CFNN for the development and test data (positive ASI) ... 22
Fig. 14. RMS error versus the stage number of the CFNN for the development and test data (negative ASI) ... 22
Fig. 15. Relative maximum error versus the stage number of the CFNN for the test data (positive ASI) ... 23
Fig. 16. Relative maximum error versus the stage number of the CFNN for the test data (negative ASI)

ABSTRACT

Prediction of Minimum DNBR in a Reactor Core Using Cascaded Fuzzy Neural Networks

김 동 영 (Dong Yeong Kim)
Advisor: Professor 나 만 균 (Man Gyun Na)
Department of Nuclear Engineering, Graduate School of Chosun University

One of the major considerations in designing a pressurized water reactor (PWR) core is to ensure that the acceptable fuel design limits of the fuel are never exceeded. Cladding failure, one of the factors threatening fuel rod integrity, occurs when the minimum DNBR is less than or equal to 1. Therefore, in order to prevent DNB, the DNBR value, a quantitative index that accurately indicates the onset of DNB, must be evaluated accurately.

In this thesis, a Cascaded Fuzzy Neural Network (CFNN), one of the data-based artificial intelligence methods, was used to predict the minimum DNBR. The CFNN model is designed so that a number of serially connected Fuzzy Neural Network (FNN) modules repeatedly refine the inference and thereby approach the target value. The model contains several kinds of parameters, and these parameters must be optimized. The FNN uses fuzzy inference with membership functions; the antecedent parameters involved in the membership functions were optimized using a genetic algorithm, and the consequent parameters of the fuzzy inference were optimized using the least-squares method. Over-fitting may occur while the CFNN model is being built. Whether over-fitting has occurred can be confirmed by cross checking with data not used in training, and when it is detected, the addition of FNN modules is stopped, so that over-fitting is prevented.

The proposed DNBR prediction algorithm was verified using a large number of data obtained from simulations of the Korean standard nuclear power plant (OPR1000). In addition, the results were compared with those of other models (FNN and FSVR, Fuzzy Support Vector Regression), and the proposed model was confirmed to be superior. Therefore, the CFNN model is expected to predict the DNBR values in a reactor core effectively.

Ⅰ. INTRODUCTION

In order to operate a Pressurized Water Reactor (PWR) safely, the fuel surface temperature has to be controlled so that overheating does not occur. If the fuel surface temperature increases further in the nucleate boiling region, the bubbles coalesce and begin to form a vapor film across the surface of the rods. This type of boiling crisis is called Departure from Nucleate Boiling (DNB). DNB can cause the fuel to partially melt, the cladding to rupture, and fission products to be released into the coolant [1]. The ratio between the heat flux on the fuel rod surface that would cause DNB and the actual heat flux on the fuel rod surface is called the DNB Ratio (DNBR). The DNBR needs to be monitored and predicted to prevent a boiling crisis and clad melting. Therefore, many studies have been carried out on the prediction of DNBR values [2-9].
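Written as a formula (in its usual form, with z denoting the axial position along the hot channel):

\mathrm{DNBR}(z) = \frac{q''_{\mathrm{DNB}}(z)}{q''_{\mathrm{actual}}(z)}, \qquad \text{minimum DNBR} = \min_{z} \mathrm{DNBR}(z)

so the minimum DNBR is the smallest value of this ratio anywhere in the core, and a value approaching 1 indicates the onset of DNB.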

The performance limit on the minimum DNBR is set so that DNB is avoided with 95% probability at a 95% confidence level. Likewise, the uncertainties in the process parameters, core design parameters, and calculation methods used in the assessment of the thermal margin should be treated with at least 95% probability at a 95% confidence level.

A reactor core monitoring and protection system is therefore required that predicts the minimum DNBR and prevents a boiling crisis by monitoring the DNBR. The Optimized Power Reactor 1000 (OPR1000) employs the Core Protection Calculator System (CPCS) and the Core Operating Limit Supervisory System (COLSS) for protection and monitoring, respectively.

These systems calculate the Local Power Density (LPD) and DNBR, which are critical safety-related parameters that cannot be measured directly. The COLSS is a program that runs on the Plant Monitoring System (PMS) computer and helps plant operators monitor the Limiting Conditions for Operation (LCOs) specified in the technical specifications.

The objective of this study is to predict the minimum DNBR in a reactor core from the measured signals of the Reactor Coolant System (RCS) by applying Cascaded Fuzzy Neural Networks (CFNNs) over a range of operating conditions. The CFNN produces the predicted minimum DNBR value through repeated inference using serially connected Fuzzy Neural Network (FNN) modules. The output data are the minimum DNBR values in a reactor core under many operating conditions, and the input data are the reactor power, core inlet temperature, pressurizer pressure, reactor core coolant flowrate, Axial Shape Index (ASI), and a variety of control rod positions.

The proposed DNBR prediction algorithm was verified using the nuclear and thermal data acquired from many numerical simulations of the OPR1000. In addition, the proposed CFNN model was compared with previously developed models and with the COLSS used in current OPR1000 reactors.

Ⅱ. Reactor Protection and Monitoring Systems

The Reactor Protection System (RPS) ensures that the plant safety limits are not exceeded during Anticipated Operational Occurrences (AOOs). The plant safety limits concern DNB, peak Linear Heat Rate (LHR), and Reactor Coolant System (RCS) pressure [10]. The safety limits are described as follows:

1) In order to prevent DNB and fuel clad cracking, the calculated DNBR of the reactor core must be higher than the design limit DNBR.

2) In order to prevent melting of the center of the fuel, the peak LHR must be lower than 689 W/cm (21 kW/ft).

3) In order to maintain the integrity of the RCS pressure boundary, the RCS pressure must be lower than 2750 psia (the operating pressure is 2250 psia).

The OPR1000 employs the CPCS and COLSS for core protection and monitoring, respectively. The Core Protection Calculators (CPCs) generate trip signals based on the LPD and DNBR, which prevents these limits from being exceeded during AOOs. The CPCS, a part of the Plant Protection System (PPS) with a four-channel configuration, is composed of four CPCs, two Control Element Assembly Calculators (CEACs), four operator modules, and the Control Element Assembly (CEA) position indication system. In addition, a Test Cart is included in the CPCS; it is used for periodic testing and maintenance of the CPCs and CEACs. Each channel is composed of a CPC and an operator module, and the two CEACs are provided for the four channels.

Each CPC calculates the DNBR and LPD from the two CEAC output signals and the Reactor Coolant System (RCS) monitored variables. If a value goes outside the pre-set DNBR or LPD range, the CPC generates three signals: a pretrip, a trip, and a CEA Withdrawal Prohibit (CWP) signal. The CPC also generates analog outputs and the necessary alarms for monitoring the operating state of the plant. In the DNBR calculation, the CPCs can provide timely protection for DNBR only if the operator maintains the steady-state DNBR in accordance with the technical specifications. This allows ample margin between the steady-state DNBR and the trip setpoint, so that DNBR protection is assured even in the event of a rapid DNBR reduction [10].

[Figure: regions labeled COLSS, SAFDL's, and CPC's]

Fig. 1. CPC, COLSS, and SAFDL

Figure 1 shows that the Core Protection Calculator (CPC) limits are included within the COLSS. The COLSS concept was introduced as part of the CPC patent application. The CPCs and COLSS work together to assure that the CPCs can protect the Specified Acceptable Fuel Design Limits (SAFDLs) [10]. In the event of Anticipated Operational Occurrences (AOOs), the DNBR must not drop below 1.3, nor may the peak Linear Heat Rate (LHR) rise above 689 W/cm.

The COLSS continually calculates the DNB margin, peak LHR, ASI, total core power, and azimuthal tilt magnitude and compares the calculated values with the LCOs on these parameters. In the COLSS, the DNBR affecting the margin to DNB is continually monitored, and a core power operating limit based on the margin to DNB is computed; the DNBR must not be reduced to a value of less than 1.3. The azimuthal tilt limit is provided to assure that the design safety margins are maintained. The azimuthal flux tilt is calculated by the COLSS and is not directly monitored by the Plant Protection System (PPS). Each CPC monitors only one of the four ex-core safety channels, so a cross-comparison of the ex-core channels is not possible within the CPCs. The actual value of the ASI is maintained within the range assumed in the safety analysis [10].

Figure 2 shows the variables monitored by the CPC and COLSS. The circles in the figure denote the CPC input signals: Reactor Coolant Pump (RCP) speed, cold-leg and hot-leg temperatures, pressurizer pressure, ex-core linear power, CEA position, and CEA penalty. The DNBR is a function of the RCS pressure, RCS flow, RCS temperature, reactor power, and flux distribution. The CPC calculations must be conservatively biased to offset their inaccuracy, so that the CPCs normally calculate a DNBR value that is lower, and an LPD value that is higher, than those calculated by the COLSS [10]. Figures 3 and 4 show the CPC and COLSS inputs and outputs, respectively.

[Figure: schematic of the reactor coolant system and secondary side (reactor vessel, pressurizer, steam generators, turbines, feedwater and condensate systems, RCPs, in-core detectors, ex-core linear power) with the COLSS monitored variables and the CPC inputs marked]

Fig. 2. CPC/COLSS Monitored Variables

[Figure: CPC input signals (RCS pressure, CEAC penalty factor, CEA RSPT signal, ex-core detector signal, RCP speed signal, hot-leg and cold-leg temperatures) and CPC outputs (low DNBR pretrip/trip and high LPD pretrip/trip signals to the RPS, calibrated neutron flux and margin-to-trip indications, operator module digital display, alarms, and plant computer outputs)]

Fig. 3. CPC inputs & outputs

[Figure: COLSS program on the plant computer with input signals (hot-leg and cold-leg temperatures, turbine first-stage pressure, steam pressure and flow, feedwater flow and temperature, CEA position, in-core flux, RCP head and speed, pressurizer pressure) and outputs to the operator's console (kW/ft power limit, DNBR power limit, Axial Shape Index, core power, digital power margin, COLSS power margin alarm, "CPC" and "Tech Spec." azimuthal tilt alarms, CRT display, printer, mode select, and status report)]

Fig. 4. COLSS inputs & outputs

Ⅲ. Cascaded Fuzzy Neural Networks

Fuzzy systems have been developed to perform "learning" and "inference" intelligently. Fuzzy theory attempts to treat mathematically the imprecision in human thought and action. The FNN is a Fuzzy Inference System (FIS) equipped with a training algorithm [11].

A. CFNN Methodology

Likewise, the CFNN is based on FNNs and is built on intelligent "learning" and "inference". Most past studies on FNN models implemented various types of single-stage fuzzy reasoning mechanisms. However, single-stage fuzzy reasoning is only the most basic among the various types of human reasoning. Syllogistic fuzzy reasoning, in which the consequence of a rule in one reasoning stage is passed to the next stage as a fact, is essential for effectively building a large-scale system with high-level intelligence [12]. The fusion of syllogistic fuzzy logic and neural networks has not yet been applied in the nuclear engineering field, and it is expected to provide performance superior to that of a simple FNN model. Therefore, this study applied a CFNN model based on syllogistic fuzzy reasoning.

The CFNN model contains two or more inference stages, where each stage corresponds to a single-stage FNN module. Each single-stage FNN module contains fuzzification, fuzzy inference, and training units. Figure 5 shows the general architecture of the CFNN model. In this study, the target value of the CFNN model is predicted through the process of repeatedly adding FNN modules. Here, the second-stage FNN module uses the initial input variables and the output of the first-stage FNN module as its input variables.
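A minimal sketch of this cascading, assuming (consistent with the complexity definition in Eq. (10) below) that each stage receives the original inputs together with the outputs of all preceding stages; fnn_stage is a hypothetical stand-in for a trained single-stage FNN module, not code from this study:

from typing import Callable, List, Sequence

def cfnn_predict(x: Sequence[float],
                 stages: List[Callable[[Sequence[float]], float]]) -> float:
    """Run the cascaded inference: stage g sees the original inputs plus
    the outputs y_hat_1 ... y_hat_{g-1} of the preceding stages."""
    extended = list(x)
    y_hat = 0.0
    for fnn_stage in stages:
        y_hat = fnn_stage(extended)      # single-stage FNN inference
        extended.append(y_hat)           # pass the result on as an extra input
    return y_hat                         # output of the final (G-th) stage

# Toy usage with placeholder "modules" (simple functions standing in for FNNs).
stages = [lambda v: sum(v) / len(v), lambda v: 0.5 * v[-1] + 0.1]
print(cfnn_predict([0.2, 0.4, 0.6], stages))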

[Figure: G serially connected FNN stages; the inputs x_1, x_2, ..., x_m feed every stage, the first stage produces ŷ_1, the second stage produces ŷ_2, and so on up to the G-th stage output ŷ_G]

Fig. 5. Cascaded Fuzzy Neural Network (CFNN) model

The FNN module of each subsequent stage is simply an expansion of the first-stage FNN module. The FNN is an FIS equipped with a training algorithm. The FIS generally uses conditional if/then rules composed of an antecedent part and a consequent part, and it is one of the methods of artificial intelligence. Both the antecedent and consequent parts have membership functions; typical forms are the Gaussian, triangular, trapezoidal, and bell-shaped functions. Since the FIS output should be a real value, a defuzzifier is required at the FIS output. Using the Takagi-Sugeno type, which does not require a membership function in the consequent part, an arbitrary i-th rule can be expressed as follows [13]:

If x_1(k) is A_{i1}(k) and x_2(k) is A_{i2}(k) and ... and x_m(k) is A_{im}(k),
then \hat{y}_i(k) = f_i(x_1(k), x_2(k), \ldots, x_m(k))    (1)

In the FNN module of stage g (g ≥ 2), the outputs of the preceding stages are appended to x_1(k), ..., x_m(k) as additional rule inputs of the same form.

where

x_j(k) : j-th input value of the FNN module (j = 1, 2, ..., m)
\hat{y}_i : output value of the i-th fuzzy rule
n : number of fuzzy rules
m : number of input variables
g : number of stages
A_{ij} : fuzzy set of the i-th fuzzy rule for the j-th input

The N input and output training data pairs of the fuzzy model, (x_1(k), x_2(k), ..., x_m(k); y(k)) for k = 1, 2, ..., N, were assumed to be available, and the data points in each dimension were normalized. A Gaussian membership function was used because it reduces the number of parameters to be optimized:

A_{ij}(x_j(k)) = \exp\left( -\frac{(x_j(k) - c_{ij})^2}{2\sigma_{ij}^2} \right)    (2)

where c_{ij} and \sigma_{ij} are the center and the width of the membership function, respectively.

The function f_i in Eq. (1) is expressed by the following first-order polynomial of the input variables, Eq. (3):

f_i(x_1(k), \ldots, x_m(k)) = q_{i1}x_1(k) + q_{i2}x_2(k) + \cdots + q_{im}x_m(k) + r_i    (3)

where

q_{ij} : weight of the j-th fuzzy input variable in the i-th fuzzy rule
r_i : bias of the i-th fuzzy rule

The output \hat{y}(k) of the FIS is calculated by summing the weighted fuzzy rule outputs \hat{y}_i(k), as shown in Eq. (4):

\hat{y}(k) = \sum_{i=1}^{n} \bar{w}_i(k)\, f_i(x_1(k), \ldots, x_m(k))    (4)

where

w_i(k) = \prod_{j=1}^{m} A_{ij}(x_j(k))    (5)

\bar{w}_i(k) = \frac{w_i(k)}{\sum_{i=1}^{n} w_i(k)}    (6)

Figure 6 shows the calculation method of the FIS [14]. The first layer indicates the input nodes that directly transmit the input values to the next layer. Each output from the first layer is transmitted to the input of a membership function. The second layer indicates a fuzzification layer that calculates membership function values. The third layer indicates a product operator on the membership functions that is expressed as Eq. (5). The fourth layer performs a normalization operation that is expressed as Eq. (6). The fifth layer generates the output of each fuzzy if/then rule. Finally, the sixth layer performs an aggregation of all the fuzzy if/then rules and is expressed as Eq. (4).
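The following short Python sketch makes the six-layer computation concrete by evaluating Eqs. (2)-(6) for a single-stage Takagi-Sugeno module; the rule count, centers, widths, and consequent parameters are arbitrary placeholder values chosen only for illustration, not parameters identified in this study.

import numpy as np

# Minimal sketch of one Takagi-Sugeno FNN module (Eqs. (2)-(6)).
# n fuzzy rules, m input variables; all parameter values are illustrative only.
n, m = 2, 3
rng = np.random.default_rng(0)

c = rng.uniform(0.0, 1.0, size=(n, m))   # membership centers c_ij
s = np.full((n, m), 0.3)                 # membership widths sigma_ij
q = rng.uniform(-1.0, 1.0, size=(n, m))  # consequent weights q_ij
r = rng.uniform(-1.0, 1.0, size=n)       # consequent biases r_i

def fnn_forward(x):
    """Return the FNN output y_hat for one normalized input vector x (Eq. (4))."""
    # Layer 2: Gaussian membership values A_ij(x_j), Eq. (2)
    A = np.exp(-((x - c) ** 2) / (2.0 * s ** 2))
    # Layer 3: rule firing strengths w_i, Eq. (5)
    w = np.prod(A, axis=1)
    # Layer 4: normalized firing strengths, Eq. (6)
    w_bar = w / np.sum(w)
    # Layer 5: first-order consequents f_i(x), Eq. (3)
    f = q @ x + r
    # Layer 6: weighted aggregation, Eq. (4)
    return float(np.sum(w_bar * f))

x = np.array([0.2, 0.5, 0.9])            # one normalized input vector
print(fnn_forward(x))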

[Figure: six-layer structure of a single FNN stage, from the input nodes x_1, x_2, ..., x_m through the fuzzification layer (membership functions A_11 ... A_nm), the product layer, the normalization layer, the rule-output layer f_1, ..., f_n, and the aggregation layer, with the first-stage output ŷ_1 passed on as an input to the second stage]

Fig. 6. Fuzzy Neural Network (FNN) of the first and second stages

Finally, the output \hat{y} of the first FNN module given by Eq. (4) can be expressed as the vector product in Eq. (7):

\hat{y}(k) = \mathbf{w}^{T}(k)\,\mathbf{q}    (7)

where

\mathbf{q} = [\, q_{11} \cdots q_{1m} \;\; r_1 \;\; \cdots \;\; q_{n1} \cdots q_{nm} \;\; r_n \,]^{T}
\mathbf{w}(k) = [\, \bar{w}_1 x_1(k) \cdots \bar{w}_1 x_m(k) \;\; \bar{w}_1 \;\; \cdots \;\; \bar{w}_n x_1(k) \cdots \bar{w}_n x_m(k) \;\; \bar{w}_n \,]^{T}

The vector \mathbf{q} is called the consequent parameter vector and has n(m+1) dimensions, and the vector \mathbf{w}(k) consists of the input data and membership function values. The estimated outputs for the N input and output data pairs induced from Eq. (7) can be expressed as follows:

\hat{\mathbf{y}} = \mathbf{W}\mathbf{q}    (8)

where

\hat{\mathbf{y}} = [\, \hat{y}(1) \;\; \hat{y}(2) \;\; \cdots \;\; \hat{y}(N) \,]^{T}
\mathbf{W} = [\, \mathbf{w}(1) \;\; \mathbf{w}(2) \;\; \cdots \;\; \mathbf{w}(N) \,]^{T}

The matrix \mathbf{W} has the dimension N \times n(m+1).

The CFNN model may suffer from an over-fitting problem. Over-fitting can be detected through cross checking; that is, the FNN module-adding process is stopped when over-fitting is found. The over-fitting problem can be resolved through cross checking using the data structure shown in Fig. 7 [15], where N_c represents the number of checking data points.

[Figure: the input-output pairs (y(k); x_1(k), x_2(k), ..., x_m(k)) arranged so that the first N_t rows form the training data set and the following N_c rows form the checking data set; together they constitute the development data set, and the remaining rows form the test data set]

Fig. 7. Data structure for developing CFNN models

A criterion used to evaluate whether or not an over-fitting problem occurs at stage g is the sum of the fractional errors for the checking data, r(g), which is expressed as follows:

r(g) = \sum_{k=N_t+1}^{N_t+N_c} \left| \frac{y(k) - \hat{y}^{g}(k)}{y(k)} \right|    (9)

where N_t and N_c are the numbers of training and checking data points, respectively, and \hat{y}^{g}(k) is the CFNN output at stage g.

Figure 8 shows that the training and checking processes stop if r(g+1) > r(g), that is, as soon as the checking error begins to increase after stage G.

[Figure: r(g) plotted against the stage number g (1, 2, 3, 4, 5, ..., G, G+1), decreasing up to stage G and increasing at stage G+1]

Fig. 8. Trend of the r(g) values according to the stage number

Furthermore, the complexity of the CFNN structure is defined as the total number of elements of the consequent parameter vectors \mathbf{q} of all the FNN modules included in the CFNN. Since the g-th stage uses the m original inputs together with the outputs of the preceding g-1 stages, the complexity is calculated as follows:

complexity = \sum_{g=1}^{G} n(m+g) = n\left( Gm + \frac{G(G+1)}{2} \right)    (10)

where G is the number of FNN modules in the CFNN, m is the number of original input variables, and n is the number of fuzzy rules.
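For concreteness, the short sketch below computes the structural complexity of Eq. (10) and applies the cross-checking stop rule of Fig. 8; it assumes, consistent with Eq. (10), that the g-th stage uses the m original inputs together with the outputs of all preceding stages. The example configuration (n = 2 rules, m = 9 inputs, G = 51 stages) reproduces the complexity value of 3570 reported later in Table 2.

# Sketch of the CFNN bookkeeping: structural complexity (Eq. (10)) and the
# cross-checking stop rule on the checking-data error r(g) (Eq. (9), Fig. 8).
# Assumes stage g uses the m original inputs plus all g-1 previous stage outputs.

def cfnn_complexity(n_rules, n_inputs, n_stages):
    """Total number of consequent parameters over all FNN modules, Eq. (10)."""
    return sum(n_rules * (n_inputs + g) for g in range(1, n_stages + 1))

# Reproduces the first row of Table 2: 2 rules, 9 inputs, 51 stages -> 3570.
print(cfnn_complexity(2, 9, 51))

def should_stop(r_history):
    """Stop adding FNN modules once the checking error starts to increase."""
    return len(r_history) >= 2 and r_history[-1] > r_history[-2]

print(should_stop([0.31, 0.22, 0.18, 0.19]))   # True: r(g) rose at the last stage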

B. Optimization of the CFNN

The FNN modules of the CFNN model consist of an FIS and its neuronal training system. In this study, the training data were used to optimize the antecedent and consequent parameters of the fuzzy inference system, and the test data, which are different from the training data set, were used to check the developed model. The antecedent parameters related to the membership functions can be optimized by a back-propagation method or a genetic algorithm. The fitness function of the genetic algorithm was designed to minimize both the maximum error and the Root Mean Square (RMS) error:

F = \exp\left( -(\lambda_1 E_{\mathrm{RMS}} + \lambda_2 E_{\max}) \right)    (11)

where

E_{\mathrm{RMS}} = \sqrt{ \frac{1}{N_t} \sum_{k=1}^{N_t} \left( y(k) - \hat{y}(k) \right)^2 }

E_{\max} = \max_{k=1,\ldots,N_t} \left| y(k) - \hat{y}(k) \right|

\lambda_1 : weighting value of the RMS error
\lambda_2 : weighting value of the maximum error
N_t : number of training data

The variable  means the actual measured value, and  in stage  is its value predicted using the FNN module in stage . If the antecedent parameters are fixed by using a genetic algorithm, the output of the proposed model can be explained by the expansion of some functions. Therefore, the consequent parameter  can be easily calculated by using the least-squares method. That is, the consequent parameter  is chosen to minimize an objective function. The objective function consists of the squared error between the actual value  and its estimated value :

  

   

 wq y y (12)

(28)

where

y ⋯  y ⋯ 

A solution minimizing the objective function in Eq. (12) can be obtained using Eq. (8). The parameter vector \mathbf{q} is solved easily from the pseudo-inverse, as shown below:

\mathbf{q} = \left( \mathbf{W}^{T}\mathbf{W} \right)^{-1} \mathbf{W}^{T} \mathbf{y}    (13)

In Eq. (13), the parameter vector \mathbf{q} is calculated from a series of input and output data pairs.
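A minimal numerical sketch of Eqs. (7), (8), (11), and (13), assuming fixed antecedent parameters: the regressor matrix W is assembled from the normalized firing strengths and the inputs, the consequent vector q is obtained from the pseudo-inverse (here via numpy's least-squares solver), and the resulting errors are combined into the fitness of Eq. (11). All data, membership parameters, and weights (lambda_1 = lambda_2 = 1) below are illustrative placeholders, not values from this study.

import numpy as np

rng = np.random.default_rng(1)
n, m, N = 2, 3, 50                         # rules, inputs, training patterns

X = rng.uniform(0.0, 1.0, size=(N, m))     # normalized training inputs
y = X @ np.array([0.5, -0.2, 0.8]) + 0.1   # synthetic target values

c = rng.uniform(0.0, 1.0, size=(n, m))     # fixed antecedent centers
s = np.full((n, m), 0.3)                   # fixed antecedent widths

def regressor_row(x):
    """Row w(k) of the matrix W in Eq. (8): [wbar_i*x_1 .. wbar_i*x_m, wbar_i] per rule."""
    A = np.exp(-((x - c) ** 2) / (2.0 * s ** 2))       # Eq. (2)
    w = np.prod(A, axis=1)                             # Eq. (5)
    w_bar = w / np.sum(w)                              # Eq. (6)
    return np.concatenate([np.append(w_bar[i] * x, w_bar[i]) for i in range(n)])

W = np.vstack([regressor_row(x) for x in X])           # N x n(m+1), Eq. (8)
q, *_ = np.linalg.lstsq(W, y, rcond=None)              # pseudo-inverse solution, Eq. (13)

y_hat = W @ q                                          # estimated outputs, Eq. (8)
e_rms = np.sqrt(np.mean((y - y_hat) ** 2))
e_max = np.max(np.abs(y - y_hat))
fitness = np.exp(-(1.0 * e_rms + 1.0 * e_max))         # Eq. (11) with unit weights
print(e_rms, e_max, fitness)

In the actual optimization, this evaluation would be repeated for every chromosome (candidate set of antecedent parameters) handled by the genetic algorithm, as outlined in Fig. 9.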

Figure 9 shows the optimization procedure of the CFNN model. The antecedent parameters of the proposed CFNN model were optimized using the genetic algorithm, and its consequent parameters were optimized using the least-squares method.

[Figure: flowchart of the optimization. Starting with g = 1, the g-th stage FNN is built by generating and evaluating initial chromosomes; genetic operations (selection, crossover, and mutation) are repeated until the maximum generation is reached, with the least-squares method used for the consequent parameter calculation of the FNN module; the structure optimization is then checked, and if the CFNN structure is not yet optimized the stage counter is incremented (g+1 → g) and the next FNN module is added, otherwise the procedure stops]

Fig. 9. Optimization procedure of the CFNN model

Ⅳ. Application to the Minimum DNBR Prediction

The proposed algorithm was applied to the first fuel cycle of the OPR1000. The DNB data were obtained using the MASTER (Multipurpose Analyzer for Static and Transient Effects of Reactor) and COBRA codes [16], [17]. The MASTER code, developed by KAERI (Korea Atomic Energy Research Institute), is a nuclear analysis and design code that can simulate PWR and BWR cores in one, two, and three dimensions. It was designed with a variety of capabilities, such as static core design, transient core analysis, and operation support, and it is interfaced with the COBRA code for thermal-hydraulic calculations. Because these two codes are best-estimate codes, additional margin should be provided to set up the safety limits for DNB protection or the alarm setpoints for DNB monitoring [9].

The DNB data comprise a total of 18,816 input-output data pairs (x_1, x_2, ..., x_9; y). Table 1 shows the ranges of the input and output signals, which describe the reactor core states appropriately [9]. From the DNB data, 200 pairs were set aside as test data, and the remaining data, from which the test data were removed, were used to develop the CFNN model; this development data set includes the training data and the checking data. The development data and test data were selected randomly, without any specific selection logic. Of the development data, 80% were used to train each FNN module, and the remaining development data, from which the training data were removed, were used to optimize the CFNN structure and to check for over-fitting. The input signals x_1, x_2, ..., x_9 represent the reactor power, core inlet temperature, coolant pressure, mass flowrate, axial shape index (ASI), and the R2, R3, R4, and R5 control rod positions, respectively, and the output signal y is the minimum DNBR in the reactor core. The ASI is defined as ASI = (P_b - P_t)/(P_b + P_t), where P_b is the power generated in the bottom half of the reactor core and P_t is the power generated in the top half.
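A minimal sketch of the data partitioning described above, assuming a simple random permutation (the further division into positive- and negative-ASI sets is omitted); the array here is only a stand-in for the 18,816 MASTER/COBRA input-output pairs:

import numpy as np

rng = np.random.default_rng(2)
N_total = 18816                                  # input-output pairs from MASTER/COBRA
data = rng.uniform(size=(N_total, 10))           # stand-in: 9 inputs + minimum DNBR

idx = rng.permutation(N_total)                   # random selection, no special logic
test = data[idx[:200]]                           # 200 test patterns
development = data[idx[200:]]                    # remaining 18,616 development patterns

n_train = int(0.8 * len(development))            # 80% of development data for training
training = development[:n_train]                 # used to fit each FNN module
checking = development[n_train:]                 # used for over-fitting cross-checking

print(len(test), len(training), len(checking))   # 200, 14892, 3724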

Table 1. Ranges of input and output signals

Input signals                   Nominal values     Ranges
Reactor power (%)               100                80 ~ 103
Inlet temperature (°C)          295.8              290.5 ~ 301.7
Pressure (bar)                  155.17             131.0 ~ 160.0
Mass flowrate (kg/m²-sec)       3565.0             2994.6 ~ 4135.4
ASI                             -                  -0.432 ~ 0.534
R2 control rod position (cm)    -                  0 ~ 381
R3 control rod position (cm)    -                  0 ~ 381
R4 control rod position (cm)    -                  0 ~ 381
R5 control rod position (cm)    -                  0 ~ 381

Output signal                   Nominal values     Ranges
DNBR value                      -                  0.853 ~ 5.176

As shown in Table 2, the DNB data were divided into the development and test data sets. The CFNN was trained on two separate DNBR development data sets, one for positive ASI (power relatively high in the bottom part of the reactor core) and one for negative ASI, because these results had smaller errors than the results obtained with only one data set. Table 2 also summarizes the DNBR calculation results of the CFNN model, the number of FNN stages at which training was finally stopped owing to the occurrence of over-fitting, and the complexity of the CFNN model. As the number of fuzzy rules increases, the number of FNN stages tends to decrease. The lowest value of each error column for each ASI set is marked in bold. The RMS error and relative maximum error are 0.11% and 0.33% for positive ASI, respectively, and 0.04% and 0.08% for negative ASI, respectively.

Table 2. DNBR calculation results by the CFNN model

               No. of   No. of               Development data                         Test data
               rules    stages   Complexity  No. of   RMS        Relative max.        No. of   RMS        Relative max.
                                             data     error (%)  error (%)            data     error (%)  error (%)
Positive ASI     2        51       3570      9308     0.1112     0.5813               100      0.1131     0.3271
                 4        30       2940      9308     0.1048     0.5236               100      0.1376     0.3819
                 6        34       5406      9308     0.0839     0.4351               100      0.1138     0.3339
                 8        13       1664      9308     0.1305     0.7763               100      0.1425     0.4185

[Figure panels: (a) minimum DNBR versus the reactor power; (b) minimum DNBR versus the core inlet temperature; (c) minimum DNBR versus the RCS coolant flowrate; (e) minimum DNBR versus the axial shape index; each panel shows the negative-ASI and positive-ASI data]

Fig. 10. Minimum DNBR values versus each input data

Figure 10 shows the relationship between the minimum DNBR values and each of the input variables for the test data set.

Table 3 shows the results of the FNN model and the FSVR model developed previously with the same data [3], [4]. For the FNN model, the RMS errors are 0.375% and 0.257% for positive and negative ASI, respectively. For the FSVR model, the RMS errors are 0.320% and 0.255% for positive and negative ASI, respectively.

Table 3. DNBR calculation results by FNN and FSVR models

                        FNN                                       FSVR
                RMS error (%)   Relative max. error (%)   RMS error (%)   Relative max. error (%)
Positive ASI        0.375           10.542                    0.320           1.969
Negative ASI        0.257            7.697                    0.255           0.854

Table 4 shows the results of the CFNN models optimized using the back-propagation method and the genetic algorithm [8]. With the back-propagation method, the RMS errors are 0.1800% and 0.1173% for positive and negative ASI, respectively. With the genetic algorithm, the RMS errors are 0.1138% and 0.0436% for positive and negative ASI, respectively. Therefore, the CFNN model optimized with the genetic algorithm performs better than the CFNN model optimized with back-propagation.

Table 4. DNBR calculation results by CFNN models

                    Back-propagation                          Genetic algorithm
                RMS error (%)   Relative max. error (%)   RMS error (%)   Relative max. error (%)
Positive ASI        0.1800          0.8147                    0.1138          0.3339
Negative ASI        0.1173          0.4718                    0.0436          0.0753

Figures 11 and 12 show the RMS error versus the stage number for positive ASI and negative ASI, respectively. In these figures, the results obtained using the genetic algorithm are compared with those obtained using the back-propagation method; the figures show the prediction accuracy of the two optimization methods.

[Figure: RMS error (%) versus the CFNN stage number for the genetic algorithm and back-propagation]

Fig. 11. RMS error versus the stage number of the CFNN for comparison of the optimization methods (positive ASI)

[Figure: RMS error (%) versus the CFNN stage number for the genetic algorithm and back-propagation]

Fig. 12. RMS error versus the stage number of the CFNN for comparison of the optimization methods (negative ASI)

[Figure: RMS error (%) versus the CFNN stage number for 2, 4, 6, and 8 fuzzy rules]

Fig. 13. RMS error versus the stage number of the CFNN for the development and test data (positive ASI)

[Figure: RMS error (%) versus the CFNN stage number for 2, 4, 6, and 8 fuzzy rules]

Fig. 14. RMS error versus the stage number of the CFNN for the development and test data (negative ASI)

[Figure: relative maximum error (%) versus the CFNN stage number for 2, 4, 6, and 8 fuzzy rules]

Fig. 15. Relative maximum error versus the stage number of the CFNN for the test data (positive ASI)

[Figure: relative maximum error (%) versus the CFNN stage number for 2, 4, 6, and 8 fuzzy rules]

Fig. 16. Relative maximum error versus the stage number of the CFNN for the test data (negative ASI)

Figures 13 and 14 show the RMS errors versus the stage number for positive ASI and negative ASI, respectively; these figures show the estimation results of the CFNN model with 6 fuzzy rules. As the number of stages of the CFNN increases, the RMS error decreases gradually and its rate of decrease diminishes. Figures 15 and 16 show the relative maximum errors versus the stage number of the CFNN for positive ASI and negative ASI, respectively.

In Table 5, the developed CFNN model is compared with the COLSS used in current OPR1000 plants and with the MASTER code. The COLSS is designed to assist the operators by monitoring the Limiting Conditions for Operation (LCOs) specified in the technical specifications of OPR1000 nuclear power plants: the peak linear heat rate, DNBR, total power level, azimuthal tilt, and Axial Shape Index (ASI). The DNBR value in the COLSS is calculated from the measured hot-leg temperature, cold-leg temperature, steam pressure, RCP speed, in-core flux, feedwater flowrate, steam flowrate, etc.

The DNBR values of the proposed CFNN model are almost the same as those of the MASTER code, which are the target values of the proposed method; this implies that the proposed method is reliable. The DNBR values predicted by the CFNN model are considerably larger than those of the COLSS. This difference arises mainly because the power distribution of the hot channel is assumed to be extremely conservative in the COLSS calculations, and to a smaller extent because of measurement uncertainty. The DNBR values predicted by the CFNN model so far do not reflect the measurement uncertainty of the measured signals; therefore, the measurement uncertainty should be considered.

Table 5. Comparison of the CFNN model and COLSS

ASI value   Power (%)   MASTER (target)   COLSS    Proposed algorithm   Proposed algorithm
                                                   (CFNN)               (CFNN) 1)
  0.081        80           4.203          2.921        4.203                3.953
  0.094        90           3.671          2.494        3.673                3.454
  0.069       100           3.243          2.135        3.243                3.050
  0.073       103           3.130          2.039        3.132                2.945
 -0.525        80           2.833          2.028        2.833                2.664
 -0.504        90           2.487          1.736        2.488                2.340

1) CFNN prediction with the measurement uncertainty of Table 6 reflected (94.04% of the predicted value).

Table 6 shows the data regarding the measurement uncertainty [18]. Considering the measurement uncertainty, the DNBR values predicted by the proposed CFNN model were lowered to 94.04% (= 100% − 5.96%) of the predicted values because of the measurement errors; the decrease is not large. (The 5.96% setpoint uncertainty is consistent with a 95% one-sided factor applied to the total standard deviation: 1.645 × √13.10 ≈ 5.96%.) The minimum DNBR values reflecting these uncertainties are shown in the rightmost column of Table 5.

Table 6. Measurement errors for DNBR calculations

Error parameters                                           Range      Variance
Calibration: calorimetric (2%)                              4.0%       1.3
(±)                                                         4.9%       2.01
Pressure (±)                                                1.5%       0.19
Signal linearity, reproducibility, and bistable error      10.73%      9.6
Total variance                                                        13.10
Setpoint uncertainty                                        5.96%

Figure 17 shows the distributions of the actual minimum DNBR values used in the present study and of the predicted minimum DNBR values. As shown in the figure, the distributions of the predicted minimum DNBR values for positive and negative ASI have almost the same shape as those of the actual minimum DNBR values.

[Figure panels: (a) actual minimum DNBR histogram for negative ASI (mean = 2.21, std = 0.45); (b) predicted minimum DNBR histogram for negative ASI (mean = 2.20, std = 0.46); (c) actual minimum DNBR histogram for positive ASI (mean = 3.13, std = 0.64); (d) predicted minimum DNBR histogram for positive ASI (mean = 3.14, std = 0.65)]

Fig. 17. Distribution of actual minimum DNBR values and estimated minimum DNBR values

Ⅴ. Conclusions and Further Study

In order to prevent radiation release and cladding cracking, the minimum DNBR has to be predicted under all circumstances, and the DNBR should be maintained above the limit value of 1.3.

In this study, a CFNN model was developed to predict the minimum DNBR. The proposed algorithm was applied to the first fuel cycle of the OPR1000. The RMS errors were 0.11% and 0.04% for positive ASI and negative ASI, respectively, which is sufficiently accurate. No standard criterion on the RMS error of the predicted minimum DNBR has been established for Nuclear Power Plant (NPP) operation. However, in order to apply the method to NPPs, an uncertainty analysis of the calculated minimum DNBR value, which is not the 1.3 limit itself, should be conducted. The detailed uncertainty analysis will be performed in future work.

A comparison of the CFNN, FNN, and FSVR models shows that the proposed CFNN model performs better than the FNN and FSVR models. Also, the RMS error of the CFNN optimized with the genetic algorithm is lower than that of the CFNN optimized with the back-propagation method, although the genetic algorithm imposes a larger computational burden during the optimization.

As a result, the CFNN model is sufficiently accurate to be used for DNBR prediction. In addition, the comparison of the developed CFNN model with the COLSS showed that the DNBR values predicted by the proposed method are considerably larger than those of the COLSS. Therefore, the proposed CFNN model, with its larger and more accurate DNBR values, is expected to provide a wider operating window.

References

[1] L.S. Tong, “An evaluation of the departure from nucleate boiling in bundles of reactor fuel rods,” Nucl. Sci. Eng., vol. 33, pp. 7-15, Jan. 1968.

[2] H.C. Kim and S.H. Chang, “Development of a back propagation network for one-step transient DNBR calculations,” Ann. Nucl. Energy, vol. 24, no. 17, pp.1437-1446, Nov. 1997.

[3] S. Han, U.S. Kim, and P.H. Seong, “A methodology for benefit assessment of using in-core neutron detector signals in core protection calculator system (CPCS) for Korea standard nuclear power plants (KSNPP),” Ann. Nucl. Energy, vol. 26, no. 6, pp. 471-488, April 1999.

[4] M.G. Na, “On-line estimation of DNB protection limit via a fuzzy neural network,”

Nucl. Eng. Tech., vol. 30, no. 3, pp. 222-234, June 1998.

[5] M.G. Na, “DNB limit estimation using an adaptive fuzzy inference system,” IEEE Trans. Nucl. Sci., vol. 47, no. 6, pp. 1948-1953, Dec. 2000.

[6] W.K. In, D.H. Hwang, Y.J. Yoo, and S.Q. Zee, “Assessment of core protection and monitoring systems for an advanced reactor SMART,” Ann. Nucl. Energy, vol. 29, no. 5, pp. 609-621, March 2002.

[7] G.C. Lee and S.H. Chang, “Radial basis function networks applied to DNBR calculation in digital core protection systems,” Ann. Nucl. Energy, vol. 30, no. 15, pp. 1561-1572, Oct. 2003.

[8] M.G. Na, S.M. Lee, S.H. Shin, D.W. Jung, K.B. Lee, and Y.J. Lee, “Minimum DNBR monitoring using fuzzy neural networks,” Nucl. Eng. Des., vol. 234, no. 1-3, pp. 147-155, Dec. 2004.

[9] S.W. Lee, D.S. Kim, and M.G. Na, “Prediction of DNBR using fuzzy support vector regression and uncertainty analysis,” IEEE Trans. Nucl. Sci., vol. 57, no. 3, pp.

1595-1601, June 2010.

[10] United States Nuclear Regulatory Commission, CE Technology Cross Training R325C, http://pbadupws.nrc.gov/docs/ML1125/ML11251A006.html (2011).


[11] E. H. Mamdani and S. Assilian, “An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller,” Int. J. Man-Machine Studies, vol. 7, pp. 1-13, 1975.

[12] J.C. Duan and F.L. Chung, “Cascaded fuzzy neural network model based on syllogistic fuzzy reasoning,” IEEE Trans. Fuzzy Systems, vol. 9, no. 2, pp.293-306, Apr. 2001.

[13] T. Takagi and M. Sugeno, "Fuzzy identification of systems and its applications to modeling and control," IEEE Trans. Syst., Man, Cybern., vol. SMC-15, no. 1, pp. 116-132, 1985.

[14] D.Y. Kim, K.H. Yoo, J.H. Kim, M.G. Na, S. Hur, and C-H. Kim, "Prediction of leak flow rate using fuzzy neural networks in severe post-LOCA circumstances," IEEE Trans. Nucl. Sci., vol. 61, pp. 3644-3652, 2014.

[15] D.Y. Kim, K.H. Yoo, and M.G. Na, "Estimation of minimum DNBR using cascaded fuzzy neural networks," IEEE Trans. Nucl. Sci., accepted for publication, July 15, 2015.

[16] B.O. Cho, H.G. Joo, J.Y. Cho, and S.Q. Zee, "MASTER: reactor core design and analysis code," in Proc. 2002 Int. Conf. New Frontiers of Nuclear Technology: Reactor Physics (PHYSOR 2002), Seoul, Korea, Oct. 7-10, 2002.

[17] C.L. Wheeler, C.W. Stewart, R.J. Cena, D.S. Rowe, and A.M. Sutey, "COBRA-IV-I: An interim version of COBRA for thermal hydraulic analysis of rod bundle nuclear fuel elements and cores," BNWL-1962, March 1976.

[18] G.S. Auh, D.H. Hwang, S.H. Kim, “A steady-state margin comparison between analog and digital protection system,” Nucl. Eng. Tech., vol. 22, no. 1, pp. 45-57, March 1990.
