
Research Article


Correlates of the Unit Nonresponse in Survey Research:

Evidence from the KGSS 2012 Nonresponse Survey*

Sang-Wook Kim a)

In spite of the ever-increasing usage of and heavy reliance on traditional, face-to-face, in-depth interview survey data in the social sciences nowadays, scant attention has been paid in Korea to a few critically important issues of survey nonresponse:

how do unit nonresponses in general deviate systematically from valid or substantive responses; how do unit nonresponses vary by certain characteristics of interviewers and interviewees; and what are some of the homogeneities and heterogeneities associated with unit nonresponses of social survey data in Korea in a cross-cultural perspective? In an effort to address these issues, this study presents the results of analyses of the latest (2012) nonresponse survey data of the KGSS (Korean General Social Survey).

Results of the data analysis yield a few intriguing findings that offer an important clue as to the diagnosis and prescription of

* An earlier version of this paper was presented at the special session,

“Improving the Quality of Survey Research Data,” of the Korean Association for Survey Research annual conference, May 31, 2013, Seoul, Korea. The author would like to express sincere gratitude to three anonymous reviewers for their comments and suggestions to help improve this paper. This study was supported by the ‘National Research Foundation of Korea Grant’ funded by the Korean government (NRF-2011-322-B00011).

a) Professor, Department of Sociology, Sungkyunkwan University.

E-mail: swkim@skku.edu


potential systematic bias, as opposed to random error, underlying most large-scale national sample surveys in Korea.

Key words: unit nonresponse, quality data, random error vs. systematic bias, correlates, KGSS, nonresponse survey

An ever-increasing number of social scientists are relying on survey research data1) in exploring, describing, and explaining the social phenomena of their own interests.

This reliance on and heavy use of survey data is itself a reflection and manifestation of the relative advantage of the survey research method, as one of the methods of collecting primary data in the social sciences, over other methods (e.g., experimental design, historical/comparative method, content analysis, etc.). Most important, surveys tend to be efficient and effective in obtaining responses from a large populace in a relatively short period of time, thereby enhancing the generalizability of research findings, in spite of a few disadvantages or limitations associated with them (e.g., potential superficiality of the responses, constraints in identifying the true underlying causality, and the like).

Interestingly enough, however, this heavy reliance on survey data currently goes alongside constantly declining response

1) Among the major types of survey research ― political polls, marketing surveys, government policy surveys, and academic social surveys ― the one referred to primarily in this study is the academic social survey.


rates over time almost all over the world. According to Dillman and his colleagues (2002), to illustrate, nonresponse2) rates, along with nonresponse bias, in most ‘traditional’ household surveys have been increasing steadily over the last few decades in the U.S. and Western Europe. Although the reasons for this decline remain to be explored systematically, traditional surveys, which are normally based on face-to-face, in-depth interviews with household respondents visited by a trained group of interviewers, are truly becoming less and less feasible to implement with ease and limited resources (personnel, time, and funds) in modern times. The increasing attention to and emphasis lately upon the so-called

‘mixed modes,’ which try to make use of non-traditional methods (i.e., online, telephone, and mail) along with the traditional method, is indeed a reactive strategy to counteract the declining response rates and recover valid responses by combining different, supplementary survey modes. Not to mention the heated arguments and controversies surrounding the possible ‘mode effects’ underlying mixed-mode surveys, (Tourangeau and Smith 1996; de Leeuw and van der Zouwen 1998) methodologists and practitioners of social surveys nowadays do not hesitate to point out response-related issues as one of the top future challenges, as well as perennial concerns, faced by survey data. (Rudolph and Greenberg 1994; Smith 2002)

Interestingly enough once again, however, what has attracted the attention of the most serious, leading-edge survey methodologists worldwide in recent decades is not so much the valid

2) As will be made clear shortly, what is meant here is the unit nonresponse, not the item nonresponse.


response rates per se as the so-called ‘nonresponse bias’3) that stems from systematic differences or discrepancies between responses and nonresponses in terms of a few critical, non-negligible characteristics (e.g., gender, age, working status, income, educational attainment, etc.) of the initial samples. As a matter of fact, concern about and attention to nonresponse bias is not so old (Groves et al. 2002), dating back merely ten years or so even in the mainstream survey methodology research in the West. The reason and rationale underlying the argument against possible nonresponse bias tends to be rather simple and straightforward: in the absence of evidence demonstrating the approximate equivalence between responses and nonresponses, the valid responses, no matter how high their rates may be, are unlikely to properly represent the carefully selected initial samples and thus are likely to endanger the statistical inference from sample statistics to population parameters, which, in turn, jeopardizes the generalizability of research findings. This is precisely why Groves and his colleagues (2004: 59) resolutely declare that “response rates alone are not quality indicators.” To put it simply, insofar as no systematic differences exist between survey responses and nonresponses, no nonresponse bias exists, often irrespective of the response rates. Insofar as systematic differences do exist, however, nonresponse bias exists irrespective of the rates. (Groves and Couper 1998) In line with this argument, some survey methodologists (Groves et al. 2002, 2004) are

3) The notion of nonresponse ‘bias’ is often distinguished from that of nonresponse

‘error’ in that the former typically indicates systematic differences between responses and nonresponses, while the latter indicates non-systematic or randomly distributed differences between the two.
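The distinction above has a simple arithmetic counterpart. Under the standard deterministic view of nonresponse bias (as in Groves et al. 2004), the bias of the respondent mean is roughly the nonresponse rate times the respondent–nonrespondent difference, bias(ȳ_r) = (m/n)(ȳ_r − ȳ_m). A minimal sketch with entirely made-up figures:

```python
# Deterministic sketch of nonresponse bias in the respondent mean:
#   bias(ybar_r) = (m / n) * (ybar_r - ybar_m)
# n = initial sample size, m = number of nonrespondents,
# ybar_r / ybar_m = respondent / nonrespondent means.
# All numbers below are invented for illustration.

def nonresponse_bias(respondents, nonrespondents):
    n = len(respondents) + len(nonrespondents)
    ybar_r = sum(respondents) / len(respondents)
    ybar_m = sum(nonrespondents) / len(nonrespondents)
    return (len(nonrespondents) / n) * (ybar_r - ybar_m)

# Systematic difference: the (hypothetical) nonrespondents earn more.
resp = [3.0, 3.2, 2.8, 3.1]   # e.g., monthly income of respondents
nonresp = [5.0, 5.4]          # nonrespondents, systematically higher
print(round(nonresponse_bias(resp, nonresp), 3))  # -> -0.725
```

If respondents and nonrespondents had the same mean, the bias term would be zero regardless of the response rate, which is the arithmetic behind the claim that response rates alone are not quality indicators.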


adamant in arguing that surveys containing less nonresponse bias with lower response rates are even better than those containing more bias with higher rates.

To say that the focus on nonresponse bias is still in its infancy in survey methodology research even in the West is tantamount to saying that almost nothing is known about the matter in the East. Granted that survey research in general tends to be pretty much country-specific, due mostly to the different survey ‘climates’ or ‘environments’4) associated with different countries, and that no universal or unequivocal homogeneities and heterogeneities can be readily identified across different societies, (Johnson et al. 2002) it is hardly strange that no single systematic and comprehensive study has yet been reported in Korea demonstrating the extent to which survey research is plagued with nonresponse bias.5) To emphasize, an elaborate diagnosis and comprehension of the bias is not easy in survey research for two major reasons, one physical and the other methodological. The physical reason concerns the sheer absence or paucity of empirical data exhibiting the socio-demographic or socio-economic characteristics of the non-respondents. In order to identify such characteristics, the so-called ‘nonresponse survey,’ in which carefully designed

4) These are terms used to indicate the degree to which implementation of social surveys is easy, feasible, and amenable in a specific country. (Groves and Couper 1998)

5) The recent studies of Kim and Ahn (2010) and Han and Byun (2014) would probably be a few exceptions in this respect. Despite their pioneering efforts in Korea, however, they appear to be far from systematic and comprehensive, given their limited usage of the full-fledged nonresponse survey data that are analyzed in this study.


supplementary questionnaires are prepared a priori and administered subsequently to the non-respondents, asking for their gender, age, occupation, income, residence, and the like, should somehow be conducted afterwards for each unit nonresponse. Expectedly, this is far from an easy and simple task for almost all surveys all over the world, which explains why such data cannot but be scanty and why the response rate itself becomes the only visible part of a survey in most cases. The second, more methodological, reason relates to the sheer fact that a fair assessment of the existence and extent of nonresponse bias requires the survey to be fielded while stringently sticking to all kinds of rules, procedures, and protocols ― most important, no substitution of initial samples and the repeated visit attempts ― as designed and prescribed from the outset. Without strict adherence to such rules and protocols, it goes almost without saying that one cannot even begin to attempt such an evaluation.

This study is an attempt to identify the correlates6) of unit nonresponse in traditional household surveys of Korea. Obviously, this is made possible not merely because the nonresponse survey is carried out constantly and its datasets become readily available for each

6) Note that the notion of correlates, instead of predictors, is used throughout this study, since the set of characteristics pertaining to interviewers and target respondents that are set forth as explanatory variables for unit nonresponses tends to lack, in a lot of cases, detailed causal reasoning and rationale leading to such responses. With a logical and empirical, rather than a more substantive and causal, reasoning and rationale, the variables are better called correlates than predictors. Apparently, this delimitation of the notion is itself a reflection of a limitation of the extant research in this area: a rather practice-oriented interest in and orientation to unit nonresponses over the years has tended to shed relatively little light on a causal and theoretical approach to the matter.


year round’s KGSS (Korean General Social Survey), the survey framework on which the current study is predicated, but also because the KGSS maintains its reputation as truly one of the few exemplary surveys in Korea implemented by sticking to all sorts of methodological rules, procedures, and protocols as faithfully as possible. In doing so, this study endeavors to address the following research questions: to what extent do survey responses in Korea deviate from nonresponses; and what kinds of characteristics of the interviewers and target respondents, combined together, operate as correlates of unit nonresponse? The latest (2012) KGSS nonresponse survey is the source of the data analyzed to provide answers to these questions. The empirical answer sought in this study is oriented, of course, to the ultimate question concerning the assessment of the existence and extent of nonresponse bias, as opposed to random error, in traditional household surveys of Korea.7) The answer, once addressed appropriately, is expected to contribute much to the understanding, in the eyes of survey methodologists as well as practitioners, of the significance, impacts, and future remedies associated with the potential nonresponse bias in Korea, on the one hand, and further to an enhanced understanding of some of the homogeneities and heterogeneities associated with survey nonresponses in Korea as compared to those in Western societies, on the other.

7) Granted that unit nonresponses do vary considerably by different modes of survey (e.g., mail, web, phone, and face-to-face), (Tourangeau and Smith 1996) no argument is made, of course, in this paper as to the non-traditional surveys.


Correlates of Unit Nonresponses

Unit nonresponses refer to the total failure to obtain any measurement from a given case in the initial sample. (Groves et al. 2004) As such, they differ from item nonresponses, which indicate the partial failure to obtain valid and usable observation values for some part of a successfully completed case.8) (Little and Rubin 2002) Although the sharp line demarcating these two most typical sorts of nonresponses remains somewhat obscure and the classification depends on the researcher’s choice and interests in the long run, (Madow and Olkin 1983; Dillman et al. 2002) unit nonresponses, once classified as such, surely fail to be counted as valid cases and are excluded from the statistical analysis.

Non-contacts and refusals are the two major sub-types of unit nonresponse. (Groves et al. 2004) Although the literature also mentions a third type, ineligibility, which indicates the failure of initial samples to provide the requested data due to language problems, mental and/or physical disorders, literacy limitations, absence during the fielding period, and the like, this usually is not a major concern to survey researchers, since it ordinarily is beyond their reach and control and the cases are often deducted from the calculation of valid response rates. At any rate, non-contacts and refusals are of major concern, and, methodologically speaking, this stems from two reasons: the two are regarded as distinct nonresponses behaving in different ways under different factors

8) An extensive investigation of the correlates of item nonresponses in Korea is available in Kim (2009).
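The deduction of ineligible cases from the response-rate denominator can be made concrete with a small calculation; the disposition counts below are invented, and the formula is only one simple variant of the response-rate definitions used in practice (e.g., by AAPOR):

```python
# Valid response rate with ineligible cases removed from the denominator.
# All counts are made up for illustration.

def valid_response_rate(completes, refusals, non_contacts, ineligibles):
    eligible = completes + refusals + non_contacts  # ineligibles dropped
    return completes / eligible

rr = valid_response_rate(completes=1396, refusals=600,
                         non_contacts=400, ineligibles=104)
print(round(rr, 3))  # -> 0.583
```

Note that ineligibles shrink the denominator, so a survey with many ineligible cases reports a higher valid response rate for the same number of completed interviews.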


(Dillman et al. 2002); and the way they fall under the rubric of similar or different factors tends to be pretty specific to the socio-cultural environment of a society. (Groves et al. 2004)

From the perspective of the so-called ‘total survey error,’9) (Groves et al. 2004) unit nonresponses, as one of the most representative non-sampling errors, constitute a major source of total survey error. (Assael and Keon 1982) As indicated already, however, a precise identification of the existence and extent of such error or bias tends to be pretty difficult and cumbersome, not only because the values for nonresponses are seldom physically known, but also because such identification necessitates a high level of methodological stringency in the course of fielding.10) Methodologists and practitioners of survey research lately have become increasingly interested in unit nonresponses, since they are known to affect both descriptive statistics (e.g., means and percentages) and inferential statistics (e.g., regression coefficients), which, in turn, leads to the inflation of the variances of customary estimators. (Groves et al. 2004) Comparing the impacts of unit and item nonresponses, the former is ordinarily

9) Total survey error is alleged to be a ‘quality’ perspective that tries to focus on how well population parameters are represented by different components (i.e., sampling frame, samples, respondents, and post-survey adjustments) of survey statistics in successive phases of a survey (Groves et al. 2004) and is usually distinguished from the conventional ‘design’ perspective that simply focuses on each phase without due attention to major sources of measurement error.

10) The conventional detection and diagnosis of potential nonresponse bias by means of some kind of non-parametric (e.g., chi-square) test of the difference between the characteristics of the responses and their population is inherently not a rigorous test but only a crude check, one often attempted in the absence of separate nonresponse survey data.


known to be more harmful and far-reaching than the latter, since the harms and damages associated with it are not necessarily limited to the statistics produced by a few affected items. Although both pre-survey design and post-survey adjustments (i.e., weighting and imputation) are normally recommended and implemented in order to prevent and treat unit nonresponses, they are far from a complete remedy, due primarily to their allegedly little attention to questions of unobserved heterogeneity in the data, (Mathiowetz 1988) and a careful detection and diagnosis of possible nonresponse bias, as emphasized, requires a scrutinizing analysis of separate, after-the-fact nonresponse survey data.
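One of the post-survey adjustments mentioned above, weighting, can be sketched as a simple weighting-class adjustment: respondents in each class are weighted up by the inverse of the class response rate. The classes and counts below are invented, and real adjustments (e.g., propensity weighting) are considerably more involved:

```python
# Weighting-class nonresponse adjustment (sketch): within each class,
# respondent weights are multiplied by sampled / responded, so the
# respondents stand in for the whole class. Counts are invented.

def adjustment_factors(sampled, responded):
    return {c: sampled[c] / responded[c] for c in sampled}

sampled = {"urban": 800, "rural": 400}    # initial sample per class
responded = {"urban": 500, "rural": 350}  # completes per class
factors = adjustment_factors(sampled, responded)
print(factors["urban"], round(factors["rural"], 3))  # -> 1.6 1.143
```

Such an adjustment repairs the imbalance only along the classing variables; heterogeneity unobserved within classes, the point raised above, is left untouched.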

Perhaps the most important reason why unit nonresponses are lately regarded as so problematic concerns the relatively recent and consistent observation and argument that such nonresponses do not vary randomly but indeed vary systematically. (Dillman et al. 2002; Groves et al. 2002) To reiterate, certain target respondents always tend to become respondents, while others always tend to be non-respondents. Likewise, certain interviewers are more capable of and motivated to retrieve valid responses than others. In a similar vein, certain settings of a survey (e.g., survey administration techniques, incentive provision, sponsorship, saliency of topic, etc.) tend to produce more such nonresponses than others. To emphasize, as long as unit nonresponses vary at random, they might not be such a huge problem, since one can feel relieved to expect the so-called cancel-out effect, in which the incongruence, if any, between responses and nonresponses in terms of some of the most salient non-ignorable characteristics of initial samples offsets itself. As long as unit nonresponses are indeed


varying systematically, however, one cannot quite feel relieved to expect such an effect, since the incongruence is destined to be slanted somehow towards either over- or under-representation of the non-ignorable characteristics, thereby leading to the systematic misrepresentation of population parameters, or the bias of sample statistics, in the statistical inference. Apparently, most approaches to unit nonresponses to date have tended simply to assume that they vary at random, or even completely at random. (Dillman et al. 2002; Groves et al. 2002) This assumption, however, has lately turned out to be a myth and has now become something that has yet to be empirically tested on the basis of ‘comparable’ nonresponse survey datasets, particularly when it comes to non-negligible nonresponses.
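The cancel-out argument can be illustrated with a toy simulation (all numbers invented): when cases drop out at random, the respondent mean stays close to the full-sample mean, but when higher values drop out more often, the respondent mean is pulled systematically downward:

```python
import random

random.seed(42)
values = [random.gauss(50, 10) for _ in range(10000)]  # hypothetical trait
full_mean = sum(values) / len(values)

# Random nonresponse: every case responds with probability 0.7.
random_resp = [v for v in values if random.random() < 0.7]

# Systematic nonresponse: cases above the mean respond far less often.
systematic_resp = [v for v in values
                   if random.random() < (0.9 if v < 50 else 0.5)]

print(round(full_mean, 2),
      round(sum(random_resp) / len(random_resp), 2),
      round(sum(systematic_resp) / len(systematic_resp), 2))
```

The random-dropout mean differs from the full mean only by sampling noise (the cancel-out effect), while the systematic-dropout mean is biased no matter how large the sample grows.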

Extensive studies conducted so far, mostly in the West, that argue for the non-random nature of unit nonresponses have identified numerous broad correlates of such nonresponses. In general, they might be classified into three overarching groups: characteristics pertaining to interviewers, target respondents, and survey protocols, respectively.

Unlike the first two sets of characteristics, survey protocols, which relate to the methodological design features of a given survey (e.g., administration technique, incentive provision, questionnaire design, fielding mode, etc.), are not a variable but a constant factor of a specific survey and cannot quite work as correlates within a single survey framework like the KGSS, the one analyzed in this study.

Interviewer characteristics refer literally to the socio-demographic characteristics of interviewers (e.g., gender, age, etc.). Included among interviewer characteristics, however, are not merely such attributes but also their capability and motivation factors. In other words,


not only are variances in responses and nonresponses produced by interviewers’ gender and age, for instance, but they are also produced by interviewers’ capability and motivation to obtain valid and substantive responses. (Groves and Couper 1998) Although socio-demographic characteristics such as gender and age tend to be too sample-specific to generalize to broader social, or cross-national, contexts, the survey methods literature documented in the West pretty consistently demonstrates that interviewers with much experience, professional competence, social skills, and achievement motivation indeed retrieve more valid responses than their counterparts. (Snijkers et al. 1999) In this vein, three characteristics of interviewers ― gender, number of surveys completed, and number of visit attempts ― are introduced as correlates of unit nonresponses in this study, and each requires some detailed elaboration, preferably in the context of the KGSS framework and protocols.

Interviewer’s gender, to begin with, is important, since more than 10 years of countrywide KGSS implementation suggest that female interviewers tend to provoke a less intimidating feeling and atmosphere on the part of target respondents in the course of initial and subsequent contacts, thereby generating fewer non-contacts and refusals compared to their male counterparts. Note that, apart from gender, other socio-demographic characteristics of interviewers, notably their age, educational attainment, and labor market status, are not introduced in this study, since the KGSS fielding network, which consists of hundreds of college students from approximately 25~30 universities all around the country, tends to be too homogeneous in these respects for them to work as correlates.

The remaining two variables of number of surveys completed and


number of visit attempts are introduced into the estimation equation in order to represent the interviewer’s capability and motivation factors, respectively. To be more precise, each KGSS interviewer in each year’s round is normally assigned 12~13 households, from which one household member each is finally to be selected, in a specific block out of a total of 200 sampled blocks throughout the country.

The interviewer is then encouraged to complete as many of the assigned interviews as possible in a given period (usually from the 4th week of June to the end of August) while adhering strictly to two of the most prominent rules: ‘no substitution (of other households or other household members) at all’ and the ‘multiple (up to 10) repeated visit protocol.’ The repeated visit protocol, in particular, contains a few specific guidelines, including, most important, a detailed specification of the notion of a visit (i.e., in-person, independent visits to the residence of the target respondent made on different days, excluding other means of contact, such as telephone, email, and the like) and the final classification of the case as either response or nonresponse by the staff in the headquarters.
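The disposition logic implied by such a protocol might be sketched as follows, purely as an illustration; the function and labels here are hypothetical and are not the headquarters' actual classification procedure:

```python
# Hypothetical sketch of the final case disposition implied by the protocol:
# up to 10 in-person visits on different days, no substitution, and a
# final response/nonresponse label assigned centrally.

MAX_VISITS = 10

def classify_case(completed: bool, refused: bool, visits: int) -> str:
    if completed:
        return "response"
    if refused:
        return "refusal"       # one sub-type of unit nonresponse
    if visits >= MAX_VISITS:
        return "non-contact"   # the other sub-type of unit nonresponse
    return "pending"           # further visit attempts still allowed

print(classify_case(completed=False, refused=False, visits=10))  # -> non-contact
```

The point of the sketch is that a case becomes a non-contact only after the visit budget is exhausted, so non-contacts and refusals are mutually exclusive final labels.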

Expectedly and naturally, some interviewers can somehow complete more interviews than others, and some must somehow make more visit attempts than others in the course of fielding. In this context, the number of surveys completed is likely to represent the interviewer’s capacity or competence to retrieve valid responses. Unlike the number of surveys completed, however, multiple visit attempts appear to be less unequivocal and tend to be subject to two different sorts of interpretation: the heightened motivation of the interviewer, on the one hand, and continued inaccessibility to and constant refusal by target respondents, on the


other. The heightened motivation will, of course, lead to more responses, whereas the inaccessibility and refusal will lead to more nonresponses. Although it remains to be seen how the number of visit attempts relates to responses and nonresponses, net of all other factors in the equation, multiple visits, ceteris paribus, might probably be interpreted as representing a crucial impediment to access and cooperation that is likely to curb valid responses, assuming that the probabilities associated with the interviewers’ competence and motivation tend to be rather randomly distributed across different interviewers in different households and sampled blocks.11)

Characteristics pertaining to target respondents, another broad set of correlates of unit nonresponses, are more numerous and diverse than those of interviewers, and they tend to include socio-demographic, socio-economic, socio-psychological, and even cognitive components.

Although none of these is trivial or unimportant, it is pretty difficult to retrieve all this information in the nonresponse survey, simply because the information interviewers can best try to obtain or follow up after the target case has finally been classified as a nonresponse ordinarily concerns only some of the most visible

11) The reviewers for this paper suggested dropping the two variables (number of surveys completed and number of visit attempts) from the estimation equation, since they tend to be endogenous to unit nonresponses. While not disagreeing with this indication, this study still retains them for two reasons:

(1) the survey methods literature quite consistently emphasizes the interviewer’s capacity and motivation factors as crucial correlates of unit nonresponses; (2) the empirical relationships they exhibit with both types of nonresponses in this study turned out to be moderate at best, with bivariate correlation coefficients varying from -.239 to .376 (see Table 3).


socio-demographic or socio-economic characteristics. The most recalcitrant characteristics in this respect would probably be the case’s educational attainment and labor market earnings. Ideally speaking, these two characteristics are enormously important, since people with higher education and earnings are reported quite consistently to be very inaccessible and uncooperative toward survey requests. (Groves et al. 2002) In reality, however, it is very difficult or even impossible to keep track of this information in the nonresponse survey, since the interviewer must somehow keep in contact with the target respondent afterwards and ask those confidential or intrusive questions in person. The nonresponse survey in this study was no exception, and it could identify only a few salient characteristics of target respondents: their gender, age, residential area, type of dwelling unit, economic standing of the household, and economic standing of the sampled block. Obviously, in the absence of more direct measures, the socio-economic characteristics among them, in particular, tend to be proxy or surrogate measures of the true underlying traits.

Before elaborating on these characteristics of target respondents in detail, an argument or claim running through them needs to be introduced and explicitly accounted for. The so-called opportunity cost hypothesis (Groves et al. 2002) suggests that the ‘socio-economically active population’ (i.e., males, the younger, urbanites, the educated, the employed, professional or white-collar workers, high earners, etc.) is less readily accessed and less likely to respond, compared to its inactive counterpart, due primarily to the opportunity costs (i.e., time, expenses, commitment, etc.) incurred by acceding to the survey request. To borrow Groves and Couper’s (1998) expression, they are people possessed


of the two meta-components of cognitive comprehension and normative motivation. In other words, once this active population is somehow accessed and determined to concede to the request, they tend to understand the questions pretty well, no matter how difficult and complicated these might be, and they also feel normatively obliged and motivated to provide full and complete answers for each question, which eventually contributes to fewer item nonresponses. (Kim 2009) On the contrary, however, the same cognitive and normative components that curb item nonresponses for this population are known to operate as factors promoting unit nonresponses. (Groves et al. 2002, 2004) To put it simply, the very fact that they are able to provide full and complete answers calls for all sorts of socio-psychological commitments on their side and indeed works as a critical impediment to unit responses. These commitments, in effect, are alleged to translate into opportunity costs in time, expenses, and the like that are incurred in acceding to survey requests, thereby leading to higher rates of non-contacts and refusals. In short, the respondent characteristics set forth as correlates of unit nonresponses in this study are predicated, to a large extent, upon the opportunity cost hypothesis.

To elaborate now on the reason and rationale underlying each respondent characteristic one by one: the target respondent’s gender is important, since males in general are more likely to be socio-economically active and tend to be less open and cooperative toward survey requests than their female counterparts, thereby leading to higher non-contacts and refusals. (Smith 1983; Hox and de Leeuw 2002; Merkle and Edelman 2002) Similar to the gender effect, younger people tend to be engaged in more diverse activities (job, education, entertainment, etc.) with higher frequency and intensity than older


people, which also results in higher non-contacts and refusals. (Hox and de Leeuw 2002; Lynn et al. 2002) The respondent’s residential area matters, too, since people living in metropolitan areas are known to be less accessible than those in non-metropolitan or rural areas, and, even after a successful access, if any, they are more likely to decline survey requests. (Groves and Couper 1998) Admittedly, the area effect could probably be an outcome of a few interrelated socio-demographic, socio-economic, and socio-psychological characteristics associated with urbanites: they are more likely to be male, younger, employed, educated, high earners, and less open-minded, all of which, in turn, contribute to higher non-contacts and refusals.

Type of dwelling unit indicates whether the target respondent resides in an apartment complex or not. The variable, as such, is pretty much unique and sample-specific to the survey setting in Korea.

Apartment complexes in Korea, often regardless of ownership status, tend to be notable for their socio-economic affluence and convenience compared to other dwelling types. Unlike the situation in most Western societies, apartment complexes usually cost more, and those who live there are more likely to belong to the socio-economically active population with higher incomes, education, and better jobs.12) This suggests that apartment complexes, often staffed with security guards and/or ordinarily equipped with door-lock systems or other impediments to access at the central entrance of the building, are unlikely to be easily accessed, and, even after a successful access, residents there are likely to be reluctant to participate in the survey.

12) Perhaps apartment complexes in Korea resemble condominiums in the West and might more aptly be called by that name.


Given that some variation in the socio-economic status of the dozens of sampled households could indeed exist even within a specific residential area (say, metropolitan) or dwelling type (say, apartment complex), however, the nonresponse survey in this study had the interviewer subjectively evaluate the economic standing of each household for each non-contacted and refused case. Depending on the interviewer’s own assessment, the economic standing of the household was classified into three categories (low, middle, high).

The expectation here, of course, was that households rated higher in economic standing would generate more non-contacts and refusals.

Economic standing of the sampled block, the final respondent characteristic on the list, refers to the standard real-estate prices assessed and reported annually by the relevant government ministry in Korea. The variable is important since, even after controls for residential area, dwelling type, and economic standing of the household, the possibility remains that unobserved heterogeneity across blocks affects the degree to which sampled units are readily contacted and cooperate. The variable, as such, is distinguishable from the three variables above in that it taps the objective economic status of each sampled block, which is not readily captured by residential area (metropolitan or not), dwelling type (apartment complex or not), or the interviewer's subjective assessment of a given household's economic standing. Similar to those variables, however, the expectation was that the higher the economic standing of the sampled block, the higher the non-contacts and refusals are likely to be.


Methods

Data

The latest (2012) nonresponse survey of the KGSS, as indicated already, is the source of data used to evaluate the relationships postulated between each correlate and responses/nonresponses. The KGSS, one of the most prominent national sample surveys in Korea, implemented every year since 2003, has as its target population all Korean adults aged 18 or over who live in households in Korea. A representative sample is drawn from this population by means of multi-stage area probability sampling procedures. Structured face-to-face, in-depth interviews, administered by a trained group of student interviewers recruited from approximately 25~30 universities around the country, are then carried out for the selected sample.13)

The nonresponse survey, as noted, is a supplementary survey conducted only for the cases or units finally declared a total failure to obtain valid responses, due either to non-contact or to constant refusal even after a series of repeated visits and persuasion attempts. Information about such nonresponses, especially the characteristics of target respondents, is gathered subsequently by means of call-backs by the original interviewer or independent research staff.

13) Further details on the KGSS, plus the internationally coordinated module surveys of the ISSP and EASS in Korea, are available in Lulu et al. (2012) and Kim et al. (2013).


Classification of unit responses and nonresponses from the 2012 KGSS is somewhat complicated and requires some detailed mapping. Out of 2,500 initial samples, 24 cases were eventually classified as ineligible (e.g., mental or physical disorder, literacy limitation, absence during the fielding period, etc.). From the remaining 2,476 cases, valid responses were obtained for 1,396 cases, a valid response rate of 56.4 percent. Nonresponse surveys were then conducted for the remaining 1,080 (2,476−1,396) cases. Among those cases, however, successful nonresponse surveys were completed for 640 cases (a success rate of 59.3%), with the remaining 440 cases classified as failures of the nonresponse survey as well.14) Among these 640 cases, 378 (59.1%) were finally classified as non-contacts, and the remaining 262 (40.9%) as refusals. In terms of the characteristics of the non-respondents of the two sub-types,15) gender and age could be identified only for refusals (there is no way to identify this information for non-contacts), with the remaining four characteristics (residential area, type of dwelling unit, economic standing of the

14) Admittedly, this response rate (59.3%) for the nonresponse survey could be a problem in its own right, since exactly the same question about the random or systematic nature of responses and nonresponses could quite legitimately be raised about the nonresponse survey itself, not to mention the main survey. Suffice it to mention, however, that despite constant monitoring and encouragement to complete the nonresponse survey for every nonresponse as far as possible, some interviewers found it difficult or even impossible to keep track of the required information for some of the nonresponses.

15) Note that all characteristics of the interviewers illustrated above could, of course, be obtained for all cases, irrespective of unit responses or nonresponses, on the one hand, and of non-contacts or refusals, on the other.


household, and economic standing of the sampled block) being identified for both types.
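The case-disposition arithmetic above can be reproduced in a few lines. This is a minimal sketch using only the counts reported in the text; the variable names are illustrative, not from the KGSS codebook.

```python
# Case dispositions reported for the 2012 KGSS (counts from the text;
# variable names are illustrative).
initial_sample = 2500
ineligible = 24                        # disorder, literacy limits, absence, etc.
eligible = initial_sample - ineligible           # 2,476

valid_responses = 1396
nonresponses = eligible - valid_responses        # 1,080
valid_response_rate = valid_responses / eligible

nr_surveys_completed = 640             # successful nonresponse surveys
nr_survey_rate = nr_surveys_completed / nonresponses

non_contacts, refusals = 378, 262      # 378 + 262 = 640
print(f"valid response rate:     {valid_response_rate:.1%}")                   # 56.4%
print(f"nonresponse-survey rate: {nr_survey_rate:.1%}")                        # 59.3%
print(f"non-contact share:       {non_contacts / nr_surveys_completed:.1%}")   # 59.1%
print(f"refusal share:           {refusals / nr_surveys_completed:.1%}")       # 40.9%
```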

Measurement

<Table 1> contains descriptive statistics for the variables and cases used in the analysis. Unit nonresponse, the outcome variable, is binary, taking only the two values 0 (response) and 1 (nonresponse); in addition to overall nonresponse, it is also broken down into the two sub-types of non-contacts and refusals. Since the measurement of the interviewer and respondent characteristics in the table has already been introduced above and most of them are largely self-explanatory, no further account is deemed necessary here. Suffice it to note, however, that the two variables number of visit attempts and economic standing of the sampled block are each log-transformed to accommodate their skewed distributions in the sample.
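The natural-log transform mentioned here is a standard remedy for right-skewed count and price variables. A small check (the microdata themselves are not reproduced) shows how the LN bounds reported in <Table 1> follow from the raw ranges.

```python
import math

# Visit attempts range from 1 to 22, so the transformed range runs from
# ln(1) = 0 to ln(22) ≈ 3.09, matching the LN bounds in <Table 1>.
ln_min = math.log(1)
ln_max = math.log(22)
print(round(ln_min, 2), round(ln_max, 2))                   # 0.0 3.09

# The same holds for economic standing of the sampled block
# (raw range .14~1,080 → LN range ≈ -1.97~6.98).
print(round(math.log(0.14), 2), round(math.log(1080), 2))   # -1.97 6.98
```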

Analysis

The empirical relationships between the correlates and unit nonresponses postulated in this study are analyzed by t-tests and binomial logistic regression analyses. Specifically, a series of t-tests for mean differences between responses and nonresponses (overall, non-contacts, and refusals, respectively) is conducted for each correlate of interviewer and respondent characteristics (see <Table 2>).
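A mean-difference test of this kind can be sketched as follows. The data below are synthetic stand-ins (the KGSS microdata are not reproduced here), and the unequal-variance (Welch) form of the t statistic is assumed for the sketch.

```python
import numpy as np

def welch_t(a, b):
    """Two-sample t statistic allowing unequal variances (Welch's form)."""
    na, nb = len(a), len(b)
    se = np.sqrt(a.var(ddof=1) / na + b.var(ddof=1) / nb)
    return (a.mean() - b.mean()) / se

# Synthetic visit counts standing in for the real data: responses averaged
# 4.12 visits and refusals 7.23 in <Table 2>.
rng = np.random.default_rng(42)
visits_responses = rng.poisson(3.1, size=300) + 1.0
visits_refusals = rng.poisson(6.2, size=100) + 1.0

t = welch_t(visits_responses, visits_refusals)
print(f"t = {t:.2f}")   # strongly negative: refusals need far more visits
```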

This bivariate analysis is expected to exhibit whether and to what degree mean values for responses deviate significantly from those for nonresponses (and for each of the two sub-types) for each suggested correlate. The bivariate analysis is then reinforced by the multivariate analysis of binomial logistic regressions for each sub-type of nonresponse (see <Table 3>). This multivariate analysis estimates the impact of each correlate upon non-contacts and refusals, respectively, after controlling for the other correlates in the estimation equation. Since some of the correlates in the equation are unlikely to be orthogonal in estimating their impacts on each sub-type, multicollinearity has been tested by the eigenvalue decomposition method (Gunst 1983). The result indicated that the smallest eigenvalue exceeded .05, a conventionally accepted criterion for determining the symptom, thereby suggesting no severe multicollinearity in the estimation.
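The eigenvalue screen can be sketched as below; the function name and the toy predictors are ours, with the .05 cutoff taken from the text.

```python
import numpy as np

def smallest_eigenvalue(X):
    """Smallest eigenvalue of the predictor correlation matrix; values
    near zero signal severe multicollinearity (cf. Gunst 1983)."""
    R = np.corrcoef(X, rowvar=False)
    return float(np.linalg.eigvalsh(R).min())

# Toy predictors: x2 nearly duplicates x1, so collinearity should be flagged.
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.01, size=500)   # almost collinear with x1
x3 = rng.normal(size=500)

lam = smallest_eigenvalue(np.column_stack([x1, x2, x3]))
print(lam > 0.05)   # False: the near-duplicate predictor is flagged
```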

Results

Prior to addressing the results of the bivariate and multivariate analyses in detail, some preliminary attention needs to be paid to the univariate results for the variables in the equation. Of particular interest is the outcome variable of unit nonresponse. Descriptive statistics in <Table 1> illustrate that the overall nonresponse rate is 43.62% (1,080/2,476), with the rates for non-contacts and refusals being 21.31% (378/1,774) and 15.80% (262/1,658), respectively. Interestingly enough, unit nonresponses in Korea are produced more by non-contacts than by refusals. This observation appears unique to Korea and warrants special attention, since it differs from findings in the West, where traditional surveys are troubled more by refusals than by non-contacts (Dillman et al. 2002; Groves et al. 2004).


<Table 1> Descriptive Statistics for the Variables

Variables                                     Valid N     Mean      Min~Max      Std. Dev.   Skewness
Unit Nonresponses
  Overall a)                                   2,476      .4362     0~1           .4960        .258
  Type 1: Non-Contacts b)                      1,774      .2131     0~1           .4096       1.403
  Type 2: Refusals c)                          1,658      .1580     0~1           .3649       1.877
Interviewer Characteristics
  Gender d)                                    2,476      .6256     0~1           .4841       -.519
  No. of Surveys Completed                     2,476     7.9600     0~32         5.2740       2.428
  No. of Visit Attempts                        2,476     5.3372     1~22         3.3804        .622
  LN [No. of Visit Attempts]                   2,476     1.4281     0~3.09        .7593       -.494
Respondent Characteristics
  Gender d)                                    1,658      .5576     0~1           .4969       -.228
  Age e)                                       1,658     3.6586     1~7          1.7845        .112
  Residential Area f)                          2,476      .4600     0~1           .4985        .161
  Type of Dwelling Unit g)                     2,036      .3527     0~1           .4779        .617
  Economic Standing of the Household h)        2,036     1.7063     1~3           .6674        .417
  Economic Standing of the Sampled Block i)    2,476   118.8582     .14~1,080   143.3211      2.890
  LN [Economic Standing of Sampled Block]      2,476     3.9614     -1.97~6.98    1.5997     -1.042

a) 0 = Responses (1,396); 1 = Nonresponses (1,080).

b) 0 = Responses (1,396); 1 = Non-Contacts (378).

c) 0 = Responses (1,396); 1 = Refusals (262).

d) 0 = Male; 1 = Female.

e) 1 = 20’s (18-29) (15.1%); 2 = 30’s (14.1%); 3 = 40’s (19.2%); 4 = 50’s (17.5%); 5 = 60’s (15.4%); 6 = 70’s (13.0%); 7 = 80’s or above (5.7%).

f) 0 = Non-Metropolitan; 1 = Metropolitan.

g) 0 = Others; 1 = Apartment Complexes.

h) 1 = Low (41.3%); 2 = Middle (46.9%); 3 = High (11.9%).

i) Standard prices of real estate as assessed and reported by the Ministry of Land, Transport, and Maritime Affairs (2012) in Korea. The unit is 10,000 won, Korean currency, per square meter, with 10,000 won in 2012 equivalent to approximately 8.68 U.S. dollars.


A breakdown of all cases in terms of interviewer characteristics indicates that 62.56% of the interviewers are female; the average interviewer completed 7.96 cases out of the 12~13 assigned samples and made 5.34 visits during the fielding period (<Table 1>).

The same breakdown in terms of respondent characteristics illustrates that 55.76% of the respondents are female; they are in their 20s (15.1%), 30s (14.1%), 40s (19.2%), 50s (17.5%), 60s (15.4%), 70s (13.0%), and 80s or above (5.7%); about half (46.0%) live in metropolitan areas; slightly more than one-third (35.27%) reside in apartment complexes; the economic standing of the household was assessed by the interviewers as low (41.3%), middle (46.9%), or high (11.9%); and the economic standing of the sampled block averaged approximately 1,189 thousand won per square meter (<Table 1>). Taken together, the characteristics of the target respondents appear typical of the most representative national sample surveys in Korea, as well as of the total Korean population.

The univariate analysis above now needs to be reinforced by the bivariate analysis of t-tests exhibited in <Table 2>. To focus on interviewer characteristics first: those interviewers who completed more surveys did indeed end up with fewer non-contacts and refusals; successful responses required an average of 4.12 visits, whereas non-contacts and refusals required 3.37 and 7.23 visits, respectively. Not surprisingly, refusals entailed almost twice as many visits as successful responses, on the one hand, and more than twice as many as non-contacts, on the other. Turning next to respondent characteristics, significant mean differences are observed for four correlates (residential area, type of dwelling unit, economic standing of the household, and economic standing of the sampled block).

<Table 2> Mean Values of Interviewer and Respondent Characteristics: Comparison between Responses and Nonresponses a)

Variables                               Responses         Overall            t           Type 1:          t           Type 2:         t
                                                          Nonresponses      (prob.)     Non-Contacts     (prob.)     Refusals        (prob.)
Interviewer Characteristics
  Female                                .6146 (1,396)     .6398 (1,080)      1.287       .6349 (378)       .721       .6718 (262)      1.79
  No. of Surveys Completed              8.82 (1,396)      6.84 (1,080)       9.434***    5.88 (378)       13.951***   6.81 (262)       5.885***
  No. of Visit Attempts                 4.1218 (1,396)    6.9083 (1,080)    21.832***    3.3730 (378)     17.422***   7.2328 (262)    14.131***
Respondent Characteristics
  Female                                .5580 (1,396)     .5496 (262)         .251       -                -           .5496 (262)       .251
  Age                                   3.6511 (1,396)    3.6985 (262)        .394       -                -           3.6985 (262)      .394
  Metropolitan Area                     .4305 (1,396)     .4981 (1,080)      3.351***    .5529 (378)      4.257***    .5458 (262)      3.453***
  Apartment Complexes                   .3138 (1,396)     .4375 (640)        5.328***    .4339 (378)      4.231***    .4427 (262)      3.890***
  Economic Standing of the Household    1.6418 (1,396)    1.8469 (640)       7.441***    1.8492 (378)     6.572***    1.8435 (262)     5.432***
  Economic Standing of the Sampled Block 94.3258 (1,396)  150.5684 (1,080)   9.398***    149.3117 (378)   7.087***    165.7518 (262)   5.880***

a) See <Table 1> for detailed measures. Entries are means, with valid N in parentheses.
* p < .05, two-tailed test. ** p < .01, two-tailed test. *** p < .001, two-tailed test.

Specifically, interview attempts were less likely to succeed in the metropolitan area, and people in this area are indeed less


accessible and refuse more often than their non-metropolitan counterparts. Likewise, people residing in apartment complexes turn out to generate more overall nonresponses and, most important, are less likely to be accessed and more likely to refuse survey requests. Similar to the pattern observed for residential area and type of dwelling unit, the remaining two variables (economic standing of the household and economic standing of the sampled block) show that those with a higher economic standing in more affluent blocks do indeed have higher nonresponse rates, with substantially higher non-contacts and refusals.

The bivariate t-tests above, albeit meaningful, are still likely to be spurious, since they represent not so much the unique net effect of a correlate on unit nonresponse as the total association between the two without controlling for the other correlates in the equation. To uncover non-spurious relationships, a multivariate analysis is required, and the results of the binomial logistic regression analyses, a major interest of this study, are presented in <Table 3>. Although the analysis was done separately for the two sub-types of unit nonresponse, the two might as well be described together here, since the results associated with each type are quite similar. When non-contacts and refusals are each regressed on the suggested correlates altogether, the significant correlates turn out to be number of surveys completed, number of visit attempts, respondent's age, economic standing of the household, and economic standing of the sampled block. To reiterate, after controls for all other characteristics, it turns out that those interviewers who completed more surveys indeed face fewer non-contacts and refusals; non-contacts and


refusals do indeed necessitate a substantially higher number of visit attempts; the older the target respondents, the higher the refusals; compared to households whose economic standing is high or low, those with a middle level of economic standing generate more non-contacts and refusals; and the higher the economic standing of the sampled block, the higher the refusals, if not the non-contacts.

Although none of these findings from the logistic regression analysis is unexpected or surprising, it is quite interesting to notice the curvilinear relationship, or quadratic function, between economic standing of the household and both types of nonresponses. As shown in <Table 3>, those located in the middle layer of economic standing have as much as a 5.0 times higher probability of non-contact than those in the low layer, and an even higher one, 6.8 times, when compared to those in the high layer. In a similar vein, the probability of refusal by middle-layer people is as much as 4.8 times higher than that of low-layer people, and 4.3 times higher than that of high-layer people. Taken together, the interviewer and respondent characteristics specified in the two estimation equations account for a higher proportion of variance in non-contacts (.275/.426) than in refusals (.196/.336; <Table 3>).16) Substantial interpretation and discussion of these findings will follow immediately.

16) It is interesting to note that, with fewer correlates from an almost identical list specified for non-contacts (8 correlates) than for refusals (10 correlates), the former nonetheless yields a higher proportion of explained variance than the latter.


<Table 3> Binomial Logistic Regression Estimates for the Types of Unit Nonresponses a)

                                            DV = Non-Contacts                DV = Refusals
Variables                                   r b)      b c)       Exp(b)      r b)      b c)       Exp(b)
Interviewer Characteristics
  Female                                     .017      -.139      .871        .043*      .057     1.059
  No. of Surveys Completed                  -.239***   -.248***   .781       -.143***   -.111***   .895
  LN [No. of Visit Attempts]                 .376***   1.498***  4.474        .323***   1.444***  4.240
Respondent Characteristics
  Female                                    -         -          -           -.006      -.100      .905
  Age                                       -         -          -            .010       .275***  1.316
  Metropolitan Area                          .101***    .208     1.231        .085***    .025     1.025
  Apartment Complexes                        .104***   -.092      .912        .100***    .041     1.042
  Economic Standing of the Household:
    Middle d)                                .327***   1.613***  5.015        .272***   1.572***  4.818
  Economic Standing of the Household:
    High d)                                 -.116***   -.300      .741       -.088***    .101     1.106
  LN [Economic Standing of
    the Sampled Block]                       .200***    .081     1.085        .168***    .222**   1.249
Constant                                              -3.091***   .045                 -5.806***   .003
Cox & Snell / Nagelkerke R²                  .275 / .426                      .196 / .336
-2 Log Likelihood                            1,268.126                        1,086.145
χ² (d.f.) (p)                                569.759 (8) (p < .001)           360.871 (10) (p < .001)

a) See <Table 1> for detailed measures.
b) Pearson product-moment correlation coefficients.
c) Unstandardized logit coefficients.
d) Omitted category is Low standing.
* p < .05, two-tailed test. ** p < .01, two-tailed test. *** p < .001, two-tailed test.
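The multiplicative comparisons quoted in the text follow directly from exponentiating the Table 3 logit coefficients for economic standing of the household (reference category: Low); a quick check:

```python
import math

# Logit coefficients from <Table 3> (reference category: Low).
b_middle_nc, b_high_nc = 1.613, -0.300   # non-contacts equation
b_middle_rf, b_high_rf = 1.572, 0.101    # refusals equation

# Middle vs. Low is exp(b_middle); Middle vs. High exponentiates the
# difference between the two coefficients.
print(round(math.exp(b_middle_nc), 1))               # 5.0
print(round(math.exp(b_middle_nc - b_high_nc), 1))   # 6.8
print(round(math.exp(b_middle_rf), 1))               # 4.8
print(round(math.exp(b_middle_rf - b_high_rf), 2))   # 4.35 (reported as 4.3)
```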


Discussion and Conclusions

This study was prompted by a relatively simple and straightforward question concerning whether and to what extent survey responses deviate from nonresponses in terms of some of the most prominent and non-ignorable characteristics of interviewers and target respondents in traditional face-to-face household interview surveys. The idea underlying this research question, however, was not so simple and straightforward: in spite of ever-increasing reliance on and heavy use of survey data in recent decades, such data are constantly plagued by two perennial problems: the steady decline in valid response rates over time almost everywhere in the world, on the one hand, and the potential nonresponse bias stemming from systematic incongruence between survey responses and nonresponses, on the other. Declining response rates are problematic not only because the decline casts serious doubt on the representativeness of carefully selected samples, but also because it, in effect, wastes all kinds of invaluable resources (funds, personnel, time, commitment, etc.) devoted to collecting primary data in the social sciences. Apart from valid response rates per se, nonresponse bias, if any, is even more problematic and acute, since the bias, no matter how high response rates may be, critically threatens sample representativeness, endangers statistical inference from sample statistics to population parameters, and eventually jeopardizes the generalizability of research findings.

With all these impending concerns about declining response rates


and potential nonresponse bias, however, methodologists and practitioners of survey research used to simply assume the random nature of the difference between responses and nonresponses, and only in recent decades did survey methodologists, even in the West, begin to pay serious attention to the issue of nonresponse bias in particular and to carefully diagnose whether the difference qualifies as random error or systematic bias. To say that the issue is still in its infancy even in mainstream methodology research in the West is also to point out that no single comprehensive study has yet been conducted in Korea, either, and almost nothing is known or reported about the problem there. To emphasize, a scrutinized diagnosis and evaluation of the problem necessitates two prerequisites: a separate follow-up nonresponse survey and methodological stringency in the course of fielding. Expectedly, neither condition, whether physical or methodological, is a simple or easy task to fulfill, and this explains why virtually no data, or only scanty and piecemeal data, were available for a long time, and why due attention could not be paid to the issue in Korea, as well as in the West.

Equipped with the nonresponse survey data of the latest (2012) KGSS, an annually conducted national sample survey that maintains a notable reputation as a 'quality survey' in Korea, this study tried to evaluate whether and to what extent unit responses deviate from unit nonresponses in terms of a few critical characteristics of interviewers and target respondents, with a view to providing an eventual clue for diagnosing whether the extent really qualifies as random error or systematic bias. The results of the bivariate and multivariate analyses performed above provided a few interesting


findings that could operate as an important clue, and they need to be discussed now in further detail.

Perhaps the most intriguing finding in this study concerns the result that unit responses do indeed differ from nonresponses in terms of some of the most prominent characteristics of interviewers and respondents. This finding has, in fact, been observed quite consistently across the univariate, bivariate, and multivariate analyses.

Given that no survey can be completely immune to such discrepancies, and that almost all surveys, as long as misuses and abuses do not occur in the course of sampling (e.g., quota sampling) and fielding (e.g., substitution, cheating, etc.), are likely to be slanted towards an underrepresentation of the socio-economically active population, what matters is the degree, rather than the existence, of such discrepancies. In this sense, the diagnosis of random error versus systematic bias is better regarded as a matter of degree rather than an either-or question, since all surveys are always prone to some discrepancy to a lesser or greater degree. To reiterate, consistent with the prediction of the opportunity cost hypothesis (Groves et al. 2002), the findings in this study demonstrate that the active population (urbanites residing in affluent facilities and places of metropolitan areas) are indeed less accessible, even after controls for several crucial characteristics of respondents and interviewers, and, even when access is barely managed, they refuse more often to participate in the survey than their inactive counterparts. Aside from the characteristics of target respondents, the interviewer characteristics of capability and motivation also operate, ending with substantially higher valid responses for more capable and motivated interviewers. As such, these are


evidence demonstrating the soundness of the latest claims of survey methodologists in the West (Dillman et al. 2002; Groves et al. 2002, 2004) that survey nonresponse is non-random in nature.

Taken together, the findings in this study, it should be emphasized, appear to suggest 'some amount' of systematic bias, as opposed to random error. This way of characterizing the problem, even at the risk of devaluing the quality data of the KGSS, has some underlying reasoning and rationale. As indicated already, this study was based on a physically 'hard-to-do'

nonresponse survey carried out afterwards to supplement the main KGSS, a survey renowned in Korea for its methodological rigor and stringency. Even with such physical scarcity and methodological sophistication, however, certain prominent characteristics of interviewers and target respondents turned out to operate in favor of substantive responses, or against nonresponses. Although it certainly is not easy to say in a word whether this qualifies as random error or systematic bias, this amount of difference can hardly be called negligible and might more aptly be termed systematic bias rather than random error.

Perhaps, insofar as this diagnosis of the current study is concerned, what would attract the attention of survey methodologists and practitioners alike might be not so much the lenient admission of such bias even in a carefully designed and implemented survey as the vigilant warning that other surveys are not entitled to feel relieved either, but are required to remain sensitive and alert to potential nonresponse bias unless their methodological rigor surpasses some outstanding criteria. To put it simply, in the absence of firm evidence to show the statistical


equivalence between responses and nonresponses, no survey can confidently believe itself immune to such nonresponse bias. As indicated already, this study is, in fact, almost the first full-blown endeavor in Korea to comprehensively pursue the issue of unit nonresponse, and it has demonstrated that most surveys, including carefully designed and implemented ones, are likely to be plagued with nonresponse bias. Although no conclusive argument can be made beyond the KGSS, a survey framework possessed of a variety of designs and protocols unique to itself, this study is still believed to go beyond previous studies, both domestically and cross-culturally, in providing a good case suggesting that a scrupulous alertness, at the minimum, to such potential bias in survey data be required on the part of handlers of primary and secondary data; the virtue, if any, of the current study should probably be found in this respect.

As a brief final note for future studies in this area, it is strongly recommended that further investigation seek to find and account for some of the homogeneities and heterogeneities associated with different designs and protocols of social surveys in different countries. Recalling that each survey tends to be quite specific to the sample at hand and to the social climate or environment in which it is conducted, it is not surprising that different modes, visit protocols, and varieties of surveys in different countries are destined to generate differential patterns and degrees of nonresponse bias. In this sense, some of the findings in the current study may well represent the uniqueness of the survey design and protocol utilized here. To illustrate, the quadratic association observed between economic standing of the household


and each sub-type of nonresponses17) could probably be one such unique finding in Korea. The quite similar, nearly identical lists and patterns of correlates observed for both types of nonresponses, contrary to the argument and prediction of Lynn et al. (2002), would be another such unique finding. Still another could be observed in the finding that inaccessibility, not refusal, is the more serious concern and trouble in Korea. Although it certainly would not be easy to explain the reasons underlying this possible uniqueness in Korea, a fruitful understanding of the nature of unit nonresponse, for one thing, and the assessment of random error versus systematic bias, for another, will pave the way, after all, to addressing these homogeneities and heterogeneities in further detail.

17) Although it surely is not easy to explain why people located in the middle layer of economic standing in Korea generate more non-contacts and refusals compared to those in the two outer layers, a bold interpretation suggests that their busy socio-economic activities, combined with some degree of economic discretion, might count. In other words, middle-layer people tend to be so preoccupied with their own socio-economic activities in everyday life that they are not fully motivated to concede to survey requests given the amount of time (approximately 50 minutes) and incentives (approximately 10 U.S. dollars) involved in participating in the survey. A more convincing and plausible interpretation and explanation is certainly needed in future studies, preferably from a cross-cultural perspective.


REFERENCES

Assael, H. and J. Keon. 1982. “Nonsampling vs. Sampling Errors in Survey Research.” Journal of Marketing 46: 114-123.

de Leeuw, E. and J. van der Zouwen. 1998. “Data Quality in Telephone and Face-to-Face Surveys: A Comparative Meta-Analysis.” Pp. 283-299 in R. Groves, P. Biemer, L. Lyberg, J. Massey, W. Nicholls II, and J. Waksberg (Eds.), Telephone Survey Methodology. New York: Wiley.

Dillman, D., J. Eltinge, R. Groves, and R. Little. 2002. “Survey Nonresponse in Design, Data Collection, and Analysis.” Pp. 3-26 in R. Groves, D. Dillman, J. Eltinge, and R. Little (Eds.), Survey Nonresponse. New York: Wiley.

Groves, R. M. 2004. Survey Errors and Survey Costs. 2nd Edition. New York: Wiley.

Groves, R. M. and M. P. Couper. 1998. Nonresponse in Household Interview Surveys. New York: Wiley.

Groves, R. M., D. Dillman, J. Eltinge, and R. Little (Eds.). 2002. Survey Nonresponse. New York: Wiley.

Groves, R. M., F. J. Fowler, Jr., M. P. Couper, J. M. Lepkowski, E. Singer, and R. Tourangeau. 2004. Survey Methodology. Hoboken, NJ: Wiley.

Gunst, R. F. 1983. “Regression Analysis with Multicollinear Predictor Variables: Definition, Detection, and Effects.” Communications in Statistics: Theory and Methods 12: 2217-60.

Han, H. E. and J. S. Byun. 2014. “Callbacks Effects on Nonresponse Bias.” Survey Research 15 (1): 21-45. [In Korean]

Hox, J. and E. de Leeuw. 2002. “The Influence of Interviewers’ Attitude and Behavior on Household Survey Nonresponse: An International Comparison.” Pp. 103-120 in R. Groves, D. Dillman, J. Eltinge, and R. Little (Eds.), Survey Nonresponse. New York: Wiley.

Johnson, T. P., D. O’Rourke, J. Burris, and L. Owens. 2002. “Culture and Survey Nonresponse.” Pp. 55-69 in R. Groves, D. Dillman, J. Eltinge, and R. Little (Eds.), Survey Nonresponse. New York: Wiley.

Kim, S. W. 2009. “Correlates of the Item Nonresponse in Survey Research: Analysis of the KGSS Cumulative Data, 2003-2007.” Korean Journal of Sociology 43 (6): 147-176.

Kim, S. W., J. B. Kim, and S. B. Shin. 2013. Korean General Social Survey 2012. Seoul: Sungkyunkwan Univ. Press. [In Korean]

Kim, S. Y. and D. Y. Ahn. 2010. “Nonresponse Rates and Nonresponse Bias in Household Interview Surveys.” Research Report of Statistical Research Institute, Statistics Korea 2: 111-164. [In Korean]

Little, R. J. A. and D. B. Rubin. 2002. Statistical Analysis with Missing Data. 2nd Edition. New York: Wiley.

Lulu, L. Y. (CGSS), Y. C. Fu (TSCS), N. Iwai (JGSS), and S. W. Kim (KGSS). 2012. East Asian Social Survey (EASS), Cross-National Survey Data Sets: Network Social Capital in East Asia. Ann Arbor, MI: Inter-University Consortium for Political and Social Research / Seoul, Korea: EASSDA [Distributors].

Lynn, P., P. Clarke, J. Martin, and P. Sturgis. 2002. “The Effects of Extended Interviewer Efforts on Nonresponse Bias.” Pp. 135-147 in R. Groves, D. Dillman, J. Eltinge, and R. Little (Eds.), Survey Nonresponse. New York: Wiley.

Madow, W. G. and I. Olkin (Eds.). 1983. Incomplete Data in Sample Surveys. Vol. 3, Proceedings of the Symposium. New York: Academic Press.

Mathiowetz, N. A. 1998. “Respondent Expression of Uncertainty: Data Source for Imputation.” Public Opinion Quarterly 62 (2): 47-56.

Merkle, D. and M. Edelman. 2002. “Nonresponse in Exit Polls: A Comprehensive Analysis.” Pp. 243-285 in R. Groves, D. Dillman, J. Eltinge, and R. Little (Eds.), Survey Nonresponse. New York: Wiley.

Rudolph, B. A. and A. G. Greenberg. 1994. “Surveying of Public Opinion:
