5. Discussion and Conclusion

5.4. Online Reviews as a Tool for Communicating Reputation: Promise and Perils

The present research also contributes to our understanding of broader issues related to the advent of online reviews as a means of quickly propagating reputation in the marketplace. There are obvious advantages to online reviews. They are easily accessible and often free to use, and therefore have the potential to increase the efficiency of markets and allow consumers to make more informed and better decisions.

However, online reviews also have less obvious disadvantages. First, consumers’ reviews of products and services lack objectivity and often diverge from the opinions of experts (De Langhe, Fernbach, and Lichtenstein 2015). Second, the consumption of online reviews may not be systematic. Consumers may engage in selective or incomplete information search when evaluating reviews (Ariely 2000; Urbany, Dickson, and Wilkie 1989). For example, existing research suggests that the characteristics of the choice set affect how people search for information. Larger choice sets (corresponding to websites with large numbers of consumer reviews) lead people to stop information search earlier, in part to conserve cognitive resources (Diehl 2005; Payne, Bettman, and Johnson 1988; Iyengar, Wells, and Schwartz 2006). The order in which people read reviews can also affect when they stop searching for more information, as search strategies adopted in initial search environments tend to persist into different search environments (Broder and Schiffer 2006; Levav, Reinholtz, and Lin 2012).

The interpretation of online reviews may also be susceptible to information-processing biases observed in other domains (Kahneman and Tversky 1982). For example, consumers may not intuit the selection biases inherent in employer reviews, failing to appreciate the non-representative polarity of the typical J-shaped review distribution. As another example, consumers may myopically focus on the “star” rating of a product or service while failing to take into account the reference point on which the rating is based. A financial start-up with an average review score of 4.5 stars will likely differ from an established investment bank with the same average rating, because the reference points of employees at the two companies differ in many ways. Prospective employees may not fully account for these inherent differences when evaluating the two companies, and may instead focus myopically on their similar average ratings. Overall, while online reviews are an important tool for communicating reputation in the marketplace, their limitations can be consequential and warrant further study.
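To make the selection-bias intuition concrete, the following minimal simulation (an illustration only, not our estimation code; the population shares and volunteering probabilities are invented for the example) shows how a population with mostly moderate opinions can nonetheless produce a J-shaped distribution of voluntary reviews whose mean differs from the population mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 100,000 employees; the star probabilities
# are invented so that most opinions are moderate (3-4 stars).
stars = rng.choice([1, 2, 3, 4, 5], size=100_000,
                   p=[0.05, 0.15, 0.30, 0.35, 0.15])

# Assumed probability of volunteering a review, by true rating:
# extreme experiences (1 or 5 stars) are far more likely to be shared.
review_prob = np.array([0.30, 0.05, 0.03, 0.05, 0.25])
volunteers = rng.random(stars.size) < review_prob[stars - 1]

print(f"Population mean rating: {stars.mean():.2f}")
print(f"Voluntary-review mean:  {stars[volunteers].mean():.2f}")
for s in range(1, 6):
    print(f"{s} stars: {(stars[volunteers] == s).mean():.0%} of voluntary reviews")
```

Because extreme experiences are overrepresented among volunteers, the observed distribution piles up at one and five stars even though most of the underlying population sits in the middle.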

5.5. Conclusion

We assess the reliability of online employer reviews and the role that incentives can play in reducing bias in the distribution of employee opinions. Using data from Glassdoor, a leading online employer rating website, we show that voluntary reviews are more likely to be one star or five stars than incentivized reviews. On average, this difference in the distribution leads voluntary reviews to be slightly more positive than incentivized reviews.

Using an experiment on Amazon’s Mechanical Turk, we show that voluntary employer reviews are biased relative to forced reviews. Forced reviews in our experimental setting provide an unbiased distribution of reviews that is not available in observational data from Glassdoor, and allow us to rigorously demonstrate bias in voluntary reviews. Our experiment allows us to show that certain monetary and pro-social incentives can increase the response rate and also reduce the bias in voluntary reviews.
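As a stylized illustration of why raising the response rate shrinks this bias (again with invented numbers, not our experimental data), one can extend the simulation above: an incentive modeled as adding a uniform probability of responding for every employee pulls the observed mean toward the forced-review benchmark, in which everyone reviews:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same invented population and volunteering model as in the sketch above.
stars = rng.choice([1, 2, 3, 4, 5], size=100_000,
                   p=[0.05, 0.15, 0.30, 0.35, 0.15])
base_prob = np.array([0.30, 0.05, 0.03, 0.05, 0.25])

def observed_mean(extra_prob):
    """Mean observed rating when an incentive adds a uniform extra
    probability of responding for every employee (illustrative model)."""
    prob = np.clip(base_prob[stars - 1] + extra_prob, 0.0, 1.0)
    responded = rng.random(stars.size) < prob
    return stars[responded].mean()

print(f"Forced benchmark (everyone reviews): {stars.mean():.3f}")
print(f"Voluntary, no incentive:             {observed_mean(0.0):.3f}")
print(f"Voluntary, incentive (+40 p.p.):     {observed_mean(0.40):.3f}")
```

As the extra response probability grows, the set of respondents approaches the full population and the observed mean converges to the unbiased benchmark.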

Our results reinforce the conclusion from the existing word-of-mouth literature that users should know that the distribution of voluntary online reviews can be biased and should be taken with a grain of salt. At the same time, we also demonstrate that bias in reviews can be reduced by using adequate incentives. Our results suggest that websites and government entities alike should experiment with the use of incentives – monetary and pro-social – to increase survey response rates and thereby reduce bias. Such experimentation has the promise to both improve the quality of information at consumers’ disposal and allow companies and governments to optimize the informative signal contained in their surveys.

References

Ariely D (2000) Controlling information flow: Effects on consumers’ decision making and preference. J. Consumer Res. 27(2):233-248.

Bandiera O, Barankay I, Rasul I (2010) Social incentives in the workplace. Rev. of Economic Studies 77(2):417-458.

Benson A, Sojourner A, Umyarov A (2015) Can reputation discipline the gig economy? Experimental evidence from an online labor market. SSRN Scholarly Paper 2696299, Social Science Research Network, Rochester, NY. https://papers.ssrn.com/abstract=2696299

Berger J (2011) Arousal increases social transmission of information. Psychological Sci. 22(7):891-893.

Berger J, Milkman K (2012) What makes online content viral? J. Marketing Res. 49(2):192-205.

Berger J, Schwartz EM (2011) What drives immediate and ongoing word of mouth? J. Marketing Res. 48(5):869-880.

Broder A, Schiffer S (2006) Adaptive flexibility and maladaptive routines in selecting fast and frugal decision strategies. J. Experimental Psychology: Learning, Memory, and Cognition 32(4):904-918.

Buhrmester M, Kwang T, Gosling SD (2011) Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives Psychological Sci. 6(1):3-5.

Buss DM (1989) Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures. Behavioral and Brain Sci. 12:1-49.

Card D, Mas A, Moretti E, Saez E (2012) Inequality at work: The effect of peer salaries on job satisfaction. American Economic Rev. 102(6):2981-3003.

Chatterjee P (2001) Online reviews: Do consumers use them? Gilly MC, Meyers-Levy J, eds. Advances in Consumer Research, Vol. 28 (Association for Consumer Research, Valdosta, GA), 129-133.

Chen Z (2017) Social acceptance and word of mouth: How the motive to belong leads to divergent WOM with strangers and friends. J. Consumer Res. 44(3):613-632.

Chintagunta PK, Gopinath S, Venkataraman S (2010) The effects of online user reviews on movie box office performance: Accounting for sequential rollout and aggregation across local markets. Marketing Sci. 29(5):944-957.

Dana J, Cain DM (2015) Advice versus choice. Current Opinion Psych. 6:173-176.

De Langhe B, Fernbach PM, Lichtenstein DR (2015) Navigating by the stars: Investigating the actual and perceived validity of online user ratings. J. Consumer Res. 42:817-833.

Diehl K (2005) When two rights make a wrong: Searching too much in ordered environments. J. Marketing Res. 42(3):313-322.

Dillman DA, Smyth JD, Christian LM (2014) Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 4th ed. (Wiley).

Dunn EW, Aknin LB, Norton MI (2008) Spending money on others promotes happiness. Science 319:1687-1688.

Dunn EW, Aknin LB, Norton MI (2014) Prosocial spending and happiness: Using money to benefit others pays off. Current Directions Psychological Sci. 23(1):41-47.

Filippas A, Horton J, Golden J (2017) Reputation in the long-run. Working paper, NYU.

Floyd K, Freling R, Alhoqail S, Cho HY, Freling T (2014) How online product reviews affect retail sales: A meta-analysis. J. Retailing 90:217-232.

Forbes (2014) How negative online company reviews can impact your business and recruiting. https://www.forbes.com/sites/jaysondemers/2014/09/09/how-negative-online-company-reviews-can-impact-your-business-and-recruiting/#1409f0c91d9b

Fradkin A, Grewal E, Holtz D, Pearson M (2015) Bias and reciprocity in online reviews: Evidence from field experiments on Airbnb. Proceedings of the Sixteenth ACM Conference on Economics and Computation (ACM), 641.

Gneezy U, Meier S, Rey-Biel P (2011) When and why incentives (don’t) work to modify behavior. J. Econom. Perspectives 25(4):191-209.

Gneezy U, Rustichini A (2000) A fine is a price. J. Legal Studies 29(1):1-17.

Heyman J, Ariely D (2004) Effort for payment: A tale of two markets. Psychological Sci. 15(11):787-793.

Hu N, Zhang J, Pavlou PA (2009) Overcoming the J-shaped distribution of product reviews. Commun. ACM 52(10):144-147.

Huang M, Li P, Meschke F, Guthrie J (2015) Family firms, employee satisfaction, and corporate performance. J. Corporate Finance 34:108-127.

Huang TK, Ribeiro B, Madhyastha HV, Faloutsos M (2014) The socio-monetary incentives of online social network malware campaigns. Proceedings of the Second ACM Conference on Online Social Networks (ACM), 259-270.

Iyengar SS, Wells RE, Schwartz B (2006) Doing better but feeling worse: Looking for the ‘best’ job undermines satisfaction. Psychological Sci. 17(2):143–150.

Kahneman D, Tversky A (1982) On the study of statistical intuitions. Cognition 11:123-141.

King RA, Racherla P, Bush VD (2014) What we know and don’t know about online word-of-mouth: A review and synthesis of the literature. J. Interactive Marketing 28(3):167-183.

Klein N, Grossman I, Uskul AK, Kraus AA, Epley N (2015) It generally pays to be nice, but not really nice: Asymmetric reputations from prosociality across 7 countries. Judgment and Decision Making 10:355-364.

Klein N (2017) Prosocial behavior increases perceptions of meaning in life. J Positive Psychology 12(4):354-361.

Levav J, Reinholtz N, Lin C (2012) The effect of ordering decisions by choice-set size on consumer search. J. Consumer Res. 39:585-599.

Lu SF, Rui H (2017) Can we trust online physician ratings? Evidence from cardiac surgeons in Florida. Management Sci. (published online June 13, 2017).

Luca M (2016) Reviews, reputation, and revenue: The case of Yelp.com. Harvard Business School NOM Unit Working Paper 12-016.

Mayzlin D, Dover Y, Chevalier J (2014) Promotional reviews: An empirical investigation of online review manipulation. American Economic Rev. 104(8):2421-2455.

Meyer BD, Mok WKC, Sullivan JX (2015) Household surveys in crisis. J. Economic Perspectives 29(4):199-226.

Moe WW, Trusov M (2011) The value of social dynamics in online product ratings forums. J. Marketing Res. 48(3):444-456.

Pallais A (2014) Inefficient hiring in entry-level labor markets. American Economic Rev. 104(11):3565-3599. https://doi.org/10.1257/aer.104.11.3565

Payne JW, Bettman JR, Johnson EJ (1988) Adaptive strategy selection in decision making. J. Experimental Psychology: Learning, Memory, and Cognition 14(3):534-552.

Philipson T (2001) Data markets, missing data, and incentive pay. Econometrica 69(4):1099-1111. https://doi.org/10.1111/1468-0262.00232

Senecal S, Nantel J (2004) The influence of online product recommendations on consumers’ online choices. J. Retailing 80:159-169.

Simonson I (2016) Imperfect progress: An objective quality assessment of the role of user reviews in consumer decision making: A commentary on de Langhe, Fernbach, and Lichtenstein. J. Consumer Res. 42(6):840-845.

Urbany JE, Dickson PR, Wilkie WL (1989) Buyer uncertainty and information search. J. Consumer Res. 16(2):208-215.

                               MTurk                          Glassdoor
Variable                       Obs.   Mean     SE             Obs.      Mean     SE
Age                            639    35.462   10.887         188,623   34.314   10.550
Female                         639    0.518    0.500          188,623   0.419    0.493
More than 1000 employees       639    0.421    0.494          188,623   0.703    0.457
Education (years)              639    15.283   2.006          188,623   15.505   1.343
Tenure (years)                 639    3.563    4.344          188,623   3.543    4.790

Table 1: Summary statistics: MTurk vs. Glassdoor datasets

              Rating       Rating        Is 1 star    Is 1 star    Is 5 stars   Is 5 stars
              (1)          (2)           (3)          (4)          (5)          (6)
Voluntary     0.0240***    0.0345***     0.0190***    0.0143***    0.0467***    0.0432***
              (0.00800)    (0.00798)     (0.00169)    (0.00169)    (0.00301)    (0.00302)
Age                        -0.00881***                0.00225***                -0.000635***
Observations  188,623      188,623       188,623      188,623      188,623      188,623
R-squared     0.000        0.022         0.001        0.011        0.001        0.035

Robust standard errors in parentheses
*** p<0.01, ** p<0.05, * p<0.1

Table 2: Glassdoor selection bias: more polarized ratings
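For readers who wish to run this style of analysis on their own data, a minimal sketch of the specification behind columns (1)-(2) might look as follows; the file name and column names (rating, voluntary, age) are hypothetical placeholders, not our actual data pipeline:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-review dataset; one row per Glassdoor review, with a
# 1-5 star rating, a voluntary-review dummy, and the reviewer's age.
df = pd.read_csv("glassdoor_reviews.csv")

# Columns (1)-(2) of Table 2: regress the star rating on the voluntary
# dummy, without and then with an age control, using robust (HC1) SEs.
for formula in ("rating ~ voluntary", "rating ~ voluntary + age"):
    fit = smf.ols(formula, data=df).fit(cov_type="HC1")
    print(fit.params, fit.bse, sep="\n")
```

Columns (3)-(6) would use the same pattern with binary outcomes such as an is-one-star or is-five-stars indicator as the dependent variable.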

Table 3: Impact of selection and incentives on average ratings in MTurk sample (relative to no incentive, forced review)

Table 4: Impact of selection and incentives on average ratings for incentives that did not increase response rates in the MTurk sample (relative to no incentive, forced review)

                              Rating      Rating
Negative prosocial            -0.325      -0.319
                              (0.198)     (0.199)
Nonspecific prosocial         -0.332*     -0.345*
                              (0.190)     (0.194)
Positive prosocial            -0.332*     -0.331*
                              (0.181)     (0.182)
Choice ×
  Control (no incentive)      -0.574**    -0.574**
                              (0.225)     (0.227)
  Monetary high               -0.294      -0.297
                              (0.212)     (0.215)
  Monetary low                -0.311      -0.323
                              (0.241)     (0.243)
  Negative prosocial          0.278       0.273
                              (0.241)     (0.236)
  Nonspecific prosocial       0.382*      0.402*
                              (0.212)     (0.213)
  Positive prosocial          0.043       0.029
                              (0.252)     (0.253)

*** p<0.01, ** p<0.05, * p<0.1

Table 5: Framing effects: do incentives affect forced average ratings in MTurk sample?

Figure 1: Glassdoor GTG vs. voluntary reviews

Figure 2: Glassdoor: changes in rankings of frequent industries due to bias

Figure 3: MTurk experiment: efficacy of different incentives in increasing response rates

[Figure 3 plots the proportion choosing to provide a review (0%-100%) by experimental condition: Control, Positive Prosocial, Negative Prosocial, Nonspecific Prosocial, Monetary Low, Monetary High.]

Figure 4: Bias in reviews in the absence of incentives in the MTurk experiment

Figure 5: No bias in reviews with high monetary incentive (75% payment increase) in the MTurk experiment

Figure 6: No bias in reviews with nonspecific prosocial incentives in the MTurk experiment

Figure 7: Glassdoor GTG vs. MTurk choice, high monetary: similar distributions
