“Artificial intelligence (AI) is the capability of computer systems to perform tasks that normally require human intelligence such as . . . decision-making.”6 Within AI, automation is:

The level of human intervention required by a system to execute a given task(s) in a given environment. The highest level of automation (full) is no immediate human intervention.7

Autonomy, different from automation, is the “level of independence that humans grant a system. . . . to achieve an assigned mission . . . [with] planning, and decision-making.”8 Looking at human-machine decision-making, experts from industry and the DoD foresee AI capability maturity in 2050 at a level where machines have functional autonomy (machine learning and improving within a specific role), otherwise known as narrow AI. This does not reduce the prominence of the human element.

The Army expects to be challenged by a global military peer power where all domains (land, air, maritime, space, and cyberspace) are contested. The speed of recognition, speed of decision, and speed of action will strain human abilities, so more human tasks will be aided by autonomous systems.9 The Army’s Chief Information Officer/G6, in the Army Network Strategy, envisions that:

augmented humans, autonomous processes and automated decision making, will permeate the battlefield.

The speed at which data are dispersed will create an information-rich environment . . . [where] extraction of mission-relevant content may be challenging.10

The Army’s robotic and autonomous systems (RAS) strategy also emphasizes that machines will improve decision-making, but might also overwhelm human decision management ability.11 A human-machine team, collaborating in the operations process, can be exceedingly responsive to changes in the fast-paced, complex, and adaptive future operating environment while maintaining the human dimension. As with any relationship, a level of trust is required to be dependent on another teammate and still be effective.

TRUST

Trust is “assured reliance on the character, ability, strength, or truth of someone or something.”12 Prudent trust is a competitive advantage that increases efficiency and effectiveness of teams and organizations.

There are many components of trust relevant to man-machine interaction that must be considered to analyze the implications of trust on human-machine collaborative decision-making: trust between individual humans as trustee and trustor, trust between humans and computer automation, and trust between cultures.

Trust between two entities, the trustee and the trustor, is a dynamic at the personal level. Trustee variables include integrity, intent, abilities, and results.

The absolute value of these variables is not important, but rather how the trustor perceives the value of these variables in the trustee. A trustor’s propensity to trust is based on their biases, beliefs, and experiences, and it is the lens through which they view trustworthiness.13 Trust studies by Stephen M. R. Covey compare high-trust and low-trust factors in relationships. High trust builds confidence, resulting in faster decisions and lower resultant costs, whereas low trust causes suspicion and negative effects.14 A “no trust” leader loses opportunities and opens windows for adversaries to exploit friendly vulnerabilities due to indecision. An “absolute trust” leader appears to be effective but simply relinquishes the leader role through excessive trust. A “prudent trust” leader sensibly balances trust relationships to leverage dividends from trust. This propensity to trust generates synergy without relinquishing leadership.

Research on human interaction with automation and robots provides similar results in the human-machine trustor-trustee relationship. People trust automation to a level commensurate with their confidence in the machine and its ability to complete the task at least as well as they could on their own. This is tempered by how well they feel they can control the machine system.15 In general, the trustor gives trust when they perceive it will result in a beneficial outcome.16

Another meaningful study analyzed automation trust across cultures.17 The study classified cultures into three groups: dignity, face, and honor.

Dignity cultures emphasize individual self-worth and are more prevalent in Western Europe and North America, where laws are important mechanisms governing interpersonal transactions. Face cultures, primarily in East Asia, are centered on stable social hierarchies and norms; members highly value how others view them, extending high trust to in-groups and lower trust to out-groups.

Honor cultures, primarily in the Middle East and Latin America, have more unstable social hierarchies that require significantly longer experience to develop trust.18 The research suggests that interpersonal trust within these cultures translates into trust in automation as well. Dignity cultures have the highest relative trust of automation and AI, honor cultures have the lowest, and face cultures fall in between. Operators from honor cultures required more extensive training with the automation than operators from dignity and face cultures to develop an equal degree of trust in it.19 This suggests that, at least culturally, the United States has an advantage in adopting autonomous systems built on human-machine relationships. The caveat is that individuals may exhibit traits of other cultures based on their personal beliefs, biases, and experiences.

A 2016 Defense Science Board study described barriers to trust in autonomous systems, emphasizing inputs, processing, and outputs. Human inputs, especially sensory functions, are not easily replicated for machines, but machines do have the potential for a much higher number of more varied input types. In decision-making, this input variance can create differences in how the human and the machine each understand the environment or define the problem.

During processing, even if both humans and machines receive exactly the same inputs, each may assign different degrees of relevance to each of those inputs, resulting in differences in the underlying reasoning.

Moreover, even if those same inputs are weighted with the same values, machine learning, with deeper and more rapid cycles, may lead to different results than a human, who might weigh a single significant life experience very heavily when making decisions.

A machine may also lack the contextual learning that humans gain from broader experiences. Output barriers may include ineffective human-machine computer interfaces (keyboard, mouse, screen, etc.) that slow communications in situations requiring speed. While enhanced language processing and visual interfaces may make the experience richer, they could still paralyze the human with overwhelming amounts of complex information. Human-machine trust barriers, including cognitive disparity or even resentment, have the potential to be significant as machines learn and retain information much faster and more broadly (and better?) than human teammates. There are not only great opportunities to leverage autonomous system capabilities but also challenges in fielding those capabilities to leaders who do not trust them fully.