Abstract
This PhD thesis presents a transdisciplinary research work grounded in a cognitive engineering approach. At the intersection of Artificial Intelligence (AI) and Human Factors, we explore the interaction principles that enable effective collaboration between a user and an intelligent recommendation system in a decision-making context. We focus on the application domain of mission planning, for which we developed a simulation environment that places an air operator in charge of supervising a drone with a high level of decision-making autonomy. The mission scenario, played out on different terrains, leads the operator to determine a new flight plan with the assistance of intelligent recommendations. An analysis of the literature allows us to characterize three scientific problems, which we investigate through three experimental studies in this environment.

The first problem concerns how a change in the AI's participation in the decision-making process affects the operator's experience. Our results indicate that when the AI reduces its participation by ceasing to propose plan suggestions, the operator's feeling of being responsible for, and at the origin of, the solution increases significantly. However, this effect is asymmetric: when the AI increases its participation by introducing plan suggestions that were not initially proposed, the operator's feeling of being responsible for and at the origin of the solution decreases only slightly, even though the validated plans become more homogeneous.

The second problem examines the potential links between the trade-off criteria of the plans the operator constructs with the AI and the personality traits that could predict them, on terrains where no conceivable plan has acceptable quality. We highlight an individual preference for degrading one of the three plan trade-off criteria: some participants prefer to abandon targets, others to consume a large amount of fuel, others to take a high risk. Nevertheless, we do not observe a correlation between these participants' decision profiles and their Big Five traits or their self-confidence.

The third problem concerns the development of AI that better accounts for human decision making. Self-confrontation interviews with participants allowed us to build a model of the operator's decision-making process for the replanning task in the environment. We analyzed how the operator's use of the different tools reveals which phase of solution construction they are in, which allowed us to automate the monitoring of this decision-making process. We derive examples of situations in which the AI system can adapt its recommendations depending on whether the operator is identified as being in an exploration phase or a flight-plan exploitation phase.

These results allow us to identify human-factors issues related to future uses of AI in human-AI teams even before such systems are available, and to propose design principles that rely on the cognitive mechanisms underlying human decision making to lay the foundations for interaction with the AI system.
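As a purely illustrative sketch, not taken from the thesis itself, the phase-monitoring idea described above could be approximated as follows. The event names, the margin threshold, and the majority-vote classification rule are all hypothetical assumptions made for this example.

```python
# Illustrative sketch only (not from the thesis): one possible way to infer
# whether an operator is in an "exploration" or an "exploitation" phase from
# a stream of tool-usage events. Event names and the decision rule are
# hypothetical assumptions.

from collections import Counter
from typing import Iterable

# Hypothetical tool-usage events logged by the planning interface.
EXPLORATION_EVENTS = {"open_map_layer", "compare_plans", "query_target_info"}
EXPLOITATION_EVENTS = {"edit_waypoint", "adjust_fuel_margin", "validate_plan"}


def infer_phase(events: Iterable[str], margin: int = 2) -> str:
    """Classify the current decision-making phase from recent tool usage.

    Simple majority rule with a margin: if exploration-type events outnumber
    exploitation-type events by at least `margin`, report "exploration";
    symmetrically for "exploitation"; otherwise report "undetermined".
    """
    counts = Counter()
    for event in events:
        if event in EXPLORATION_EVENTS:
            counts["exploration"] += 1
        elif event in EXPLOITATION_EVENTS:
            counts["exploitation"] += 1

    diff = counts["exploration"] - counts["exploitation"]
    if diff >= margin:
        return "exploration"
    if diff <= -margin:
        return "exploitation"
    return "undetermined"


if __name__ == "__main__":
    recent_events = [
        "open_map_layer", "compare_plans", "query_target_info",
        "edit_waypoint",
    ]
    # A recommender could then adapt: e.g. propose diverse candidate plans
    # during exploration, and fine-grained refinements during exploitation.
    print(f"Inferred phase: {infer_phase(recent_events)}")
```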
Source: http://www.theses.fr/2022BORD0342