EARLY ACCESS

Luca Introzzi Paolo Cherubini Carlo Reverberi

Human-AI interaction as cooperation: Towards a theory of artificial mind


Abstract

A recent development in the psychology of thought concerns decision-making in hybrid teams, composed of a human agent and an artificial agent. Interaction in hybrid teams can be conceptualized as cooperation: support systems are developed not to substitute for humans but to collaborate with them. The human cognitive system has several blind spots, limits that can make task performance suboptimal. A cognitive analysis of hybrid teams in the medical context shows the impact of these limits and how they can be managed. Some of them, perceptual and attentional in nature, can be partially compensated by using information provided by an artificial agent to support the decision process. Other limits emerge in the effective use of the information provided. The fundamental problem is calibration: correctly weighting the human's opinion and the AI's opinion, and then rationally integrating the two. The calibration process may be systematically distorted by cognitive biases and by the lack of a proper understanding of how the AI contributes to the team, leading to suboptimal performance. Developing a theory of artificial mind allows the human decision maker to represent how the AI actually functions, which human limits the AI is meant to address, and which human strengths resources are better concentrated on, thereby facilitating calibration. A rationally calibrated interaction will provide information that supports improved decisions and diagnoses.

Keywords

  • artificial intelligence
  • hybrid intelligence
  • human-AI interaction
  • theory of artificial mind
  • cognitive bias
  • hybrid teams
