Authors: Hyowon Gweon [1], Judith Fan [1,2], Been Kim [3]
[1] Department of Psychology, Stanford University, Stanford, CA 94305, USA.
[2] Department of Psychology, University of California, San Diego, CA 92093, USA.
[3] Google Research, Mountain View, CA 94043, USA.
Conference/Journal: Philos Trans A Math Phys Eng Sci
Date published: 2023 Jul 24
Other: Volume: 381, Issue: 2251, Pages: 20220048, DOI: 10.1098/rsta.2022.0048, Word Count: 222
A hallmark of human intelligence is the ability to understand and influence other minds. Humans engage in inferential social learning (ISL) by using commonsense psychology to learn from others and to help others learn. Recent advances in artificial intelligence (AI) are raising new questions about the feasibility of human-machine interactions that support such powerful modes of social learning. Here, we envision what it means to develop socially intelligent machines that can learn, teach, and communicate in ways that are characteristic of ISL. Rather than machines that simply predict human behaviours or recapitulate superficial aspects of human sociality (e.g. smiling, imitating), we should aim to build machines that can learn from human inputs and generate outputs for humans by proactively considering human values, intentions and beliefs. While such machines can inspire next-generation AI systems that learn more effectively from humans (as learners) and even help humans acquire new knowledge (as teachers), achieving these goals will also require scientific study of the complementary question: how humans reason about machine minds and behaviours. We close by discussing the need for closer collaborations between the AI/ML and cognitive science communities to advance a science of both natural and artificial intelligence. This article is part of a discussion meeting issue 'Cognitive artificial intelligence'.
Keywords: artificial intelligence; cognitive science; communication; social intelligence; theory of mind.
PMID: 37271177 DOI: 10.1098/rsta.2022.0048