Anna M. Borghi, Chiara De Livio, Francesco Mannella, Luca Tummolini, Stefano Nolfi

Exploring the prospects and challenges of large language models for language learning and production


Abstract

The success of Large Language Models (LLMs) in many application domains suggests that they may also change how we conceive cognition. LLMs possess capabilities traditionally considered exclusively human. Can experience with language alone facilitate the acquisition of other complex cognitive abilities? Do linguistic, sensorimotor, and interoceptive experiences need to be integrated? Are there domains, like that of abstract concepts (e.g., freedom), where linguistic experience suffices to capture meaning? After introducing what LLMs are, we address their potential impact, discussing five differences from human cognition: they are not grounded, lack action, struggle to capture pragmatics, are culturally biased, and do not reflect individual characteristics.

Keywords

  • large language models
  • grounded cognition
  • abstract concepts
  • ChatGPT
  • cultures
