David Weinberger

AI doesn’t think like us


Abstract

Artificial intelligence in the form of Large Language Models (LLMs) such as ChatGPT and Gemini can make true statements and provide reasonable justifications for them. This seems to meet the classical criteria of knowledge as justified true belief, with just a little fiddling. But if we look past the criteria of knowledge and instead focus on our experience of knowledge, we find two elements lacking in LLMs but implicit even in Socrates’ own discussion of knowledge: we want to understand our justified true beliefs, and, related to this, we want them to be part of an extensive architecture of knowledge in which each piece rests upon others and is connected to many more. But we usually cannot understand how LLMs as a technology produce their claims, and they do not have any architecture of knowledge. Instead, they have a highly complex set of weighted relationships among words, phrases, and parts of words.

Keywords

  • AI
  • knowledge
  • ideas
  • philosophy
  • phenomenology

