#31: Jacob Browning: Unmasking the Fake Minds of Large Language Models

Have you ever wondered if AI models actually understand the words they generate, or if they are just really good at faking it?

On this episode of The AITEC Podcast, Roberto García and Sam Bennett are joined by philosopher Jacob Browning (Baruch College, CUNY) to unpack his article, "Intentionality All-Stars Redux: Do language models know what they are talking about?"

Using a clever baseball diamond metaphor and drawing on the philosophy of Immanuel Kant, Jacob explains why Large Language Models lack the "intentionality" required for genuine comprehension. We cover:

  • First Base (Formal Competence): Why LLMs struggle with basic logic and negation, revealing the absence of an underlying logical engine.

  • Second Base (Rationality): Why true understanding requires purposive behavior, and how LLMs hilariously fail at "intuitive physics" (like trying to inflate a couch to get it onto a roof).

  • Shortstop (Objectivity and World Models): Why genuine understanding requires grasping an objective, mind-independent world that determines whether sentences are true or false, and how LLMs' lack of a coherent "world model" leads them to fail at tasks that demand intuitive physics and counterfactual planning (like predicting where a billiard ball will go or playing simple video games).

  • Third Base (The Unified Self): Why making a claim requires a persistent self that takes responsibility for its beliefs—something a next-token predictor simply cannot do.

Whether you're exploring the intersection of AI, technology, and ethics, or just trying to figure out if your chatbot actually knows what it's saying, this conversation will give you the philosophical toolkit to see through the illusion.
