#29 Justin Tiehen: Why AI Can't Make a Promise—The Hidden Limits of Large Language Models
Have you ever felt like ChatGPT genuinely understands you? What if the reality is that it doesn't even have the foundational capacity to "speak" to you at all?
On this episode of The AITEC Podcast, Roberto Carlos García and Sam Bennett sit down with philosopher Justin Tiehen (University of Puget Sound) to unpack his fascinating new paper, LLMs Lack a Theory of Mind and So Can't Perform Speech Acts--A Causal Argument.
Justin takes us on a deep dive into the philosophy of mind to explain why current Large Language Models, despite their impressive output, are essentially faking it. We explore why next-token predictors lack the causal architecture required for a "Theory of Mind," and why, without it, they are fundamentally incapable of making assertions, giving orders, or performing genuine speech acts.
Key Takeaways from this Episode:
The Ladder of Causation: Why AI is stuck observing statistical correlations and cannot grasp true causal interventions or counterfactuals (drawing on Judea Pearl’s work).
The Speech Act Problem: Why performing a true "speech act" requires the deliberate intention to influence another person's mind.
Cheating the Benchmarks: How LLMs "cheat" on psychological tests like the Sally-Anne false-belief task simply by exploiting statistical patterns in their training text.
The Threat of AI Blackmail: What it would actually look like if an AI possessed a Theory of Mind and strategically tried to manipulate human behavior to achieve its goals.
Whether you are deeply invested in the philosophy of language or just trying to figure out how much you should trust your favorite AI assistant, this conversation will completely reframe how you view generative AI.