
#33 Michael Gerlich: How AI Is Stealing Your Ability to Think

Are we trading our critical thinking skills for the sake of digital convenience?

In this episode of The AITEC Philosophy Podcast, Roberto Carlos García sits down with Michael Gerlich. Michael is the Head of the Center for Strategic Corporate Foresight and Sustainability, the Head of Executive Education, and a Senior Faculty member at SBS Swiss Business School. Most recently, Michael summarized his research on the interaction between LLMs and humans in The Convenience Trap: What Happens When AI Becomes the Mind Behind Our Lives.

In this conversation, Michael shares his interdisciplinary research into how AI is "creeping" into nearly every aspect of our existence. We explore the dangerous phenomenon of "cognitive offloading"—the tendency to let algorithms make our choices, from the music we hear to the news we consume—and how this creates a "convenience trap" that narrows our perspective and weakens our mental "musculature". Michael argues that for AI to be a truly beneficial "sparring partner," we must do the hard work of thinking first before engaging with the machine.


#32 Yochai Ataria: Why Blade Runner is Secretly About Fake Realities

Have you ever suspected that the technology you use isn't just a tool, but an entirely fake reality replacing the natural world?

On this episode of The AITEC Philosophy Podcast, Sam Bennett sits down with Israeli philosopher Yochai Ataria to explore the brilliant philosophical undercurrents of Ridley Scott's 1982 classic, Blade Runner. Ataria reveals how the film functions as a profound Heideggerian critique of the modern technological age.

They unpack how the film's protagonist, Rick Deckard, serves as a direct stand-in for René Descartes, undergoing a radical crisis of certainty and identity. The conversation also delves into ancient Greek frameworks, exploring how the replicant Roy Batty mirrors the Oedipus myth and how the rare flashes of lightning in a polluted sky tie back to Zeus. Ultimately, this episode asks whether the representations we rely on—from futuristic photo-analyzers to modern social media algorithms—are actually elaborate lies designed to disconnect us from reality.


#31 Jacob Browning: Unmasking the Fake Minds of Large Language Models

Have you ever wondered if AI models actually understand the words they generate, or if they are just really good at faking it?

On this episode of The AITEC Podcast, Roberto García and Sam Bennett are joined by philosopher Jacob Browning (Baruch College, CUNY) to unpack his article, Intentionality All-Stars Redux: Do language models know what they are talking about?

Using a clever baseball diamond metaphor and drawing on the philosophy of Immanuel Kant, Jacob explains why Large Language Models lack the "intentionality" required for genuine comprehension. We cover:

  • First Base (Formal Competence): Why LLMs struggle with basic logic and negation, revealing the absence of an underlying logical engine.

  • Second Base (Rationality): Why true understanding requires purposive behavior, and how LLMs hilariously fail at "intuitive physics" (like trying to inflate a couch to get it onto a roof).

  • Shortstop (Objectivity and World Models): Why genuine understanding requires grasping an objective, mind-independent world that determines whether sentences are true or false. Jacob explains how LLMs lack a coherent "world model," which leads them to fail at tasks requiring intuitive physics and planning for counterfactual situations (like predicting where a billiard ball will go or playing simple video games).

  • Third Base (The Unified Self): Why making a claim requires a persistent self that takes responsibility for its beliefs—something a next-token predictor simply cannot do.

Whether you're exploring the intersection of AI, technology, and ethics, or just trying to figure out if your chatbot actually knows what it's saying, this conversation will give you the philosophical toolkit to see through the illusion.


#30 Andrea Pinotti: Beyond the Frame—Virtual Reality, Narcissus, and the Desire to Enter the Image

Philosopher Andrea Pinotti joins us to discuss At the Threshold of the Image: From Narcissus to Virtual Reality. What begins as a conversation about image theory quickly becomes a sweeping exploration of immersion, identity, and the strange pull of simulated worlds.

Why do we long to enter the image? What do we gain—and lose—when the frame disappears? Pinotti guides us from Paleolithic caves to VR headsets, through myths of Narcissus and Pygmalion, to Black Mirror’s digital afterlives.

Along the way, we consider how virtual environments blur fiction and reality, evoke religious promises, and reshape what it means to be human.

If you've ever wondered why virtual reality feels so real—or so dangerous—this episode is for you.


#29 Justin Tiehen: Why AI Can't Make a Promise—The Hidden Limits of Large Language Models

Have you ever felt like ChatGPT genuinely understands you? What if the reality is that it doesn't even have the foundational capacity to "speak" to you at all?

On this episode of The AITEC Podcast, Roberto Carlos García and Sam Bennett sit down with philosopher Justin Tiehen (University of Puget Sound) to unpack his fascinating new paper, LLMs Lack a Theory of Mind and So Can't Perform Speech Acts: A Causal Argument.

Justin takes us on a deep dive into the philosophy of mind to explain why current Large Language Models, despite their impressive output, are essentially just faking it. We explore why next-token predictors are completely missing the causal architecture required to have a "Theory of Mind," and why, without that, they are fundamentally incapable of making assertions, giving orders, or performing true speech acts.

Key Takeaways from this Episode:

  • The Ladder of Causation: Why AI is stuck observing statistical correlations and cannot grasp true causal interventions or counterfactuals (drawing on Judea Pearl’s work).

  • The Speech Act Problem: Why performing a true "speech act" requires the deliberate intention to influence another person's mind.

  • Cheating the Benchmarks: How LLMs "cheat" on psychological exams like the Sally-Anne false-belief test simply by memorizing statistical patterns in text.

  • The Threat of AI Blackmail: What it would actually look like if an AI possessed a Theory of Mind and strategically tried to manipulate human behavior to achieve its goals.

Whether you are deeply invested in the philosophy of language or just trying to figure out how much you should trust your favorite AI assistant, this conversation will completely reframe how you view generative AI.


#28 Mathilda Marie Mulert: Sex Robots, Simulation, and the Question of Moral Harm

In this episode of the AITEC Podcast, we’re joined by philosopher Mathilda Marie Mulert, a doctoral researcher at the Oxford Internet Institute, to explore one of the most difficult questions in contemporary tech ethics: when, if ever, is it morally permissible to simulate sexual violence?

Drawing on her recent work on simulation ethics, Mulert examines video games, virtual environments, sex robots, and consensual role-play to challenge the assumption that “it’s just pretend.” We discuss the Gamers’ Dilemma, the limits of consent, and why moral context—not just content—matters when evaluating simulated wrongdoing.

This conversation is philosophical, careful, and candid. Listener discretion is advised.


#27 Matheus Ferreira de Barros: Technology, Spheres, and the Human Being

In this episode of the AITEC podcast, Sam Bennett and Roberto Carlos García speak with Matheus Ferreira de Barros, a philosopher of technology at PUC-Rio and the Federal University of Rio de Janeiro, about the work of Peter Sloterdijk. Ferreira de Barros introduces Sloterdijk’s philosophy of technology, focusing on the idea that human beings and technology co-evolve and that technology plays a constitutive role in human life rather than merely serving as an external tool.

The conversation explores Sloterdijk’s Spheres project, including his account of insulation, distance from nature, and the creation of protective interiors that stabilize human existence at biological, psychological, and symbolic levels. The discussion also examines the loss of large-scale meaning structures in modernity, the role of religion and culture as technologies of existential security, and how contemporary technologies, including AI, may both disrupt and reshape the spheres through which human life becomes livable.


#26 Iwan Williams: Do Language Models Have Intentions?

In this episode of the AITEC podcast, Sam Bennett speaks with philosopher of mind and AI researcher Iwan Williams about his paper “Intention-like representations in language models?” Williams is a postdoctoral researcher at the University of Copenhagen and received his PhD from Monash University.

The conversation explores whether large language models exhibit internal representations that resemble intentions, as distinct from beliefs or credences. Focusing on features such as directive function, planning, and commitment, Williams evaluates several empirical case studies and explains why current models may appear intention-like in some respects while falling short in others. The discussion also considers why intentions matter for communication, safety, and our broader understanding of artificial intelligence.


#25 Pilar López-Cantero: The Ethics of Breakup Chatbots

What if your ex never really left—because you trained a chatbot to be them? In this episode of the AITEC Podcast, we’re joined by philosopher Pilar López-Cantero to explore her provocative article, The Ethics of Breakup Chatbots. From the haunting potential of AI relationships to the dangers of narrative stagnation, we dive into what it means to love, let go, and maybe linger too long—with a machine. Are these bots helping us heal, or are they shaping a lonelier, more controllable kind of intimacy?


#24 Kevin Crowston and Francesco Bolici: The Death of Expertise?

In this episode of the show, we sit down with Kevin Crowston and Francesco Bolici—two leading scholars of information science and organizational behavior—to explore the hidden risks of generative AI in the workplace and the classroom.

Their recent paper on deskilling and upskilling with AI serves as the foundation for a conversation that ranges from ChatGPT in programming to the future of education. The key concern? AI systems may offer short-term productivity boosts—but they quietly erode the very skills people need to think, solve problems, and make decisions when things go wrong.

We unpack:

  • The tension between efficiency and learning: how AI tools give us answers but rob us of “learning by doing”

  • Why novice users might look as good as experts—but only because AI is flattening the skill curve

  • The “leveling effect” vs. the “multiplier effect”: when AI empowers novices vs. when it amplifies expert performance

  • What happens to organizations—and societies—when no one remembers how to do things manually

  • How educators can respond: should we stop students from using AI? Or teach them how to use it without becoming dependent?

From sales to software engineering, and from university classrooms to global labor markets, this episode explores how generative AI reshapes human learning, power, and value—and what we must do now to avoid a future of mass deskilling.


#23 Sebastian Purcell: Rootedness, Not Happiness — Aztec Wisdom for a Slippery World

In this episode, we speak with philosopher Sebastian Purcell about his new book The Outward Path: The Wisdom of the Aztecs. Purcell shows that Aztec philosophy offers a strikingly different vision of the good life — one that rejects the modern obsession with happiness and invulnerability in favor of something deeper: rootedness.

We discuss what it means to live a rooted life in a world that feels increasingly unstable — from collective agency and humility to willpower, ritual, and the art of balance. Along the way, Purcell explains how Aztec ethics can help us rethink everything from self-discipline and courage to how we live with technology, social media, and each other.

Links:
Sebastian’s website
Sebastian’s articles on Medium
Sebastian’s book


#22 Iain Thomson: Why Heidegger Thought Technology Was More Dangerous Than We Realize

What if our deepest fears about AI aren't really about the machines at all—but about something we've forgotten about ourselves? In this episode, we speak with philosopher Iain D. Thomson (University of New Mexico), a leading scholar of Martin Heidegger, about his new book Heidegger on Technology’s Danger and Promise in the Age of AI.

Together we explore Heidegger’s famous claim that “the essence of technology is nothing technological,” and why today’s crises—from environmental collapse to algorithmic control—are really symptoms of a deeper existential and ontological predicament.

Also discussed:
– Why AI may not be dangerous because it’s too smart, but because we stop thinking
– Heidegger’s concept of “world-disclosive beings” and why ChatGPT doesn’t qualify
– How the technological mindset reshapes not just our tools but our selves
– What a “free relation” to technology might look like
– The creeping danger of lowering our standards and mistaking supplements for the real thing


#21 Jayashri Bangali: AI in Education

In this episode, we sit down with Jayashri A. Bangali, a researcher and educator whose work explores the evolving role of artificial intelligence in education—both in India and around the world. We discuss how AI is transforming learning through personalization, interactivity, and accessibility—but also raise hard questions about bias, surveillance, dependence, and deskilling.

We dig into Jayashri’s recent research on AI integration in Indian schools and universities, including key findings from surveys of students and teachers across academic levels. We also explore global trends in AI adoption, potential regulatory safeguards, and how policymakers can ensure that AI enhances—not erodes—critical thinking and creativity.

This is a wide-ranging conversation on the future of learning, the risks of offloading too much to machines, and the kind of education worth fighting for in an AI-driven world.


#20 Bernardo Bolaños and Jorge Luis Morton: On Stoicism and Technology

In this episode, we speak with Bernardo Bolaños and Jorge Luis Morton, authors of On Singularity and the Stoics, about the rise of generative AI, the looming prospect of superintelligence, and how Stoic philosophy offers a framework for navigating it all. We explore Stoic principles like the dichotomy of control, cosmopolitanism, and living with wisdom as we face deepfakes, algorithmic manipulation, and the risk of superintelligent AI.


#19 Joshua Hatherley: When Your Doctor Uses AI—Should They Tell You?

In this episode, we speak with Dr. Joshua Hatherley, a bioethicist at the University of Copenhagen, about his recent article, “Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?”

Dr. Hatherley challenges what has become a widely accepted view in bioethics: that patients must always be informed when clinicians use medical AI systems in diagnosis or treatment planning. We explore his critiques of four central arguments for the “disclosure thesis”—including risk, rights, materiality, and autonomy—and discuss why, in some cases, mandatory disclosure might do more harm than good.


#18 Jeff Kane: Why Human Minds Are Not Computer Programs

Philosopher Jeff Kane joins us to discuss his new book The Emergence of Mind: Where Technology Ends and We Begin. In an age where AI writes poems, paints portraits, and mimics conversation, Kane argues that the human mind remains fundamentally different—not because of what it does, but because of what it is. We explore the moral risks of thinking of ourselves as machines, the embodied nature of thought, the deep structure of human values, and why lived experience—not information processing—grounds what it means to be human.



#17 Caroline Ashcroft: The Catastrophic Imagination

In this episode, we speak with Dr. Caroline Ashcroft, Lecturer in Politics at the University of Oxford and author of Catastrophic Technology in Cold War Political Thought. Drawing on figures like Arendt, Jonas, Ellul, and Marcuse, Ashcroft explores a powerful yet underexamined idea: that modern technology is not just risky or disruptive—but fundamentally catastrophic. We discuss how mid-century political theorists viewed technology as reshaping the environment, the self, and the world in ways that eroded human dignity, democratic life, and any sense of limits.

Get the book here.



#16 Teresa Baron: The Artificial Womb on Trial

Philosopher Teresa Baron joins us to discuss her book The Artificial Womb on Trial. As artificial womb technology edges closer to reality, Baron asks a different question: not just what ectogenesis means for society, but how we ethically get there. From human subject trials to questions of consent, regulation, and reproductive justice, this episode puts the development process itself under the bioethical microscope.

Links:
Teresa’s website
Teresa’s book



#15 Stephen Kosslyn: Learning to Flourish in the Age of AI

What does it mean to live well in an AI-driven world—and how can we use AI to help us get there?

In this episode, we speak with psychologist and neuroscientist Dr. Stephen Kosslyn, Professor Emeritus at Harvard University, former chair of the Harvard psychology department and dean of social sciences, and currently CEO of Active Learning Sciences. We discuss how generative AI isn’t just a tool for speed and convenience—it can be a cognitive amplifier for building the kind of life we actually want. Drawing from his book Learning to Flourish in the Age of AI, we explore how to use large language models to set life goals, stay motivated, communicate better, manage emotions, understand ourselves and others, and think more clearly.

Note: In our conversation, Professor Kosslyn mentions the prompts he used, as well as other resources, being available online. You can find those here.


#14 Alice Helliwell: The Art of Misalignment

What if the best AI art doesn’t care what we think? In this episode, we talk with philosopher Alice Helliwell about her provocative idea: that future AI might create aesthetic value not by mimicking human tastes, but by challenging them. Drawing from her 2024 article “Aesthetic Value and the AI Alignment Problem,” we explore why perfect alignment isn't always ideal—and how a little artistic misalignment could open new creative frontiers.
