#23 Sebastian Purcell: Rootedness, Not Happiness — Aztec Wisdom for a Slippery World
In this episode, we speak with philosopher Sebastian Purcell about his new book The Outward Path: The Wisdom of the Aztecs. Purcell shows that Aztec philosophy offers a strikingly different vision of the good life — one that rejects the modern obsession with happiness and invulnerability in favor of something deeper: rootedness.
We discuss what it means to live a rooted life in a world that feels increasingly unstable — from collective agency and humility to willpower, ritual, and the art of balance. Along the way, Purcell explains how Aztec ethics can help us rethink everything from self-discipline and courage to how we live with technology, social media, and each other.
Links:
Sebastian’s website
Sebastian’s articles on Medium
Sebastian’s book
#22 Iain Thomson: Why Heidegger Thought Technology Was More Dangerous Than We Realize
What if our deepest fears about AI aren't really about the machines at all—but about something we've forgotten about ourselves? In this episode, we speak with philosopher Iain D. Thomson (University of New Mexico), a leading scholar of Martin Heidegger, about his new book Heidegger on Technology’s Danger and Promise in the Age of AI.
Together we explore Heidegger’s famous claim that “the essence of technology is nothing technological,” and why today’s crises—from environmental collapse to algorithmic control—are really symptoms of a deeper existential and ontological predicament.
Also discussed:
– Why AI may be dangerous not because it’s too smart, but because we stop thinking
– Heidegger’s concept of “world-disclosive beings” and why ChatGPT doesn’t qualify
– How the technological mindset reshapes not just our tools but our selves
– What a “free relation” to technology might look like
– The creeping danger of lowering our standards and mistaking supplements for the real thing
#21 Jayashri Bangali: AI in Education
In this episode, we sit down with Jayashri A. Bangali, a researcher and educator whose work explores the evolving role of artificial intelligence in education—both in India and around the world. We discuss how AI is transforming learning through personalization, interactivity, and accessibility—but also raise hard questions about bias, surveillance, dependence, and deskilling.
We dig into Jayashri’s recent research on AI integration in Indian schools and universities, including key findings from surveys of students and teachers across academic levels. We also explore global trends in AI adoption, potential regulatory safeguards, and how policymakers can ensure that AI enhances—not erodes—critical thinking and creativity.
This is a wide-ranging conversation on the future of learning, the risks of offloading too much to machines, and the kind of education worth fighting for in an AI-driven world.
#20 Bernardo Bolaños and Jorge Luis Morton: On Stoicism and Technology
In this episode, we speak with Bernardo Bolaños and Jorge Luis Morton, authors of On Singularity and the Stoics, about the rise of generative AI, the looming prospect of superintelligence, and how Stoic philosophy offers a framework for navigating it all. We explore Stoic principles like the dichotomy of control, cosmopolitanism, and living with wisdom as we face deepfakes, algorithmic manipulation, and the risk of superintelligent AI.
#19 Joshua Hatherley: When Your Doctor Uses AI—Should They Tell You?
In this episode, we speak with Dr. Joshua Hatherley, a bioethicist at the University of Copenhagen, about his recent article, “Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?”
Dr. Hatherley challenges what has become a widely accepted view in bioethics: that patients must always be informed when clinicians use medical AI systems in diagnosis or treatment planning. We explore his critiques of four central arguments for the “disclosure thesis”—including risk, rights, materiality, and autonomy—and discuss why, in some cases, mandatory disclosure might do more harm than good.
#18 Jeff Kane: Why Human Minds Are Not Computer Programs
Philosopher Jeff Kane joins us to discuss his new book The Emergence of Mind: Where Technology Ends and We Begin. In an age where AI writes poems, paints portraits, and mimics conversation, Kane argues that the human mind remains fundamentally different—not because of what it does, but because of what it is. We explore the moral risks of thinking of ourselves as machines, the embodied nature of thought, the deep structure of human values, and why lived experience—not information processing—grounds what it means to be human.
#17 Caroline Ashcroft: The Catastrophic Imagination
In this episode, we speak with Dr. Caroline Ashcroft, Lecturer in Politics at the University of Oxford and author of Catastrophic Technology in Cold War Political Thought. Drawing on figures like Arendt, Jonas, Ellul, and Marcuse, Ashcroft explores a powerful yet underexamined idea: that modern technology is not just risky or disruptive—but fundamentally catastrophic. We discuss how mid-century political theorists viewed technology as reshaping the environment, the self, and the world in ways that eroded human dignity, democratic life, and any sense of limits.
Get the book here.
#16 Teresa Baron: The Artificial Womb on Trial
Philosopher Teresa Baron joins us to discuss her book The Artificial Womb on Trial. As artificial womb technology edges closer to reality, Baron asks a different question: not just what ectogenesis means for society, but how we ethically get there. From human subject trials to questions of consent, regulation, and reproductive justice, this episode puts the development process itself under the bioethical microscope.
Links:
Teresa’s website
Teresa’s book
#15 Stephen Kosslyn: Learning to Flourish in the Age of AI
What does it mean to live well in an AI-driven world—and how can we use AI to help us get there?
In this episode, we speak with psychologist and neuroscientist Dr. Stephen Kosslyn, Professor Emeritus at Harvard University, former chair of the Harvard psychology department, and former dean of social sciences. He is currently the CEO of Active Learning Sciences. We discuss how generative AI isn’t just a tool for speed and convenience—it can be a cognitive amplifier for building the kind of life we actually want. Drawing from his book Learning to Flourish in the Age of AI, we explore how to use large language models to set life goals, stay motivated, communicate better, manage emotions, understand ourselves and others, and think more clearly.
Note: In our conversation, Professor Kosslyn mentions the prompts he used, as well as other resources, being available online. You can find those here.
#14 Alice Helliwell: The Art of Misalignment
What if the best AI art doesn’t care what we think? In this episode, we talk with philosopher Alice Helliwell about her provocative idea: that future AI might create aesthetic value not by mimicking human tastes, but by challenging them. Drawing from her 2024 article “Aesthetic Value and the AI Alignment Problem,” we explore why perfect alignment isn't always ideal—and how a little artistic misalignment could open new creative frontiers.
#13 Marianna Capasso: Manipulation as Digital Invasion
Can a simple design tweak undermine your freedom? In this episode, we speak with Dr. Marianna Capasso, a postdoctoral researcher at Utrecht University, about her 2022 book chapter “Manipulation as Digital Invasion: A Neo-Republican Approach,” featured in The Philosophy of Online Manipulation (Routledge).
Drawing on a neo-republican conception of freedom, Dr. Capasso analyzes the ethical status of digital nudges—subtle, non-intrusive design elements in digital interfaces that gently guide users toward specific actions or decisions—and explores when they cross the line into wrongful manipulation. We dive into key concepts like domination, user control, algorithmic bias, and what it truly means to be free in a digital world.
For more info, visit ethicscircle.org.
#12 Elyakim Kislev: Relationships 5.0
Elyakim Kislev is a senior lecturer in the School of Public Policy and Governance at the Hebrew University, where he specializes in relationships, technology, loneliness, and singles studies. Today we’ll be discussing his book Relationships 5.0: How AI, VR, and Robots Will Reshape Our Emotional Lives.
Some of the topics we discuss are the effect of technology on relationships throughout human history, the potential for meeting human relational needs through technology, and the challenge that emerging technologies pose to our existing forms of moral education—among many other topics. We hope you enjoy the conversation as much as we did.
For more info on the show, please visit ethicscircle.org.
#11 Kelly McDonough: Indigenous Science and Technology
Kelly McDonough is an Associate Professor at the University of Texas at Austin. We'll be discussing her new book Indigenous Science and Technology: Nahuas and the World around Them (2024). This work of Nahua intellectual history examines how Nahuas have explored, understood, and explained the world across pre-invasion, colonial, and contemporary eras.
Some of the topics we discuss are competing conceptions of science and technology, whether Western science is the only “real” science, Nahua science and technology, and the Nahua focus on balance and interrelatedness—among many other topics. We hope you enjoy the conversation as much as we did.
For more info on the show, please visit ethicscircle.org.
#10 Sara Migliorini: Biometric Harm
Sara Migliorini is Assistant Professor of Law at the University of Macau, specializing in international law, AI, and big data. We'll be discussing her 2023 article "Biometric Harm," which examines how the use of biometric identification—identifying people by their bodily or behavioral features—can cause significant harm to both individuals and society.
Some of the topics we discuss are the different technologies used for biometric identification, the human need for unobserved time, the right to control our informational narrative, and laws that might protect us from biometric harm—among many other topics. We hope you enjoy the conversation as much as we did.
For more info on the show, please visit ethicscircle.org.
#9 Amy Kind: Biometrics and the Metaphysics of Personal Identity
Amy Kind is a professor of philosophy at Claremont McKenna College and a leading figure in the philosophy of mind, with a focus on imagination and consciousness. Her books include Imagination and Creative Thinking and Persons and Personal Identity. Today, we’ll explore her recent article, “Biometrics and the Metaphysics of Personal Identity.”
Some of the topics we discuss are the metaphysics of personal identity and the question of whether biometric technology actually tracks personal identity. We hope you enjoy the conversation as much as we did.
For more info on the show, please visit ethicscircle.org.
#8 Muriel Leuenberger: Track Thyself(?)
Muriel Leuenberger is a postdoctoral researcher in the Digital Society Initiative and the Department of Philosophy at the University of Zurich. Her research interests include the ethics of technology and AI, medical ethics (neuro-ethics in particular), philosophy of mind, meaning in life, and the philosophy of identity, authenticity, and genealogy. Today we will be discussing her articles “Technology, Personal Information, and Identity” and “Track Thyself? The Value and Ethics of Self-knowledge Through Technology”—both published in 2024.
Some of the topics we discuss are the different types of personal information technology, narrative identity theory, and the effects that personal information technology can have on our personal identity (positive, negative, and ambiguous)—among many other topics. We hope you enjoy the conversation as much as we did.
For more info on the show, please visit ethicscircle.org.
#7 Peter Hershock: Buddhism and Intelligent Technology
Peter Hershock is Manager of the Asian Studies Development Program (ASDP) at the East-West Center in Honolulu, Hawai'i. Most recently, he has helped launch the East-West Center’s initiative on Humane Artificial Intelligence, with a focus on the societal impacts and ethical issues raised by emerging technologies. Today we will be discussing his book Buddhism and Intelligent Technology: Toward A More Humane Future, published in 2021.
Some of the topics we discuss are the types of attention that humans have, the effect of the attention economy on our attention (through a Buddhist lens), the problems with digital hedonism as well as with digital asceticism, and how to reclaim our attention in our day and age—among many other topics. We hope you enjoy the conversation as much as we did.
For more info on the show, please visit ethicscircle.org.
#6 Thomas Nys & Bart Engelen: Manipulative Online Environments
Thomas Nys is in the Faculty of Humanities at the University of Amsterdam. Bart Engelen is an associate professor at Tilburg University, also in the Netherlands. Together, they have co-authored a number of essays. Today we will be discussing their chapter “Commercial Online Choice Architecture: When Roads Are Paved With Bad Intentions,” from The Philosophy of Online Manipulation (Routledge, 2022).
Some of the topics we discuss are commercial online choice architecture (for which they use the acronym COCA), whether COCAs can be said to be manipulative, different conceptions of what manipulation is, how COCAs can undermine our autonomy, and what is at stake when our autonomy is eroded by web-based commercial interests—among many other topics. We hope you enjoy the conversation as much as we did.
#5 Giovanni Rubeis: Liquid Health
Giovanni Rubeis is a professor and head of the Department of Biomedical Ethics and Healthcare Ethics at the Karl Landsteiner Private University in Vienna. He has also worked as an ethics consultant for various biotech companies and is the author of the recently published Ethics of Medical AI. Today we are chatting with Giovanni about his article on liquid health.
Some of the topics we discuss are the notion of liquification, the concept of surveillance capitalism, and the perils of liquid surveillance in healthcare—among many other topics.
#4 Giovanni Rubeis: Ethics of Medical AI
Giovanni Rubeis is a professor and head of the Department of Biomedical Ethics and Healthcare Ethics at the Karl Landsteiner Private University in Vienna. He has also worked as an ethics consultant for various biotech companies and is the author of Ethics of Medical AI.
Some of the topics we discuss are the history of AI in healthcare, past failures of medical AI (such as IBM’s Watson Health), the prospect of having digital twins to enable better healthcare strategies, and what we lose when we think only in terms of measurable data—among many other topics.