
The AI Fundamentalists

By: Dr. Andrew Clark & Dr. Sid Mangalik

About

A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.

© 2026 The AI Fundamentalists
Politics & Government · Economics
Episodes
  • AI and the lost art of reading
    2026/03/03

    As information sources have become abundant and attention spans have shortened in the age of AI, we take on the lost art of reading. Join us to explore why reading rates are falling, how that shift affects judgment and opportunity, and how interdisciplinary books help us see patterns across history, economics, and technology.

    To help us, Alisa Rusanoff, CEO of Eltech AI, joins us to share her perspective on reading, debate volume versus depth, and offer practical ways to reclaim attention and read with intention.

    • Evidence on declining reading rates among adults, teens and children
    • Noise versus signal in the attention economy
    • Mental models and interdisciplinary synthesis for better decisions
    • AI’s limits and why human integration still matters
    • Cycles in debt, trade, demography, and geopolitics
    • Fiction as a cultural sensor for lived experience
    • Wealth gaps, polarization and the need for critical thinking
    • Practical habits to train feeds and protect reading time
    • Challenge to read, reflect, and apply insights

For people wondering whether they're reading enough:

    • Reading just 1 book a year puts you in the top 60% of readers
    • Reading 4 books a year puts you in the top 50%
    • Reading 10 books a year puts you in the top 20%
    • Reading at least 50 books a year puts you in the top 5%

    This episode is full of research and fun connections that are sure to make you think positively about your commitment to reading. As of this episode, it's not too late to join the top 20% in 2026!

    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
    46 min
  • Metaphysics and modern AI: What is causality?
    2026/01/27

    In this episode of our series about Metaphysics and modern AI, we break causality down to first principles and explain how to tell genuine causal mechanisms from convincing correlations. From gold-standard randomized controlled trials (RCTs) to natural experiments and counterfactuals, we map the tools that build trustworthy models and safer AI.

    • Defining causes, effects, and common causal structures
    • Gestalt theory: Why correlation misleads and how pattern-seeking tricks us
    • Statistical association vs causal explanation
    • RCTs and why randomization matters (see the sketch after this list)
    • Natural experiments as ethical, scalable alternatives
    • Judea Pearl’s do-calculus, counterfactuals, and first-principles models
    • Limits of causality, sample size, and inference
    • Building resilient AI with causal grounding and governance
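
    To make the association-versus-causation distinction concrete, here is a minimal sketch (an editorial illustration, not material from the episode): a hidden confounder makes a treatment with no real effect look effective in observational data, while randomized assignment, as in an RCT, recovers the true null effect. The variable names, the linear outcome model, and the effect sizes are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hidden confounder: baseline health drives both treatment uptake and outcome.
    health = rng.normal(size=n)

    # Observational setting: healthier people are more likely to take the treatment,
    # but the treatment itself has NO causal effect on the outcome.
    took_treatment = (health + rng.normal(size=n)) > 0
    outcome = 2.0 * health + rng.normal(size=n)

    naive = outcome[took_treatment].mean() - outcome[~took_treatment].mean()
    print(f"Observational (confounded) estimate: {naive:+.2f}")  # large and spurious

    # RCT setting: random assignment breaks the health -> treatment link.
    assigned = rng.random(n) < 0.5
    rct = outcome[assigned].mean() - outcome[~assigned].mean()
    print(f"Randomized (RCT) estimate: {rct:+.2f}")  # close to zero
    ```

    Randomization makes the treated and untreated groups exchangeable on average, so any remaining difference in outcomes is attributable to the treatment itself rather than to who chose it.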

    This is the fourth episode in our metaphysics series. Each topic in the series builds toward the fundamental question, "Should AI try to think?"

    Check out previous episodes:

    • Series Intro
    • What is reality?
    • What is space and time?

    If conversations like this sharpen your curiosity and help you think more clearly about complex systems, then step away from your keyboard and enjoy this journey with us.

    36 min
  • Why validity beats scale when building multi‑step AI systems
    2026/01/06

    In this episode, Dr. Sebastian (Seb) Benthall joins us to discuss his and Andrew's paper, “Validity Is What You Need,” on building agentic AI that actually works in the real world.

    Our discussion connects systems engineering, mechanism design, and requirements engineering to multi‑step AI that delivers measurable enterprise outcomes.

    • Defining agentic AI beyond LLM hype
    • Limits of scale and the need for multi‑step control
    • Tool use, compounding errors, and guardrails (see the sketch after this list)
    • Systems engineering patterns for AI reliability
    • Principal–agent framing for governance
    • Mechanism design for multi‑stakeholder alignment
    • Requirements engineering as the crux of validity
    • Hybrid stacks: LLM interface, deterministic solvers
    • Regression testing through model swaps and drift
    • Moving from universal copilots to fit‑for‑purpose agents
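
    A quick back-of-the-envelope sketch of why errors compound in multi-step agents (an editorial illustration; the step counts and per-step success rates are invented, not taken from the paper): if each step succeeds independently with probability p, an n-step task succeeds end-to-end with probability p^n, which decays quickly as tasks get longer.

    ```python
    # End-to-end reliability of an n-step agent pipeline when each step
    # succeeds independently with probability p: P(success) = p ** n.
    def pipeline_success(p: float, n: int) -> float:
        return p ** n

    for p in (0.99, 0.95, 0.90):
        row = ", ".join(f"{n} steps: {pipeline_success(p, n):.1%}" for n in (5, 10, 20))
        print(f"per-step {p:.0%} -> {row}")
    ```

    Even a 95%-reliable step leaves a 20-step pipeline succeeding barely a third of the time, which is why per-step guardrails, validation, and deterministic components matter more than raw model scale.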

    You can also catch more of Seb's research on our podcast. Tune in to Contextual integrity and differential privacy: Theory versus application.

    40 min
No reviews yet