
Talking Product

By: John Young & Collin Lyons

Summary

John Young & Collin Lyons explore all things related to building digital products and leading digital transformations. In every episode we give you actions that you can put into practice immediately to reduce risk, create more effective and efficient product development capabilities, and build a culture of continuous learning. Copyright 2023 All rights reserved. Categories: Management, Leadership, Economics
Episodes
  • Episode 14 – Who Is Asking the Questions About Digital Capability?
    2026/05/11

    In this episode of Talking Product, Collin and I discuss a tension we increasingly see inside organisations: digital capability is becoming an enterprise survival issue rather than a technology optimisation issue.

    Unlike some of our previous episodes, this is less a discussion about a single topic and more a conversation between two practitioners trying to reconcile lived experience with organisational inertia. We believe this tension is surfacing more visibly now because of the disruption caused by generative AI.

    Collin highlights an article I published on Substack at the end of last year. In it, I argued that the tolerance for organisational inefficiency and limited digital capability is likely to compress dramatically in the years ahead. Organisations that have invested time and energy into improving product delivery practices, architectures, infrastructure, data quality, and operating models are materially better positioned than those that have largely treated such initiatives as secondary concerns.

    This led me to a broader question: if executives and boards are not asking deeper questions about AI, data, software capability, and operating models, who is? My conclusion is that institutional investors need to engage far more directly with digital capability risk because many portfolios are likely carrying significantly more exposure than is currently recognised.

    We do not pretend to have definitive answers to many of these questions. In fact, part of the episode is an acknowledgement of just how difficult these topics are to navigate. But we once again argue for the importance of having these conversations inside organisations — because avoiding them does not reduce the risk. It simply delays the moment when those risks become visible.

    Lastly, we once again argue that getting Chip Huyen’s book AI Engineering and starting conversations inside your organisation about the subjects she raises will be very good value for money — and may save you a great deal of money and grief further down the road.

    Takeaways:

    • An exploration of why digital capability is an enterprise-level risk rather than a delegated IT concern.

    • A discussion about how generative AI accelerates the consequences of technical debt and organisational underinvestment.

    • Questions around whether executives, boards, and institutional investors are asking sufficiently deep questions about AI, data, and operating models.

    The Digital Laggard Thesis: https://johnyoung.substack.com/p/towards-an-investment-thesis-on-digital

    32 min
  • Episode 13: More questions to improve the return on your AI investments
    2026/03/27

    In this episode, Collin and I continue building our list of questions to help increase the chances of your AI investments delivering a return. We focus in particular on the role non-technical C-level leaders should play in this effort.

    We use Chip Huyen’s AI Engineering as a practical framework, exploring how it gives digital leaders and non-technical senior managers a structured way to engage more deeply with AI initiatives. In particular, it provides a way into the ecosystem and lifecycle of AI application development—helping leaders ask better questions around things like data, prompt design, fine-tuning, and evaluation—so they can have more meaningful discussions with technical teams about how these applications are built, where the risks sit, and what criteria should be used to define and measure success.

    We walk through some of the key differences between traditional software and AI systems—particularly the shift from deterministic to probabilistic behaviour, and the central role of data in shaping outcomes.

    From there, we build on the questions we believe leaders should be asking: What problem are we solving? How are we evaluating outputs? How are we managing risks around data quality, safety, and factual accuracy? What trade-offs are we making between quality, cost, and latency?

    We spend some time looking at Chip’s section on evaluation criteria, using it as a springboard for non-technical senior leaders to delve deeper into the thinking behind—and expected outcomes of—AI applications. We also introduce the concept of “evals”—ongoing evaluation frameworks that extend beyond traditional testing—and why they require continuous iteration, collaboration, and oversight, even after deployment.
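The idea of an eval as an ongoing, repeatable check rather than a one-off test can be sketched in a few lines. This is a minimal illustration only, not anything from the episode or Chip Huyen's book: it assumes a simple criterion-based scorer (here, checking whether an output mentions a set of required facts) run over a batch of model outputs, with a pass threshold that teams would revisit as the application evolves.

```python
# Minimal sketch of an "eval": a repeatable, criterion-based check run over
# model outputs, scored on a 0.0-1.0 scale rather than exact-match testing.
# All names and the scoring criterion here are illustrative assumptions.

def factual_accuracy_eval(output: str, required_facts: list[str]) -> float:
    """Score an output by the fraction of required facts it mentions."""
    hits = sum(1 for fact in required_facts if fact.lower() in output.lower())
    return hits / len(required_facts)

def run_evals(outputs: list[str], required_facts: list[str],
              threshold: float = 0.8) -> list[dict]:
    """Run the eval over a batch of outputs and flag failures for review."""
    results = []
    for output in outputs:
        score = factual_accuracy_eval(output, required_facts)
        results.append({"output": output,
                        "score": score,
                        "passed": score >= threshold})
    return results

if __name__ == "__main__":
    facts = ["Paris", "France"]
    outputs = ["The capital of France is Paris.",
               "The capital is Berlin."]
    for result in run_evals(outputs, facts):
        print(result["score"], result["passed"])
```

In practice the scorer would be far richer (human review, model-graded rubrics, safety checks), and, as discussed in the episode, the eval suite itself is iterated on continuously after deployment rather than written once.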

    This episode continues our exploration of how leaders can better understand what they are funding, engage more effectively with product and delivery teams, and create the conditions for AI investments to deliver real value.

    Links to Chip's book & interview on evals referred to in the episode:

    Chip Huyen’s AI Engineering: https://www.oreilly.com/library/view/ai-engineering/9781098166298/

    Lenny’s podcast - Why AI evals are the hottest new skill for product builders | Hamel Husain & Shreya Shankar: https://www.youtube.com/watch?v=BsWxPI9UM4c

    39 min
  • Episode 12 - Questions worth asking when your AI investments aren't showing returns
    2026/02/08

    A significant number of AI initiatives are consuming budget and executive attention, while failing to reach production or deliver measurable value. Episode 12 of Talking Product helps you pressure-test your organisation's AI spend today.

    In this episode, Collin and I use a Financial Times article as a springboard for a conversation about the state of AI initiatives in large organisations. The article—"AI's awfully exciting until companies want to use it"—captures a familiar pattern in technology: high expectations, significant investment, but limited impact. The article is freely available with FT registration.

    In this episode, we explore three themes:

    • Why pilots aren't scaling (often it's your data, not the technology)
    • Whether organisations are bringing experimental rigour to AI adoption, or just buying impressive demos
    • The leadership knowledge gap—understanding not just what AI can do, but what it can't

    What you'll get:

    • Questions that will help reveal whether your organisation is truly learning from its AI experiments or just spending
    • Insight into why data governance problems you've been kicking down the road are now becoming existential
    • What Gartner found about hidden costs in AI initiatives—and questions to ask to bring them out into the open
    • Practical guidance on what experimental rigour actually looks like in an AI context

    #AI #DigitalTransformation #Leadership #ProductManagement

    42 min
No reviews yet