Episodes

  • The Bottleneck Behind the Bottleneck
    2026/04/07

    If your AI implementation is delivering results, you should be looking for the cracks. Most leaders assume that if output is up and the team is keeping pace, the implementation is a success. They're wrong.

    In this episode, we diagnose why AI-driven acceleration is currently colliding with two layers of your organization that weren't built for speed: Authority and Governance.

    When a tool produces 500 outputs instead of 50, the informal "who says this is okay" process evaporates. You don't have a volume problem—you have an ownership problem. Meanwhile, boards are still governing budgets and strategies for a version of the organization that no longer exists.

    We break down:

    • Why "fixing the workflow" is just relocating the pressure instead of solving it.
    • The structural collision between execution speed and governance "brakes."
    • The hard questions you must ask about approval layers before the tool is even installed.

    AI won't break your organization. It will simply reveal the weaknesses that were already there.

    If you want to see the full video, you can watch it here:

    YouTube video: https://youtu.be/2Y8TMLni5fU

    Other relevant links:

    Substack: https://brightnonprofit.substack.com/
    Website: https://brightnonprofit.org

    4 min
  • "What Are We Doing About AI?" Is the Wrong Question.
    2026/03/31

    Many nonprofit leaders believe their AI challenges begin at the moment of implementation — choosing tools, preparing staff, or establishing policies. But most AI adoption failures start earlier than that.

    They begin with the first question leadership asks.

    When organizations respond to pressure by asking, "What are we doing about AI?", the conversation begins with urgency and an assumed solution. What is missing is the step that makes the decision defensible: naming the specific problem the technology is supposed to solve.

    This episode examines how pressure-driven conversations convert anxiety into visible activity — pilots, tools, and announcements — while skipping the diagnostic step that should come first. It also explores the governance implications of that sequence and why nonprofit organizations, operating under fiduciary responsibility, require a structured framing conversation before implementation.

    The most responsible AI decision does not begin with readiness frameworks or vendor comparisons. It begins with a more difficult question: what problem are we actually trying to solve, and what would change if we solved it?

    If you want to see the full video, you can watch it here:

    YouTube video: https://youtu.be/jKK4zMWURgU

    Other relevant links:

    Substack: https://brightnonprofit.substack.com/
    Website: https://brightnonprofit.org

    11 min
  • Why 92% of Nonprofits Using AI Don't See Results
    2026/03/24

    A recent benchmark report surveying hundreds of nonprofit organizations found that 92% are already using AI tools, yet only 7% report major strategic impact. The report describes this as an "AI readiness" gap and recommends stronger governance, clearer policies, and more structured workflows.

    In this episode, we take a closer look at that diagnosis. The data reveals real coordination and governance challenges, but it may still miss the deeper structural condition that determines whether AI produces meaningful results.

    For nonprofit leaders responsible for strategy, operations, and outcomes, the distinction matters. If readiness is defined incorrectly, organizations may build infrastructure that looks responsible but still fails to produce real capability.

    If you want to see the full video, you can watch it here:

    YouTube video: https://youtu.be/NXDP-2zyev4

    13 min
  • AI Didn't Move Authority. It Was Already Gone.
    2026/03/17

    Most organizations believe they already know who is responsible when AI is used: the person who used the tool. But that answer assumes something that often isn't true — that the authority underneath that responsibility is clearly defined.

    In practice, many nonprofits operate with informal decision structures. Authority settles into roles, trusted individuals, compressed processes, and software systems over time. The org chart stays the same, but the real decision rights slowly move somewhere else.

    This episode explores four patterns of authority drift that exist in most organizations long before AI arrives: position drift, trust drift, process drift, and tool drift. AI does not introduce these patterns — it accelerates them by removing the friction that once made them visible.

    The governance challenge, then, is not simply writing AI policies. It is making operational decision rights visible before AI embeds those informal structures into systems operating at scale.

    If you want to see the full video, you can watch it here:

    YouTube video: https://youtu.be/rpjqYXbm218

    Other relevant links:

    Substack: https://brightnonprofit.substack.com/
    Website: https://brightnonprofit.org

    15 min
  • AI Didn't Break It - It Was Already Broke
    2026/03/10

    Many nonprofits are adopting AI tools expecting efficiency gains. But when those gains fail to materialize, the problem often isn't the technology. It's the structure of the organization itself.

    In this episode, we examine three structural conditions that AI tends to expose: undesigned handoffs, ownership without authority, and hidden maintenance work. These are not new problems. They've existed quietly inside organizations for years. What AI changes is the speed and pressure at which those weaknesses surface.

    For executive directors, board members, and operations leaders, this is less about technology strategy and more about governance and systems design. AI doesn't just automate workflows — it reveals how work actually moves through your organization. The question is whether you'll see those fault lines before they become expensive.

    If you want to hear the full explanation delivered directly, you can watch the original video here:

    YouTube video: https://youtu.be/SDbgazetCYY
    Substack: https://brightnonprofit.substack.com/
    Website: https://brightnonprofit.org

    11 min
  • Nonprofits are Chasing the Wrong AI Efficiency
    2026/03/03

    Most nonprofits are working hard to become more efficient. AI makes that easier than ever. Drafts are faster. Analysis is instant. Throughput increases. But for many leaders, the promised relief never arrives.

    This episode examines why. It explores the structural shift that happens when execution speed accelerates but governance capacity does not. Efficiency is about rate. Capacity is about resilience — the ability to absorb variability, maintain oversight, and protect decision quality as volume increases.

    For executive directors, board members, and operations or development leaders, this conversation reframes the real constraint. If output can now scale rapidly, what must strengthen to prevent strain from quietly accumulating at the top of the organization?

    If you want to hear the full explanation delivered directly, you can watch the original video here:

    YouTube video: https://youtu.be/NdjBJgQsBjk

    ---
    Note: This podcast episode is an AI-generated conversation created by Bright Nonprofit. The source material is a real YouTube video featuring a real person, Steve Vick, speaking in his own words on the Bright Nonprofit YouTube channel. The AI format is used to reflect on and discuss that original video content. No new ideas, arguments, or claims are introduced beyond what appears in the original video.

    19 min
  • AI Readiness is a Governance Trap - And Most Nonprofits are Walking into It
    2026/02/24
    • Get the AI Readiness Memo: https://open.substack.com/pub/brightnonprofit/p/the-work-you-can-no-longer-see
    • Substack: https://brightnonprofit.substack.com/
    • iTunes: https://podcasts.apple.com/us/podcast/bright-nonprofit/id734475785
    • Spotify: https://open.spotify.com/show/6BtfqVBnNtA9eh5NK5wnQ6?si=c7fc98ed955e4742
    • Website: https://brightnonprofit.org

    "AI readiness" is often framed as a technology milestone — something to purchase, install, or train around. But in this episode, the focus shifts to a more uncomfortable question: can your governance structure remain accountable as organizational capacity increases?

    For executive directors, board members, and operations leaders, this conversation reframes readiness as a structural issue. It explores how data trust, process clarity, systems coherence, and governance boundaries determine whether AI increases effectiveness or simply accelerates fragility. The core tension is not about tools. It is about whether oversight can keep pace with velocity.

    This episode is particularly relevant for leaders responsible for outcomes, compliance, and long-term resilience. It clarifies what "good enough" readiness looks like and why waiting to prepare carries quiet but compounding risk.

    If you want to hear the full explanation delivered directly, you can watch the original video here:

    YouTube video: https://youtu.be/tuA4pYY7Ipg

    Note: This podcast episode is an AI-generated conversation created by Bright Nonprofit. The source material is a real YouTube video featuring a real person, Steve Vick, speaking in his own words on the Bright Nonprofit YouTube channel. The AI format is used to reflect on and discuss that original video content. No new ideas, arguments, or claims are introduced beyond what appears in the original video.

    18 min