
The Startup Different Podcast

By: David and Chris Sinkinson

About

SIGNAL AWARDS 2025 - BEST INDIE PODCAST - SILVER
COMMUNICATOR AWARDS 2025 - BUSINESS - EXCELLENCE
DAVEY AWARDS 2025 - PODCAST SERIES TALK SHOW - SILVER

Startup Different is what happens when two brothers who’ve built and sold startups start debating whether AI is taking over — or just overhyped. Brothers and entrepreneurs Dave and Chris bring humor, hard-earned experience, and a touch of chaos to a weekly breakdown of how tech is reshaping business, startups, and work. Smart, funny, and occasionally wrong — it’s the award-winning podcast for people who still like humans.

By David and Chris Sinkinson · Economics
Episodes
  • Your AI Model Just Became Illegal
    2026/04/07

    Starting June 2026, if your startup uses AI-generated people in advertising and doesn't label them, you could face thousands of dollars in fines.

    New York's new synthetic performer disclosure law - the first of its kind in the U.S. - requires advertisers to clearly disclose when AI-generated humans appear in their ads. California's AI Transparency Act follows in August with watermarking requirements and even steeper penalties. Most startups have no idea these laws exist, and the deadlines are weeks away.

    We break down exactly what's covered (and what isn't), the strategic implications for founders building marketing on a budget, and the surprising consumer sentiment that may make AI-generated content a liability rather than an asset. With Gartner data showing half of consumers prefer brands that don't use AI, the regulatory requirement to label AI content could backfire on companies that rely heavily on synthetic imagery - turning compliance into a trust signal that pushes customers away.

    Whether you're a DTC founder figuring out your next ad campaign, a marketer deciding between AI tools and real photo shoots, or an entrepreneur watching the regulatory landscape evolve, this episode delivers the practical playbook you need. The hosts draw on their own experience launching consumer products and connect the dots to their earlier coverage of California's AI regulation efforts - with a clear message: the time to audit your marketing assets is now, not after the first fine hits.

    24 min
  • Your Specs Are The New Bottleneck
    2026/03/31

    AI coding tools have made individual developers dramatically faster - so why aren't teams shipping dramatically faster products? This week, Agoda Engineering published a fascinating analysis of what they're calling "The Velocity Paradox," and it reveals an uncomfortable truth: the bottleneck in software development has shifted from writing code to writing specifications. If your requirements are vague, AI will just build the wrong thing at 10x speed.

    We dig into what this means for startup founders and engineering teams. We explore the three ways teams are working with AI - from careful line-by-line review to "vibe coding," where you trust the AI and hope for the best - and discuss why the engineer's role is evolving from "Implementer" to "Solution Architect." With examples like the creator of Claude Code landing 259 pull requests in a month without opening an IDE, the shift is already happening at the highest levels of the industry.

    For entrepreneurs building technical products, this episode delivers a critical insight: in the AI era, the quality of your specifications determines the quality of your product. Small teams that can align quickly on clear requirements will outperform larger teams generating mountains of unreviewed AI code. If you're hiring engineers, building a dev team, or just trying to ship faster - this conversation will change how you think about where the real work happens.

    28 min
  • AI Espionage - Who's Copying Who?
    2026/03/24

    In what reads like the plot of a tech thriller, Anthropic just revealed that three Chinese AI labs - DeepSeek, Moonshot AI, and MiniMax - created over 24,000 fake accounts and generated 16 million exchanges with their Claude model in an industrial-scale operation to steal its capabilities. The technique, known as distillation, involves training smaller models on the outputs of more powerful ones — and while it's a standard industry practice, doing it through fraudulent accounts to extract a competitor's intelligence crosses legal and ethical lines.

    We unpack what this AI espionage operation means for the industry, national security, and startup founders. We explore the uncomfortable hypocrisy at the heart of the story - AI companies that trained their models on the internet's copyrighted content are now outraged about their own outputs being copied - and debate whether the national security framing is a genuine concern or a convenient business strategy. With both Anthropic and OpenAI making accusations against Chinese labs, and export control debates heating up in Washington, this story sits at the intersection of technology, geopolitics, and competitive strategy.

    For entrepreneurs building AI products, this episode delivers a critical insight: your model isn't your moat. If the world's most advanced AI companies can't prevent their capabilities from being extracted, startups need to build competitive advantages that can't be distilled - proprietary data, customer relationships, and the speed to innovate faster than anyone can copy. It's a masterclass in why execution always beats IP in the long run.

    21 min
No reviews yet