エピソード

  • Is ChatGPT lying about you? | Beneath the Surface Ep.1
    2026/04/21

    Why do LLMs 'lie'? And what can you do about it?

    ChatGPT told the world that Brian Hood was a convicted criminal. He wasn't. He was the whistleblower.

    Brian Hood spent years trying to expose a bribery scandal at a subsidiary of Australia's Reserve Bank. He raised it with the board. He raised it with the deputy governor. He was made redundant and escorted to the front gate. When prosecutions finally came, eleven people were found guilty. Brian was the key witness.

    Then, in 2023, ChatGPT wrote his story – and got the ending completely wrong. It said he'd been charged, found guilty, and sentenced to 30 months in jail. Brian nearly became the first person in the world to sue OpenAI for defamation. In this episode, he tells us what happened next.

    But Brian's story is really a window into a much bigger question: how do large language models actually decide what to say? These platforms don't retrieve facts the way a search engine does. They predict what sounds right – and when the pattern fits but the fact doesn't, you get confident fiction.

    We break down the mechanics in plain language: what tokens are, how pattern matching works, why a system trained on the entire internet can still fabricate a criminal conviction, and what the term "hallucination" actually gets wrong.

    Then we go deeper. If these systems can get things wrong by accident, what happens when people try to make them get things wrong on purpose? We speak to a senior figure from GCHQ's National Cyber Security Centre about the state-backed groups and organised criminals actively trying to poison what AI platforms tell you. He explains how little data it takes to contaminate a model's training, why the long tail problem from SEO is even more dangerous in the age of generative AI, and what the emerging discipline of defence actually looks like.

    If you work in communications, reputation, or GEO – generative engine optimisation – this is the foundation. You can't influence what these platforms say about your brand until you understand how they work, where they break, and who else is trying to shape the answers.

    #GEO #GenerativeEngineOptimisation #ChatGPT #AISearch #Reputation #LLM #Hallucination #Communications #MarComms

    0:00 What happens when ChatGPT lies about you?
    3:00 Can ChatGPT mix up facts and fiction?
    7:00 Can you sue ChatGPT for defamation?
    10:00 How do you get ChatGPT to correct false information?
    12:00 What is generative engine optimisation?
    16:00 How does ChatGPT generate its answers?
    17:00 Why does ChatGPT make things up?
    19:00 Can people manipulate what ChatGPT says?
    23:00 How easy is it to poison AI training data?
    27:00 How is GEO different from SEO?
    29:00 How does ChatGPT decide what to say?
    31:00 What is AI hallucination?
    32:00 How do you stop AI from being manipulated?

    Beneath the Surface is a podcast from Hard Numbers, hosted by Paul Stollery. New episodes monthly.

    続きを読む 一部表示
    38 分