
Beneath the Surface

By: Paul Stollery, Hard Numbers

Overview

Beneath the Surface is a podcast about generative engine optimisation, AI search and reputation in the age of the algorithm.


We’ve grown used to algorithms deciding what we see. Since Google and the days of the eight blue links, information has been filtered, ranked and prioritised for us. But there was still a sense of where it came from. You could scroll. You could check the sources. You could look beneath the surface. But that’s changed.


This series explores what’s happening underneath AI-generated answers: what is shaping them, who is shaping them, and why. From misinformation and disinformation to corporate influence, reputation risk and the hidden sources behind tools like ChatGPT, Gemini and Claude, each episode looks at how generative search is changing visibility, trust and truth online.

For marketers, communicators and decision-makers, Beneath the Surface helps make sense of the new information landscape – and how to shape reputation in the age of the algorithm.

© 2026 Beneath the Surface
Episodes
  • Is ChatGPT lying about you? | Beneath the Surface Ep.1
    2026/04/21

    Why do LLMs 'lie'? And what can you do about it?

    ChatGPT told the world that Brian Hood was a convicted criminal. He wasn't. He was the whistleblower.

    Brian Hood spent years trying to expose a bribery scandal at a subsidiary of Australia's Reserve Bank. He raised it with the board. He raised it with the deputy governor. He was made redundant and escorted to the front gate. When prosecutions finally came, eleven people were found guilty. Brian was the key witness.

    Then, in 2023, ChatGPT wrote his story – and got the ending completely wrong. It said he'd been charged, found guilty, and sentenced to 30 months in jail. Brian nearly became the first person in the world to sue OpenAI for defamation. In this episode, he tells us what happened next.

    But Brian's story is really a window into a much bigger question: how do large language models actually decide what to say? These platforms don't retrieve facts the way a search engine does. They predict what sounds right – and when the pattern fits but the fact doesn't, you get confident fiction.

    We break down the mechanics in plain language: what tokens are, how pattern matching works, why a system trained on the entire internet can still fabricate a criminal conviction, and what the term "hallucination" actually gets wrong.
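The "predict what sounds right" idea above can be sketched with a toy model. This is a minimal illustration, not how any production LLM actually works: a simple bigram counter trained on a tiny hypothetical corpus will happily continue "charged" toward "guilty" for anyone, whistleblower or not, because it matches patterns rather than retrieving facts.

```python
from collections import Counter, defaultdict

# Toy corpus in which being "charged" is always followed by "found guilty".
corpus = (
    "the executive was charged and found guilty "
    "the director was charged and found guilty "
    "the whistleblower was charged and found guilty"
).split()

# Count bigrams: for each word, tally which words follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

# The model continues the familiar pattern regardless of the actual facts:
print(predict("charged"))  # → "and"
print(predict("found"))    # → "guilty"
```

A real LLM predicts over tokens with a neural network rather than raw bigram counts, but the failure mode is the same in kind: when the pattern fits and the fact doesn't, the output is confident fiction.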

    Then we go deeper. If these systems can get things wrong by accident, what happens when people try to make them get things wrong on purpose? We speak to a senior figure from GCHQ's National Cyber Security Centre about the state-backed groups and organised criminals actively trying to poison what AI platforms tell you. He explains how little data it takes to contaminate a model's training, why the long tail problem from SEO is even more dangerous in the age of generative AI, and what the emerging discipline of defence actually looks like.

    If you work in communications, reputation, or GEO – generative engine optimisation – this is the foundation. You can't influence what these platforms say about your brand until you understand how they work, where they break, and who else is trying to shape the answers.

    #GEO #GenerativeEngineOptimisation #ChatGPT #AISearch #Reputation #LLM #Hallucination #Communications #MarComms

    0:00 What happens when ChatGPT lies about you?
    3:00 Can ChatGPT mix up facts and fiction?
    7:00 Can you sue ChatGPT for defamation?
    10:00 How do you get ChatGPT to correct false information?
    12:00 What is generative engine optimisation?
    16:00 How does ChatGPT generate its answers?
    17:00 Why does ChatGPT make things up?
    19:00 Can people manipulate what ChatGPT says?
    23:00 How easy is it to poison AI training data?
    27:00 How is GEO different from SEO?
    29:00 How does ChatGPT decide what to say?
    31:00 What is AI hallucination?
    32:00 How do you stop AI from being manipulated?

    Beneath the Surface is a podcast from Hard Numbers, hosted by Paul Stollery. New episodes monthly.

38 min
No reviews yet