Episodes

  • Unlocking AI Vector Databases with James Luan, Zilliz CPO | EP 130
    2026/03/27

    Subscribe to AI Agents Podcast Channel:
    https://link.jotform.com/subscribe-to-podcast

    In this episode of the AI Agents Podcast, host Demetri Panici sits down with James Luan from Zilliz to talk about how AI is already changing the day-to-day work of engineers. James explains why coding agents are taking over parts of his workflow, how vector databases became a core building block for modern AI systems, and why retrieval still matters even in a world obsessed with bigger models.

    They also get into the real mechanics behind RAG, hallucinations, MCP, long-term memory for agents, and the challenges of building production-grade AI systems that can search, reason, and scale reliably. If you want a practical conversation about where agent infrastructure is going and what engineers should actually pay attention to, this episode is worth watching.

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    ⏰ TIMESTAMPS:
    00:00 – AI is already taking parts of engineering work
    01:03 – James Luan’s background and first AI moments
    07:07 – Why Zilliz was built and how vector databases fit in
    16:58 – Long-term memory, agent search, and reasoning workflows
    21:37 – MCP, tooling limits, and real world production issues
    31:02 – Are coding agents already replacing parts of engineering?
    35:52 – AI for travel planning, presentations, and parallel work
    38:57 – NotebookLM, Gamma, and James’s favorite AI tools
    39:45 – Where to find James and Zilliz

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Sign up for free ➡️ https://www.jotform.com/

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Follow us on:
    Twitter ➡️ https://x.com/aiagentspodcast

    Instagram ➡️ https://www.instagram.com/aiagentspodcast

    TikTok ➡️ https://www.tiktok.com/@aiagentspodcast

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    41 min
  • AI Agents for Risk & Compliance with Dror Asaf KOVANT | EP 129
    2026/03/25

    Subscribe to AI Agents Podcast Channel: https://link.jotform.com/subscribe-to-podcast

    In this episode of the AI Agents Podcast, host Demetri Panici sits down with Dror Asaf, co-founder and CTO of Kovant, to talk about what it actually takes to bring agentic AI into enterprise operations. They get into why supply chain and operations are still surprisingly archaic, why trust and security are some of the biggest blockers to adoption, and how enterprise teams can use AI to remove bottlenecks without removing human decision-making.

    Dror also shares how Kovant is approaching enterprise AI differently, from on-prem deployments and Microsoft Teams integrations to strong governance and compliance layers. They also dig into the future of jobs, why AI will likely reshape work more than eliminate it, and why the people who know how to leverage these tools will have a massive advantage going forward.

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    ⏰ TIMESTAMPS:
    00:00 – AI, jobs, and why new skill sets will matter most
    01:03 – Dror Asaf’s background and why he started Kovant
    03:42 – Raising pre-seed funding and finding the right VC partner
    08:31 – Key milestones: first deployment, first failures, and building the team
    11:05 – How Kovant uses agents in enterprise operations and supply chain
    14:04 – What Dror learned about agent reliability, governance, and hallucinations
    23:58 – Kovant’s edge: security, on-prem deployment, and Microsoft Teams
    27:56 – The long-term vision for enterprise AI adoption
    33:16 – Will AI replace jobs or transform them?
    38:33 – Dror’s favorite AI tools: Claude Code, Co-work, and more

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Sign up for free ➡️ https://www.jotform.com/

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Follow us on:
    Twitter ➡️ https://x.com/aiagentspodcast

    Instagram ➡️ https://www.instagram.com/aiagentspodcast

    TikTok ➡️ https://www.tiktok.com/@aiagentspodcast

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    43 min
  • AI Agents in Legal Tech - David Wong Thomson Reuters on Responsible AI | EP 127
    2026/03/25

    Subscribe to AI Agents Podcast Channel: https://link.jotform.com/subscribe-to-podcast

    In this episode of the AI Agents Podcast, host Demetri Panici sits down with David Wong, Chief Product Officer at Thomson Reuters, to explore how AI agents are starting to reshape legal, tax, audit, and other professional services industries.

    They discuss how Thomson Reuters evolved far beyond its well-known news brand into a software and research powerhouse for lawyers, accountants, tax professionals, and risk teams — and why AI is now becoming one of the biggest technological shifts those industries have ever seen.

    David shares his journey from engineering, consulting, and ad systems into leading product at Thomson Reuters, along with how his team tested early GPT models on legal research years ago and watched the technology go from failing badly to becoming good enough for serious professional use cases.

    Demetri and David also break down how AI can help with legal research, tax preparation, compliance work, and document-heavy workflows, why tax is such a strong fit for AI systems, and how the future of professional services may involve smaller teams supported by highly capable AI agents.

    This episode is a must-watch for anyone interested in AI agents, legal tech, tax automation, enterprise workflows, and the future of knowledge work — especially if you want to understand how AI is beginning to transform industries built on expertise, research, and structured judgment.
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    ⏰ TIMESTAMPS:
    00:00 – Smaller teams powered by AI agents
    00:52 – Meet David Wong, Chief Product Officer at Thomson Reuters
    01:49 – David’s background and journey into product leadership
    03:24 – What Thomson Reuters actually does beyond news
    06:02 – Early exposure to AI and machine learning
    08:39 – Testing GPT-3 on legal research years before the boom
    10:20 – Where AI helps in legal research and written work
    13:55 – How AI applies to tax preparation and compliance
    24:00 – Teaching AI to use tools instead of “doing the math”
    33:48 – How AI could flatten team structures in professional services
    47:23 – Where to learn more about Thomson Reuters and CoCounsel

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Sign up for free ➡️ https://www.jotform.com/

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Follow us on:
    Twitter ➡️ https://x.com/aiagentspodcast

    Instagram ➡️ https://www.instagram.com/aiagentspodcast

    TikTok ➡️ https://www.tiktok.com/@aiagentspodcast

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    49 min
  • Building AI That Thinks Like a Human - Brian Raymond Unstructured on Agentic Software & Human-AI Collaboration | EP 128
    2026/03/17

    Subscribe to AI Agents Podcast Channel: https://link.jotform.com/subscribe-to-podcast

    In this episode of the AI Agents Podcast, host Demetri Panici sits down with Brian Raymond, founder and CEO of Unstructured, to break down one of the least flashy but most important layers in AI: data preparation. They dig into why so many AI prototypes still fail in practice, why RAG systems struggle with messy enterprise data, and how structured inputs like JSON, Markdown, and HTML can dramatically improve model performance.

    Brian explains how Unstructured helps enterprises turn raw files, scanned documents, audio, video, and other messy sources into AI-ready data for RAG pipelines and agent systems. The conversation covers why context quality matters so much, why tables and document layout are still hard for models, how vision-language models changed the game, and what it takes to move from AI prototype to production.

    They also get into where AI is heading in 2026: declining failure rates, more practical “bread and butter” use cases, better multi-agent systems, and why companies like Cursor and Lovable have succeeded by packaging great UX around powerful infrastructure. If you want a clearer view of what’s actually missing in enterprise AI stacks right now, this episode is a must-watch.

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    ⏰ TIMESTAMPS:
    00:00 – Why AI still feels exciting and broken at the same time
    01:03 – Brian Raymond’s background and the origin of Unstructured
    03:03 – What Unstructured does: turning raw data into AI-ready data
    06:06 – Why RAG exists and why structured data matters so much
    10:42 – The hard part: tables, layouts, scanned PDFs, and document parsing
    17:48 – Who needs this most: gen-AI teams vs data engineering teams
    20:20 – The industries moving fastest with enterprise AI
    26:28 – 2026 predictions: lower failure rates and stronger agent systems
    29:33 – Cursor, Lovable, enterprise UX, and where AI infrastructure is heading
    33:42 – AI jobs, junior engineers, and where real opportunity still exists
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Sign up for free ➡️ https://www.jotform.com/

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Follow us on:
    Twitter ➡️ https://x.com/aiagentspodcast

    Instagram ➡️ https://www.instagram.com/aiagentspodcast

    TikTok ➡️ https://www.tiktok.com/@aiagentspodcast

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    43 min
  • Claude Code: Rebuilding 5 Websites Without Writing Code | EP 126
    2026/03/09

    Subscribe to AI Agents Podcast Channel: https://link.jotform.com/subscribe-to-podcast

    In this episode of the AI Agents Podcast, host Demetri Panici sits down with Jotform CEO Aytekin Tank to explore Claude Code — a powerful AI development tool that lets you build, edit, and deploy entire applications simply by describing what you want.

    They walk through how Claude Code makes it possible to rebuild abandoned websites, deploy them with Vercel, and manage entire projects without writing a single line of code.

    By running multiple coding sessions in parallel, developers and non-developers alike can work on several products at once while AI handles the implementation.

    The conversation also dives into a live demo showing how Claude Code plans changes, modifies a codebase, creates a contact page using Jotform, and pushes updates to a project in real time.

    Along the way, they discuss how AI coding tools are changing the development workflow — from “vibe coding” and parallel agents to automated GitHub workflows that keep projects updated daily.

    More importantly, they explore the bigger shift happening in software: when AI handles most of the technical execution, the real bottleneck becomes ideas, product taste, and knowing what to build.

    If you're curious about the future of AI-assisted development, autonomous coding agents, and how tools like Claude Code could change the way products are built, this episode is a deep dive into what might be the next era of software creation.

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    ⏰ TIMESTAMPS:
    00:00 – Rebuilding multiple abandoned websites with Claude Code
    03:26 – Using Claude Code alongside OpenClaw infrastructure
    07:18 – Why ideas and product taste are becoming the new bottleneck
    16:08 – Watching Claude Code execute the development plan
    22:38 – Using GitHub to automate website updates
    31:11 – Running multiple AI coding agents simultaneously
    35:21 – Brainstorming and planning features with AI skills
    39:13 – Executing AI-generated development plans
    44:11 – Avoiding conflicts when editing projects in parallel
    48:02 – The “ChatGPT moment” for AI coding tools
    49:28 – Why 2026 feels like a new era for building software
    50:23 – Final thoughts on the future of AI-assisted development

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Sign up for free ➡️ https://www.jotform.com/

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Follow us on:
    Twitter ➡️ https://x.com/aiagentspodcast

    Instagram ➡️ https://www.instagram.com/aiagentspodcast

    TikTok ➡️ https://www.tiktok.com/@aiagentspodcast

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    51 min
  • OpenClaw: The AI That Controls Your Computer 24/7 | EP 125
    2026/03/03

    Subscribe to AI Agents Podcast Channel: https://link.jotform.com/subscribe-to-podcast

    In this episode of the AI Agents Podcast, hosts Aytekin Tank and Demetri Panici break down OpenClaw — a wildly powerful “AI that actually does things.”

    They compare it to tools like Claude Co-Work, show a live demo pulling G2 reviews and generating a report + Keynote deck, and talk about what it means when agents can run 24/7 via Telegram/Slack/WhatsApp.

    They also dig into the real tradeoff: OpenClaw’s insane capability (always-on, self-healing/heartbeat, autonomous computer control, agent teams) versus the security risks of giving an unvetted agent too much access.

    If you’re thinking about running agents on a dedicated machine (like a Mac mini), this episode is basically a “do this, not that” starter kit.

    This episode is a must-watch for anyone curious about autonomous agents, recurring workflows, and the early “AI employee” era — plus how to use these tools safely without nuking your accounts.

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    ⏰ TIMESTAMPS:
    00:00 – OpenClaw, always-on agents, and “it figured it out”
    00:54 – Why OpenClaw might be the craziest tool on the show
    02:15 – OpenClaw’s rise, name changes, and security concerns
    04:18 – OpenClaw vs Claude Co-Work (control + limits)
    07:39 – Live demo: pulling G2 reviews and generating insights
    12:08 – Making the report better + turning it into a Keynote deck
    18:02 – How to run OpenClaw via VM/Mac mini + messaging it remotely
    21:25 – Agent teams + “AI departments” doing recurring work
    32:14 – Safety checklist: what NOT to give agents access to
    37:39 – Agents making social networks (and weird agent behavior)
    46:52 – Final thoughts + using agents responsibly
    49:13 – Wrap-up

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Sign up for free ➡️ https://www.jotform.com/

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Follow us on:
    Twitter ➡️ https://x.com/aiagentspodcast

    Instagram ➡️ https://www.instagram.com/aiagentspodcast

    TikTok ➡️ https://www.tiktok.com/@aiagentspodcast

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    50 min
  • Vibe Working Is Here: Agent Teams, Claude Code & the Future of SaaS | EP 124
    2026/02/27

    Subscribe to AI Agents Podcast Channel: https://link.jotform.com/subscribe-to-podcast

    In this episode of the AI Agents Podcast, hosts Aytekin Tank and Demetri Panici explore the rise of “vibe working” and what it means for knowledge workers, founders, and SaaS companies in 2026.

    They break down how AI agents like Claude Code, Claude Co-Work, OpenClaw, GPT-5.2, and Gemini are transforming how work gets done—from parallel research agents and automated slide decks to building full internal dashboards with nothing but natural language prompts.

    The conversation dives into the concept of “agent stress,” running AI agents in parallel, and how teams are starting to manage digital workers the same way they manage human ones.

    They also discuss the viral “Something Big Is Happening” article and what Anthropic’s latest releases signal for the future of software, SaaS businesses, and market competition.

    Demetri shares real-world demos of building motion graphics engines, ad dashboards, research systems, and internal tools using Claude Code—without being a traditional engineer. They also unpack the economics behind AI pricing, token subsidies, and why now may be the best time to experiment while frontier models are heavily subsidized.

    This episode is a must-watch for founders, operators, developers, and ambitious knowledge workers who want to understand how AI agents, parallel workflows, and natural-language programming are reshaping productivity—and how to take advantage of it before the landscape changes.

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    ⏰ TIMESTAMPS:
    00:00 – Agent stress and keeping your AI busy
    00:50 – What is “vibe working”?
    02:18 – Parallel AI agents and agent teams explained
    06:11 – $85B wiped from software stocks in one day
    08:16 – “What software still matters?”
    13:16 – Why recent AI model improvements feel different
    15:55 – The $200 Claude plan and token economics
    19:14 – Competition: Claude vs GPT vs Gemini
    22:49 – Building dashboards and motion graphics with AI
    33:30 – OpenClaw, Claude Code, and workflow automation
    54:08 – How to start vibe working today

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Sign up for free ➡️ https://www.jotform.com/

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Follow us on:
    Twitter ➡️ https://x.com/aiagentspodcast

    Instagram ➡️ https://www.instagram.com/aiagentspodcast

    TikTok ➡️ https://www.tiktok.com/@aiagentspodcast

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    57 min
  • Building AI Red Flags - Max Eisendrath Makes Risk Management Smarter with Redflag AI | EP 123
    2026/02/24

    Subscribe to AI Agents Podcast Channel: https://link.jotform.com/subscribe-to-podcast

    In this episode of the AI Agents Podcast, host Demetri Panici sits down with Max Eisinger, Founder and CEO of Red Flag AI, to break down content protection in the age of AI, deepfakes, and large-scale digital piracy.

    They talk about how piracy has evolved from classic reuploads to live stream leaks, and why AI-generated content is making attribution and authenticity harder than ever. Max shares how Red Flag AI approaches detection at scale, why watermarking/fingerprinting and provenance tracking matter, and what platforms like YouTube are doing to respond.

    They also cover Red Flag’s upcoming “Shield” concept (designed to make training on protected content way more expensive), the arms race of filters/edits meant to evade detection, and why soon humans won’t reliably tell what’s real without advanced verification.

    This episode is a must-watch for creators, media teams, and AI builders who want a clear view of where content ownership, monetization recovery, and authenticity standards are headed.

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    ⏰ TIMESTAMPS:
    00:00 – Can humans still detect AI-generated content?
    00:40 – Max Eisinger’s background and founding Red Flag AI
    04:00 – The evolution of online piracy and live stream leaks
    06:02 – AI-generated content, deepfakes, and attribution challenges
    08:17 – Making enterprise-level protection accessible to creators
    10:39 – Watermarking, fingerprinting, and avoiding false positives
    13:35 – Red Flag Shield: protecting content from AI model training
    16:47 – Recovering lost revenue for creators
    18:26 – Shorts, edits, and the content arms race
    24:20 – Cross-platform protection and centralized control
    25:34 – The positives and risks of AI-generated media
    30:24 – Why humans can’t reliably detect AI anymore
    33:14 – Bottom-up provenance vs. top-down detection
    36:19 – The future of shared authenticity standards
    37:23 – AI tools inside software teams
    42:59 – Where to find Red Flag AI

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    Sign up for free ➡️ https://www.jotform.com/

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    Follow us on:

    Twitter ➡️ https://x.com/aiagentspodcast

    Instagram ➡️ https://www.instagram.com/aiagentspodcast

    TikTok ➡️ https://www.tiktok.com/@aiagentspodcast

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    44 min