Your AI Agent Is Running As Root
Overview
When you fire up Claude Code, Cursor, or any AI coding agent, it launches with your full system permissions: your SSH keys, your cloud credentials, your browser passwords, every file on your machine. Most developers never think twice about it.
Luke Hinds did. And then he built something about it.
Luke is the creator of Sigstore, the cryptographic signing infrastructure now used by PyPI, Homebrew, GitHub, and Google as the industry standard for software supply chain security. In this episode, he joins Chris to talk about why he's watching the industry make the exact same mistake it made a decade ago, and what he built to try to stop it.
We cover the full picture:
- Why application-layer guardrails and system prompts fundamentally fail as security boundaries for AI agents, and what kernel-level enforcement actually means
- The .md file as an emerging control-plane attack surface
- The OpenClaw wake-up call, and what the skills-marketplace ecosystem gets structurally wrong about trust and provenance
- The approval fatigue problem, and Anthropic's 17% false-negative rate on Claude Code's auto-mode classifier
- Extending the SLSA and Sigstore attestation frameworks to AI-generated code
- Why LLM-as-a-judge may not be the silver bullet many are hoping for
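The guardrail problem is easy to demonstrate. Here's a toy sketch (purely illustrative; not nono's implementation, and not any real agent's actual guardrail) of why string-level checks at the application layer make a poor security boundary: the same destructive action can be rephrased so the checker never sees the dangerous token, whereas kernel-level enforcement intercepts the resulting syscalls no matter how the command was spelled.

```python
import base64
import shlex

# A naive application-layer "guardrail": block any shell command
# whose first token is on a blocklist.
BLOCKLIST = {"rm", "curl", "ssh"}

def prompt_guardrail(cmd: str) -> bool:
    """Return True if the command is allowed to run."""
    return shlex.split(cmd)[0] not in BLOCKLIST

# The direct attempt is caught...
assert prompt_guardrail("rm -rf ~") is False

# ...but trivial indirection slips through: the string the guardrail
# inspects never contains "rm", yet the same destructive syscalls
# would fire once the shell decodes and executes the payload.
payload = base64.b64encode(b"rm -rf ~").decode()
wrapped = f"echo {payload} | base64 -d | sh"
assert prompt_guardrail(wrapped) is True
```

A kernel-level policy doesn't have this blind spot: it doesn't care what the command string looked like, only which operations the process actually attempts (e.g., unlinking files under the home directory), which is the distinction Luke draws in the episode.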
Luke also makes a broader argument about where this is all heading — volumes of AI-generated code growing faster than human capacity to review it, junior engineers being priced out of the industry, and an aging cohort of engineers who can actually read and reason about code at depth. It's a candid, technically grounded conversation from someone who's been in open source security for 20+ years and has seen this movie before.
nono is at nono.sh: one line to install, one line to run. No excuse not to try it.