Does AI Make War More Likely? We're About To Find Out
Summary
On January 9, 2026, the US Secretary of Defense signed a memorandum titled Artificial Intelligence Strategy for the Department of War.
Six weeks later, the US was at war with Iran and AI was identifying targets.
Mark and Jeremy read the memo line by line. What they found: a strategy built on speed over safety, experimentation over caution, and the explicit statement that "the risks of not moving fast enough outweigh the risks of imperfect alignment." The memo outlines swarm warfare, AI-generated military intelligence, 30-day deadlines for federating classified data across all departments, and a talent war with Silicon Valley.
Anthropic, the company that asked for safeguards against mass surveillance and full automation of the kill chain, was classified as a supply chain risk.
This episode asks one question: does AI make war more likely or less likely?
--
🎧 Listen to every podcast
📺 Follow us on Instagram
🏠 Follow us on X
🏠 Follow Jeremy on LinkedIn
To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz
--
Chapters
(00:00) Artificial Intelligence Strategy for the Department of War
(00:58) Executive Order 14179: America's AI Military Dominance
(01:59) China And The AI Arms Race
(04:36) Anthropic & Eliminating Bureaucratic Barriers
(07:20) The 7 Pace Setting Projects (PSPs) In The Memo
(08:28) 100% LLM Kill Chain Capability
(10:22) Palmer Luckey
(11:53) Intelligence & The AI Open Arsenal
(13:57) The War Time Approach To Blockers
(16:46) AI Talent Acquisition At The DoW
(18:54) We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment