Week of 22 April 2026
The inaugural briefing. Narrower than most future editions will be: one interview dominated the week's signal, and most of what follows threads back to it.
Item 01: Jensen Huang on Dwarkesh, 15 April
Nvidia's CEO sat for the most comprehensive public defence of the CUDA moat since the hyperscaler-silicon pivot began. The quiet concession in the interview — that Nvidia missed Anthropic at the founding round because it was not yet in the multi-billion-dollar equity-check class — is a rare public regret from a Fortune 50 CEO. The broader takeaway is that Jensen's argument is no longer about chips alone; it is about the full stack of chips plus systems plus kernels plus libraries, and whether that whole bundle can be cloned by anyone not named TSMC.
The glossary is the companion piece for anyone navigating the interview's terminology.
Item 02: Anthropic's Mythos Preview
A capability preview claiming thousands of high-severity zero-day discoveries across every major operating system, including a 27-year-old OpenBSD bug in a codebase explicitly hardened against this class of exploit. The export-control argument in 2026 now hinges on whether Mythos-class capability in adversarial hands constitutes catastrophic near-term risk. For Indian policy readers: there is no equivalent sovereign capability, and the domestic frontier gap is now measurable in zero-days found rather than benchmarks beaten.
Item 03: "7nm chips are essentially Hopper"
Jensen's framing inverts the China-is-compute-starved consensus. If SMIC's 7nm domestic process approximates the 2022 Nvidia generation at scale, the binding constraint on Chinese AI is not capability; it is energy abundance, which they have. The export-control playbook reads differently if you take this premise seriously. It also changes what Indian AI policy should copy: strictly, nothing that assumes the adversary cannot build its own compute.
Item 04: Energy as the one bottleneck that worries the CEO
Jensen named grid interconnection queues and transformer lead times as the constraint that takes years rather than months to resolve. This is layer zero of the nine-layer stack made explicit. For India: the National Electricity Plan and the states' transmission-tariff policies are AI-infrastructure policy in disguise, even though nobody is writing them that way. Whoever connects the dots first between the IndiaAI Mission compute buildout and the fifteen-year grid plan will set the framing for the decade.
Item 05: Post-training RL as the growth slice
Jensen, direct quote: "that entire area is just exploding." verl and NeMo RL were named on the main stage. Reinforcement learning on verifiable tasks (code, mathematics, tool use) is where capability is advancing fastest and where compute demand is growing fastest. None of this is yet reflected in the Indian AI funding thesis, which remains pre-training-pilled. Indic-language post-training on verifiable tasks is an underexplored wedge; whoever does it first at quality will be the Sarvam or Krutrim of the next cycle.