AI compute
AI chips, inference, post-training, and the stack underneath.
A topic hub for the physical and software layers behind AI: power, silicon, HBM, GPUs, frameworks, model weights, inference serving, and agents.
The AI stack
From electrons to tokens, layer by layer.
Decision guide: AI chip buyer's guide
H100, H200, B200, GB300, TPU, Trainium, MI300X, Huawei 910C, and Groq by workload.
Training pipeline: The RL post-training map
SFT, RLHF, DPO, PPO, GRPO, and RLVR in one pipeline.
Reference: AI, term by term
CUDA, CoWoS, HBM, vLLM, SGLang, MoE, and the vocabulary around AI infrastructure.
Essay: Nine layers between electrons and tokens
The compute stack as a dependency chain, with failure modes exposed.