Ultrascale Playbook - Pipeline Parallelism

Notes on training LLMs using pipeline parallelism.

October 25, 2025 · 14 min · 2918 words

PyData MCR talk on training LLMs

My talk on training LLMs at PyData MCR.

September 25, 2025 · 1 min · 145 words

Distributed communication for GPUs (part 2)

Introduction to collective communication operations used for distributed training.

September 13, 2025 · 13 min · 2567 words

Distributed communication for GPUs (part 1)

Introduction to distributed communication for GPUs.

September 9, 2025 · 11 min · 2146 words

Choosing a batch size and provider for LLM training

Notes on choosing an appropriate batch size and compute provider for training LLMs.

June 27, 2025 · 4 min · 751 words