Distributed communication for GPUs (part 2)

Introduction to collective communication operations used for distributed training.

September 13, 2025 · 13 min · 2567 words

Distributed communication for GPUs (part 1)

Introduction to distributed communication for GPUs.

September 9, 2025 · 11 min · 2146 words

Choosing a batch size and provider for LLM training

Notes on choosing an appropriate batch size and compute provider for training LLMs.

June 27, 2025 · 4 min · 756 words

Ultra-scale Playbook - ZeRO Sharding

Notes on training LLMs using sharding strategies.

June 21, 2025 · 8 min · 1518 words

Ultra-scale Playbook - Data Parallelism

Notes on training LLMs using the data parallelism strategy.

May 17, 2025 · 5 min · 945 words