Choosing a batch size and provider for LLM training

Notes on choosing an appropriate batch size and compute provider for training LLMs

June 27, 2025 · 4 min · 756 words

Ultra-scale Playbook - DeepSpeed ZeRO

Notes on training LLMs using sharding strategies

June 21, 2025 · 8 min · 1519 words

Ultra-scale Playbook - Data Parallelism

Notes on training LLMs using the data parallelism strategy

May 17, 2025 · 5 min · 940 words

Ultra-scale Playbook - Train on a single GPU

Notes on Ultra-scale Playbook - training an LLM on a single GPU

April 27, 2025 · 4 min · 797 words