Distributed communication for GPUs (part 1)
Introduction to distributed communication for GPUs.
Notes on choosing an appropriate batch size and compute for training LLMs
Notes on training LLMs using sharding strategies
Notes on training LLMs using the data parallelism strategy
Notes on the Ultra-scale Playbook - training an LLM on a single GPU