My talk on training LLMs at PyData MCR
Introduction to collective communication operations used for distributed training
Introduction to distributed communication for GPUs
Notes on choosing appropriate batch size and compute for training LLMs
Notes on training LLMs using sharding strategies
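The collective-communication and sharding notes above revolve around operations like all-reduce, which sums each rank's gradients and leaves every rank holding the result. A minimal sketch in plain Python, simulating ranks as list entries rather than real GPUs or a communication backend (the function name is hypothetical, for illustration only):

```python
def all_reduce_sum(per_rank_grads):
    """Simulated all-reduce: every rank ends up with the elementwise
    sum of all ranks' gradient vectors."""
    # Sum corresponding elements across ranks.
    summed = [sum(vals) for vals in zip(*per_rank_grads)]
    # Each rank receives its own copy of the reduced result.
    return [summed[:] for _ in per_rank_grads]

# Three simulated ranks, each holding a 2-element gradient vector.
ranks = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(all_reduce_sum(ranks))  # every rank holds [9.0, 12.0]
```

In a real training setup this reduction would be performed by a library such as NCCL or `torch.distributed`, overlapped with computation; the sketch only shows the data movement semantics.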