Efficient Training of Large MoE Models with Amazon SageMaker Model Parallelism Library
Mixture of Experts (MoE) architectures have become increasingly popular for large language models (LLMs) because they increase model capacity while keeping computation efficient. These architectures route different subsets of tokens to sparse expert subnetworks, so the parameter count can grow without a significant increase in computation per token during training and inference. Within a fixed compute budget, this makes training larger models more cost-effective than with dense architectures.
While MoE architectures offer computational benefits, training and fine-tuning large MoE models efficiently can pose some challenges. Load balancing can be an issue if tokens aren’t evenly distributed across experts during training, leading to some experts being overloaded while others are under-utilized. Additionally, MoE models have high memory requirements as all expert parameters need to be loaded into memory, even though only a subset is used for each input.
To address these challenges, Amazon SageMaker has introduced new features in its model parallelism library that enable efficient training of MoE models using expert parallelism. Expert parallelism splits expert subnetworks across separate workers or devices, much as tensor parallelism partitions the layers of a dense model. Distributing experts across workers improves load balancing and reduces per-device memory requirements.
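As a toy illustration of this partitioning (the numbers below are placeholders and this does not use the SMP library), splitting eight experts across four devices leaves two experts resident on each device:

```python
# Toy illustration of how experts map to devices under expert parallelism.
# Purely conceptual; the counts are placeholders, not an SMP API.
num_experts = 8
expert_parallel_degree = 4              # devices the experts are split across
experts_per_device = num_experts // expert_parallel_degree

for rank in range(expert_parallel_degree):
    local_experts = list(range(rank * experts_per_device, (rank + 1) * experts_per_device))
    print(f"device {rank} holds experts {local_experts}")

# device 0 holds experts [0, 1]
# device 1 holds experts [2, 3]
# device 2 holds experts [4, 5]
# device 3 holds experts [6, 7]
```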
The Mixtral 8x7B model, for example, uses a sparse MoE architecture with eight expert subnetworks of roughly 7 billion parameters each. A trainable gate network called a router determines which experts each input token is sent to, allowing experts to specialize in different aspects of the input data. With expert parallelism, these experts can be distributed across multiple devices so that both the expert parameters and the routed workload are spread across the cluster.
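To make the routing mechanism concrete, the following is a minimal PyTorch sketch of a top-2 routed MoE layer. It is an illustration only, not Mixtral's or SMP's implementation; the class name, dimensions, and expert structure are placeholders.

```python
# Minimal sketch of top-2 token routing in a sparse MoE layer (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleMoELayer(nn.Module):
    def __init__(self, hidden_dim: int, ffn_dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # The router is a small trainable linear gate that scores each token per expert.
        self.router = nn.Linear(hidden_dim, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_dim, ffn_dim),
                nn.SiLU(),
                nn.Linear(ffn_dim, hidden_dim),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim) -> flatten tokens for routing
        batch, seq_len, hidden_dim = x.shape
        tokens = x.reshape(-1, hidden_dim)

        # Score every token against every expert and keep the top-k experts per token.
        gate_logits = self.router(tokens)                       # (num_tokens, num_experts)
        weights, expert_ids = torch.topk(gate_logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                    # normalize over the selected experts

        # Dispatch each token only to its selected experts and combine the weighted outputs.
        out = torch.zeros_like(tokens)
        for expert_idx, expert in enumerate(self.experts):
            token_idx, slot = torch.where(expert_ids == expert_idx)
            if token_idx.numel() == 0:
                continue
            out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(tokens[token_idx])

        return out.reshape(batch, seq_len, hidden_dim)


# Usage: route a toy batch of token embeddings through the sparse layer.
layer = SimpleMoELayer(hidden_dim=64, ffn_dim=256)
y = layer(torch.randn(2, 16, 64))
print(y.shape)  # torch.Size([2, 16, 64])
```

Because each token activates only its top-2 experts, computation per token stays roughly constant even as more experts (and therefore parameters) are added.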
The SageMaker model parallelism (SMP) library uses NVIDIA Megatron to implement expert parallelism and supports training MoE models on top of the PyTorch Fully Sharded Data Parallel (FSDP) APIs. By specifying the expert_parallel_degree parameter, users evenly divide the experts across the GPUs in a cluster, optimizing memory usage and workload distribution.
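A SageMaker PyTorch estimator configuration along the following lines enables SMP with expert parallelism. This is a sketch, not a complete job definition: the entry point, role, instance settings, framework versions, and degree values are placeholders, and the full set of supported parameters is described in the SMP documentation.

```python
# Illustrative SageMaker PyTorch estimator enabling SMP expert parallelism.
# All values below are placeholders; adjust them to your cluster and model.
from sagemaker.pytorch import PyTorch

smp_parameters = {
    "expert_parallel_degree": 2,   # split the experts across 2 GPU groups
    "hybrid_shard_degree": 8,      # sharded data parallelism degree (assumed value)
}

estimator = PyTorch(
    entry_point="train.py",                    # hypothetical training script
    role="<your-sagemaker-execution-role>",
    instance_type="ml.p4d.24xlarge",
    instance_count=2,
    framework_version="2.2.0",                 # a PyTorch version supported by SMP v2
    py_version="py310",
    distribution={
        "torch_distributed": {"enabled": True},
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": smp_parameters,
            }
        },
    },
)

estimator.fit()  # launches the training job with the SMP configuration above
```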
In addition to expert parallelism, SMP supports sharded data parallelism, which shards both the expert and non-MoE layers of the model across the cluster to further reduce the memory footprint. Combining the two enables faster and more memory-efficient training of large models like Mixtral 8x7B.
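On the training-script side, the usage pattern follows SMP v2's torch.sagemaker module: initialize it, transform the model, then wrap the model with FSDP. The sketch below is simplified and omits the FSDP wrapping policy, mixed precision, optimizer, and training loop; the model checkpoint shown is just an example.

```python
# Simplified training-script sketch for SMP v2 with FSDP (not a complete script).
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from transformers import AutoModelForCausalLM

import torch.sagemaker as tsm

# Initialize SMP; this picks up the parameters (such as expert_parallel_degree)
# passed through the estimator's distribution configuration.
tsm.init()

model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1")

# Apply SMP transformations, including distributing the MoE expert
# subnetworks across the expert-parallel groups.
model = tsm.transform(model)

# Wrap with FSDP so the model parameters are sharded across the cluster
# for sharded data parallelism.
model = FSDP(model, device_id=torch.cuda.current_device())
```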
Overall, the integration of expert parallelism and sharded data parallelism in the SMP library offers a powerful solution for training and fine-tuning large MoE language models. By leveraging these capabilities, users can effectively scale their models across multiple GPUs and workers, improving training efficiency and performance.