Multilingual LMM PALO: Enhancing Vision and Language Understanding across Global Languages
The rise of Large Multimodal Models (LMMs) in the field of AI has brought about a revolution in vision and language tasks. However, a major limitation of these models has been their focus on the English language, leaving out billions of speakers of non-English languages. This gap in linguistic inclusivity has been addressed by researchers from Mohamed bin Zayed University of AI and other institutes, who have introduced PALO, a single multilingual LMM capable of answering questions in ten languages.
The researchers train PALO on a high-quality multilingual vision-language instruction dataset, focusing on improving proficiency in low-resource languages while maintaining or enhancing performance in high-resource ones. By compiling this comprehensive instruction-tuning dataset and applying it to state-of-the-art LMMs at different scales, they demonstrate gains in both language proficiency and versatility.
PALO integrates a vision encoder with a language model, using CLIP ViT-L/14 for vision encoding. Different projectors, including a lightweight downsample projector (LDP) for MobilePALO-1.7B, map the visual tokens into the language model's input space, keeping the model efficient across varying computational settings. Evaluation of PALO's multilingual capabilities shows robust performance in high-resource languages and significant improvements in low-resource ones.
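To make the projector idea concrete, here is a shape-level sketch in NumPy. The token count and dimensions are illustrative (CLIP ViT-L/14 at 336px resolution yields 576 patch tokens), the weights are random, and the downsample projector is reduced to a simple token-merging step; this is a minimal sketch of the concept, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: CLIP ViT-L/14 patch tokens -> LLM embedding space.
NUM_PATCHES, VIT_DIM, LLM_DIM = 576, 1024, 4096

def mlp_projector(visual_tokens, w1, w2):
    """Two-layer MLP: maps every visual token into the LLM embedding space."""
    hidden = np.maximum(visual_tokens @ w1, 0)  # GELU in practice; ReLU here
    return hidden @ w2

def downsample_projector(visual_tokens, w, stride=2):
    """Sketch of a lightweight downsample projector: merge each group of
    `stride` adjacent tokens before projecting, cutting the number of
    visual tokens the language model must attend over."""
    n, d = visual_tokens.shape
    merged = visual_tokens.reshape(n // stride, stride * d)
    return merged @ w

vis = rng.standard_normal((NUM_PATCHES, VIT_DIM))

# Full projection: one LLM-space token per image patch.
w1 = rng.standard_normal((VIT_DIM, LLM_DIM)) * 0.01
w2 = rng.standard_normal((LLM_DIM, LLM_DIM)) * 0.01
full = mlp_projector(vis, w1, w2)

# Downsampled projection: half as many tokens for the LLM to process.
w_ldp = rng.standard_normal((2 * VIT_DIM, LLM_DIM)) * 0.01
light = downsample_projector(vis, w_ldp, stride=2)

print(full.shape)   # (576, 4096)
print(light.shape)  # (288, 4096)
```

The trade-off is visible in the output shapes: the downsample projector hands the language model half as many visual tokens, which is what makes a mobile-scale variant like MobilePALO-1.7B cheaper to run.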
PALO’s ability to bridge vision and language understanding across ten languages, including high-resource languages like English and Chinese and low-resource languages like Arabic and Hindi, showcases its scalability and generalization capabilities. By training on diverse, multilingual datasets and fine-tuning on translation tasks, PALO takes a step toward improving inclusivity and performance in vision-language tasks across a range of global languages.
In conclusion, the introduction of PALO by the researchers from Mohamed bin Zayed University of AI marks a significant advancement in the field of multilingual LMMs. With its ability to cater to nearly two-thirds of the global population and proficiently handle vision and language tasks in multiple languages, PALO is a promising step towards bridging the gap in linguistic inclusivity in AI models. Researchers and enthusiasts can explore the paper and GitHub repository to learn more about PALO and its capabilities.