Keynote Presentation on LLMs: Addressing Shortcomings and Building Better Solutions
Earlier this year, BigML’s Chief Scientist and Oregon State University Emeritus Professor Tom Dietterich gave a keynote presentation titled “What’s wrong with LLMs and what we should be building instead” at the ValgrAI event in Valencia, Spain. In his presentation, Professor Dietterich highlighted the achievements and shortcomings of Large Language Models (LLMs) and proposed a more modular architecture to address their limitations.
LLMs have revolutionized the field of Artificial Intelligence by providing a foundation for training various AI systems with capabilities such as conversational skills, document summarization, code generation, and context-based learning. Despite their impressive capabilities, LLMs have several shortcomings, including high training costs, a lack of non-linguistic knowledge, and a tendency to make false or socially inappropriate statements.
In response to these limitations, Professor Dietterich proposed a modular architecture that decomposes the functions of existing LLMs and adds additional components to address their shortcomings. By combining state-of-the-art machine learning techniques with software engineering best practices, this new architecture aims to make LLMs more robust and reliable.
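To make the idea of decomposition a bit more concrete, here is a minimal, purely illustrative sketch of what a modular pipeline can look like, with the language model treated as just one component alongside explicit retrieval and verification steps. The component names and logic below are assumptions for the sake of example, not the architecture presented in the keynote.

```python
# Illustrative sketch only: a hypothetical modular pipeline in which the language
# model is one replaceable component, flanked by retrieval and verification stages.
# All components are stubbed so the example runs on its own.

from dataclasses import dataclass


@dataclass
class Evidence:
    source: str
    text: str


class Retriever:
    """Looks up supporting facts in an external knowledge store (stubbed here)."""

    def __init__(self, knowledge_base: dict[str, str]):
        self.knowledge_base = knowledge_base

    def retrieve(self, query: str) -> list[Evidence]:
        # Naive keyword match standing in for a real retrieval system.
        return [
            Evidence(source=key, text=value)
            for key, value in self.knowledge_base.items()
            if any(word in value.lower() for word in query.lower().split())
        ]


class LanguageComponent:
    """Stand-in for the generative model; here it simply templates an answer."""

    def generate(self, query: str, evidence: list[Evidence]) -> str:
        cited = "; ".join(e.text for e in evidence) or "no supporting evidence found"
        return f"Answer to '{query}': {cited}"


class Verifier:
    """Checks that the draft answer is grounded in the retrieved evidence."""

    def verify(self, answer: str, evidence: list[Evidence]) -> bool:
        return any(e.text in answer for e in evidence)


def answer_query(query: str, retriever: Retriever) -> str:
    evidence = retriever.retrieve(query)
    draft = LanguageComponent().generate(query, evidence)
    if Verifier().verify(draft, evidence):
        return draft
    # Refuse rather than guess when the answer is not supported by evidence.
    return "I don't have enough reliable information to answer that."


if __name__ == "__main__":
    kb = {
        "doc1": "BigML was founded in 2011.",
        "doc2": "Tom Dietterich is an emeritus professor.",
    }
    print(answer_query("When was BigML founded?", Retriever(kb)))
```

In a real system, the stubbed retriever, generator, and verifier would each be swapped for dedicated components, which is exactly where a modular design pays off over a single monolithic model.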
If you’re interested in learning more about Professor Dietterich’s proposed solution architecture, you can watch his keynote on YouTube or access the slides for the presentation. In the comments, share your thoughts on the future of LLMs and whether they are ready to meet the rapidly growing expectations of capital markets.
For organizations looking to scale their machine learning solutions without introducing unnecessary complexity, BigML offers a platform that makes machine learning easy and accessible to everyone. Contact us to schedule a demo and see how BigML can help your organization transition to a more efficient and effective approach to machine learning.
Stay tuned for more updates on the evolution of Large Language Models and the future of AI in the coming years.