Recent Progress in Evaluating Artificial Intelligence: Challenges and Approaches
Recent progress in artificial intelligence, especially in the area of deep learning, has been breathtaking. This is encouraging for anyone interested in the field, yet true progress towards human-level artificial intelligence is much harder to assess. Evaluating artificial intelligence is a difficult problem for a number of reasons; for example, the lack of consensus on the basic desiderata for intelligent machines is one of the primary barriers to developing unified approaches for comparing different agents. Although a number of researchers focus specifically on this topic, the area would benefit from more attention from the wider AI community.
Methods for evaluating AI are important tools for assessing the progress of agents that have already been built. The comparison and evaluation of roadmaps and approaches towards building such agents is, however, less explored. This comparison is potentially even harder, because forward-looking plans tend to be vague and lack formal definitions. Nevertheless, in order to steer research towards promising areas and to identify potential dead-ends, we need to be able to meaningfully compare existing roadmaps.
At GoodAI, we have started to address this problem internally by comparing AI architectures. We have three architecture teams working on their respective roadmaps, and we are developing a framework to evaluate their progress and potential. This involves breaking each plan into milestones, each with a time estimate, a characterization of the work involved, and tests for the new features it introduces. We have also introduced checkpoints at which progress can be compared across the different architectures and checked for alignment with a meta-roadmap of human-level AI development.
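To make the structure of such a framework concrete, the following is a minimal sketch in Python of how roadmap milestones and checkpoints might be recorded. All class names, fields, and the progress metric here are illustrative assumptions for this sketch, not a description of GoodAI's actual internal tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    """One step on an architecture team's roadmap (illustrative schema)."""
    name: str
    estimated_months: float          # time estimate for reaching the milestone
    work_characteristics: list[str]  # e.g. "engineering-heavy", "open research"
    feature_tests: list[str]         # tests the newly added features must pass

@dataclass
class Roadmap:
    """A single team's plan, expressed as an ordered list of milestones."""
    architecture: str
    milestones: list[Milestone] = field(default_factory=list)

@dataclass
class Checkpoint:
    """A shared point at which all architectures are compared."""
    name: str
    required_tests: list[str]  # tests every architecture must pass here

def progress(roadmap: Roadmap, passed_tests: set[str]) -> float:
    """Fraction of milestones whose feature tests have all been passed."""
    if not roadmap.milestones:
        return 0.0
    done = sum(all(t in passed_tests for t in m.feature_tests)
               for m in roadmap.milestones)
    return done / len(roadmap.milestones)
```

Under this framing, a checkpoint asks every roadmap the same question, namely which of its required tests the architecture currently passes, so cross-architecture comparison becomes a matter of comparing test results rather than informal claims.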
By comparing our approaches with those of other researchers, we can identify common challenges and areas for improvement. We aim to converge on a unified set of features that we require from an architecture, in order to make comparisons more meaningful and to facilitate collaboration within the AI community. Our work is still ongoing, but we believe that sharing our initial thoughts on this topic is important to stimulate discussion and progress in the field of artificial intelligence.
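As an illustration of how a unified feature set could make such comparisons concrete, the short sketch below scores an architecture against a shared checklist. The feature names are hypothetical placeholders; an agreed-upon list would have to come out of the community discussion described above.

```python
# Hypothetical unified feature set; the entries are illustrative placeholders.
REQUIRED_FEATURES = {
    "gradual_learning",
    "skill_transfer",
    "compositional_representations",
    "guided_learning",
}

def coverage(claimed_features: set[str]) -> float:
    """Share of the unified feature set that an architecture covers."""
    return len(claimed_features & REQUIRED_FEATURES) / len(REQUIRED_FEATURES)

# Example: an architecture covering two of the four required features.
print(coverage({"gradual_learning", "skill_transfer"}))  # 0.5
```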
In conclusion, evaluating progress in artificial intelligence is a complex task. By developing frameworks for comparing AI architectures and roadmaps, we can better assess the potential and completeness of different approaches towards human-level artificial intelligence. Collaboration and knowledge sharing within the AI community are crucial for advancing towards the ultimate goal: intelligent machines that can adapt to unknown environments and solve complex tasks.