Meta releases AI tools including one that checks other AI models’ accuracy

Meta Platforms on Friday released an array of new AI products, including a “Self-Taught Evaluator” that can check the accuracy of other AI models. The tool could reportedly reduce the need for human involvement in the AI development process.

The Facebook and Instagram parent company said it was releasing the new AI models following its introduction of the tool in an August paper, which explained in detail how it relies on the same “chain of thought” technique used by ChatGPT maker OpenAI’s recently released o1 models.

The technique is expected to allow the evaluator to make reliable judgments about other models’ responses.

Meta wants to address the challenge of inaccurate answers

According to a Reuters report, the model can check and improve the accuracy of responses to difficult problems in subjects such as science, math, and coding, because the technique breaks complex problems down into smaller, logical steps.
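For illustration only, an evaluation loop of this kind might look something like the sketch below. The judge prompt, the placeholder call_model function, and the verdict format are all assumptions made for the sake of example, not Meta’s actual implementation:

```python
# Minimal sketch of a chain-of-thought "LLM-as-judge" evaluation.
# call_model is a hypothetical stand-in for any text-generation API.

JUDGE_PROMPT = """You are grading an answer to a question.
Think step by step: break the problem into smaller parts,
verify each part, then give a final verdict.

Question: {question}
Candidate answer: {answer}

Reason step by step, then end with "Verdict: correct" or "Verdict: incorrect".
"""

def call_model(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM endpoint.
    return "Step 1: 12 * 13 = 12 * 10 + 12 * 3 = 156.\nVerdict: correct"

def judge(question: str, answer: str) -> bool:
    """Ask the evaluator model to grade an answer via chain of thought."""
    reasoning = call_model(JUDGE_PROMPT.format(question=question, answer=answer))
    return reasoning.strip().lower().endswith("verdict: correct")

print(judge("What is 12 * 13?", "156"))  # True with the stubbed model
```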

Meta has said its intentions include addressing the shortcomings seen in other AI models such as ChatGPT, as well as criticism over outdated and inaccurate answers.

Researchers at Meta reportedly used entirely AI-generated data to train the evaluator model, removing human input at that stage.

Two of the Meta researchers told Reuters that the ability to use AI to evaluate other AI reliably offers a glimpse of a possible pathway towards building autonomous agents that can learn from their own mistakes.

“We hope, as AI becomes more and more super-human, that it will get better and better at checking its work, so that it will actually be better than the average human,” said Meta researcher Jason Weston.

“The idea of being self-taught and able to self-evaluate is basically crucial to the idea of getting to this sort of super-human level of AI,” he added.

Meta is moving towards autonomous AI

According to the researchers, stakeholders in the AI industry see these agents as digital assistants that are intelligent enough to carry out a variety of tasks without human intervention.

The researchers maintain that self-improving models could cut out the need for an often expensive and inefficient process used today called Reinforcement Learning from Human Feedback (RLHF). This process requires input from human annotators, who must have specialized expertise to label data accurately and verify that answers to complex math and writing queries are correct.
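To make the contrast concrete, here is a minimal sketch of the preference-pair data RLHF relies on, and of how an AI evaluator could stand in for the human annotator (the RLAIF idea mentioned below). The data format, the ai_annotate function, and its dummy scoring heuristic are illustrative assumptions, not any company’s actual pipeline:

```python
# Sketch: replacing human preference labels (RLHF) with judgments
# from an AI evaluator (RLAIF). Everything here is illustrative.

from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # the response the annotator (human or AI) preferred
    rejected: str  # the response rated worse

def ai_annotate(prompt: str, response_a: str, response_b: str) -> PreferencePair:
    """Use an evaluator model instead of a human to rank two responses."""
    # Dummy heuristic standing in for a real LLM-judge scoring call.
    score_a, score_b = len(response_a), len(response_b)
    if score_a >= score_b:
        return PreferencePair(prompt, chosen=response_a, rejected=response_b)
    return PreferencePair(prompt, chosen=response_b, rejected=response_a)

pair = ai_annotate(
    "Explain RLHF in one line.",
    "RLHF fine-tunes a model on human preference rankings.",
    "idk",
)
print(pair.chosen)  # the preferred response, ready for reward-model training
```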

Industry peers like Google and Anthropic have also published research papers on the concept of RLAIF, or Reinforcement Learning from AI Feedback.

However, unlike Meta, these other companies tend not to release their models for public use.

Experts in the AI industry say that using AI to check AI is significant for building autonomous AI applications that can operate without human intervention. It means AI models could eventually learn from their own mistakes, self-correct, and improve without any human input.
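The self-correcting behavior described here is essentially a generate, evaluate, revise loop. The following sketch shows that loop under assumed placeholder functions; the names generate, evaluate, and revise are hypothetical stand-ins for model calls, not a published algorithm:

```python
# Sketch of a generate -> evaluate -> revise loop, the kind of
# self-correction described above. All functions are placeholders.

def generate(prompt: str) -> str:
    return "draft answer"            # stand-in for a model call

def evaluate(prompt: str, answer: str) -> bool:
    return answer != "draft answer"  # stand-in for an AI judge

def revise(prompt: str, answer: str) -> str:
    return "revised answer"          # stand-in for a correction step

def self_correct(prompt: str, max_rounds: int = 3) -> str:
    """Keep revising an answer until the judge accepts it or we give up."""
    answer = generate(prompt)
    for _ in range(max_rounds):
        if evaluate(prompt, answer):  # judge accepts: stop iterating
            break
        answer = revise(prompt, answer)
    return answer

print(self_correct("Prove that 17 is prime."))  # "revised answer"
```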

The social media giant also released other tools, including an update to its image-identification Segment Anything Model (SAM), a tool that speeds up LLM response generation times, and datasets that can be used to aid the discovery of new inorganic materials.
