Meta continues to renew its strategic line, and its latest move has been to present a variant of its own artificial intelligence model, which is expected to match or surpass OpenAI's GPT-4. LIMA (Less Is More for Alignment) emerges as a large language model capable of providing high-quality responses from a small number of prompts.
The team led by Mark Zuckerberg is training the new tool through a precise and rigorous process, something that sets it apart from its main competitors. Built on a 65-billion-parameter model released by Meta for research purposes in the first quarter of the year, LIMA has been fine-tuned with just 1,000 prompts.
Its training process
LIMA, Meta's new LLM, is trained in two phases. The first is an unsupervised learning process in which the model is fed raw content in order to learn general-purpose representations.
The second is a refinement phase in which LLMs typically apply reinforcement learning from human feedback (RLHF), so that their behavior can be steered toward specific tasks based on user preferences. However, this is the most expensive stage of training an AI model, so LIMA was trained without it, unlike OpenAI's GPT-4 or Google's Bard.
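Schematically, the pipeline described above can be sketched as three stages, with LIMA stopping after the second. This is a hypothetical illustration with made-up function names and toy data, not Meta's actual code:

```python
# Hypothetical sketch (not Meta's code) of the training stages described above.

def pretrain(raw_corpus):
    """Phase 1: unsupervised learning over raw text to build general representations."""
    return {"stage": "pretrained", "tokens_seen": sum(len(doc.split()) for doc in raw_corpus)}

def supervised_finetune(model, prompts):
    """Phase 2: refine on a small curated instruction set (LIMA used ~1,000 prompts)."""
    return dict(model, stage="finetuned", prompts_used=len(prompts))

def rlhf(model, feedback):
    """Optional phase 3: costly preference alignment with human feedback, skipped by LIMA."""
    return dict(model, stage="aligned")

corpus = ["some raw unlabeled text", "more raw unlabeled text"]
prompts = [("instruction", "high-quality response")] * 1000

# LIMA-style training: pretraining plus supervised fine-tuning, no rlhf() call.
lima_like = supervised_finetune(pretrain(corpus), prompts)
print(lima_like["stage"])         # finetuned
print(lima_like["prompts_used"])  # 1000
```

The point of the sketch is the omission: GPT-4 and Bard would additionally pass through `rlhf()`, the step the article identifies as the most expensive one.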
With this new advance, Meta demonstrates how an LLM can work from few examples. It can thus fulfill specific tasks or respond to new tasks that were not initially considered in its data set. To do this, a thousand examples of real instructions were collected: 750 from forums such as Stack Exchange, Reddit and wikiHow, and 250 written by the researchers themselves.
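Assembling such a small curated instruction set might look like the following minimal sketch. The function and field names are hypothetical; only the counts (750 community examples plus 250 hand-written ones) come from the article:

```python
# Hypothetical sketch of assembling a LIMA-style instruction set.
# Counts follow the article: 750 community examples, 250 written by researchers.

def build_instruction_set(community_examples, handwritten_examples,
                          community_quota=750, handwritten_quota=250):
    """Select a small, high-quality instruction set (LIMA used ~1,000 prompts)."""
    return community_examples[:community_quota] + handwritten_examples[:handwritten_quota]

community = [{"source": "stackexchange", "prompt": f"q{i}", "response": f"a{i}"}
             for i in range(800)]
handwritten = [{"source": "researcher", "prompt": f"p{i}", "response": f"r{i}"}
               for i in range(300)]

dataset = build_instruction_set(community, handwritten)
print(len(dataset))  # 1000
```

Quality, not quantity, is the design choice here: a thousand carefully chosen examples instead of the millions typically used for instruction tuning.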
Put to a real test
Meta submitted its new AI tool to a group of human evaluators who had previously worked with other models such as GPT-4, DaVinci003, Anthropic's Claude and Google's Bard, across 200-300 requests. The results were compelling: LIMA produced answers judged equal to or better than those of its main competitors in 43% of cases against GPT-4, 58% against Bard and 65% against DaVinci003. All of these models were optimized with RLHF.
The conclusion reached by the Meta researchers is that the RLHF technique does not bring great improvements, and that forgoing it can reduce the cost of training an AI-based language model. LIMA's responses met the prompt's requirements in 88% of cases, and 50% were rated as excellent.
Meta's researchers themselves have concluded that creating data sets of high-quality examples is a challenging and difficult-to-scale approach. Moreover, LIMA is not as robust as established models such as GPT-4 and still requires extensive refinement and training. An adversarial prompt or an unlucky sample can therefore lead LIMA to generate inaccurate responses.
Despite everything, the company is satisfied with its research. Yann LeCun, head of AI research at Meta, said that investing in the development of new LLMs will be key in the short term, but that they will have to contend with the possibility of losing validity in the medium term given how fast the sector is evolving.
A collaborative result
LIMA is the result of Meta's collaboration with researchers from Carnegie Mellon University, the University of Southern California and Tel Aviv University. This combination of open-source proposals with a light, flexible internal structure has become the philosopher's stone of its progress.
In fact, Meta recently presented its Massively Multilingual Speech (MMS) models, whose technology extends text-to-speech and speech-to-text capabilities to more than 1,100 languages and allows them to identify more than 4,000 spoken languages. Meta's MMS models are open source.
All of this comes amid a maelstrom of layoffs at the company, which in recent hours has let go of about 6,000 people, bringing the total to 27,000 jobs eliminated since November 2022, when it had 87,000 employees.