Meta has presented LIMA, a large language model with which it demonstrates that high-quality responses can be obtained from a small set of prompts using a previously trained model.
LIMA (Less Is More for Alignment) is based on LLaMA, a model with 65 billion parameters that the technology company released for research purposes in the first quarter of the year.
Meta explains that large language models are usually trained in two phases: unsupervised pre-training on raw text, so that the model learns general representations, and large-scale instruction tuning and reinforcement learning, which seeks to align the AI better with end tasks and user preferences.
With LIMA, Meta intends to demonstrate that quality results can be obtained from a few prompts with a model that has already been extensively trained. To do this, the team used a thousand carefully curated examples of real instructions: 750 drawn from forums such as Stack Exchange and wikiHow and another 250 written by the researchers themselves.
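As a loose illustration of that recipe, the sketch below fine-tunes an already pre-trained causal language model on a tiny curated prompt/response set. It is a minimal sketch, not Meta's code: it assumes the Hugging Face transformers and datasets libraries, uses GPT-2 as a small stand-in for the 65-billion-parameter LLaMA, and substitutes a two-example toy dataset for the 1,000 curated instructions.

```python
# Minimal sketch of the LIMA idea: take a pre-trained model and fine-tune it
# on a small, carefully curated set of prompt/response pairs (no RLHF step).
# GPT-2 and the two toy examples below are stand-ins, not the study's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in for the curated data (750 forum answers + 250 hand-written ones).
examples = [
    {"text": "Question: How do I boil an egg?\nAnswer: Simmer it in water for 8-10 minutes."},
    {"text": "Question: What is binary search?\nAnswer: A divide-and-conquer lookup on sorted data."},
]

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    # Causal LM objective: labels are the input tokens themselves. A real run
    # would mask the padding positions with -100 so they are ignored in the loss.
    enc["labels"] = enc["input_ids"].copy()
    return enc

dataset = Dataset.from_list(examples).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lima-sketch", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
)
trainer.train()  # plain supervised fine-tuning on the curated examples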
To analyze its performance, the researchers compared it with OpenAI's GPT-4, Anthropic's Claude, and Google's Bard in a controlled test of 300 prompts. The results show that LIMA produces responses that are "equal or preferable" in 43 percent, 46 percent, and 58 percent of cases, respectively.
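For readers curious how such percentages are tallied, the snippet below shows one plausible way to compute an "equal or preferable" rate from per-prompt human judgments against a single baseline. The labels are invented for illustration; they are not the study's data.

```python
# Count the share of prompts where LIMA's answer tied or beat the baseline's.
from collections import Counter

def equal_or_preferable_rate(labels):
    """Fraction of judgments labeled 'preferred' or 'tie'."""
    counts = Counter(labels)
    return (counts["preferred"] + counts["tie"]) / len(labels)

# Hypothetical per-prompt judgments for one LIMA-vs-baseline comparison.
judgments = ["preferred", "tie", "worse", "tie", "worse"]
print(f"{equal_or_preferable_rate(judgments):.0%}")  # -> 60%
```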

According to the study, published on arxiv.org, an analysis of LIMA's responses on an absolute scale reveals that "88 percent meet the prompt requirements, and 50 percent are considered excellent," the researchers note.
Meanwhile, for Meta's Yann LeCun, LIMA's behavior suggests that investing in the development of new, large language models (LLMs) will be important in the short term, but not in the medium term.
"They are a component of the future. They are the short-term future. They are not the medium-term future. At least not without some major changes," LeCun wrote on Twitter (@ylecun) on May 22, 2023.