Dear friends,
It is only rarely that, after reading a research paper, I feel like giving the authors a standing ovation. But I felt that way after finishing Direct Preference Optimization (DPO) by Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Chris Manning, and Chelsea Finn. (I didn't actually stand up and clap, since I was in a crowded coffee shop when I read it and would have gotten weird looks!)
This beautiful work proposes a much simpler alternative to RLHF (reinforcement learning from human feedback) for aligning language models to human preferences. Further, people often ask if universities, which don't have the massive compute resources of big tech, can still do cutting-edge research on large language models (LLMs). The answer, to me, is obviously yes! This paper is a prime example of algorithmic and mathematical insight arrived at by an academic group thinking deeply.
RLHF became a key algorithm for LLM training thanks to the InstructGPT paper, which adapted the technique to that purpose. A typical implementation of the algorithm works as follows:
1. Collect human preference data: for a set of prompts, have the LLM generate pairs of responses, and ask human labelers which response in each pair they prefer.
2. Train a reward model, a separate transformer network, to predict which responses humans will prefer.
3. Use reinforcement learning to fine-tune the LLM to generate responses that (i) earn a high score from the reward model and (ii) don't stray too far from the original model's outputs (a regularization term penalizes large deviations).
This is a relatively complex algorithm. It needs to separately represent a reward function and an LLM. Also, the final reinforcement learning step is well known to be finicky with respect to the choice of hyperparameters. DPO dramatically simplifies the whole thing. Rather than needing separate transformer networks to represent a reward function and an LLM, the authors show how, given an LLM, you can figure out the reward function (plus regularization term) that that LLM is best at maximizing. This collapses the two transformer networks into one. Thus, you now need to train only the LLM and no longer have to deal with a separately trained reward function. The DPO algorithm trains the LLM directly, so as to make the reward function (which is implicitly defined by the LLM) consistent with the human preferences. Further, the authors show that DPO is better at achieving RLHF's optimization objective (that is, (i) and (ii) above) than most implementations of RLHF itself.
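To make the idea concrete, here's a minimal sketch of the DPO loss in PyTorch. It isn't the authors' code; it assumes you have already computed, for each preference pair, the total log-probabilities that the trainable LLM (the policy) and a frozen copy of the starting model (the reference) assign to the preferred and dispreferred responses.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for a batch of preference pairs.

    policy_logp_w / policy_logp_l: log-probabilities the trainable LLM assigns
        to the preferred (w) and dispreferred (l) responses.
    ref_logp_w / ref_logp_l: the same quantities under the frozen reference model.
    beta: how strongly the implicit reward is tied to the reference model.
    """
    # The implicit reward is the scaled log-ratio between policy and reference.
    reward_w = beta * (policy_logp_w - ref_logp_w)
    reward_l = beta * (policy_logp_l - ref_logp_l)
    # Maximize the probability that the preferred response gets the higher reward.
    return -F.logsigmoid(reward_w - reward_l).mean()

# Dummy example: a batch of 4 preference pairs with random log-probabilities.
policy_w = torch.randn(4, requires_grad=True)
policy_l = torch.randn(4, requires_grad=True)
loss = dpo_loss(policy_w, policy_l, torch.randn(4), torch.randn(4))
loss.backward()  # gradients flow only into the policy's log-probabilities
```

Training the LLM to minimize this loss on a dataset of human preference pairs is, in essence, the whole algorithm: no separately trained reward model and no reinforcement learning loop.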
Keep learning! Andrew
P.S. We just launched our first short course that uses JavaScript! In "Build LLM Apps with LangChain.js," taught by LangChain's founding engineer Jacob Lee, you'll learn many steps that are common in AI development, including how to use (i) data loaders to pull data from common sources such as PDFs, websites, and databases; (ii) different models to write applications that are not vendor-specific; and (iii) parsers that extract and format the output for your downstream code to process. You'll also use the LangChain Expression Language (LCEL), which makes it easy to compose chains of modules to perform complex tasks. Putting it all together, you'll build a conversational question-answering LLM application capable of using external data as context. Please sign up here!
News

Deep Learning Discovers Antibiotics

Biologists used neural networks to find a new class of antibiotics.

What's new: Researchers at MIT and Harvard trained models to screen chemical compounds for those that kill methicillin-resistant Staphylococcus aureus (MRSA), the deadliest among bacteria that have evolved to be invulnerable to common antibiotics, and aren't toxic to humans.

How it works: The authors built a training set of 39,312 compounds including most known antibiotics and a diverse selection of other molecules. In a lab, they tested each compound for its ability to inhibit growth of MRSA and its toxicity to human liver, skeletal muscle, and lung cells. Using the resulting data, they trained four ensembles of 20 graph neural networks each to classify compounds for (i) antibiotic properties, (ii) toxicity to the liver, (iii) toxicity to skeletal muscles, and (iv) toxicity to the lungs.
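Screening then comes down to averaging each ensemble's predictions and keeping only compounds that look active against MRSA and nontoxic on all three toxicity heads. The sketch below is a minimal illustration of that filtering step, not the authors' code; the trained model objects in `models` and the `featurize` function are hypothetical stand-ins for the paper's graph neural networks and molecular featurization.

```python
import numpy as np

def ensemble_score(models, features):
    """Average the predicted probabilities of an ensemble of trained classifiers."""
    # Each stand-in model is assumed to return the probability that the compound
    # has the property in question.
    return float(np.mean([m.predict_proba(features) for m in models]))

def screen(compounds, models, featurize, hit_threshold=0.5, tox_threshold=0.5):
    """Keep compounds predicted to inhibit MRSA and to be nontoxic to all three tissues."""
    hits = []
    for smiles in compounds:
        x = featurize(smiles)  # hypothetical molecular featurizer (e.g., a molecular graph)
        antibiotic = ensemble_score(models["antibiotic"], x)
        toxicities = [ensemble_score(models[tissue], x)
                      for tissue in ("liver", "skeletal_muscle", "lung")]
        if antibiotic >= hit_threshold and all(t < tox_threshold for t in toxicities):
            hits.append(smiles)
    return hits
```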
Results: Of the compounds predicted to be likely antibiotics and nontoxic, the authors lab-tested 241 that were not known to work against MRSA. Of those, 8.7 percent inhibited the bacterium's growth. This exceeds the percentage of antibiotics in the training set (1.3 percent), suggesting that the authors' approach could be a useful first step in finding new antibiotics. The authors also tested 30 compounds predicted not to be antibiotics. None of them (0 percent) inhibited the bacterium's growth, further evidence that their approach could be a useful first step. Two of the compounds that inhibited MRSA share a similar and novel mechanism of action against bacteria and also inhibited other antibiotic-resistant infections in lab tests. One of them proved effective against MRSA infections in mice.

Behind the news: Most antibiotics currently in use were discovered in the mid-20th century, a golden age of antibiotics, which brought many formerly deadly pathogens under control. Modern techniques, including genomics and synthetic antibiotics, extended discoveries through the end of the century by identifying variants on existing drugs. However, in the 21st century, new antibiotics have either been redundant or haven't been clinically successful, a report by the National Institutes of Health noted. At the same time, widespread use of antibiotics has pushed many dangerous bacteria to evolve resistance. Pathogens chiefly responsible for a variety of ailments are generally resistant even to antibiotics reserved for use as a last resort.

We're thinking: If neural networks can identify new classes of medicines, AI could bring a golden age of medical discovery. That hope helps to explain why pharmaceutical companies are hiring machine learning engineers at unprecedented rates.
OpenAI Revamps Safety Protocol

Retrenching after its November leadership shakeup, OpenAI unveiled a new framework for evaluating risks posed by its models and deciding whether to limit their use.

What's new: OpenAI's safety framework reorganizes pre-existing teams and forms new ones to establish a hierarchy of authority with the company's board of directors at the top. It defines four categories of risk to be considered in decisions about how to use new models.

How it works: OpenAI's Preparedness Team is responsible for evaluating models. The Safety Advisory Group, whose members are appointed by the CEO for year-long terms, reviews the Preparedness Team's work and recommends approaches to deploying models and mitigating risks, if necessary. The CEO has the authority to approve and oversee recommendations, overriding the Safety Advisory Group if needed. OpenAI's board of directors can overrule the CEO.
Behind the news: The Preparedness Team and Safety Advisory Group join a number of safety-focused groups within OpenAI. The Safety Systems Team focuses on mitigating risks after a model has been deployed; for instance, ensuring user privacy and preventing language models from providing false information. The Superalignment Team, led by Ilya Sutskever and Jan Leike, is charged with making sure hypothetical superintelligent systems, whose capabilities would surpass those of humans, adhere to values that benefit humans.

We're thinking: OpenAI's safety framework looks like a step forward, but its risk categories focus on long-term, low-likelihood outcomes (though they stop short of considering AI's hypothetical, and likely mythical, existential risk to humanity). Meanwhile, clear and present safety issues, such as social bias and factual accuracy, are well known to afflict current models, including OpenAI's. We hope that the Preparedness Team promptly adds categories that represent safety issues presented by today's models.
A MESSAGE FROM DEEPLEARNING.AI

In this short course, you'll dive into LangChain.js, a JavaScript framework for building applications based on large language models, and learn how to craft powerful, context-aware apps. Elevate your machine learning-powered development skills using JavaScript. Sign up today
AGI Defined

How will we know if someone succeeds in building artificial general intelligence (AGI)? A recent paper defines milestones on the road from calculator to superintelligence.

What's new: Researchers at Google led by Meredith Ringel Morris propose a taxonomy of AI systems according to their degree of generality and ability to perform cognitive tasks. They consider today's large multimodal models to be "emerging AGI."

AGI basics: Artificial general intelligence is commonly defined as AI that can perform any intellectual task a human can. Shane Legg (who co-founded DeepMind) and Ben Goertzel (co-founder and CEO of SingularityNet) coined the term AGI for a 2007 collection of essays. Subsequently, companies like DeepMind and OpenAI, which explicitly aim to develop AGI, propelled the idea into the mainstream.

How it works: The taxonomy categorizes systems as possessing narrow skills (not AGI) or general capabilities (AGI). It divides both narrow and general systems into five levels of performance beyond calculator-grade Level 0. It also includes a metric for degree of autonomy.
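As a rough illustration (our own simplification, not code or wording from the paper), the taxonomy amounts to two labels per system, a performance level and a generality flag, plus a separate autonomy rating. The level names and the thresholds in the comments are our paraphrase of the paper's levels.

```python
from dataclasses import dataclass
from enum import Enum

class Performance(Enum):
    NO_AI = 0       # e.g., a calculator
    EMERGING = 1    # equal to or somewhat better than an unskilled human
    COMPETENT = 2   # at least 50th percentile of skilled adults
    EXPERT = 3      # at least 90th percentile of skilled adults
    VIRTUOSO = 4    # at least 99th percentile of skilled adults
    SUPERHUMAN = 5  # outperforms all humans

@dataclass
class Rating:
    performance: Performance
    general: bool   # True if the system is broadly capable rather than narrow
    autonomy: int   # separate metric for how independently the system acts

    def label(self) -> str:
        scope = "AGI" if self.general else "Narrow AI"
        return f"{self.performance.name.title()} {scope}"

# The authors rate today's large multimodal models as emerging AGI.
print(Rating(Performance.EMERGING, general=True, autonomy=1).label())  # "Emerging AGI"
```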
Yes, but: The authors' definition identifies some classes of tasks that contribute to generality, but it includes neither a list of tasks a system must perform to be considered general nor a method for selecting them. Rather, the authors call on the research community to develop a "living benchmark" for generality that includes a mechanism for adding novel tasks.

Why it matters: AGI is one of the tech world's hottest buzzwords, yet it has had no clear definition, and various organizations propose different definitions. This lack of specificity makes it hard to talk about related technology, regulation, and other topics. The authors' framework, on the other hand, supports a more nuanced discussion of the path toward AGI. And it may have high-stakes business implications: Under the terms of their partnership, OpenAI can withhold from Microsoft models that attain AGI. Applying the authors' taxonomy would make it harder for one of the parties to move the goalposts.

We're thinking: Defining AGI is tricky! For instance, OpenAI defines AGI as "a highly autonomous system that outperforms humans at most economically valuable work." This definition, had it been formulated in the early 1900s, when agriculture accounted for 70 percent of work globally, would have described the internal combustion engine.
Text or Images, Input or Output

GPT-4V introduced a large multimodal model that generates text from images and, with help from DALL-E 3, generates images from text. However, OpenAI hasn't fully explained how it built the system. A separate group of researchers described their own method.

What's new: Jing Yu Koh, Daniel Fried, and Ruslan Salakhutdinov at Carnegie Mellon University proposed Generating Images with Large Language Models (GILL), a training method that enables a large language model and a text-to-image generator to use both text and images as either input or output. Given text and/or image input, it decides whether to retrieve existing images or generate new ones.

Key insight: Models like CLIP and ImageBind map text and image inputs to a similar embedding space, so closely related text and images have similar embeddings. This approach enables a large multimodal model to process both data types. Text outputs, too, can be mapped to the same embedding space, so an image decoder, such as a diffusion model, can use them to produce images, or an image retriever can use them to fetch existing images.

How it works: The authors used a pretrained OPT large language model, a ViT-L image encoder (taken from CLIP), and a pretrained Stable Diffusion text-to-image generator. They trained ViT-L to map its embeddings to those produced by OPT. They trained OPT to recognize prompts that request an image and enabled the system to either generate or retrieve images. Finally, a separate linear classifier learned whether to retrieve or generate images.
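Here's a rough sketch of the output-side decision under our own simplified assumptions; it is not the authors' code, and all module names (decision_head, to_diffusion, image_bank, and so on) are hypothetical stand-ins. The language model emits hidden states for special image tokens, a small linear classifier decides between retrieval and generation, and those hidden states are then either mapped into the image generator's conditioning space or matched against a bank of image embeddings.

```python
import torch

def produce_image(img_token_states, decision_head, to_diffusion, diffusion_model, image_bank):
    """Decide whether to generate a new image or retrieve an existing one.

    img_token_states: hidden states the LLM produced for its special image tokens,
        shape (num_img_tokens, d_model).
    decision_head: a learned linear layer that scores retrieve vs. generate.
    to_diffusion: a learned mapping from LLM hidden states to the generator's
        conditioning space.
    image_bank: dict with precomputed "embeddings" (N, d_model) and "images" (list).
    """
    pooled = img_token_states.mean(dim=0)

    # Learned linear classifier: generate a new image or retrieve an existing one?
    if torch.sigmoid(decision_head(pooled)) > 0.5:
        conditioning = to_diffusion(img_token_states)   # condition the image generator
        return diffusion_model(conditioning)

    # Retrieval: pick the stored image whose embedding best matches the image tokens.
    scores = image_bank["embeddings"] @ pooled
    return image_bank["images"][int(scores.argmax())]
```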
Results: VIST is a dataset of 20,000 visual stories, each of which comprises five captioned images. The authors evaluated GILL's and Stable Diffusion's abilities, given the final caption or all five captions, to generate the final image in each story based on CLIP similarity scores between generated and ground-truth images. Given one caption, GILL achieved 0.581 similarity and Stable Diffusion achieved 0.592. Given five captions, GILL achieved 0.612 similarity and Stable Diffusion 0.598, highlighting GILL's ability to use the context afforded by more extensive input. GILL did even better (0.641 similarity) when given both captions and images, which Stable Diffusion couldn't handle. The authors also evaluated how well their system retrieved the correct last image from VIST given the five captions and the first four images. GILL retrieved the correct image 20.3 percent of the time, while FROMAGe, the same authors' earlier model, retrieved the correct image 18.2 percent of the time. In comparison, CLIP, given the five captions (without the images), retrieved the correct image 8.8 percent of the time.

Why it matters: Models that wed text and images are advancing rapidly. GILL and other recent models extend single-image input and/or output to any combination of images and text. This capability, which GILL achieves by mapping embeddings of images and text to one another, gives the models more context to generate more appropriate output.

We're thinking: The authors add an interesting twist: Rather than generating images, the system can choose to retrieve them. Sometimes an existing image will do.
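For reference, the CLIP similarity scores reported above measure how close a generated image is to the ground-truth image in CLIP's embedding space. Below is a small sketch of that metric using the Hugging Face transformers library; the specific checkpoint and preprocessing are our assumptions, not necessarily the paper's exact evaluation setup.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(generated_image, ground_truth_image):
    """Cosine similarity between the CLIP embeddings of two PIL images."""
    inputs = processor(images=[generated_image, ground_truth_image], return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize the embeddings
    return float((feats[0] * feats[1]).sum())
```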
A MESSAGE FROM LANDING AI

Join a free webinar on January 11, 2024, featuring experts from Snowflake and Landing AI to explore how large vision models (LVMs) are transforming the image processing landscape. Learn more about the session and register here
Work With Andrew Ng
Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. To keep our newsletter from ending up in your spam folder, please add our email address to your contacts list.