Dear friends,
Last week, I attended the NeurIPS conference in New Orleans. It was fun to catch up with old friends, make new ones, and also get a wide scan of current AI research. Work by the big tech companies tends to get all the media coverage, and NeurIPS was a convenient place to survey the large volume of equally high-quality work by universities and small companies that just don’t have a comparable marketing budget!
At NeurIPS, many people I spoke with expressed anxiety about the pace of AI development: how to keep up, and how to publish when what you're working on could be scooped (that is, independently published ahead of you) at any moment. While racing to publish first has a long history in science, there are other ways to do great work. The media, and social media especially, tend to focus on what happened today. This makes everything seem artificially urgent. Many conversations I had at NeurIPS were about where AI might go in months or even years.
I like to work quickly, but I find problem solving most satisfying when I’ve developed an idea that I believe in — especially if it’s something that few others see or believe in — and then spend a long time executing it to prove out the vision (hopefully). I find technical work more fulfilling when I have time to think deeply, form my own conclusion, and perhaps even hold an unpopular opinion for a long time as I work to prove it. There’s a lot of value in doing fast, short-term work; and given the large size of our community, it’s important to have many of us doing long-term projects, too.
Happy holidays! Andrew
Top AI Stories of 2023

A Year of Innovation and Consternation

Recent years brought systems that, given a text prompt, generate high-quality text, pictures, video, and audio. In 2023, the wave of generative AI washed over everything. And its expanding capabilities raised fears that intelligent machines might render humanity obsolete. As in past years at this season, we invite you to settle by the fire and savor 12 months of technological progress, business competition, and societal impact.
Generative AI Everywhere

This year, AI became virtually synonymous with generative AI.

What happened: Launched in November 2022, OpenAI’s ChatGPT ushered in a banner year for AI-driven generation of text, images, and an ever-widening range of data types.

Driving the story: Tech giants scrambled to launch their own chatbots and rushed cutting-edge natural language processing research to market at a furious pace. Text-to-image generators (also sparked by OpenAI with DALL·E in early 2021) continued to improve and ultimately began to merge with their text-generator counterparts. As users flocked to try out emerging capabilities, researchers rapidly improved the models’ performance, speed, and flexibility.
Gold rush: Generative AI didn’t just thrill customers and businesses; it generated a flood of funding for AI developers. Microsoft invested $13 billion in OpenAI, while Amazon and Google each made multibillion-dollar investments in the startup Anthropic. Other generative AI startups raised hundreds of millions of dollars.

Where things stand: In the span of a year, we went from one chat model from OpenAI to numerous closed, open, and cloud-hosted options. Image generators have made strides in their ability to interpret prompts and produce realistic output. Video and audio generation are becoming widely available for short clips, and text-to-3D is evolving. 2024 is primed for a generative bonanza, putting developers in a position to build a wider variety of applications than ever before.
Hollywood Squares Off

The movie capital became a front line in the battle over workplace automation.

What happened: U.S. film and television writers went on strike in May, and actors followed in July. They took up a variety of issues with their employers, but concern that AI would damage their job prospects prolonged the work stoppage. Both groups inked agreements shortly before the year ended.

Driving the story: Screenwriters negotiated for 148 days, and actors for 118, winning limits on their employers’ abilities to replace them with machine learning models.
AI on the silver screen: Traditional Hollywood studios negotiated alongside the film departments of Amazon, Apple, and Netflix, tech powerhouses that have access to considerable AI expertise. All are likely to use AI to generate text, images, audio, and video.
Where things stand: The unions and studios agreed to terms for using AI while enabling writers and actors to continue to ply their trades. The agreements will remain in force for three years — time enough for both sides to learn a bit about what the technology is and isn’t good for, and to form a vision of its role in the future. Now Hollywood faces the next challenge: using AI to make better movies that grow the pie for producers and creatives alike.
Can I Use This Data?

Information may not want to be free after all.

What happened: The age-old practice of training AI systems on data scraped from the web came into question as copyright owners sought to restrict AI developers from using their works without permission.

Driving the story: Individual copyright holders filed lawsuits against AI companies for training models on their data without obtaining explicit consent, giving credit, or providing compensation. Concurrently, formerly reliable repositories of data on the open web started to require payment or disappeared entirely.
Copyright conundrum: Whether copyright restricts training machine learning models is largely an open question. Laws in most countries don’t address the question directly, leaving it to the courts to interpret which uses of copyrighted works do and don’t require a license. (In the U.S., the Copyright Office deemed generated images ineligible for copyright protection, so training corpora made up of generated images are fair game.) Japan is a notable exception: The country’s copyright law apparently allows training machine learning models on copyrighted works.

Where things stand: Most copyright laws were written long ago. The U.S. Copyright Act was established in 1790 and was last revised in 1976! Copyright will remain a battlefield until legislators update laws for the era of generative AI.
A MESSAGE FROM KIRA LEARNING

Looking for a gentle lead-in to AI? Introduction to Artificial Intelligence is designed for middle- and high-school learners who have no prior AI experience. Edited by Jagriti Agrawal, co-founder of Kira Learning (a sister company of DeepLearning.AI), this textbook teaches what AI is, how it works, and why it matters. Download for free
High Anx-AI-ety

Angst at the prospect of intelligent machines boiled over in moves to block or limit the technology.

What happened: Fear of AI-related doomsday scenarios prompted proposals to delay research and soul searching by prominent researchers. Amid the doomsaying, lawmakers took dramatic regulatory steps.

Driving the story: AI-driven doomsday scenarios have circulated at least since the 1950s, when computer scientist and mathematician Norbert Wiener claimed that “modern thinking machines may lead us to destruction.” Such worries, amplified by prominent members of the AI community, erupted in 2023.
Regulatory reactions: Lawmakers from different nations took divergent approaches with varying degrees of emphasis on preventing hypothetical catastrophic risks.
Striking a balance: AI has innumerable beneficial applications that we are only just beginning to explore. Excessive worry over hypothetical catastrophic risks threatens to block AI applications that could bring great benefit to large numbers of people. Some moves to limit AI would impinge on open source development, a major engine of innovation, while having the anti-competitive effect of enabling established companies to continue to develop the technology in their own narrow interest. It’s critical to weigh the harm that regulators might do by limiting this technology in the short term against highly unlikely catastrophic scenarios.

Where things stand: AI development is moving too quickly for regulators to keep up. It will require great foresight — and a willingness to do the hard work of identifying real, application-level risks rather than imposing blanket regulations on basic technology — to limit AI’s potential harms without hampering the good that it can do. The EU’s AI Act is a case in point: The bill, initially drafted in 2021, has needed numerous revisions to address developments since then. Should it gain final approval, it will not take effect for another two years. By then, AI likely will raise further issues that lawmakers can’t see clearly today.
Deep Learning Rocks

Fans of AI-driven music pressed play, while a major recording company reached for the stop button.

What happened: AI grabbed listeners by the ears when it helped produce a new single by The Beatles, mimicked the voices of beloved stars, and generated music from text prompts.

Driving the story: AI hasn’t quite had its first hit record, but developments in generated music put both fans and the record industry on notice that it may not be far away.
Industry crackdown: Universal Music Group (UMG), which accounts for nearly one-third of the global music market, reacted swiftly to the wave of generated music. It blocked streaming services from distributing fan-made, voice-cloned productions and demanded that those services block AI developers from downloading music by UMG artists, so the developers can’t use it to train machine learning models. Shortly afterward, UMG partnered with Endel, a startup that generates background music. UMG artist James Blake released music he created using Endel’s system.

Where things stand: Generative AI is poised to play an increasing role in recorded music. AI-powered tools exist for many phases of recording production, including composition, arrangement, and mixing. The recent agreements between actors and writers and Hollywood studios may offer pointers to musicians and recording executives who would like to use these tools to make exciting, marketable music.
Work With Andrew Ng
Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.