Dear friends,
AI risks are in the air — from speculation that AI, decades or centuries from now, could bring about human extinction to ongoing problems like bias and fairness. While it’s critically important not to let hypothetical scenarios distract us from addressing realistic issues, I’d like to talk about a long-term risk that I think is realistic and has received little attention: If AI becomes cheaper and better than many people at doing most of the work they can do, swaths of humanity will no longer contribute economic value. I worry that this could lead to a dimming of human rights.
We’ve already seen that countries where many people contribute little economic value have some of the worst records of upholding fundamental human rights like free expression, education, privacy, and freedom from mistreatment by authorities. The resource curse is the observation that countries with ample natural resources, such as fossil fuels, can become less democratic than otherwise similar countries that have fewer natural resources. According to the World Bank, “developing countries face substantially higher risks of violent conflict and poor governance if [they are] highly dependent on primary commodities.”
A ruler (perhaps dictator) of an oil-rich country, for instance, can hire foreign contractors to extract the oil, sell it, and use the funds to hire security forces to stay in power. Consequently, most of the local population wouldn’t generate much economic value, and the ruler would have little incentive to make sure the population thrived through education, safety, and civil rights.
What would happen if, a few decades from now, AI systems reach a level of intelligence that leaves large swaths of people unable to contribute much economic value? I worry that, if many people become unimportant to the economy, and if relatively few people have access to AI systems that could generate economic value, the incentive to take care of people — particularly in less democratic countries — will wane. Marc Andreessen recently pointed out that Tesla, having created a good car, has an incentive to sell it to as many people as possible. So why wouldn’t AI builders similarly make AI available to as many people as possible? Wouldn’t this keep AI power from becoming concentrated within a small group? I have a different point of view. Tesla sells cars only to people who generate enough economic value, and thus earn enough wages, to afford one. It doesn’t sell many cars to people who have no earning power.
Researchers have analyzed the impact of large language models on labor. While, so far, some people whose jobs were taken by ChatGPT have managed to find other jobs, the technology is advancing quickly. If we can’t upskill people and create jobs fast enough, we could be in for a difficult time. Indeed, since the great decoupling of labor productivity and median incomes in recent decades, low-wage workers have seen their earnings stagnate, and the middle class in the U.S. has dwindled.
Many people derive tremendous pride and a sense of purpose from their work. If AI systems advance to the point where most people can no longer create enough value to justify a minimum wage (around $15 per hour in many places in the U.S.), many people will need to find a new sense of purpose. Worse, in some countries, the ruling class will decide that, because the population is no longer important for production, people are no longer important.
What can we do about this? I’m not sure, but I think our best bet is to work quickly to democratize access to AI by (i) reducing the cost of tools and (ii) training as many people as possible to understand them. This will increase the odds that people have the skills they need to keep creating value. It will also ensure that citizens understand AI well enough to steer their societies toward a future that’s good for everyone.
Keep working to make the world better for everyone! Andrew
News

Taught by a Bot

While some schools resist their students’ use of chatbots, others are inviting them into the classroom.

What’s new: Some primary and secondary schools in the United States are testing an automated tutor built by online educator Khan Academy, The New York Times reported. Users of the Khanmigo chatbot include public schools in New Jersey and private schools like Silicon Valley’s Khan Lab School (established by Khan Academy founder Sal Khan).
Behind the news: Chegg, which maintains a cadre of tutors to help students with homework, recently lost 48 percent of its market value after the company’s CEO said ChatGPT had dampened subscriber growth. Chegg plans to launch a GPT-4-based chatbot called CheggMate next year.
Training Data Free-For-All

Amid rising questions about the fairness and legality of using publicly available information to train AI models, Japan affirmed that machine learning engineers can use any data they find.

What’s new: A Japanese official clarified that the country’s law lets AI developers train models on works that are protected by copyright.

How it works: In testimony before Japan’s House of Representatives, cabinet minister Keiko Nagaoka explained that the law allows machine learning developers to use copyrighted works whether or not the trained model would be used commercially and regardless of its intended purpose.
Yes, but: Politicians in minority parties have pressed the ruling party to tighten the law. Visual artists and musicians have also pushed for a revision, saying that allowing AI to train on their works without permission threatens their creative livelihoods.

Behind the news: Japan is unusual insofar as it explicitly permits AI developers to use copyrighted materials for commercial purposes.
Why it matters: Last month, member states of the Group of Seven (G7), an informal bloc of industrialized democratic governments that includes Japan, announced a plan to craft mutually compatible regulations and standards for generative AI. Japan’s stance is at odds with that of its fellows, but that could change as the members develop a shared vision.
A MESSAGE FROM DEEPLEARNING.AI

Gain hands-on experience with a framework for addressing complex public-health and environmental challenges in our upcoming specialization, AI for Good. Pre-enroll and get 14 days of your subscription for free!
Game Makers Embrace Generative AI

The next generation of video games could be filled with AI-generated text, speech, characters, and background art.

How it works: Tech companies are providing software that generates game assets either in production or on the fly. Some large game studios are developing their own tools.
Behind the news: Gamers, too, are using generative AI to modify their favorite games. For instance, modders have used voice cloning to vocalize lines for the main character of “The Elder Scrolls V: Skyrim,” who otherwise is silent.

Why it matters: Generative AI tools can streamline video game production, which is bound to appeal to developers who aim to cut both costs and timelines. More exciting, they can supercharge developers’ ability to explore art styles, characters, dialog, and other creative features that may not be practical in a conventional production pipeline.
Like Diffusion but Faster

The ability to generate realistic images without waiting would unlock applications from engineering to entertainment and beyond. New work takes a step in that direction.

What’s new: Dominic Rampas and colleagues at Technische Hochschule Ingolstadt and Wand Technologies released Paella, a system that uses a process similar to diffusion to produce Stable Diffusion-quality images much more quickly.

Key insight: An image generator’s speed depends on the number of steps it must take to produce an image: the fewer the steps, the speedier the generator. A diffusion model learns to remove varying amounts of noise from each training example; at inference, given pure noise, it produces an image by subtracting noise iteratively over a few hundred steps. A latent diffusion model reduces the number of steps to around a hundred by removing noise from a vector that represents the image rather than the image itself. Using a selection of tokens from a predefined list instead of a vector makes it possible to do the same job in still fewer steps.

How it works: Like a diffusion model, Paella learned to remove varying amounts of noise from tokens that represented an image and then produced a new image from noisy tokens. It was trained on 600 million image-text pairs from LAION-Aesthetics.
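The token-based denoising idea can be sketched as a toy loop over discrete tokens. This is a hypothetical illustration, not Paella’s actual training or sampling code: `predict_fn` stands in for the learned denoising model, and the linearly shrinking renoising schedule is a simplification.

```python
import random

def denoise_tokens(noisy_tokens, predict_fn, vocab_size, steps=8, seed=0):
    """Toy sketch of iterative token denoising (hypothetical API).

    At each step, a "model" (predict_fn) proposes a clean token for every
    position, then a shrinking fraction of positions is re-randomized,
    so the token grid converges over just a few steps.
    """
    rng = random.Random(seed)
    tokens = list(noisy_tokens)
    for step in range(steps):
        # Model proposes a clean token at every position.
        tokens = [predict_fn(i, t) for i, t in enumerate(tokens)]
        # Re-add noise to a decreasing fraction of positions.
        noise_frac = 1.0 - (step + 1) / steps
        for i in range(len(tokens)):
            if rng.random() < noise_frac:
                tokens[i] = rng.randrange(vocab_size)
    return tokens

# Hypothetical stand-in model: the "clean" token at position i is i % 16.
predict = lambda i, t: i % 16
start = [random.Random(1).randrange(16) for _ in range(32)]
out = denoise_tokens(start, predict, vocab_size=16, steps=8)
```

Because the final step adds no noise, the toy loop recovers the stand-in model’s target tokens exactly; in the real system, the recovered tokens would then be decoded into an image.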
Results: The authors evaluated Paella (573 million parameters) according to Fréchet inception distance (FID), which measures the difference between the distributions of original and generated images (lower is better). Paella achieved 26.7 FID on MS-COCO. Stable Diffusion v1.4 (860 million parameters), trained on 2.3 billion images, achieved 25.4 FID, somewhat better but significantly slower. Running on an Nvidia A100 GPU, Paella took 0.5 seconds to produce a 256x256-pixel image in eight steps, while Stable Diffusion took 3.2 seconds. (The authors reported FID for 12 steps but speed for eight steps.)

Why it matters: Efforts to accelerate diffusion have focused on distilling models such as Stable Diffusion. Instead, the authors rethought the architecture to reduce the number of diffusion steps.

We’re thinking: The authors trained Paella on 64 Nvidia A100s for two weeks using computation from Stability AI (the group behind Stable Diffusion). It’s great to see such partnerships between academia and industry that give researchers access to computation.
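For intuition about the FID metric mentioned above: it fits a Gaussian to each distribution of Inception-network features and computes the Fréchet distance between the two Gaussians. The sketch below assumes diagonal covariances, where the distance has a simple closed form; real FID uses full covariance matrices over Inception features, so this is an illustration only.

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariance:
    ||mu1 - mu2||^2 + sum(v1 + v2 - 2*sqrt(v1*v2)) per dimension."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical distributions score 0; the score grows as they diverge.
same = fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
diff = fid_diagonal([0.0, 0.0], [1.0, 1.0], [1.0, 1.0], [4.0, 4.0])
```

This is why lower FID is better: a generator whose output features match the real data’s feature distribution drives both terms toward zero.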
Work With Andrew Ng
DevOps Engineer: DeepLearning.AI seeks an engineer with strong computer science fundamentals and a passion for improving learner experiences. The ideal candidate will thrive in an early development stage of a leading educational environment that focuses on AI-related topics. The role is responsible for designing, implementing, and maintaining the infrastructure that supports software development and deployment processes. Apply here
Frontend Engineer: DeepLearning.AI seeks an engineer with strong computer science fundamentals to develop our educational products. The role is responsible for building and delivering high-quality experiences for technical content. You will work alongside a team of talented content creators and outside partners to build the infrastructure for world-renowned AI-driven education. Apply here
Project Manager: DeepLearning.AI seeks a Project Manager to manage the process to build an online course or series of courses. The ideal candidate has the ability to communicate and work well with a cross-functional team of highly skilled, globally-distributed data scientists and engineers. Apply here
Senior Manager, Community Growth: DeepLearning.AI seeks a person to spearhead community and events strategy. The ideal candidate is passionate about driving scale and engagement and has experience producing and executing multimedia content across multiple channels. Apply here
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. To keep our newsletter out of your spam folder, add our email address to your contacts list.