Dear friends,
There is significant unmet demand for developers who understand AI. At the same time, because most universities have not yet adapted their curricula to the new reality of programming jobs being much more productive with AI tools, there is also an uptick in unemployment of recent CS graduates.
Someone with these skills can get a massively greater amount done than someone who writes code the way we did in 2022, before the advent of Generative AI. I talk to large businesses every week that would love to hire hundreds or more people with these skills, as well as startups that have great ideas but not enough engineers to build them. As more businesses adopt AI, I expect this talent shortage only to grow! At the same time, recent CS graduates face an increased unemployment rate (e.g., see this study using data from 2023), though the underemployment rate — of graduates doing work that doesn’t require a degree — is still lower than for most other majors. This is why we hear simultaneously anecdotes of unemployed CS graduates and also of rising salaries for in-demand AI engineers.
When programming evolved from punchcards to keyboard and terminal, employers continued to hire punchcard programmers for a while. But eventually, all developers had to switch to the new way of coding. AI engineering is similarly creating a huge wave of change.
There is a stereotype of “AI Native” fresh college graduates who outperform experienced developers. There is some truth to this. Multiple times, I have hired, for full-stack software engineering, a new grad who really knows AI over an experienced developer who still works 2022-style. But the best developers I know aren’t recent graduates (no offense to the fresh grads!). They are experienced developers who have been on top of changes in AI. The most productive programmers today are individuals who deeply understand computers, how to architect software, and how to make complex tradeoffs — and who additionally are familiar with cutting-edge AI tools.
Keep building, Andrew
A MESSAGE FROM DEEPLEARNING.AIIn our course Retrieval Augmented Generation, available on Coursera, you’ll build RAG systems that connect AI models to trusted, external data sources. This hands-on course covers techniques for retrieval, prompting, and evaluation to improve your applications’ output. Get started now
News
Chatbot Interviewers Fill More Jobs
Large language models may have advantages over human recruiters when conducting job interviews, a study shows.
What’s new: Researchers at the University of Chicago and Erasmus University Rotterdam found that, relative to interviews by recruiters, AI-led interviews increased job offers, acceptances, and retention of new employees.
How it works: The authors collected interviews with roughly 67,000 qualified applicants for nearly 50 job openings in a range of industries. The jobs were mostly entry-level, customer-service positions located in the Philippines that offered monthly compensation between $280 and $435. Interviewees were either assigned to the recruiter, assigned to the chatbot, or given a choice between the two. The chatbot was Anna AI, a large language model with voice input/output from the recruiter PSG Global Solutions.
Results: The authors found that AI can yield more hires, seem more unbiased, and put applicants more at ease than human interviewers.
Behind the news: The rise of AI software that performs job interviews has raised concerns that such systems may be biased against certain demographic characteristics. Some U.S. states have moved to limit some uses of AI in hiring. Meanwhile, job seekers are turning the tables on employers by using a variety of AI models to make a better impression during interviews.
Why it matters: Many discussions of AI-powered job interviews focus on the potential for bias, but few point out the technology’s benefits for applicants and employers alike. This study found that chatbot interviews can contribute to a win-win situation: More applicants hired and fewer quick departures. The study covered the relatively narrow realm of call-center jobs, and its conclusions may not apply more broadly. But it suggests that chatbot interviews may have advantages beyond convenience and cost.
We’re thinking: Job applicants in this study felt the chatbot was less biased when it came to gender. Today more tools are available for reducing AI bias than human bias! Technologists’ work is clearly paying off in this area.
China’s Emerging AI Hub
Hangzhou, a longtime manufacturing hub in eastern China, is blossoming into a center of AI innovation.
What’s new: The rise of DeepSeek and other AI companies that are among the “6 little dragons of Hangzhou” has raised the city’s profile as a technology hotbed. Hangzhou’s ability to produce AI leaders — not only the dragons but also Alibaba, Hikvision, NetEase, and Rokid — has generated headlines.
Dragons: The 6 little dragons include five AI companies: BrainCo, Deep Robotics, DeepSeek, ManyCore, and Unitree Robotics. (The sixth is the hit game developer Game Science.)
Lessons: Shenzhen and Beijing have been called “China’s Silicon Valley,” but lately Hangzhou has started to eclipse them, largely by providing startups with tax breaks and subsidies, maintaining talent pipelines, encouraging collaboration between private and public sectors, and spending on computing resources and other infrastructure. Hangzhou’s recent Future Industries Development Plan (2025–2026) focuses on AI and robotics as well as synthetic biology.
Why it matters: The world needs many AI centers, and Hangzhou is bringing its own distinctive character to AI development.
We’re thinking: In the U.S., tech companies are concentrated in a few cities, notably in Northern California. But as countries across the globe venture into AI, they would be wise to try and establish multiple hubs.
Learn More About AI With Data Points!
AI is moving faster than ever. Data Points helps you make sense of it just as fast. Data Points arrives in your inbox twice a week with six brief news stories. This week, we covered Google’s new top-rated image editing model now available in the Gemini app and Microsoft’s release of two new foundation models. Subscribe today!
Gemini’s Environmental Impact Measured
Google determined that its large language models have a smaller environmental footprint than previous estimates had led it to expect.
What’s new: For one year, Google researchers studied the energy consumption, greenhouse gas emissions, and water consumption of the models that drove its Gemini AI assistant in applications like Gmail, Calendar, Drive, Flights, and Maps. (They didn’t identify the specific models involved.) They found that the impact of processing a single prompt was roughly comparable to loading a web page or streaming a brief video to a television screen.
How it works: The authors confined their study to inference in text-processing tasks, calculating the impact of processing a single “median” prompt (one that consumes the median amount of energy across all prompts and models). They considered only activities under Google’s operational control, including data-center construction and hardware manufacturing, but not including internet routing or end-user devices.
Results: The energy and water consumed and greenhouse gases emitted by Gemini AI assistant’s models fell well below Google’s estimates in previous years. Moreover, between May 2024 and May 2025, given a median prompt, the models’ energy consumption fell by a factor of 33 and their greenhouse gas emissions fell by a factor of 44, reductions attributable to clean-energy procurement and more energy-efficient hardware and software.
Behind the news: Recently, Mistral assessed its Mistral Large 2 model in a similar way (although its study included training). It found that, at inference, 400-token prompt generated 1.14 grams of greenhouse gases and consumed 45 milliliters of water.
Yes, but: Earlier research arrived at measurements as much as two orders of magnitude higher than Google’s, largely because they included factors that Google did not, The Verge reported. For instance, a 2023 study found that GPT-3 used about 10 milliliters to 50 milliliters of water per (average) prompt — greater than Google’s Gemini findings by 40 to 200 times. That study included water used in generating electricity, such as steam used to turn turbines or water used to cool nuclear generators, which Google omitted. Further, the 2023 study based its estimate of greenhouse gas emissions on actual emissions of local grids, while Google based its measurement on the company’s commitments to buy energy from low-carbon sources. Google did not respond to questions from The Verge.
Why it matters: Assessing the environmental cost of AI has proven to be difficult, and different approaches paint very different pictures. Google’s approach has the benefit of focusing on variables under its control and addressing energy, greenhouse gases, and water. However, it leaves out important contributors to these measures — including training — as well as consumption of materials, as highlighted in Mistral’s assessment.
We’re thinking: The AI industry needs a standard method that would enable AI companies to report their models’ environmental impacts and the public to compare them. Kudos to Google, Mistral, and the independent researchers for proposing practical approaches and continuing to refine them.
Cybersecurity for Agents
Autonomous agents built on large language models introduce distinct security concerns. Researchers designed a system to protect agents from common vulnerabilities.
Results: The authors evaluated LlamaFirewall using AgentDojo, an environment that evaluates attacks against 10 agents (10 different LLMs coupled with the authors’ agentic framework).
Why it matters: The rise of agentic systems is opening new vectors of cyberattack, and security risks are likely to rise as agents operate with greater autonomy and perform more critical tasks. LlamaFirewall addresses a wide range of potential security issues in an open-source tool kit.
Work With Andrew Ng
Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.
|