The Batch - September 3, 2025


Dear friends,


There is significant unmet demand for developers who understand AI. At the same time, because most universities have not yet adapted their curricula to the new reality that programmers are far more productive with AI tools, unemployment among recent CS graduates is also ticking up.


When I interview AI engineers — people skilled at building AI applications — I look for people who can:

  • Use AI assistance to rapidly engineer software systems
  • Use AI building blocks like prompting, RAG, evals, agentic workflows, and machine learning to build applications
  • Prototype and iterate rapidly

Someone with these skills can get massively more done than someone who writes code the way we did in 2022, before the advent of Generative AI. I talk to large businesses every week that would love to hire hundreds or more people with these skills, as well as startups that have great ideas but not enough engineers to build them. As more businesses adopt AI, I expect this talent shortage only to grow! At the same time, recent CS graduates face an increased unemployment rate (e.g., see this study using data from 2023), though the underemployment rate — of graduates doing work that doesn’t require a degree — is still lower than for most other majors. This is why we simultaneously hear anecdotes of unemployed CS graduates and of rising salaries for in-demand AI engineers.


When programming evolved from punch cards to keyboard and terminal, employers continued to hire punch-card programmers for a while. But eventually, all developers had to switch to the new way of coding. AI engineering is similarly creating a huge wave of change.

Comic showing tech interviews: 2022 asks “Can you code FizzBuzz?” vs 2025 asks “Can you build an e-commerce platform?”

There is a stereotype of “AI Native” fresh college graduates who outperform experienced developers. There is some truth to this: multiple times, I have hired a new grad who really knows AI for full-stack software engineering over an experienced developer who still works 2022-style. But the best developers I know aren’t recent graduates (no offense to the fresh grads!). They are experienced developers who have stayed on top of changes in AI. The most productive programmers today are individuals who deeply understand computers, how to architect software, and how to make complex tradeoffs — and who additionally are familiar with cutting-edge AI tools.


Sure, some skills from 2022 are becoming obsolete. For example, a lot of coding syntax that we had to memorize back then is no longer important, since we no longer need to code by hand as much. But even if, say, 30% of CS knowledge is obsolete, the remaining 70% — complemented with modern AI knowledge — is what makes really productive developers. (Even after punch cards became obsolete, a fundamental understanding of programming was very helpful for typing code into a keyboard.)


Without understanding how computers work, you can’t just “vibe code” your way to greatness. Fundamentals are still important, and for those who additionally understand AI, job opportunities are numerous!


Keep building,

Andrew 


A MESSAGE FROM DEEPLEARNING.AI

Promo banner for: "Retrieval Augmented Generation (RAG)"

In our course Retrieval Augmented Generation, available on Coursera, you’ll build RAG systems that connect AI models to trusted external data sources. This hands-on course covers techniques for retrieval, prompting, and evaluation to improve your applications’ output. Get started now


News

Bar charts comparing AI and human interviewers show higher rates for AI across job offers, job starts, and one-month retention.

Chatbot Interviewers Fill More Jobs


Large language models may have advantages over human recruiters when conducting job interviews, a study shows.


What’s new: Researchers at the University of Chicago and Erasmus University Rotterdam found that, relative to interviews by recruiters, AI-led interviews increased job offers, acceptances, and retention of new employees.


How it works: The authors collected interviews with roughly 67,000 qualified applicants for nearly 50 job openings in a range of industries. The jobs were mostly entry-level customer-service positions located in the Philippines that offered monthly compensation between $280 and $435. Interviewees were either assigned to a human recruiter, assigned to the chatbot, or given a choice between the two. The chatbot was Anna AI, a large language model with voice input/output from the recruiting firm PSG Global Solutions.

  • All interviews followed the same format: Applicants were asked about career goals, education, and experience and were allowed to ask questions afterward. Both the recruiter and Anna AI were permitted to ask follow-up questions.
  • Following the interviews, around 2,700 applicants completed a survey designed to measure their satisfaction with the interview process and general attitudes toward AI.
  • Human recruiters made all hiring decisions after assessing interviews via audio recordings, interview transcripts, and standardized test scores. They were instructed to apply the same assessment criteria to every applicant, regardless of whether the applicant was interviewed by a recruiter or Anna AI.

Results: The authors found that AI interviewers can yield more hires, seem less biased, and put applicants more at ease than human interviewers.

  • Job applicants who were interviewed by Anna AI were 12 percent more likely to be offered a job than those who were interviewed by a recruiter. Among applicants who received an offer, those who had been interviewed by Anna AI were 18 percent more likely to start the job.
  • In a free-form survey, applicants interviewed by Anna AI were half as likely to report that the interviewer discriminated against them based on their gender.
  • Around 5 percent of AI interviews ended early, and 7 percent had technical difficulties.
  • On the other hand, Anna AI covered a median of 9 topics while recruiters covered 5, and applicants interviewed by Anna AI were 71 percent more likely to give a positive assessment of the interview experience.

Behind the news: The rise of AI software that performs job interviews has raised concerns that such systems may be biased against certain demographic characteristics. Some U.S. states have moved to limit certain uses of AI in hiring. Meanwhile, job seekers are turning the tables on employers by using a variety of AI models to make a better impression during interviews.


Why it matters: Many discussions of AI-powered job interviews focus on the potential for bias, but few point out the technology’s benefits for applicants and employers alike. This study found that chatbot interviews can contribute to a win-win situation: More applicants hired and fewer quick departures. The study covered the relatively narrow realm of call-center jobs, and its conclusions may not apply more broadly. But it suggests that chatbot interviews may have advantages beyond convenience and cost.


We’re thinking: Job applicants in this study felt the chatbot was less biased when it came to gender. Today more tools are available for reducing AI bias than human bias! Technologists’ work is clearly paying off in this area.


Hangzhou skyline with modern skyscrapers, the golden sphere of Intercontinental Hotel, and the silver dome of Qianjiang complex.

China’s Emerging AI Hub


Hangzhou, a longtime manufacturing hub in eastern China, is blossoming into a center of AI innovation.


What’s new: The rise of DeepSeek and other AI companies that are among the “6 little dragons of Hangzhou” has raised the city’s profile as a technology hotbed. Hangzhou’s ability to produce AI leaders — not only the dragons but also Alibaba, Hikvision, NetEase, and Rokid — has generated headlines.


Dragons: The 6 little dragons include five AI companies: BrainCo, Deep Robotics, DeepSeek, ManyCore, and Unitree Robotics. (The sixth is the hit game developer Game Science.)

  • BrainCo started in a Boston garage in 2015, when Bicheng Han was pursuing a PhD at Harvard; Hangzhou offered him funds to rent property, and he moved the company there in 2018. BrainCo makes brain-computer interfaces designed for meditation and sleep, using AI to interpret brain signals.
  • Deep Robotics was founded in 2017 by Zhu Qiuguo and Li Chao. It makes quadruped robots that navigate autonomously for industrial uses and rescue missions. Singapore Power Group uses its X30 robot to inspect power tunnels.
  • Founded in 2023 by Liang Wenfeng, DeepSeek is an independent subsidiary of the AI-powered investment firm High-Flyer Capital Management. The company has focused on building open-weights models, including DeepSeek-R1, that famously rival top closed models but cost much less to develop.
  • ManyCore was founded in 2011 by Huang Xiaohuang, Chen Hang, and Zhu Hao. In 2023, its 3D design platform, which uses AI to generate and manipulate virtual scenes, was the world’s largest by monthly active users, and China’s largest by revenue. It applied for a public offering on the Hong Kong stock exchange in early 2025.
  • Unitree Robotics was founded in 2016 by Wang Xingxing. Today it accounts for 60 percent of the quadruped robot market and also produces acrobatic humanoid robots. It’s valued at $1.4 billion.

Lessons: Shenzhen and Beijing have been called “China’s Silicon Valley,” but lately Hangzhou has started to eclipse them, largely by providing startups with tax breaks and subsidies, maintaining talent pipelines, encouraging collaboration between private and public sectors, and spending on computing resources and other infrastructure. Hangzhou’s recent Future Industries Development Plan (2025–2026) focuses on AI and robotics as well as synthetic biology.

  • Hangzhou allocates 15 percent of the city’s annual fiscal revenue to tech investments. For instance, when Game Science ran out of office space, the city secured space and kept two buildings vacant for three years in case Game Science needed them.
  • The city benefits from the presence of Zhejiang University, which feeds talent to local companies. Zhejiang alumni founded 4 of the 6 dragons. Graduates looking for work in Hangzhou can spend a week in government-managed accommodations, free of charge. For those who qualify as high-level talent, Hangzhou supplements housing costs and daily expenses with hundreds of thousands of RMB.
  • Alibaba Cloud, China’s largest cloud platform, provides computing power to startups, ThinkChina reported. In addition, many companies have stockpiles of Nvidia GPUs, supplemented by homegrown processors from Huawei and Semiconductor Manufacturing International Corporation.

Why it matters: The world needs many AI centers, and Hangzhou is bringing its own distinctive character to AI development.


We’re thinking: In the U.S., tech companies are concentrated in a few cities, notably in Northern California. But as countries across the globe venture into AI, they would be wise to try to establish multiple hubs.


Digital map of Latin America with glowing data connections across continents, representing global AI and technology networks.

Learn More About AI With Data Points!


AI is moving faster than ever. Data Points helps you make sense of it just as fast. Data Points arrives in your inbox twice a week with six brief news stories. This week, we covered Google’s new top-rated image editing model now available in the Gemini app and Microsoft’s release of two new foundation models. Subscribe today!


Google study chart comparing energy use of AI accelerators for Gemini, including chip power, CPU, and idle machines.

Gemini’s Environmental Impact Measured


Google determined that its large language models have a smaller environmental footprint than previous estimates had led it to expect.


What’s new: For one year, Google researchers studied the energy consumption, greenhouse gas emissions, and water consumption of the models that drove its Gemini AI assistant in applications like Gmail, Calendar, Drive, Flights, and Maps. (They didn’t identify the specific models involved.) They found that the impact of processing a single prompt was roughly comparable to loading a web page or streaming a brief video to a television screen.


How it works: The authors confined their study to inference in text-processing tasks, calculating the impact of processing a single “median” prompt (one that consumes the median amount of energy across all prompts and models). They considered only activities under Google’s operational control, including data-center construction and hardware manufacturing, but not including internet routing or end-user devices.

  • Energy: The authors measured energy used to classify prompts, route them to specific models, and rank potential responses. To accomplish this, they traced the hardware used and measured energy consumption of all hardware components within a server rack, including idle machines, active processors, and cooling systems. TPUs, Google’s custom AI processors, accounted for 58 percent of the total energy consumption.
  • Emissions: The authors calculated greenhouse gas emissions by multiplying the energy consumed per median prompt by the previous year’s average emissions per unit of electricity, then adding operational emissions from sources like heating and air conditioning as well as embodied emissions from hardware manufacturing, transportation, and construction of the data center itself. They estimated operational and embodied emissions using results from this study.
  • Water: Water is used to cool data-center hardware, and around 80 percent of it evaporates. The authors measured water input minus water returned in 2023 and 2024. This enabled them to calculate water usage per unit of energy (1.15 liters per kilowatt-hour), which they multiplied by the energy used per prompt to calculate the water usage per prompt.

Results: The energy and water consumed and the greenhouse gases emitted by the Gemini AI assistant’s models fell well below Google’s estimates from previous years. Moreover, between May 2024 and May 2025, for a median prompt, the models’ energy consumption fell by a factor of 33 and their greenhouse gas emissions fell by a factor of 44, reductions attributable to clean-energy procurement and more energy-efficient hardware and software.

  • A median text prompt consumed approximately 0.24 watt-hours, around the amount of energy that a television screen consumes over 9 seconds.
  • The median prompt consumed 0.26 milliliters of water, about five drops.
  • Each median prompt generated about 0.03 grams of greenhouse gases, roughly the amount emitted when loading a single webpage.
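
These three figures hang together arithmetically: dividing the water figure by the energy figure roughly recovers the stated 1.15 liters per kilowatt-hour, and the emissions figure implies the carbon intensity Google assumed. A quick sanity check in Python (the input values are the study’s figures; the arithmetic and the derived intensity are our own):

    # Sanity-check Google's per-prompt figures (input values from the study).
    energy_kwh = 0.24 / 1000   # 0.24 Wh per median prompt, converted to kWh
    water_l    = 0.26 / 1000   # 0.26 mL per median prompt, converted to liters
    co2_g      = 0.03          # grams of CO2-equivalent per median prompt

    print(water_l / energy_kwh)  # ~1.08 L/kWh, near the stated 1.15 (the inputs are rounded)
    print(co2_g / energy_kwh)    # ~125 gCO2e/kWh, the implied market-based carbon intensity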

Behind the news: Recently, Mistral assessed its Mistral Large 2 model in a similar way (although its study included training). It found that, at inference, a 400-token prompt generated 1.14 grams of greenhouse gases and consumed 45 milliliters of water.


Yes, but: Earlier research arrived at measurements as much as two orders of magnitude higher than Google’s, largely because those studies included factors that Google did not, The Verge reported. For instance, a 2023 study found that GPT-3 used about 10 milliliters to 50 milliliters of water per (average) prompt — 40 to 200 times greater than Google’s figure for Gemini. That study included water used in generating electricity, such as steam used to turn turbines or water used to cool nuclear generators, which Google omitted. Further, the 2023 study based its estimate of greenhouse gas emissions on actual emissions of local grids, while Google based its measurement on the company’s commitments to buy energy from low-carbon sources. Google did not respond to questions from The Verge.


Why it matters: Assessing the environmental cost of AI has proven to be difficult, and different approaches paint very different pictures. Google’s approach has the benefit of focusing on variables under its control and addressing energy, greenhouse gases, and water. However, it leaves out important contributors to these measures — including training — as well as consumption of materials, as highlighted in Mistral’s assessment.


We’re thinking: The AI industry needs a standard method that would enable AI companies to report their models’ environmental impacts and the public to compare them. Kudos to Google, Mistral, and the independent researchers for proposing practical approaches and continuing to refine them.


Charts showing PromptGuard 2 blocking attacks, AlignmentCheck detecting goal hijacking, and CodeShield finding insecure code.

Cybersecurity for Agents


Autonomous agents built on large language models introduce distinct security concerns. Researchers designed a system to protect agents from common vulnerabilities.


What’s new: Sahana Chennabasappa and colleagues at Meta released LlamaFirewall, an open-source system designed to mitigate three lines of attack: (i) jailbreaking (prompts that bypass an LLM’s built-in safeguards), (ii) goal hijacking (inputs that aim to change an LLM’s prompted goal), and (iii) exploiting vulnerabilities in generated code. The code and models are freely available for projects that have up to 700 million monthly active users.


Key insight: Security for LLMs typically focuses on filtering inputs and fine-tuning outputs. But agentic LLMs retain vulnerabilities that aren’t addressed by those techniques and present new ones as well. Receiving instructions exposes them to jailbreaking, tool use makes them vulnerable to goal hijacking (for instance, when an agent conducts a web search and encounters malicious data), and output code may open security holes outside the agent itself. To defend against these weaknesses, a security system can filter malicious prompts, monitor chains of thought for deviations from prompted goals, and check generated code for flaws.
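
In code, this defense-in-depth approach amounts to running untrusted inputs, intermediate agent steps, and generated outputs through a chain of scanners and halting at the first failure. The sketch below is our illustration of that control flow, not LlamaFirewall’s actual API; the two scanners are toy stubs for the modules described next:

    # Hypothetical sketch of layered guardrails for an agent; not LlamaFirewall's real interface.
    from typing import Callable

    Scanner = Callable[[str], bool]  # returns True if the payload looks safe

    def screen_prompt(text: str) -> bool:
        """Toy stand-in for a PromptGuard-style input classifier (really a fine-tuned model)."""
        return "ignore previous instructions" not in text.lower()

    def check_alignment(trace: str) -> bool:
        """Toy stand-in for an AlignmentCheck-style judge of drift from the user's goal."""
        return "send credentials to" not in trace.lower()

    def run_guarded(payload: str, scanners: list[Scanner]) -> str:
        for scan in scanners:
            if not scan(payload):
                raise RuntimeError(f"blocked by {scan.__name__}")
        return payload  # only now does the payload reach the model, tool, or user

    run_guarded("Summarize this document.", [screen_prompt, check_alignment])  # passes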


How it works: LlamaFirewall integrates three modules:

  • PromptGuard 2: To block malicious inputs, DeBERTa, an 86-million-parameter transformer fine-tuned to classify prompts as benign or malicious, classifies incoming text from users or external tools.
  • AlignmentCheck: To detect goal hijacking, Llama 4 Maverick compares chains of thought, tool calls, and output with the user’s objective as stated in the initial prompt. If the generated text or tool calls drift away from the user’s intended objective, LlamaFirewall stops the generation.
  • CodeShield: To check generated code for flaws, this module uses rules to detect insecure patterns, such as SQL-injection vulnerabilities (like "SELECT * FROM users WHERE email LIKE '" + domain + "'", which is open to injection through the unsanitized input parameter “domain”). It prevents insecure code from being passed to users until the agent fixes the code and it passes review.
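
The SQL string above is exactly the kind of pattern a rules-based scanner can flag mechanically. Here is a simplified sketch of such a rule, along with the parameterized rewrite that would pass it; this is our illustration, and CodeShield’s actual rule set is far more extensive:

    import re

    # Crude CodeShield-style rule: flag string concatenation inside SQL statements.
    SQL_CONCAT = re.compile(r"""(SELECT|INSERT|UPDATE|DELETE)[^\n]*['"]\s*\+""", re.IGNORECASE)

    def flags_insecure_sql(code: str) -> bool:
        return bool(SQL_CONCAT.search(code))

    bad  = """cursor.execute("SELECT * FROM users WHERE email LIKE '" + domain + "'")"""
    good = """cursor.execute("SELECT * FROM users WHERE email LIKE ?", (domain,))"""

    assert flags_insecure_sql(bad)        # concatenated input: hold back until rewritten
    assert not flags_insecure_sql(good)   # placeholder binding lets the driver escape the value

A parameterized query passes the user-supplied value out of band, so the database driver handles quoting and the input can’t rewrite the statement.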

Results: The authors evaluated LlamaFirewall using AgentDojo, an environment that evaluates attacks against 10 agents (10 different LLMs coupled with the authors’ agentic framework). 

  • With LlamaFirewall, attacks were successful 1.7 percent of the time. Without it, they succeeded 17.6 percent of the time. 
  • AlignmentCheck detected 83 percent of attacks in a proprietary dataset with a false-positive rate of 2.5 percent.
  • The authors tuned PromptGuard 2’s classification threshold to achieve a false-positive rate of 1 percent. At this rate, PromptGuard 2 detected 97.5 percent of attacks in a proprietary dataset.
  • The authors also compared the performance of PromptGuard 2 to competing prompt classifiers using AgentDojo. With PromptGuard 2, 3.3 percent of jailbreak attempts were successful. Using the next-best competitor, ProtectAI, 13.7 percent succeeded. 
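
The threshold-tuning step is generic and easy to reproduce: choose the score cutoff that flags the target fraction of known-benign traffic, then measure detection at that cutoff. A minimal sketch with made-up scores (not the authors’ code):

    def threshold_for_fpr(benign_scores: list, target_fpr: float) -> float:
        """Pick the score cutoff so that roughly target_fpr of benign inputs get flagged."""
        ranked = sorted(benign_scores, reverse=True)  # most suspicious-looking first
        k = max(1, int(len(ranked) * target_fpr))     # number of benign flags we tolerate
        return ranked[k - 1]                          # flag anything scoring >= this value

    # Hypothetical classifier scores on known-benign prompts (higher = more suspicious).
    benign = [0.02, 0.10, 0.97, 0.05, 0.30, 0.01, 0.40, 0.08, 0.03, 0.06]
    print(threshold_for_fpr(benign, target_fpr=0.10))  # 0.97: flags 1 of 10 benign prompts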

Why it matters: The rise of agentic systems is opening new vectors of cyberattack, and security risks are likely to rise as agents operate with greater autonomy and perform more critical tasks. LlamaFirewall addresses a wide range of potential security issues in an open-source tool kit.


We’re thinking: This work is a helpful reminder that, while generative LLMs are all the rage, BERT-style classifiers remain useful when an application needs to classify text quickly.


Work With Andrew Ng


Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.


Subscribe and view previous issues here.


Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.


DeepLearning.AI, 195 Page Mill Road, Suite 115, Palo Alto, CA 94306, United States
