Dear friends,
Will AI create new job opportunities? My daughter Nova loves cats, and her favorite color is yellow. For her 7th birthday, we got a cat-themed cake in yellow by first using Gemini’s Nano Banana to design it, and then asking a baker to create it using delicious sponge cake and icing. My daughter was delighted by this unique creation, and the process created additional work for the baker (which I feel privileged to have been able to afford).
The evolution of AI and software continues to accelerate, and the set of opportunities for things we can build still grows every day. I’ve stopped writing code by hand. More controversially, I’ve long since stopped reading generated code. I realize I’m in the minority here, but I feel I can get most of what I want built without having to look directly at coding syntax, and I operate at a higher level of abstraction, using coding agents to manipulate code for me. Will conventional programming languages like Python and TypeScript go the way of assembly — generated and used, but rarely examined directly by a human developer — or will models compile directly from English prompts to bytecode?
Keep building, Andrew
A MESSAGE FROM DEEPLEARNING.AI
The first speakers for AI Dev 26 × San Francisco are confirmed! Hear from leaders shaping AI, join hands-on technical workshops, explore live demos of real-world systems, and discover emerging startups in our new AI Startup Track. View the lineup and secure your ticket today!
News
GLM-5 Scales Up
Z.ai more than doubled the size of its flagship large language model to deliver outstanding performance among open-weights competitors.
What’s new: GLM-5 is designed for long-running agentic tasks. It tops other open-weights models in Artificial Analysis’ Intelligence Index.
How it works: Z.ai disclosed few details about GLM-5’s architecture and training.
Performance: GLM-5 achieved the highest performance among open-weights models in some coding and agentic tasks but generally trailed proprietary frontier models.
Why it matters: On Artificial Analysis’ Intelligence Index, GLM-5 nearly matches proprietary leaders Claude Opus 4.6 and GPT-5.2. The shrinking gaps between open-weights and proprietary models give developers high-performance options to modify and/or run on their own hardware.
We’re thinking: The center of gravity in open-weights AI has shifted decisively eastward. Developers in China have been responsible for a succession of leading open-weights large language models lately, including GLM 4.5, Kimi K2, Qwen3-VL-235B-A22B, and Kimi K2.5.
Big AI Spends Big on Lobbying
Top tech and AI companies spent more than $100 million to influence government policy in 2025, the first time their combined spending exceeded that threshold.
What happened: Meta put $26.29 million into political lobbying last year, more than any other company in any industry, Bloomberg reported. Other big spenders include Amazon ($17.89 million), Alphabet ($13.10 million), and Microsoft ($9.36 million), and Nvidia’s relatively modest budget ballooned to $4.9 million, seven times its size in 2024. Big spenders have been rewarded as the federal government shifted toward more tech-friendly policies, notably support for building data centers and a reversal of the White House’s ban on selling advanced AI chips to China. (Disclosure: Andrew Ng serves on Amazon’s board of directors.)
How it works: Corporate spending on lobbying typically goes into advising officials and drafting legislative proposals, often indirectly through political action committees and industry groups. (Spending to elect favored candidates can be even higher; Meta has allocated $65 million to elect AI-friendly state officials this year, The New York Times reported.) Of the 10 tech companies that spent the most on lobbying last year, several donated to favored White House projects and political organizations. In addition, some companies hired employees who have close relationships with the Trump administration, had their executives attend White House events, and committed to spending on administration priorities.
Tech-friendly policies: Recent changes in national AI policy mirrored the interests of companies that spent the most on lobbying.
Why it matters: Tech companies aren’t the biggest spenders on lobbying. That distinction belongs to healthcare companies. Yet the AI giants’ escalating efforts portend a streamlined regulatory environment while consolidating their power within it. The impact on developers has been largely positive. Lobbying by tech giants appears to have helped alleviate the headache of navigating a patchwork of state laws. The push to build massive infrastructure projects and relax restrictions on chip exports promises a surge in overall compute capacity and hardware stability. However, doing business may become harder for companies that don’t pay to play.
We’re thinking: As industries mature, sometimes they shift from technical meritocracies in which the best tech wins to political arenas in which power dynamics matter at least as much. AI developers increasingly may be channeled into policy frameworks developed by big-tech lobbyists, for better or worse.
Learn More About AI With Data Points!
AI is moving faster than ever. Data Points helps you make sense of it just as fast. Data Points arrives in your inbox twice a week with six brief news stories. This week, we covered Qwen 3.5’s state-of-the-art performance across 200+ languages and Claude Sonnet 4.6 reaching Opus-level performance at a significantly lower price. Subscribe today!
Faster Reasoning at the Edge
Reasoning models in the 1 billion to 2 billion parameter range typically require more than 1 gigabyte of RAM to run. Liquid AI released one that runs in less than 900 megabytes and does so with exceptional speed and efficiency.
What’s new: Liquid AI’s LFM2.5-1.2B-Thinking is designed to run on small devices. It complements base, instruction-tuned, Japanese, vision-language, and audio-language LFM2.5 variants, which debuted in January.
How it works: The architecture mixes attention layers with convolutional layers which, given a new token, process only an adjacent group of tokens — rather than the entire input sequence, as attention does — and thus use less computation and memory. Small models can develop issues such as forgetting as they’re trained on successive domains. To overcome such problems, the team trained LFM2.5-1.2B-Thinking in phases.
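For intuition, the sketch below shows in PyTorch how such blocks can be interleaved. It is not Liquid AI’s implementation: the layer sizes, convolution window, and attention placement are illustrative assumptions, and a causal attention mask is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShortConvBlock(nn.Module):
    """Causal depthwise convolution: each token mixes only a few nearby tokens."""
    def __init__(self, dim: int, kernel_size: int = 4):
        super().__init__()
        self.kernel_size = kernel_size
        self.conv = nn.Conv1d(dim, dim, kernel_size, groups=dim)  # depthwise

    def forward(self, x):                                  # x: (batch, seq, dim)
        x = x.transpose(1, 2)                              # (batch, dim, seq)
        x = F.pad(x, (self.kernel_size - 1, 0))            # left-pad for causality
        return self.conv(x).transpose(1, 2)                # back to (batch, seq, dim)

class AttentionBlock(nn.Module):
    """Standard self-attention: every token can attend to the whole sequence.
    (A causal mask is omitted here to keep the sketch short.)"""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=False)
        return out

class HybridBackbone(nn.Module):
    """Mostly short-convolution blocks, with attention mixed in every few layers."""
    def __init__(self, dim: int = 256, depth: int = 8, attn_every: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            AttentionBlock(dim) if (i + 1) % attn_every == 0 else ShortConvBlock(dim)
            for i in range(depth)
        )

    def forward(self, x):
        for layer in self.layers:
            x = x + layer(x)                               # residual connection
        return x

tokens = torch.randn(2, 128, 256)                          # dummy token activations
print(HybridBackbone()(tokens).shape)                      # torch.Size([2, 128, 256])
```

The convolutional blocks touch only a fixed window of neighbors per token, so their cost grows linearly with sequence length, while the occasional attention block preserves the ability to relate distant tokens.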
Results: On Artificial Analysis’ Intelligence Index, a weighted average of 10 benchmarks, LFM2.5-1.2B-Thinking matched models of similar and larger size, including Qwen3-1.7B in thinking mode.
Yes, but: Small models are prone to hallucination, and LFM2.5-1.2B-Thinking fares worse than competing models in this regard.
Why it matters: LFM2.5-1.2B-Thinking is well suited to drive on-device agents that orchestrate tool calls, extract data, or query local databases. Such agents need the ability to follow instructions more than encyclopedic knowledge, since they’re likely to fetch external information. They also benefit from fast inference, which helps them handle lengthy chains of requests, and from a small memory footprint that leaves room for other applications.
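The sketch below illustrates the kind of on-device tool-orchestration loop described above. The local_model stand-in, tool names, and stopping condition are hypothetical, meant only to show how a small model can route requests to local tools and fold the results back into its context.

```python
import json

def local_model(prompt: str) -> str:
    """Hypothetical stand-in for a small on-device model that emits a JSON tool call."""
    return json.dumps(
        {"tool": "query_db", "args": {"sql": "SELECT COUNT(*) FROM notes"}}
    )

TOOLS = {
    "query_db": lambda sql: "42",                 # pretend local-database query
    "extract": lambda text: text.split(":")[-1],  # pretend data extraction
}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Let the model pick tools, run them locally, and feed results back in."""
    context = task
    for _ in range(max_steps):
        call = json.loads(local_model(context))          # model chooses a tool
        result = TOOLS[call["tool"]](**call["args"])     # host executes it locally
        context += f"\n[{call['tool']} -> {result}]"     # append result to context
        if result:                                       # toy stopping condition
            return f"Answer: {result}"
    return context

print(run_agent("How many notes are stored on this device?"))
```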
We’re thinking: While many developers try to pack the most intelligence into their models, LFM2.5-1.2B-Thinking strikes a balance among intelligence, inference speed, and memory requirements.
Sleep Signals Predict Illness
Difficulty sleeping often precedes heart disease, psychiatric disorders, and many other illnesses. Researchers used data gathered during sleep studies to detect such conditions.
What’s new: SleepFM is a system that classifies Alzheimer’s, Parkinson’s, prostate cancer, stroke, congestive heart failure, and many other conditions based on a person’s vital signs while asleep — as much as 6 years before symptoms appear. Rahul Thapa and Magnus Ruud Kjaer worked with colleagues at Stanford University, the Danish Center for Sleep Medicine, the Technical University of Denmark, BioSerenity, Harvard Medical School, and the University of Copenhagen.
How it works: SleepFM comprises a convolutional neural network (CNN), transformer, and LSTM. The authors trained the system in two stages: (i) to encode patterns in sleep data and (ii) to classify diseases. The training data comprised roughly 585,000 hours of sleep-study recordings that included, in addition to each patient’s age and sex, signals of activity in the brain, heart, respiratory system (airflow, snoring, and blood oxygen level), and leg muscles. The data was mostly proprietary but included public datasets.
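The sketch below shows one way such a pipeline could fit together: a CNN encodes raw signals, a transformer and LSTM summarize them over time, and a classification head that also sees age and sex is trained in a second stage. It is a schematic of our own, not the authors’ code, and all channel counts and dimensions are made-up assumptions.

```python
import torch
import torch.nn as nn

class SleepEncoder(nn.Module):
    """Stage 1: encode multichannel sleep signals into a single embedding."""
    def __init__(self, channels: int = 16, dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                        # local waveform features
            nn.Conv1d(channels, dim, kernel_size=15, stride=4), nn.GELU(),
            nn.Conv1d(dim, dim, kernel_size=15, stride=4), nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)  # summarize across time

    def forward(self, x):                                # x: (batch, channels, samples)
        h = self.cnn(x).transpose(1, 2)                  # (batch, steps, dim)
        h = self.transformer(h)
        _, (h_n, _) = self.lstm(h)
        return h_n[-1]                                   # one embedding per recording

class DiseaseClassifier(nn.Module):
    """Stage 2: predict many conditions from the pretrained encoder plus age and sex."""
    def __init__(self, encoder: SleepEncoder, num_conditions: int = 100, dim: int = 128):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(dim + 2, num_conditions)   # +2 for age and sex

    def forward(self, signals, age_sex):                 # age_sex: (batch, 2)
        emb = self.encoder(signals)
        return torch.sigmoid(self.head(torch.cat([emb, age_sex], dim=-1)))

model = DiseaseClassifier(SleepEncoder())
signals = torch.randn(2, 16, 4096)                       # dummy overnight recordings
age_sex = torch.tensor([[65.0, 1.0], [58.0, 0.0]])
print(model(signals, age_sex).shape)                     # torch.Size([2, 100])
```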
Results: The authors compared SleepFM’s performance on a proprietary test set to that of the same system without pretraining and of a vanilla neural network trained only on demographic information.
Why it matters: AI’s ability to recognize subtle patterns has amazing potential in medicine and beyond. In this application, it could provide early warning of serious diseases, enabling people to take steps to prevent illness before it develops.
We’re thinking: We’re wide awake after reading this paper!
Work with Andrew Ng
Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. To keep our newsletter from landing in your spam folder, add our email address to your contacts list.