Dear friends,
Last week, China barred its major tech companies from buying Nvidia chips. This move received only modest attention in the media, but has implications far beyond what’s widely appreciated. Specifically, it signals that China has progressed sufficiently in semiconductors to break away from dependence on advanced chips designed in the U.S., the vast majority of which are manufactured in Taiwan. It also highlights the U.S. vulnerability to possible disruptions in Taiwan at a moment when China is becoming less vulnerable.
After the U.S. started restricting AI chip sales to China, China dramatically ramped up its semiconductor research and investment to move toward self-sufficiency. These efforts are starting to bear fruit, and China’s willingness to cut off Nvidia is a strong sign of its faith in its domestic capabilities. For example, the new DeepSeek-R1-Safe model was trained on 1,000 Huawei Ascend chips. While individual Ascend chips are significantly less powerful than individual Nvidia or AMD chips, Huawei’s system-level approach to orchestrating how a much larger number of chips work together seems to be paying off: its CloudMatrix 384 system of 384 chips aims to compete with Nvidia’s GB200, which uses 72 higher-capability chips.
Today, U.S. access to advanced semiconductors is heavily dependent on Taiwan’s TSMC, which manufactures the vast majority of the most advanced chips. Unfortunately, U.S. efforts to ramp up domestic semiconductor manufacturing have been slow. I am encouraged that one fab at the TSMC Arizona facility is now operating, but issues of workforce training, culture, licensing and permitting, and the supply chain are still being addressed, and there is still a long road ahead for the U.S. facility to be a viable substitute for manufacturing in Taiwan.
If China gains independence from Taiwan manufacturing significantly faster than the U.S., this would leave the U.S. much more vulnerable to possible disruptions in Taiwan, whether through natural disasters or man-made events. If manufacturing in Taiwan is disrupted for any reason and Chinese companies end up accounting for a large fraction of global semiconductor manufacturing capabilities, that would also help China gain tremendous geopolitical influence.
But hope is not a plan. In addition to working to ensure peace, practical work lies ahead to multi-source, build more chip fabs in more nations, and enhance the resilience of the semiconductor supply chain. Dependence on any single manufacturer invites shortages, price spikes, and stalled innovation the moment something goes sideways.
Keep building, Andrew
A MESSAGE FROM DEEPLEARNING.AI
Build a data agent that plans steps, connects to various data sources, and self-corrects based on evaluations. Learn how to measure answer quality, track plan adherence, and add runtime checks that redirect agents when context becomes irrelevant. Enroll for free!
News
Agents of Commerce
Google launched an open protocol for agentic payments that enables agents based on any large language model to purchase items over the internet.
What’s new: Agent Payments Protocol (AP2) is designed for buyers and sellers to securely initiate, authorize, and close purchases. AP2 works with Google’s A2A and Anthropic’s MCP, open protocols that let agents communicate with one another or access data and APIs. It manages diverse payment types including credit cards, bank transfers, digital payments, and cryptocurrency.
How it works: Agentic payments pose challenges to security, such as manipulation by malicious actors, and to liability, particularly with respect to whether the user or the agent is to blame for mistakes. AP2 aims to solve these problems using cryptographically signed contracts called mandates. Three distinct mandates record the terms of the purchase, its fulfillment, and the user’s authorization of payment. If a fraudulent or incorrect transaction occurs, the payment processor can consult this record to determine which party is accountable. To buy an item using AP2, the parties create and sign these mandates in sequence, as sketched below.
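To make the mandate idea concrete, here is a minimal sketch in Python. It illustrates the concept only: the class, field, and function names are invented for this example rather than taken from the AP2 specification, and the hash-based signing stands in for real public-key cryptography.

```python
# Hypothetical sketch of AP2's three-mandate record. Names are invented for
# illustration; they are not taken from the AP2 spec.
from dataclasses import dataclass
import hashlib
import json


def sign(payload: dict, key: str) -> str:
    """Stand-in for a real cryptographic signature produced with a private key."""
    data = json.dumps(payload, sort_keys=True) + key
    return hashlib.sha256(data.encode()).hexdigest()


@dataclass
class Mandate:
    kind: str       # "intent", "cart", or "payment"
    payload: dict   # terms of the purchase, the fulfilled cart, or the payment authorization
    signature: str  # proof of which party approved this step


def build_purchase_record(user_key: str, merchant_key: str) -> list[Mandate]:
    # 1. The user's agent records what the user asked for and on what terms.
    intent = {"item": "noise-canceling headphones", "max_price_usd": 200}
    intent_mandate = Mandate("intent", intent, sign(intent, user_key))

    # 2. The merchant's agent records the concrete cart that fulfills the request.
    cart = {"item": "noise-canceling headphones", "price_usd": 179, "merchant": "ExampleShop"}
    cart_mandate = Mandate("cart", cart, sign(cart, merchant_key))

    # 3. The user (or an agent acting within delegated limits) authorizes payment.
    payment = {"amount_usd": 179, "method": "credit_card"}
    payment_mandate = Mandate("payment", payment, sign(payment, user_key))

    return [intent_mandate, cart_mandate, payment_mandate]
```

Because each step carries its own signature, the payment processor can trace a disputed purchase back to whichever party approved the step that went wrong.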
Behind the news: Many companies have experimented with agentic payments with varying degrees of success. For example, last year Stripe launched an agentic payment toolkit that issues a one-time debit card for each purchase. This approach reduces risk, but it requires Stripe’s payment system, particular models, and specific agentic frameworks. Google’s approach is more comprehensive, launching with more than 60 partners including payment processors, financial institutions, and software giants.
Why it matters: AP2 opens up automated sales in which any participant can buy and sell, and it does this in a standardized, flexible way. For instance, a user could tell an agent to book a vacation in a specific location with a specific budget. The agent could transmit those requirements to many sellers’ agents that might assemble customized packages to meet the user’s demands. Then the user’s agent could either present the packages to the user or choose one itself. The buyer would get the vacation they want and the seller would make a valuable sale, while AI did the haggling.
We’re thinking: The internet didn’t make travel agents obsolete; it made them agentic!
What ChatGPT Users Want
What do ChatGPT’s 700 million weekly active users do with it? OpenAI teamed up with a Harvard economist to find out.
What’s new: ChatGPT users are turning to the chatbot increasingly for personal matters rather than work, and the gender balance of the user base is shifting, OpenAI found in a large-scale study. “How People Use ChatGPT,” a preliminary report published by the National Bureau of Economic Research, is available in return for an institutional email address.
How it works: The study examined 1.58 million messages entered by users and drawn at random from over 1.1 million conversations between May 2024 and July 2025.
Results: Most users of ChatGPT were young adults, and apparently more women are joining their ranks. Usage shifted from work-related to more personal tasks over the course of the study period. Writing and guidance were the most popular uses, followed closely by seeking information.
Behind the news: OpenAI said its report is the largest study of chatbot usage undertaken to date, but its peers have published similar research. Anthropic released its third Economic Index, which analyzes consumer and business use of its Claude models. Anthropic’s study shows that Claude API users are much more likely to automate tasks than consumer users. Claude is used overwhelmingly for computational and mathematical tasks, but education, arts and media, and office and administrative support are steadily rising.
Why it matters: In OpenAI’s study (and Anthropic’s), AI users and uses are becoming more diverse. Early users of AI chatbots were disproportionately likely to be based in the U.S., highly educated, highly paid, male, young, and focused on technology. Nearly three years after ChatGPT’s introduction, users are far more varied, as are their wants, needs, and expectations.
We’re thinking: Early on, it seemed as though large language models would be most useful for work. But people are using them to seek information and advice about personal matters, plan their lives, and express themselves. It turns out that we need more intelligence in our whole lives, not just at the office.
Learn More About AI With Data Points!
AI is moving faster than ever. Data Points helps you make sense of it just as fast. Data Points arrives in your inbox twice a week with six brief news stories. This week, we covered Nvidia and OpenAI’s $100 billion AI infrastructure partnership and the launch of GPT-5-Codex with new developer tools. Subscribe today!
Sports Betting Goes Agentic
AI agents are getting in on the action of online sports gambling.
What’s new: Several startups cater to betting customers by offering AI-powered sports analysis, chat, and tips, Wired reported. Some established gambling operations are adding AI capabilities to match.
How it works: Most AI sports-betting startups analyze which bets are the most statistically likely to pay off based on publicly available data. Increasingly, agents suggest specific bets. Only a few take bets from users and pay out winnings, and fewer still offer agents that actively place bets on third-party websites on a user’s behalf.
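At the core of that analysis is an expected-value comparison: a bet looks attractive when a model’s estimated probability of an outcome exceeds the break-even probability implied by the bookmaker’s odds. The snippet below is a generic illustration of that arithmetic, not any particular startup’s method; the odds and probability are placeholder values.

```python
# Generic illustration of the expected-value math behind bet analysis.

def implied_probability(decimal_odds: float) -> float:
    """Break-even win probability implied by decimal odds."""
    return 1.0 / decimal_odds


def expected_value(stake: float, decimal_odds: float, win_probability: float) -> float:
    """Expected profit: payout if the bet wins, minus the stake if it loses."""
    win_profit = stake * (decimal_odds - 1.0)
    return win_probability * win_profit - (1.0 - win_probability) * stake


odds = 2.40      # bookmaker pays 2.4x the stake on a win
model_p = 0.45   # placeholder model estimate of the win probability
print(implied_probability(odds))            # ~0.417 break-even probability
print(expected_value(100, odds, model_p))   # +8.0, a positive-expected-value bet
```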
Behind the news: Most AI gambling startups are based in the United States, where many states have legalized online sports betting since a 2018 Supreme Court ruling. In 2024, Americans bet over $150 billion on legal sports wagers, up 22 percent from 2023. The share of online betting has grown steadily, from 25 percent of the total in 2024 to 30 percent in 2025, and shows no sign of slowing down.
Why it matters: Online gambling is an AI laboratory that uses nearly every emerging element of the technology. It requires quantitative reasoning to analyze bets, retrieval-augmented generation (RAG) to scour sports statistics and other relevant information, classification models to identify potentially profitable bets, and payment agents to place bets automatically. As these technologies advance, betting analysis and tools will advance with them.
We’re thinking: Whether you gamble with cash or just wager your time and energy, learning more about AI is a smart bet.
Faster Reinforcement Learning
Fine-tuning large language models via reinforcement learning is computationally expensive, but researchers found a way to streamline the process.
Results: The authors compared models fine-tuned with GAIN-RL against counterparts fine-tuned with GRPO on randomly ordered examples. GAIN-RL generally accelerated learning by a factor of 2.5.
Why it matters: Many strategies for ordering training examples rely on external, often expensive estimates of example difficulty, such as judgments by human annotators or a proprietary LLM. By using a simple signal generated by the model itself, this method provides a direct and efficient way to identify the most effective examples, making reinforcement learning much faster.
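As a rough sketch of the general recipe (not GAIN-RL’s specific signal, which isn’t detailed here), the code below re-ranks training examples each epoch by a score the model itself produces, then applies RL updates in that order. The model, score_example, and rl_update objects are placeholders for a real policy and training step.

```python
# Hypothetical sketch of curriculum ordering for RL fine-tuning: the model's own
# signal decides which examples to train on first. Not GAIN-RL's exact method.
from typing import Callable, Sequence


def order_by_model_signal(model, examples: Sequence[dict],
                          score_example: Callable) -> list[dict]:
    """Sort examples by a score computed from the model's own forward pass."""
    scored = [(score_example(model, ex), ex) for ex in examples]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [ex for _, ex in scored]


def fine_tune(model, examples: Sequence[dict], score_example: Callable,
              rl_update: Callable, epochs: int = 3):
    for _ in range(epochs):
        # Re-rank every epoch: the most informative examples change as the model improves.
        for example in order_by_model_signal(model, examples, score_example):
            rl_update(model, example)  # e.g., a GRPO-style policy-gradient step
    return model
```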
Work With Andrew Ng
Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.