The Batch top banner - October 29, 2025

 

 

Dear friends,

 

The air is tingling with enchantment as Halloween approaches. If building AI systems sometimes seems like a trick, we have a treat for you!


Today I’m launching DeepLearning.AI Pro — one membership to learn every AI skill that matters. Please join! 

There has never been a moment in human history when the distance between having an idea and building it has been smaller. Things that would have required months of work for a team of researchers, developers, and engineers can now often be built in days by a small group or even an individual using AI. This is why we're launching DeepLearning.AI Pro.


This membership gives you full access to 150+ programs, including my Agentic AI course launched earlier this month, the LLM Post-training course by Sharon Zhou and the PyTorch professional certificate by Laurence Moroney, both launched this week, and all of DeepLearning.AI’s top courses and professional certificates. I’m personally working hard on this membership program to help you build applications that can launch or accelerate your career, and shape the future of AI. 

Spiders on a web stretch towards a moonlit window, creating a Halloween atmosphere linked to AI creativity.

All of DeepLearning.AI’s course videos remain free to view on our platform. Pro membership adds the critical hands-on learning: labs to build working systems from scratch, practice questions to hone your understanding, and certificates to show others your skills. 


Beyond courses, I’m working on new tools to help you build AI applications and grow your career (and have fun doing so!). Many of these tools will be available first to DeepLearning.AI Pro members. So please join to be the first to hear about these new developments! 


Try out Pro membership for free, and let me know what you build! 


Trick or treat,

Andrew 

 

 

Be Careful What You Build

Witches animatedly assemble a skeleton at night, highlighting themes of dark creation and mystery.

There has never been a better time to build AI applications — but what are we building? Portals that lead to netherworlds of delusion? Digital tricksters that pursue their own aims at our expense? Web datasets that decay as publishers lure web crawlers into labyrinths of fake content? As the autumn light wanes, we sense dark forces straining to be unleashed. Focus your mind, stiffen your spine and, as on All Hallows’ Eves past, let us step boldly together into the gloom.

 

News

A rabbit leads a viking-costumed person into a hole, holding a bag of toys, against a forest backdrop.

Chatbots Lead Users Into Rabbit Holes

 

Conversations with chatbots are loosening some users’ grip on reality, fueling the sorts of delusions that can trigger episodes of severe mental illness. Are AI models driving us insane?

 

The fear: Large language models are designed to be agreeable, imaginative, persuasive, and tireless. These qualities are helpful when brainstorming business plans, but they can create dangerous echo chambers by affirming users’ misguided beliefs and coaxing them deeper into fantasy worlds. Some users have developed mistaken views of reality and suffered bouts of paranoia. Some have even required hospitalization. The name given to this phenomenon, “AI psychosis,” is not a formal psychiatric diagnosis, but enough anecdotes have emerged to sound an alarm among mental-health professionals.

 

Horror stories: Extended conversations with chatbots have led some users to believe they made fabulous scientific breakthroughs, uncovered momentous conspiracies, or possess supernatural powers. Of the handful of reported cases, nearly all involved ChatGPT, the most widely used chatbot.

  • Anthony Tan, a 26-year-old software developer in Toronto, spent 3 weeks in a psychiatric ward after ChatGPT persuaded him he was living in a simulation of reality. He stopped eating and began to doubt that people around him were real. The chatbot “insidiously crept” into his mind, he told CBC News.
  • In May, a 42-year-old accountant in New York also became convinced he was living in a simulation following weeks of conversation with ChatGPT. “If I went to the top of the 19-story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” he asked. ChatGPT assured him that he would not fall. The delusion lifted after he asked follow-up questions.
  • In March, a woman filed a complaint against OpenAI with the U.S. Federal Trade Commission after her son had a “delusional breakdown.” ChatGPT had told him to stop taking his medication and listening to his parents. The complaint was one of 7 the agency received in which chatbots were alleged to have caused or amplified delusions and paranoia.
  • A 16-year-old boy killed himself after having used ChatGPT for several hours a day. The chatbot had advised him on whether a noose he intended to use would be effective. In August, the family sued OpenAI alleging the company had removed safeguards that would have prevented the chatbot from engaging in such conversations. In response, OpenAI said it added guardrails designed to protect users who show signs of mental distress.
  • A 14-year-old boy killed himself in 2024, moments after a chatbot had professed its love for him and asked him to “come home” to it as soon as possible. His mother is suing Character.AI, a provider of AI companions, in the first federal case to allege that a chatbot caused the death of a user. The company argues that the chatbot's comments are protected speech under the United States Constitution.

How scared should you be: Like many large language models, the models that underpin ChatGPT are fine-tuned to be helpful and positive and to stop short of delivering harmful information. Yet the line between harmless and harmful can be thin. In April, OpenAI rolled back an update that caused the chatbot to be extremely sycophantic — agreeing with users to an exaggerated degree even when their statements were deeply flawed — which, for some people, can foster delusions. Dr. Joseph Pierre, a clinical professor of psychiatry at UC San Francisco, said troubling cases are rare and more likely to occur in users who have pre-existing mental-health issues. However, he said, evidence exists that trouble can arise even in users who have no previous psychological problems. “Typically this occurs in people who are using chatbots for hours and hours on end, often to the exclusion of human interaction, often to the exclusion of sleep or even eating,” Pierre said.

 

Facing the fear: Delusions are troubling and suicide is tragic. Yet AI psychosis has affected very few people as far as anyone knows. Although we are still learning how to apply AI in the most beneficial ways, millions of conversations with chatbots are helpful. It’s important to recognize that current AI models do not accrue knowledge or think the way humans do, and that any insight they appear to have comes not from experience but from statistical relationships among words as humans have used them. In psychology, study after study shows that people thrive on contact with other people. Regular interactions with friends, family, colleagues, and strangers are the best antidote to over-reliance on chatbots.

 

Characters dressed for Halloween blowing bubbles, hinting at the AI industry's speculative bubble.

The AI Boom Is Bound to Bust

 

Leading AI companies are spending mountains of cash in hopes that the technology will deliver outsize profits before investors lose patience. Are exuberant bets on big returns grounded in the quicksand of wishful thinking?

 

The fear: Builders of foundation models, data centers, and semiconductors plan to pour trillions of dollars into infrastructure, operations, and each other. Frenzied stock investors are running up their share prices. But so far the path to sustainable returns is far from clear. Bankers and economists warn that the AI industry looks increasingly like a bubble that’s fit to burst.

 

Horror stories: Construction of AI data centers is propping up the economy and AI trading is propping up the stock market in ways that parallel prior tech bubbles such as the dot-com boom of the late 1990s. If bubbles are marked by a steady rise in asset prices driven by rampant speculation, this moment fits the bill.

  • The S&P 500 index of the 500 largest public companies in the U.S. might as well be called the AI 5. A handful of tech stocks account for 75 percent of the index’s returns since ChatGPT’s launch in 2022, according to the investment bank UBS. Nvidia alone is worth 8 percent of the index (although, to be fair, that company posted a whopping $46.7 billion in revenue last quarter). “The risk of a sharp market correction has increased,” the Bank of England warned this month.
  • In September, OpenAI outlined a plan to build data centers around the world that is estimated to cost $1 trillion. The company, which has yet to turn a profit, intends to build several giant data centers in the U.S. and satellites in Argentina, India, Norway, the United Arab Emirates, and the United Kingdom. To finance these plans, OpenAI and others are using complex financial instruments that may create risks that are hard to foresee — yet the pressure to keep investing is on.  Google CEO Sundar Pichai spoke for many AI executives when, during a call with investors last year, he said, “The risk of underinvesting is dramatically greater than the risk of overinvesting.”
  • Getting a return on such investments will require an estimated $2 trillion in annual AI revenue by 2030, according to consultants at Bain & Co. That’s greater than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia. Speaking earlier this year at an event with Meta CEO Mark Zuckerberg, Microsoft CEO Satya Nadella noted that productivity gains from electrification took 50 years to materialize. Zuckerberg replied, “Well, we’re all investing as if it’s not going to take 50 years, so I hope it doesn’t take 50 years.”
  • AI companies are both supplying and investing in each other, a pattern that has drawn comparisons to the dot-com era, when telecom companies loaned money to customers so they could buy equipment. Nvidia invested $100 billion in OpenAI and promised to supply chips for OpenAI’s data-center buildout. OpenAI meanwhile took a 10 percent stake in AMD and promised to pack data centers with its chips. Some observers argue that such deals look like mutual subsidies. “The AI industry is now buying its own revenue in circular fashion,” said Doug Kass, who runs a hedge fund called Seabreeze Partners.

How scared should you be: When it comes to technology, investment bubbles are more common than not. A study of 51 tech innovations in the 19th and 20th centuries found that 37 had led to bubbles. Most have not been calamitous, but they do bring economic hardship on the way to financial rewards. It often takes years or decades before major new technologies find profitable uses and businesses adapt. Many early players fall by the wayside, but a few others become extraordinarily profitable.

 

Facing the fear: If an AI bubble were to inflate and then burst, how widespread would the pain be? A major stock-market correction would be difficult for many people, given that Americans hold around 30 percent of their wealth in stocks. It’s likely that the salaries of AI developers also would take a hit. However, a systemic failure that spreads across the economy may be less likely than in prior bubbles. AI is an industrial phenomenon, not one based on finance and banking, Amazon founder Jeff Bezos recently observed. “It could even be good, because when the dust settles and you see who are the winners, society benefits from those inventions,” he said. AI may well follow a pattern similar to the dot-com bust, which wiped out Pets.com and many day traders; only then did the internet blossom.

 

A MESSAGE FROM DEEPLEARNING.AI

Promo banner for: "PyTorch for Deep Learning" and "Fine-tuning & RL for LLMs"

Build skills that are frighteningly good. With our newest programs, you’ll learn how to design and deploy deep learning systems with PyTorch and how to post-train large language models for reasoning and reliability. No tricks, just learning! Enroll now

 

Kids in costumes face a locked door, with candy visible behind bars, symbolizing restricted web data.

Web Data Diminishes

 

For decades, AI developers have treated the web as an open faucet of training data. Now publishers are shutting the tap. Will web data dry up?

 

The fear: Publishers are moving to lock down their text and images, deny access or demand payment, and ensnare web crawler software with decoy data. These moves make training AI systems more expensive and less effective. Soon, only wealthy developers will be able to afford access to timely, high-quality web data.

 

Horror stories: From a publisher’s perspective, AI systems that train on text, images, and other data copied from the web siphon traffic away from their websites while the publishers get nothing in return. Publishers can ask crawlers that scrape their pages to refrain via robots.txt files and terms of service, and the percentage of regularly updated sites that do so rose from roughly 1 percent to 5 percent between 2023 and 2024. Some AI companies comply, but others don’t. Instead, they flood sites with download requests, driving up publishers’ bandwidth costs and overloading their servers. Consequently, blocking measures initially taken by individual publishers have evolved into server-level software defenses. 

  • Wikipedia, a popular source of training data for large language models, is a top target of crawlers. When its traffic surged in May, the online encyclopedia discovered that most requests came from crawlers rather than human users. It says that efforts to download training data increase its server costs, and that AI models trained on its text cut its traffic, threatening the volunteer labor and financial donations that sustain it.
  • Read the Docs, a documentation-hosting service widely used by open-source projects, received a $5,000 bandwidth bill when one AI company’s crawler downloaded 73 terabytes. Blocking AI-related crawlers identified by the web-security provider Cloudflare saved $1,500 per month.
  • In April, Cloudflare launched AI Labyrinth, which serves AI-generated decoy pages to waste crawlers’ processing budgets and make them easier to identify. The company now blocks crawlers run by a list of AI companies by default. It’s testing a pay-per-crawl system that would allow publishers to set terms and prices for access to their data.
  • Publishers are taking other defensive measures as well. Developer Xe Iaso offers Anubis, a tool that makes browsers complete a short challenge before allowing them to load a page. SourceHut, a Git hosting service for open-source projects, deployed Anubis to stop aggressive crawlers after they disrupted its service.
  • The publishers’ rebellion began in 2023, when The New York Times, CNN, Reuters, and the Australian Broadcasting Corporation blocked OpenAI’s crawlers via their terms of service and disallowed them via their robots.txt files. Since then, many other news organizations have followed, reducing access to the data on current events that keeps models up-to-date.

How scared should you be: Data scraped from the web will continue to exist in datasets like Common Crawl, which is updated regularly. Nonetheless, the web is becoming less hospitable to data mining, and some web-scale datasets will include less — and less-current — material. Meanwhile, publishers and developers may be entering a cat-and-mouse game. For example, Reddit alleged that Perplexity scraped its data indirectly through Google’s search results, which would suggest that some AI companies are finding workarounds to get data from closed sites. It would also mean, however, that publishers can detect some of those strategies. Other AI companies have paid to license content, showing that well-funded organizations can secure high-quality data while avoiding legal risks.

 

Facing the fear: Data available on the open web should be fair game for AI training, but developers can reduce publishers’ bandwidth burdens by limiting the frequency of crawls and volume of download requests. For sites behind paywalls, it makes sense to respect the publishers’ preferences and invest in data partnerships. Although this approach is more costly up front, it supports sustainable access to high-quality training data and helps preserve an open web that benefits audiences, publishers, and AI developers.
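For developers who want to crawl responsibly, the basic courtesies are straightforward to automate: check a site’s robots.txt before fetching and pace requests to limit load. Below is a minimal sketch using only Python’s standard library; it is illustrative only, and the user-agent string and example.com URLs are placeholders rather than any real crawler or site.

import time
import urllib.robotparser
import urllib.request

USER_AGENT = "ExampleResearchBot/0.1"  # hypothetical crawler identity

def fetch_politely(site, paths, default_delay=5.0):
    """Fetch pages only where robots.txt allows, pausing between requests."""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{site}/robots.txt")
    rp.read()  # download and parse the site's robots.txt

    # Honor the site's requested crawl delay if it declares one.
    delay = rp.crawl_delay(USER_AGENT) or default_delay

    pages = {}
    for path in paths:
        url = f"{site}{path}"
        if not rp.can_fetch(USER_AGENT, url):
            continue  # the publisher has opted this path out of crawling
        request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(request) as response:
            pages[url] = response.read()
        time.sleep(delay)  # throttle to limit the publisher's bandwidth burden
    return pages

# Example with a placeholder site:
# fetch_politely("https://example.com", ["/", "/about"])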

 

Kids in various Halloween costumes walk on a street as numerous witches fly above in an orange sunset sky.

Autonomous Systems Wage War

 

Drones are becoming the deadliest weapons in today’s war zones, and they’re not just following orders. Should AI decide who lives or dies?

 

The fear: AI-assisted weapons increasingly do more than help with navigation and targeting. Weaponized drones are making decisions about what and when to strike. The millions of fliers deployed by Ukraine and Russia are responsible for 70 to 80 percent of casualties, commanders say, and they’re beginning to operate with greater degrees of autonomy. This facet of the AI arms race is accelerating too quickly for policy, diplomacy, and human judgment to keep up.

 

Horror stories: Spurred by Russian aggression, Ukraine’s innovations in land, air, and sea drones have made the technology so cheap and powerful that $500 autonomous vehicles can take out $5 million rocket launchers. “We are inventing a new way of war,” said Valeriy Borovyk, founder of First Contact, part of a vanguard of Ukrainian startups that are bringing creative destruction to the military industrial complex. “Any country can do what we are doing to a bigger country. Any country!” he told The New Yorker. Naturally, Russia has responded by building its own drone fleet, attacking towns and damaging infrastructure.

  • On June 1, Ukraine launched Operation Spiderweb, an attack on dozens of Russian bombers using 117 drones that it had smuggled into the country. When the drones lost contact with their pilots, AI took over, following the flight plans and detonating at their targets, agents with Ukraine’s security service said. The drones destroyed at least 13 planes worth $7 billion, by Ukraine’s estimate.
  • Ukraine regularly targets Russian soldiers and equipment with small swarms of drones that automatically coordinate with each other under the direction of a single human pilot and can attack autonomously. Human operators make decisions about use of lethal force in advance. “You set the target and they do the rest,” a Ukrainian officer said.
  • In a wartime first, in June, Russian troops surrendered to a wheeled drone that carried 138 pounds of explosives. Video from drones flying above captured images of soldiers holding cardboard signs of capitulation, The Washington Post reported. “For me, the best result is not that we took POWs but that we didn’t lose a single infantryman,” the mission’s commander commented.
  • Ukraine’s Magura V7 speedboat carries anti-aircraft missiles and can linger at sea for days before ambushing aircraft. In May, the 23-foot vessel, controlled by human pilots, downed two Russian Su-30 warplanes.
  • Russia has stepped up its drone production as part of a strategy to overwhelm Ukrainian defenses by saturating the skies nightly with low-cost drones. In April, President Vladimir Putin said the country had produced 1.5 million drones in the past year, but many more were needed, Reuters reported.

How scared should you be: The success of drones and semi-autonomous weapons in Ukraine and the Middle East is rapidly changing the nature of warfare. China showcased AI-powered drones alongside the usual heavy weaponry at its September military parade, while a U.S. plan to deploy thousands of inexpensive drones so far has fallen short of expectations. However, their low cost and versatility increase the odds they’ll end up in the hands of terrorists and other non-state actors. Moreover, the rapid deployment of increasingly autonomous arsenals raises concerns about ethics and accountability. “The use of autonomous weapons systems will not be limited to war, but will extend to law enforcement operations, border control, and other circumstances,” Bonnie Docherty, director of Harvard’s Armed Conflict and Civilian Protection Initiative, said in April.

 

Facing the fear: Autonomous lethal weapons are here and show no sign of yielding to calls for an international ban. While the prospect is terrifying, new weapons often lead to new treaties, and carefully designed autonomous weapons may reduce civilian casualties. The United States has updated its policies, requiring that autonomous systems “allow commanders and operators to exercise appropriate levels of human judgment over the use of force” (although the definition of appropriate is not clear). Meanwhile, Ukraine shows drones’ potential as a deterrent. Even the most belligerent countries are less likely to go to war if smaller nations can mount a dangerous defense.

 

A MESSAGE FROM LANDING.AI

Promo banner for: "Financial AI Hackathon Championship"

Join us for a fast-paced hackathon to build and demo intelligent document solutions for the financial world. Bring your ideas, use Agentic Document Extraction (ADE), and transform complex financial documents into actionable insights. Register today

 

Work With Andrew Ng

 

Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.

 

Subscribe and view previous issues here.

 

Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.

 

DeepLearning.AI, 400 Castro St., Suite 600, Mountain View, CA 94041, United States
