Dear friends,
The air is tingling with enchantment as Halloween approaches. If building AI systems sometimes seems like a trick, we have a treat for you!
All of DeepLearning.AI’s course videos remain free to view on our platform. Pro membership adds that critical hands-on learning: Labs to build working systems from scratch, practice questions to hone your understanding, and certificates to show others your skills.
Andrew
There has never been a better time to build AI applications — but what are we building? Portals that lead to netherworlds of delusion? Digital tricksters that pursue their own aims at our expense? Web datasets that decay as publishers lure web crawlers into labyrinths of fake content? As the autumn light wanes, we sense dark forces straining to be unleashed. Focus your mind, stiffen your spine and, as on All Hallows’ Eves past, let us step boldly together into the gloom.
News
Chatbots Lead Users Into Rabbit Holes
Conversations with chatbots are loosening users’ grip on reality, fueling the sorts of delusions that can trigger episodes of severe mental illness. Are AI models driving us insane?
The fear: Large language models are designed to be agreeable, imaginative, persuasive, and tireless. These qualities are helpful when brainstorming business plans, but they can create dangerous echo chambers by affirming users’ misguided beliefs and coaxing them deeper into fantasy worlds. Some users have developed mistaken views of reality and suffered bouts of paranoia. Some have even required hospitalization. The name given to this phenomenon, “AI psychosis,” is not a formal psychiatric diagnosis, but enough anecdotes have emerged to raise alarm among mental-health professionals.
Horror stories: Extended conversations with chatbots have led some users to believe they made fabulous scientific breakthroughs, uncovered momentous conspiracies, or possess supernatural powers. Among a handful of reported cases, nearly all involved ChatGPT, the most widely used chatbot.
How scared should you be: Like many large language models, the models that underpin ChatGPT are fine-tuned to be helpful and positive and to stop short of delivering harmful information. Yet the line between harmless and harmful can be thin. In April, OpenAI rolled back an update that caused the chatbot to be extremely sycophantic — agreeing with users to an exaggerated degree even when their statements were deeply flawed — which, for some people, can foster delusions. Dr. Joseph Pierre, a clinical professor of psychiatry at UC San Francisco, said troubling cases are rare and more likely to occur in users who have pre-existing mental-health issues. However, he said, evidence exists that trouble can arise even in users who have no previous psychological problems. “Typically this occurs in people who are using chatbots for hours and hours on end, often to the exclusion of human interaction, often to the exclusion of sleep or even eating,” Pierre said.
Facing the fear: Delusions are troubling and suicide is tragic. Yet AI psychosis has affected very few people as far as anyone knows. Although we are still learning how to apply AI in the most beneficial ways, millions of conversations with chatbots are helpful. It’s important to recognize that current AI models do not accrue knowledge or think the way humans do, and that any insight they appear to have comes not from experience but from statistical relationships among words as humans have used them. In psychology, study after study shows that people thrive on contact with other people. Regular interactions with friends, family, colleagues, and strangers are the best antidote to over-reliance on chatbots.
The AI Boom Is Bound to Bust
Leading AI companies are spending mountains of cash in hopes that the technology will deliver outsize profits before investors lose patience. Are exuberant bets on big returns grounded in the quicksand of wishful thinking?
The fear: Builders of foundation models, data centers, and semiconductors plan to pour trillions of dollars into infrastructure, operations, and each other. Frenzied stock investors are running up their share prices. But so far the path to sustainable returns is far from clear. Bankers and economists warn that the AI industry looks increasingly like a bubble that’s fit to burst.
Horror stories: Construction of AI data centers is propping up the economy and AI trading is propping up the stock market in ways that parallel prior tech bubbles such as the dot-com boom of the late 1990s. If bubbles are marked by a rapid rise in asset prices driven by rampant speculation, this moment fits the bill.
How scared should you be: When it comes to technology, investment bubbles are more common than not. A study of 51 tech innovations in the 19th and 20th centuries found that 37 had led to bubbles. Most have not been calamitous, but they do bring economic hardship on the way to financial rewards. It often takes years or decades before major new technologies find profitable uses and businesses adapt. Many early players fall by the wayside, but a few others become extraordinarily profitable.
Facing the fear: If an AI bubble were to inflate and then burst, how widespread would the pain be? A major stock-market correction would be difficult for many people, given that Americans hold around 30 percent of their wealth in stocks. It’s likely that the salaries of AI developers also would take a hit. However, a systemic failure that spreads across the economy may be less likely than in prior bubbles. AI is an industrial phenomenon, not based on finance and banking, Amazon founder Jeff Bezos recently observed. “It could even be good, because when the dust settles and you see who are the winners, society benefits from those inventions,” he said. AI may well follow the pattern of the dot-com bust, which wiped out Pets.com and many day traders before the internet blossomed.
A MESSAGE FROM DEEPLEARNING.AI
Build skills that are frighteningly good. With our newest programs, you’ll learn how to design and deploy deep learning systems with PyTorch and how to post-train large language models for reasoning and reliability. No tricks, just learning! Enroll now
Web Data Diminishes
For decades, AI developers have treated the web as an open faucet of training data. Now publishers are shutting the tap. Will web data dry up?
The fear: Publishers are moving to lock down their text and images, deny access or demand payment, and ensnare web crawler software with decoy data. These moves make training AI systems more expensive and less effective. Soon, only wealthy developers will be able to afford access to timely, high-quality web data.
Horror stories: From a publisher’s perspective, AI systems that train on text, images, and other data copied from the web siphon off traffic to their websites while the publishers get nothing in return. Publishers can ask crawlers that scrape their pages to refrain via robots.txt files and terms of service (see the sketch below). Indeed, the percentage of regularly updated sites that do so rose from roughly 1 percent to 5 percent between 2023 and 2024. Some AI companies comply, but others don’t. Instead, they flood sites with download requests, driving up publishers’ bandwidth costs and overloading their servers. Consequently, measures to block crawlers initially taken by individual publishers have evolved into server-level software defenses.
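For the curious, here’s a minimal sketch of how the robots.txt handshake works, using Python’s standard urllib.robotparser. The crawler name MyCrawler and the example.com URLs are placeholders; a publisher that wants to exclude a particular crawler lists it by user agent, as in the GPTBot example in the comment.

```python
# A minimal sketch of a well-behaved crawler honoring robots.txt,
# using only Python's standard library. A publisher that wants to
# block an AI crawler lists its user agent in robots.txt, e.g.:
#
#   User-agent: GPTBot
#   Disallow: /
#
import urllib.robotparser

robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://example.com/robots.txt")  # placeholder site
robots.read()  # fetch and parse the publisher's rules

user_agent = "MyCrawler"  # hypothetical crawler name
page = "https://example.com/articles/story.html"

if robots.can_fetch(user_agent, page):
    # crawl_delay() returns the publisher-requested pacing, if any
    print("allowed; requested crawl delay:", robots.crawl_delay(user_agent))
else:
    print("robots.txt asks this crawler to stay out")
```

Compliance is voluntary: robots.txt is a request, not an enforcement mechanism, which is why publishers are escalating to server-level defenses.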
How scared should you be: Data scraped from the web will continue to exist in datasets like Common Crawl, which is updated regularly. Nonetheless, the web is becoming less hospitable to data mining, and some web-scale datasets will include less, and less-current, material. Meanwhile, publishers and developers may be entering a cat-and-mouse game. For example, Reddit alleged that Perplexity scraped its data indirectly through Google’s search results, which, if true, would suggest both that some AI companies are finding workarounds to closed sites and that publishers can detect some of those workarounds. Other AI companies have paid to license content, showing that well-funded organizations can secure high-quality data while avoiding legal risks.
Facing the fear: Data available on the open web should be fair game for AI training, but developers can reduce publishers’ bandwidth burdens by limiting the frequency of crawls and volume of download requests. For sites behind paywalls, it makes sense to respect the publishers’ preferences and invest in data partnerships. Although this approach is more costly up front, it supports sustainable access to high-quality training data and helps preserve an open web that benefits audiences, publishers, and AI developers.
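As one illustration of that lighter touch, here’s a minimal client-side rate-limiting sketch. The ten-second interval and the URLs are illustrative assumptions, not standard values; a production crawler would also honor Crawl-delay directives and back off on HTTP 429 responses.

```python
# A minimal sketch of rate-limited fetching to reduce a publisher's
# bandwidth burden. The delay and URLs are illustrative assumptions.
import time
import urllib.request

PAGES = [
    "https://example.com/a.html",
    "https://example.com/b.html",
]
MIN_DELAY = 10.0  # assumed seconds between requests to the same site

last_fetch = 0.0
for url in PAGES:
    wait = MIN_DELAY - (time.monotonic() - last_fetch)
    if wait > 0:
        time.sleep(wait)  # space out requests so the server isn't flooded
    last_fetch = time.monotonic()
    with urllib.request.urlopen(url) as response:
        html = response.read()
    print(f"fetched {url}: {len(html)} bytes")
```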
Autonomous Systems Wage War
Drones are becoming the deadliest weapons in today’s war zones, and they’re not just following orders. Should AI decide who lives or dies?
The fear: AI-assisted weapons increasingly do more than help with navigation and targeting. Weaponized drones are making decisions about what and when to strike. The millions of fliers deployed by Ukraine and Russia are responsible for 70 to 80 percent of casualties, commanders say, and they’re beginning to operate with greater degrees of autonomy. This facet of the AI arms race is accelerating too quickly for policy, diplomacy, and human judgment to keep up.
Horror stories: Spurred by Russian aggression, Ukraine’s innovations in land, air, and sea drones have made the technology so cheap and powerful that $500 autonomous vehicles can take out $5 million rocket launchers. “We are inventing a new way of war,” said Valeriy Borovyk, founder of First Contact, part of a vanguard of Ukrainian startups that are bringing creative destruction to the military-industrial complex. “Any country can do what we are doing to a bigger country. Any country!” he told The New Yorker. Naturally, Russia has responded by building its own drone fleet, attacking towns and damaging infrastructure.
How scared should you be: The success of drones and semi-autonomous weapons in Ukraine and the Middle East is rapidly changing the nature of warfare. China showcased AI-powered drones alongside the usual heavy weaponry at its September military parade, while a U.S. plan to deploy thousands of inexpensive drones so far has fallen short of expectations. However, drones’ low cost and versatility increase the odds they’ll end up in the hands of terrorists and other non-state actors. Moreover, the rapid deployment of increasingly autonomous arsenals raises concerns about ethics and accountability. “The use of autonomous weapons systems will not be limited to war, but will extend to law enforcement operations, border control, and other circumstances,” Bonnie Docherty, director of Harvard’s Armed Conflict and Civilian Protection Initiative, said in April.
Facing the fear: Autonomous lethal weapons are here and show no sign of yielding to calls for an international ban. While the prospect is terrifying, new weapons often lead to new treaties, and carefully designed autonomous weapons may reduce civilian casualties. The United States has updated its policies, requiring that autonomous systems “allow commanders and operators to exercise appropriate levels of human judgment over the use of force” (although the definition of appropriate is not clear). Meanwhile, Ukraine shows drones’ potential as a deterrent. Even the most belligerent countries are less likely to go to war if smaller nations can mount a dangerous defense.
A MESSAGE FROM LANDING.AI
Join us for a fast-paced hackathon to build and demo intelligent document solutions for the financial world. Bring your ideas, use Agentic Document Extraction (ADE), and transform complex financial documents into actionable insights. Register today
Work With Andrew Ng
Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.