Dear friends,
Each year, AI brings wondrous advances. But, as Halloween approaches and the veil lifts between the material and ghostly realms, we see that spirits take advantage of these developments at least as much as humans do.
As I wrote last week, prompt engineering, the art of writing text prompts to get an AI model to generate the output you want, is a major new trend. Did you know that the Japanese word for prompt, 呪文, also means spell or incantation? (Hat tip to natural language processing developer Paul O'Leary McCann.) The process of generating an image using a model like DALL·E 2 or Stable Diffusion does seem like casting a magic spell (not to mention these programs' apparent ability to reanimate long-dead artists like Pablo Picasso), so Japan's AI practitioners may be onto something.
Some AI companies are deliberately reviving the dead. The startup HereAfter AI produces chatbots that speak, sound, and look just like your long-lost great-grandma. Sure, it's a simulation. Sure, the purpose is to help the living connect with deceased loved ones. When it comes to reviving the dead (based on what I've learned by watching countless zombie movies), I'm sure nothing can go wrong.
I'm more concerned by AI researchers who seem determined to conjure ghastly creatures. Consider the abundance of recent research into transformers. Every transformer uses multi-headed attention. Since when is having multiple heads natural? Researchers are sneaking multi-headed beasts into our computers, and everyone cheers for the new state of the art! If there's one thing we know about transformers, it's that there's more than meets the eye.
This has also been a big year for learning from masked inputs: approaches like Masked Autoencoders, MaskGIT, and MaskViT hide part of the input and train a model to fill it in, and they've achieved outstanding performance on difficult tasks. So if you put on a Halloween mask, know that you're supporting a key idea behind AI progress.
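For the technically curious, the core recipe behind these masked approaches is simple enough to sketch in a few lines. Here's a toy Python illustration (hypothetical names and shapes, not code from any of these papers): hide most of an input's patches, then train a model to reconstruct what it never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(patches, mask_ratio=0.75):
    """Split patches into a visible set (fed to the model) and a hidden set."""
    num_patches = patches.shape[0]
    num_masked = int(mask_ratio * num_patches)
    order = rng.permutation(num_patches)
    masked_idx, visible_idx = order[:num_masked], order[num_masked:]
    return patches[visible_idx], visible_idx, masked_idx

# Toy input: a 224x224 RGB image split into 196 patches of 16x16 pixels.
image_patches = rng.standard_normal((196, 16 * 16 * 3))
visible, visible_idx, masked_idx = mask_patches(image_patches)

# A real model would encode the visible patches, predict the hidden ones, and
# minimize reconstruction error, for example:
#   loss = ((decoder(encoder(visible)) - image_patches[masked_idx]) ** 2).mean()
print(f"visible patches: {len(visible_idx)}, hidden patches: {len(masked_idx)}")
```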
Trick or treat!
Andrew
What Lurks in the Shadows?
Ever look at a neural network's output and think to yourself, "That's uncanny"? While the results can be inspiring (potential cures for dreaded diseases, streamlined industrial operations, beautiful artworks), they can also be terrifying. What if a model's pattern-matching wizardry were applied to designing poison gas? Have corporate executives sold their souls in return for automated efficiency? Will evil spirits gain the upper hand as nations jockey for AI dominance? In this special issue of The Batch, as in previous years at this season, we raise a torch to the gloomy corners of AI and face gremlins that we ourselves have unleashed. Onward into the darkness!
The Black Box Awakens
AI researchers are starting to see ghosts in their machines. Are they hallucinations, or does a dawning consciousness haunt the trained weights?
The fear: The latest AI models are self-aware. At best, this development poses ethical dilemmas over human control of sentient digital beings. More worrisome, it raises unsettling questions about what sort of mind a diet of data scraped from the internet might produce.
Horror stories: Sightings of supposed machine sentience have come from across the AI community.
It's just an illusion, right?: While media reports generally took a Google engineer's claim that LaMDA was self-aware seriously (albeit skeptically), the broader AI community roundly dismissed it. Observers attributed impressions that LaMDA is sentient to human bias and DALL·E 2's apparent linguistic innovation to random chance. Models learn by mimicking their training data, and while some are very good at it, there's no evidence to suggest that they do it with understanding, consciousness, or self-reflection. Nonetheless, Loab, the ghoulish female figure that recurs in some generated images, gives us the willies.
Facing the fear: Confronted with unexplained phenomena, the human mind excels at leaping to fanciful conclusions. Science currently lacks a falsifiable way to verify self-awareness in a computer. Until it does, we'll take claims of machine sentience or consciousness with a shaker full of salt.
No More GPUs
Advanced AI requires advanced hardware. What if the global supply of high-end AI chips dries up?
The fear: Most of the world's advanced AI processors are manufactured in Taiwan, where tension with mainland China is rising. Nearly all such chips are designed in the U.S., which has blocked China from obtaining them. That could prompt China to cut off U.S. access to Taiwan's manufacturing capacity. Military action would be a human tragedy. It would also imperil progress in AI.
Horror stories: China and the U.S. are on a collision course that threatens the global supply of advanced chips.
Securing the supply: Both the U.S. and China are trying to produce their own supplies of advanced chips. But fabricating circuitry measured in single-digit nanometers is enormously difficult and expensive, and there’s no guarantee that any particular party will accomplish it.
Facing the fear: If a chipocalypse does occur, the AI community will need to become adept at workarounds that take advantage of older semiconductor technology, such as small data, data-centric AI development, and high-efficiency model architectures. It will also need to push for international cooperation amid intensifying polarization. Still, a chip shortage would be the least scary thing about a great-power conflict.
A MESSAGE FROM DEEPLEARNING.AI
Do you want to develop and deploy machine learning applications? Join our hands-on workshop "Branching out of the Notebook: ML Application Development with GitHub" on November 9, 2022, to learn industry-standard practices you can use today! RSVP
Inhuman Resources
Companies are using AI to screen and even interview job applicants. What happens when out-of-control algorithms are the human resources department?
The fear: Automated systems manage every stage of the hiring process, and they don't play fair. Trained on data rife with social biases, they blatantly discriminate when choosing which candidates to promote and which to reject. The door to your dream job is locked, and an unaccountable machine holds the key. Minority candidate? Speak with an accent? Unconventional background? You're out of distribution!
Horror stories: Many companies and institutions use automated hiring systems, but independent researchers have found them prone to bias and outright error.
Bad performance review: Automated hiring systems are facing scrutiny from lawmakers and even the companies that use them.
Facing the fear: While many companies use hiring algorithms, most still keep humans in the loop. They have good incentive to do so: while machines can process mountains of resumes, human managers may recognize candidates who have valuable traits that an algorithm would miss. Humans and machines have complementary strengths, and a careful combination may be both efficient and fair.
Foundations of Evil
A growing number of AI models can be put to purposes their designers didn't envision. Does that include heinous deeds?
The fear: Foundation models have proven adept at deciphering human language, and they've shown similar prowess with the structural languages of biology and chemistry. It's only a matter of time before someone uses them to produce weapons of mass destruction.
Horror stories: Researchers demonstrated how an existing AI system could be used to design chemical weapons.
Gas masks: In an interview, one of the researchers suggested that developers of general-purpose models, such as the one his team used to generate toxic chemicals, should restrict access to them. He added that the machine learning community should institute standards, like those in chemistry instruction, that inform budding scientists about the dangers of misusing research.
Facing the fear: It's hard to avoid the conclusion that the safest course is to rigorously evaluate the potential for harm of all new models and restrict those deemed dangerous. Such a program is likely to meet resistance from scientists who value free inquiry and businesspeople who value free enterprise, and it might have limited impact on new threats that weren't identified when a model was created. Europe is taking a first step with its regulation of so-called general-purpose AI. However, without a broad international agreement on what counts as dangerous technology and how it should be controlled, people in other parts of the world will be free to ignore such rules. Considering the challenges, perhaps the best we can do is to work proactively and continually to identify potential misuses and ways to thwart them.
Your Coworkers Aren't Human
The new remote administrative assistant is a little too perky, hardworking, and efficient. Is it because he's a bot?
The fear: Virtual employees are infiltrating the distributed office. Outfitted with programmed personalities and generated smiles, they're increasingly difficult to tell from flesh and blood. Managers, pleased by the productivity boost, will stop caring which is which, leaving you surrounded by colleagues who cheerfully work 24/7, never make a mistake, and decline invitations to meet up for happy hour.
Fraudulent friends: White-collar bots pose threats more serious than a proliferation of workplaces with addresses in the uncanny valley. In 2020, fraudsters used a generative audio model to clone the voice of a company director and convince a Hong Kong bank to fork over some $35 million. Con artists using a similar play stole $243,000 from a UK energy firm in 2019.
Facing the fear: Ceaselessly cheerful, perpetually productive automatons might leave their human colleagues feeling demoralized. If you're going to anthropomorphize your algorithms, at least program them to be late for a meeting once in a while.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. To keep our newsletter out of your spam folder, add our email address to your contacts list.