Dear friends,
On Halloween, the veil lifts between the spirit and AI worlds, allowing the two to pass through one another. The resulting paranormal — or, as AI practitioners call it, paragaussian — phenomena raise questions like these:
What do you call it when it takes repeated practice to make a scary jack-o’-lantern?
Responsible AI requires being candid about what it can do. Who’s the best person to help with this?
The ghost of a machine learning engineer visited a museum and defaced all the paintings. Why?
On Halloween night, when kids in costume go from house to house and only get unpopped popcorn, what do you call it?
Keep spooking!
P.S. When my daughter Nova was six months old, I bought her a panda stuffed animal. She liked it, and after many panda-related requests, guess what my Halloween costume is? The lesson for me is: Be careful what presents you give, lest they lead to panda-monium.
Be Very Afraid . . .
Something Wicked This Way Comes
The days grow short, trees shed their leaves, and shadows loom in the failing light. Halloween is upon us, and once again we’re beset by thoughts that all is not well in our world. We sense, lurking in the dusk, the presence of weaponized drones that attack of their own volition, disease-carrying models that breed like rats, algorithms that drive people mad with power. Let us step boldly into the darkness and lift a flaming PyTorch to light the way.
Don’t Be Evil
Tech companies generally try to be (or to appear to be) socially responsible. Would some rather let AI’s negative impacts slide? The fear: Companies with the know-how to apply AI at scale dominate the information economy. This gives them an overpowering incentive to release harmful products and services, jettison internal checks and balances, buy or lie their way out of regulations, and ignore the trail of damage in their wake. Horror stories: When you move fast and break things, things get broken.
Is a corporate dystopia inevitable? So far, most government moves to regulate AI have been more bark than bite.
Facing the fear: Some tech giants have demonstrated an inability to restrain themselves, strengthening arguments in favor of regulating AI. At the same time, AI companies themselves must publicly define acceptable impacts and establish regular independent audits to detect and mitigate harm. Ultimately, AI practitioners who build, deploy, and distribute the technology are responsible for ensuring that their work brings a substantial net benefit.
Killer Robots Are Here
War is already bad enough. What happens when human combatants are replaced by machines? The fear: Autonomous weapons will become an inevitable aspect of warfare. AI that can’t reliably tell friend from foe will strike mistaken targets, kill civilians, and attack enemies who have surrendered. Systems trained to react to threats quickly will escalate conflicts. Humans won’t be held accountable for automated atrocities. Horror stories: While world leaders debate the ethics of fully autonomous weapons, killer robots are already on the march.
Quivering in your (combat) boots? Efforts to automate weaponry have a long history. Lately, AI has found its way into command and control systems. It’s not too late to establish an international ban on autonomous weapons, but the door is closing fast.
Facing the fear: Countries need ways to defend themselves. An effective ban on autonomous weapons must start with a clear line between what is and isn’t acceptable. Machine learning engineers should play a key role in drawing it.
New Models Inherit Old Flaws
Is AI becoming inbred? The fear: The best models increasingly are fine-tuned versions of a small number of so-called foundation models that were pretrained on immense quantities of data scraped from the web. The web is a repository of much that’s noble in humanity — but also much that’s lamentable, including social biases, ignorance, and cruelty. Consequently, while the fine-tuned models may attain state-of-the-art performance, they also exhibit a penchant for prejudice, misinformation, pornography, violence, and other undesirable traits. Horror stories: Over 100 Stanford University researchers jointly published a paper that outlines some of the many ways foundation models could cause problems in fine-tuned implementations.
How firm is the foundation? The Stanford paper stirred controversy as critics took issue with the authors’ definition of a foundation model and questioned the role of large, pretrained models in the future of AI. Stanford opened a center to study the issue.
Facing the fear: It’s not practical to expect every user of a foundation model to audit it fully for everything that might go wrong. We need research centers like Stanford’s — in both public and private institutions — to investigate the effects of AI systems, how harmful capabilities originate, and how they spread.
A MESSAGE FROM DEEPLEARNING.AI
DeepLearning.AI has updated the Natural Language Processing Specialization with new and improved content. We partnered with Hugging Face to create lectures and labs to give you more hands-on experience with transformer models! Enroll now
Democracies Embrace Surveillance
What if AI-enabled monitoring isn’t just for dictators and despots? The fear: Under the pretext of maintaining law and order, even countries founded on a commitment to individual rights allow police to take advantage of smart-city infrastructure and smart-home devices. The ability to spy on citizens is rife with moral hazards and opens the door to authoritarian control. Horror stories: Law enforcement agencies worldwide have found AI-driven surveillance irresistible. Reports of deals between police and vendors portend further invasive practices to come.
Panopticon now? Most Americans believe that, in the hands of law enforcement, face recognition will make society safer. Yet such systems are notoriously prone to misuse, inaccuracy, and bias. Several U.S. cities and states have passed laws that restrict or ban police use of face recognition, and others are considering similar legislation. The European Parliament recently passed a nonbinding ban on the practice.
Facing the fear: Society should guarantee basic rights to privacy. That said, the impulse to ban face recognition carries its own danger. Ceding AI development to repressive regimes risks a proliferation of systems that enable repressive uses. Instead, elected leaders should establish rules to ensure that such systems are transparent, auditable, explainable, and secure.
Artistry Is Obsolete
Is human creativity being replaced by the synthetic equivalent? The fear: AI is cranking out increasingly sophisticated visual, musical, and literary works. AI-generated media will flood the market, squeezing out human artists and depriving the world of their creativity. Horror stories: The most compelling AI-generated art today requires people who curate a system’s inputs and outputs to ensure that automated creations have a recognizable aesthetic character. Tomorrow is up for grabs.
The end of art history? AI-generated art has edged its way into both fine-art and commercial worlds.
Facing the fear: AI makes a wonderful complement to human creativity, producing variations, offering alternatives, or supplying a starting point for traditional artistic exploration. On the other hand, the best current models can produce output that, to an untrained eye or ear, comes close to human artworks. And they’re only going to get better.
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. To keep our newsletter out of your spam folder, add our email address to your contacts list.