Dear friends,
The United States Federal Reserve has signaled that it will continue to raise interest rates. As one consequence, the stock market is significantly down relative to the beginning of the year, particularly tech stocks. What does this mean for AI? In this two-part series, I’d like to discuss what I think will happen — which may have implications for your AI projects — and what I think should happen. Unfortunately, these are different things.
The U.S. has enjoyed low interest rates over the past decade. Simplifying a bit, if r is the interest rate (if the interest rate is 2%, then r = 0.02), then one dollar T years in the future is worth 1/(1+r)^T as much as one dollar today. The larger r is, the less that future dollar is worth relative to its value today. If you’re familiar with the discount factor γ (the Greek letter gamma) in reinforcement learning, you may notice that γ plays a similar role to 1/(1+r) and weights rewards T steps in the future by γ^T.
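Here is the formula in code — a small numerical illustration (the function name is my own, not standard notation):

```python
# Present value of one dollar received T years in the future,
# discounted at annual interest rate r.
def present_value(r: float, T: int) -> float:
    return 1.0 / (1.0 + r) ** T

# At r = 2%, a dollar 10 years out is worth about 82 cents today.
print(present_value(0.02, 10))   # ~0.8203

# The reinforcement learning analogy: with discount factor
# gamma = 1 / (1 + r), a reward T steps ahead is weighted by
# gamma ** T -- the same number as above.
gamma = 1.0 / 1.02
print(gamma ** 10)               # ~0.8203
```

Raising r shrinks the present value of every future dollar, which is why higher rates weigh most heavily on companies whose profits lie far in the future.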
Many investors are wondering if the stock market’s 13-year bull run has come to an end, and if the next era will be very different. If interest rates continue to rise, then:
What this means for our community is that we should be ready for increased pressure to develop projects that demonstrate near-term, tangible value. For example, if you can explain how your AI system — for reading hospital records, inspecting parts, ensuring worker safety, or what have you — can save $1 million in two years, it will be easier to justify the $300,000 annual budget that you might be asking for. So if you’re looking for funding for a company or project, consider near-term impacts or financial justifications you can develop.
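To see how rising rates tighten that math, here is a toy net-present-value calculation using the letter’s numbers ($300,000 annual budget, $1 million in savings after two years). The cash-flow timing is my simplifying assumption, not something the letter specifies:

```python
def npv(cash_flows, r):
    """Net present value of cash_flows[t] received t years from now."""
    return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows))

# Pay $300k now and in year 1; the $1M in savings lands in year 2.
# (Illustrative timing assumption.)
flows = [-300_000, -300_000, 1_000_000]

# The same project looks less attractive as rates rise:
# NPV shrinks from roughly $367k at r = 2% to roughly $280k at r = 8%.
for r in (0.02, 0.08):
    print(f"r = {r:.0%}: NPV = ${npv(flows, r):,.0f}")
```

The project stays worthwhile in both cases here, but a longer payback period or higher rate can flip the sign — hence the pressure toward near-term, tangible value.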
P.S. I’m grateful to Erik Brynjolfsson, a brilliant economist who has done seminal work on tech’s impact on the economy, for helping me think through the contents of this letter. Responsibility for any errors lies with me.
News

Actors Act Against AI
Performing artists are taking action to protect their earning power against scene-stealing avatars.

What’s new: Equity, a union of UK performing artists, launched a campaign to pressure the government to prohibit unauthorized use of a performer’s AI-generated likeness. The union published tips to help artists who work on AI projects exercise control over their performances and likenesses.
What performers think of AI: Equity conducted a survey of its members between November 2021 and January 2022. Among the 430 people who responded:
Why it matters: While synthetic images, video, and audio contribute to countless exciting works, they’re an obvious source of concern for artists who wish to preserve — never mind increase — their earning power. These developments also affect members of the audience, who may find that their favorite performers have less and less to do with the productions they nominally appear in.
Winning The Google Game
AI startups are helping writers tailor articles that appear near the top of Google’s search results.

What’s new: At least 14 companies sell access to software that uses GPT-3, the language model from OpenAI, to generate headlines, product descriptions, blog posts, and video scripts, Wired reported.

How it works: The services enable people who have little experience or skill in writing to make content that’s optimized for web search engines.
Machine privilege: Google’s guidelines state that it may take action against automatically generated content. However, a Google spokesperson told Wired that the company may take a more lenient approach toward generated text that has been designed to serve readers rather than manipulate search results.

Behind the news: Neural networks are reaching into video production, too. Given a script, Synthesia produces customized videos, rendered by a generative adversarial network, aimed at corporate customers. Given a finished video, Mumbai-based Videoverse tags key highlights and renders them into clips optimized for sharing on social media.

Why it matters: Producing text for online marketers is an early commercial use case for text-generation models. The tech gives people who don’t specialize in marketing a leg up and raises the bar for professional writers — assuming it produces consistently high-quality output. In any case, AI has found a lucrative place in advertising and marketing, helping to drive $370 billion in ad sales this year, according to the marketing agency GroupM.

We’re thinking: AI may write compelling marketing copy, but it’s still a long way from producing a great newsletter. Right?!
A MESSAGE FROM DEEPLEARNING.AI
In FourthBrain’s new Introduction to MLOps course, you’ll walk through the AI product life cycle by building a minimum viable product using the latest tools. This live course meets on Tuesdays from July 5 to July 26, 2022, 5 p.m. to 8 p.m. Central European Summer Time. Join us! Learn more
Deep Learning for Deep Discounts
With prices on the rise, an app analyzes user data to deliver cash back on retail purchases.

What’s new: Upside, a startup based in Washington, D.C., works with gas stations, grocery stores, and restaurants to offer personalized discounts to consumers, The Markup reported.
Behind the news: Founded in 2015, Upside says its services reach 30 million U.S. users. Lyft and Uber integrate it with their driver apps to offset inflation-driven spikes in gas prices. Fuel-saving apps GasBuddy and Checkout51 offer Upside-powered promotions, and DoorDash and Instacart have offered Upside to their drivers.

Yes, but: Upside’s algorithmic approach to calculating discounts may leave some customers feeling left out.
Why it matters: Many families, individuals, and employees are on the lookout for ways to cut their expenses, and they may consider surrendering personal information a fair trade. However, the terms of the deal should be transparent and easy to understand. It’s deceptive to offer discounts that don’t pan out or diminish without warning as a casual shopper becomes a steady customer.
Right-Sizing Confidence
An object detector trained exclusively on urban images might mistake a moose for a pedestrian and express high confidence in its poor judgment. New work enables object detectors, and potentially other neural networks, to lower their confidence when they encounter unfamiliar inputs.

What’s new: Xuefeng Du and colleagues at the University of Wisconsin-Madison proposed Virtual Outlier Synthesis (VOS), a training method that synthesizes representations of outliers to make an object detector more robust to unusual examples.

Key insight: Neural networks that perform classification (including object detectors) learn to divide high-dimensional space into regions that contain different classes of examples. Having populated a region with examples of a given class, they can include nearby empty areas in that region. Then, given an outlier, they’re likely to confidently label it with a class even if all familiar examples are far away. But a model can learn to recognize when low confidence is warranted if it’s given synthetic points that fall into those empty areas and trained to distinguish between synthetic and actual points.

How it works: Given an image, an object detector generates two types of outputs: bounding boxes and classifications for those boxes. VOS adds a third: the model’s degree of certainty that the image is an outlier.
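The key insight lends itself to a toy sketch: fit a Gaussian to a class’s feature embeddings, sample candidates from it, and keep only the low-likelihood ones as virtual outliers. The one-dimensional setup, names, and threshold below are illustrative assumptions, not the authors’ implementation (VOS works in an object detector’s feature space and fits an extra uncertainty output with a specialized loss):

```python
import random
import statistics

random.seed(0)

# Stand-in for one class's feature embeddings (the paper uses an object
# detector's internal features; this 1-D toy just keeps the idea visible).
features = [random.gauss(2.0, 0.5) for _ in range(500)]

# Fit a class-conditional Gaussian to the features.
mu = statistics.fmean(features)
sigma = statistics.stdev(features)

# Sample candidates from the fitted Gaussian, then keep only the
# low-likelihood ones (far from the mean) as "virtual outliers".
candidates = [random.gauss(mu, sigma) for _ in range(2000)]
distances = sorted(abs(c - mu) for c in candidates)
threshold = distances[int(0.95 * len(distances))]  # keep the ~5% farthest
virtual_outliers = [c for c in candidates if abs(c - mu) >= threshold]

# During training, these synthetic points would be labeled "outlier" and
# the real features "inlier" to fit the model's extra uncertainty output.
print(len(virtual_outliers))  # roughly 100 of the 2,000 candidates
```

The virtual outliers land exactly in the sparsely populated fringe of the class region, which is where an untreated model would otherwise extend its confident predictions.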
Results: VOS maintained object detectors’ classification performance while reducing their false-positive rates. For instance, a ResNet-50 trained using VOS on a dataset that depicts persons, animals, vehicles, and indoor objects achieved object-detection performance of 88.66 percent AUC with a false-positive rate (FPR95) of 49.02 percent. By comparison, a ResNet-50 trained via a method that used a GAN to generate outlier images achieved lower object-detection performance (83.67 percent AUC) and a much higher false-positive rate (60.93 percent FPR95).

Why it matters: It’s difficult to teach a neural network that the training dataset is just a subset of a diverse world. Moreover, the data distribution can drift between training and inference. VOS tackles the hard problem of encouraging object detectors to exercise doubt about unfamiliar objects without reducing their certainty with respect to familiar ones.

We’re thinking: The typical machine learning model learns about known knowns so it can recognize unknown knowns. While it’s a relief to have a neural network that identifies known unknowns, we look forward to one that can handle unknown unknowns.
Work With Andrew Ng
Full-Stack Ruby On Rails/React Web Developer: ContentGroove is hiring a full-stack developer in North America to join its remote engineering team. In this role, you’ll work with the product and design teams to help define features from a functional perspective. Join a fast-growing company with an outstanding executive team! Apply here
Frontend Engineer (Taipei): DeepLearning.AI is looking for a frontend engineer with strong computer-science fundamentals and drive to improve learner experiences. In this role, you’ll execute early-stage development of an educational environment for AI-related topics. Apply here
Data Engineer (Latin America): Factored seeks top data engineers with experience in data structures and algorithms, operating systems, computer networks, and object-oriented programming. Experience with Python and excellent English skills are required. Apply here
Software Development Engineer (Latin America): Landing AI is looking for a software engineer with proficiency in best practices, programming languages, and end-to-end product development. In this role, you’ll help to design and develop infrastructure for machine learning services and deliver high-quality AI products. Apply here
UX Designer: Landing AI seeks a UX designer who has experience with enterprise software and applications. In this role, you’ll be central to shaping the company’s products and design culture. Apply here
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. To keep our newsletter out of your spam folder, add our email address to your contacts list.