Dear friends,
The physical world is full of unique details that differ from place to place, person to person, and item to item. In contrast, the world of software is built on abstractions that make for relatively uniform coding environments and user experiences. Machine learning can be a bridge between these two worlds.
Software is largely homogeneous. When a search-engine company or smartphone maker upgrades its product, users all over the world are offered the same upgrade. This is economically efficient because, despite high fixed costs for design and development, it results in low marginal costs for manufacturing and distribution. These economics, in turn, support huge markets that can finance innovation on a grand scale.
In contrast, the real world is heterogeneous. One city is surrounded by mountains, another by plains, yet another by seas. One has paved roads, another dirt tracks. One has street signs in French, another in Japanese. Because of the lack of platforms and standards — or the impossibility of creating them — one size doesn’t fit all. Often it fits very few.
This is one reason why it’s difficult to design a self-driving car. Making a vehicle that could find its way around safely would be much easier if every city were built to a narrow specification. Instead, self-driving systems must be able to handle streets of any width, stop lights in any configuration, and a vast array of other variables. This is a tall order even for the most sophisticated machine learning systems. Software companies have been successful at getting users to adapt to one-size-fits-all products. Yet machine learning could help software capture and interact with the rich diversity of the physical world. Rather than forcing every city to build streets of the same composition, width, color, markings, and so on, we can build learning algorithms that enable us to navigate the world’s streets in all their variety.
We have a long way to go on this journey. Last week, I wrote about how Landing AI is using data-centric AI to make machine learning work under the wide variety of conditions found in factories. When I walk into a factory, I marvel at how two manufacturing lines that make an identical product may be quite different because they were built a few years apart, when different parts were available. Each factory needs its own trained model to recognize its own specific conditions, and much work remains to be done to make machine learning useful in such environments.
I hope that you, too, will see the heterogeneous world you live in and marvel at the beautiful diversity of people, buildings, objects, and cultures that surround you. Let’s use machine learning to better adapt our software to the world, rather than limit the world to adapt to our software.
Keep learning!
Andrew
News

Price Prediction Turns Perilous

The real-estate website Zillow bought and sold homes based on prices estimated by an algorithm — until Covid-19 confounded the model’s predictive power.

What’s new: Zillow, whose core business is providing real-estate information for prospective buyers, shut down its house-flipping division after the algorithm proved unable to forecast housing prices with sufficient accuracy, Zillow CEO Rich Barton told investors on a quarterly conference call. Facing losses of over $600 million, the company will lay off around 25 percent of its workforce. (A related algorithm called Zestimate continues to supply price estimates on the website.)

What went wrong: The business hinged on purchasing, renovating, and reselling a large number of properties. To turn a profit, it needed to estimate market value after renovation to within a few thousand dollars. Since renovation and re-listing take time, the algorithm had to forecast prices three to six months into the future — a task that has become far more difficult over the past 18 months.
What the CEO said: “Fundamentally, we have been unable to predict future pricing of homes to a level of accuracy that makes this a safe business to be in,” Barton explained on the conference call. “We’ve got these new assumptions [based on experience buying and selling houses] that we’d be naïve not to assume will happen again in the future. We pump them into the model, and the model cranks out a business that has a high likelihood, at some point, of putting the whole company at risk.”

Behind the news: Zestimate began as an ensemble of roughly 1,000 non-machine-learning models tailored to local markets. Last summer, the company revamped it as a neural network incorporating convolutional and fully connected layers that enable it to learn local patterns while scaling to a national level. The company is exploring uses of AI in natural language search, 3D tours, chatbots, and document understanding, as senior vice president of AI Jasjeet Thind explained in DeepLearning.AI’s exclusive Working AI interview.

Why it matters: Zillow’s decision to shut down a promising line of business is a stark reminder of the challenge of building robust models. Learning algorithms that perform well on test data often don’t work well in production because the distribution of input from the real world departs from that of the training set (data drift) or because the function that maps input x to prediction y changes, so a given input demands a different prediction (concept drift).

We’re thinking: Covid-19 has wreaked havoc on a wide variety of models that make predictions based on historical data. In a world that can change quickly, teams can mitigate risks by brainstorming potential problems and contingencies in advance, building an alert system to flag data drift and concept drift, using a human-in-the-loop deployment or other way to acquire new labels, and assembling a strong MLOps team.
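To make the alert-system idea concrete, here is a minimal sketch of a data-drift check that compares a feature's training distribution against recent production data using a two-sample Kolmogorov-Smirnov test. The test choice, threshold, and data are illustrative assumptions on our part, not anything Zillow actually used:

```python
import numpy as np
from scipy import stats

def detect_data_drift(train_feature, prod_feature, alpha=0.01):
    """Flag drift when a feature's production distribution differs
    significantly from its training distribution, per a two-sample
    Kolmogorov-Smirnov test."""
    statistic, p_value = stats.ks_2samp(train_feature, prod_feature)
    return bool(p_value < alpha)  # True means raise a drift alert

# Synthetic example: production data whose mean has shifted.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training-time data
shifted = rng.normal(loc=0.5, scale=1.0, size=5_000)  # drifted production data

print(detect_data_drift(train, shifted))  # a shift this large triggers the alert
```

In practice a team would run a check like this per feature on a schedule, and pair it with monitoring of label quality and model error to catch concept drift, which distribution tests alone cannot see.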
Who Has the Best Face Recognition?

Face recognition algorithms have come under scrutiny for misidentifying individuals. A U.S. government agency tested over 1,000 of them to see which are the most reliable.

What’s new: The National Institute of Standards and Technology (NIST) released the latest results of its ongoing Face Recognition Vendor Test. Several algorithms showed marked improvement over the previous round.

How it works: More than 300 developers submitted 1,014 algorithms to at least one of four tests. The test datasets included mugshots of adults, visa photos, and images of child exploitation.
Behind the news: NIST has benchmarked progress in face recognition since 2000. The first test evaluated five companies on a single government-sponsored image database. In 2018, thanks to deep learning, more than 30 developers beat a high score set in 2013.

Why it matters: Top-scoring vendors including Clearview AI, NtechLab, and SenseTime have been plagued by complaints that their products are inaccurate, prone to abuse, and threatening to individual liberty. These evaluations highlight progress toward more reliable algorithms, which may help win over critics.

We’re thinking: Companies that make face recognition systems need to undertake rigorous, periodic auditing. The NIST tests are a great start, and we need to go further still. For instance, Clearview AI founder Hoan Ton-That called his company’s high score on the NIST one-to-one task an “unmistakable validation” after widespread critiques of the company’s unproven accuracy and lack of transparency. Yet Clearview AI didn’t participate in the test that evaluated an algorithm’s ability to pick out an individual from a large collection of photos — the heart of its appeal to law enforcement.
A MESSAGE FROM DEEPLEARNING.AI

Have you checked out the updated Natural Language Processing Specialization? Courses 3 and 4 now cover state-of-the-art techniques with new and refreshed lectures and labs! Enroll now
This Chatbot Does Its Research

Chatbots often respond to human input with incorrect or nonsensical answers. Why not enable them to search for helpful information?

What’s new: Mojtaba Komeili, Kurt Shuster, and Jason Weston at Facebook devised a chatbot that taps knowledge from the internet to generate correct, timely conversational responses.

Key insight: A chatbot typically knows only what it has learned from its training set. Faced with a subject about which it lacks information, it can only make up an answer. If it can query a search engine, it can gather information it may lack.

How it works: The chatbot comprised two BART models. To train and test the system, the authors built a dataset of roughly 10,000 search-assisted dialogs. One human conversant chose a topic and started the conversation, while another, if necessary, queried a search engine and formulated replies. The authors tracked which statements led to a search, and which statements and searches led to which responses.
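The overall flow — decide whether to search, retrieve, then reply conditioned on what was found — can be sketched as follows. Every function here is a toy placeholder we made up for illustration; in the paper, the query generator and response generator are the two trained BART models, and the search step hits a real search engine:

```python
def generate_search_query(dialog_history):
    """Placeholder for the first model: decide whether to search
    and, if so, produce a query. Here, a naive heuristic searches
    whenever the last turn is a question."""
    last_turn = dialog_history[-1]
    if "?" in last_turn:
        return last_turn.rstrip("?")
    return None

def search_engine(query):
    """Placeholder for an internet search API, backed by a tiny
    hard-coded corpus."""
    corpus = {
        "who won the world cup in 2018": "France won the 2018 FIFA World Cup.",
    }
    return corpus.get(query.lower(), "")

def generate_response(dialog_history, retrieved):
    """Placeholder for the second model: condition the reply on the
    dialog history and any retrieved text."""
    if retrieved:
        return retrieved
    return "Tell me more!"

def chat(dialog_history):
    """Search-assisted response pipeline."""
    query = generate_search_query(dialog_history)
    docs = search_engine(query) if query else ""
    return generate_response(dialog_history, docs)

print(chat(["Who won the World Cup in 2018?"]))
# → France won the 2018 FIFA World Cup.
```

The point of the design is that both decisions — when to search and how to use the results — are learned from the 10,000 human search-assisted dialogs rather than hand-coded as above.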
Results: Human volunteers chatted with both the authors’ system and a BART model without internet access, and scored the two according to various metrics. They rated the authors’ chatbot more consistent (76.1 percent versus 66.5 percent), engaging (81.4 percent versus 69.9 percent), knowledgeable (46.5 percent versus 38.6 percent), and factually correct (94.7 percent versus 92.9 percent).

Why it matters: This work enables chatbots to extend and update their knowledge on the fly. It may pave the way to more conversational internet search as well as a convergence of conversational agents and intelligent assistants like Siri, Google Assistant, and Alexa, which already rely on internet search.

We're thinking: When it comes to chatbots, things are looking up!
Who Can Afford to Train AI?

The cost of training top-performing machine learning models has grown beyond the reach of smaller companies. That may mean less innovation all around.

What’s new: Some companies that would like to build a business on state-of-the-art models are settling for less, Wired reported. They’re exploring paths toward higher performance at a lower price.

How it works: Models are getting larger, and so is the amount of computation necessary to train them. The cost makes it hard to take advantage of the latest advances.
Behind the news: In 2020, researchers estimated the cost of training a model of 1.5 billion parameters (the size of OpenAI’s GPT-2) on the Wikipedia and Book corpora at $1.6 million. They gauged the cost to train Google’s Text-to-Text Transformer (T5), which encompasses 11 billion parameters, at $10 million. Since then, Google has proposed Switch Transformer, which scales the parameter count to 1 trillion — no word yet on the training cost.

Why it matters: The growing importance of AI, coupled with the rising cost of training large models, cuts into a powerful competitive advantage of smaller companies: their ability to innovate without being weighed down by bureaucratic overhead. This doesn’t just hurt their economic prospects; it slows the emergence of ideas that improve people’s lives and deprives the AI community of research contributions by small players.

We’re thinking: A much bigger model often can perform much better on tasks in which the data has a long tail and the market supports only one winner. But in some applications — say, recognizing cats in photos — bigger models deliver diminishing returns, and even wealthy leaders won’t be able to stay far ahead of competitors.
Work With Andrew Ng
Head of Digital Marketing: Factored seeks a highly experienced digital marketer with a strong knowledge of paid media, search engine optimization, campaign management, and marketing automation. Experience leading a marketing team and impeccable written and spoken English are required. Apply here
Senior Machine Learning Engineer: Landing AI is looking for a machine learning engineer to work with engineers and customers to build clean datasets and label books. A solid background in machine learning and deep learning is a must, along with proven ability to implement, debug, and deploy machine learning models. The position requires five years of experience in the industry or an academic degree in a related discipline. Apply here
Senior Technical Program Manager: Landing AI is looking for a program manager to bridge our team and business partners in executing engineering programs. The ideal candidate has a strong customer relationship management background, three years of experience in a direct program management position, and two years of experience in a technical role. Apply here
Community and Events Marketing Manager: DeepLearning.AI seeks a community and events marketing manager. The ideal candidate is a talented leader, communicator, and creative producer who is ready to create world-class events that keep the community connected and engaged with each other. Apply here
Digital Marketing Manager: DeepLearning.AI is looking for a digital marketing manager to oversee digital marketing campaigns, manage data and analytics, and optimize workflows and processes. The ideal candidate is a strong project manager, communicator, and technical wizard who can work closely with the content, social, events, and community teams. Apply here
Data Engineer (LatAm): Factored is looking for top data engineers with experience in data structures and algorithms, operating systems, computer networks, and object-oriented programming. Candidates must have experience with Python and excellent English-language skills. Apply here
Sales Development Engineer (Customer Facing): Landing AI seeks a salesperson to generate new business opportunities through calls, strategic preparation, and delivering against quota. Experience with inside sales and enterprise products and a track record of achieving corporate quotas are preferred. Apply here
Machine Learning Engineer (North America): Landing AI is searching for a machine learning engineer to work with internal and external engineers on novel models for customers. A solid background in machine learning and deep learning with a proven ability to implement, debug, and deploy machine learning models is a must. Apply here
Technical Writer: Landing AI seeks a writer to own the product education and documentation effort. The ideal candidate is self-motivated, can learn new tools and Landing AI applications quickly, and communicates effectively. Apply here
Director of Machine Learning: Landing AI seeks a machine learning director to define the vision for its products. This person will build and lead an effective machine learning team to execute projects in collaboration with other teams. Apply here
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. To keep our newsletter out of your spam folder, add our email address to your contacts list.