Dear friends,
In school, most questions have only one right answer. But elsewhere, decisions often come down to a difficult choice among imperfect options. I’d like to share with you some approaches that have helped me make such decisions.
When I was deciding where to set up a satellite office outside the U.S., there were many options. My team and I started by listing important criteria such as supply of talent, availability of local partners, safety and rule of law, availability of visas, and cost. Then we evaluated different options against these criteria and built a matrix with cities along one axis and our criteria along the other. That clarified which country would make a great choice.
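For readers who want to try this themselves, here is a minimal sketch of such a decision matrix in Python. The cities, criteria weights, and scores below are hypothetical placeholders for illustration, not the actual evaluation described above.

```python
# Minimal decision-matrix sketch. Cities, weights, and scores are
# hypothetical placeholders, not the evaluation described in the letter.
criteria_weights = {
    "talent_supply": 0.30,
    "local_partners": 0.20,
    "safety_rule_of_law": 0.20,
    "visa_availability": 0.15,
    "cost": 0.15,
}

# Each option is scored 1-5 against every criterion.
options = {
    "City A": {"talent_supply": 5, "local_partners": 3, "safety_rule_of_law": 4,
               "visa_availability": 4, "cost": 2},
    "City B": {"talent_supply": 4, "local_partners": 4, "safety_rule_of_law": 5,
               "visa_availability": 3, "cost": 3},
}

def weighted_score(scores, weights):
    """Combine per-criterion scores into a single weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank options by weighted total, highest first.
ranked = sorted(options.items(),
                key=lambda kv: weighted_score(kv[1], criteria_weights),
                reverse=True)
for city, scores in ranked:
    print(f"{city}: {weighted_score(scores, criteria_weights):.2f}")
```

The numbers matter less than the exercise of making the criteria and their relative weights explicit; the same structure applies to choices like selecting a data-acquisition tactic.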
When I feel stuck, I find it helpful to write out my thoughts.
Documenting decisions in this way also builds a foundation for further choices. For example, over the years, I’ve collected training data for many different kinds of problems. When I need to select among tactics for acquiring data, having been through the process many times, I know that some of the most important criteria are (i) the time needed, (ii) the number of examples, (iii) accuracy of the labels, (iv) how representative the input distribution is, and (v) cost.
Keep learning!
News

Deadly Drones Act Alone

Autonomous weapons are often viewed as an alarming potential consequence of advances in AI — but they may already have been used in combat.

What’s new: Libyan forces unleashed armed drones capable of choosing their own targets against a breakaway rebel faction last year, said a recent United Nations (UN) report. The document, a letter from the organization’s Panel of Experts on Libya to the president of the Security Council, does not specify whether the drones targeted, attacked, or killed anyone. It was brought to light by New Scientist.

Killer robots: In March of 2020, amid Libya’s ongoing civil war, the UN-supported Government of National Accord allegedly attacked retreating rebel forces using Kargu-2 quadcopters manufactured by Turkish company STM.
Behind the news: Many nations use machine learning in their armed forces, usually to bolster existing systems and typically with a human in the loop.
Why it matters: Observers have long warned that deploying lethal autonomous weapons on the battlefield could ignite an arms race of deadly machines that decide for themselves who to kill. Assuming the UN report is accurate, the skirmish in Libya appears to have set a precedent.

We’re thinking: Considering the problems that have emerged in using today’s AI for critical processes like deploying police, sentencing convicts, and making loans, it’s clear that the technology simply should not be used to make life-and-death decisions. We urge all nations and the UN to develop rules to ensure that the world never sees a real AI war.

One Model for Vision-Language

Researchers have proposed task-agnostic architectures for image classification tasks and language tasks. New work proposes a single architecture for vision-language tasks.
Results: The researchers evaluated GPV-I on COCO classification, COCO captioning, and VQA question answering, and compared its performance with models trained specifically for those tasks. On classification, GPV-I achieved 83.6 percent accuracy, while a ResNet-50 achieved 83.3 percent. On captioning, GPV-I achieved 1.023 CIDEr-D (a measure of similarity between generated and ground-truth captions; higher is better) compared to VLP’s 0.961 CIDEr-D. On question answering, GPV-I achieved 62.5 percent accuracy, based on the output’s similarity to a human answer, compared to ViLBERT’s 60.1 percent.
Radiologists Eye AI

AI lately has achieved dazzling success interpreting X-rays and other medical imagery in the lab. Now it’s catching on in the clinic.

What’s new: Roughly one-third of U.S. radiologists use AI in some form in their work, according to a survey by the American College of Radiology. One caveat: Many who responded positively may use older — and questionable — computer-aided detection, a technique for diagnosing breast cancer that dates to the 1980s, rather than newer methods.

What they found: The organization queried its membership via email and received 1,861 responses.
Behind the news: AI’s role in medical imaging is still taking shape, as detailed by Stanford radiology professor Curtis Langlotz in the journal Radiology: Artificial Intelligence. In 2016, a prominent oncologist wrote in the New England Journal of Medicine, “machine learning will displace much of the work of radiologists.” Two years later, Harvard Business Review published a doctor-penned essay headlined, “AI Will Change Radiology, but It Won’t Replace Radiologists.” Radiology Business recently asked, “Will AI replace radiologists?” and concluded, “Yes. No. Maybe. It depends.”

Why it matters: AI’s recent progress in medical imaging is impressive. Although the reported 30 percent penetration rate probably includes approaches that have been in use for decades, radiologists are on their way to realizing the technology’s promise.

We’re thinking: One-third down, two-thirds to go! Machine learning engineers can use such findings to understand what radiologists need and develop better systems for them.
A MESSAGE FROM DEEPLEARNING.AI

We’re proud to launch Practical Data Science, in partnership with Amazon Web Services (AWS)! This new specialization will help you develop the practical skills to deploy data science projects effectively and overcome machine learning challenges using Amazon SageMaker. Enroll now

Tesla All-In For Computer Vision

Tesla is abandoning radar in favor of a self-driving system that relies entirely on cameras.

What’s new: The electric car maker announced it will no longer include radar sensors on Model 3 sedans and Model Y compact SUVs sold in North America. Tesla is the only major manufacturer of autonomous vehicles to bet solely on computer vision. Most others rely on a combination of lidar, radar, and cameras.

How it works: Tesla has dropped radar only in the U.S. and only in its two most popular models. It aims to gather data and refine the technology before making the change in Model S, Model X, and vehicles sold outside the U.S.
Behind the news: Some people in the self-driving car industry favor using relatively expensive lidar and radar sensors in addition to low-cost cameras because they provide more information and thus greater safety. Camera-only advocates counter that humans can drive safely perceiving only images, so we should build AI that does the same. Most companies working on autonomous vehicles have chosen the more expensive route as the fastest way to reach full autonomy safely. Once they get there, the thinking goes, they can attend to bringing the cost down.

Why it matters: If Tesla’s bet on cameras pays off, it could have an outsize influence on future self-driving technology.

We’re thinking: While it’s great to see ambitious plans to commercialize computer vision, Tesla’s initiative will require tests on public streets. That means countless drivers will be the company’s unwitting test subjects — a situation that, as ever, demands strong oversight by road-safety authorities.
What AI Knows About Proteins

Transformer models trained on sequences of amino acids that form proteins have had success classifying and generating viable sequences. New research shows that they also capture information about protein structure.

What’s new: Transformers can encode the grammar of amino acids in a sequence the same way they do the grammar of words in a language. Jesse Vig and colleagues at Salesforce Research and University of Illinois at Urbana-Champaign developed methods to interpret such models that reveal biologically relevant properties.

Key insight: When amino acids bind to one another, the sequence folds into a shape that determines the resulting protein’s biological functions. In a transformer trained on such sequences, a high self-attention value between two amino acids can indicate that they play a significant role in the protein’s structure. For instance, the protein’s folds may bring them into contact.

How it works: The authors studied a BERT pretrained on a database of amino acid sequences to predict masked amino acids based on others in the sequence. Given a sequence, they studied the self-attention values in each layer of the model.
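For readers who want to poke at this kind of analysis themselves, here is a minimal sketch of extracting self-attention values from a pretrained protein language model. It assumes the Hugging Face transformers library and the publicly available Rostlab/prot_bert checkpoint, which is not necessarily the model or procedure the authors used.

```python
# Sketch: inspect self-attention between amino acids in a protein BERT.
# Assumes the Hugging Face `transformers` library and the public
# Rostlab/prot_bert checkpoint (not necessarily the paper's model).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_bert")
model = AutoModel.from_pretrained("Rostlab/prot_bert", output_attentions=True)

# This tokenizer expects amino acids separated by spaces.
sequence = "M K T A Y I A K Q R"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, heads, seq_len, seq_len).
last_layer = outputs.attentions[-1][0]  # (heads, seq_len, seq_len)
attn = last_layer.mean(dim=0)           # average over heads

# Report the token pairs with the highest attention weights.
# Note that special tokens ([CLS], [SEP]) are included here.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
seq_len = attn.shape[-1]
values, indices = attn.flatten().topk(5)
for v, idx in zip(values, indices):
    i, j = divmod(idx.item(), seq_len)
    print(f"{tokens[i]} -> {tokens[j]}: attention {v:.3f}")
```

Averaging over heads is a simplification for illustration; the analysis described above examines attention layer by layer and relates high-attention pairs to known structural properties.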
Results: The authors compared their model’s findings with those reported in other protein databases. The deeper layers of the model showed an increasing proportion of high-attention pairs in which the amino acids actually were in contact, up to 44.7 percent, while the proportion of all amino acid pairs in contact was 1.3 percent. The chance that the second amino acid in a high-attention pair was part of a binding site didn’t rise steadily across layers, but it reached 48.2 percent, compared to a 4.8 percent chance that any amino acid was part of a binding site.

Why it matters: A transformer model trained only to predict missing amino acids in a sequence learned important things about how amino acids form a larger structure. Interpreting self-attention values reveals not only how a model works but also how nature works.

We’re thinking: Such tools might provide insight into the structure of viral proteins, helping biologists discover ways to fight viruses including SARS-CoV-2 more effectively.
A MESSAGE FROM DEEPLEARNING.AI

Work With Andrew Ng
Head of Applied Science: Workera is looking for a head of applied science to make our data valid and reliable and leverage it to create algorithms that solve novel problems in talent and learning. You will own our data science and machine learning practice and work on challenging and exciting problems. Apply here
Head of Engineering: Workera is looking for an engineering leader to manage and grow our world-class team and unlock its potential. You will lead engineering execution and delivery, make Workera a rewarding workplace for engineers, and participate in company oversight. Apply here
Company Builder (U.S.): AI Fund is looking for a senior associate or principal-level candidate to join our investment team to help us build and incubate ideas that are generated internally. Strong business acumen and market research capabilities are more important than technical background. Apply here
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.