Dear friends,
Should we be optimistic or pessimistic about the prospects for ethical AI? I meet people who are encouraged by the progress we’ve made toward making AI more responsible and free of bias. I also see people who are dismayed by the daunting challenges we face.
Whether one is an optimist or pessimist often depends on the frame of comparison. Do you compare where we are with how far we’ve come or with how far we’ve yet to go? Beyond AI, society has made remarkable progress against racism in the last few decades. Within the past year, the Black Lives Matter movement has raised awareness of racism in the U.S., and George Floyd’s murderer was convicted. Yet the work ahead is daunting. Deeply rooted problems like racism and sexism seem nearly impossible to cure. Will we ever get past them?
Keep learning!
Andrew
News

Minorities Reported

An independent investigation found evidence of racial and economic bias in a crime-prevention model used by police departments in at least nine U.S. states.

What’s new: Geolitica, a service that forecasts where crimes will occur, disproportionately targeted Black, Latino, and low-income populations, according to an analysis of leaked internal data by Gizmodo and The Markup. The reporters found the data on an unsecured police website. Geolitica, formerly called PredPol, changed its name in March.

How it works: The model predicts where crimes are likely to occur, helping police departments allocate personnel. The company trains a separate model for each jurisdiction on two to five years of crime data, including dates, locations, and types.
Sources of bias: Critics point to pervasive biases in the models’ training data as well as potential adverse social effects of scheduling patrols according to automated crime predictions.
The response: Geolitica confirmed that the data used in the investigation “appeared to be” authentic, but it took issue with the analysis.
Why it matters: More than 70 U.S. law enforcement jurisdictions use Geolitica’s service, as do agencies in other countries. Yet this report is the first independent analysis of the algorithm’s performance based on internal data. Its findings underscore concerns that predictive policing systems invite violations of civil liberties, which have prompted efforts to ban such applications.
Recognizing Autism

Classical machine learning techniques could help children with autism receive treatment earlier in life.

What’s new: Researchers led by Ishanu Chattopadhyay at University of Chicago developed a system that classified autism in young children based on data collected during routine checkups.

Key insight: Autistic children have higher rates of certain conditions — such as asthma, gastrointestinal problems, and seizures — than their non-autistic peers. Incidence of these conditions could be a useful diagnostic signal.

How it works: The authors used Markov models, which estimate the likelihood of a sequence of events, to feed a gradient boosting machine (an ensemble of decision trees). The dataset comprised weekly medical reports on 30 million children aged 0 to 6 years.
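The paper’s exact method isn’t reproduced here, but the sketch below illustrates the two-stage idea under invented assumptions: a fixed Markov transition matrix per comorbidity stands in for the learned sequence models, synthetic weekly records stand in for the medical reports, and the resulting likelihood features feed scikit-learn’s gradient boosting classifier. All names and numbers are hypothetical.

```python
# Hypothetical sketch: Markov-chain likelihoods as features for a gradient
# boosting classifier. Data and parameters are invented for illustration;
# this is not the authors' implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def sequence_log_likelihood(seq, transition):
    """Log-likelihood of a symbol sequence under a first-order Markov chain."""
    return sum(np.log(transition[a, b]) for a, b in zip(seq, seq[1:]))

# Toy setup: 2 symbols (condition absent/present in a weekly report),
# one Markov chain per comorbidity (e.g., asthma, GI problems, seizures).
n_children, n_weeks, n_conditions = 1000, 52, 3
transitions = [np.array([[0.9, 0.1], [0.4, 0.6]]) for _ in range(n_conditions)]

records = rng.integers(0, 2, size=(n_children, n_conditions, n_weeks))
labels = rng.integers(0, 2, size=n_children)  # 1 = autism diagnosis (toy)

# One likelihood feature per condition per child.
features = np.array([
    [sequence_log_likelihood(records[i, c], transitions[c])
     for c in range(n_conditions)]
    for i in range(n_children)
])

clf = GradientBoostingClassifier().fit(features, labels)
print(clf.predict_proba(features[:5]))
```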
Results: The system’s precision — the percentage of children it classified as autistic who actually had the condition — was 33.6 percent at 26 months. Classifying children of the same age, a questionnaire often used to diagnose children between 18 and 24 months of age achieved 14.1 percent precision. The system also achieved sensitivity — the percentage of autistic children it correctly identified — as high as 90 percent; at a lower sensitivity, it produced 30 percent fewer false positives than the questionnaire.

Why it matters: It may be important to recognize autism early. Although there’s no consensus, some experts believe that early treatment yields the best outcomes. This system appears to bring that goal somewhat closer by more than doubling the questionnaire’s precision. Nonetheless, it misidentified autism two-thirds of the time, and the authors caution that it, too, could lead to over-diagnosis.

We’re thinking: Data drift and concept drift, which cause learning algorithms to generalize poorly to populations beyond those represented in the training data, have stymied many healthcare applications. The authors’ large 30 million-patient dataset makes us optimistic that their approach can generalize in production.
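For concreteness, here is a minimal sketch of how the two metrics above are computed. The confusion-matrix counts are invented, chosen only so the outputs roughly match the reported figures; they are not the study’s data.

```python
# Toy confusion-matrix counts, invented for illustration only.
tp, fp, fn = 90, 180, 10  # true positives, false positives, false negatives

precision = tp / (tp + fp)    # fraction of flagged children who are autistic
sensitivity = tp / (tp + fn)  # fraction of autistic children who are flagged

print(f"precision={precision:.3f}, sensitivity={sensitivity:.3f}")
# precision=0.333, sensitivity=0.900 -- roughly the reported figures
```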
A MESSAGE FROM DEEPLEARNING.AI

Have you checked out our Practical Data Science Specialization? It will help you develop the practical skills to deploy data science projects and teach you how to overcome challenges at each step using Amazon SageMaker.

Corporate Ethics Counterbalance

One year after her acrimonious exit from Google, ethics researcher Timnit Gebru launched an independent institute to study neglected issues in AI.

What’s new: The Distributed Artificial Intelligence Research Institute (DAIR) is devoted to countering the influence of large tech companies on the research, development, and deployment of AI. The organization is funded by $3 million in grants from the Ford Foundation, MacArthur Foundation, Kapor Center, and Open Society Foundations.

How it works: DAIR is founded on Gebru’s belief that large tech companies, with their focus on generating profit, lack the incentive to assess technology’s harms and the motivation to address them. It will present its first project this week at NeurIPS.
Behind the news: Gebru was the co-lead of Google’s Ethical AI group until December 2020. The company ousted her after she refused to retract or alter a paper that criticized large language models such as its BERT. A few months later, it fired her counterpart and established a new Responsible AI Research and Engineering group to oversee various initiatives including Ethical AI.

Why it matters: AI has the potential to remake nearly every industry as well as governments and social institutions, and the AI community broadly agrees on the need for ethical principles to guide the process. Yet the companies at the center of most research, development, and deployment have priorities that may overwhelm or sidetrack ethical considerations. Independent organizations like DAIR can call attention to the ways in which AI may harm some groups and use the technology to shed light on problems that may be overlooked by large, mainstream institutions.
Reinforcement Learning Transformed

Transformers have matched or exceeded earlier architectures in language modeling and image classification. New work shows they can achieve state-of-the-art results in some reinforcement learning tasks as well.

What’s new: Lili Chen and Kevin Lu, with colleagues at UC Berkeley, Facebook, and Google, developed Decision Transformer, which models decisions and their outcomes.

Key insight: A transformer learns from sequences, and a reinforcement learning task can be modeled as a repeating sequence of state, action, and reward. Given such a sequence, a transformer can learn to predict the next action (essentially recasting the reinforcement learning task as a supervised learning task). But this approach introduces a problem: If the transformer chooses the next action based on earlier rewards, it won’t learn to take actions that, though they may bring negligible rewards on their own, lay a foundation for winning higher rewards in the future. The solution is to tweak the reward part of the sequence. Instead of showing the model the reward for previous actions, the authors provided the sum of rewards remaining to be earned by completing the task. This way, the model took actions likely to reach that sum.

How it works: The researchers trained a generative pretrained transformer (GPT) on recorded matches of three types of games: Atari games with a fixed set of actions, OpenAI Gym games that require continuous control, and Key-to-Door. Winning Key-to-Door requires learning to pick up a key, which brings no reward, and using it to open a door and receive a reward.
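A minimal sketch of the key data transformation, assuming a trajectory stored as plain lists of states, actions, and rewards (the transformer itself and token embeddings are omitted, and all names are hypothetical):

```python
# Hypothetical sketch: turning an offline RL trajectory into the
# (return-to-go, state, action) sequence a Decision Transformer trains on.
from typing import List, Tuple

def returns_to_go(rewards: List[float]) -> List[float]:
    """Sum of rewards remaining from each timestep to the end of the episode."""
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return rtg[::-1]

def to_sequence(states, actions, rewards) -> List[Tuple]:
    """Interleave (return-to-go, state, action) triples; the transformer
    is trained to predict each action from the tokens that precede it."""
    rtg = returns_to_go(rewards)
    return list(zip(rtg, states, actions))

# Toy episode: the first action earns nothing (like picking up the key),
# but the return-to-go still signals its value.
seq = to_sequence(states=["s0", "s1", "s2"],
                  actions=["pick_key", "walk", "open_door"],
                  rewards=[0.0, 0.0, 1.0])
print(seq)
# [(1.0, 's0', 'pick_key'), (1.0, 's1', 'walk'), (1.0, 's2', 'open_door')]
```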
Results: The authors compared Decision Transformer with the previous state-of-the-art method, Conservative Q-Learning (CQL). They normalized scores of Atari and OpenAI Gym games to make 0 on par with random actions and 100 on par with a human expert. In Atari games, the authors’ approach did worse, earning an average score of 98 versus CQL’s 107. However, it excelled in the more complex games. In OpenAI Gym, it averaged 75 versus CQL’s 64. In Key-to-Door, it succeeded 71.8 percent of the time versus CQL’s 13.1 percent.

Why it matters: How to deal with actions that bring a low reward in the present but contribute to greater benefits in the future is a classic issue in reinforcement learning. Decision Transformer learned to solve that problem via self-attention during training.

We’re thinking: It’s hard to imagine using this approach for online reinforcement learning, as the sum of future rewards would be unknown during training. That said, it wouldn’t be difficult to run a few experiments, train offline, and repeat.
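The normalization described above corresponds to the standard human-normalized score. A minimal sketch, with invented reference values:

```python
def normalized_score(score: float, random: float, expert: float) -> float:
    """Map a raw game score so random play -> 0 and expert play -> 100."""
    return 100.0 * (score - random) / (expert - random)

# Reference values invented for illustration.
print(normalized_score(score=850.0, random=100.0, expert=1100.0))  # 75.0
```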
Work With Andrew Ng
AI and Data Transformation Specialist: Workera is looking for a specialist to help enterprise clients implement transformation strategies in areas including AI, data, software, cloud, and cybersecurity. You will improve Workera’s skill-transformation playbook, explain technical functions and skills taxonomies to senior technical leaders, and accelerate business opportunities. Apply here
Senior Technical Program Manager: Landing AI is looking for a program manager to bridge our team and business partners in executing engineering programs. The ideal candidate has a strong customer relationship management background, three years of experience in a direct program management position, and two years of experience in a technical role. Apply here
Community and Events Marketing Manager: DeepLearning.AI seeks a community and events marketing manager. The ideal candidate is a talented leader, communicator, and creative producer who is ready to create world-class events that keep the community connected and engaged with each other. Apply here
Digital Marketing Manager: DeepLearning.AI is looking for a digital marketing manager to oversee digital marketing campaigns, manage data and analytics, and optimize workflows and processes. The ideal candidate is a strong project manager, communicator, and technical wizard who can work closely with the content, social, events, and community teams. Apply here
Data Engineer (LatAm): Factored is looking for top data engineers with experience in data structures and algorithms, operating systems, computer networks, and object-oriented programming. Candidates must have experience with Python and excellent English-language skills. Apply here
Head of Digital Marketing: Factored seeks a highly experienced marketer with a strong knowledge of paid media, search engine optimization, campaign management, and marketing automation. Experience leading a marketing team and impeccable written and spoken English is required. Apply here
Machine Learning Engineer (North America): Landing AI is searching for a machine learning engineer to work with internal and external engineers on novel models for customers. A solid background in machine learning and deep learning with a proven ability to implement, debug, and deploy machine learning models is a must. Apply here
Technical Writer: Landing AI seeks a writer to own the product education and documentation effort. The ideal candidate is self-motivated, can learn new tools and Landing AI applications quickly, and communicates effectively. Apply here
Director of Machine Learning: Landing AI seeks a machine learning director to define the vision for its products. This person will build and lead an effective machine learning team to execute projects in collaboration with other teams. Apply here
Frontend Desktop Application Engineer (LatAm): Landing AI is looking for a software development engineer to develop AI applications for clients in manufacturing, agriculture, and healthcare. Proficiency in programming languages and experience with end-to-end product development is preferred. Apply here
Software Development Engineer (LatAm): Landing AI seeks a software development engineer to build scalable AI applications and deliver optimized inference software. A strong background in Docker, Kubernetes, infrastructure, network security, or cloud-based development is preferred. Apply here
Part-time Machine Learning Instructor: FourthBrain is seeking machine learning practitioners or educators to teach cohort-based programs in practical machine learning. Apply here
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. To keep our newsletter out of your spam folder, add our email address to your contacts list.