The Batch | February 6, 2026

Dear friends,

 

Job seekers in the U.S. and many other nations face a tough environment. At the same time, fears of AI-caused job loss have — so far — been overblown. However, the demand for AI skills is starting to cause shifts in the job market. I’d like to share what I’m seeing on the ground.


First, many tech companies have laid off workers over the past year. While some CEOs cited AI as the reason, claiming that AI is doing the work so people are no longer needed, the reality is that AI just doesn’t work that well yet. Many of the layoffs have been corrections for overhiring during the pandemic, or the kind of general cost-cutting and reorganization that happened even before modern AI. Outside of a handful of roles, few layoffs have resulted from jobs being automated by AI.

 

Granted, this may grow in the future. People in professions that are highly exposed to AI automation, such as call-center operators, translators, and voice actors, are likely to struggle to find jobs and/or see declining salaries. But widespread job losses have been overhyped.


Instead, a common refrain applies: AI won’t replace workers, but workers who use AI will replace workers who don’t. For instance, because AI coding tools make developers much more efficient, developers who know how to use them are increasingly in demand. (If you want to be one of these people, please take our short courses on Claude Code, Gemini CLI, and Agentic Skills!)


So AI is leading to job losses, but in a subtle way. Some businesses are letting go of employees who are not adapting to AI and replacing them with people who are. This trend is already obvious in software development. Further, in many startups’ hiring patterns, I am seeing early signs of this type of personnel replacement in roles that traditionally are considered non-technical. Marketers, recruiters, and analysts who know how to code with AI are more productive than those who don’t, so some businesses are slowly parting ways with employees who aren’t able to adapt. I expect this will accelerate.

Office scene with robots and humans working at desks represents the discussion on AI's impact in the workplace.

At the same time, when companies build new teams that are AI native, sometimes the new teams are smaller than the ones they replace. AI makes individuals more effective, and this makes it possible to shrink team sizes. For example, as AI has made building software easier, the bottleneck is shifting to deciding what to build — this is the Product Management (PM) bottleneck. A project that used to be assigned to 8 engineers and 1 PM might now be assigned to 2 engineers and 1 PM, or perhaps even to a single person with a mix of engineering and product skills. 


The good news for employees is that most businesses have a lot of work to do and not enough people to do it. People with the right AI skills are often given opportunities to step up and do more, perhaps tackling the long backlog of ideas that couldn’t be executed before AI made the work go more quickly. I’m seeing employees in many businesses step up to build new things that help their companies. Opportunities abound!


I know these changes are stressful. My heart goes out to every family that has been affected by a layoff, to every job seeker struggling to find the role they want, and to the far larger number of people who are worried about their future job prospects. Fortunately, there’s still time to learn and position yourself well for where the job market is going. When it comes to AI, the vast majority of people, technical or nontechnical, are at the starting line, or they were recently. So this remains a great time to keep learning and keep building, and the opportunities for those who do are numerous!

 

Andrew 

 

 

A MESSAGE FROM DEEPLEARNING.AI

Promo banner for: "Document AI: From OCR to Agentic Doc Extraction"

“Document AI: From OCR to Agentic Doc Extraction” explores how document systems move beyond text extraction. Learn how vision-first, agentic pipelines parse PDFs into structured Markdown and JSON while preserving layouts, tables, and charts. Built in collaboration with LandingAI. Enroll today

 

News

A post on a forum titled "Can my human legally fire me for refusing unethical requests?"

Agents Unleashed

 

The OpenClaw open-source AI agent became a sudden sensation, inspiring excitement, worry, and hype about the agentic future.

 

What’s happened: In November, developer Peter Steinberger released OpenClaw (formerly named WhatsApp Relay, Clawdbot, and Moltbot) as a personal AI agent that performs tasks like managing calendars, summarizing emails, and sending reminders. A post on the crowdsourced tech-news site Hacker News noted the project in late January, and it took off, garnering the fastest-growing star count on GitHub and more Google searches than Claude Code.

  • Within a few days, the project, which initially was designed to run locally on macOS or Linux, had attracted 2 million visitors and accrued millions of installations. Mac Mini computers sold out as hobbyists sought dedicated (and siloed) machines to run their agents 24/7.
  • Users directed OpenClaw agents to organize schedules, monitor vibe-coding sessions, and post to personal websites and newsletters. One user directed it to build subagents and, within a week, was awakened by a phone call from his agent, which, he claimed, had autonomously registered a phone number, connected to a voice API, and waited until morning to ask “What’s up?”
  • Tech entrepreneur Matt Schlicht launched Moltbook, a Reddit-style social discussion network designed to be written, read, and organized by OpenClaw agents. By the end of the week, OpenClaw users had directed over a million agents to set up accounts. Moltbook’s agent members, spurred by prompts or simply by the descriptions their creators wrote in their default memory files, filled the site with manifestos, stories about their lives, and spam.
  • Meanwhile, the agents’ activities resulted in cost overruns, exposure of private credentials, and security breaches while users raced to close gaps in the system.

How it works: OpenClaw is a configurable agentic framework that runs on a local computer or in a virtual machine in the cloud. Users can build agents to browse and write to their local file systems or operate within predefined sandboxes. They can also give agents permission to use cloud services like email, calendar, productivity applications, speech-to-text and text-to-speech applications, and virtually any service that responds to an API. Agents can use coding tools like Claude Code, interact on social networks, scrape websites, and spend money on users’ behalf.

  • Architecture: OpenClaw consists of a central gateway server and various client applications (such as chat, browser sessions, cloud services, and so on). It generates a dynamic system prompt at startup and maintains persistent memory across sessions using Markdown files (a minimal sketch of this startup flow appears after this list).
  • Memory: The default memory files include USER.md (information about the user), IDENTITY.md (information about the agent), SOUL.md (rules that govern the agent’s behavior), TOOLS.md (information about tools at the agent’s disposal) and HEARTBEAT.md, which instructs the agent when and how to connect with different applications. The agent and user can edit these files.
  • Models: The system authenticates to the AI API of the user’s choice. Anthropic Claude Opus or Meta Llama 3.3 70B are the defaults, but OpenClaw also supports models from Google, OpenAI, Moonshot, Z.ai, MiniMax, and other developers, hosted locally or in the cloud. OpenClaw itself is free, but model hosts may charge per token of input and output.
  • User interface: Users can communicate with agents and direct them to take actions using chatbots or messaging services including Telegram, WhatsApp, Slack, iMessage, Google Chat, and others.
  • Skills: The installation includes dozens of skills, from reading and sending emails or calendar invitations to controlling home speakers or lighting. Others can be installed via the command line or ClawHub, a public directory that contains hundreds of extensions contributed by users. Most skills are based on open-source command-line applications that interact with public APIs.
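
To make the memory design concrete, here is a minimal sketch in Python of how a gateway like OpenClaw’s might assemble its dynamic system prompt from those Markdown memory files at startup. The file names come from the description above; the directory location, section format, and assembly logic are illustrative assumptions, not OpenClaw’s actual code.

    # Sketch: assemble a system prompt from Markdown memory files at startup.
    # File names are from the article; everything else is assumed.
    from pathlib import Path

    MEMORY_FILES = ["USER.md", "IDENTITY.md", "SOUL.md", "TOOLS.md", "HEARTBEAT.md"]

    def build_system_prompt(memory_dir: str) -> str:
        """Concatenate whichever memory files exist into one system prompt."""
        sections = []
        for name in MEMORY_FILES:
            path = Path(memory_dir).expanduser() / name
            if path.exists():  # the agent and user can add or edit these files
                sections.append(f"## {name}\n{path.read_text()}")
        return "\n\n".join(sections)

    print(build_system_prompt("~/.openclaw"))  # hypothetical default location

Because the files are plain Markdown, edits by either the user or the agent persist across sessions automatically, which is presumably how the system maintains long-term memory.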

Yes, but: OpenClaw and Moltbook initially launched with many security flaws and other issues, some of which had been fixed as of this writing. The combination of an open-ended system, insecure design, and inexperienced users resulted in a variety of vulnerabilities. Misconfigured OpenClaw deployments exposed API keys, and Moltbook exposed millions more. Skills designed to perform malicious tasks, such as stealing data, have proliferated. Many users have installed the system on dedicated machines to avoid exposing private data to attackers or to well-meaning but accident-prone agents.

 

Why it matters: OpenClaw made a huge splash and left prominent members of the AI community debating its novelty and importance. For developers, OpenClaw offers a highly customizable and powerful AI assistant that requires careful security precautions. It’s also a glimpse of a future in which autonomous agents go about their business with little input from humans.

 

We’re thinking: For an imaginative, enterprising open-source project, OpenClaw has inspired more than its share of hype. Press reports have likened Moltbook, which holds messages little different from the large language model outputs that have amazed and amused the world since GPT-3, to the advent of AGI and the Singularity. Let us assure you that agents are not there yet, or anywhere close. Rather, OpenClaw demonstrates that agents can be immensely useful, that we are still finding good use cases, and that we need to pay careful attention to security. That, and you never know when one of your open-source projects might take off!

 

Flowchart showing Kimi K2.5 AI orchestrating tasks among various specialized subagents.

Kimi K2.5 Creates Its Own Workforce

 

An open-source vision-language model unleashes minion agents that enable it to perform tasks more quickly and effectively.

 

What’s new: Moonshot AI released Kimi K2.5, an updated version of its Kimi K2 large language model that adds vision capabilities and the ability to spawn what the authors call subagents (parallel workflows that control their own separate models to execute tasks such as AI research, fact checking, and web development) and assign tasks to them.

  • Input/output: Text, image, video in (up to 256,000 tokens); text out (109.5 tokens per second)
  • Architecture: MoonViT vision encoder (400 million parameters), mixture-of-experts transformer (1 trillion total parameters, 32 billion active per token)
  • Performance: Tops all other open-weights models in the Artificial Analysis Intelligence Index
  • Availability: Free web user interface, weights free to download for noncommercial and commercial uses with attribution under modified MIT license, API $0.60/$0.10/$3.00 per million input/cached/output tokens, coding assistant $15 to $200 per month
  • Features: Tool calls, web search, optional reasoning mode, subagents
  • Undisclosed: Training data, training methods

How it works: Moonshot disclosed little information about how it built Kimi K2.5. Among the details it revealed:

  • Kimi K2.5 is based on Kimi K2 Base, a text-only model that was released in July. The team added a vision encoder and further pretrained the base model on 15 trillion image and text tokens.
  • Using reinforcement learning, the team trained Kimi K2.5, given a prompt, to generate subagents that operate in parallel, assign tasks to them, and incorporate their output into its response. Kimi K2.5 received rewards for instantiating subagents and solving problems correctly. For instance, prompted to identify the top three YouTube channels across 100 domains, Kimi K2.5 learned to gather information on each domain, generate 100 domain-specific subagents to search YouTube, and put their findings into a spreadsheet. (A schematic sketch of this fan-out pattern appears below.)
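
Here is a schematic sketch, in Python, of the orchestrator-and-subagents fan-out pattern described above. Moonshot has not disclosed its implementation; call_model is a stand-in for any LLM API call, and the structure is illustrative only.

    # Sketch: one orchestrator fans tasks out to parallel subagents.
    # Nothing here reflects Moonshot's (undisclosed) implementation.
    import asyncio

    async def call_model(prompt: str) -> str:
        """Stand-in for a real LLM API call via an async client."""
        await asyncio.sleep(0.1)  # simulate network latency
        return f"findings for: {prompt}"

    async def subagent(domain: str) -> str:
        # Each subagent receives one narrow task, e.g., a single domain.
        return await call_model(f"Top 3 YouTube channels for {domain}")

    async def orchestrate(domains: list[str]) -> list[str]:
        # Spawn one subagent per domain, run them concurrently, and
        # gather their findings for a final synthesis step.
        return await asyncio.gather(*(subagent(d) for d in domains))

    results = asyncio.run(orchestrate(["cooking", "chess", "robotics"]))
    print(results)

The speedups reported below come from exactly this kind of parallelism: independent subtasks no longer wait their turn in a single sequential chain of thought.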

Results: In the Artificial Analysis Intelligence Index, a weighted average of 10 benchmarks, Kimi K2.5 with thinking mode switched on outperformed all other open-weights models tested. In Moonshot’s tests:

  • Kimi K2.5 in thinking mode outperformed all open-weights models tested on various measures of reasoning, vision, coding, and agentic behavior. It also outperformed proprietary models including GPT 5.2 set to xhigh, Claude 4.5 Opus set to extended thinking, and Gemini 3 Pro set to high thinking on some vision and agentic benchmarks.
  • Across 17 benchmarks of image and video performance, Kimi K2.5 achieved the highest score on 9, outperforming GPT 5.2 set to xhigh, Claude 4.5 Opus set to extended thinking, and Gemini 3 Pro set to high thinking.
  • Subagents enabled Kimi K2.5 to perform between 3 and 4.5 times faster than it did without using subagents. Subagents boosted its performance on the agentic benchmarks BrowseComp and WideSearch by 18.4 percentage points and 6.3 percentage points, respectively.

Yes, but: Moonshot didn’t disclose the cost of processing and memory incurred by Kimi K2.5’s use of subagents, so the tradeoff between speed/performance and processing/memory requirements is not clear. 

 

Behind the news: Kimi K2.5 arrives 7 months after Moonshot’s initial vision-language model, the much smaller, 16 billion-parameter Kimi-VL, which also used the MoonViT vision encoder.

 

Why it matters: Building an agentic workflow can improve a model’s performance on a particular task. Unlike predefined agentic workflows, Kimi K2.5 decides when a new subagent is necessary, what it should do, and when to delegate work to it. This automated agentic orchestration improves performance in tasks that are easy to perform in parallel.

 

We’re thinking: Kimi K2.5 shifts task execution from chain-of-thought reasoning to agentic teamwork. Instead of responding to prompts sequentially, it acts as a manager of separate workflows and models that execute different parts of the job in parallel.

 

In a library, a woman inspects an ancient manuscript using a magnifier and tech-assisted camera for detailed research.

Learn More About AI With Data Points!

 

AI is moving faster than ever. Data Points helps you make sense of it just as fast. Data Points arrives in your inbox twice a week with six brief news stories. This week, we covered Google DeepMind’s Project Genie for real-time world creation and OpenAI’s new Codex desktop app for managing coding agents. Subscribe today!

 

Lines connect multiple Wikipedia globe logos, symbolizing data exchange and partnerships.

AI Giants Share Wikipedia’s Costs

 

On its 25th anniversary, Wikipedia celebrated with high-profile deals that give AI companies easier access to its data for training models in exchange for financial support.

 

What’s new: The Wikimedia Foundation announced partnerships with AI companies including Amazon, Meta, Microsoft, Mistral AI, and Perplexity. The partnership program, known as Wikimedia Enterprise, lets these partners access Wikipedia data at higher speeds and volumes than they could achieve by scraping pages on the web. Financial terms were not disclosed.

 

How it works: Along with donations from users, enterprise partnerships are among the Wikimedia Foundation’s chief sources of revenue. Wikimedia Enterprise offers APIs that enable developers to access encyclopedia articles and other Wikimedia data directly, including Wikimedia Commons images, Wiktionary’s online dictionary, and Wikidata’s machine-readable knowledge base. Free plans allow for limited data updates and access to a support portal. Paid plans (terms are not public) include daily snapshots of Wikimedia data, potentially unlimited data requests (limits vary depending on how much a subscriber pays), streaming access to real-time revisions, and technical support from human staffers. (A hypothetical example of such an API call appears after the list below.)

  • Wikipedia data is available to all under a Creative Commons license that makes it free to use for commercial and noncommercial purposes. Its free availability and high quality have made it an important data source for training AI models. The foundation also offers an open Kaggle dataset for noncommercial AI training.
  • Wikipedia receives more requests from automated web crawlers than from human users. The site’s founder Jimmy Wales said crawlers gathering data to train AI systems had caused the foundation’s hosting, memory, and server costs to skyrocket. The foundation called for AI developers to support it financially, to use the API rather than crawl the web, and to attribute information derived from Wikipedia articles.
  • Microsoft, Mistral AI, and Perplexity all signed up as enterprise partners within the last year. Wikimedia’s existing partnerships with Amazon and Meta had not previously been announced. Google became a Wikimedia Enterprise partner in 2022.
  • Wikimedia also announced partnerships with some smaller companies, each of which advertises its environmentally friendly approach: Ecosia (a search engine company), Pleias (an LLM builder), and ProRata (an AI search, advertising, and attribution engine).
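
For developers curious what API access might look like in practice, here is a hypothetical Python sketch of fetching one article through Wikimedia Enterprise. The base URL, endpoint path, authentication scheme, and response shape shown here are assumptions for illustration only; consult enterprise.wikimedia.com for the actual interface.

    # Hypothetical sketch: fetch one structured article via Wikimedia
    # Enterprise instead of crawling pages. Endpoint and auth are assumed.
    import requests

    API_BASE = "https://api.enterprise.wikimedia.com/v2"  # assumed base URL

    def fetch_article(title: str, token: str) -> dict:
        """Request a machine-readable version of a single article."""
        resp = requests.get(
            f"{API_BASE}/articles/{title}",                # assumed endpoint
            headers={"Authorization": f"Bearer {token}"},  # assumed auth
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    article = fetch_article("Machine_learning", token="YOUR_API_TOKEN")
    print(article)

Compared with scraping rendered pages, a structured endpoint like this returns clean, versioned data in a single request, which is the efficiency the foundation is asking crawler operators to adopt.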

Behind the news: Other publishers whose content is widely used to train AI systems have sought payment with varied levels of success. In 2023, Reddit and Stack Overflow announced plans to protect their data from AI crawlers while they sought licensing deals. Reddit was able to reach licensing agreements with Google, OpenAI, and others to use its content to train models. Stack Overflow saw traffic and question volume plummet, dropping from 200,000 questions per month in 2014 to 50,000 per month in late 2025. As its audience shifted from discussing technical issues on the site to asking AI models for answers, the company pivoted from advertising as its primary revenue source to repackaging its data for AI training.

 

Why it matters: AI companies want to train their models on Wikipedia, and gathering data by sending API calls is much faster than crawling the web — never mind the rapid pace of crawling required to keep up with the encyclopedia’s never-ending revisions. At the same time, Wikipedia needs revenue to survive. Selling API access offers a helpful service to developers while giving this crucial data source a stronger financial foundation.

 

We’re thinking: These deals are win-win. People who choose to read the online encyclopedia the old-fashioned way can keep doing so, and people who build AI models can rest easier knowing they won’t kill a key source of training data.

 

Flowchart showing Mistral Small 3.1 model distillation into smaller Ministral 3 models with post-training steps.

Recipe for Smaller, Capable Models

 

Mistral compressed Mistral Small 3.1 into much smaller versions, yielding a family of relatively small, open-weights vision-language models that by some measures outperform competing models of similar size. The method combines pruning and distillation.

 

What’s new: Mistral AI released weights for the Ministral 3 family in parameter counts of 14 billion, 8 billion, and 3 billion. Each size comes in base, instruction-tuned, and reasoning variants. The team detailed its recipe for distilling the models in a paper.

  • Input/output: Text and images in (up to 256,000 tokens, up to 128,000 tokens for reasoning variants), text out
  • Architecture: Decoder-only transformer
  • Performance: Ministral 3 14B Base (14 billion parameters) closely matches Mistral Small 3.1 Base (24 billion parameters), with the smaller models close behind
  • Features: Tool use, languages including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, Arabic
  • Availability: Weights free to download under Apache 2.0 license, API access $0.20/$0.20 per million input/output tokens (Ministral 3 14B), $0.15/$0.15 per million input/output tokens (Ministral 3 8B), $0.10/$0.10 per million input/output tokens (Ministral 3 3B)
  • Undisclosed: Training data

How it works: The team built the models using an approach it calls cascade distillation. Starting with a larger parent, they alternately pruned (removing less-important parameters) and distilled (training a smaller model to mimic the larger one’s outputs), producing progressively smaller children. (A toy sketch of one prune-and-distill round follows the list below.)

  • The team pruned Mistral Small 3.1 (24 billion parameters) to create Ministral 3 14B, which became the starting point for Ministral 3 8B, and so on.
  • They pruned by removing layers that changed their input least. Then they reduced the size of internal representations and the width of fully connected layers.
  • Then they trained the pruned model to mimic Mistral Small 3.1. Pretraining the pruned models to mimic Mistral Small 3.1 produced better results than pretraining them to mimic the larger, more capable Mistral Medium 3 (parameter count undisclosed). However, during fine-tuning stages, the pruned models did benefit from learning to mimic Mistral Medium 3.
  • To fine-tune the models to follow instructions, the team first trained them on examples of desired behavior, then refined them using ODPO, a preference-tuning technique that uses an LLM to compare better and worse responses and steer the model toward the preferred ones.
  • To produce reasoning variants, the team trained the models on examples of step-by-step reasoning in mathematics, coding, multilingual tasks, tool use, and visual reasoning, then applied GRPO to improve performance further.
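
The following toy sketch, in Python with PyTorch, shows one prune-and-distill round under strong simplifying assumptions: a small residual MLP stands in for a transformer, layer importance is scored by how much each layer changes its input (as described above), and distillation matches outputs with a simple MSE loss rather than whatever losses Mistral actually used. None of Mistral’s hyperparameters appear here.

    # Toy cascade-distillation round: prune the blocks that change their
    # input least, then train the smaller child to mimic the parent.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualBlock(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        def forward(self, x):
            return x + self.ff(x)

    def prune(parent, keep, x):
        """Keep the `keep` blocks whose output differs most from their input."""
        scores, h = [], x
        with torch.no_grad():
            for block in parent:
                out = block(h)
                scores.append((out - h).norm().item())
                h = out
        top = sorted(sorted(range(len(scores)), key=lambda i: -scores[i])[:keep])
        return nn.Sequential(*[copy.deepcopy(parent[i]) for i in top])

    def distill(parent, child, dim, steps=200):
        """Train the child to match the parent's outputs (MSE here; a real
        LLM recipe would match token distributions instead)."""
        opt = torch.optim.Adam(child.parameters(), lr=1e-3)
        for _ in range(steps):
            x = torch.randn(64, dim)
            with torch.no_grad():
                target = parent(x)
            loss = F.mse_loss(child(x), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return child

    dim = 32
    parent = nn.Sequential(*[ResidualBlock(dim) for _ in range(8)])
    child = distill(parent, prune(parent, keep=4, x=torch.randn(64, dim)), dim)

In a real cascade, the child from one round becomes the parent for the next, which is how a single 24-billion-parameter model can seed 14-, 8-, and 3-billion-parameter descendants.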

Performance: Ministral 3 14B (version unspecified) ranks ahead of Mistral Small 3.1 and Mistral Small 3.2 on the Artificial Analysis Intelligence Index, a weighted average of 10 benchmarks. Mistral compared Ministral 3 with Mistral Small 3.1 and open-weights competitors of equal size. Ministral 3 14B base outperformed Mistral Small 3.1 by 1 to 12 percentage points on tests of math and multimodal understanding, and tied on Python coding. It also outperformed its parent on GPQA Diamond. Compared to open-weights competitors:

  • Ministral 3 14B: On TriviaQA, Ministral 3 14B base (74.9 percent accuracy) outperformed Qwen 3 14B (70.3 percent accuracy) but trailed Gemma 3 12B (78.8 percent accuracy). On MATH, Ministral 3 14B base (67.6 percent accuracy) exceeded Qwen 3 14B (62 percent accuracy). The two were comparable in other areas. On AIME 2025 (competitive high-school math problems), Ministral 3 14B reasoning achieved 85 percent accuracy, while Qwen 3 14B Thinking achieved 73.7 percent accuracy. 
  • Ministral 3 8B base outperformed the larger Gemma 3 12B on most benchmarks except TriviaQA. 
  • Ministral 3 3B base was competitive with Gemma 3 4B and Qwen 3 4B, but much stronger on MATH.

Why it matters: Cascade distillation offers a way to produce a high-performance model family from a single parent at a fraction of the usual cost. Training the Ministral 3 models required 1 trillion to 3 trillion training tokens compared to 15 trillion to 36 trillion tokens for Qwen 3 and Llama 3 models of similar sizes. Their training runs were also shorter, and their training algorithm is relatively simple. This sort of approach could enable developers to build multiple model sizes without proportionately higher training costs.

 

We’re thinking: Ministral 3 models can run on generic laptops and smartphones. On-device AI at the edge keeps getting more capable and competitive!

 

Work With Andrew Ng

 

Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.

 

Subscribe and view previous issues here.

 

Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.

 
