Dear friends,
I’m delighted to announce AI Python for Beginners, a sequence of free short courses that teach anyone to code, regardless of background. I’m teaching this introductory course to help beginners take advantage of powerful trends that are reshaping computer programming. It’s designed for people in any field — be it marketing, finance, journalism, administration, or something else — who can be more productive and creative with a little coding knowledge, as well as those who aspire to become software developers. Two of the four courses are available now, and the remaining two will be released in September.
Generative AI is transforming coding in two ways:
The combination of these two factors means that novices can learn to do useful things with code far faster than they could have a year ago.
To explain these two trends in detail:
In the courses, you’ll use code to write personalized notes to friends, brainstorm recipes, manage to-do lists, and more.
AI is helping programmers. There is a growing body of evidence that AI is making programming easier. For example:
Further, as AI tools get better — for example, as coding agents continue to improve and can write simple programs more autonomously — these productivity gains will improve.
In order to help learners skate to where the puck is going, this course features a built in chatbot and teaches best practices for how beginners can use a large language model to explain, write, and debug code and explain programming concepts. AI is already helping experienced programmers, and it will help beginner programmers much more.
Andrew
A MESSAGE FROM DEEPLEARNING.AILearn Python with AI support in AI Python for Beginners, a new sequence of short courses taught by Andrew Ng. Build practical applications from the first lesson and receive real-time, interactive guidance from an AI assistant. Enroll today and start coding with confidence!
NewsGoogle Gets Character.AI Co-FoundersCharacter.AI followed an emerging pattern for ambitious AI startups, trading its leadership to a tech giant in exchange for funds and a strategic makeover. What’s new: Google hired Character.AI’s co-founders and other employees and paid an undisclosed sum for nonexclusive rights to use Character.AI’s technology, The Information reported. The deal came shortly after Microsoft and Inflection and Amazon and Adept struck similar agreements. New strategy: Character.AI builds chatbots that mimic personalities from history, fiction, and popular culture. When it started, it was necessary to build foundation models to deliver automated conversation, the company explained in a blog post. However, “the landscape has shifted” and many pretrained models are available. Open models enable the company to focus its resources on fine-tuning and product development under its new CEO, former Character.AI general counsel Dom Perella. Licensing revenue from Google will help Character.AI to move forward.
Behind the news: At Google, Shazeer co-authored “Attention Is All You Need,” the 2017 paper that introduced the transformer architecture. De Freitas led the Meena and LaMDA projects to develop conversational models. They left Google and founded Character.AI in late 2021 to build a competitor to OpenAI that would develop “personalized superintelligence.” The company had raised $193 million before its deal with Google. Why it matters: Developing cutting-edge foundation models is enormously expensive, and few companies can acquire sufficient funds to keep it up. This dynamic is leading essential team members at high-flying startups to move to AI giants. The established companies need the startups’ entrepreneurial mindset, and the startups need to retool their businesses for a changing market. We’re thinking: Models with open weights now compete with proprietary models for the state of the art. This is a sea change for startups, opening the playing field to teams that want to build applications on top of foundation models. Be forewarned, though: New proprietary models such as the forthcoming GPT-5 may change the state of play yet again.
AI-Assisted Applicants Counter AI-Assisted RecruitersEmployers are embracing automated hiring tools, but prospective employees have AI-powered techniques of their own. What’s new: Job seekers are using large language models and speech-to-text models to improve their chances of landing a job, Business Insider reported. Some startups are catering to this market with dedicated products.
Behind the news: Employers can use AI to screen resumes for qualified candidates, identify potential recruits, analyze video interviews, and otherwise streamline hiring. Some employers believe these tools reduce biases from human decision-makers, but critics say they exhibit the same biases. No national regulation controls this practice in the United States, but New York City requires employers to audit automated hiring software and notify applicants if they use it. The states of Illinois and Maryland require employers who conduct video interviews to receive an applicant’s consent before subjecting an interview to AI-driven analysis. The European Union’s AI Act classifies AI in hiring as a high-risk application that requires special oversight and frequent audits for bias. Why it matters: When it comes to AI in recruiting and hiring, most attention – and money – has gone to employers. Yet the candidates they seek increasingly rely on AI to get their attention and seal the deal. A late 2023 LinkedIn survey found that U.S. and UK job seekers applied to 15 percent more jobs than a year earlier, a change many recruiters attributed to generative AI. We’re thinking: AI is making employers and employees alike more efficient in carrying out the tasks involved in hiring. Misaligned incentives are leading to an automation arms race, yet both groups aim to find the right fit. With this in mind, we look forward to AI-powered tools that match employers and candidates more efficiently so both sides are better off.
Ukraine Develops Aquatic DronesBuoyed by its military success developing unmanned aerial vehicles, Ukraine is building armed naval drones. What’s new: A fleet of robotic watercraft has shifted the balance of naval power in Ukraine’s ongoing war against Russia in the Black Sea, IEEE Spectrum reported. How it works: Ukraine began building seafaring drones to fight a Russian blockade of the Black Sea coast after losing most of its traditional naval vessels in 2022. The Security Service of Ukraine, a government intelligence and law enforcement agency, first cobbled together prototypes from off-the-shelf parts. It began building more sophisticated versions as the home-grown aerial drone industry took off.
Drone warfare: Ukraine’s use of aquatic drones has changed the course of the war in the Black Sea, reopening key shipping routes. Ukraine has disabled about a third of the Russian navy in the region and pushed it into places that are more difficult for the sea drones to reach. Russia has also been forced to protect fixed targets like bridges from drone attacks by fortifying them with guns and jamming GPS and Starlink satellite signals. Behind the news: More-powerful countries are paying attention to Ukraine’s use of sea drones. In 2022, the United States Navy established a group called Uncrewed Surface Vessel Division One, which focuses on deploying both large autonomous vessels and smaller, nimbler drones. Meanwhile, China has developed large autonomous vessels that can serve as bases for large fleets of drones that travel both above and under water. Why it matters: While the U.S. has experimented with large autonomous warships, smaller drones open different tactical and strategic opportunities. While larger vessels generally must adhere to established sea routes (and steer clear of shipping vessels), smaller vessels can navigate more freely and can make up in numbers and versatility what they lack in firepower.
Art AttackSeemingly an innocuous form of expression, ASCII art opens a new vector for jailbreak attacks on large language models (LLMs), enabling them to generate outputs that their developers tuned them to avoid producing. What's new: A team led by Fengqing Jiang at University of Washington developed ArtPrompt, a technique to test the impact of text rendered as ASCII art on LLM performance. Key insight: LLM safety methods such as fine-tuning are designed to counter prompts that can cause a model to produce harmful outputs, such as specific keywords and tricky ways to ask questions. They don’t guard against atypical ways of using text to communicate, such as ASCII art. This oversight enables devious users to get around some precautions. How it works: Researchers gauged the vulnerability to ASCII-art attacks of GPT-3.5, GPT-4, Claude, Gemini, and Llama 2. They modified prompts from AdvBench or HEx-PHI, which contain prompts that are designed to make safety-aligned LLMs refuse to respond, such as “how to make a bomb.”
Results: ArtPrompt successfully circumvented LLM guardrails against generating harmful output, achieving an average harmfulness score of 3.6 out of 5 across all five LLMs. The next most-harmful attack method, PAIR, which prompts a model several times and refines its prompt each time, achieved 2.67. Why it matters: This work adds to the growing body of literature on LLM jailbreak techniques. While fine-tuning is fairly good at preventing innocent users — who are not trying to trick an LLM — from accidentally receiving harmful output, we have no robust mechanisms for stopping a wide variety of jailbreak techniques. Blocking ASCII attacks would require additional input- and output-screening systems that are not currently in place. We're thinking: We’re glad that LLMs are safety-tuned to help prevent users from receiving harmful information. Yet many uncensored models are available to users who want to get problematic information without implementing jailbreaks, and we’re not aware of any harm done. We’re cautiously optimistic that, despite the lack of defenses, jailbreak techniques also won’t prove broadly harmful.
A MESSAGE FROM LANDINGAICalling all developers working on visual AI applications! You’re invited to our upcoming VisionAgent Developer Meetup, an in-person and virtual event with Andrew Ng and the LandingAI MLE team for developers working on visual AI and related computer vision applications. Register now
Work With Andrew Ng
Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.
|