This Week in AI: VCs (and devs) are enthusiastic about AI coding tools


Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

This week in AI, two startups developing tools to generate and suggest code — Magic and Codeium — raised nearly half a billion dollars combined. The rounds were large even by AI sector standards, especially considering that Magic hasn’t launched a product or generated revenue yet.

So why the investor enthusiasm? Well, coding isn’t an easy — or inexpensive — business. And there’s demand from both companies and individual developers for ways to streamline the more arduous processes around it.

According to one survey, the average dev spends close to 20% of their workweek maintaining existing code rather than writing anything new. In a separate study, companies said that excessive code maintenance (including addressing technical debt and fixing poorly performing code) costs them $85 billion per year in lost opportunities.

AI tools can assist here, many devs and firms believe. And, for what it’s worth, consultants agree. In a 2023 report, analysts at McKinsey wrote that AI coding tools can enable devs to write new code in half the time and optimize existing code in roughly two-thirds the time.

Now, a coding AI isn’t a silver bullet. The McKinsey report also found that certain, more complex workloads — like those requiring familiarity with a specific programming framework — didn’t necessarily benefit from AI. In fact, it took junior developers longer to finish some tasks with AI versus without, according to the report’s co-authors.

“Participant feedback indicates that developers actively iterated with the tools to achieve [high] quality, signaling that the technology is best used to augment developers rather than replace them,” the co-authors wrote, driving the point home that AI is no substitute for experience. “Ultimately, to maintain code quality, developers need to understand the attributes that make up quality code and prompt the tool for the right outputs.”

AI coding tools also have unresolved security- and IP-related issues. Some analyses show the tools have resulted in more flawed code being pushed to codebases over the past few years. Code-generating tools trained on copyrighted code, meanwhile, have been caught regurgitating that code when prompted in a certain way, posing a liability risk to the developers using them.

But that’s not dampening enthusiasm for coding AI from devs — or their employers, for that matter.

The majority of developers (upward of 97%) in a 2024 GitHub poll said that they’ve adopted AI tools in some form. According to that same poll, 59% to 88% of companies are encouraging — or now allowing — the use of assistive programming tools.

So it’s not terribly surprising that the AI coding tools market could be worth some $27 billion by 2032 (per Polaris Research) — particularly if, as Gartner predicts, 75% of enterprise software devs use AI coding assistants by 2028.

The market’s already hot. Generative AI coding startups Cognition, Poolside and Anysphere have closed mammoth rounds in the past year — and GitHub’s AI coding tool Copilot has over 1.8 million paying users. The productivity gains the tools could deliver have been sufficient to convince investors — and customers — to ignore their flaws. But we’ll see whether the trend holds, and for how long.

News

“Emotion AI” attracts investments: Julie writes about how some VCs and businesses are being drawn to “emotion AI,” the more sophisticated sibling of sentiment analysis, and how this could be problematic.

Why home robots still suck: Brian explores why many of the attempts at home robots have failed spectacularly. It comes down to pricing, functionality and efficacy, he says.

Amazon hires Covariant founders: On the subject of robots, Amazon last week hired robotics startup Covariant’s founders along with “about a quarter” of the company’s employees. It also signed a nonexclusive license to use Covariant’s AI robotics models.

NightCafe, the OG image generator: Yours truly profiled NightCafe, one of the original image generators and a marketplace for AI-generated content. It’s still alive and kicking, despite moderation challenges.

Midjourney gets into hardware: NightCafe rival Midjourney is getting into hardware. The company made the announcement in a post on X; its new hardware team will be based in San Francisco, it said.

SB 1047 passes: California’s legislature just passed AI bill SB 1047. Max writes about why some hope the governor won’t sign it.

Google rolls out election safeguards: Google is gearing up for the U.S. presidential election by rolling out safeguards for more of its generative AI apps and services. As part of the restrictions, most of the company’s AI products won’t respond to queries on election-related topics.

Apple and Nvidia could invest in OpenAI: Nvidia and Apple are reportedly in talks to contribute to OpenAI’s next fundraising round — a round that could value the ChatGPT maker at $100 billion.

Research paper of the week

Who needs a game engine when you have AI?

Researchers at Tel Aviv University and DeepMind, Google’s AI R&D division, last week previewed GameNGen, an AI system that can simulate the game Doom at up to 20 frames per second. Trained on extensive footage of Doom gameplay, the model can effectively predict the next “gaming state” when a player “controls” the character in the simulation. It’s a game generated in real time.

[Image: A Doom-like level, generated by AI. Image Credits: Google]

GameNGen isn’t the first model to do so. OpenAI’s Sora can simulate games, including Minecraft, and a group of university researchers unveiled an Atari-game-simulating AI earlier this year. (Other models along these lines run the gamut from World Models to GameGAN and Google’s own Genie.)

But GameNGen is one of the more impressive game-simulating attempts yet in terms of its performance. The model isn’t without big limitations, namely graphical glitches and an inability to “remember” more than three seconds of gameplay (meaning GameNGen can’t create a functional game, really). But it could be a step toward entirely new sorts of games — like procedurally generated games on steroids.
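For the curious, here’s a rough, hypothetical sketch of the autoregressive loop that systems like GameNGen are built around: the model repeatedly predicts the next frame from a short window of recent frames plus the player’s latest action. The predictor below is a stand-in stub that returns noise so the loop runs; the frame size, context length, and function names are illustrative assumptions, not Google’s actual model or code.

```python
# Hypothetical sketch of a "neural game engine" loop: predict the next frame
# from recent frames + the player's action, then feed it back in as context.
# The predictor is a stub (it returns noise), not the real GameNGen model.
import numpy as np

FRAME_SHAPE = (240, 320, 3)   # assumed low-res RGB frames (illustrative)
CONTEXT_LEN = 60              # roughly 3 seconds of context at ~20 fps (per the article's figures)


def predict_next_frame(context_frames, action):
    """Stub for the learned next-frame predictor; returns random pixels so
    the example is runnable without any trained weights."""
    rng = np.random.default_rng(hash(action) % (2**32))
    return rng.integers(0, 256, size=FRAME_SHAPE, dtype=np.uint8)


def run_simulation(actions, start_frame):
    """Roll the 'game' forward one predicted frame per player action."""
    context = [start_frame]
    frames = []
    for action in actions:
        frame = predict_next_frame(context[-CONTEXT_LEN:], action)
        context.append(frame)   # the new frame becomes part of the context
        frames.append(frame)
    return frames


if __name__ == "__main__":
    start = np.zeros(FRAME_SHAPE, dtype=np.uint8)
    out = run_simulation(["forward", "turn_left", "fire"], start)
    print(f"simulated {len(out)} frames of shape {out[0].shape}")
```

That bounded context window is also where the “can’t remember more than three seconds” limitation comes from: anything that falls out of the window is effectively forgotten by the simulation.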

Model of the week

As my colleague Devin Coldewey has written about before, AI is taking over the field of weather forecasting, from a quick, “How long will this rain last?” to a 10-day outlook, all the way out to century-level predictions.

One of the newest models to hit the scene, Aurora is the product of Microsoft’s AI research org. Trained on various weather and climate datasets, Aurora can be fine-tuned to specific forecasting tasks with relatively little data, Microsoft claims.

[Image: Microsoft Aurora. Image Credits: Microsoft]

“Aurora is a machine learning model that can predict atmospheric variables, such as temperature,” Microsoft explains on the model’s GitHub page. “We provide three specialized versions: one for medium-resolution weather prediction, one for high-resolution weather prediction and one for air pollution prediction.”

Aurora’s performance appears to be quite good relative to other atmosphere-tracking models. (In less than a minute, it can produce a five-day global air pollution forecast or a ten-day high-resolution weather forecast.) But it’s not immune to the hallucinatory tendencies of other AI models. Aurora can make mistakes, which is why Microsoft cautions that it shouldn’t be “used by people or businesses to plan their operations.”
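To make the “ten-day forecast in under a minute” claim a bit more concrete, here’s a toy, hypothetical sketch of how an autoregressive forecaster along these lines rolls a forecast forward: each model call advances the atmospheric state by one fixed step (assumed here to be six hours), so a 10-day outlook is just the model applied to its own output a few dozen times. The stub model, variable names, and grid size below are illustrative assumptions, not Microsoft’s code or Aurora’s actual API.

```python
# Toy sketch of autoregressive weather forecasting: advance the atmospheric
# state one step at a time by feeding the model's output back in as input.
# The "model" here is a stub; grid, variables, and step size are assumptions.
import numpy as np

GRID = (181, 360)          # coarse 1-degree lat/lon grid for this toy example
VARIABLES = ["temperature", "wind_u", "wind_v", "pressure"]
STEP_HOURS = 6             # assumed lead time per model step


def step(state):
    """Stub for one learned forecast step; just perturbs the current state."""
    return {name: field + np.random.normal(0, 0.01, size=field.shape)
            for name, field in state.items()}


def forecast(initial_state, days):
    """Roll the model forward autoregressively to cover the requested horizon."""
    n_steps = days * 24 // STEP_HOURS   # e.g. 10 days -> 40 steps
    states = [initial_state]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states


if __name__ == "__main__":
    init = {name: np.zeros(GRID, dtype=np.float32) for name in VARIABLES}
    out = forecast(init, days=10)
    print(f"{len(out) - 1} steps of {STEP_HOURS}h = a 10-day outlook")
```

Much of the speed advantage of models like this comes from each step being a single neural-network forward pass rather than a full numerical simulation of the physics.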

Grab bag

Last week, Inc. reported that Scale AI, the AI data-labeling startup, laid off scores of annotators — the folks responsible for labeling the training datasets used to develop AI models.

As of publication time, there hasn’t been an official announcement. But one former employee told Inc. that as many as several hundred people were let go. (Scale AI disputes this.)

Most of the annotators who work for Scale AI aren’t employed by the company directly. Rather, they’re hired by one of Scale’s subsidiaries or a third-party firm, giving them less job security. Labelers sometimes go long stretches without receiving work. Or they’re unceremoniously booted off Scale’s platform, as happened to contractors in Thailand, Vietnam, Poland and Pakistan recently.

Of the layoffs last week, a Scale spokesperson told TechCrunch that the company hires contractors through a firm called HireArt. “These individuals [i.e., those who lost their jobs] were employees of HireArt and received severance and COBRA benefits through the end of the month from HireArt. Last week, less than 65 people were laid off. We built up this contracted workforce and scaled it to appropriate sizing as our operating model evolved over the past nine months, less than 500 have been laid off in the United States.”

It’s a little hard to parse exactly what Scale AI means by this carefully worded statement, but we are looking into it. If you are a former employee of Scale AI or a contractor who was recently laid off, contact us however you feel comfortable doing so.
