This Week in AI: OpenAI’s new Strawberry model may be smart, yet sluggish


Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

This week in AI, OpenAI’s next major product announcement is imminent, if a piece in The Information is to be believed.

The Information reported on Tuesday that OpenAI plans to release Strawberry, an AI model that can effectively fact-check itself, in the next two weeks. Strawberry will be a stand-alone product but will be integrated with ChatGPT, OpenAI’s AI-powered chatbot platform.

Strawberry is reportedly better at programming and math problems than other top-end generative AI models (including OpenAI’s own GPT-4o). And it avoids some of the reasoning pitfalls that normally trip up those same models. But the improvements come at a cost: Strawberry is said to be slow — quite slow. Sources tell The Information that the model takes 10-20 seconds to answer a single question.

Granted, OpenAI will likely position Strawberry as a model for mission-critical tasks where accuracy is paramount. This could resonate with businesses, many of which have grown frustrated with the limitations of today’s generative AI tech. A survey this week by HR specialist Peninsula found that inaccuracies are a key concern for 41% of firms exploring generative AI, and Gartner predicts that a third of all generative AI projects will be abandoned by the end of the year due to adoption blockers.

But while some companies might not mind chatbot lag time, I think the average person will.

Hallucinatory tendencies aside, today’s models are fast — incredibly fast. We’ve grown accustomed to this; the speed makes interactions feel more natural, in fact. If Strawberry’s “processing time” is indeed an order of magnitude longer than that of existing models, it’ll be hard to avoid the perception that Strawberry is, in some respects, a step backward.

That’s assuming the best-case scenario: that Strawberry consistently answers questions correctly. If it’s still error-prone, as the reporting suggests, the lengthy wait times will be even tougher to swallow.

OpenAI’s no doubt feeling the pressure to deliver as it burns through billions on AI training and staffing. Its investors and potential new backers hope to see a return sooner rather than later, one imagines. But rushing to put out an unpolished model such as Strawberry — and considering charging substantially more for it — seems ill-advised.

The wiser move, I’d think, would be to let the tech mature a bit. But as the generative AI race grows fiercer, OpenAI may not have that luxury.

News

Apple rolls out visual search: The Camera Control, the new button on the iPhone 16 and 16 Plus, can launch what Apple calls “visual intelligence” — basically a reverse image search combined with some text recognition. The company is partnering with third parties, including Google, to power search results.

Apple punts on AI: Devin writes about how many of Apple’s generative AI features are pretty basic when it comes down to it — contrary to what the company’s bombastic marketing would have you believe.

Audible trains AI for audiobooks: Audible, Amazon’s audiobook business, said that it’ll use AI trained on professional narrators’ voices to generate new audiobook recordings. Narrators will be compensated for any audiobooks created using their AI voices on a title-by-title, royalty-sharing basis.

Musk denies Tesla-xAI deal: Elon Musk has pushed back against a Wall Street Journal report that one of his companies, Tesla, has discussed sharing revenue with another of his companies, xAI, so that it can use the latter’s generative AI models.

Bing gets deepfake-scrubbing tools: Microsoft says it’s collaborating with StopNCII — an organization that allows victims of revenge porn to create a digital fingerprint of explicit images, real or not — to help remove nonconsensual porn from Bing search results.

Google’s Ask Photos launches: Google’s AI-powered search feature Ask Photos began rolling out to select Google Photos users in the U.S. late last week. Ask Photos allows you to ask complex queries like “Show the best photo from each of the National Parks I visited,” “What did we order last time at this restaurant?” and “Where did we camp last August?”

U.S., U.K., and EU sign AI treaty: At a summit this past week, the U.S., U.K., and EU signed up to a treaty on AI safety laid out by the Council of Europe (COE), an international standards and human rights organization. The COE describes the treaty as “the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.”

Research paper of the week

Every biological process depends on interactions between proteins, which occur when proteins bind together. “Binder” proteins — proteins that bind to specific target molecules — have applications in drug development, disease diagnosis, and more.

But creating binder proteins is often a laborious and costly undertaking — and comes with a risk of failure.

In search of an AI-powered solution, Google’s AI lab DeepMind developed AlphaProteo, a model that designs proteins that bind to target molecules. Given a few parameters, AlphaProteo can output a candidate protein that binds to a molecule at a specified binding site.

In tests with seven target molecules, AlphaProteo generated protein binders with 3x to 300x better “binding affinity” (i.e., molecule-binding strength) than binders produced by previous methods. Moreover, AlphaProteo became the first model to successfully develop a binder for VEGF-A, a protein associated with cancer and with complications arising from diabetes.

DeepMind admits, however, that AlphaProteo failed on an eighth testing attempt — and that strong binding is usually only the first step in creating proteins that might be useful for practical applications.

Model of the week

There’s a new, highly capable generative AI model in town — and anyone can download, fine-tune, and run it.

The Allen Institute for AI (AI2), together with startup Contextual AI, developed a text-generating English-language model called OLMoE, which has a 7-billion-parameter mixture-of-experts (MoE) architecture; only about a billion of those parameters are active for any given input token. (“Parameters” roughly correspond to a model’s problem-solving skills, and models with more parameters generally — but not always — perform better than those with fewer parameters.)

MoEs break down data processing tasks into subtasks and then delegate them to smaller, specialized “expert” models. They aren’t new. But what makes OLMoE noteworthy — besides the fact that it’s openly licensed — is the fact that it outperforms many models in its class, including Meta’s Llama 2, Google’s Gemma 2, and Mistral’s Mistral 7B, on a range of applications and benchmarks.
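To make the routing idea concrete, here’s a minimal sketch of an MoE layer in PyTorch. It’s a toy illustration of the general technique, not AI2’s OLMoE code; the layer sizes, expert count, and top-2 routing below are assumptions chosen for brevity.

```python
# Toy mixture-of-experts layer: a router picks the top-k experts per token
# and blends their outputs. Illustrative only; not OLMoE's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, dim: int = 64, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The router scores every expert for each token.
        self.router = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        scores = self.router(x)                           # (tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # normalize over chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

Because only a few experts fire per token, an MoE model can carry far more total parameters than it actually uses on any single input, which is how a model like OLMoE keeps inference relatively cheap despite its size.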

Several variants of OLMoE, along with the data and code used to create them, are available on GitHub.

Grab bag

This week was Apple week. The company held an event on Monday where it announced new iPhones, Apple Watch models, and apps. Here’s a rundown in case you weren’t able to tune in.

Apple Intelligence, Apple’s suite of AI-powered services, predictably got airtime. Apple reaffirmed that ChatGPT would be integrated with the experience in several key ways. But curiously, there wasn’t any mention of AI partnerships beyond the previously announced OpenAI deal — despite Apple lightly telegraphing such partnerships earlier this summer.

In June at WWDC 2024, SVP Craig Federighi confirmed Apple’s plans to work with additional third-party models, including Google’s Gemini, in the future. “Nothing to announce right now,” he said, “but that’s our general direction.”

It’s been radio silence since.

Perhaps the necessary paperwork is taking longer to hammer out than expected — or there’s been a technical setback. Or maybe Apple’s possible investment in OpenAI rubbed some model partners the wrong way.

Whatever the case may be, it seems that ChatGPT will be the solo third-party model in Apple Intelligence for the foreseeable future. Sorry, Gemini fans.

