
Anthropic, an artificial intelligence company founded by exiles from OpenAI, has introduced the first AI model that can either produce conventional output or engage in a controllable amount of reasoning to solve more demanding problems.

Anthropic says the new hybrid model, called Claude 3.7, will make it easier for users and developers to tackle problems that require a mix of instinctive output and step-by-step cogitation. "The user has a lot of control over the behavior—how long it thinks, and can trade reasoning and intelligence with time and budget," says Michael Gerstenhaber, product lead for AI platform at Anthropic.

Claude 3.7 also features a new scratchpad that reveals the model's reasoning process. A similar feature proved popular with the Chinese AI model DeepSeek. It can help a user understand how the model is working through a problem, so they can modify or refine their prompts.

Dianne Penn, product lead of research at Anthropic, says the scratchpad is even more helpful when combined with the ability to ratchet a model’s reasoning up and down. If, for example, the model struggles to break down a problem correctly, a user can ask it to spend more time working on it.

Frontier AI companies are increasingly focused on getting the models to reason over problems as a way to increase their capabilities and broaden their usefulness. OpenAI, the company that kicked off the current AI boom with ChatGPT, was the first to offer a reasoning AI model, called o1, in September 2024.

OpenAI has since introduced a more powerful version called o3, while rival Google has released a similar offering for its model Gemini, called Flash Thinking. In both cases, users have to switch between models to access the reasoning abilities—a key difference compared to Claude 3.7.



Claude 3.7, the latest model from Anthropic, can be instructed to engage in a specific amount of reasoning to solve hard problems.
