Anthropic, an artificial intelligence company founded by exiles from OpenAI, has introduced the first AI model that can either produce conventional output or apply a controllable amount of reasoning to solve more grueling problems.
Anthropic says the new hybrid model, called Claude 3.7, will make it easier for users and developers to tackle problems that require a mix of instinctive output and step-by-step cogitation. “The user has a lot of control over the behavior—how long it thinks, and can trade reasoning and intelligence with time and budget,” says Michael Gerstenhaber, product lead, AI platform at Anthropic.
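In API terms, that control amounts to a reasoning budget attached to each request. The snippet below is a rough sketch, assuming the Anthropic Python SDK's extended-thinking options (a `thinking` setting with a `budget_tokens` cap); the model identifier and parameter names are assumptions about the shipping interface, not confirmed by Anthropic here.

```python
# Minimal sketch: dialing reasoning up or down via a token budget.
# Assumes the Anthropic Python SDK with extended-thinking support;
# the model ID and parameter names are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed identifier for Claude 3.7
    max_tokens=16000,                    # total output, including reasoning
    thinking={
        "type": "enabled",
        "budget_tokens": 8000,  # raise for harder problems, lower for speed and cost
    },
    messages=[
        {"role": "user", "content": "Plan a schedule for three overlapping projects."}
    ],
)
```

Raising the budget buys more deliberation at the cost of latency and tokens; shrinking or disabling it gets back the quick, conventional response.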
Claude 3.7 also features a new scratchpad that reveals the model’s reasoning process. A similar feature proved popular in the Chinese AI model DeepSeek. It can help a user understand how a model is working through a problem in order to modify or refine prompts.
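Continuing the sketch above, the scratchpad would show up to a developer as visible "thinking" blocks interleaved with the ordinary "text" answer in the response; the block type names are assumptions carried over from the earlier snippet.

```python
# Rough sketch: separating the model's visible reasoning (the scratchpad)
# from its final answer. Block types follow the assumptions above.
def print_scratchpad(response) -> None:
    """Print the reasoning trace separately from the final answer."""
    for block in response.content:
        if block.type == "thinking":
            print("--- reasoning ---")
            print(block.thinking)
        elif block.type == "text":
            print("--- answer ---")
            print(block.text)

print_scratchpad(response)  # `response` comes from the earlier call
```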
Dianne Penn, product lead of research at Anthropic, says the scratchpad is even more helpful when combined with the ability to ratchet a model’s reasoning up and down. If, for example, the model struggles to break down a problem correctly, a user can ask it to spend more time working on it.