Exclusive: Start-up FutureHouse debuts powerful AI ‘reasoning model’ for science

As artificial intelligence (AI) tools shake up the scientific workflow, Sam Rodriques dreams of a more systemic transformation. His start-up company, FutureHouse in San Francisco, California, aims to build an ‘AI scientist’ that can command the entire research pipeline, from hypothesis generation to paper production.

Today, his team took a step in that direction, releasing what it calls the first true ‘reasoning model’ specifically designed for scientific tasks. The model, called ether0, is a large language model (LLM) that’s purpose-built for chemistry, which it learnt simply by taking a test of around 500,000 questions. Following instructions in plain English, ether0 can spit out formulae for drug-like molecules that satisfy a range of criteria.

Science’s ‘Gollum effect’: PhDs bear brunt of territorial behaviour

Almost half of the scientists who responded to a survey have experienced territorial and undermining behaviours from other scientists — most commonly during their PhD studies1. Of those affected, nearly half said that the perpetrator was a high-profile researcher, and one-third said it was their own supervisor.

Most of the survey respondents were ecologists, but the study’s organizers suspect that surveys focusing on other disciplines would yield similar results.

The gatekeeping behaviours that the study documents “damage careers, particularly of early-career and marginalized researchers”, says lead author Jose Valdez, an ecologist at the German Centre for Integrative Biodiversity Research in Leipzig. “Most alarming was that nearly one in five of those affected left academia or science entirely.”
