
AIs generate more novel and exciting research ideas than human experts

The first statistically significant results are in: not only can Large Language Model (LLM) AIs generate new expert-level scientific research ideas, but their ideas are more original and exciting than the best of ours – as judged by human experts.

Recent breakthroughs in large language models (LLMs) have excited researchers about the potential to revolutionize scientific discovery, with models like ChatGPT and Anthropic’s Claude showing an ability to autonomously generate and validate new research ideas.

This, of course, was one of the many things most people assumed AIs could never take over from humans: the ability to generate new knowledge and make new scientific discoveries, rather than merely stitching together existing knowledge from their training data.

Combining existing sensors with machine learning algorithms improves robots’ intrinsic sense of touch

A team of roboticists at the German Aerospace Center’s Institute of Robotics and Mechatronics finds that combining traditional internal force-torque sensors with machine-learning algorithms can give robots a new way to sense touch.

In their study published in the journal Science Robotics, the group took an entirely new approach to giving robots a sense of touch that does not involve artificial skin.

For living creatures, touch is a two-way street; when you touch something, you feel its texture, temperature and other features. But you can also be touched, as when someone or something else comes in contact with a part of your body. In this new study, the research team found a way to emulate the latter type of touch in a robot by combining internal force-torque sensors with a machine-learning algorithm.
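As a rough illustration of the general idea only (not the DLR team's actual method), the sketch below trains an off-the-shelf regressor to map 6-axis force-torque readings to an estimated contact point. The synthetic data, the scikit-learn model choice and the simple lever-arm geometry are all assumptions made for the example.

```python
# Minimal sketch (not the DLR method): map 6-axis force-torque readings to a
# contact-location estimate with an off-the-shelf regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data: each sample is a wrench [Fx, Fy, Fz, Tx, Ty, Tz]
# measured by the robot's internal sensor, labeled with the (x, y, z) contact
# point that produced it. A real dataset would come from calibrated pokes of
# the robot; here we synthesize one purely for illustration.
n = 5000
contact_points = rng.uniform(-0.2, 0.2, size=(n, 3))      # metres
forces = rng.normal(0.0, 5.0, size=(n, 3))                # newtons
torques = np.cross(contact_points, forces)                # tau = r x F
wrenches = np.hstack([forces, torques])
wrenches += rng.normal(0.0, 0.05, size=wrenches.shape)    # sensor noise

X_train, X_test, y_train, y_test = train_test_split(
    wrenches, contact_points, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Estimate where the robot was touched from new wrench readings.
print("mean abs. localization error (m):",
      np.abs(model.predict(X_test) - y_test).mean())
```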

OpenAI releases reasoning AI with eye on safety, accuracy

ChatGPT creator OpenAI on Thursday released a new series of artificial intelligence models designed to spend more time thinking, in the hope that generative AI chatbots will provide more accurate and beneficial responses.

The new models, known as OpenAI o1-preview, are designed to tackle more challenging problems in science, coding and mathematics, areas where earlier models have been criticized for inconsistent accuracy.

Unlike their predecessors, these models have been trained to refine their thinking processes, try different methods and recognize mistakes before delivering a final answer.
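That internal reasoning happens server-side; from a developer's point of view the model is called like any other chat model. The snippet below is a minimal sketch using the OpenAI Python SDK, assuming an OPENAI_API_KEY is set in the environment and that the published "o1-preview" model name is available to the account.

```python
# Minimal sketch: asking the o1-preview model a multi-step word problem via
# the OpenAI Python SDK. Assumes OPENAI_API_KEY is set in the environment and
# that the "o1-preview" model is available to this account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": "A train leaves at 09:12 travelling 84 km/h; another "
                       "leaves the same station at 09:40 at 112 km/h on the "
                       "same track. When does the second catch the first?",
        }
    ],
)

# The model reasons internally before answering; only the final answer is returned.
print(response.choices[0].message.content)
```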

Declassified Nuclear Tests — Enhanced with AI — Slow Motion

The following declassified nuclear test footage has been enhanced using AI with techniques such as slow motion, frame interpolation, upscaling, and colorization. This helps improve the clarity and visual quality of the original recordings, which were often degraded or limited by the technology of the time. Experiencing these shots with enhanced detail brings the devastating power of atomic weapons into focus and offers a clearer perspective on their catastrophic potential and impact.
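As a toy illustration of one of the techniques mentioned, the sketch below performs simple frame interpolation with OpenCV dense optical flow. The actual footage was almost certainly processed with dedicated, learned enhancement tools, so this is only a hand-rolled approximation of the idea, and the file names in the usage comment are hypothetical.

```python
# Toy frame interpolation: estimate dense optical flow between two consecutive
# frames and warp to synthesize an in-between frame, roughly doubling the
# frame rate when applied across a whole clip.
import cv2
import numpy as np

def interpolate_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Synthesize the frame halfway between frame_a and frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: per-pixel (dx, dy) motion from frame_a to frame_b.
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # Sample frame_a at points displaced by half the motion (approximate
    # backward warp) to land between the two frames.
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Usage (hypothetical file names): read two frames, write the new middle frame.
# cap = cv2.VideoCapture("test_footage.mp4")
# ok1, f1 = cap.read(); ok2, f2 = cap.read()
# cv2.imwrite("midframe.png", interpolate_midframe(f1, f2))
```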

Music generated with Suno AI.

Instagram: hashem.alghaili
Facebook: sciencenaturepage

OpenAI Launching “Strawberry” Model With “Human-Like Reasoning” as Soon as This Week

ChatGPT maker OpenAI is rumored to be imminently releasing a brand-new AI model, internally dubbed “Strawberry,” that has a “human-like” ability to reason.

As Bloomberg reports, a person familiar with the project says it could be released as soon as this week.

We’ve seen rumors surrounding an OpenAI model capable of reasoning swirl for many months now. In November, Reuters and The Information reported that the company was working on a shadowy project called Q* (pronounced "Q-Star"), which was alleged to represent a breakthrough in OpenAI’s efforts to realize artificial general intelligence, the theoretical point at which an AI could outperform a human.

Oracle Unveils World’s First Zettascale AI Supercomputer with 131K NVIDIA Blackwell GPUs

Elon Musk is not smiling: Oracle and Elon Musk’s AI startup xAI recently ended talks on a potential $10 billion cloud computing deal, with xAI opting to build its own data center in Memphis, Tennessee.

At the time, Musk emphasized the need for speed and for control over the company’s own infrastructure. “Our fundamental competitiveness depends on being faster than any other AI company. This is the only way to catch up,” he added.

xAI is constructing its own AI data center with 100,000 NVIDIA chips, which it claims will be the world’s most powerful AI training cluster, marking a significant shift in strategy from cloud reliance to full infrastructure ownership.
