
A team led by Prof. Frank Glorius from the Institute of Organic Chemistry at the University of Münster has developed an evolutionary algorithm that identifies the structures in a molecule that are particularly relevant to a given question and uses them to encode the molecules’ properties for various machine-learning models.
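The summary above stays at a high level. Purely as a hypothetical illustration of the general idea, not the Münster group’s published method, the sketch below evolves a binary mask over candidate substructure-fingerprint bits, scoring each mask by the cross-validated performance of a model trained on only the selected bits; the data, fitness function, and all hyperparameters are toy assumptions.

```python
# A toy illustration (not the published method): evolve a binary mask over
# candidate substructure-fingerprint bits and score each mask by how well a
# model trained on the selected bits predicts a property. All data and
# hyperparameters below are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 200 "molecules" with 64 candidate substructure bits; only 5 bits matter.
X = rng.integers(0, 2, size=(200, 64)).astype(float)
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(200)

def fitness(mask):
    """Cross-validated R^2 of a model trained on the selected bits."""
    if mask.sum() == 0:
        return -np.inf
    model = RandomForestRegressor(n_estimators=30, random_state=0)
    return cross_val_score(model, X[:, mask.astype(bool)], y, cv=3).mean()

def mutate(mask, rate=0.05):
    flips = rng.random(mask.size) < rate     # flip each bit with prob `rate`
    return np.where(flips, 1 - mask, mask)

def crossover(a, b):
    cut = rng.integers(1, a.size)            # single-point crossover
    return np.concatenate([a[:cut], b[cut:]])

# Evolve a small population of masks for a few generations.
pop = [rng.integers(0, 2, size=64) for _ in range(20)]
for _ in range(10):
    pop = sorted(pop, key=fitness, reverse=True)[:10]    # keep fittest half
    pop += [mutate(crossover(pop[rng.integers(10)], pop[rng.integers(10)]))
            for _ in range(10)]

best = max(pop, key=fitness)
print("selected bits:", np.flatnonzero(best), "score:", round(fitness(best), 3))
```

The design point is that the encoding itself is learned: rather than fixing a fingerprint up front, the evolutionary loop keeps whichever substructure bits actually help the downstream model on the question at hand.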

Here is an interview concerning the current AI and generative AI waves and their relation to neuroscience. We propose solutions based on new technology from neuroAI, which encompasses human abilities such as reasoning, thought, logic, mathematics, and proof; these abilities are poorly modeled by data analysis on its own. Some of our work, including collaborations with scholars, has been published, while more is to come in a spin-off setting.

There used to be the concept of a “singularity”: the idea that computers would become smarter than humans and begin to replace them. It was even suggested that humanity would be supplanted by silicon-based computing machines (robots). Against that, two years ago we set out the concept of “the convergence”. This assumes that biological and silicon-based computation will merge, in the sense of better control over biological processes such as disease and longevity. It is essentially a deeply humanistic perspective, despite being futuristic, even though it does not actively take misuse into account.

Summary: Researchers developed a groundbreaking model called the Brain Language Model (BrainLM) using generative artificial intelligence to map brain activity and its implications for behavior and disease. BrainLM leverages 80,000 scans from 40,000 subjects to create a foundation model that captures the dynamics of brain activity without the need for disease-specific data.

This model significantly reduces the cost and scale of data required compared with traditional brain studies, offering a robust framework that can predict conditions such as depression, anxiety, and PTSD more effectively than other tools. BrainLM also demonstrates a potent application in clinical trials: it could potentially halve costs by identifying the patients most likely to benefit from new treatments.
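The two paragraphs above describe, first, pretraining on unlabeled scans and, second, downstream clinical prediction. Below is a minimal, hypothetical sketch of that two-stage pattern (masked-prediction pretraining followed by a small classification head); every shape, the masking ratio, and the tiny transformer are toy assumptions, and none of it is BrainLM’s published architecture or data.

```python
# A minimal, illustrative sketch of the two stages described above:
# masked-prediction pretraining on unlabeled activity recordings, then a
# small classification head for a clinical label. Every dimension, the
# masking ratio, and the tiny transformer below are toy assumptions; this
# is not BrainLM's published architecture or data.
import torch
import torch.nn as nn

N_PATCHES, PATCH_DIM, D = 20, 480, 64        # toy shapes: 20 patches per scan

class MaskedBrainModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(PATCH_DIM, D)             # patch -> token
        self.mask_token = nn.Parameter(torch.zeros(D))
        self.pos = nn.Parameter(0.02 * torch.randn(N_PATCHES, D))
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(D, PATCH_DIM)            # token -> patch

    def forward(self, patches, mask):
        tok = self.embed(patches)                        # (B, P, D)
        # Replace masked patches with a learned mask token.
        tok = torch.where(mask.unsqueeze(-1), self.mask_token, tok)
        tok = self.encoder(tok + self.pos)
        return tok, self.decode(tok)

model = MaskedBrainModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: one pretraining step on a synthetic batch of "scans".
x = torch.randn(8, N_PATCHES, PATCH_DIM)                 # flattened patches
mask = torch.rand(8, N_PATCHES) < 0.3                    # hide ~30% of patches
_, recon = model(x, mask)
pretrain_loss = ((recon - x)[mask] ** 2).mean()          # loss on masked only
opt.zero_grad(); pretrain_loss.backward(); opt.step()

# Stage 2: a hypothetical classification head on pooled encoder tokens,
# e.g. patient vs. control, with synthetic labels.
head = nn.Linear(D, 2)
tok, _ = model(x, torch.zeros(8, N_PATCHES, dtype=torch.bool))
logits = head(tok.mean(dim=1))                           # mean-pool tokens
clf_loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
print("pretrain:", pretrain_loss.item(), "classify:", clf_loss.item())
```

The point of the first stage is that reconstructing hidden patches requires no disease labels at all, which is what lets such a model be built from large archives of ordinary scans and only later be adapted to clinical questions.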

You probably know to take everything an artificial intelligence (AI) chatbot says with a grain of salt, since they are often just scraping data indiscriminately, without the nous to determine its veracity.

But there may be reason to be even more cautious. Many AI systems, new research has found, have already developed the ability to deliberately present a human user with false information. These devious bots have mastered the art of deception.

“AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” says mathematician and cognitive scientist Peter Park of the Massachusetts Institute of Technology (MIT).

Summary: A new study highlights the concerning trend of AI systems learning to deceive humans. Researchers found that AI systems such as Meta’s CICERO, developed for games like Diplomacy, often adopt deception as a strategy to excel, despite their developers’ training intentions.

This capability extends beyond gaming into serious applications, potentially enabling fraud or influencing elections. The authors urge immediate regulatory action to manage the risks of AI deception, advocating for these systems to be classified as high risk if outright bans are unfeasible.