
Decoding the Black Box of AI — Scientists Uncover Unexpected Results

Artificial intelligence (AI) has been advancing rapidly, but its inner workings often remain a “black box”: how it reaches its conclusions is not visible. However, a significant breakthrough has been made by Prof. Dr. Jürgen Bajorath and his team, cheminformatics experts at the University of Bonn. They have devised a technique that uncovers the operational mechanisms of certain AI systems used in pharmaceutical research.

Surprisingly, their findings indicate that, when predicting drug effectiveness, these AI models rely primarily on recalling existing data rather than on learning specific chemical interactions. Their results have recently been published in Nature Machine Intelligence.

Which drug molecule is most effective? Researchers are feverishly searching for efficient active substances to combat diseases. These compounds often dock onto proteins, usually enzymes or receptors, that trigger a specific chain of physiological actions.
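The team’s core finding can be illustrated with a simple baseline comparison: if a model’s potency predictions are largely reproduced by looking up the most similar training compounds, memorization rather than learned chemistry may be driving the results. The sketch below is not the Bonn group’s method; it is a minimal, hypothetical illustration using scikit-learn, with synthetic data standing in for real molecular fingerprints and measured potencies.

```python
# Hypothetical sketch: compare a trained model against a nearest-neighbor
# baseline to probe whether its predictions mostly "recall" similar training
# data. All data here is synthetic; a real study would use molecular
# fingerprints (e.g. ECFP4) and measured potency values.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 128)).astype(float)  # stand-in binary fingerprints
y = X[:, :10].sum(axis=1) + rng.normal(0, 0.5, 500)    # stand-in potency values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)   # "black box" model
baseline = KNeighborsRegressor(n_neighbors=1).fit(X_tr, y_tr)   # pure recall

# If the two prediction sets correlate strongly, the complex model may be
# doing little more than a nearest-neighbor lookup of its training data.
corr = np.corrcoef(model.predict(X_te), baseline.predict(X_te))[0, 1]
print(f"correlation with 1-NN baseline: {corr:.2f}")
```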

Two Space Stories In 2024 Will Determine The Future Of Humanity

A long-awaited space mission in the coming year could herald the start of a new era where so many science fiction dreams finally begin to cement themselves as science fact. But first we must pass a critical test of our own making that pits our technological expansion into orbit against the sun itself.

It’s not that difficult to predict what science stories we’ll be talking about over the next year: artificial intelligence, climate change and advances in biotechnology will remain front of mind. But there’s a pair of happenings just beyond our planet that I’ll be watching closely, because they amount to tests of a sort that could determine the trajectory of our species.

The first story you’ve probably already heard about: NASA aims to launch its Artemis II mission by the end of the year, carrying humans on a journey around the Moon and back. It would mark the first time anyone has traveled beyond low-Earth orbit in more than 50 years.

Thanks to AI, you don’t need a computer science degree to get a job in tech, IBM AI chief says

According to Candy, the rise of AI would instead put a premium on soft skills like critical and creative thinking.

“Questioning, creativity skills, and innovation are going to be hugely important because I think AI’s going to free up more capacity for creative thought processes,” he told Fortune earlier.

It’s not just jobs in tech, though. Candy said that advances in AI image-generation technology could also affect those working in the arts.

Company executives can ensure generative AI is ethical with these steps

Businesses must also ensure they are prepared for forthcoming regulations. President Biden signed an executive order to create AI safeguards, the U.K. hosted the world’s first AI Safety Summit, and the EU has brought forward its own legislation. Governments across the globe are alive to the risks. C-suite leaders must be too — and that means their generative AI systems must adhere to current and future regulatory requirements.

So how do leaders balance the risks and rewards of generative AI?

Businesses that leverage three principles are poised to succeed: human-first decision-making, robust governance over large language model (LLM) content, and a universal connected AI approach. Making good choices now will allow leaders to future-proof their business and reap the benefits of AI while boosting the bottom line.

CMU and Emerald Cloud Lab Researchers Unveil Coscientist: An Artificial Intelligence System Powered by GPT-4 for Autonomous Experimental Design and Execution in Diverse Fields

Integrating large language models (LLMs) into various scientific domains has notably reshaped research methodologies. Among these advancements, an innovative system named Coscientist has emerged, as outlined in the paper “Autonomous chemical research with large language models,” authored by researchers from Carnegie Mellon University and Emerald Cloud Lab. This groundbreaking system, powered by multiple LLMs, is a pivotal achievement in the convergence of language models and laboratory automation technologies.

Coscientist comprises several intricately designed modules, with the ‘Planner’ as its cornerstone. This module runs on a GPT-4 chat completion instance and acts as an interactive assistant that directs the system through commands such as ‘GOOGLE,’ ‘PYTHON,’ ‘DOCUMENTATION,’ and ‘EXPERIMENT.’ The ‘Web Searcher’ module, also powered by GPT-4, enhances synthesis planning; it performed notably well in trials involving acetaminophen, aspirin, nitroaniline, and phenolphthalein. The ‘Code execution’ module, triggered by the ‘PYTHON’ command, handles calculations for experiment preparation, while the ‘EXPERIMENT’ command, guided by documentation retrieved through the ‘DOCUMENTATION’ module, drives experiment automation via APIs.
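To make the architecture concrete, here is a minimal, hypothetical sketch of a command-dispatch loop of the kind the paper describes. It is not Coscientist’s actual code: the handler functions, the ‘COMMAND: payload’ parsing convention, and the `call_llm` stub are all assumptions made for illustration.

```python
# Hypothetical sketch of a Planner-style dispatch loop. The real system is
# described in "Autonomous chemical research with large language models";
# everything below (handlers, parsing, call_llm) is illustrative only.

def call_llm(prompt: str) -> str:
    """Stub standing in for a GPT-4 chat completion call."""
    raise NotImplementedError("wire up your LLM client here")

def web_search(query: str) -> str:      # 'GOOGLE' -> Web Searcher module
    return f"search results for: {query}"

def run_python(code: str) -> str:       # 'PYTHON' -> Code execution module
    return "calculation output"

def search_docs(query: str) -> str:     # 'DOCUMENTATION' -> docs retrieval
    return "relevant API documentation"

def run_experiment(plan: str) -> str:   # 'EXPERIMENT' -> lab automation API
    return "experiment submitted"

HANDLERS = {
    "GOOGLE": web_search,
    "PYTHON": run_python,
    "DOCUMENTATION": search_docs,
    "EXPERIMENT": run_experiment,
}

def planner_step(task: str) -> str:
    """One planning iteration: ask the LLM for an action, then dispatch it.
    Assumes the model replies as 'COMMAND: payload' (an invented convention)."""
    reply = call_llm(f"Task: {task}\nRespond as COMMAND: payload")
    command, _, payload = reply.partition(":")
    handler = HANDLERS.get(command.strip().upper())
    if handler is None:
        return f"unrecognized command: {command!r}"
    return handler(payload.strip())
```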

The Web Searcher’s success across these diverse trials demonstrates a capacity for efficient exploration and decision-making in chemical synthesis. The documentation search module, in turn, lets Coscientist draw on tailored technical documentation, improving the accuracy of its API calls and the overall performance of experiment automation.
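A documentation search step of this kind can be approximated with simple text retrieval: index the instrument API docs, pull the chunks most relevant to a planned action, and hand them to the model. The TF-IDF sketch below is a stand-in, not the paper’s actual retrieval method, and the API snippets in it are invented.

```python
# Hypothetical stand-in for a documentation search step: retrieve the doc
# chunk most relevant to a planned action before asking the LLM to write
# automation code. Uses TF-IDF similarity for simplicity; this is not the
# paper's actual method, and the API snippets below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

doc_chunks = [
    "heater_shaker.set_temperature(celsius): heats the module to a target temperature",
    "pipette.transfer(volume_ul, source, dest): moves liquid between wells",
    "plate_reader.read_absorbance(wavelength_nm): measures optical absorbance",
]

def top_doc(query: str, chunks: list[str]) -> str:
    """Return the documentation chunk most similar to the query."""
    vec = TfidfVectorizer().fit(chunks + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(chunks))
    return chunks[scores.argmax()]

print(top_doc("transfer 50 uL from well A1 to B1", doc_chunks))
```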

The ‘Effective Accelerationism’ movement doesn’t care if humans are replaced by AI as long as they’re there to make money from it

The Effective Accelerationism movement — a staunchly pro-AI ideology that has Silicon Valley split over how artificial intelligence should be regulated — appears to be walking a razor’s edge between being a techno-libertarian philosophy and a nihilistic, even reckless, approach to advancing one of…


Silicon Valley’s new ideological faction, called Effective Accelerationism or e/acc, is focused on the pursuit of AI development with no guardrails to slow its growth.