
In 1956, a group of pioneering minds gathered at Dartmouth College to define what we now call artificial intelligence (AI). Even in the early 1990s when colleagues and I were working for early-stage expert systems software companies, the notion that machines could mimic human intelligence was an audacious one. Today, AI drives businesses, automates processes, creates content, and personalizes experiences in every industry. It aids and abets more economic activity than we “ignorant savages” (as one of the founding fathers of AI, Marvin Minsky, referred to our coterie) could have ever imagined. Admittedly, the journey is still early—a journey that may take us from narrow AI to artificial general intelligence (AGI) and ultimately to artificial superintelligence (ASI).

As business and technology leaders, it’s crucial to understand what’s coming: where AI is headed, how far off AGI and ASI might be, and what opportunities and risks lie ahead. To ignore this evolution would be like a factory owner in 1900 dismissing electricity as a passing trend.

Let’s first take stock of where we are. Modern AI is narrow AI: technologies built to handle specific tasks. Whether it’s a large language model (LLM) chatbot responding to customers, algorithms optimizing supply chains, or systems predicting loan defaults, today’s AI excels at isolated functions.

Chris McHenry is Vice President of Product Management at Aviatrix.

Enterprise reliance on cloud computing is no longer a question of “if” but “how much” and “how secure.” The cloud has become the backbone of modern business, enabling rapid scaling, seamless integration and global reach.

However, as cloud adoption matures, so do its associated costs—driven significantly by the rise of artificial intelligence (AI) and the escalating energy demands of data centers. For instance, OpenAI recently revealed plans to increase its prices by 120% over the next five years, even after securing an industry-record $6.6 billion in funding.

The notion of entropy grew out of an attempt at perfecting machinery during the Industrial Revolution. A 28-year-old French military engineer named Sadi Carnot set out to calculate the ultimate efficiency of the steam-powered engine. In 1824, he published a 118-page book titled Reflections on the Motive Power of Fire, which he sold on the banks of the Seine for 3 francs. Carnot’s book was largely disregarded by the scientific community, and he died several years later of cholera. His body was burned, as were many of his papers. But some copies of his book survived, and in them lay the embers of a new science of thermodynamics — the motive power of fire.

Carnot realized that the steam engine is, at its core, a machine that exploits the tendency for heat to flow from hot objects to cold ones. He drew up the most efficient engine conceivable, establishing a bound on the fraction of heat that can be converted to work, a result now known as Carnot’s theorem. His most consequential statement comes as a caveat on the last page of the book: “We should not expect ever to utilize in practice all the motive power of combustibles.” Some energy will always be dissipated through friction, vibration, or another unwanted form of motion. Perfection is unattainable.
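Carnot's bound has a standard quantitative form, stated here in modern textbook notation rather than quoted from the book: for an engine running between a hot reservoir at temperature T_H and a cold one at T_C, the fraction of heat converted to work can never exceed the Carnot efficiency.

```latex
% Carnot's theorem (modern statement): the efficiency of any heat engine
% operating between a hot reservoir at temperature T_H and a cold reservoir
% at temperature T_C is bounded by the Carnot efficiency.
\eta = \frac{W}{Q_H} \le 1 - \frac{T_C}{T_H}
```

Because the cold reservoir can never sit at absolute zero in practice, some heat is always rejected unconverted, which is exactly the caveat Carnot flagged.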

Reading through Carnot’s book a few decades later, in 1865, the German physicist Rudolf Clausius coined a term for the proportion of energy that’s locked up in futility. He called it “entropy,” after the Greek word for transformation. He then laid out what became known as the second law of thermodynamics: “The entropy of the universe tends to a maximum.”
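In modern notation, and again as standard textbook forms rather than quotations from Clausius, his definition of entropy and his statement of the second law read:

```latex
% Clausius: entropy change for a reversible transfer of heat dQ_rev at
% temperature T, and the second law applied to the universe as a whole.
dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad \Delta S_{\mathrm{universe}} \ge 0
```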

Physicists of the era erroneously believed that heat was a fluid (called “caloric”). Over the following decades, they realized that heat was instead a byproduct of individual molecules bumping around. This shift in perspective allowed the Austrian physicist Ludwig Boltzmann to reframe and sharpen the idea of entropy using probabilities.

Boltzmann distinguished the microscopic properties of molecules, such as their individual locations and velocities, from bulk macroscopic properties of a gas like temperature and pressure…
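The excerpt breaks off here, but the relation Boltzmann ultimately arrived at is the standard bridge between those two levels of description: the entropy of a macrostate counts the microscopic arrangements compatible with it.

```latex
% Boltzmann's statistical entropy: W is the number of microstates (sets of
% molecular positions and velocities) consistent with the observed macrostate;
% k_B is Boltzmann's constant.
S = k_B \ln W
```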

The field of artificial intelligence (AI) has witnessed extraordinary advancements in recent years, ranging from natural language processing breakthroughs to the development of sophisticated robotics. Among these innovations, multi-agent systems (MAS) have emerged as a transformative approach for solving problems that single agents struggle to address. Multi-agent collaboration harnesses the power of interactions between autonomous entities, or “agents,” to achieve shared or individual objectives. In this article, we explore one specific and impactful technique within multi-agent collaboration: role-based collaboration enhanced by prompt engineering. This approach has proven particularly effective in practical applications such as collaboratively building a software application.
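To make role-based collaboration through prompt engineering concrete, here is a minimal Python sketch: three agents built on the same underlying model, differentiated only by their role prompts, passing work down a pipeline. The llm_complete() helper, the role prompts, and the pipeline itself are illustrative assumptions, not the article's implementation.

```python
# Minimal sketch of role-based multi-agent collaboration via prompt engineering.
# llm_complete() is a hypothetical stand-in for any chat-completion API.
from dataclasses import dataclass


def llm_complete(role_prompt: str, message: str) -> str:
    """Hypothetical LLM call; swap in your provider's chat-completion API."""
    return f"[{role_prompt.split('.')[0]}] response to: {message[:60]}"


@dataclass
class Agent:
    name: str
    role_prompt: str  # the role assigned purely through prompt engineering

    def act(self, task: str) -> str:
        return llm_complete(self.role_prompt, task)


# The same underlying model plays different roles depending on its prompt.
pm = Agent("PM", "You are a product manager. Turn the request into a concise spec.")
dev = Agent("Dev", "You are a software developer. Implement the spec in Python.")
qa = Agent("QA", "You are a code reviewer. List defects and suggest fixes.")

request = "Build a CLI tool that deduplicates lines in a text file."
spec = pm.act(request)    # role 1: specify
code = dev.act(spec)      # role 2: implement
review = qa.act(code)     # role 3: critique

for label, output in [("Spec", spec), ("Code", code), ("Review", review)]:
    print(f"--- {label} ---\n{output}\n")
```

The point of the pattern is that coordination lives entirely in the prompts and in the order outputs are handed off; no agent needs special tooling to participate.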

Originally published on Towards AI.

One of the major challenges in using LLMs in business is that LLMs hallucinate. How can you entrust your clients to a chatbot that might, at any moment, tell them something inappropriate? And how can you trust your corporate AI assistant if it makes things up at random?

That’s a problem, especially given that an LLM can’t be fired or held accountable.

PlantRNA-FM, an AI model trained on RNA data from over 1,100 plants, decodes genetic patterns to advance plant science, improve crops, and tackle global agricultural challenges.

A groundbreaking Artificial Intelligence (AI) model designed to decode the sequences and structural patterns that form the genetic “language” of plants has been launched by a research collaboration.

Named PlantRNA-FM, this innovative model is the first of its kind and was developed by a partnership between plant researchers at the John Innes Centre and computer scientists at the University of Exeter.