
A tiny cooling device can automatically reset malfunctioning components of a quantum computer. Its performance suggests that manipulating heat could also enable other autonomous quantum devices.

Quantum computers aren’t yet fully practical because they make too many errors. In fact, if qubits – key components of this type of computer – accidentally heat up and become too energetic, they can end up in an erroneous state before the calculation even begins. One way to “reset” the qubits to their correct states is to cool them down.

Image: Chalmers University of Technology, Lovisa Håkansson.


A tiny quantum “refrigerator” can ensure that a quantum computer’s calculations start off error-free – without requiring oversight or even new hardware.

By Karmela Padavic-Callaghan

Adobe has added numerous features to its Firefly GenAI suite since its introduction in 2023. The latest update enables companies to adjust images in bulk – and by bulk, Adobe means thousands of images at a time, if necessary. Known as Firefly Bulk Create, the tool aims to accelerate advertising and messaging campaigns by making image alterations more efficient. While some critics worry that this technology might erode human artistry in advertising, Adobe’s press release promotes the new tools as a means of cutting through tedious work.

Increasingly, AI systems are interconnected, which is generating new complexities and risks. Managing these ecosystems effectively requires comprehensive training, technological infrastructures and processes designed to foster collaboration, and robust governance frameworks. Examples from healthcare, financial services, and the legal profession illustrate the challenges and ways to overcome them.


The risks and complexities of these ecosystems require specific training, infrastructure, and governance.

The once shiny, exciting use cases for quantum technology may turn out to be pretty mundane if a small but courageous band of researchers proves its theories correct. After all, using quantum computers to find new drug treatments, navigate the world without global positioning systems, and optimize complex portfolios may seem downright boring compared to using them to explore the myriad questions that surround the hard problem of consciousness. Questions like: what the heck even is consciousness – and does it have a connection to quantum mechanics? And can quantum computing help make robots conscious – and should we make them conscious?

Tough questions, for sure, but here we’ll introduce a few researchers and entrepreneurs who are heading in that direction right now and leaning into what might turn out to be the ultimate quantum computing use case of all time: consciousness.

Hartmut Neven, a physicist and computational neuroscientist leading Google’s Quantum Artificial Intelligence Lab, believes quantum computing could help explore consciousness. Speaking to New Scientist, Neven outlined experiments and theories suggesting consciousness might emerge from quantum phenomena, such as entanglement and superposition, within the human brain. He proposes leveraging quantum computers to test these ideas, potentially expanding our understanding of how the mind interacts with the physical world.

Quantum computers may soon dramatically enhance our ability to solve problems modeled by nonreversible Markov chains, according to a study published on the pre-print server arXiv.

The researchers, from Qubit Pharmaceuticals and Sorbonne University, demonstrated that quantum algorithms could achieve exponential speedups in sampling from such chains, with the potential to surpass the capabilities of classical methods. These advances – if fully realized – have a range of implications for fields like drug discovery, machine learning and financial modeling.

Markov chains are mathematical frameworks used to model systems that transition between various states, such as stock prices or molecules in motion. Each transition is governed by a set of probabilities, which defines how likely the system is to move from one state to another. Reversible Markov chains – where, at equilibrium, the probability flow from one state, call it A, to another state B equals the flow from B back to A – have traditionally been the focus of computational techniques. However, many real-world systems are nonreversible, meaning their transitions are biased in one direction, as seen in certain biological and chemical processes.
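The reversibility condition described above can be checked numerically. A minimal sketch in Python, using hypothetical three-state transition matrices invented for illustration: a chain with stationary distribution pi is reversible when it satisfies detailed balance, pi[i] * P[i, j] == pi[j] * P[j, i], i.e. the equilibrium probability flow between every pair of states is symmetric.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of transition matrix P:
    the left eigenvector of P associated with eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

def is_reversible(P, tol=1e-10):
    """Detailed balance check: pi[i]*P[i,j] == pi[j]*P[j,i] for all i, j."""
    pi = stationary_distribution(P)
    flow = pi[:, None] * P          # equilibrium probability flow i -> j
    return bool(np.allclose(flow, flow.T, atol=tol))

# Symmetric transition probabilities: flow A<->B is balanced -> reversible.
P_rev = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])

# Cyclic bias (A -> B -> C -> A dominates) -> nonreversible.
P_nonrev = np.array([[0.1, 0.8, 0.1],
                     [0.1, 0.1, 0.8],
                     [0.8, 0.1, 0.1]])

print(is_reversible(P_rev))     # True
print(is_reversible(P_nonrev))  # False
```

The biased cyclic chain is exactly the kind of nonreversible system the study targets: classical sampling techniques lean on the symmetry that the first matrix has and the second lacks.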

A study by Michael Gerlich at SBS Swiss Business School has found that increased reliance on artificial intelligence (AI) tools is linked to diminished critical thinking abilities. It points to cognitive offloading as a primary driver of the decline.

AI’s influence is growing fast. A quick search of AI-related science stories reveals how fundamental a tool it has become. Thousands of AI-assisted, AI-supported and AI-driven analyses and decision-making tools help scientists improve their research.

AI has also become more integrated into everyday life, from virtual assistants to complex information and decision-support systems. Increased usage is beginning to influence how people think, with the impact especially pronounced among younger people, who are avid users of the technology in their personal lives.

AI applications like ChatGPT are based on artificial neural networks that, in many respects, imitate the nerve cells in our brains. They are trained with vast quantities of data on high-performance computers, gobbling up massive amounts of energy in the process.

Spiking neural networks, which are much less energy-intensive, could be one solution to this problem. In the past, however, the normal techniques used to train them only worked with significant limitations.

A recent study by the University of Bonn has now presented a possible new answer to this dilemma, potentially paving the way for new AI methods that are much more energy-efficient. The findings have been published in Physical Review Letters.

In 1956, a small group of scientists gathered for the Dartmouth Summer Research Project on Artificial Intelligence, the meeting now regarded as the birth of this field of research.

To celebrate the 50th anniversary, more than 100 researchers and scholars met again at Dartmouth for AI@50, a conference that not only honored the past and assessed present accomplishments, but also helped seed ideas for future artificial intelligence research.

The initial meeting was organized by John McCarthy, then a mathematics professor at Dartmouth. In his proposal, he stated that the conference was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”