
Currently, computing technologies are rapidly evolving and reshaping how we imagine the future. Quantum computing is taking its first toddling steps toward delivering practical results that promise unprecedented abilities. Meanwhile, artificial intelligence remains in public conversation as it’s used for everything from writing business emails to generating bespoke images or songs from text prompts to producing deep fakes.

Some physicists are exploring the opportunities that arise when the power of machine learning, a widely used approach in AI research, is brought to bear on quantum physics. Machine learning may accelerate quantum research and provide insights into quantum technologies, and quantum phenomena present formidable challenges that researchers can use to test the bounds of machine learning.

When studying quantum physics or its applications (including the development of quantum computers), researchers often rely on a detailed description of many interacting quantum particles. But the very features that make quantum computing potentially powerful also make quantum systems difficult to describe using current computers. In some instances, machine learning has produced descriptions that capture the most significant features of quantum systems while ignoring less relevant details—efficiently providing useful approximations.
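To picture what such a compressed description can look like, one widely used example from this line of research is a neural-network ansatz for a many-body wavefunction. The sketch below uses a restricted Boltzmann machine form as an illustration; the function and variable names are our own, not taken from the article.

```python
import numpy as np

def rbm_amplitude(spins, a, b, W):
    """Unnormalized amplitude of a restricted-Boltzmann-machine
    wavefunction ansatz for a spin configuration (entries +/-1).

    spins: shape (N,)   visible spins
    a:     shape (N,)   visible biases
    b:     shape (M,)   hidden biases
    W:     shape (N, M) couplings

    The parameter count (N + M + N*M) grows polynomially with system
    size, while the full state vector would need 2**N amplitudes.
    """
    theta = b + W.T @ spins          # effective field on each hidden unit
    return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

# Tiny example: 4 spins, 2 hidden units -> 14 parameters vs 16 amplitudes.
N, M = 4, 2
rng = np.random.default_rng(0)
amp = rbm_amplitude(np.ones(N), rng.normal(size=N),
                    rng.normal(size=M), rng.normal(size=(N, M)))
```

Learning then amounts to tuning `a`, `b`, and `W` so the ansatz captures the most significant features of the true state, which is the compression the paragraph describes.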


As opposed to black holes, white holes are thought to eject matter and light while never absorbing any. Detecting these as yet hypothetical objects could not only provide evidence of quantum gravity but also explain the origin of dark matter.

No one today questions the existence of black holes, objects from which nothing, not even light, can escape. But after they were first predicted in 1915 by Einstein’s general theory of relativity, it took many decades and multiple observations to show that they actually existed. And when it comes to white holes, history may well repeat itself. Such objects, which are also predicted by general relativity, can only eject matter and light, and as such are the exact opposite of black holes, which can only absorb them. So, just as it is impossible to escape from a black hole, it is equally impossible to enter a white one, occasionally and perhaps more aptly dubbed a “white fountain”. For many, these exotic bodies are mere mathematical curiosities.

Scientists studying Large Language Models (LLMs) have found that LLMs perform similarly to humans in cognitive tasks, often making judgments and decisions that deviate from rational norms, such as risk aversion and loss aversion. LLMs also exhibit human-like biases and errors, particularly in tasks involving probability judgments and arithmetic operations. These similarities suggest the potential for using LLMs as models of human cognition. However, significant challenges remain, including the extensive data LLMs are trained on and the unclear origins of these behavioural similarities.

The suitability of LLMs as models of human cognition is debated due to several issues. LLMs are trained on much larger datasets than humans and may have been exposed to test questions, leading to artificial enhancements in human-like behaviors through value alignment processes. Despite these challenges, fine-tuning LLMs, such as the LLaMA-1-65B model, on human choice datasets has improved accuracy in predicting human behavior. Prior research has also highlighted the importance of synthetic datasets in enhancing LLM capabilities, particularly in problem-solving tasks like arithmetic. Pretraining on such datasets can significantly improve performance in predicting human decisions.

Researchers from Princeton University and Warwick University propose enhancing the utility of LLMs as cognitive models by (i) utilizing computationally equivalent tasks that both LLMs and rational agents must master for cognitive problem-solving and (ii) examining task distributions required for LLMs to exhibit human-like behaviors. Applied to decision-making, specifically risky and intertemporal choice, Arithmetic-GPT, an LLM pretrained on an ecologically valid arithmetic dataset, predicts human behavior better than many traditional cognitive models. This pretraining suffices to align LLMs closely with human decision-making.
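The idea of an "ecologically valid" arithmetic pretraining set can be pictured with a small synthetic-data sketch. Everything below (the function name, the small-number frequency bias used to mimic naturalistic number distributions) is an illustrative assumption, not the authors' actual pipeline.

```python
import operator
import random

# Basic operations for the synthetic expressions.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def synth_arithmetic_dataset(n_examples, seed=0):
    """Generate simple arithmetic strings like '7 + 12 = 19'.

    Taking the min of two uniform draws biases operands toward small
    numbers, a rough stand-in for the naturalistic number-frequency
    distributions implied by 'ecologically valid'.
    """
    rng = random.Random(seed)
    examples = []
    for _ in range(n_examples):
        a = min(rng.randint(0, 100), rng.randint(0, 100))
        b = min(rng.randint(0, 100), rng.randint(0, 100))
        op = rng.choice(list(OPS))
        examples.append(f"{a} {op} {b} = {OPS[op](a, b)}")
    return examples

data = synth_arithmetic_dataset(1000)
```

A model pretrained on strings like these has, in effect, practiced the computations (expected values, discounting-style arithmetic) that both humans and rational agents need for risky and intertemporal choice.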

It seems like over the past few years, quantum is being talked about more and more. We’re hearing words like qubits, entanglement, superposition, and quantum computing. But what does that mean … and is quantum science really that big of a deal? Yeah, it is.

That’s because quantum science has the potential to revolutionize our world, from processing data and predicting weather to picking stocks or even discovering new medical drugs. Quantum, specifically quantum computers, could solve countless problems.

Dr. Heather Masson-Forsythe, an AAAS Science & Technology Fellow in NSF’s Directorate for Computer and Information Science and Engineering, hosts this future-forward episode.

Featured guests include (in order of appearance):