
Building a plane while flying it isn’t typically a goal for most, but for a team of Harvard-led physicists that general idea might be a key to finally building large-scale quantum computers.

Described in a new paper in Nature, the research team, which includes collaborators from QuEra Computing, MIT, and the University of Innsbruck, developed a new approach for processing quantum information that allows them to dynamically change the layout of atoms in their system by moving and connecting them with each other in the midst of computation.

This ability to shuffle qubits (the fundamental building blocks of quantum computers and the source of their massive processing power) during computation while preserving their quantum state dramatically expands processing capabilities and allows errors to be corrected on the fly. Clearing this hurdle marks a major step toward building large-scale machines that leverage the bizarre characteristics of quantum mechanics and promise real-world breakthroughs in materials science, communication technologies, finance, and many other fields.

Natural language processing (NLP) has entered a transformational period with the introduction of Large Language Models (LLMs), like the GPT series, which have set new performance standards across a wide range of linguistic tasks. One of the main drivers of this success is autoregressive pretraining, which trains models to predict the most likely next token in a sequence. This foundational technique lets models absorb the complex interplay between syntax and semantics, contributing to their remarkably human-like ability to understand language. Beyond NLP, autoregressive pretraining has also contributed substantially to computer vision.
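To make the objective concrete, here is a minimal sketch of next-token (autoregressive) pretraining in PyTorch. The `model` is a placeholder for any causal language model that maps token ids to per-position vocabulary logits; it is an assumption for illustration, not a reference to a specific implementation.

```python
# Minimal sketch of the autoregressive (next-token) pretraining objective.
# `model` is hypothetical: any causal LM returning (batch, seq_len, vocab) logits.
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    """token_ids: (batch, seq_len) integer tensor of tokenized text."""
    inputs = token_ids[:, :-1]      # the model sees tokens 0 .. n-2
    targets = token_ids[:, 1:]      # and is trained to predict tokens 1 .. n-1
    logits = model(inputs)          # (batch, seq_len - 1, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
```

The same shift-by-one objective, applied over image patches or tokens instead of words, is what iGPT-style models bring to vision.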

In computer vision, autoregressive pretraining was initially successful, but the field has since shifted sharply toward BERT-style pretraining. This shift is noteworthy, especially given iGPT's initial finding that autoregressive and BERT-style pretraining performed comparably across various tasks. Subsequent research, however, has come to prefer BERT-style pretraining for its greater effectiveness in visual representation learning. For instance, MAE shows that a scalable approach to visual representation learning can be as simple as predicting the values of randomly masked pixels.
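For contrast with the autoregressive objective above, the sketch below illustrates the BERT-style idea in vision: hide a random subset of image patches and train the model to reconstruct them, with the loss computed only on the masked positions (as MAE does). The `encoder_decoder` module and the masking details are simplifying assumptions, not the MAE implementation.

```python
# Illustrative sketch of BERT-style masked pretraining for images (MAE-like).
# `encoder_decoder` is a placeholder module mapping (B, N, D) patches to (B, N, D) predictions.
import torch
import torch.nn.functional as F

def masked_patch_loss(encoder_decoder, patches, mask_ratio=0.75):
    """patches: (batch, num_patches, patch_dim) flattened image patches."""
    b, n, _ = patches.shape
    num_masked = int(n * mask_ratio)

    # Choose a random subset of patches to hide from the model.
    noise = torch.rand(b, n, device=patches.device)
    masked_idx = noise.argsort(dim=1)[:, :num_masked]          # (batch, num_masked)
    mask = torch.zeros(b, n, dtype=torch.bool, device=patches.device)
    mask.scatter_(1, masked_idx, True)

    # Zero out the masked patches and ask the model to reconstruct everything.
    visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)
    pred = encoder_decoder(visible)                             # (batch, num_patches, patch_dim)

    # The reconstruction loss is taken only over the masked positions.
    return F.mse_loss(pred[mask], patches[mask])
```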

In this work, the Johns Hopkins University and UC Santa Cruz research team revisited iGPT and asked whether autoregressive pretraining can produce highly capable vision learners, particularly when scaled up. Their approach incorporates two important changes. First, because images are naturally noisy and redundant, the team "tokenizes" them into semantic tokens using BEiT. This shifts the autoregressive prediction target from raw pixels to semantic tokens, allowing a more sophisticated understanding of the interactions between different image regions. Second, the team supplements the generative decoder, which autoregressively predicts the next semantic token, with a discriminative decoder.
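A hedged sketch of that two-decoder setup is below: a frozen image tokenizer (such as BEiT's) supplies discrete semantic tokens, a generative head predicts the next semantic token autoregressively, and a discriminative head predicts the semantic token at each position. The module names, shapes, and loss weighting are illustrative assumptions, not the authors' released code.

```python
# Assumed sketch of pairing a generative (next-token) decoder with a discriminative decoder
# over semantic tokens produced by a frozen image tokenizer. Shapes and names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoDecoderPretrainer(nn.Module):
    def __init__(self, backbone, vocab_size, dim):
        super().__init__()
        self.backbone = backbone                         # e.g. a ViT returning (B, N, dim) features
        self.generative_head = nn.Linear(dim, vocab_size)
        self.discriminative_head = nn.Linear(dim, vocab_size)

    def forward(self, images, semantic_tokens):
        """semantic_tokens: (B, N) ids from a frozen tokenizer such as BEiT's."""
        feats = self.backbone(images)                    # (B, N, dim)

        # Generative decoder: position i predicts the semantic token at position i + 1.
        gen_logits = self.generative_head(feats[:, :-1])
        gen_loss = F.cross_entropy(
            gen_logits.reshape(-1, gen_logits.size(-1)),
            semantic_tokens[:, 1:].reshape(-1),
        )

        # Discriminative decoder: predict the semantic token at each position directly.
        dis_logits = self.discriminative_head(feats)
        dis_loss = F.cross_entropy(
            dis_logits.reshape(-1, dis_logits.size(-1)),
            semantic_tokens.reshape(-1),
        )
        return gen_loss + dis_loss
```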

In the ever-evolving landscape of artificial intelligence, a seismic shift is unfolding at OpenAI, and it involves more than just lines of code. The reported ‘superintelligence’ breakthrough has sent shockwaves through the company, pushing the boundaries of what we thought was possible and raising questions that extend far beyond the realm of algorithms.

Imagine a breakthrough so monumental that it threatens to dismantle the very fabric of the company that achieved it. OpenAI, the trailblazer in artificial intelligence, finds itself at a crossroads, dealing not only with technological advancement but also with the profound ethical and existential implications of its own creation – ‘superintelligence.’

The Breakthrough that Nearly Broke OpenAI: The Information's report of a generative AI breakthrough, one said to be capable of enabling 'superintelligence' within this decade, sheds light on the internal disruption at OpenAI. Spearheaded by Chief Scientist Ilya Sutskever, the breakthrough challenges conventional AI training by allowing machines to solve problems they've never encountered, reasoning over cleaner, computer-generated data.

“The only plausible way this can arise among different stars is if there is a consistent process operating during the formation of the heavy elements,” Mumpower said. “This is incredibly profound and is the first evidence of fission operating in the cosmos, confirming a theory we proposed several years ago.”

“As we’ve acquired more observations, the cosmos is saying, ‘hey, there’s a signature here, and it can only come from fission.’”

Neutron stars are created when massive stars exhaust the fuel needed to sustain nuclear fusion, cutting off the energy that had been supporting them against the inward pull of their own gravity. As the outer layers of these dying stars are blown away, the stellar cores, with masses between one and two times that of the sun, collapse into objects only around 12 miles (20 kilometers) across.

There's an unfortunate irony in cell therapy that holds it back from its full potential: regenerating tissue often must be damaged to determine whether a treatment is working, for example by surgically removing tissue to see if rejuvenation is occurring beneath.

The alternative isn't much better: patients can choose to wait and see if their health improves, but after weeks of uncertainty, they might find that no healing has taken place, with no clear explanation as to why.

Jinhwan Kim, a new assistant professor of biomedical engineering at the University of California, Davis, who holds a joint appointment with the Department of Surgery at UC Davis Health, wants to change all of that. In his research program, he combines nanotechnology and novel bioimaging techniques to provide non-invasive, real-time monitoring of cellular function and health.