
A joint effort in chemistry has resulted in an innovative method for utilizing carbon dioxide in a positive – even beneficial – manner: through electrosynthesis, it is integrated into a series of organic molecules that play a crucial role in the development of pharmaceuticals.

During the process, the team made an innovative discovery. By altering the type of electrochemical reactor used, they were able to generate two distinct products, both of which are useful in medicinal chemistry.

The team’s paper was recently published in the journal Nature. The paper’s co-lead authors are postdoctoral researchers Peng Yu and Wen Zhang, and Guo-Quan Sun of Sichuan University in China.

Just in case people are curious about how accurate the news is, the following article says "Nvidia, AMD, and TSMC will still bear the bulk of the risk for establishing manufacturing within the United States." In reality, neither Nvidia nor AMD manufactures chips; of the three, only TSMC is a chip manufacturer.


The U.S. Secretary of Commerce reminds investors that the federal government supports a sweeping shift in how and where chips are made.

While tunneling reactions are remarkably hard to predict, a group of researchers has experimentally observed such an effect, marking a breakthrough in the field of quantum chemistry.

Tunnel Effect

Predicting tunnel effects is very difficult. A quantum-mechanically exact description of a chemical reaction involving more than three particles is already hard, and with more than four particles it becomes nearly impossible. To simulate such reactions, scientists therefore fall back on classical physics and set the quantum effects aside. However, EurekAlert reports that this classical description of chemical reactions has a limit. What, then, is that limit?
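For context (a standard textbook result, not a claim from the study itself), the one-dimensional semiclassical (WKB) estimate of the probability of tunneling through a potential barrier V(x) at energy E illustrates why the quantum contribution cannot simply be neglected:

T \approx \exp\!\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\,[V(x)-E]}\;dx\right)

where x_1 and x_2 are the classical turning points and m is the tunneling particle's mass. A real reaction requires solving the many-dimensional analogue of this problem, which is why exact treatments beyond three or four particles become intractable.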

An innovative nuclear fusion technology that uses no radioactive materials, and is calculated to be capable of "powering the planet for more than 100,000 years," has been successfully piloted by a US-Japanese team of researchers.

California-based TAE Technologies, working with Japan's National Institute for Fusion Science (NIFS), has completed the first tests of a hydrogen-boron fuel cycle in magnetically confined plasma, which could generate cleaner, lower-cost energy than that produced by the more common deuterium-tritium (D-T) fusion process.
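For reference (a well-established nuclear physics fact, not a result from these tests), the proton-boron reaction is aneutronic: its products are charged helium nuclei rather than neutrons, which is why the fuel cycle avoids the neutron-activated radioactive materials associated with D-T fusion:

\mathrm{p} + {}^{11}\mathrm{B} \;\rightarrow\; 3\,{}^{4}\mathrm{He} + \sim 8.7\ \mathrm{MeV}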

“This experiment offers us a wealth of data to work with and shows that hydrogen-boron has a place in utility-scale fusion power. We know we can solve the physics challenge at hand and deliver a transformational new form of carbon-free energy to the world that relies on this non-radioactive, abundant fuel,” said Michl Binderbauer, CEO of TAE Technologies.

Haptic holography promises to bring virtual reality to life, but a new study reveals a surprising physical obstacle that will need to be overcome.

A research team at UC Santa Barbara has discovered a new phenomenon that underlies emerging holographic haptic displays, and could lead to the creation of more compelling virtual reality experiences. The team’s findings are published in the journal Science Advances.

Holographic haptic displays use phased arrays of ultrasound emitters to focus ultrasound in the air, allowing users to touch, feel and manipulate three-dimensional virtual objects in mid-air using their bare hands, without the need for a physical device or interface. While these displays hold great promise for use in various application areas, including augmented reality, virtual reality and telepresence, the tactile sensations they currently provide are diffuse and faint, feeling like a “breeze” or “puff of air.”
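To make the phased-array idea concrete, here is a minimal sketch (not the UC Santa Barbara team's code) of how per-emitter phase delays can be computed so that ultrasound waves from every emitter arrive in phase at a chosen point in mid-air, producing a focused acoustic spot. The array geometry, 40 kHz frequency, and focal point are illustrative assumptions.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40_000.0     # Hz, a common choice for airborne ultrasound arrays (assumption)
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

# Hypothetical 16 x 16 grid of emitters, spaced half a wavelength apart, in the z = 0 plane.
pitch = WAVELENGTH / 2
xs, ys = np.meshgrid(np.arange(16) * pitch, np.arange(16) * pitch)
emitters = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)

def focus_phases(focal_point):
    """Phase offset (radians) for each emitter so all waves arrive in phase
    at focal_point, steering the acoustic focus to that location."""
    distances = np.linalg.norm(emitters - np.asarray(focal_point), axis=1)
    # Emitters farther from the focus need a larger negative phase (they fire earlier).
    return (-2 * np.pi * distances / WAVELENGTH) % (2 * np.pi)

# Example: focus 15 cm above the center of the array.
center = emitters.mean(axis=0)
phases = focus_phases([center[0], center[1], 0.15])
print(phases[:5])

In an actual display, these phase offsets drive the emitters continuously and are updated as the focal point moves, which is what lets users feel a localized pressure point on their bare hands.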

On Monday, researchers from Microsoft introduced Kosmos-1, a multimodal model that can reportedly analyze images for content, solve visual puzzles, perform visual text recognition, pass visual IQ tests, and understand natural language instructions. The researchers believe multimodal AI—which integrates different modes of input such as text, audio, images, and video—is a key step to building artificial general intelligence (AGI) that can perform general tasks at the level of a human.

Visual examples from the Kosmos-1 paper show the model analyzing images and answering questions about them, reading text from an image, writing captions for images, and taking a visual IQ test with 22–26 percent accuracy (more on that below).