
A new computational simulator can help predict whether changes to materials or design will improve performance in new photovoltaic cells.

In the ongoing race to develop ever-better materials and configurations for solar cells, there are many variables that can be adjusted to try to improve performance, including material type, thickness, and geometric arrangement. Developing new solar cells has generally been a tedious process of making small changes to one of these parameters at a time. While computational simulators have made it possible to evaluate such changes without having to actually build each new variation for testing, the process remains slow.

Now, researchers at MIT and Google Brain have developed a system that makes it possible not just to evaluate one proposed design at a time, but also to indicate which changes will provide the desired improvements. This could greatly accelerate the discovery of new, improved configurations.
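If a simulator can also report how its predicted output shifts as each input parameter varies, a designer gets a direction to move in rather than just a score for a single design. As a minimal sketch of that idea (the toy efficiency model below is an illustrative assumption, not the researchers’ actual physics), a finite-difference gradient tells us which way to nudge each parameter:

    # Toy sketch: a made-up "efficiency model" plus a finite-difference
    # gradient that reports which parameter changes improve the output.
    # Assumption: toy_efficiency stands in for a real physics simulation.
    def toy_efficiency(thickness_nm, bandgap_ev):
        absorption = 1.0 - 2.718 ** (-thickness_nm / 500.0)  # thicker absorbs more
        voltage_penalty = (bandgap_ev - 1.34) ** 2           # penalty away from optimum
        return absorption * (0.33 - 0.5 * voltage_penalty)

    def gradient(f, params, eps=1e-4):
        """Central differences: the sign of each entry says which way to move."""
        grads = []
        for i in range(len(params)):
            up = list(params); up[i] += eps
            dn = list(params); dn[i] -= eps
            grads.append((f(*up) - f(*dn)) / (2 * eps))
        return grads

    # d(efficiency)/d(thickness), d(efficiency)/d(bandgap) at a starting design
    print(gradient(toy_efficiency, [400.0, 1.5]))

A positive entry means increasing that parameter should improve the predicted performance, which is exactly the kind of per-parameter guidance that removes the change-one-thing-at-a-time bottleneck.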

While they wrestle with the immediate danger posed by hackers today, US government officials are preparing for another, longer-term threat: attackers who are collecting sensitive, encrypted data now in the hope that they’ll be able to unlock it at some point in the future.

The threat comes from quantum computers, which work very differently from the classical computers we use today. Instead of the traditional bits made of 1s and 0s, they use quantum bits that can represent different values at the same time. The complexity of quantum computers could make them much faster at certain tasks, allowing them to solve problems that remain practically impossible for modern machines—including breaking many of the encryption algorithms currently used to protect sensitive data such as personal, trade, and state secrets.
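The canonical target is RSA, whose security rests on how hard it is to factor a large number. The toy example below (deliberately tiny, insecure numbers, chosen only for illustration) shows why: anyone who can factor the public modulus can recompute the private key, and factoring is precisely the step Shor’s algorithm would make fast on a sufficiently large quantum computer:

    # Toy RSA with tiny numbers (illustration only; real keys are enormous).
    p, q = 61, 53                    # secret primes
    n = p * q                        # public modulus: 3233
    phi = (p - 1) * (q - 1)          # secret value; computing it requires p and q
    e = 17                           # public exponent
    d = pow(e, -1, phi)              # private exponent, derived from phi

    msg = 42
    cipher = pow(msg, e, n)          # anyone can encrypt with the public key (e, n)
    assert pow(cipher, d, n) == msg  # only the private key decrypts

    # An attacker who factors n = 3233 back into 61 * 53 recovers phi and d.
    # That is why encrypted data harvested today could be unlocked later.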

While quantum computers are still in their infancy, incredibly expensive and fraught with problems, officials say efforts to protect the country from this long-term danger need to begin right now.

Researchers at Lawrence Berkeley National Laboratory’s Advanced Quantum Testbed (AQT) demonstrated that an experimental method known as randomized compiling (RC) can dramatically reduce error rates in quantum algorithms and lead to more accurate and stable quantum computations. No longer just a theoretical concept for quantum computing, RC has now been validated in practice; the multidisciplinary team’s breakthrough experimental results are published in Physical Review X.

The experiments at AQT were performed on a four-qubit superconducting quantum processor. The researchers demonstrated that RC can suppress one of the most severe types of errors in quantum computers: coherent errors.

Akel Hashim, an AQT researcher involved in the experimental breakthrough and a graduate student at the University of California, Berkeley, explained: “We can perform quantum computations in this era of noisy intermediate-scale quantum (NISQ) computing, but these are very noisy, prone to errors from many different sources, and don’t last very long due to the decoherence—that is, information loss—of our qubits.”
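The core trick of randomized compiling is to wrap each noisy operation in random Pauli gates chosen so that the ideal circuit is unchanged; averaged over many random choices, a coherent error is converted into tamer stochastic noise. The single-qubit numerical sketch below illustrates the principle (it is not the AQT team’s actual protocol): a coherent over-rotation becomes a purely stochastic Pauli channel after twirling.

    # Single-qubit Pauli-twirling sketch (illustrative; assumes ideal Paulis).
    import numpy as np

    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = [I, X, Y, Z]

    theta = 0.2                                             # coherent over-rotation
    E = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * Z  # error unitary Rz(theta)

    def ptm(channel):
        """Pauli transfer matrix R_ij = (1/2) Tr[P_i * channel(P_j)]."""
        return np.real([[0.5 * np.trace(Pi @ channel(Pj)) for Pj in paulis]
                        for Pi in paulis])

    bare = lambda rho: E @ rho @ E.conj().T
    twirled = lambda rho: sum(P @ E @ P @ rho @ P @ E.conj().T @ P
                              for P in paulis) / 4          # average over Pauli frames

    np.set_printoptions(precision=3, suppress=True)
    print(ptm(bare))     # off-diagonal entries: the error coherently mixes X and Y
    print(ptm(twirled))  # diagonal only: the error now acts as stochastic Pauli noise

Stochastic errors accumulate far more gently over a deep circuit than coherent ones, which is why twirling yields more accurate and stable computations.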

It is the highest-resolution sensor of its type ever made.

Canon has developed an image sensor that is capable of capturing high-quality color photography even in the dark. The company says the sensor will be able to shoot clear photos even in situations where nothing is visible to the naked eye.

In a report from Nikkei, Canon says that it has developed a new type of light-receiving element called a single photon avalanche diode (SPAD) and is implementing it on a CMOS sensor. SPAD photodetector technology on its own isn’t new, and has been in use since the 1970s. However, Canon has managed to create a sensor with 3.2 million pixels, which it says is more than three times the resolution of conventional SPADs and makes it the highest-resolution sensor of its type ever made.

The sensor is designed to replace, or at least provide an alternative to, infrared night-vision cameras. Infrared is useful for recognizing shapes and providing sight in the dark, but it cannot distinguish colors. On the flip side, cameras that can see color in the dark do so only by leveraging high ISOs, which works up to a point but eventually produces extremely noisy images in extreme darkness.
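The difference is easiest to see numerically. In the toy comparison below (all numbers are assumptions for illustration), a photon-counting pixel simply reports how many photons arrived, while a conventional pixel amplifies a tiny analog signal together with its read noise, which is the source of the familiar high-ISO grain:

    # Toy low-light pixel comparison (assumed numbers, illustration only).
    import random
    random.seed(1)

    PHOTONS_MEAN = 3.0   # very dark scene: a few photons per pixel per frame
    READ_NOISE = 2.0     # electrons of read noise in the conventional pixel
    ISO_GAIN = 16.0      # analog amplification applied at high ISO

    def poisson(lam):
        """Knuth's method, to keep the sketch dependency-free."""
        limit, k, p = 2.718281828 ** -lam, 0, 1.0
        while p > limit:
            k += 1
            p *= random.random()
        return k - 1

    photons = poisson(PHOTONS_MEAN)
    spad = photons                      # a SPAD pixel counts photons directly
    conventional = ISO_GAIN * (photons + random.gauss(0, READ_NOISE))

    print(f"photons: {photons}  SPAD: {spad}  high-ISO pixel: {conventional:.1f}")

Because a SPAD registers each photon as a discrete avalanche event, there is essentially no read noise to amplify, which is what lets it keep producing usable color images as the light fades.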

Rare earth elements are essential for many of our modern-day technologies. They are used in rechargeable batteries, phones, fiber optics, wind turbines, televisions, DVD players, and many other devices.

Some countries control the majority of the supply and use this as a means to pressure other countries.

It’s expected to become one of the issues at stake in the ongoing trade war between the US and China. Rare earth elements are crucial to manufacturing phones, computers, and wind turbines. China currently controls 90 percent of their production, but the US is also determined to extract these precious minerals and has just reopened a rare earth elements mine in California. Our France 2 colleagues report, with FRANCE 24’s James Vasina.

Microsoft has announced a new DirectX 12 API for Windows that offers apps a new way to efficiently encode video using the GPU.

The Video Encode API is available to third-party apps, is native to Windows 11, and can efficiently encode video in the H.264 and HEVC formats.

Microsoft says a considerable number of configurable parameters are exposed by this API, letting users tweak different aspects of the encoding process to best fit their scenarios: a custom slice-partitioning scheme; active (i.e., CBR, VBR, QVBR) and passive (absolute/delta custom QP maps) rate-control configuration modes; custom codec encoding tools; custom codec block and transform sizes; a motion-vector precision limit; explicit use of intra-refresh sessions; dynamic reconfiguration of video stream resolution, rate control, and slice partitioning; and more.
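As a conceptual illustration of two of the rate-control modes named above, the toy model below (made-up frame complexities; not actual D3D12 calls) contrasts CBR, which spends the same bit budget on every frame, with VBR, which shifts bits toward harder frames while hitting the same average:

    # Toy rate-control comparison (illustration only; not the D3D12 API).
    complexities = [1, 1, 5, 1, 8, 1, 1, 2]   # relative difficulty per frame
    budget = 100                              # target average bits per frame

    cbr = [budget] * len(complexities)        # constant bitrate: flat allocation
    total = budget * len(complexities)
    vbr = [round(total * c / sum(complexities)) for c in complexities]

    print("CBR:", cbr)   # every frame gets 100 bits, easy or hard
    print("VBR:", vbr)   # hard frames get more bits, same overall total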

The future of computing may be analog.

The design of our everyday computers is good for reading email and gaming, but today’s problem-solving computers work with vast amounts of data. Because conventional machines store that data in one place and process it in another, constantly shuttling it back and forth creates performance bottlenecks.

The next computer revolution might be a new kind of hardware, called processing-in-memory (PIM), an emerging computing paradigm that merges the memory and processing unit and does its computations using the physical properties of the machine—no 1s or 0s needed to do the processing digitally.
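One common flavor of PIM makes the idea concrete: store a matrix of weights as conductances in a memory crossbar, apply the inputs as voltages, and let Ohm’s and Kirchhoff’s laws produce the output currents, so a matrix-vector multiply happens inside the memory array itself. The sketch below simulates that analog computation numerically (the values are illustrative assumptions):

    # Toy analog-crossbar sketch: the product G @ V happens "in" the memory.
    import numpy as np

    G = np.array([[0.2, 0.5, 0.1],   # conductances programmed into memory cells
                  [0.3, 0.1, 0.4]])  # (each cell stores one weight)
    V = np.array([1.0, 0.5, 2.0])    # input voltages applied to the lines

    currents = G @ V                 # currents sum on each output line
    print("output currents (the matrix-vector product):", currents)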