
Intel & NVIDIA Announce Strategic Collaboration

This collaboration will integrate NVIDIA’s AI and accelerated computing technologies with Intel’s CPUs and x86 ecosystem, using NVIDIA NVLink for seamless connectivity.

For data centers, Intel will build NVIDIA-custom x86 CPUs that NVIDIA will integrate into its AI infrastructure platforms and offer to the market. The spotlight, however, is on personal computing: Intel will build and sell x86 system-on-chips (SoCs) that integrate NVIDIA RTX GPU chiplets. These new x86 RTX SoCs are designed to “power a wide range of PCs that demand integration of world-class CPUs and GPUs.”

Are tech monopolies just becoming even bigger monopolies with less competition? It’s worth noting that Intel has done something similar before with AMD: the resulting Kaby Lake-G chips are now largely viewed as a failure. Still, it’s hard to imagine this happening if Intel weren’t facing serious challenges. NVIDIA’s $5 billion investment in Intel, through common stock purchased at $23.28 per share, goes beyond a mere hint.

AI model offers accurate and explainable insights to support autism assessment

Scientists have developed and tested a deep-learning model that could support clinicians by providing accurate results and clear, explainable insights—including a model-estimated probability score for autism.

The model, outlined in a study published in eClinicalMedicine, was used to analyze resting-state fMRI data, a non-invasive measure that indirectly reflects brain activity via blood-oxygenation changes.

In doing so, the model achieved up to 98% cross-validated accuracy for Autism Spectrum Disorder (ASD) and neurotypical classification and produced clear, explainable maps of the brain regions most influential to its decisions.
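Cross-validated accuracy means the model is always scored on data held out from the training folds. A minimal k-fold sketch of that evaluation scheme (illustrative only; the study used fMRI data and a deep network, not this toy one-feature threshold classifier):

```python
from statistics import mean

def k_fold_accuracy(data, labels, fit, predict, k=5):
    n = len(data)
    folds = [list(range(i, n, k)) for i in range(k)]   # interleaved folds
    scores = []
    for held_out in folds:
        train_idx = [i for i in range(n) if i not in held_out]
        model = fit([data[i] for i in train_idx], [labels[i] for i in train_idx])
        preds = [predict(model, data[i]) for i in held_out]
        correct = sum(p == labels[i] for p, i in zip(preds, held_out))
        scores.append(correct / len(held_out))
    return mean(scores)                                 # average held-out accuracy

# Toy "model": learn the mean of the training feature as a threshold
fit = lambda xs, ys: sum(xs) / len(xs)
predict = lambda thr, x: int(x > thr)

data = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1, 0.15, 0.95, 0.25, 1.05]
labels = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]
print(k_fold_accuracy(data, labels, fit, predict))      # 1.0 on this separable toy data
```

The point of the scheme is that every score in the average comes from samples the model never saw during fitting, which is why a cross-validated 98% is a stronger claim than training accuracy.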

New light-powered gears fit inside a strand of hair

Researchers at the University of Gothenburg have made light-powered gears on a micrometer scale. This paves the way for the smallest on-chip motors in history, which can fit inside a strand of hair. The research is published in the journal Nature Communications.

Gears are everywhere, from clocks and cars to robots and wind turbines. For more than 30 years, researchers have been trying to create ever-smaller gears in order to construct micro-engines. But progress stalled at 0.1 millimeters: below that size, it was not possible to build the drive trains needed to set the gears in motion.

Researchers from Gothenburg University, among others, have now broken through this barrier by ditching traditional mechanical drive trains and instead using light to set the gears in motion directly.

The AI model that teaches itself to think through problems, no humans required

Artificial intelligence is getting smarter every day, but it still has its limits. One of the biggest challenges has been teaching advanced AI models to reason, which means solving problems step by step. But in a new paper published in the journal Nature, the team from DeepSeek AI, a Chinese artificial intelligence company, reports that they were able to teach their R1 model to reason on its own without human input.

When many of us try to solve a problem, we typically don’t get the answer straight away. We follow a methodical process that may involve gathering information and taking notes until we get to a solution. Traditionally, training AI models to reason has involved copying our approach. However, it is a long, drawn-out process where people show an AI model countless examples of how to work through a problem. It also means that AI is only as good as the examples it is given and can pick up on human biases.

Instead of showing the R1 model every step, researchers at DeepSeek AI used a technique called reinforcement learning. This trial-and-error approach, using rewards for correct answers, encouraged the model to reason for itself.
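The trial-and-error loop can be sketched in miniature. This is a hypothetical toy, not DeepSeek's training code: the "model" is just a preference over candidate final answers, it samples one, receives a reward only when the answer is correct, and nudges its policy toward rewarded choices with no human demonstrations involved.

```python
import math
import random

def train(candidates, correct, steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    prefs = {c: 0.0 for c in candidates}           # policy logits per answer
    for _ in range(steps):
        total = sum(math.exp(p) for p in prefs.values())
        probs = {c: math.exp(p) / total for c, p in prefs.items()}
        choice = rng.choices(list(probs), weights=list(probs.values()))[0]
        reward = 1.0 if choice == correct else 0.0  # outcome-only reward signal
        # Policy-gradient-style update: reinforce the choice when rewarded
        for c in prefs:
            grad = (1.0 if c == choice else 0.0) - probs[c]
            prefs[c] += lr * reward * grad
    return max(prefs, key=prefs.get)

print(train(["4", "5", "22"], correct="4"))         # the rewarded answer wins out
```

Because the update fires only on correct outcomes, the policy drifts toward whatever process produces right answers, which is the core idea scaled up enormously in outcome-based reinforcement learning for reasoning models.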

Magnetic tunnel junctions mimic synapse behavior for energy-efficient neuromorphic computing

The rapid development of artificial intelligence (AI) poses challenges to today’s computer technology. Conventional silicon processors are reaching their limits: they consume large amounts of energy, their storage and processing units are physically separated, and the data transmission between them slows down complex applications.

As the size of AI models is constantly increasing and they are having to process huge amounts of data, the need for new computing architectures is rising. In addition to quantum computers, focus is shifting in particular to neuromorphic concepts. These systems are modeled on the way the human brain works.

This is where the research of a team led by Dr. Tahereh Sadat Parvini and Prof. Dr. Markus Münzenberg from the University of Greifswald and colleagues from Portugal, Denmark and Germany began. They have found an innovative way to make computers of tomorrow significantly more energy-efficient. Their research centers around so-called magnetic tunnel junctions (MTJs), tiny components on the nanometer scale.

‘Quantum squeezing’ a nanoscale particle for the first time

Researchers Mitsuyoshi Kamba, Naoki Hara, and Kiyotaka Aikawa of the University of Tokyo have successfully demonstrated quantum squeezing of the motion of a nanoscale particle: a state in which the motional uncertainty is smaller than the quantum-mechanical zero-point fluctuations.

As enhancing the measurement precision of sensors is vital in many modern technologies, the achievement paves the way not only for basic research in fundamental physics but also for applications such as accurate autonomous driving and navigation without a GPS signal. The findings are published in the journal Science.

The physical world at the macroscale, from everyday objects to planets, is governed by the laws of classical mechanics discovered by Newton in the 17th century. The physical world at the microscale, atoms and below, is governed by the laws of quantum mechanics, which lead to phenomena generally not observed at the macroscale.
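In standard notation, the quantum limit the article refers to, and what "squeezing" below it means, can be written as follows (a textbook formulation, not the paper's specific expressions):

```latex
% Heisenberg uncertainty relation for position x and momentum p
\Delta x \, \Delta p \geq \frac{\hbar}{2}

% Zero-point (ground-state) position spread of a harmonically trapped
% particle of mass m and trap frequency \omega
\Delta x_{\mathrm{zp}} = \sqrt{\frac{\hbar}{2 m \omega}}

% Squeezing: one quadrature drops below the zero-point level while the
% conjugate one grows, so the product still satisfies the bound
\Delta x < \Delta x_{\mathrm{zp}}
\quad \Rightarrow \quad
\Delta p > \Delta p_{\mathrm{zp}}
```

A squeezed state does not violate Heisenberg's bound; it trades extra uncertainty in one variable for reduced uncertainty in the other, which is exactly what makes it useful for precision sensing.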

Advanced AI links atomic structure to quantum tech

A research team led by Oak Ridge National Laboratory has developed a new method to uncover the atomic origins of unusual material behavior. This approach uses Bayesian deep learning, a form of artificial intelligence that combines probability theory and neural networks to analyze complex datasets with exceptional efficiency.

The technique reduces the amount of time needed for experiments. It helps researchers explore sample regions widely and rapidly converge on important features that exhibit interesting properties.

“This method makes it possible to study a material’s properties with much greater efficiency,” said ORNL’s Ganesh Narasimha. “Usually, we would need to scan a large region, and then several small regions, and perform spectroscopy, which is very time-consuming. Here, the AI algorithm takes control and does this process automatically and intelligently.”
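The workflow Narasimha describes, where the algorithm itself decides which region to measure next, is essentially uncertainty-guided active learning. A minimal sketch under stated assumptions (an ensemble stands in for the Bayesian deep network, and a toy surrogate function stands in for real scanning-probe data):

```python
import random
from statistics import pvariance

def acquire(candidates, ensemble):
    # Pick the candidate location where the ensemble members disagree most,
    # i.e., where predictive uncertainty is highest: that is where the next
    # slow spectroscopy measurement would be most informative.
    return max(candidates, key=lambda x: pvariance([m(x) for m in ensemble]))

random.seed(1)
# Eight "posterior samples" of a simple surrogate model y = a * x^2
ensemble = [lambda x, a=random.uniform(0.5, 1.5): a * x * x for _ in range(8)]
grid = [i / 10 for i in range(11)]       # candidate positions on the sample

print(acquire(grid, ensemble))           # variance of a*x^2 grows with x
```

Each new measurement would then be fed back to retrain the model, shrinking uncertainty where data now exists and steering the microscope toward the remaining unexplored features.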

Light-Powered AI Chips: The Photonic Revolution That’s About to Change Everything

The future of artificial intelligence (AI) may be revolutionized by photonic AI chips that use light instead of electricity to process information, enabling faster, more efficient, and heat-free computing.

Questions to inspire discussion

Photonic AI Technology

🔬 Q: What makes photonic AI chips more efficient than current AI chips?
A: Photonic AI chips are 100x more energy efficient and produce virtually zero heat compared to electronic chips, as they use light instead of electrons for computation.

🌈 Q: How do photonic chips encode information differently?
A: Photonic chips can encode information simultaneously in wavelength, amplitude, and phase by bouncing light off mirrors and optical devices, replacing traditional electronic processors.
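One way to picture that claim: model each wavelength channel as a complex number whose magnitude is the light's amplitude and whose argument is its phase. This is purely illustrative; real photonic processors manipulate physical optical fields, and the channel wavelengths here are just common telecom bands chosen for the example.

```python
import cmath

# Each wavelength channel (key, in meters) carries an amplitude and a
# phase packed into one complex number: three degrees of freedom at once.
signal = {
    1550e-9: cmath.rect(0.8, cmath.pi / 4),    # C-band channel
    1310e-9: cmath.rect(0.3, -cmath.pi / 2),   # O-band channel
}

for wavelength, field in signal.items():
    amp, phase = abs(field), cmath.phase(field)
    print(f"{wavelength * 1e9:.0f} nm: amplitude={amp:.2f}, phase={phase:+.2f} rad")
```

Operations such as interference then act on all channels in parallel, which is where the claimed parallelism and efficiency of optical computation come from.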
