
Reinforcement learning accelerates model-free training of optical AI systems

Optical computing has emerged as a powerful approach for high-speed and energy-efficient information processing. Diffractive optical networks, in particular, enable large-scale parallel computation through the use of passive structured phase masks and the propagation of light. However, one major challenge remains: systems trained in model-based simulations often fail to perform optimally in real experimental settings, where misalignments, noise, and model inaccuracies are difficult to capture.

In a new paper published in Light: Science & Applications, researchers at the University of California, Los Angeles (UCLA) introduce a model-free in situ training framework for diffractive optical processors, driven by proximal policy optimization (PPO), a reinforcement learning algorithm known for its stability and sample efficiency. Rather than relying on a digital twin or an approximate physical model, the system learns directly from real optical measurements, optimizing its diffractive features on the hardware itself.

“Instead of trying to simulate complex optical behavior perfectly, we allow the device to learn from experience or experiments,” said Aydogan Ozcan, Chancellor’s Professor of Electrical and Computer Engineering at UCLA and the corresponding author of the study. “PPO makes this in situ process fast, stable, and scalable to realistic experimental conditions.”
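The article does not reproduce the training loop, but the pattern PPO implies is compact: treat the phase mask as the mean of a Gaussian policy, score sampled masks with real measurements, and take a few clipped-surrogate ascent steps per batch. The sketch below is a minimal illustration under those assumptions; the `measure_reward` stand-in, pixel count, and hyperparameters are hypothetical, not the authors' implementation.

```python
import numpy as np

# Hypothetical hardware interface: a stand-in for the real optical setup.
def measure_reward(phases: np.ndarray) -> float:
    """Upload a phase mask, run test inputs through the optics, and
    return a scalar score. Simulated here with a toy objective."""
    target = np.linspace(0, np.pi, phases.size)
    return -float(np.mean((phases - target) ** 2))

N_PIXELS, SIGMA, CLIP, LR = 64, 0.1, 0.2, 0.005
EPOCHS, BATCH, PPO_STEPS = 300, 16, 4

mu = np.zeros(N_PIXELS)  # policy mean = the current phase mask

for epoch in range(EPOCHS):
    # Sample candidate masks around the current mean; score each on hardware
    actions = mu + SIGMA * np.random.randn(BATCH, N_PIXELS)
    rewards = np.array([measure_reward(a) for a in actions])
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    logp_old = -0.5 * np.sum((actions - mu) ** 2, axis=1) / SIGMA**2

    # Several clipped-surrogate ascent steps reuse the same measurements
    for _ in range(PPO_STEPS):
        logp = -0.5 * np.sum((actions - mu) ** 2, axis=1) / SIGMA**2
        ratio = np.exp(np.clip(logp - logp_old, -10, 10))  # overflow guard
        surr_clipped = np.clip(ratio, 1 - CLIP, 1 + CLIP) * adv
        active = ratio * adv <= surr_clipped     # min() picks the unclipped term
        grad_logp = (actions - mu) / SIGMA**2    # d(log pi)/d(mu) for a Gaussian
        grad = np.mean((active * ratio * adv)[:, None] * grad_logp, axis=0)
        mu += LR * grad                          # gradient ascent on the mask

    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  mean reward {rewards.mean():.4f}")
```

Reusing one batch of measurements for several conservative updates is where the sample efficiency comes from: each call to `measure_reward` would be a physical experiment, so extracting multiple gradient steps per batch directly reduces hardware time.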

Optical system uses diffractive processors to achieve large-scale nonlinear computation

Researchers at the University of California, Los Angeles (UCLA) have developed an optical computing framework that performs large-scale nonlinear computations using linear materials.

Reported in eLight, the study demonstrates that diffractive optical processors (thin, passive material structures composed of phase-only layers) can compute numerous nonlinear functions simultaneously, at high speed and with a parallelism and spatial density bounded only by the diffraction limit of light.

Nonlinear operations underpin nearly all modern information-processing tasks, from pattern recognition to general-purpose computing. Yet implementing such operations optically has remained a challenge, as most optical nonlinearities are weak, power-hungry, or slow.
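To make the architecture concrete: a diffractive layer applies a pixel-wise phase shift to the incoming field, free-space propagation between layers is itself linear, and the detector reads out intensity. A toy single-layer forward pass, simulated with the standard angular spectrum method, might look like the following; the wavelength, pixel pitch, and propagation distance are illustrative values, not those used in the study.

```python
import numpy as np

def angular_spectrum(field: np.ndarray, dx: float, wl: float, z: float) -> np.ndarray:
    """Propagate a complex scalar field a distance z (angular spectrum method)."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wl**2 - FX**2 - FY**2              # squared longitudinal frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy forward pass through one phase-only layer (illustrative parameters)
wl, dx, z, n = 750e-9, 1e-6, 1e-3, 128
field = np.ones((n, n), dtype=complex)                # plane-wave input
phase_mask = np.random.uniform(0, 2 * np.pi, (n, n))  # one trainable layer
field = field * np.exp(1j * phase_mask)     # phase modulation: linear in the field
field = angular_spectrum(field, dx, wl, z)  # free-space propagation: also linear
intensity = np.abs(field) ** 2              # detection: quadratic in the field
```

Every operation on the field here is linear in E; any nonlinearity in the computed input-output map has to come from how the data is encoded into the field (for example, into the phase) combined with the quadratic |E|² detection, which is one route by which passive, linear materials can realize nonlinear functions of the encoded inputs.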

AI model forecasts speech development in deaf children after cochlear implants

An AI model using deep transfer learning, a machine learning technique that adapts networks pretrained on large datasets to new tasks, has predicted spoken language outcomes with 92% accuracy one to three years after patients received cochlear implants (implanted electronic hearing devices).

The research is published in the journal JAMA Otolaryngology–Head & Neck Surgery.

Although cochlear implantation is the only effective treatment for improving hearing and enabling spoken language in children with severe to profound hearing loss, spoken language development after early implantation is more variable than in children born with typical hearing. If children likely to have more difficulty with spoken language can be identified before implantation, intensified therapy can be offered earlier to improve their speech.
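The summary does not detail the model's inputs or architecture, so the following is only a generic PyTorch sketch of the transfer-learning pattern itself: start from a network pretrained on a large dataset, freeze its feature extractor, and fit a small task-specific head on the limited clinical data. ResNet-18 and the binary label are arbitrary stand-ins.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse features learned on a large dataset (ImageNet) for a small clinical task
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                  # freeze the pretrained backbone

# Replace the classifier head for a binary outcome (good vs. poor language gains)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch standing in for pre-implant patient data (purely hypothetical)
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```

Freezing the backbone means only the final layer is fit to the new cohort, which is what makes training on small clinical datasets feasible without severe overfitting.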

Cybercriminals Abuse Google Cloud Email Feature in Multi-Stage Phishing Campaign

In response to findings from Check Point, Google has blocked the phishing efforts abusing the email notification feature within Google Cloud Application Integration and says it is taking further steps to prevent misuse.

Check Point’s analysis has revealed that the campaign has primarily targeted the manufacturing, technology, financial, professional services, and retail sectors, although other industry verticals, including media, education, healthcare, energy, government, travel, and transportation, have also been affected.

“These sectors commonly rely on automated notifications, shared documents, and permission-based workflows, making Google-branded alerts especially convincing,” it added. “This campaign highlights how attackers can misuse legitimate cloud automation and workflow features to distribute phishing at scale without traditional spoofing.”

Artificial Intelligence–Based Automated Echocardiographic Analysis and the Workflow of Sonographers: A Randomized Crossover Trial (AI‐Echo RCT)

RCT results: Automated AI analysis led to reduced exam time, increased scan volume, improved image quality, and lower sonographer fatigue. @KagiyamaNobu

The Next Great Transformation: How AI Will Reshape Industries—and Itself

This change will revolutionize leadership, governance, and workforce development. Successful firms will invest in technology and human capital by reskilling personnel, redefining roles, and fostering a culture of human-machine collaboration.

The Imperative of Strategy

Artificial intelligence is not preordained; it is a tool shaped by human choices. How we deploy, regulate, and secure AI will determine its impact on industries, economies, and society. As I emphasized in Inside Cyber, technology convergence, particularly the combination of AI with 5G, IoT, distributed architectures, and ultimately quantum computing, will amplify both the potential and the risks.

The issue at hand is not whether AI will transform industries; it already has. The essential question is whether we can guide this change to enhance security, resilience, and human well-being. Those who engage with AI strategically, ethically, and with a long-term perspective will gain a competitive advantage and help build a more innovative and secure future.

Artificial neurons mimic complex brain abilities for next-generation computing

Researchers have created atomically thin artificial neurons capable of processing both light and electric signals for computing. The material enables the simultaneous existence of separate feedforward and feedback paths within a neural network, boosting the ability to solve complex problems.

For decades, scientists have been investigating how to recreate the versatile computational capabilities of biological neurons to develop faster and more energy-efficient machine learning systems. One promising approach involves memristors: electronic components that store a value by modifying their conductance and then use that value for in-memory processing.

However, a key challenge to replicating the complex processes of biological neurons and brains using memristors has been the difficulty of integrating both feedforward and feedback neuronal signals. These mechanisms, which rely on rewards and errors, underpin our cognitive ability to learn complex tasks.
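The device physics cannot be captured in a few lines, but the functional idea of merging the two paths in one element can be caricatured numerically: the memristor's conductance acts simultaneously as the feedforward weight (an in-memory multiply) and as the state adjusted by a feedback error signal. Everything below is a deliberately toy, hypothetical model, not the reported device:

```python
import numpy as np

g = 0.2       # memristor conductance: the stored, trainable state
target = 0.8  # desired response to a unit input (the feedback/error reference)
lr = 0.3

for step in range(10):
    x = 1.0                                  # feedforward input (light or voltage)
    y = g * x                                # feedforward path: in-memory multiply
    error = target - y                       # feedback path: error sent back
    g = float(np.clip(g + lr * error * x, 0.0, 1.0))  # conductance update = learning
    print(f"step {step}  output {y:.3f}  conductance {g:.3f}")
```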
