Researchers at the University of California, Los Angeles (UCLA), in collaboration with UC Berkeley, have developed a new type of intelligent image sensor that can perform machine-learning inference during the act of photodetection itself.
Reported in Science, the breakthrough redefines how spectral imaging, machine vision and AI can be integrated within a single semiconductor device.
Traditionally, spectral cameras capture a dense stack of images, one per wavelength, and then transfer this large dataset to digital processors for computation and scene analysis. This workflow, while powerful, creates a severe bottleneck: the hardware must move and process massive amounts of data, which limits speed, power efficiency, and the achievable spatial–spectral resolution.
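To get a feel for the scale of that bottleneck, the following back-of-the-envelope sketch compares the raw data a conventional spectral camera must move off-chip against the compact output of in-sensor inference. All figures here (resolution, band count, bit depth, frame rate) are illustrative assumptions, not values from the paper:

```python
# Illustrative comparison of off-chip data volume for a conventional
# spectral camera versus a per-pixel inference result.
# All parameters below are assumed for illustration only.

WIDTH, HEIGHT = 1024, 1024   # assumed spatial resolution (pixels)
BANDS = 300                  # assumed number of spectral bands
BIT_DEPTH = 16               # assumed bits per raw sample
FPS = 30                     # assumed frame rate

def cube_bytes_per_frame(width, height, bands, bit_depth):
    """Size in bytes of one raw spatial-spectral data cube."""
    return width * height * bands * bit_depth // 8

def inference_bytes_per_frame(width, height, label_bits=8):
    """Size in bytes of a per-pixel classification map (one label per pixel)."""
    return width * height * label_bits // 8

raw = cube_bytes_per_frame(WIDTH, HEIGHT, BANDS, BIT_DEPTH)
out = inference_bytes_per_frame(WIDTH, HEIGHT)

print(f"Raw cube per frame: {raw / 1e6:.0f} MB")      # ~629 MB
print(f"Raw data rate:      {raw * FPS / 1e9:.1f} GB/s")
print(f"Inference output:   {out / 1e6:.1f} MB per frame")
print(f"Reduction factor:   {raw // out}x")
```

Even with these modest assumed numbers, the raw cube approaches a gigabyte per frame and tens of gigabytes per second at video rates, while a per-pixel label map is hundreds of times smaller, which is why computing during photodetection sidesteps the transfer bottleneck.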