A traditional digital camera splits an image into three channels—red, green and blue—mirroring how the human eye perceives color. But those are just three discrete points along a continuous spectrum of wavelengths. Specialized “spectral” cameras go further by sequentially capturing dozens, or even hundreds, of narrow bands across that spectrum.
This process is slow, however, meaning that hyperspectral cameras can only take still images or videos at very low frame rates (measured in frames per second, or fps). But what if a high-fps video camera could capture dozens of wavelengths at once, revealing details invisible to the naked eye?
Now, researchers at the University of Utah’s John and Marcia Price College of Engineering have developed a new way of taking a high-definition snapshot that encodes spectral data into images, much like a traditional camera encodes color. Instead of a filter that divides light into three color channels, their specialized filter divides it into 25. Each pixel stores compressed spectral information along with its spatial information, which computer algorithms can later reconstruct into a “cube” of 25 separate images—each representing a distinct slice of the visible spectrum.
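The basic idea of a mosaic spectral filter can be sketched in a few lines of code. The snippet below is a simplified, hypothetical illustration, not the researchers' actual method: it assumes an idealized sensor whose repeating 5×5 filter tile assigns each pixel one of 25 bands, and rearranges a single frame into a 25-band cube. The real system compresses spectral data at each pixel and relies on computational reconstruction to recover full-resolution bands.

```python
import numpy as np

TILE = 5            # repeating 5x5 filter tile
BANDS = TILE * TILE  # 25 spectral channels

def mosaic_to_cube(frame: np.ndarray) -> np.ndarray:
    """Rearrange a (H, W) mosaic frame into a (25, H//5, W//5) cube.

    Each output band collects the pixels sitting at one position of the
    repeating 5x5 filter tile. This is a naive rearrangement at reduced
    spatial resolution; real systems reconstruct full resolution
    computationally.
    """
    h, w = frame.shape
    assert h % TILE == 0 and w % TILE == 0, "frame must tile evenly"
    cube = np.empty((BANDS, h // TILE, w // TILE), dtype=frame.dtype)
    for i in range(TILE):
        for j in range(TILE):
            # band (i, j) of the tile -> channel i*5 + j of the cube
            cube[i * TILE + j] = frame[i::TILE, j::TILE]
    return cube

# Toy usage: a 10x10 frame yields 25 bands, each 2x2 pixels.
frame = np.arange(100).reshape(10, 10)
cube = mosaic_to_cube(frame)
print(cube.shape)  # (25, 2, 2)
```

Each "slice" of the resulting cube is a monochrome image of the scene at one band, which is what makes spectral cubes useful for telling apart materials that look identical in ordinary RGB.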