
Jan 15, 2022

Computing With Light

Posted in categories: finance, robotics/AI, sustainability, transportation

There are widely cited forecasts projecting that information and communications technology (ICT) energy consumption will keep accelerating through the 2020s, with a 2018 Nature article estimating that, if current trends continue, ICT could consume more than 20% of electricity demand by 2030. At several industry events I have heard talks arguing that the amount of energy consumed will be one of the important limits on data center performance. NVIDIA’s latest GPU solutions use processors that draw more than 400 W, and this could more than double in future AI processor chips. Solutions that can accelerate important compute functions while consuming less energy will be important for more sustainable and economical data centers.
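
To get a rough sense of the scale, here is a back-of-envelope sketch in Python. Only the ~400 W per-chip figure comes from the discussion above; the fleet size, utilization, and the "power doubles" scenario are illustrative assumptions rather than reported numbers.

```python
# Rough sketch of accelerator energy at data-center scale.
# Only the ~400 W per-chip figure comes from the text above; the fleet
# size, utilization, and the doubling scenario are assumptions.

HOURS_PER_YEAR = 24 * 365

def annual_energy_mwh(num_chips: int, watts_each: float, utilization: float) -> float:
    """Annual electrical energy in MWh for a fleet of accelerator chips."""
    return num_chips * watts_each * utilization * HOURS_PER_YEAR / 1e6

fleet = 100_000  # hypothetical chip count for a single large operator
today = annual_energy_mwh(fleet, watts_each=400, utilization=0.6)
doubled = annual_energy_mwh(fleet, watts_each=800, utilization=0.6)

print(f"~{today:,.0f} MWh/yr at 400 W per chip")
print(f"~{doubled:,.0f} MWh/yr if per-chip power doubles")
```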

Lightmatter’s Envise chip is a general-purpose machine learning accelerator that combines a photonic integrated circuit (PIC) and CMOS transistor-based devices (ASIC) in a single compact module. The device uses silicon photonics for high-performance AI inference tasks and consumes much less energy than CMOS-only solutions, helping to reduce the projected power load from data centers.
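
To make that division of labor concrete, here is a toy NumPy sketch of the hybrid idea: the matrix-vector products that dominate inference are done in an analog (optical) domain, while nonlinearities, memory, and control stay digital. This is only an illustration of the concept under assumed layer sizes and noise levels, not Lightmatter’s actual architecture.

```python
# Toy model of a hybrid photonic/CMOS inference pass (conceptual only).
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(weights: np.ndarray, x: np.ndarray, noise_std: float = 0.01) -> np.ndarray:
    """Model an optical matrix-vector multiply: an ideal linear transform
    plus a small amount of analog noise at the photodetector readout."""
    y = weights @ x
    return y + rng.normal(0.0, noise_std * np.abs(y).max(), size=y.shape)

def relu(v: np.ndarray) -> np.ndarray:
    # Nonlinearity handled on the digital (CMOS) side of the module.
    return np.maximum(v, 0.0)

# Assumed layer sizes, for illustration only.
W1 = rng.standard_normal((64, 128)) / np.sqrt(128)
W2 = rng.standard_normal((10, 64)) / np.sqrt(64)
x = rng.standard_normal(128)

logits = analog_matvec(W2, relu(analog_matvec(W1, x)))
print(logits.round(3))
```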

Lightmatter has a roadmap for even faster processing that uses more colors of light as parallel processing channels, with each color acting as a separate virtual computer.
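
As a rough analogy for that wavelength parallelism, the sketch below runs several independent input vectors through one shared weight matrix in a single pass, the way multiple colors could share one programmable optical mesh. The channel count and matrix sizes are assumptions for illustration, not Lightmatter’s specifications.

```python
# Wavelength-parallelism analogy: one weight matrix, many colors (conceptual only).
import numpy as np

rng = np.random.default_rng(1)

num_colors = 8                      # assumed number of wavelength channels
W = rng.standard_normal((64, 128))  # one shared weight matrix programmed into the mesh

# One independent input vector per color, stacked as columns; a single
# pass through the mesh yields one output vector per color.
inputs = rng.standard_normal((128, num_colors))
outputs = W @ inputs                # shape (64, num_colors)

print(outputs.shape)
```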

Nick Harris, Lightmatter’s CEO, said that in addition to data center applications for Envise, he could see the technology enabling autonomous electric vehicles, which require high-performance AI but are constrained by battery power; lower-power inference would make it easier to provide compelling range per vehicle charge. In addition to the Envise module, Lightmatter also offers optical interconnect technology that it calls Passage.

Lightmatter is making optical AI processors that can provide fast results with less power consumption than conventional CMOS products. Their compute module combines CMOS logic and memory with optical analog processing units useful for AI inference, natural language processing, financial modelling and ray tracing.
