
AI Isn’t A Revolution—It’s A Productivity Engine

At Phobio, well-implemented AI hasn’t just made us faster—it’s made us sharper, more creative and more strategic. When routine tasks are streamlined, people have time to think deeply about customers, competition and innovation.

Closing Thoughts

AI isn’t coming to take your job. But someone who knows how to use it might.

Maryland U & Google Introduce LilNetX: Simultaneously Optimizing DNN Size, Cost, Structured Sparsity & Accuracy

The current conventional wisdom on deep neural networks (DNNs) is that, in most cases, simply scaling up a model’s parameters and adopting computationally intensive architectures will result in large performance improvements. Although this scaling strategy has proven successful in research labs, real-world industrial deployments introduce a number of complications, as developers often need to repeatedly train a DNN, transmit it to different devices, and ensure it can perform under various hardware constraints with minimal accuracy loss.

The research community has thus become increasingly interested in reducing such models’ storage size on devices while also improving their run-time. Explorations in this area have tended to follow one of two avenues: reducing model size via compression techniques, or using model pruning to reduce computation burdens.

In the new paper LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification, a team from the University of Maryland and Google Research proposes a way to “bridge the gap” between the two approaches with LilNetX, an end-to-end trainable technique for neural networks that jointly optimizes model parameters for accuracy, model size on disk, and computation on any given task.
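At a high level, the joint optimization can be pictured as a single training loss that adds size and sparsity penalties to the usual task loss. The following is a minimal sketch in PyTorch, assuming an L1 penalty as a stand-in for on-disk code length and a group-L2 penalty as a stand-in for structured sparsity; LilNetX's actual formulation (latent quantization with a learned entropy model) is more involved.

```python
# Hedged sketch: a joint "accuracy + size + sparsity" objective in PyTorch.
# The penalty terms (L1 as a proxy for code length, group-L2 as a proxy for
# structured sparsity) are illustrative stand-ins, not LilNetX's actual rate model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))
task_loss = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

lambda_rate, lambda_group = 1e-4, 1e-4  # trade-off weights (assumed values)

def regularizers(m: nn.Module):
    rate = 0.0    # proxy for model size on disk: small-magnitude weights compress cheaply
    group = 0.0   # proxy for structured sparsity: push whole output rows toward zero
    for layer in m.modules():
        if isinstance(layer, nn.Linear):
            rate = rate + layer.weight.abs().sum()
            group = group + layer.weight.norm(dim=1).sum()
    return rate, group

x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))  # dummy batch
logits = model(x)
rate, group = regularizers(model)
loss = task_loss(logits, y) + lambda_rate * rate + lambda_group * group
opt.zero_grad()
loss.backward()
opt.step()
```

Adjusting the two trade-off weights moves the trained network along the accuracy/size/compute frontier.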

Self-learning neural network cracks iconic black holes

A team of astronomers led by Michael Janssen (Radboud University, The Netherlands) has trained a neural network with millions of synthetic black hole data sets. Based on the network and data from the Event Horizon Telescope, they now predict, among other things, that the black hole at the center of our Milky Way is spinning at near top speed.

The astronomers have published their results and methodology in three papers in the journal Astronomy & Astrophysics.

In 2019, the Event Horizon Telescope Collaboration released the first image of a supermassive black hole at the center of the galaxy M87. In 2022, they presented an image of the black hole in our Milky Way, Sagittarius A*. However, the data behind the images still contained a wealth of hard-to-crack information. An international team of researchers trained a neural network to extract as much information as possible from the data.
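The general recipe, training a network on labeled synthetic data and then applying it to observations, can be sketched as follows. This is a toy illustration assuming PyTorch, a stand-in simulator, and a single spin parameter; the team's actual simulation pipeline and parameter set are far richer.

```python
# Hedged sketch of simulation-based inference: train a small regressor on synthetic,
# labeled black hole "images", then apply it to observed data. The simulator, network
# size, and parameter range below are placeholders, not the team's pipeline.
import torch
import torch.nn as nn

def simulate_image(spin: torch.Tensor) -> torch.Tensor:
    """Stand-in for a physical simulator: returns a 32x32 image that depends on spin."""
    base = torch.randn(1, 32, 32) * 0.1
    ramp = torch.linspace(0, 1, 32).unsqueeze(0).repeat(32, 1)
    return base + spin * ramp  # toy dependence on the spin parameter

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):                        # train on synthetic data only
    spins = torch.rand(64, 1)                  # labels drawn from the prior [0, 1)
    imgs = torch.stack([simulate_image(s) for s in spins])
    loss = loss_fn(net(imgs), spins)
    opt.zero_grad(); loss.backward(); opt.step()

observation = simulate_image(torch.tensor(0.9)).unsqueeze(0)  # stand-in for real data
print("estimated spin:", net(observation).item())
```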

NVIDIA/physicsnemo: Open-source deep-learning framework for building, training, and fine-tuning deep learning models using state-of-the-art Physics-ML methods


AI-designed waveguides pave the way for next-generation photonic devices

A team of researchers at the University of California, Los Angeles (UCLA) has introduced a novel framework for designing and creating universal diffractive waveguides that can control the flow of light in highly specific and complex ways.

This new technology uses artificial intelligence (AI), specifically deep learning, to design a series of structured surfaces that guide light with high efficiency and can perform a wide range of functions that are challenging for conventional waveguides.

The work is published in the journal Nature Communications.
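As a rough illustration of the design principle (not the UCLA framework itself), each structured surface can be modeled as a trainable phase mask and the whole stack optimized by gradient descent so that light is steered into a target region. The sketch below assumes a simplified angular-spectrum propagation model, arbitrary units, and three layers.

```python
# Hedged sketch: gradient-optimized diffractive phase layers. The propagation model,
# grid, wavelength, and target below are simplified placeholders.
import math
import torch

N, wl, dx, z = 64, 1.0, 1.0, 50.0           # grid size, wavelength, pixel pitch, layer spacing
fx = torch.fft.fftfreq(N, d=dx)
FX, FY = torch.meshgrid(fx, fx, indexing="ij")
kz = 2 * math.pi * torch.sqrt(torch.clamp(1 / wl**2 - FX**2 - FY**2, min=0.0))
H = torch.exp(1j * kz * z)                   # angular-spectrum transfer function

def propagate(u):
    return torch.fft.ifft2(torch.fft.fft2(u) * H)

phases = [torch.zeros(N, N, requires_grad=True) for _ in range(3)]  # 3 diffractive layers
opt = torch.optim.Adam(phases, lr=0.05)

target = torch.zeros(N, N)
target[24:40, 24:40] = 1.0                   # route light into a central square

u0 = torch.ones(N, N, dtype=torch.cfloat)    # uniform input beam
for step in range(300):
    u = u0
    for p in phases:
        u = propagate(u * torch.exp(1j * p))
    intensity = u.abs() ** 2
    efficiency = (intensity * target).sum() / intensity.sum()
    loss = -efficiency                       # maximize power delivered to the target region
    opt.zero_grad(); loss.backward(); opt.step()
```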

Quantum machine learning: Small-scale photonic quantum processor can already outperform classical counterparts

One of the current hot research topics is the combination of two of the most recent technological breakthroughs: machine learning and quantum computing.

An experimental study shows that even small-scale quantum computers can already boost the performance of machine learning algorithms.

This was demonstrated on a photonic quantum processor by an international team of researchers at the University of Vienna. The work, published in Nature Photonics, shows promise for optical quantum computers.
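A common pattern in such experiments is the quantum kernel method: data points are encoded into quantum states, the processor estimates pairwise state overlaps, and a classical learner is trained on the resulting kernel. The sketch below illustrates that pattern with a toy two-qubit encoding simulated in NumPy and scikit-learn's precomputed-kernel SVM; it is not the Vienna team's actual protocol.

```python
# Hedged sketch of the quantum-kernel idea: the "processor" is replaced by a tiny
# state-vector simulation, and the encoding and dataset are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def encode(x):
    """Toy 2-qubit feature map: product of single-qubit rotations driven by the inputs."""
    def qubit(theta):
        return np.array([np.cos(theta), np.sin(theta)], dtype=complex)
    return np.kron(qubit(x[0]), qubit(x[1]))     # 4-dimensional state vector

def quantum_kernel(A, B):
    """Kernel entry = squared overlap |<psi(a)|psi(b)>|^2, what the hardware would estimate."""
    return np.array([[abs(np.vdot(encode(a), encode(b))) ** 2 for b in B] for a in A])

# Small synthetic classification task
X = rng.uniform(0, np.pi, size=(60, 2))
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.5).astype(int)
X_train, y_train, X_test, y_test = X[:40], y[:40], X[40:], y[40:]

clf = SVC(kernel="precomputed").fit(quantum_kernel(X_train, X_train), y_train)
print("test accuracy:", clf.score(quantum_kernel(X_test, X_train), y_test))
```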