Dec 7, 2022

Latest AI Research Finds a Simple Self-Supervised Pruning Metric That Enables Researchers to Discard 20% of ImageNet Without Sacrificing Performance, Beating Neural Scaling Laws via Data Pruning

Posted in category: robotics/AI

Neural scaling laws say that a machine learning model's error keeps falling as you scale up compute, model size, and the number of training data points. Since computing power is abundant and collecting more data is easier than ever, we should be able to drive the test error down to a very small value, right?
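For reference, scaling laws of this kind are usually written as a power law in the resource being scaled. A minimal sketch of that standard form, with the constants and exponent left as placeholders rather than values taken from this post:

\[
E(N) \;\approx\; E_{\infty} + a\,N^{-\alpha}, \qquad \alpha > 0,
\]

where E(N) is the test error after training on N examples, E∞ is an irreducible error floor, and the exponent α is typically small. A small exponent is exactly why each further reduction in error demands disproportionately more data or compute.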

Here's the catch: this approach is far from ideal. Even with ample computational power, the returns from scaling are weak and unsustainable because the additional cost grows so quickly. For example, dropping the error from 3.4% to 2.8% might require an order of magnitude more data, computation, or energy. So what could a solution look like?
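To make that "order of magnitude" figure concrete, here is a minimal sketch in Python, assuming the error follows a pure power law E(N) = a·N^(−α); the exponent α = 0.08 is an illustrative assumption, not a number from the post:

```python
# Illustrative only: under an assumed power law E(N) = a * N**(-alpha),
# compute how much the training set must grow to cut test error
# from err_now down to err_target.
def data_multiplier(err_now: float, err_target: float, alpha: float) -> float:
    """Factor by which N must increase so that a * N**(-alpha) falls to err_target."""
    return (err_now / err_target) ** (1.0 / alpha)

# With the assumed exponent alpha = 0.08, going from 3.4% to 2.8% error
# needs roughly 11x more data, i.e. about an order of magnitude.
print(f"{data_multiplier(3.4, 2.8, alpha=0.08):.1f}x more data")
```

The same arithmetic applies to compute or energy if they follow a comparable power law, and that diminishing-returns problem is what data pruning aims to sidestep.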
