
(The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.) As with weeds in a garden, it is difficult to fully eliminate cancer cells from the body once they arise. They have a relentless drive to keep expanding, even when therapy or surgery cuts them back significantly.

Over the last decade, the landscape of machine learning software development has undergone significant changes. Many frameworks have come and gone, but most have relied heavily on Nvidia’s CUDA and performed best on Nvidia GPUs. However, with the arrival of PyTorch 2.0 and OpenAI’s Triton, Nvidia’s dominant position in this field, built largely on its software moat, is being disrupted.
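To make concrete what “PyTorch 2.0 plus Triton” means in practice, here is a minimal sketch (not taken from the report, and assuming a current PyTorch 2.x install): torch.compile hands the captured model graph to the default TorchInductor backend, which generates Triton kernels when running on a GPU instead of relying solely on pre-built CUDA libraries. The toy model and shapes below are illustrative placeholders.

```python
import torch
import torch.nn as nn

# A placeholder model; any standard nn.Module works the same way.
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.GELU(),
    nn.Linear(1024, 512),
)

# torch.compile captures the model's computation graph and passes it to a
# compiler backend (TorchInductor by default), which emits Triton kernels
# when executed on a GPU.
compiled_model = torch.compile(model)

x = torch.randn(8, 512)
out = compiled_model(x)  # first call triggers compilation; later calls reuse the kernels
print(out.shape)
```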

This report will touch on topics such as why Google’s TensorFlow lost out to PyTorch, why Google hasn’t been able to capitalize publicly on its early leadership in AI, the major components of machine learning model training time, the memory capacity/bandwidth/cost wall, model optimization, why other AI hardware companies haven’t been able to make a dent in Nvidia’s dominance so far, why hardware will start to matter more, how Nvidia’s competitive advantage in CUDA is being wiped away, and a major win one of Nvidia’s competitors has secured at a large cloud for training silicon.

Join our Discord to enter the giveaway, and comment with your username (without the @): https://discord.gg/learnaitogether

Attend WAICF with a 20% discount: https://www.worldaicannes.com/pass/executive-pass/?utm_campa…ICF23-LOBO

References:
►Read the full article: https://www.louisbouchard.ai/vall-e/
►Link for the audio samples: https://valle-demo.github.io/
►Wang et al., 2023, VALL-E: https://arxiv.org/pdf/2301.02111.pdf
►My Newsletter (a new AI application explained in your inbox every week!): https://www.louisbouchard.ai/newsletter/

Chapters: