
Meta challenges Nvidia’s dominance with new AI chips

The design is intended to give the chips more computing power, bandwidth, and memory capacity. Meta initially aimed to use the chips for inference tasks such as ranking content and generating responses to user prompts; it now plans to use them for more intensive operations, such as training AI models on large data sets.

A shift to its own chips could save Meta millions of dollars in annual energy costs, as well as billions in capital expenditure otherwise spent buying chips from Nvidia.

Meta isn’t the only tech company looking to design and build its own AI chips. Legacy chipmaker Intel, which has lagged in catering to industry requirements for AI chips, also announced its new Gaudi chips at an event on Tuesday.

Elon Musk Says That Within Two Years, AI Will Be “Smarter Than the Smartest Human”

Tesla CEO Elon Musk — who has an abysmal track record for making predictions — is predicting that we will achieve artificial general intelligence (AGI) by 2026.

“If you define AGI as smarter than the smartest human, I think it’s probably next year, within two years,” he told Norway wealth fund CEO Nicolai Tangen during an interview this week, as quoted by Reuters.

The mercurial billionaire also attempted to explain why his own AI venture, xAI, has been falling behind the competition. According to Musk, a shortage of chips was hampering his startup’s efforts to come up with the successor of Grok, a foul-mouthed, dad joke-generating AI chatbot.

Q&A: How to Train AI When You Don’t Have Enough Data

Artificial intelligence excels at sorting through information and detecting patterns or trends. But these machine learning algorithms need to be trained with large amounts of data first.

As researchers explore potential applications for AI, they have found scenarios where AI could be really useful—such as analyzing X-ray image data to look for evidence of rare conditions or detecting a rare fish species caught on a commercial fishing boat—but there’s not enough data to accurately train the algorithms.

Jenq-Neng Hwang, University of Washington professor of electrical and computer engineering, specializes in these issues. For example, Hwang and his team developed a method that teaches AI to monitor how many distinct poses a baby can achieve throughout the day. There are limited training datasets of babies, which meant the researchers had to create a unique pipeline to make their algorithm accurate and useful.
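One common way to stretch a scarce labeled dataset (not necessarily the pipeline Hwang's team built, whose details the article doesn't give) is label-preserving augmentation: each original image yields several transformed copies that keep the same label. A minimal NumPy sketch, with hypothetical names, might look like this:

```python
import numpy as np

def augment(images, labels):
    """Expand a small labeled image set with simple geometric transforms.

    Illustrative only: each square image yields its flips and 90-degree
    rotations, all sharing the original label, multiplying the effective
    training-set size by 5.
    """
    aug_images, aug_labels = [], []
    for img, lbl in zip(images, labels):
        variants = [
            img,                 # original
            np.fliplr(img),      # horizontal flip
            np.flipud(img),      # vertical flip
            np.rot90(img, k=1),  # 90-degree rotation
            np.rot90(img, k=3),  # 270-degree rotation
        ]
        aug_images.extend(variants)
        aug_labels.extend([lbl] * len(variants))
    return np.stack(aug_images), np.array(aug_labels)

# A toy "dataset" of two 4x4 grayscale images with one label each.
rng = np.random.default_rng(0)
X = rng.random((2, 4, 4))
y = np.array([0, 1])
X_aug, y_aug = augment(X, y)
print(X_aug.shape)  # (10, 4, 4) -- 5 variants per original image
```

Real pipelines for rare-class problems typically combine augmentation like this with transfer learning from a model pretrained on a larger, related dataset.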

Beyond Transformers: Symbolica launches with $33M to change the AI industry with symbolic models

Artificial intelligence startup Symbolica AI launched today with an original approach to building generative AI models.

The company is aiming to tackle the expensive mechanisms behind training and deploying large language models such as OpenAI’s ChatGPT that are built on the Transformer architecture.

Alongside that news, it also revealed today that it has raised $33 million in combined seed and Series A funding led by Khosla Ventures. Other investors included Day One Ventures, Abstract Ventures, Buckley Ventures, and General Catalyst.

Moore’s Law for Everything

Fascinating vision/plan by the one and only Sam Altman of how to update our economic systems to benefit everyone in the context of rapidly accelerating technological change.


My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. Software that can think and learn will do more and more of the work that people now do. Even more power will shift from labor to capital. If public policy doesn’t adapt accordingly, most people will end up worse off than they are today.

We need to design a system that embraces this technological future and taxes the assets that will make up most of the value in that world–companies and land–in order to fairly distribute some of the coming wealth. Doing so can make the society of the future much less divisive and enable everyone to participate in its gains.

In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of “everything.”

Success requires ‘ample doses of pain,’ billionaire Nvidia CEO tells Stanford students: ‘I hope suffering happens to you’

When it comes to achieving success, Huang knows more than most. In 1993, he co-founded computer chip company Nvidia, where he’s served as CEO for more than three decades. The company’s success turned Huang into a billionaire. Now, with Nvidia’s chips in high demand for building AI software, it’s become one of the world’s most valuable companies with a valuation north of $2 trillion.

Huang himself is one of the world’s wealthiest individuals, with an estimated net worth of $77.6 billion, according to Bloomberg.

For Huang, there is one particular trait that makes anyone more likely to succeed: resilience. At last week’s event, he told Stanford students how he personally developed the resilience necessary to build and run one of the world’s most valuable companies.
