
I had a conversation with NVIDIA CEO Jensen Huang and we spoke about groundbreaking developments in physical AI and other big announcements made at CES. Jensen discusses how NVIDIA Cosmos and Omniverse are revolutionizing robot training, enabling machines to understand the physical world and learn in virtual environments — reducing training time from years to hours.

He shares insights into NVIDIA DRIVE’s autonomous vehicle developments, including the major partnership with Toyota, and talks about the critical role of safety in NVIDIA’s three-computer approach.

Jensen also shares what he considers to be the most impactful technology of our time! This conversation left me feeling excited for the future of technology and where we’re headed. I hope you enjoy it as much as I did.


Modern AI systems have fulfilled Turing’s vision of machines that learn and converse like humans, but challenges remain. A new paper highlights concerns about energy consumption and societal inequality while calling for more robust AI testing to ensure ethical and sustainable progress.

A perspective published on November 13 in Intelligent Computing, a Science Partner Journal, argues that modern artificial intelligence has fulfilled Turing’s vision of machines that learn and converse like humans, but that significant challenges around energy use, inequality, and rigorous testing remain.

Artificial Intelligence (AI) is a branch of computer science focused on creating systems that can perform tasks typically requiring human intelligence. These tasks include understanding natural language, recognizing patterns, solving problems, and learning from experience. AI technologies use algorithms and massive amounts of data to train models that can make decisions, automate processes, and improve over time through machine learning. The applications of AI are diverse, impacting fields such as healthcare, finance, automotive, and entertainment, fundamentally changing the way we interact with technology.
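As a rough illustration of what “learning from experience” means in practice, here is a minimal sketch, not tied to any system mentioned in this article, that fits a simple classifier to labeled examples and then checks how well it generalizes to data it has never seen. The dataset and model are arbitrary stand-ins chosen only because they are small and built into scikit-learn.

```python
# Minimal sketch of "learning from data": train a model on labeled examples,
# then evaluate it on examples it has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small built-in dataset of handwritten digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=2000)  # parameters are fit from the training examples
model.fit(X_train, y_train)

# Accuracy on held-out data is what "improving through machine learning" buys you.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The same train-then-evaluate loop underlies far larger systems; only the models, data volumes, and compute budgets change.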

Trying to spot contraband is a tricky business. Not only are items like narcotics and counterfeit merchandise difficult to identify, but the most widely used screening technology, X-rays, gives only a 2D view, and often a muddy one at that.

“It’s not like X-raying a tooth, where you just have a tooth,” said Eric Miller, professor of electrical and computer engineering at Tufts. Instead, it’s like X-raying a tooth and getting the entire dental exam room.

But Miller and his research team have now found a possible solution that uses AI to spot items that shouldn’t be there and is accurate 98% of the time. Their findings were published in Engineering Applications of Artificial Intelligence.
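To make the general idea concrete, here is a hedged sketch of an image classifier that flags scans for inspection. This is not the Tufts team’s published method; the network shape, input size, and the two-class “clear vs. flag” framing are illustrative assumptions only.

```python
# Hedged sketch: a generic classifier over 2D scan images, NOT the Tufts method.
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):  # assumed classes: "clear" vs. "flag for inspection"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),  # assumes 64x64 single-channel scans
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = ScanClassifier()
dummy_scan = torch.randn(1, 1, 64, 64)  # one synthetic 64x64 grayscale "scan"
logits = model(dummy_scan)
print(logits.shape)  # torch.Size([1, 2])
```

A real system would be trained on large volumes of labeled scan data and would likely localize suspicious regions rather than just score whole images, but the flag-and-inspect workflow is the same.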

A team of math and AI researchers at Microsoft Asia has designed and developed a small language model (SLM) that can be used to solve math problems. The group has posted a paper on the arXiv preprint server outlining the technology and math behind the new tool and how well it has performed on standard benchmarks.

Over the past several years, multiple technology companies have been working hard to steadily improve their large language models (LLMs), resulting in AI products that have become mainstream in a very short time. Unfortunately, such tools require massive amounts of computing power, which means they consume a lot of electricity, making them expensive to maintain.

Because of that, some in the field have been turning to SLMs, which, as their name implies, are smaller and thus far less resource intensive. Some are small enough to run on a local device. One of the main ways AI researchers make the best use of SLMs is by narrowing their focus—instead of trying to answer any question about anything, they are designed to answer questions about something much more specific—like math. In this new effort, Microsoft has focused its efforts not just on solving math problems, but also on teaching an SLM how to reason its way through a problem.
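For a sense of what “a small model reasoning step by step through a math problem” looks like in code, here is a minimal sketch using the Hugging Face transformers library. The model name microsoft/phi-2 is just a stand-in for a locally runnable small model; it is not the model described in the Microsoft Asia paper, and the prompt format is an assumption rather than the paper’s technique.

```python
# Hedged sketch: prompting a generic small language model for step-by-step math
# reasoning. "microsoft/phi-2" is a placeholder small model, not the paper's model.
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/phi-2")

prompt = (
    "Solve the problem step by step, then state the final answer.\n"
    "Problem: A train travels 120 km in 2 hours. What is its average speed?\n"
    "Solution:"
)

output = generator(prompt, max_new_tokens=128, do_sample=False)
print(output[0]["generated_text"])
```

Because such a model has far fewer parameters than a frontier LLM, it can run on a single workstation or even a laptop, which is exactly the resource argument the article makes for narrowing an SLM’s focus to a domain like math.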