
Nvidia has announced what the company called its “largest-ever platform expansion for Edge AI and Robotics,” and rightfully so. In typical Nvidia fashion, the company is bringing together years of work on several different software platforms to meet the needs of a very specific application – robotics. Nvidia has been developing solutions for robotics, or what should more appropriately be called autonomous machines, for more than a decade. In conjunction with its investment in artificial intelligence (AI) and autonomous vehicles, Nvidia was one of the first tech companies to see the potential for future robotics platforms to leverage AI and technology developed for other segments.


The announcement includes adapting generative AI (GenAI) models to the Jetson Orin platform, the availability of Metropolis APIs and microservices for vision applications, the release of the latest Isaac ROS (Robot Operating System) framework and Isaac AMR (autonomous mobile robot) platform, and the release of the JetPack 6 software development kit (SDK). According to Nvidia, these releases include more value than all the other releases over the past 10 years combined. All the areas highlighted in green in the figure below are new or updated in this platform release.

One of the most significant enhancements is support for GenAI. As we at Tirias Research have indicated in previous articles, GenAI will be transformative for not just people, but also machines. Nvidia is working to enable zero-shot learning, which will enable devices to learn and predict results based on classes of data rather than being trained on specific samples. This will allow for shorter training cycles and more flexible interactions with and use of the resulting models. Nvidia is also predicting a new innovation cycle for autonomous machines as much of the learning shifts from text-based solutions to video or multi-modal (text, audio & video) training. Nvidia also included its new transformer PeopleNet model to increase the accuracy of people identification. But most important is the capability of the Orin platform to execute large language models (LLMs).
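The core idea behind zero-shot learning mentioned above can be sketched in a few lines: instead of training a classifier on labeled samples for each class, the system compares an input's embedding against embeddings of class *descriptions* and picks the closest match. The vectors and class names below are toy, hand-rolled values for illustration; in a real deployment (e.g. on a Jetson device) the embeddings would come from a pretrained multimodal encoder, not be written by hand.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy class-description embeddings (hypothetical values).
# No per-class training samples are needed -- only a description vector.
class_embeddings = {
    "pedestrian": [0.9, 0.1, 0.0],
    "forklift":   [0.1, 0.8, 0.3],
    "pallet":     [0.0, 0.3, 0.9],
}

def zero_shot_classify(embedding):
    # Assign the class whose description vector is closest to the input.
    return max(class_embeddings, key=lambda c: cosine(embedding, class_embeddings[c]))

print(zero_shot_classify([0.85, 0.2, 0.05]))  # → "pedestrian"
```

Adding a new class at runtime only requires adding a new description vector to the dictionary, which is why zero-shot approaches shorten training cycles: no retraining pass is needed.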

If you wanted to, you could access an “evil” version of OpenAI’s ChatGPT today—though it’s going to cost you, and it may not be legal depending on where you live.

However, getting access is a bit tricky. You’ll have to find the right web forums with the right users. One of those users might have a post marketing a private and powerful large language model (LLM). You’ll connect with them on an encrypted messaging service like Telegram where they’ll ask you for a few hundred dollars in cryptocurrency in exchange for the LLM.

Once you have access to it, though, you’ll be able to use it for all the things that ChatGPT or Google’s Bard prohibits you from doing: have conversations about any illicit or ethically dubious topic under the sun, learn how to cook meth or create pipe bombs, or even use it to fuel a cybercriminal enterprise by way of phishing schemes.

When humans make decisions, such as picking what to eat from a menu, what jumper to buy at a store, or what political candidate to vote for, they can be more or less confident in their choice. When they are less confident, and thus experience greater uncertainty about the choice, their choices also tend to be less consistent, meaning they are more likely to change their mind before reaching a final decision.
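The link between confidence and consistency described above can be illustrated with a toy simulation (not the study's actual model): treat each decision as noisy evidence added to a value difference between two options. When the value difference is large relative to the noise, the same option wins almost every time; when it is small, the choice flips often. All parameter values here are arbitrary, for illustration only.

```python
import random

def choose(value_gap, noise, rng):
    # A single noisy decision: pick option A if perceived evidence favors it.
    return (value_gap + rng.gauss(0, noise)) > 0

def consistency(value_gap, noise, trials=10_000, seed=0):
    """Fraction of repeated decisions that agree with the modal choice."""
    rng = random.Random(seed)
    picks = [choose(value_gap, noise, rng) for _ in range(trials)]
    p = sum(picks) / trials
    return max(p, 1 - p)

high_conf = consistency(value_gap=2.0, noise=1.0)  # clear preference
low_conf = consistency(value_gap=0.2, noise=1.0)   # near-indifference
print(high_conf, low_conf)  # the high-confidence case is far more consistent
```

The larger the gap between the options' values (relative to the noise), the more consistent the simulated choices, mirroring the behavioral signature the researchers used as a proxy for confidence.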

While neuroscientists have been exploring the neural underpinnings of decision-making for decades, many questions remain unanswered. For instance, how neural network computations support decision-making under varying levels of certainty remains poorly understood.

Researchers at the National Institute of Mental Health in Bethesda, Maryland recently carried out a study aimed at better understanding the neural network dynamics associated with decision confidence. Their paper, published in Nature Neuroscience, offers evidence that energy landscapes in the brain can predict the consistency of choices made by monkeys, which is in turn a sign of the animals’ confidence in their decisions.

Artificial general intelligence is the big goal of OpenAI. What exactly that is, however, is still up for debate.

According to OpenAI CEO Sam Altman, systems like GPT-4 or GPT-5 would have passed for AGI to “a lot of people” ten years ago. “Now people are like, well, you know, it’s like a nice little chatbot or whatever,” Altman said.

The phenomenon Altman describes has a name: It’s called the “AI effect,” and computer scientist Larry Tesler summed it up by saying, “AI is anything that has not been done yet.”


Researchers from the Australian National University, the University of Oxford, and the Beijing Academy of Artificial Intelligence have developed a new AI system called “3D-GPT” that can generate 3D models simply from text-based descriptions provided by a user.

The system, described in a paper published on arXiv, offers a more efficient and intuitive way to create 3D assets compared to traditional 3D modeling workflows.