
Nvidia has announced what the company called its “largest-ever platform expansion for Edge AI and Robotics,” and rightfully so. In typical Nvidia fashion, the company is bringing together years of work on several different software platforms to meet the needs of a very specific application – robotics. Nvidia has been developing solutions for robotics, or what should more appropriately be called autonomous machines, for more than a decade. In conjunction with the company’s investment in artificial intelligence (AI) and autonomous vehicles, Nvidia was one of the first tech companies to see the potential in future robotics platforms leveraging AI and technology developed for other segments.


The announcement includes adapting generative AI (GenAI) models to the Jetson Orin platform, the availability of Metropolis APIs and microservices for vision applications, the release of the latest Isaac ROS (Robot Operating System) framework and Isaac AMR (autonomous mobile robot) platform, and the release of the JetPack 6 software development kit (SDK). According to Nvidia, these releases deliver more value than all of its releases over the past 10 years combined. All the areas highlighted in green in the figure below are new or updated in this platform release.

One of the most significant enhancements is support for GenAI. As we at Tirias Research have indicated in previous articles, GenAI will be transformative not just for people, but also for machines. Nvidia is working to enable zero-shot learning, which lets devices learn and predict results based on classes of data rather than being trained on specific samples. This allows for shorter training cycles and more flexible interactions with and use of the resulting models. Nvidia is also predicting a new innovation cycle for autonomous machines as much of the learning shifts from text-based solutions to video or multi-modal (text, audio & video) training. Nvidia also included its new transformer-based PeopleNet model to increase the accuracy of people identification. Most important, though, is the capability of the Orin platform to execute large language models (LLMs).
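The core idea behind zero-shot classification can be sketched in a few lines: instead of training a classifier on labeled examples of each class, the system compares an input against natural-language descriptions of the classes and picks the closest match. The toy "embedding" below is just a bag-of-words count vector for the sake of a self-contained example; a real zero-shot system (on Jetson or anywhere else) would use a learned encoder, and the class names and descriptions here are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # A real zero-shot system would use a learned encoder instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(text: str, class_descriptions: dict) -> str:
    # Pick the class whose natural-language *description* is most similar
    # to the input -- no per-class training samples are needed.
    scores = {name: cosine(embed(text), embed(desc))
              for name, desc in class_descriptions.items()}
    return max(scores, key=scores.get)

classes = {
    "robot": "an autonomous machine with motors sensors and actuators",
    "vehicle": "a car truck or bus that drives on roads",
}
print(zero_shot_classify("the machine uses sensors and actuators to move", classes))
```

The key property is that adding a new class only requires writing a new description, not collecting and labeling new training samples, which is what shortens the training cycle the article refers to.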

If you wanted to, you could access an “evil” version of OpenAI’s ChatGPT today—though it’s going to cost you. It also might not necessarily be legal depending on where you live.

However, getting access is a bit tricky. You’ll have to find the right web forums with the right users. One of those users might have a post marketing a private and powerful large language model (LLM). You’ll connect with them on an encrypted messaging service like Telegram where they’ll ask you for a few hundred dollars in cryptocurrency in exchange for the LLM.

Once you have access to it, though, you’ll be able to use it for all the things that ChatGPT or Google’s Bard prohibits you from doing: have conversations about any illicit or ethically dubious topic under the sun, learn how to cook meth or create pipe bombs, or even use it to fuel a cybercriminal enterprise by way of phishing schemes.

When humans make decisions, such as picking what to eat from a menu, what jumper to buy at a store, or what political candidate to vote for, they might be more or less confident in their choice. When they are less confident, and thus experience greater uncertainty about the choice, their choices also tend to be less consistent, meaning they are more likely to change their mind before reaching a final decision.

While neuroscientists have been exploring the neural underpinnings of decision-making for decades, many questions are still unanswered. For instance, how neural network computations support decision-making under varying levels of certainty remains poorly understood.

Researchers at the National Institute of Mental Health in Bethesda, Maryland, recently carried out a study aimed at better understanding the neural network dynamics associated with decision confidence. Their paper, published in Nature Neuroscience, offers evidence that energy landscapes in the brain can predict the consistency of choices made by monkeys, which is in turn a sign of the animals’ confidence in their decisions.
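The link between confidence and choice consistency described above can be illustrated with a simple simulation (this is a generic logistic choice model, not the method used in the Nature Neuroscience paper): when the value difference between two options is large, repeated choices almost always agree; when it is small, the decision-maker flips back and forth.

```python
import math
import random

def choose(value_diff: float, noise: float = 1.0) -> int:
    # Logistic choice rule: the probability of picking option A grows
    # with its value advantage over option B.
    p_a = 1.0 / (1.0 + math.exp(-value_diff / noise))
    return 0 if random.random() < p_a else 1

def consistency(value_diff: float, trials: int = 10_000) -> float:
    # Fraction of repeated decisions that agree with the majority choice:
    # a proxy for how consistent (and thus confident) the chooser is.
    picks = [choose(value_diff) for _ in range(trials)]
    majority = max(set(picks), key=picks.count)
    return picks.count(majority) / trials

random.seed(0)
confident = consistency(2.0)   # large value difference -> high confidence
uncertain = consistency(0.1)   # small value difference -> low confidence
print(f"confident: {confident:.2f}, uncertain: {uncertain:.2f}")
```

With a value difference of 2.0 the same option is chosen on the great majority of trials, while a difference of 0.1 yields near-coin-flip behavior, mirroring the inconsistent choices associated with low confidence.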

Artificial general intelligence is the big goal of OpenAI. What exactly that is, however, is still up for debate.

According to OpenAI CEO Sam Altman, systems like GPT-4 or GPT-5 would have passed for AGI to “a lot of people” ten years ago. “Now people are like, well, you know, it’s like a nice little chatbot or whatever,” Altman said.

The phenomenon Altman describes has a name: It’s called the “AI effect,” and computer scientist Larry Tesler summed it up by saying, “AI is anything that has not been done yet.”


Researchers from the Australian National University, the University of Oxford, and the Beijing Academy of Artificial Intelligence have developed a new AI system called “3D-GPT” that can generate 3D models simply from text-based descriptions provided by a user.

The system, described in a paper published on arXiv, offers a more efficient and intuitive way to create 3D assets compared to traditional 3D modeling workflows.
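One way to make this concrete: rather than generating geometry directly, a system in this vein can map a text description onto the parameters of a procedural generator, which then builds the 3D scene. The sketch below is a deliberately simplified, keyword-based stand-in for that mapping step (the real system uses an LLM, and the parameter names and rules here are illustrative assumptions, not 3D-GPT's actual interface).

```python
import re

# Default parameters for a hypothetical procedural scene generator.
DEFAULTS = {"tree_count": 5, "tree_height": 3.0, "terrain": "flat"}

def description_to_params(text: str) -> dict:
    # Map a natural-language scene description onto procedural
    # parameters. A real text-to-3D system would delegate this
    # interpretation step to an LLM instead of keyword rules.
    params = dict(DEFAULTS)
    text = text.lower()
    if "forest" in text or "many trees" in text:
        params["tree_count"] = 50
    if "tall" in text:
        params["tree_height"] = 10.0
    if "hilly" in text or "mountain" in text:
        params["terrain"] = "hilly"
    m = re.search(r"(\d+)\s+trees", text)
    if m:  # an explicit count overrides the keyword rules
        params["tree_count"] = int(m.group(1))
    return params

print(description_to_params("a hilly forest with tall trees"))
```

The efficiency gain comes from this indirection: the hard problem of synthesizing geometry is handled by an existing procedural engine, and the language model only has to produce a small set of parameters.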

Amazon is always on the frontier of technological advancements, and in a recent move, the e-commerce giant announced its plans to deepen its collaboration with Agility Robotics. As part of their ongoing partnership, Amazon will commence tests using the bipedal robot, Digit, in its operations. This exciting development comes after Amazon’s strategic investment in Agility Robotics through the Amazon Industrial Innovation Fund.

For those unfamiliar with Digit, it’s a marvel of modern robotic engineering. Developed by Agility Robotics, Digit stands out with its unique bipedal design. It isn’t just any robot; it’s designed with two legs, allowing it to move and operate in human-like ways, making it exceptionally fit for environments crafted for humans. Equipped with advanced sensors like LIDAR, it perceives its surroundings and avoids obstacles with ease. Its arms are adept at maintaining balance, carrying objects, and interacting with various elements of its environment.

Amazon’s primary interest in Digit stems from its capacity to move, grasp, and handle items in the nooks and crannies of warehouses, offering innovative approaches to everyday operations. Specifically, the robot’s dimensions make it ideal for navigating spaces within buildings primarily designed for humans. Amazon foresees significant potential in scaling a mobile manipulator solution like Digit, particularly due to its collaborative nature. The robot is set to assist Amazon employees initially with tote recycling – a repetitive process involving the pickup and transportation of empty totes after their inventory has been fully picked.

Central to Amazon’s robotic initiatives is the collaborative nature of these systems. Amazon emphasizes that robots like Digit and Sequoia are designed to work in harmony with employees. Over the past decade, Amazon has integrated numerous robotic systems into its operations. Impressively, this technological adoption has not meant the reduction of its workforce. In fact, the company has introduced hundreds of thousands of new jobs in this period. Alongside the addition of robotic systems, Amazon has ushered in 700 new job categories, many of which are skilled roles that previously didn’t exist within the company.