Currently, the most common and accurate methods for diagnosing type 2 diabetes involve blood work. A new study, however, asserts that type 2 diabetes can now be diagnosed based on the sound of a person’s voice.

Researchers from Klick Applied Science have developed a tool they say can diagnose type 2 diabetes with accuracies of up to 0.89 in women and 0.86 in men.

To achieve this, the researchers used an ensemble model that also factored in women’s body mass index (BMI) and men’s age and BMI.
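The study's exact pipeline isn't described here, but a soft-voting ensemble that blends acoustic features with demographic ones might look roughly like the sketch below. The feature names, coefficients, and weights are illustrative placeholders, not values from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def voice_model(pitch_hz, jitter):
    # Hypothetical logistic model over two acoustic features.
    return sigmoid(0.02 * (pitch_hz - 180) + 40 * (jitter - 0.01))

def demographic_model(bmi, age):
    # Hypothetical logistic model over BMI and age.
    return sigmoid(0.15 * (bmi - 25) + 0.03 * (age - 50))

def ensemble_risk(pitch_hz, jitter, bmi, age, w_voice=0.6):
    # Soft vote: a weighted average of the two model probabilities.
    p_voice = voice_model(pitch_hz, jitter)
    p_demo = demographic_model(bmi, age)
    return w_voice * p_voice + (1 - w_voice) * p_demo

p = ensemble_risk(pitch_hz=190, jitter=0.015, bmi=31, age=58)
print(round(p, 3))
```

The soft-voting scheme lets each sub-model be trained and validated separately, which is one common way to fold demographic covariates like BMI and age into a primarily acoustic classifier.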

Nvidia has announced what the company called its “largest-ever platform expansion for Edge AI and Robotics,” and rightfully so. In typical Nvidia fashion, the company is bringing together years of work on several different software platforms to meet the needs of a very specific application – robotics. Nvidia has been developing solutions for robotics, or what should more appropriately be called autonomous machines, for more than a decade. In conjunction with the company’s investment in artificial intelligence (AI) and autonomous vehicles, Nvidia was one of the first tech companies to see the potential in future robotics platforms leveraging AI and technology developed for other segments.


The announcement includes adapting generative AI (GenAI) models to the Jetson Orin platform, the availability of Metropolis APIs and microservices for vision applications, the release of the latest Isaac ROS (Robot Operating System) framework and Isaac AMR (autonomous mobile robot) platform, and the release of the JetPack 6 software development kit (SDK). According to Nvidia, these releases include more value than all the other releases over the past 10 years combined. All the areas highlighted in green in the figure below are new or updated in this platform release.

One of the most significant enhancements is support for GenAI. As we at Tirias Research have indicated in previous articles, GenAI will be transformative not just for people, but also for machines. Nvidia is working to enable zero-shot learning, which will allow devices to learn and predict results based on classes of data rather than being trained on specific samples. This will allow for shorter training cycles and more flexible interactions with and use of the resulting models. Nvidia is also predicting a new innovation cycle for autonomous machines as much of the learning shifts from text-based solutions to video or multi-modal (text, audio & video) training. Nvidia also included its new transformer-based PeopleNet model to increase the accuracy of people identification. But most important is the capability of the Orin platform to execute large language models (LLMs).

If you wanted to, you could access an “evil” version of OpenAI’s ChatGPT today—though it’s going to cost you. It also might not necessarily be legal depending on where you live.

However, getting access is a bit tricky. You’ll have to find the right web forums with the right users. One of those users might have a post marketing a private and powerful large language model (LLM). You’ll connect with them on an encrypted messaging service like Telegram where they’ll ask you for a few hundred dollars in cryptocurrency in exchange for the LLM.

Once you have access to it, though, you’ll be able to use it for all the things that ChatGPT or Google’s Bard prohibits you from doing: have conversations about any illicit or ethically dubious topic under the sun, learn how to cook meth or create pipe bombs, or even use it to fuel a cybercriminal enterprise by way of phishing schemes.

When humans make decisions, such as picking what to eat from a menu, what jumper to buy at a store, or what political candidate to vote for, they may be more or less confident in their choice. When we are less confident, and thus experience greater uncertainty about our choice, our choices also tend to be less consistent, meaning we are more likely to change our mind before reaching a final decision.
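This link between confidence and consistency can be illustrated with a toy simulation (a simple softmax choice model of our own, not one from the study discussed below): as the subjective value gap between two options shrinks, repeated choices flip between them more often.

```python
import math
import random

def choose(value_a, value_b, noise=1.0, rng=random):
    # Softmax rule: the probability of picking A grows with the value gap.
    p_a = 1.0 / (1.0 + math.exp(-(value_a - value_b) / noise))
    return "A" if rng.random() < p_a else "B"

def consistency(value_gap, trials=2000, seed=0):
    # Fraction of repeated choices that agree with the majority choice.
    rng = random.Random(seed)
    picks = [choose(value_gap, 0.0, rng=rng) for _ in range(trials)]
    majority = max(picks.count("A"), picks.count("B"))
    return majority / trials

# A confident decision (large value gap) is highly consistent,
# while an uncertain one (tiny gap) hovers near chance.
print(consistency(3.0), consistency(0.1))
```

In this toy model, "confidence" is just the value gap divided by the noise level; the same qualitative pattern (low confidence, low consistency) is what the behavioral measurements described below rely on.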

While neuroscientists have been exploring the neural underpinnings of decision-making for decades, many questions remain unanswered. For instance, how neural network computations support decision-making under varying levels of certainty remains poorly understood.

Researchers at the National Institute of Mental Health in Bethesda, Maryland, recently carried out a study aimed at better understanding the neural network dynamics associated with decision confidence. Their paper, published in Nature Neuroscience, offers evidence that energy landscapes in the brain can predict the consistency of choices made by monkeys, which is in turn a sign of the animals’ confidence in their decisions.