Apple is focused on developing its own AI products to catch up to its competitors.

The influence of language on human thinking could be stronger than previously assumed. This is the result of a new study by Professor Friedemann Pulvermüller and his team from the Brain Language Laboratory at Freie Universität Berlin. In this study, the researchers examined the modeling of human concept formation and the impact of language mechanisms on the emergence of concepts. The results were recently published in the journal Progress in Neurobiology under the title “Neurobiological Mechanisms for Language, Symbols, and Concepts: Clues from Brain-Constrained Deep Neural Networks” (accessible online at https://www.sciencedirect.com/science/article/pii/S0301008223001120?via%3Dihub).
Children can learn one or more languages with little effort. However, the cognitive activity involved should not be underestimated. Not only do language learners have to learn how to pronounce words, but they must also learn how to link word forms with content – with concepts such as “coffee,” “drinking,” or “beauty.” But what are the actual mechanisms at work in the network of billions of nerve cells within our brains? And might the learning of some concepts strictly require the presence of language?
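The study itself used brain-constrained deep neural networks, which are not reproduced here. Purely as an illustration of the kind of associative mechanism at stake, the following minimal Python sketch shows Hebbian learning linking a word-form pattern to a concept pattern; all names, sizes, and parameters are invented for this example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: sparse binary activity patterns for a word form
# (e.g., the sound "coffee") and a sensory concept (e.g., its taste and smell).
n_units = 50
word_form = (rng.random(n_units) < 0.2).astype(float)  # auditory pattern
concept = (rng.random(n_units) < 0.2).astype(float)    # perceptual pattern

# Weights linking the two populations, strengthened by Hebbian co-activation:
# "cells that fire together wire together."
W = np.zeros((n_units, n_units))
learning_rate = 0.1
for _ in range(20):  # repeated paired exposures to word and percept
    W += learning_rate * np.outer(concept, word_form)

# After learning, presenting the word form alone reactivates the concept.
recalled = W @ word_form
print("overlap with stored concept:", np.dot(recalled > 0.5, concept))
```

In this toy version, the word form acts as a cue that partially reconstructs the associated concept pattern, a loose analogue of how word learning could bind linguistic forms to perceptual content.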
Modern computing models, such as those behind complex, powerful AI applications, push traditional digital computing architectures to their limits. New types of computing architecture, which emulate the working principles of biological neural networks, hold the promise of faster, more energy-efficient data processing.
A team of researchers has now developed a so-called event-based architecture that uses photonic processors to transport and process data by means of light. As in the brain, this enables the continuous adaptation of the connections within the neural network. These changeable connections are the basis for learning processes.
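The photonic implementation itself is described in the paper; as a rough software illustration of the event-based principle, where connections are only updated when a spike-like event arrives rather than on a fixed global clock, here is a minimal sketch with an invented, crudely STDP-like plasticity rule (all names and parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy network: weights change only when an event (spike) occurs.
n_in, n_out = 8, 4
weights = rng.random((n_out, n_in)) * 0.5
potentials = np.zeros(n_out)
threshold = 1.0

def process_event(in_neuron, t, last_out_spike, lr=0.05, tau=10.0):
    """Handle one incoming spike: integrate, fire, and adapt weights locally."""
    potentials[:] += weights[:, in_neuron]   # integrate the incoming event
    fired = potentials >= threshold
    potentials[fired] = 0.0                  # reset neurons that fired
    # Local plasticity, invented for this sketch: strengthen synapses onto
    # neurons that fired now; weaken those that fired only recently.
    recent = (t - last_out_spike) < tau
    weights[fired, in_neuron] += lr
    weights[~fired & recent, in_neuron] -= lr * 0.5
    last_out_spike[fired] = t

last_out_spike = np.full(n_out, -np.inf)
for t, in_neuron in enumerate(rng.integers(0, n_in, size=100)):
    process_event(int(in_neuron), float(t), last_out_spike)

print("adapted weights:\n", weights.round(2))
```

The point of the event-based design is visible even in this toy: computation and weight updates happen sparsely and locally, which is what makes hardware realizations potentially fast and energy-efficient.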
For the purposes of the study, a team working at Collaborative Research Center 1459 (Intelligent Matter) – headed by physicists Prof. Wolfram Pernice and Prof. Martin Salinga and computer scientist Prof. Benjamin Risse, all from the University of Münster – joined forces with researchers from the Universities of Exeter and Oxford in the UK. The study has been published in the journal Science Advances.
Currently, the most common and accurate methods for diagnosing type 2 diabetes involve blood work. A new study, however, suggests that type 2 diabetes can be diagnosed from the sound of a person’s voice.
Researchers from Klick Applied Science have developed a tool they say can diagnose type 2 diabetes with up to 0.89 accuracy in women and 0.86 in men.
To achieve this, the researchers used an ensemble model that also factored in the women’s body mass index (BMI) and the men’s age and BMI.
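Klick’s exact features and model are not published here; as a rough sketch of how acoustic features can be combined with age and BMI in an ensemble classifier, the following scikit-learn example uses synthetic data and hypothetical feature names, not the study’s pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in data: two voice features plus age and BMI.
# Feature choices are illustrative only.
n = 500
X = np.column_stack([
    rng.normal(180, 30, n),   # mean pitch (Hz)   -- hypothetical
    rng.normal(1.0, 0.3, n),  # jitter (%)        -- hypothetical
    rng.normal(55, 12, n),    # age (years)
    rng.normal(27, 5, n),     # BMI (kg/m^2)
])
y = (0.05 * X[:, 3] + 0.02 * X[:, 2] + rng.normal(0, 1, n) > 2.5).astype(int)

# A simple soft-voting ensemble over two base learners.
model = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression())),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",
)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```

Fitting separate models for women and men, as the reported per-sex accuracies imply, would simply mean running a pipeline like this on each subgroup.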
Nvidia has announced what the company called its “largest-ever platform expansion for Edge AI and Robotics,” and rightfully so. In typical Nvidia fashion, the company is bringing together years of work on several different software platforms to meet the needs of a very specific application: robotics. Nvidia has been developing solutions for robotics, or what should more appropriately be called autonomous machines, for more than a decade. In conjunction with the company’s investment in artificial intelligence (AI) and autonomous vehicles, Nvidia was one of the first tech companies to see the potential in future robotics platforms leveraging AI and technology developed for other segments.
The announcement includes adapting generative AI (GenAI) models to the Jetson Orin platform, the availability of Metropolis APIs and microservices for vision applications, the release of the latest Isaac ROS (Robot Operating System) framework and Isaac AMR (autonomous mobile robot) platform, and the release of the JetPack 6 software development kit (SDK). According to Nvidia, these releases deliver more value than all of its other releases over the past 10 years combined.
One of the most significant enhancements is support for GenAI. As we at Tirias Research have indicated in previous articles, GenAI will be transformative not just for people, but also for machines. Nvidia is working to enable zero-shot learning, which allows devices to predict results for classes of data they were never explicitly trained on, rather than requiring training on specific samples. This will allow for shorter training cycles and more flexible interaction with, and use of, the resulting models. Nvidia is also predicting a new innovation cycle for autonomous machines as much of the learning shifts from text-based solutions to video or multimodal (text, audio, and video) training. Nvidia also included its new transformer-based PeopleNet model to increase the accuracy of people identification. Most important, however, is the capability of the Orin platform to execute large language models (LLMs).
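Nvidia’s own toolchain is not shown here; to illustrate what zero-shot inference looks like in practice, the following minimal example uses the Hugging Face transformers library, which can run on Jetson-class hardware. The model choice, input text, and labels are arbitrary examples:

```python
# Minimal zero-shot classification sketch. The model scores candidate labels
# it was never explicitly trained on, which is the core of the zero-shot idea.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "The robot arm failed to grasp the bottle and knocked it over.",
    candidate_labels=["grasping error", "navigation error", "sensor fault"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```

Because the labels are supplied at inference time rather than baked in during training, a robot could be repurposed for new classification tasks without a retraining cycle, which is the flexibility the announcement emphasizes.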