
Nvidia teams with Hexagon to transform industrial digital twins


Hexagon, a Nordic company focused on digital reality, is collaborating with Nvidia to enable industrial digital twins to capture data in real time.

Digital twins are digital replicas of real-world designs such as factories or buildings. The idea of the collaboration is to unite reality capture, manufacturing twins, AI, simulation and visualization to deliver real-time comparisons between the digital twins and their real-world counterparts.
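As a rough, hypothetical illustration of what a real-time comparison between a live asset and its digital twin might look like, here is a minimal Python sketch; the class and field names are invented for this example and are not Hexagon or Nvidia APIs.

```python
# Minimal sketch of comparing live sensor readings against a digital twin's
# expected values. All names (SensorReading, DigitalTwin) are hypothetical
# illustrations, not Hexagon or Nvidia APIs.
from dataclasses import dataclass


@dataclass
class SensorReading:
    asset_id: str
    temperature_c: float
    vibration_mm_s: float


class DigitalTwin:
    """Holds the modeled (expected) state of each physical asset."""

    def __init__(self, expected: dict):
        self.expected = expected

    def deviation(self, reading: SensorReading) -> dict:
        ref = self.expected[reading.asset_id]
        return {
            "temperature_c": reading.temperature_c - ref.temperature_c,
            "vibration_mm_s": reading.vibration_mm_s - ref.vibration_mm_s,
        }


twin = DigitalTwin({"press_01": SensorReading("press_01", 65.0, 2.0)})
live = SensorReading("press_01", 71.5, 2.4)

for metric, delta in twin.deviation(live).items():
    if abs(delta) > 5.0:  # naive threshold for flagging drift from the model
        print(f"ALERT {live.asset_id}: {metric} off nominal by {delta:+.1f}")
```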

AI unlikely to gain human-like cognition, unless connected to real world through robots, says study

Connecting artificial intelligence systems to the real world through robots and designing them using principles from evolution is the most likely way AI will gain human-like cognition, according to research from the University of Sheffield.

In a paper published in Science Robotics, Professor Tony Prescott and Dr. Stuart Wilson from the University’s Department of Computer Science, say that AI systems are unlikely to resemble real brain processing no matter how large their neural networks or the datasets used to train them might become, if they remain disembodied.

Current AI systems, such as ChatGPT, use large neural networks to solve difficult problems, such as generating intelligible written text. These networks process data in a way that is inspired by the human brain and learn from their mistakes in order to improve and become more accurate.
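As a toy picture of what "learning from mistakes" means, the sketch below fits a single weight by nudging it against its prediction error on every example. It is purely didactic and bears no resemblance in scale to the networks behind systems like ChatGPT.

```python
# Toy illustration of "learning from mistakes": a single-neuron model nudges
# its weight in the direction that reduces the error on each example.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x

w = 0.0          # weight to be learned
lr = 0.05        # learning rate

for epoch in range(100):
    for x, y in examples:
        prediction = w * x
        error = prediction - y          # the "mistake"
        w -= lr * error * x             # gradient step on the squared error

print(f"learned weight = {w:.3f}")      # converges toward 2.0
```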

A new AI-based approach for controlling autonomous robots

In the film “Top Gun: Maverick,” Maverick, played by Tom Cruise, is charged with training young pilots to complete a seemingly impossible mission—to fly their jets deep into a rocky canyon, staying so low to the ground they cannot be detected by radar, then rapidly climb out of the canyon at an extreme angle, avoiding the rock walls. Spoiler alert: With Maverick’s help, these human pilots accomplish their mission.

A machine, on the other hand, would struggle to complete the same pulse-pounding task. To an autonomous aircraft, for instance, the most straightforward path toward the target conflicts with what the machine needs to do to avoid colliding with the canyon walls or being detected by radar. Many existing AI methods aren’t able to overcome this conflict, known as the stabilize-avoid problem, and would be unable to reach their goal safely.

MIT researchers have developed a new technique that can solve complex stabilize-avoid problems better than other methods. Their machine-learning approach matches or exceeds the safety of existing methods while providing a tenfold increase in stability, meaning the agent reaches and remains stable within its goal region.
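The following is a toy rendering of the stabilize-avoid problem structure rather than the MIT team's actual method: an agent is rewarded for staying near a goal state, while any entry into an unsafe region ends the episode. All dynamics, thresholds and the simple controller are invented for illustration.

```python
# Toy stabilize-avoid setup on a 1-D line: reach and stay near goal x = 0
# while never entering the "avoid" region x < -1. This is a schematic
# illustration of the problem structure, not the MIT researchers' algorithm.
import random

GOAL, AVOID_BOUNDARY = 0.0, -1.0

def step(x: float, action: float):
    """Apply an action, return (next_state, reward, violated_constraint)."""
    nx = x + action + random.gauss(0, 0.05)   # noisy dynamics
    violated = nx < AVOID_BOUNDARY            # entered the unsafe set
    reward = -abs(nx - GOAL)                  # stabilize: stay near the goal
    return nx, reward, violated

def controller(x: float) -> float:
    """Naive proportional controller standing in for a learned policy."""
    return max(-0.3, min(0.3, -0.5 * x))

x, total = 0.8, 0.0
for t in range(50):
    x, r, bad = step(x, controller(x))
    total += r
    if bad:
        print(f"t={t}: entered avoid region, episode failed")
        break
else:
    print(f"stayed safe for 50 steps, return {total:.2f}, final x={x:.2f}")
```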

Mind-control robots a reality

The technology was recently demonstrated by the Australian Army, where soldiers operated a Ghost Robotics quadruped robot using the brain-machine interface. (Photo: Australian Army.)


New technology is making mind reading possible with positive implications for the fields of healthcare, aerospace and advanced manufacturing.

Researchers from the University of Technology Sydney (UTS) have developed biosensor technology that will allow you to operate devices, such as robots and machines, solely through thought-control.

The advanced brain-computer interface was developed by Distinguished Professor Chin-Teng Lin and Professor Francesca Iacopi, from the UTS Faculty of Engineering and IT, in collaboration with the Australian Army and Defence Innovation Hub.
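As a schematic of the general idea behind thought control, the sketch below maps a crude feature of a simulated sensor window onto a robot command. The feature, thresholds and command names are made up and do not represent the UTS biosensor pipeline.

```python
# Schematic of how a brain-computer interface might map biosensor signals to
# robot commands: extract a simple feature from a short signal window and
# match it to a command. All values here are invented for illustration.
import math


def band_power(samples: list) -> float:
    """Crude signal-energy feature over one window of sensor samples."""
    return sum(s * s for s in samples) / len(samples)


COMMANDS = [          # (minimum feature value, command sent to the robot)
    (4.0, "move_forward"),
    (2.0, "turn_left"),
    (0.0, "hold_position"),
]


def decode(window: list) -> str:
    p = band_power(window)
    for threshold, command in COMMANDS:
        if p >= threshold:
            return command
    return "hold_position"


# Simulated 1-second window of sensor readings
window = [2.5 * math.sin(0.3 * i) for i in range(256)]
print(decode(window))   # "turn_left" for this synthetic signal
```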

Future of religion? Hundreds attend Church service conducted by ChatGPT

Artificial intelligence is breaking new barriers by the day. Hundreds of worshippers at St. Paul’s Church in Fürth, Bavaria, Germany recently attended a service conducted entirely by AI, with the sermon presented by the chatbot ChatGPT.

An hour before the service even began, people formed a long queue outside the 19th-century, neo-Gothic building to witness what the AI could deliver.

The chatbot’s sermon focused mainly on themes of leaving one’s past behind, living in the present, not being afraid of death and maintaining faith in Jesus Christ. Notably, four different AI avatars took turns delivering the sermon and leading the service.

Shining a light on neuromorphic computing

AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system—both hardware and software combined—has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.

Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.
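The in-memory computing idea can be sketched with an idealized crossbar model: device conductances store a matrix, applied voltages encode the input vector, and the currents summed on each column give the matrix-vector product. The numbers below are arbitrary and the model ignores real device nonidealities.

```python
# Sketch of the in-memory computing idea behind memristor crossbars: the
# conductance of each device stores a matrix weight, and Ohm's and
# Kirchhoff's laws perform the matrix-vector multiply "in place".
# Idealized model; real optical/electronic memristors are far less tidy.

# Conductances programmed into a 3x3 crossbar = stored matrix G
G = [
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.1],
    [0.0, 0.3, 0.7],
]

# Input voltages applied to the rows = input vector v
v = [1.0, 0.5, 0.25]

# Output currents on the columns: I_j = sum_i v_i * G[i][j]
currents = [sum(v[i] * G[i][j] for i in range(len(v))) for j in range(len(G[0]))]
print(currents)   # the analog result of the matrix-vector product
```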

A new review article published in Nature Photonics, titled “Integrated Optical Memristors,” sheds light on the evolution of this technology—and the work that still needs to be done for it to reach its full potential.

Ben Goertzel — 2021 Reflection and Update on SNET, Ecosystem and Path to AGI

Dr. Ben Goertzel shares his thoughts on where we are at the end of 2021, beginning of 2022 — how progress toward AGI looks in retrospect, and looking into the future — updates on the ecosystem…

And the importance of the SingularityNET Community 🥰

SingularityNET is a decentralized marketplace for artificial intelligence. We aim to create the world’s global brain with a full-stack AI solution powered by a decentralized protocol.

We gathered the leading minds in machine learning and blockchain to democratize access to AI technology. Now anyone can take advantage of a global network of AI algorithms, services, and agents.

Website: https://singularitynet.io.
Forum: https://community.singularitynet.io.
Telegram: https://t.me/singularitynet.
Twitter: https://twitter.com/singularity_net.
Facebook: https://facebook.com/singularitynet.io.
Instagram: https://instagram.com/singularitynet.io.
Github: https://github.com/singnet.
Linkedin: https://www.linkedin.com/company/singularitynet

The case for why our Universe may be a giant neural network

For example, scientists have recently emphasized that the physical organization of the Universe mirrors the structure of a brain. Theoretical physicist Sabine Hossenfelder — renowned for her skepticism — wrote a bold article for Time Magazine in August of 2022 titled “Maybe the Universe Thinks. Hear Me Out,” which describes the similarities. Like our nervous system, the Universe has a highly interconnected, hierarchical organization. The estimated 200 billion detectable galaxies aren’t distributed randomly, but lumped together by gravity into clusters that form even larger clusters, which are connected to one another by “galactic filaments,” or long thin threads of galaxies. When one zooms out to envision the cosmos as a whole, the “cosmic web” formed by these clusters and filaments looks strikingly similar to the “connectome,” a term that refers to the complete wiring diagram of the brain, which is formed by neurons and their synaptic connections. Neurons in the brain also form clusters, which are grouped into larger clusters, and are connected by filaments called axons, which transmit electrical signals across the cognitive system.

Hossenfelder explains that this resemblance between the cosmic web and the connectome is not superficial, citing a rigorous study by a physicist and a neuroscientist that analyzed the features common to both, and based on the shared mathematical properties, concluded that the two structures are “remarkably similar.” Due to these uncanny similarities, Hossenfelder speculates as to whether the Universe itself could be thinking.
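To give a concrete sense of what comparing "shared mathematical properties" can mean, the toy sketch below computes one such property, the average clustering coefficient, for two tiny made-up graphs. It is only a schematic of the kind of network analysis the cited study performed on real cosmic-web and connectome data.

```python
# Toy comparison of two networks by a shared statistical property (average
# clustering coefficient). The graphs are tiny invented examples, not real
# cosmic-web or connectome data.

def clustering(adj: dict) -> float:
    """Average local clustering coefficient of an undirected graph."""
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # count links among this node's neighbours (each pair once)
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

cosmic_web = {            # galaxy clusters joined by filaments (made up)
    "A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B"}, "D": {"B"},
}
connectome = {            # neuron clusters joined by axon bundles (made up)
    "n1": {"n2", "n3"}, "n2": {"n1", "n3", "n4"}, "n3": {"n1", "n2"}, "n4": {"n2"},
}

print(clustering(cosmic_web), clustering(connectome))   # similar by construction
```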