Or so goes the theory. Most work on CIM chips running AI algorithms has focused solely on chip design, showcasing capabilities with simulations of the chip rather than by running tasks on full-fledged hardware. The chips also struggle to adjust to multiple different AI tasks, such as image recognition and voice perception, limiting their integration into smartphones or other everyday devices.

This month, a study in Nature upgraded CIM from the ground up. Rather than focusing solely on the chip’s design, the international team—led by neuromorphic hardware experts Dr. H.S. Philip Wong at Stanford and Dr. Gert Cauwenberghs at UC San Diego—optimized the entire setup, from technology to architecture to algorithms that calibrate the hardware.

The resulting NeuRRAM chip is a powerful neuromorphic computing behemoth with 48 parallel cores and 3 million memory cells. Extremely versatile, the chip tackled multiple standard AI tasks, such as reading handwritten numbers, identifying cars and other objects in images, and decoding voice recordings, with over 84 percent accuracy.
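
To make the compute-in-memory idea concrete, here is a minimal sketch, assuming a toy resistive crossbar rather than the actual NeuRRAM design: weights are stored as cell conductances, inputs are applied as row voltages, and the matrix-vector product emerges as the currents summed along each column. All sizes and values below are illustrative.

```python
import numpy as np

# Toy compute-in-memory crossbar (illustrative only, not the NeuRRAM chip):
# weights live in the memory cells as conductances, inputs arrive as voltages.
rng = np.random.default_rng(0)

n_inputs, n_outputs = 8, 4
conductance = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_outputs))  # cell conductances (siemens)
input_voltage = rng.uniform(0.0, 0.2, size=n_inputs)               # row voltages (volts)

# Ohm's law plus Kirchhoff's current law: each column current is the dot product
# of the input voltages with that column's conductances, i.e. an analog matrix-vector multiply
# performed where the data is stored, with no weight movement.
column_current = input_voltage @ conductance
print(column_current)  # one accumulated current per output column
```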

Understanding how the brain organizes and accesses spatial information — where we are, what’s around the corner, how to get there — remains an exquisite challenge. The process involves recalling an entire network of memories and stored spatial data from tens of billions of neurons, each connected to thousands of others. Neuroscientists have identified key elements such as grid cells, neurons that map locations. But going deeper will prove tricky: It’s not as though researchers can remove and study slices of human gray matter to watch how location-based memories of images, sounds and smells flow through and connect to each other.

Artificial intelligence offers another way in. For years, neuroscientists have harnessed many types of neural networks — the engines that power most deep learning applications — to model the firing of neurons in the brain. In recent work, researchers have shown that the hippocampus, a structure of the brain critical to memory, is basically a special kind of neural net, known as a transformer, in disguise. Their new model tracks spatial information in a way that parallels the inner workings of the brain. They’ve seen remarkable success.

“The fact that we know these models of the brain are equivalent to the transformer means that our models perform much better and are easier to train,” said James Whittington, a cognitive neuroscientist who splits his time between Stanford University and the lab of Tim Behrens at the University of Oxford.
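
For readers unfamiliar with the architecture in question, the sketch below shows the core self-attention step of a generic transformer in NumPy. It is only an illustration of the mechanism, not the hippocampal model from the study, and every shape and value is made up.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
seq_len, d_model = 5, 16                  # 5 stored items, 16-dimensional embeddings (illustrative)
x = rng.normal(size=(seq_len, d_model))

# Learned projections (random here) map each item to queries, keys, and values.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# Each item retrieves a weighted mixture of all stored items based on query-key similarity,
# which is the "content-addressable memory" flavor that invites the hippocampus comparison.
attn = softmax(Q @ K.T / np.sqrt(d_model))
output = attn @ V
print(output.shape)  # (5, 16)
```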

In this age of innovation and technology, humanoid robots working closely with actual humans are used for research and space exploration, personal assistance and caregiving, education and entertainment, search and rescue, manufacturing and maintenance, public relations, and healthcare.

This is not a dream or the distant future but current reality!

In this video, we are going to look at the most advanced humanoid robots that are changing the future with the help of artificial intelligence.

Bionic technology is removing physical barriers faced by disabled people while raising profound questions of what it is to be human. From DIY prosthetics realised through 3D printing technology to customised AI-driven limbs, science is at the forefront of many life-enhancing innovations.

Researchers have created a way for artificial neuronal networks to communicate with biological neuronal networks. The new system converts artificial electrical spiking signals into a visual pattern that is then used to entrain the real neurons via optogenetic stimulation of the network. This advance will be important for future neuroprosthetic devices that replace damaged neurons with artificial neuronal circuitry.

A prosthesis is an artificial device that replaces an injured or missing part of the body. You can easily imagine a stereotypical pirate with a wooden leg or Luke Skywalker’s famous robotic hand. Less dramatically, think of old-school prosthetics like glasses and contact lenses that replace the natural lenses in our eyes. Now try to imagine a prosthesis that replaces part of a damaged brain. What could artificial brain matter be like? How would it even work?

Creating neuroprosthetic technology is the goal of an international team led by the Ikerbasque researcher Paolo Bonifazi from Biocruces Health Research Institute (Bilbao, Spain), and Timothée Levi from the Institute of Industrial Science, The University of Tokyo, and the IMS lab, University of Bordeaux. Although several types of artificial neurons have been developed, none have been truly practical for neuroprostheses. One of the biggest problems is that neurons in the brain communicate very precisely, but the electrical output from a typical electrical neural network is unable to target specific neurons. To overcome this problem, the team converted the electrical signals to light. As Levi explains, “advances in optogenetic technology allowed us to precisely target neurons in a very small area of our biological neuronal network.”
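
The team's actual pipeline is not reproduced here, but the basic idea, turning spikes from an artificial spiking network into a spatial light pattern that an optogenetic setup could project onto targeted neurons, can be sketched roughly as follows. The one-spike-per-pixel mapping and all parameters are hypothetical.

```python
import numpy as np

# Hypothetical sketch: convert one frame of spikes from an artificial spiking network
# into a 2D light pattern, one projector spot per targeted biological neuron.
# This is not the authors' code; the mapping and numbers are made up for illustration.
n_artificial = 16                       # artificial neurons in the spiking network
grid_shape = (4, 4)                     # projector grid: one spot per targeted neuron
spikes = np.random.default_rng(2).random(n_artificial) < 0.3  # which units spiked this frame

# Lit pixels would drive optogenetic stimulation of the corresponding biological neurons,
# so precise spatial targeting replaces untargeted electrical stimulation.
light_frame = spikes.astype(float).reshape(grid_shape)
print(light_frame)
```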

The most powerful exascale supercomputer is scheduled for release in 2021 and will deliver a total of 64 exaflops, more than six times as much as the Leonardo supercomputer, which is also set to launch this year.
This is accomplished with the help of a new processor technology from Tachyum called “Prodigy,” described as the first Universal Processor.

This new processor is set to enable general artificial intelligence at the speed of the human brain in real time. It is claimed to be many times faster than the fastest Intel Xeon, Nvidia graphics card, or Apple silicon. The new supercomputer will enable simulations of the brain, medicine, and more that were previously thought impossible.

Quantum computing looks like a world of imagination where we’ll be processing data beyond anything possible today. Many companies are working to build a powerful quantum computer that can tackle problems classical machines cannot. But what IBM has done is really something exceptional: they have developed quantum computers that could change history.
In a classical computer, data is stored and processed in bits, each represented by either a zero or a one. In a quantum computer, qubits can be not only in a zero or a one state but in a superposition of both simultaneously: the more qubits, the more computing power, and the more possibilities. IBM’s quantum computing journey began in 2016 with a 5-qubit quantum computer on the cloud called the Quantum Experience. Since then, the company has released a succession of chips with increasing numbers of qubits, all named after birds and each with its own set of technological challenges, leading up to the Eagle chip.
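
As a rough illustration of why more qubits mean more possibilities: an n-qubit state is described by 2^n complex amplitudes. The toy NumPy sketch below puts a qubit into an equal superposition with a Hadamard gate and shows the state vector doubling with each added qubit; it is a textbook example, not IBM’s hardware or software.

```python
import numpy as np

# A classical bit is 0 or 1; a qubit's state is a 2-component complex vector.
zero = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

plus = H @ zero          # equal superposition of |0> and |1>
print(plus)              # amplitudes ~0.707 for both basis states

# n qubits -> 2**n amplitudes: the description doubles with each added qubit.
for n in range(1, 6):
    state = plus
    for _ in range(n - 1):
        state = np.kron(state, plus)   # combine independent qubits into one state vector
    print(n, "qubit(s) ->", state.size, "amplitudes")
```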

Recent technological advancements have paved the way for increasingly sophisticated robotic systems designed to autonomously complete missions in both familiar and unfamiliar environments. Robots meant to operate in uncertain or remote environments could greatly benefit from the ability to actively acquire electrical power from their surroundings.

Researchers at Worcester Polytechnic Institute, Imperial College London, and the University of Illinois Urbana-Champaign have recently developed a new robotic system that can visually rearrange its surroundings to receive the maximum amount of energy from a given power source. This robot, presented in a paper pre-published on arXiv and set to be presented at the IEEE International Conference on Robotics and Biomimetics, works by drawing with conductive ink.

“Our PLOS ONE work started off as a quite philosophical thought experiment,” Andre Rosendo, the professor who carried out the study, told TechXplore. “Nietzsche claims that humans’ primal instinct is power, and survival is just a condition sine qua non, without which we couldn’t reach that final goal. Based on this idea, we started to devise experimental settings where our robot could not only act to survive, but to thrive.”

Elon Musk tweeted a fascinating — and frankly unsettling — theory last night about how a brain parasite might be forcing all humans to create advanced AI.

The Tesla CEO was responding to a story from National Geographic about how toxoplasmosis, a common parasite often found in cats, seems to be causing hyenas to be reckless around predators such as lions. In a staggering and perhaps facetious leap of logic, Musk suggested that the parasite is actually what’s causing humans to create advanced artificial intelligence.

“Toxoplasmosis infects rats, then cats, then humans who make cat videos,” Musk tweeted on Friday. “AI trained on Internet cat videos achieves superhuman intelligence, thus making toxoplasmosis the true arbiter of our destiny.”