A Massachusetts company says it could help stop shootings like the Tops massacre in Buffalo. Its surveillance product is increasingly popular — and, critics say, problematic.
DeepMind has released what it calls a “generalist” AI called Gato, which can play Atari games, accurately caption images, chat naturally with a human and stack coloured blocks with a robot arm, among 600 other tasks. But is Gato truly intelligent – having artificial general intelligence – or is it just an AI model with a few extra tricks up its sleeve?
## What is artificial general intelligence (AGI)?
Outside science fiction, AI is limited to niche tasks. It has seen plenty of success recently in solving a huge range of problems, from writing software to protein folding and even creating beer recipes, but individual AI models have limited, specific abilities. A model trained for one task is of little use for another.
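What makes Gato a "generalist" is that it serializes every modality (text, image patches, Atari button presses, robot joint torques) into one shared token stream, so a single model can be trained across all tasks at once. Below is a minimal sketch of that serialization idea; the class and function names are illustrative stand-ins, not DeepMind's code.

```python
# Hypothetical sketch: serializing different task modalities into one
# token stream, the core idea behind "generalist" models like Gato.
# All names here are illustrative, not DeepMind's API.
from dataclasses import dataclass

@dataclass
class Episode:
    task: str           # e.g. "atari_breakout", "image_caption"
    observations: list  # raw inputs: pixels, text, joint angles...
    actions: list       # raw outputs: button ids, words, torques...

def tokenize(value) -> list[int]:
    """Map any raw value into integer tokens from one shared vocabulary.
    (Real systems use learned image patches, subword text tokens, and
    discretized continuous values; this toy hash is a placeholder.)"""
    return [hash(str(value)) % 32_000]

def serialize(episode: Episode) -> list[int]:
    """Interleave observation and action tokens into a single sequence,
    so one autoregressive model can learn to predict the actions."""
    stream: list[int] = []
    for obs, act in zip(episode.observations, episode.actions):
        stream += tokenize(obs) + tokenize(act)
    return stream

# Episodes from very different tasks end up in the same token space:
atari = Episode("atari_breakout", ["frame_0", "frame_1"], [3, 1])
caption = Episode("image_caption", ["img_cat"], ["a cat on a mat"])
for ep in (atari, caption):
    print(ep.task, serialize(ep))
```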
This AI-powered prosthetic arm understands what you think. The muscle-controlled prosthetic limbs that amputees around the globe currently use come with various limitations and challenges: good-quality prosthetics are cumbersome, involve a complex setup, and require months of training to learn to use. A new technology proposed by a team of researchers at the University of Minnesota (UMN) could overcome all of these challenges.
It may sound like science fiction, but the researchers claim the new technology allows patients to control robotic body parts with their thoughts. Using artificial intelligence and machine learning, the UMN team has developed a portable neuroprosthetic hand equipped with a nerve implant connected to the peripheral nerve in the patient's arm.
Explaining the significance of their neuroprosthetic innovation, project collaborator and UMN neuroscientist Edward Keefer said, “We are well along the way toward allowing upper limb amputees at least, and other people in the future, to have totally natural and intuitive control of their prosthetic devices.”

## The neuroprosthetic hand is different from your regular prosthetic limbs
The prosthetic limbs currently on the market detect shoulder, chest, or muscle movement: their sensors pick up signals from specific regions of the body, so every time patients want to move the hand, they must deliberately activate those muscles. Adapting to such muscle-driven movement is not easy, and many of these devices are unsuitable for physically weak individuals.
Some more advanced muscle-sensing prosthetics involve complex wiring and other hardware that make them difficult to use, and amputees must go through extensive training to adjust to them, which often adds frustration and stress. Now imagine a device that works immediately, is less invasive, and requires no training, no muscle activation, and no complex setup.
The neuroprosthetic arm lets patients move the hand simply by thinking. It is an efficient, easy-to-use, and far more intuitive alternative to any commercial prosthetic system available.
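The article doesn't include the UMN team's code, but the decoding problem it describes, translating peripheral-nerve recordings into intended movements, can be sketched with a small classifier. Everything below (channel count, window size, gesture classes, feature choices) is an illustrative assumption, not the actual UMN design.

```python
# Illustrative sketch only: decoding intended hand movements from
# nerve recordings with a small classifier. Channel count, window
# size and gesture classes are assumptions, not the UMN design.
import numpy as np

CHANNELS, WINDOW = 16, 200            # assumed electrodes, samples per window
GESTURES = ["rest", "grip", "pinch", "point"]

def extract_features(nerve_window: np.ndarray) -> np.ndarray:
    """Simple per-channel features (mean absolute value and variance),
    common choices in the myoelectric/neural decoding literature."""
    mav = np.abs(nerve_window).mean(axis=1)
    var = nerve_window.var(axis=1)
    return np.concatenate([mav, var])

rng = np.random.default_rng(0)
# Synthetic data standing in for recorded nerve signals:
X = np.stack([extract_features(rng.normal(size=(CHANNELS, WINDOW)))
              for _ in range(400)])
y = rng.integers(0, len(GESTURES), size=400)

# One-layer softmax classifier trained by gradient descent.
W = np.zeros((X.shape[1], len(GESTURES)))
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0        # gradient of cross-entropy loss
    W -= 0.01 * X.T @ p / len(y)

window = rng.normal(size=(CHANNELS, WINDOW))  # a new "recording"
print("decoded intent:", GESTURES[int(np.argmax(extract_features(window) @ W))])
```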
The long-term goal for Elon Musk and brain-computer interfaces (“brain chips”) is to merge our minds with artificial intelligence, while the short-term goal for Neuralink and other companies is to help people with medical issues.
This short documentary video looks at connecting our brains to computers and explains how it works. It also covers the downsides of connecting our brains, the technology being used today, and Elon Musk’s thoughts.
Other topics in the video include:
• How we could one day download and upload our memories and dreams so they are never forgotten.
• The different types of brain-to-computer implants and devices.
• Updates (demonstrations and testing on pigs, and soon humans) and a summary of where the technology stands today.
• A tutorial on how it works.
• The other sci-fi-like things that brains connected to computers could let us humans do.
Researchers at the Stevens Institute of Technology used a customized BlueROV2 robot to explore a busy harbor at the U.S. Merchant Marine Academy in New York. | Source: Stevens Institute of Technology.
Underwater environments are particularly challenging for autonomous robots: things are constantly moving and changing, and a robot must figure out where it is without relying on GPS, since satellite signals do not penetrate water.
Researchers at the Stevens Institute of Technology have created a robot that can successfully navigate a crowded marina underwater. The robot simultaneously maps its environment, tracks its own location within that map, and plans a safe route through the clutter, all in real time.
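Building a map and localizing within it at the same time is what roboticists call simultaneous localization and mapping (SLAM). Below is a toy illustration of that interleaved map-and-pose update; the grid size, motion model, and sonar returns are all invented for illustration and are not the Stevens team's system.

```python
# Toy illustration of the SLAM idea: update a map and a pose estimate
# from the same sensor data. The grid, motion and sonar values are all
# invented; real marine SLAM uses far richer models.
import numpy as np

grid = np.zeros((20, 20))            # log-odds occupancy map
pose = np.array([10.0, 10.0, 0.0])   # x, y, heading estimate

def move(pose, forward, turn):
    """Dead-reckoned motion update (prediction step)."""
    x, y, th = pose
    th += turn
    return np.array([x + forward * np.cos(th),
                     y + forward * np.sin(th), th])

def integrate_range(grid, pose, bearing, dist):
    """Mark the cell hit by a sonar return as more likely occupied."""
    x, y, th = pose
    cx = int(round(x + dist * np.cos(th + bearing)))
    cy = int(round(y + dist * np.sin(th + bearing)))
    if 0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]:
        grid[cx, cy] += 0.9          # log-odds increment for a hit

# One step of the interleaved loop: predict the pose, then update the map.
pose = move(pose, forward=1.0, turn=0.1)
for bearing, dist in [(-0.3, 4.0), (0.0, 5.0), (0.3, 4.5)]:
    integrate_range(grid, pose, bearing, dist)
print("pose:", pose.round(2), "| occupied cells:", int((grid > 0).sum()))
```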
Researchers at the Department of Energy’s Oak Ridge National Laboratory are teaching microscopes to drive discoveries with an intuitive algorithm developed at the lab’s Center for Nanophase Materials Sciences (CNMS). The algorithm could guide breakthroughs in new materials for energy technologies, sensing and computing.
“There are so many potential materials, some of which we cannot study at all with conventional tools, that need more efficient and systematic approaches to design and synthesize,” said Maxim Ziatdinov of ORNL’s Computational Sciences and Engineering Division and the CNMS. “We can use smart automation to access unexplored materials as well as create a shareable, reproducible path to discoveries that have not previously been possible.”
The approach, published in Nature Machine Intelligence, combines physics and machine learning to automate microscopy experiments designed to study materials’ functional properties at the nanoscale.
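At its core, such an automated experiment repeatedly fits a model to the measurements taken so far and then chooses the next measurement location where the model is most uncertain. The sketch below shows a generic active-learning loop in that spirit; the Gaussian-process surrogate and the stand-in "measurement" function are common illustrative choices, not necessarily ORNL's exact method.

```python
# Generic active-learning loop in the spirit of automated microscopy:
# a surrogate model picks the most informative next measurement point.
# The measure() function and acquisition rule are illustrative
# assumptions, not the ORNL implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def measure(x: float) -> float:
    """Stand-in for a real microscope measurement at scan position x."""
    return np.sin(3 * x) + 0.1 * rng.normal()

candidates = np.linspace(0, 2, 200).reshape(-1, 1)  # possible scan positions
X = np.array([[0.0]])                               # seed measurement
y = np.array([measure(0.0)])

gp = GaussianProcessRegressor()
for step in range(10):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(std)]   # measure where the model knows least
    X = np.vstack([X, [nxt]])
    y = np.append(y, measure(float(nxt[0])))
    print(f"step {step}: measured at x = {nxt[0]:.3f}")
```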