
Every dad should do this. 😃
French dad and robotics engineer Jean-Louis Constanza has built a robotic suit that allows his 16-year-old son, Oscar, to walk.
Oscar, a wheelchair user, activates the suit by saying “Robot, stand up”, and it then walks for him.
Jean-Louis co-founded the company that builds the suit, which allows users to stay upright and move for a few hours a day.
It is used in several hospitals but isn’t yet available for everyday use by individuals, and it carries a price tag of around €150,000 (about £127,700).
Virtual game worlds provide a non-stop stream of open-ended challenges that nudge AI towards general intelligence.
Cassie, a bipedal robot developed at Oregon State University, recently completed a 5K run. Granted, running is a little different for a robot, which has no lungs or heart. But it does have a “brain” (software), “muscles” (hardware), and “fuel” (a battery), and these all had to work together for Cassie to be able to run.
The brunt of the work fell to the brain—in this case, a machine learning algorithm developed by students at Oregon State University’s Dynamic Robotics Laboratory. Specifically, they used deep reinforcement learning, a method that mimics the way humans learn from experience: a trial-and-error process guided by feedback and rewards. Over many repetitions, the algorithm uses this process to learn how to accomplish a set task. In this case, since it was trying to learn to run, it may have tried moving the robot’s legs varying distances or at different angles while keeping it upright.
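To make the trial-and-error idea concrete, here is a minimal Python sketch. It is not the OSU team’s code, and it uses simple random search rather than a neural-network policy, but the loop is the same in spirit: propose a small variation of the current “gait”, score it with a reward that favors speed while staying upright, and keep the change only if the reward improves. The reward function and the two gait parameters (stride and lean) are invented for illustration.

```python
import random

# Toy stand-in for the physics: reward forward speed, penalize tripping
# and falling. The reward shape and all constants are invented for
# illustration; the real system learns inside a physics simulator.
def reward(stride, lean):
    speed = stride * (1.0 - abs(lean - 0.2))   # fastest near a slight forward lean
    trip = max(0.0, stride - 0.8) * 10.0       # overlong strides trip the robot
    fall = max(0.0, abs(lean) - 0.5) * 10.0    # leaning too far means falling over
    return speed - trip - fall

stride, lean = 0.1, 0.0                        # start with a timid gait
best = reward(stride, lean)

for _ in range(10_000):                        # many repetitions of trial and error
    # Try a small random variation of the current gait.
    new_stride = stride + random.gauss(0, 0.01)
    new_lean = lean + random.gauss(0, 0.01)
    r = reward(new_stride, new_lean)
    if r > best:                               # keep changes the reward signal favors
        stride, lean, best = new_stride, new_lean, r

print(f"learned gait: stride={stride:.2f}, lean={lean:.2f}, reward={best:.2f}")
```

Deep reinforcement learning replaces this random search with gradient-based updates to a neural network, which scales the same learn-from-reward loop to the thousands of sensor readings and motor commands a real biped involves.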
Once Cassie got a good gait down, completing the 5K was as much a matter of battery life as running prowess. The robot covered the whole distance (a course circling around the university campus) on a single battery charge in just over 53 minutes, but that did include six and a half minutes of troubleshooting; the computer had to be reset after it overheated, as well as after Cassie fell during a high-speed turn. But hey, an overheated computer getting reset isn’t so different from a human runner pausing to douse their head and face with a cup of water to cool off, or chug some water to rehydrate.
Summary: Researchers have compiled a new, highly detailed 3D brain map that captures the shapes and activity of neurons in the visual neocortex of mice. The map is freely available for neuroscience researchers and artificial intelligence specialists to use.
Source: Allen Institute
Researchers from the University of Reading, in the UK, are using drones to give clouds an electrical charge, which could help increase rainfall in water-stressed regions.
At this point I think the US government is going to get stuck paying to develop human-level robotic hands.
Over the past few decades, roboticists and computer scientists have developed a variety of data-based techniques for teaching robots to complete different tasks. To achieve satisfactory results, however, these techniques must be trained on large, reliable datasets, preferably labeled with information related to the task being learned.
For instance, techniques for teaching robots tasks that involve object manipulation could be trained on videos of humans manipulating objects, ideally annotated with the types of grasps being used. Such labels allow robots to more easily identify the strategies they should employ to grasp or manipulate specific objects.
Researchers at the University of Pisa, Istituto Italiano di Tecnologia, Alpen-Adria-Universität Klagenfurt, and TU Delft recently developed a new taxonomy for labeling videos of humans manipulating objects. This grasp classification method, introduced in a paper published in IEEE Robotics and Automation Letters, accounts for movements prior to grasping, for bimanual grasps, and for non-prehensile strategies.
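As a rough illustration of what such labels might look like in practice, here is a hypothetical Python data structure for annotated clips. The category names are placeholders invented for this sketch, not the taxonomy actually defined in the paper; the point is only that each video segment carries a grasp class plus flags for pre-grasp movement and bimanual use.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Hypothetical label set; the paper defines its own, finer-grained taxonomy.
class GraspClass(Enum):
    POWER = auto()           # whole-hand, force-oriented grasp
    PRECISION = auto()       # fingertip grasp
    NON_PREHENSILE = auto()  # pushing, sliding, pivoting: no grasp at all

@dataclass
class ManipulationLabel:
    grasp_class: GraspClass
    pre_grasp_motion: bool   # e.g. reorienting the object before grasping
    bimanual: bool           # both hands involved

@dataclass
class VideoClip:
    path: str
    labels: list[ManipulationLabel] = field(default_factory=list)  # one per segment

# Example annotation for a single clip (file name is made up).
clip = VideoClip(path="clips/open_jar.mp4")
clip.labels.append(
    ManipulationLabel(GraspClass.POWER, pre_grasp_motion=True, bimanual=True)
)
print(clip)
```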
Researchers at the University of Sydney and quantum-control startup Q-CTRL today announced a way to identify sources of error in quantum computers through machine learning, giving hardware developers the ability to pinpoint performance degradation with unprecedented accuracy and accelerate the path to useful quantum computers.
A joint scientific paper detailing the research, titled “Quantum Oscillator Noise Spectroscopy via Displaced Cat States,” has been published in Physical Review Letters, the world’s premier physical science research journal and flagship publication of the American Physical Society (APS Physics).
Focused on reducing errors caused by environmental “noise”—the Achilles’ heel of quantum computing—the University of Sydney team developed a technique to detect the tiniest deviations from the precise conditions needed to execute quantum algorithms on trapped-ion and superconducting quantum computing hardware. These are the core technologies used by world-leading industrial quantum computing efforts at IBM, Google, Honeywell, IonQ, and others.
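The paper’s displaced-cat-state protocol is well beyond a short snippet, but the generic idea behind noise spectroscopy can be sketched: probe sequences act as frequency filters on an unknown noise spectrum, and inverting the measured responses recovers that spectrum. The toy below (plain NumPy, all numbers invented) illustrates that generic inversion; it is not the authors’ method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown noise power spectral density to recover (invented shape:
# a 1/f-like background plus a sharp spur at f = 6.0).
freqs = np.linspace(0.1, 10.0, 50)
true_psd = 1.0 / freqs + 5.0 * np.exp(-0.5 * ((freqs - 6.0) / 0.2) ** 2)

# Each probe sequence acts as a band-pass "filter" centered at a different
# frequency; its measured signal is the overlap of that filter with the PSD.
centers = np.linspace(0.5, 9.5, 30)
filters = np.exp(-0.5 * ((freqs[None, :] - centers[:, None]) / 0.4) ** 2)

measurements = filters @ true_psd
measurements += rng.normal(0, 0.01 * measurements.std(), measurements.shape)

# Regularized least squares (Tikhonov) inverts the filter bank to
# estimate the spectrum from the measured responses.
lam = 0.1
A = np.vstack([filters, lam * np.eye(len(freqs))])
b = np.concatenate([measurements, np.zeros(len(freqs))])
est_psd, *_ = np.linalg.lstsq(A, b, rcond=None)

# Locate the spur, ignoring the low-frequency 1/f background.
mask = freqs > 2.0
peak = freqs[mask][np.argmax(est_psd[mask])]
print(f"recovered spur near f = {peak:.1f} (true spur at f = 6.0)")
```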
The predicted shapes still need to be confirmed in the lab, Ellis told Technology Review. If the results hold up, they will rapidly push forward the study of the proteome, the full set of proteins in a given organism. DeepMind researchers released their open-source code and laid out the method in two peer-reviewed papers published in Nature last week.
And the predictions extend to 20 other animals often studied by science, too.