
Innovative Shins Turn Quadrupedal Robot Biped

What are you working on next?

Rosendo: Our next steps…will be on the development of the manipulability of this robot. More specifically, we have been asking ourselves the question: “Now that we can stand up, what can we do that other robots cannot?”, and we already have some preliminary results on climbing to places that are higher than the center of gravity of the robot itself. After mechanical changes on the forelimbs, we will better evaluate complex handling that might require both hands at the same time, which is rare in current mobile robots.

Multi-Modal Legged Locomotion Framework with Automated Residual Reinforcement Learning, by Chen Yu and Andre Rosendo from ShanghaiTech University, was presented this week at IROS 2022 in Kyoto, Japan. More details are available on GitHub.

Floppy or not: AI predicts properties of complex metamaterials

Given a 3D piece of origami, can you flatten it without damaging it? Just by looking at the design, the answer is hard to predict, because each and every fold in the design has to be compatible with flattening.

This is an example of a combinatorial problem. New research led by the UvA Institute of Physics and research institute AMOLF has demonstrated that machine learning algorithms can accurately and efficiently answer these kinds of questions. This is expected to give a boost to the artificial intelligence-assisted design of complex and functional (meta)materials.

In their latest work, published in Physical Review Letters this week, the research team tested how well artificial intelligence (AI) can predict the properties of so-called combinatorial mechanical metamaterials.
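
To make the idea concrete, here is a minimal sketch, assuming a PyTorch setup, of how a small convolutional classifier might map a grid of unit-cell orientations (one combinatorial design) to a flattenable-or-not probability. The grid size, orientation encoding, and architecture are hypothetical placeholders, not the model used in the study.

```python
# Illustrative sketch only: a tiny convolutional classifier that maps a grid of
# unit-cell orientations (a combinatorial metamaterial design) to a single
# "flattenable / not flattenable" probability. The encoding, grid size, and
# architecture are hypothetical placeholders, not the published model.
import torch
import torch.nn as nn

class FlattenabilityNet(nn.Module):
    def __init__(self, n_orientations: int = 4, grid: int = 8):
        super().__init__()
        # Each cell is one-hot encoded by its orientation, giving an
        # (n_orientations, grid, grid) "image" per design.
        self.features = nn.Sequential(
            nn.Conv2d(n_orientations, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))  # probability the design can be flattened

# Usage: score a batch of 16 random one-hot designs on an 8x8 grid.
designs = torch.zeros(16, 4, 8, 8)
idx = torch.randint(0, 4, (16, 8, 8))
designs.scatter_(1, idx.unsqueeze(1), 1.0)
model = FlattenabilityNet()
print(model(designs).shape)  # torch.Size([16, 1])
```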

Paralyzed patients can now connect their iPhones to their brains to type messages using thoughts alone

A novel brain-computer interface developed by a New York-based company called Synchron was just used to help a paralyzed patient send messages using their Apple device for the very first time. It’s a major step forward for an industry that has been reporting increasing progress, and it suggests that interfacing our minds with consumer devices could happen a lot sooner than some of us bargained for.

Brain-computer devices eavesdrop on brainwaves and convert these into commands. More or less the same neural signals that healthy people use to instruct their muscle fibers to twitch and enact a movement like walking or grasping an object can be used to command a robotic arm or move a cursor on a computer screen. It really is a phenomenal and game-changing piece of technology, with obvious benefits for those who are completely paralyzed and have few if any means of communicating with the outside world.
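
As an illustration of that pipeline, the following sketch decodes a 2-D cursor velocity from band-power features of a short window of neural data. It assumes a generic linear decoder with made-up channel counts, weights, and sampling rate; it is not Synchron's actual signal chain.

```python
# Illustrative sketch only: a linear decoder that maps band-power features
# extracted from neural recordings to a 2-D cursor velocity. The features,
# weights, and sampling rate are hypothetical placeholders.
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> np.ndarray:
    """Average power of each channel in the [lo, hi] Hz band (FFT-based)."""
    freqs = np.fft.rfftfreq(signal.shape[-1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal, axis=-1)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[..., mask].mean(axis=-1)

def decode_cursor_velocity(window: np.ndarray, weights: np.ndarray,
                           fs: float = 250.0) -> np.ndarray:
    """window: (n_channels, n_samples) of neural data -> (vx, vy)."""
    feats = np.concatenate([
        band_power(window, fs, 8, 12),    # mu band
        band_power(window, fs, 13, 30),   # beta band
    ])
    return weights @ feats                # weights: (2, 2 * n_channels)

# Usage with synthetic data: 16 channels, one second of samples.
rng = np.random.default_rng(0)
window = rng.standard_normal((16, 250))
weights = rng.standard_normal((2, 32)) * 0.01  # would be fit on calibration data
print(decode_cursor_velocity(window, weights))  # e.g. array([vx, vy])
```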

This type of technology is not exactly new. Scientists have been experimenting with brain-computer interfaces for decades, but it’s been in the last couple of years or so that we’ve actually come to see tremendous progress. Even Elon Musk has jumped on this bandwagon, founding a company called Neuralink with the ultimate goal of developing technology that allows people to transmit and receive information between their brain and a computer wirelessly — essentially connecting the human mind to devices. The idea is for anyone to be able to use this technology, even healthy people who want to augment their abilities by interfacing with machines. In 2021, Neuralink released a video of a monkey with an implanted Neuralink device playing Pong, and the company wants to start clinical trials with humans soon.

Scientists create edible drone built of rice cakes and gelatin that can save lives

The size of the wing, made of compressed puffed rice, depends on the recipient’s nutrition requirements.

The IEEE/RSJ International Conference on Intelligent Robots and Systems in Kyoto last week saw an ingenious creation presented by researchers from the Swiss Federal Institute of Technology Lausanne. Their paper described a drone made from rice cakes.

Mind you, this was no light matter. Titled ‘Towards Edible Drones for Rescue Missions: Design and Flight of Nutritional Wings,’ by Bokeon Kwak, Jun Shintake, Lu Zhang, and Dario Floreano from EPFL, the paper detailed a drone that could “boost its payload of food from 30 percent to 50 percent of its mass”, according to a release.
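
A quick back-of-the-envelope calculation shows what that jump in food fraction means in practice; the 500 g total drone mass below is an assumed placeholder, not a figure from the paper.

```python
# Back-of-the-envelope only: what moving from a 30% to a 50% food fraction
# means in grams. The 500 g total drone mass is an assumed placeholder.
def food_mass(total_mass_g: float, food_fraction: float) -> float:
    return total_mass_g * food_fraction

total = 500.0  # grams (assumed)
print(food_mass(total, 0.30))  # 150.0 g of food in a conventional design
print(food_mass(total, 0.50))  # 250.0 g when the wing itself is edible
```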

New large-scale virtual model of cortex highly successful in solving visual tasks

HBP researchers have trained a large-scale model of the primary visual cortex of the mouse to solve visual tasks in a highly robust way. The model provides the basis for a new generation of neural network models. Due to their versatility and energy-efficient processing, these models can contribute to advances in neuromorphic computing.

Modeling the brain can have a massive impact on artificial intelligence (AI): since the brain processes images in a much more energy-efficient way than artificial networks, scientists take inspiration from neuroscience to create neural networks that function similarly to biological ones and thereby save significant energy.

In that sense, brain-inspired neural networks are likely to have an impact on future technology, by serving as blueprints for visual processing in more energy-efficient neuromorphic hardware. Now, a study by Human Brain Project (HBP) researchers from the Graz University of Technology (Austria) showed how a large data-based model can reproduce a number of the brain’s visual processing capabilities in a versatile and accurate way. The results were published in the journal Science Advances.
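
Much of that energy saving comes from event-driven, spiking computation. Below is a minimal sketch of a leaky integrate-and-fire neuron, the basic unit such brain-inspired models and neuromorphic chips build on; the parameters are arbitrary teaching values, not those of the HBP model.

```python
# Illustrative sketch only: a leaky integrate-and-fire (LIF) neuron, the kind of
# event-driven unit that spiking, brain-inspired models and neuromorphic hardware
# build on. Parameters are arbitrary teaching values, not those of the HBP model.
import numpy as np

def simulate_lif(input_current: np.ndarray, dt: float = 1e-3,
                 tau: float = 20e-3, v_thresh: float = 1.0,
                 v_reset: float = 0.0) -> np.ndarray:
    """Return a binary spike train for a given input current trace."""
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        # Leaky integration of the input current.
        v += dt / tau * (-v + i_in)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes[t] = 1.0
            v = v_reset            # reset the membrane potential
    return spikes

# Usage: a constant input of 1.5 "units" for 200 ms produces a regular spike train.
current = np.full(200, 1.5)
print(int(simulate_lif(current).sum()), "spikes")
```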

Artificial intelligence makes enzyme engineering easy

You can’t move a pharmaceutical scientist from a lab to a kitchen and expect the same research output. Enzymes behave exactly the same: They are dependent upon a specific environment. But now, in a study recently published in ACS Synthetic Biology, researchers from Osaka University have imparted an analogous level of adaptability to enzymes, a goal that has remained elusive for over 30 years.

Enzymes perform impressive functions, enabled by the unique arrangement of their constituent amino acids, but usually only within a specific cellular environment. When you change the cellular environment, the enzyme rarely functions well—if at all. Thus, a long-standing research goal has been to retain or even improve upon the function of enzymes in different environments; for example, conditions that are favorable for biofuel production. Traditionally, such work has involved extensive experimental trial-and-error that might have little assurance of achieving an optimal result.

Artificial intelligence (a computer-based tool) can minimize this trial-and-error, but still relies on experimentally obtained crystal structures of enzymes—which can be unavailable or not especially useful. Thus, “the pertinent amino acids one should mutate in the enzyme might be only best-guesses,” says Teppei Niide, co-senior author. “To solve this problem, we devised a methodology of ranking amino acids that depends only on the widely available amino acid sequence of analogous enzymes from other living species.”
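
One generic way to rank positions from sequence data alone is to score how conserved each aligned position is across homologues and treat weakly conserved positions as mutation candidates. The sketch below illustrates that idea with a toy alignment; it is not necessarily the exact scoring used in the published method.

```python
# Illustrative sketch only: rank enzyme positions by how conserved they are
# across aligned homologous sequences. Weakly conserved positions are often
# safer mutation candidates. This is a generic conservation score, not
# necessarily the published ranking; the toy alignment below is made up.
from collections import Counter

def conservation_ranking(aligned_seqs: list[str]) -> list[tuple[int, float]]:
    """Return (position, fraction of sequences sharing the majority residue),
    sorted from least to most conserved."""
    length = len(aligned_seqs[0])
    scores = []
    for pos in range(length):
        column = [seq[pos] for seq in aligned_seqs]
        majority_count = Counter(column).most_common(1)[0][1]
        scores.append((pos, majority_count / len(aligned_seqs)))
    return sorted(scores, key=lambda item: item[1])

# Usage with a toy alignment of four homologues (same length, pre-aligned).
alignment = ["MKVLAT", "MKVIAT", "MRVLGT", "MKVLAT"]
print(conservation_ranking(alignment)[:3])  # least-conserved positions first
```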

Incorporating nanoparticles into a porous hydrogel to propel an aquabot with minimal voltage

A team of researchers from Korea University, Ajou University and Hanyang University, all in the Republic of Korea, has created a tiny aquabot propelled by fins made of a porous hydrogel imbued with nanoparticles. In their paper published in the journal Science Robotics, the group describes how the hydrogel works to power a tiny boat and reveals how much voltage was required.

Scientists and engineers have been working for several years to build tiny, soft robots and have found that hydrogels are quite suitable for the task. Unfortunately, such materials also have undesirable characteristics, most notably poor electrical conductivity. In this new effort, the researchers took a new approach to making hydrogels more amenable for use with electricity: adding conductive nanoparticles.

The work involved adding a small number of nanoparticles to part of a porous hydrogel, which they then used as a wrinkled nanomembrane electrode (WNE). Adding the nanoparticles allowed the hydrogel to conduct electricity in a reliable way. Testing showed the actuator could be powered with as little as 3 volts of electricity. The researchers then fashioned two of the actuators into finlike shapes and attached them to a tiny plastic body. Electronics added to the body controlled the electricity sent to the fins. The resulting robot resembled a water bug, floating on the surface of the water in a tank.
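
For a sense of what the onboard electronics might do, here is a sketch of a low-voltage drive schedule that alternately energizes the two fin actuators. The 3 V amplitude matches the figure quoted above, but the 1 Hz stroke frequency and the alternating pattern are assumptions for illustration.

```python
# Illustrative sketch only: a low-voltage drive schedule that alternately
# energizes two hydrogel-fin actuators. The 3 V amplitude matches the figure
# quoted above; the stroke frequency and alternating pattern are assumptions.
def fin_drive_schedule(duration_s: float, freq_hz: float = 1.0,
                       amplitude_v: float = 3.0, dt: float = 0.05):
    """Yield (time_s, left_fin_voltage, right_fin_voltage) samples."""
    t = 0.0
    half_period = 1.0 / (2.0 * freq_hz)
    while t < duration_s:
        left_on = int(t / half_period) % 2 == 0   # fins alternate strokes
        left = amplitude_v if left_on else 0.0
        right = 0.0 if left_on else amplitude_v
        yield round(t, 3), left, right
        t += dt

# Usage: print the first few samples of a 2-second run.
for sample in list(fin_drive_schedule(2.0))[:6]:
    print(sample)
```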

AI Helped Design a Clear Window Coating That Can Cool Buildings Without Using Energy

Demand is growing for effective new technologies to cool buildings, as climate change intensifies summer heat. Now, scientists have just designed a transparent window coating that could lower the temperature inside buildings, without expending a single watt of energy. They did this with the help of advanced computing technology and artificial intelligence. The researchers report the details today (November 2) in the journal ACS Energy Letters.

Cooling accounts for about 15% of global energy consumption, according to estimates from previous research studies. That demand could be lowered with a window coating that could block the sun’s ultraviolet and near-infrared light. These are parts of the solar spectrum that are not visible to humans, but they typically pass through glass to heat an enclosed room.

Energy use could be reduced even further if the coating radiates heat from the window’s surface at a wavelength that passes through the atmosphere into outer space. However, it’s difficult to design materials that meet these criteria simultaneously while also transmitting visible light, which is required so they don’t interfere with the view. Eungkyu Lee, Tengfei Luo, and colleagues set out to design a “transparent radiative cooler” (TRC) that could do just that.
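
To illustrate the kind of design search involved, here is a toy random search over layer thicknesses for a hypothetical TRC stack. The scoring function is a placeholder standing in for real optical simulations; none of the materials, bounds, or numbers come from the paper.

```python
# Illustrative sketch only: a random search over thin-film layer thicknesses for
# a "transparent radiative cooler", scored by a placeholder objective. A real
# workflow would couple the search to optical simulations (e.g. transfer-matrix);
# every function and number here is a stand-in, not the published method.
import random

N_LAYERS = 6                 # assumed stack depth
THICKNESS_NM = (10, 300)     # assumed per-layer thickness bounds

def score(stack: list[float]) -> float:
    """Placeholder objective rewarding visible transmission and solar blocking."""
    visible_transmission = 1.0 / (1.0 + abs(sum(stack) - 900) / 900)
    solar_blocking = sum(1 for t in stack if t > 100) / len(stack)
    return 0.5 * visible_transmission + 0.5 * solar_blocking

def random_search(n_trials: int = 1000, seed: int = 0) -> tuple[float, list[float]]:
    rng = random.Random(seed)
    best = (-1.0, [])
    for _ in range(n_trials):
        stack = [rng.uniform(*THICKNESS_NM) for _ in range(N_LAYERS)]
        best = max(best, (score(stack), stack))
    return best

best_score, best_stack = random_search()
print(round(best_score, 3), [round(t) for t in best_stack])
```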

Why Continual Learning is the key towards Machine Intelligence

The last decade has marked a profound change in how we perceive and talk about Artificial Intelligence. The concept of learning, once confined to a corner of AI, has now become so important that some people came up with the new term “Machine Intelligence”[1][2][3] to make clear the fundamental role of Machine Learning in it and to further depart from older symbolic approaches.

Recent Deep Learning (DL) techniques have swept away previous AI approaches and have shown how beautiful, end-to-end differentiable functions can be learned to solve incredibly complex tasks involving high-level perception abilities.

Yet, since DL techniques have been shown to shine only with a large number of labeled examples, the research community has now shifted its attention towards Unsupervised and Reinforcement Learning, both of which aim to solve equally complex tasks with little or no explicit supervision.