Archive for the ‘robotics/AI’ category: Page 1085

Mar 3, 2022

Artificial muscles robotic arm

Posted in categories: cyborgs, robotics/AI

Credit: Clone Incorporated.

Mar 3, 2022

Team develops fingertip sensitivity for robots

Posted in categories: information science, robotics/AI

In a paper published on February 23, 2022 in Nature Machine Intelligence, a team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) introduce a robust soft haptic sensor named “Insight” that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and how large the applied forces are. The research project is a significant step toward robots being able to feel their environment as accurately as humans and animals. Like its natural counterpart, the fingertip sensor is very sensitive, robust, and high-resolution.

The thumb-shaped sensor is made of a soft shell built around a lightweight stiff skeleton. This skeleton holds up the structure much like bones stabilize the soft finger tissue. The shell is made from an elastomer mixed with dark but reflective aluminum flakes, resulting in an opaque grayish color that prevents any external light from finding its way in. Hidden inside this finger-sized cap is a tiny 160-degree fish-eye camera, which records colorful images, illuminated by a ring of LEDs.
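The paper's actual pipeline maps raw camera frames to contact location and force with a deep neural network. Purely as a rough illustration of the underlying idea (this is a toy stand-in, not the MPI-IS method), one can compare a frame of the deformed elastomer against a no-contact reference and take an intensity-weighted centroid of the difference:

```python
import numpy as np

def locate_contact(ref_img, touch_img, threshold=0.1):
    """Toy contact localizer: compare a camera frame of the deformed
    elastomer against a no-contact reference frame."""
    diff = np.abs(touch_img.astype(float) - ref_img.astype(float))
    mask = diff > threshold
    if not mask.any():
        return None  # no contact detected
    ys, xs = np.nonzero(mask)
    weights = diff[mask]
    # intensity-weighted centroid approximates the contact point
    cy = float(np.average(ys, weights=weights))
    cx = float(np.average(xs, weights=weights))
    magnitude = float(weights.sum())  # crude proxy for applied force
    return cx, cy, magnitude

# synthetic example: a deformation blob centered at column 12, row 20
ref = np.zeros((32, 32))
touch = ref.copy()
touch[18:23, 10:15] += 0.8  # rows 18-22, cols 10-14
cx, cy, force = locate_contact(ref, touch)
```

A learned model replaces the centroid with a far richer image-to-force mapping, but the input/output relationship is the same shape: camera image in, contact position and force estimate out.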


Mar 3, 2022

Using artificial intelligence to find anomalies hiding in massive datasets

Posted in category: robotics/AI

Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.

Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.

Because the model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.
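The Lab's neural model is not reproduced here, but the label-free idea can be sketched with a much simpler stand-in: learn the sensors' normal correlation structure (here via PCA) from unlabeled data, then flag timesteps the learned structure fails to reconstruct. All data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate 5 interrelated "grid sensors", all driven by one shared signal
t = np.linspace(0, 40, 400)
base = np.sin(t)
gains = rng.uniform(0.5, 1.5, 5)
train = base[:, None] * gains + rng.normal(0, 0.05, (400, 5))

# learn the sensors' shared structure from normal data only (no labels)
mu = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
comp = Vt[:1]  # dominant shared mode across sensors

def anomaly_scores(X):
    """Reconstruction error per timestep; high = breaks learned correlations."""
    Z = (X - mu) @ comp.T
    return np.linalg.norm((X - mu) - Z @ comp, axis=1)

stream = base[:100, None] * gains + rng.normal(0, 0.05, (100, 5))
stream[60, 2] += 3.0  # inject a fault on sensor 2 at timestep 60
flagged = int(np.argmax(anomaly_scores(stream)))
```

The published method learns a graph of sensor interdependencies rather than a single linear mode, but the training signal is the same: no anomaly labels, only normal operating data.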

Mar 3, 2022

New approach to flexible robotics and metamaterials design mimics nature, encourages sustainability

Posted in categories: information science, robotics/AI, sustainability

A new study challenges the conventional approach to designing soft robotics and a class of materials called metamaterials by utilizing the power of computer algorithms. Researchers from the University of Illinois Urbana-Champaign and Technical University of Denmark can now build multimaterial structures without dependence on human intuition or trial-and-error to produce highly efficient actuators and energy absorbers that mimic designs found in nature.

The study, led by Illinois civil and environmental engineering professor Shelly Zhang, uses optimization theory and an algorithm-driven design process known as digital synthesis, which builds composite structures that can precisely achieve complex prescribed mechanical responses.
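As a heavily simplified sketch of the core idea, letting an optimizer rather than human intuition decide where material goes, consider a one-dimensional chain of springs whose per-cell material densities are tuned by projected gradient descent under a fixed material budget. This toy is nothing like the study's actual multimaterial pipeline; it only illustrates the density-based optimization loop:

```python
import numpy as np

rng = np.random.default_rng(1)

K_MIN, K_MAX, VOLUME = 0.1, 1.0, 4.0  # stiffness bounds, material budget
n = 8                                  # design cells in a serial chain

def compliance(rho):
    """Serial springs under unit load: total compliance = sum of 1/k_i."""
    k = K_MIN + rho * (K_MAX - K_MIN)
    return float(np.sum(1.0 / k))

rho = rng.uniform(0.2, 0.8, n)
rho *= VOLUME / rho.sum()              # start on the volume constraint
c0 = compliance(rho)

for _ in range(200):
    k = K_MIN + rho * (K_MAX - K_MIN)
    grad = -(K_MAX - K_MIN) / k**2     # d(compliance)/d(rho_i)
    rho = np.clip(rho - 0.02 * grad, 0.01, 1.0)
    rho *= VOLUME / rho.sum()          # project back onto the budget

c1 = compliance(rho)
```

The optimizer discovers the stiffest layout on its own (for this convex toy, a uniform distribution); real digital synthesis does the analogous search over vastly larger, nonconvex design spaces.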


Mar 3, 2022

Researchers establish first-of-its-kind framework to diagnose 3D-printing errors

Posted in categories: 3D printing, media & arts, robotics/AI

Additive manufacturing, or 3D printing, can create custom parts for electromagnetic devices on-demand and at a low cost. These devices are highly sensitive, and each component requires precise fabrication. Until recently, though, the only way to diagnose printing errors was to make, measure and test a device or to use in-line simulation, both of which are computationally expensive and inefficient.

To remedy this, a research team co-led by Penn State created a first-of-its-kind methodology for diagnosing errors with machine learning in real time. The researchers describe this framework, published in Additive Manufacturing, as a critical first step toward correcting 3D-printing errors in real time. According to the researchers, this could make printing for sensitive devices much more effective in terms of time, cost and computational bandwidth.

“A lot of things can go wrong during the process for any component,” said Greg Huff, associate professor of electrical engineering at Penn State. “And in the world of electromagnetics, where dimensions are based on wavelengths rather than regular units of measure, any small defect can really contribute to large-scale system failures or degraded operations. If 3D printing a household item is like tuning a tuba—which can be done with broad adjustments—3D-printing devices functioning in the electromagnetic domain is like tuning a violin: Small adjustments really matter.”
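The paper's actual framework is not detailed here; purely as an illustration of the diagnosis idea, a vector of measured print features can be matched against known error signatures. The feature names, classes, and values below are hypothetical:

```python
import numpy as np

# hypothetical error signatures: mean feature vectors measured from parts
# with known defects, e.g. [trace width error %, layer height error %]
SIGNATURES = {
    "nominal":         np.array([0.0, 0.0]),
    "over_extrusion":  np.array([8.0, 3.0]),
    "under_extrusion": np.array([-7.0, -2.0]),
}

def diagnose(features):
    """Return the error class whose signature is nearest to the measurement."""
    features = np.asarray(features, dtype=float)
    return min(SIGNATURES,
               key=lambda name: np.linalg.norm(features - SIGNATURES[name]))

label = diagnose([7.5, 2.4])
```

A trained model replaces this nearest-signature lookup with a learned decision boundary, which is what makes real-time diagnosis cheap compared to full electromagnetic simulation of each part.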

Mar 3, 2022

Deciphering behavior algorithms used by ants and the internet

Posted in categories: food, information science, internet, robotics/AI

Engineers sometimes turn to nature for inspiration. Cold Spring Harbor Laboratory Associate Professor Saket Navlakha and research scientist Jonathan Suen have found that adjustment algorithms—the same feedback control process by which the Internet optimizes data traffic—are used by several natural systems to sense and stabilize behavior, including ant colonies, cells, and neurons.

Internet engineers route data around the world in small packets, which are analogous to foraging ants. As Navlakha explains, the goal of the work was to relate ideas from Internet design to the way ants forage.

The same algorithm used by internet engineers is used by ants when they forage for food. At first, the colony may send out a single ant. When the ant returns, it provides information about how much food it got and how long it took to get it. The colony would then send out two ants. If they return with food, the colony may send out three, then four, five, and so on. But if ten ants are sent out and most do not return, then the colony does not decrease the number it sends to nine. Instead, it cuts the number by a large amount, a fraction (say, half) of what it sent before: only five ants. In other words, the number of ants slowly adds up when the signals are positive, but is cut dramatically lower when the information is negative. Navlakha and Suen note that the system works even if individual ants get lost and parallels a particular type of “additive-increase/multiplicative-decrease algorithm” used on the internet.
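The ant strategy described above is exactly the additive-increase/multiplicative-decrease rule that TCP uses for congestion control. A minimal sketch:

```python
def aimd_update(n, success, increase=1, decrease_factor=0.5):
    """One AIMD step: add on success, multiply down on failure."""
    if success:
        return n + increase                    # additive increase
    return max(1, int(n * decrease_factor))    # multiplicative decrease

# the colony's foraging party over a sequence of trip outcomes
n, history = 1, []
for success in [True, True, True, True, False, True, True]:
    n = aimd_update(n, success)
    history.append(n)
# history == [2, 3, 4, 5, 2, 3, 4]
```

The asymmetry is the point: slow growth probes for capacity, while the sharp cut reacts quickly to bad news, whether the scarce resource is food near the nest or bandwidth on a congested link.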

Mar 3, 2022

For new insights into aerodynamics, scientists turn to paper airplanes

Posted in categories: drones, mathematics, robotics/AI

A series of experiments using paper airplanes reveals new aerodynamic effects, a team of scientists has discovered. Its findings enhance our understanding of flight stability and could inspire new types of flying robots and small drones.

“The study started with simple curiosity about what makes a good airplane and specifically what is needed for smooth gliding,” explains Leif Ristroph, an associate professor at New York University’s Courant Institute of Mathematical Sciences and an author of the study, which appears in the Journal of Fluid Mechanics. “Answering such basic questions ended up being far from child’s play. We discovered that the aerodynamics of how paper airplanes keep level flight is really very different from the stability of conventional airplanes.”

“Birds glide and soar in an effortless way, and paper airplanes, when tuned properly, can also glide for long distances,” adds author Jane Wang, a professor of engineering and physics at Cornell University. “Surprisingly, there has been no good mathematical model for predicting this seemingly simple but subtle gliding flight.”

Mar 3, 2022

Peter Diamandis Describes How Applying AI to Drug Discovery is Causing Positive Disruption to Biopharma

Posted in categories: biotech/medical, Peter Diamandis, robotics/AI

The content of Peter’s email blast has been edited by me (can’t help myself). But I believe I have captured its essence and hope you enjoy the retelling. As always, your comments are welcome.

What is Insilico Medicine?

Insilico Medicine is a pioneering drug company that is powered by a “drug discovery engine” that sifts through millions of data samples to determine the signature biological characteristics of specific diseases. It then identifies the most promising treatment targets and uses a new AI technique called generative adversarial networks (GANs) to create molecules perfectly suited against them.
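Insilico's system generates full molecular structures, which is far beyond a short snippet, but the adversarial training idea behind a GAN can be shown with a one-parameter toy that learns to match a 1-D "descriptor" distribution. Everything here is illustrative, not the company's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

# stand-in "real" data: a 1-D molecular descriptor clustered near 4.0
def real_batch(size):
    return rng.normal(4.0, 0.5, size)

g_b = 0.0             # generator: shifts noise by a learned bias
d_w, d_b = 1.0, 0.0   # discriminator: logistic classifier on x
lr = 0.05

for _ in range(800):
    z = rng.normal(size=32)
    fake = z + g_b
    real = real_batch(32)
    # discriminator step: push D(real) -> 1 and D(fake) -> 0
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w -= lr * np.mean((p_real - 1) * real + p_fake * fake)
    d_b -= lr * np.mean((p_real - 1) + p_fake)
    # generator step: push D(fake) -> 1 by moving fakes toward the data
    p_fake = sigmoid(d_w * fake + d_b)
    g_b -= lr * np.mean((p_fake - 1) * d_w)
```

After training, the generator's bias has drifted toward the real data's center: the two networks' competition, not any explicit target, pulls the generated samples onto the data distribution. Molecular GANs play the same game over graph or string representations of molecules.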

Mar 3, 2022

Tesla Will Focus More on Developing Humanoid Robots Called ‘Optimus’ in 2022, Elon Musk Says

Posted in categories: Elon Musk, robotics/AI, transportation

Elon Musk announced that Tesla will soon join the robotics industry. New car releases in 2022 will be delayed to make way for the company’s humanoid robot development.

Mar 3, 2022

Simulation of a Human-Scale Cerebellar Network Model on the K Computer

Posted in categories: neuroscience, robotics/AI, supercomputing

Circa 2020: simulation of the human brain.


Computer simulation of the human brain at an individual neuron resolution is an ultimate goal of computational neuroscience. The Japanese flagship supercomputer, K, provides unprecedented computational capability toward this goal. The cerebellum contains 80% of the neurons in the whole brain. Therefore, computer simulation of the human-scale cerebellum will be a challenge for modern supercomputers. In this study, we built a human-scale spiking network model of the cerebellum, composed of 68 billion spiking neurons, on the K computer. As a benchmark, we performed a computer simulation of a cerebellum-dependent eye movement task known as the optokinetic response. We succeeded in reproducing plausible neuronal activity patterns that are observed experimentally in animals. The model was built on dedicated neural network simulation software called MONET (Millefeuille-like Organization NEural neTwork), which calculates layered sheet types of neural networks with parallelization by tile partitioning. To examine the scalability of the MONET simulator, we repeatedly performed simulations while changing the number of compute nodes from 1,024 to 82,944 and measured the computational time. We observed a good weak-scaling property for our cerebellar network model. Using all 82,944 nodes, we succeeded in simulating a human-scale cerebellum for the first time, although the simulation ran 578 times slower than real time. These results suggest that the K computer is already capable of creating a simulation of a human-scale cerebellar model with the aid of the MONET simulator.
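MONET and the K-computer setup are obviously out of reach in a snippet, but the building block of such models, the spiking neuron, can be sketched with a standard leaky integrate-and-fire unit. Parameter values are generic textbook choices, not those of the paper:

```python
def simulate_lif(input_current, t_max=0.5, dt=1e-4,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 tau=0.02, resistance=10.0):
    """Euler integration of a leaky integrate-and-fire neuron.
    Units: mV, seconds, megaohms, nA. Returns the spike count."""
    v = v_rest
    spikes = 0
    for _ in range(int(t_max / dt)):
        # membrane leaks back toward rest while the input drives it up
        dv = (-(v - v_rest) + resistance * input_current) / tau
        v += dt * dv
        if v >= v_thresh:  # threshold crossing: emit spike, reset
            spikes += 1
            v = v_reset
    return spikes

strong = simulate_lif(2.0)  # steady-state drive above threshold -> fires
weak = simulate_lif(0.5)    # drive settles below threshold -> stays silent
```

A human-scale simulation repeats updates like this for tens of billions of neurons per timestep, which is why tile-partitioned parallelization across 82,944 nodes is needed even for a model running hundreds of times slower than real time.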

Computer simulation of the whole human brain is an ambitious challenge in the field of computational neuroscience and high-performance computing (Izhikevich, 2005; Izhikevich and Edelman, 2008; Amunts et al., 2016). The human brain contains approximately 100 billion neurons. While the cerebral cortex occupies 82% of the brain mass, it contains only 19% (16 billion) of all neurons. The cerebellum, which occupies only 10% of the brain mass, contains 80% (69 billion) of all neurons (Herculano-Houzel, 2009). Thus, we could say that 80% of human-scale whole brain simulation will be accomplished when a human-scale cerebellum is built and simulated on a computer. The human cerebellum plays crucial roles not only in motor control and learning (Ito, 1984, 2000) but also in cognitive tasks (Ito, 2012; Buckner, 2013). In particular, the human cerebellum seems to be involved in human-specific tasks, such as bipedal locomotion, natural language processing, and use of tools (Lieberman, 2014).