Archive for the ‘robotics/AI’ category: Page 1427

Dec 16, 2020

LiquidPiston’s “inside-out” rotary X-Engine wins Army research contract

Posted in categories: military, robotics/AI

Connecticut-based company LiquidPiston is developing a portable generator for the US Army that uses its X-Engine, a fresh and extremely powerful take on the rotary engine that'll deliver as much power as the Army's current generator sets at one-fifth the size.

We’ve written a few times before about the fascinating LiquidPiston rotary engine. It’s not a Wankel – indeed, it’s closer to an inside-out Wankel – and with only two moving parts, it’s able to deliver extraordinary power density at up to 1.5 horsepower per pound (0.45 kg).
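
As a back-of-the-envelope check (my arithmetic, not a figure from LiquidPiston), that power density converts to roughly 2.5 kW per kilogram:

```python
# Unit-conversion check for the quoted power density of 1.5 hp/lb.
HP_TO_W = 745.7          # one mechanical horsepower, in watts
LB_TO_KG = 0.45359237    # one pound, in kilograms

power_density_w_per_kg = 1.5 * HP_TO_W / LB_TO_KG
print(f"{power_density_w_per_kg / 1000:.2f} kW/kg")  # ~2.47 kW/kg
```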

Dec 16, 2020

The Air Force Just Let an AI Take Over Systems of a Military Jet

Posted in categories: military, robotics/AI

The flight marks the first known time an AI has been used to control a US military aircraft.

“This is the first time this has ever happened,” Assistant Air Force Secretary Will Roper told the newspaper.

The AI took care of some highly specific tasks and was never in control of actually flying the plane — or, notably, any weapon systems.

Dec 16, 2020

Piloting A Real-Life Giant Exoskeleton Suit

Posted in categories: cyborgs, robotics/AI

This Giant Four-Legged Robot Is Like Something Out Of A Science Fiction Film!! 😍 🤖

Dec 16, 2020

Wheels Are Better Than Feet for Legged Robots

Posted in category: robotics/AI

ANYmal demonstrates how hybrid mobility can benefit quadrupedal robots.

Dec 16, 2020

How Self Driving Cars Will Change The World

Posted in categories: robotics/AI, transportation

There is no doubt that the future of transport is autonomous. Tesla is already rolling out a beta version of Full Self-Driving, and I am sure it will be fully released in 2021. From robotaxis to freight transport, travel will become easier, cheaper and more convenient, and for those with mobility issues today the change will be even greater. Here I look at some of the ways that all our lives, the environment and the places we live will change…for the better. I cannot wait…can you?


In The Mind Blowing Future Of Transportation — How Self Driving Cars Will Change The World, I will look at the future of autonomous vehicles and how they will change our world…for the better.

Dec 16, 2020

Making eye contact with a robot: Psychophysiological responses to eye contact with a human and with a humanoid robot

Posted in category: robotics/AI

Previous research has shown that eye contact, in human-human interaction, elicits increased affective and attention-related psychophysiological responses. In the present study, we investigated whether eye contact with a humanoid robot would elicit these responses. Participants faced a humanoid robot (NAO) or a human partner, both physically present and looking either at or away from the participant. The results showed that in both the human-robot and human-human conditions, eye contact versus averted gaze elicited greater skin conductance responses indexing autonomic arousal, greater facial zygomatic muscle responses (and smaller corrugator responses) associated with positive affect, and greater heart deceleration responses indexing attention allocation. For the skin conductance and zygomatic responses, the human partner's gaze direction had a greater effect than the robot's. In conclusion, eye contact elicits automatic affective and attentional reactions whether it is shared with a humanoid robot or with another human.
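
To make the design concrete, here is a toy sketch (not the study's analysis code) of the 2 (agent: human vs. robot) × 2 (gaze: direct vs. averted) comparison on skin conductance response amplitudes; the data and variable names are synthetic placeholders invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # participants

# Synthetic per-participant mean SCR amplitudes (microsiemens);
# the means are made up purely so the example runs.
scr = {
    ("human", "direct"):  rng.normal(0.50, 0.15, n),
    ("human", "averted"): rng.normal(0.35, 0.15, n),
    ("robot", "direct"):  rng.normal(0.45, 0.15, n),
    ("robot", "averted"): rng.normal(0.38, 0.15, n),
}

# Within-subject contrast: direct vs. averted gaze, per agent.
for agent in ("human", "robot"):
    t, p = stats.ttest_rel(scr[(agent, "direct")], scr[(agent, "averted")])
    print(f"{agent}: t({n - 1}) = {t:.2f}, p = {p:.4f}")
```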

Dec 16, 2020

A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping

Posted in category: robotics/AI

Grasping objects is something primates do effortlessly, but how does our brain coordinate such a complex task? Multiple brain areas across the parietal and frontal cortices of macaque monkeys are essential for shaping the hand during grasping, but we lack a comprehensive model of grasping from vision to action. In this work, we show that multiarea neural networks trained to reproduce the arm and hand control required for grasping using the visual features of objects also reproduced neural dynamics in grasping regions and the relationships between areas, outperforming alternative models. Simulated lesion experiments revealed unique deficits paralleling lesions to specific areas in the grasping circuit, providing a model of how these areas work together to drive behavior.

One of the primary ways we interact with the world is using our hands. In macaques, the circuit spanning the anterior intraparietal area, the hand area of the ventral premotor cortex, and the primary motor cortex is necessary for transforming visual information into grasping movements. However, no comprehensive model exists that links all steps of processing from vision to action. We hypothesized that a recurrent neural network mimicking the modular structure of the anatomical circuit and trained to use visual features of objects to generate the required muscle dynamics used by primates to grasp objects would give insight into the computations of the grasping circuit. Internal activity of modular networks trained with these constraints strongly resembled neural activity recorded from the grasping circuit during grasping and paralleled the similarities between brain regions.
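
A minimal sketch of the general idea, assuming a sequential AIP → F5 → M1 module layout with GRU units; the module sizes, connectivity, and training details here are illustrative guesses in PyTorch, not the authors' actual architecture:

```python
import torch
import torch.nn as nn

class ModularGraspingRNN(nn.Module):
    def __init__(self, n_visual=64, n_hidden=128, n_muscles=50):
        super().__init__()
        # Each "area" is a recurrent module; information flows
        # AIP -> F5 -> M1, mirroring the anatomical hierarchy.
        self.aip = nn.GRU(n_visual, n_hidden, batch_first=True)
        self.f5 = nn.GRU(n_hidden, n_hidden, batch_first=True)
        self.m1 = nn.GRU(n_hidden, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_muscles)

    def forward(self, visual_features):
        # visual_features: (batch, time, n_visual)
        h_aip, _ = self.aip(visual_features)
        h_f5, _ = self.f5(h_aip)
        h_m1, _ = self.m1(h_f5)
        return self.readout(h_m1), (h_aip, h_f5, h_m1)

# One training step: regress network output onto recorded muscle
# activity; internal module activity can later be compared with
# neural recordings from the corresponding brain areas.
model = ModularGraspingRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

visual = torch.randn(8, 100, 64)      # toy batch: 8 trials x 100 steps
target_emg = torch.randn(8, 100, 50)  # placeholder muscle recordings

pred, hidden = model(visual)
loss = nn.functional.mse_loss(pred, target_emg)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```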

Dec 16, 2020

Color illusions also deceive CNNs for low-level vision tasks: Analysis and implications

Posted in category: robotics/AI

The study of visual illusions has proven to be a very useful approach in vision science. In this work we start by showing that, while convolutional neural networks (CNNs) trained for low-level visual tasks in natural images may be deceived by brightness and color illusions, some network illusions can be inconsistent with the perception of humans. Next, we analyze where these similarities and differences may come from. On one hand, the proposed linear eigenanalysis explains the overall similarities: in simple CNNs trained for tasks like denoising or deblurring, the linear version of the network has center-surround receptive fields, and global transfer functions are very similar to the human achromatic and chromatic contrast sensitivity functions in human-like opponent color spaces. These similarities are consistent with the long-standing hypothesis that considers low-level visual illusions as a by-product of the optimization to natural environments. Specifically, here human-like features emerge from error minimization. On the other hand, the observed differences must be due to the behavior of the human visual system not explained by the linear approximation. However, our study also shows that more ‘flexible’ network architectures, with more layers and a higher degree of nonlinearity, may actually have a worse capability of reproducing visual illusions. This implies, in line with other works in the vision science literature, a word of caution on using CNNs to study human vision: on top of the intrinsic limitations of the L + NL formulation of artificial networks to model vision, the nonlinear behavior of flexible architectures may easily be markedly different from that of the visual system.
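
A minimal sketch of the linearization idea in PyTorch (illustrative, not the authors' code): take a small denoising-style CNN and read out the effective receptive field of one output pixel as a row of the Jacobian, evaluated at a flat gray operating point. For simple trained denoisers, the paper reports that such linearized filters are center-surround:

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

model = TinyDenoiser()  # in practice, trained on noisy natural images

# Effective receptive field of the center output pixel: one row of
# the Jacobian d(output)/d(input) at a flat mid-gray image.
x = torch.full((1, 1, 32, 32), 0.5, requires_grad=True)
y = model(x)
y[0, 0, 16, 16].backward()
receptive_field = x.grad[0, 0]  # 32x32 map; for a trained denoiser
                                # this tends toward center-surround
```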

Dec 16, 2020

Become a Raspberry Pi and ROS Robotics Expert with This Bundle

Posted in category: robotics/AI

Raspberry Pi and ROS Robotics are versatile, exciting tools that allow you to build many wondrous projects. However, they are not always the easiest systems to manage and use… until now.

The Ultimate Raspberry Pi & ROS Robotics Developer Super Bundle will turn you into a Raspberry Pi and ROS Robotics expert in no time. With over 39 hours of training across more than 15 courses, the bundle leaves no stone unturned.


There is almost nothing you won't be able to do with your new-found Raspberry Pi and ROS Robotics skills.

Dec 15, 2020

Sea creature-inspired robot walks, rolls, transports cargo

Posted in categories: biotech/medical, robotics/AI

Northwestern researchers have developed a first-of-its-kind soft, aquatic robot that is powered by light and rotating magnetic fields. These life-like robotic materials could someday be used as “smart” microscopic systems for production of fuels and drugs, environmental cleanup or transformative medical procedures.