
Dec 16, 2020

Making eye contact with a robot: Psychophysiological responses to eye contact with a human and with a humanoid robot

Posted by in category: robotics/AI

Previous research has shown that eye contact in human-human interaction elicits increased affective and attention-related psychophysiological responses. In the present study, we investigated whether eye contact with a humanoid robot would elicit these responses. Participants faced either a humanoid robot (NAO) or a human partner, both physically present and looking at or away from the participant. In both the human-robot and human-human conditions, eye contact versus averted gaze elicited greater skin conductance responses indexing autonomic arousal, greater facial zygomatic muscle responses (and smaller corrugator responses) associated with positive affect, and greater heart deceleration responses indexing attention allocation. For the skin conductance and zygomatic responses, the human partner’s gaze direction had a greater effect than the robot’s. In conclusion, eye contact elicits automatic affective and attentional reactions whether shared with a humanoid robot or with another human.

Dec 16, 2020

A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping

Posted by in category: robotics/AI

Grasping objects is something primates do effortlessly, but how does our brain coordinate such a complex task? Multiple brain areas across the parietal and frontal cortices of macaque monkeys are essential for shaping the hand during grasping, but we lack a comprehensive model of grasping from vision to action. In this work, we show that multiarea neural networks trained to reproduce the arm and hand control required for grasping using the visual features of objects also reproduced neural dynamics in grasping regions and the relationships between areas, outperforming alternative models. Simulated lesion experiments revealed unique deficits paralleling lesions to specific areas in the grasping circuit, providing a model of how these areas work together to drive behavior.

One of the primary ways we interact with the world is using our hands. In macaques, the circuit spanning the anterior intraparietal area, the hand area of the ventral premotor cortex, and the primary motor cortex is necessary for transforming visual information into grasping movements. However, no comprehensive model exists that links all steps of processing from vision to action. We hypothesized that a recurrent neural network mimicking the modular structure of the anatomical circuit and trained to use visual features of objects to generate the required muscle dynamics used by primates to grasp objects would give insight into the computations of the grasping circuit. Internal activity of modular networks trained with these constraints strongly resembled neural activity recorded from the grasping circuit during grasping and paralleled the similarities between brain regions.
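As a rough illustration of the modular-network idea described above (not the authors' actual model, which was trained on recorded primate kinematics and muscle data), here is a minimal NumPy sketch of three recurrent modules chained feedforward from visual features to a muscle readout; the module sizes, the AIP → F5 → M1 naming of the modules, and the feedforward-only coupling are all simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_module(n_in, n_rec):
    """Random weights for one recurrent module (no training here)."""
    return {
        "W_in": rng.standard_normal((n_rec, n_in)) * 0.1,
        "W_rec": rng.standard_normal((n_rec, n_rec)) * 0.1,
        "b": np.zeros(n_rec),
    }

# Three modules mirroring the anatomical chain: AIP (vision -> grasp-relevant
# features), F5 (features -> grasp plan), M1 (plan -> motor command).
n_vis, n_rec, n_mus = 12, 32, 8
aip = init_module(n_vis, n_rec)
f5 = init_module(n_rec, n_rec)
m1 = init_module(n_rec, n_rec)
W_out = rng.standard_normal((n_mus, n_rec)) * 0.1  # M1 -> muscle readout

def step(h, mod, x):
    """One recurrent update of a module given its input x."""
    return np.tanh(mod["W_in"] @ x + mod["W_rec"] @ h + mod["b"])

def forward(vis_seq):
    """Run the three-module chain over a sequence of visual feature vectors."""
    h_aip = np.zeros(n_rec)
    h_f5 = np.zeros(n_rec)
    h_m1 = np.zeros(n_rec)
    muscles = []
    for x in vis_seq:                 # visual features of the object, per time step
        h_aip = step(h_aip, aip, x)   # parietal module sees the visual input
        h_f5 = step(h_f5, f5, h_aip)  # premotor module reads parietal activity
        h_m1 = step(h_m1, m1, h_f5)   # motor module reads premotor activity
        muscles.append(W_out @ h_m1)  # predicted muscle activations
    return np.array(muscles)

out = forward(rng.standard_normal((20, n_vis)))
print(out.shape)  # (20, 8): muscle dynamics across the trial
```

In the paper's setup such a network would be trained end-to-end so that the readout reproduces recorded muscle dynamics, after which each module's internal activity can be compared against recordings from the corresponding brain area.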

Dec 16, 2020

Computational imaging during video game playing shows dynamic synchronization of cortical and subcortical networks of emotions

Posted by in categories: computing, entertainment, neuroscience

Second, we chose two major appraisals with well-established roles in emotion elicitation, but interactive game paradigms could also investigate the neural basis of other appraisals (e.g., novelty, social norms). Furthermore, our study did not elucidate the precise cognitive mechanisms of particular appraisals or their neuroanatomical substrates but rather sought to dissect distinct brain networks underlying appraisals and other emotion components in order to assess any transient synchronization among them during emotion-eliciting situations. Importantly, even though different appraisals would obviously engage different brain networks, a critical assumption of the CPM is that synchronization between these networks and other components would arise through similar mechanisms as found here.

Third, our task design and event durations were chosen for fMRI settings, with blocked conditions and sufficient repetitions of similar trials. The limited temporal resolution of fMRI did not allow the investigation of faster, within-level dynamics which may be relevant to emotions. Additionally, this slow temporal resolution and our brain-based synchronization approach are insufficient to uncover fast and recurrent interactions among component networks during synchronization, as hypothesized by the CPM. Nonetheless, our computational model for the peripheral synchronization index did include recurrence as one of its parameters, allowing us to refine our model-based analysis of network synchronization in ways that explicitly take recurrent effects into account (see S1 Text and Table J in S1 Table). In any case, neither the correlation of a model-based peripheral index nor an instantaneous phase synchronization approach could fully verify this hypothesis at the neuronal level using fMRI. To address these limitations, future studies might employ other paradigms with different game events or other imaging analyses and methodologies with higher temporal resolution. Higher temporal resolution may also help shed light on causality factors hypothesized by the CPM, which could not be addressed here. Finally, our study focused on the 4 nonexperiential components of emotion, with feelings measured purely retrospectively for manipulation-check purposes. This approach was motivated conceptually by the point of view that an emotion can be characterized comprehensively by the combination of its nonexperiential parts [10] and methodologically by the choice to avoid self-report biases and dual-task conditions in our experimental setting. However, future work will be needed to link precise moments of component synchronization more directly to concurrent measures along relevant emotion dimensions, without task biases, as previously examined in purely behavioral research [20].

Nevertheless, by investigating emotions from a dynamic multi-componential perspective with interactive situations and model-based parameters, our study demonstrates the feasibility of a new approach to emotion research. We provide important new insights into the neural underpinnings of emotions in the human brain that support theoretical accounts of emotions as transient states emerging from embodied and action-oriented processes which govern adaptive responses to the environment. By linking transient synchronization between emotion components to specific brain hubs in basal ganglia, insula, and midline cortical areas that integrate sensorimotor, interoceptive, and self-relevant representations, respectively, our results provide a new cornerstone to bridge neuroscience with psychological and developmental frameworks in which affective functions emerge from a multilevel integration of both physical/bodily and psychological/cognitive processes [62].

Dec 16, 2020

Tailoring Magnetic Fields in Inaccessible Regions

Posted by in category: materials

Controlling magnetism, essential for a wide range of technologies, is impaired by the impossibility of generating a maximum of magnetic field in free space. Here, we propose a strategy based on negative permeability to overcome this stringent limitation. We experimentally demonstrate that an active magnetic metamaterial can emulate the field of a straight current wire at a distance. Our strategy leads to an unprecedented focusing of magnetic fields in empty space and enables the remote cancellation of magnetic sources, opening an avenue for manipulating magnetic fields in inaccessible regions.
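For reference, the field the metamaterial is said to emulate, that of an infinite straight current-carrying wire, falls off with distance as B = μ0 I / (2πr). A minimal numerical check of that profile (the current value is an arbitrary choice for illustration):

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability, T*m/A
I = 1.0             # current of the emulated wire, A (arbitrary example value)

def B_wire(r):
    """Azimuthal field magnitude of an infinite straight wire at distance r (m)."""
    return mu0 * I / (2 * np.pi * r)

r = np.array([0.01, 0.02, 0.04])  # distances in metres
print(B_wire(r))  # field falls off as 1/r: doubling r halves B
```

The point of the metamaterial strategy is to reproduce exactly this 1/r profile at a location remote from any physical wire, something a passive (positive-permeability) arrangement cannot do.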

Dec 16, 2020

Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification

Posted by in categories: information science, neuroscience

This work develops PSID, a dynamic modeling method to dissociate and prioritize neural dynamics relevant to a given behavior.

Dec 16, 2020

Color illusions also deceive CNNs for low-level vision tasks: Analysis and implications

Posted by in category: robotics/AI

The study of visual illusions has proven to be a very useful approach in vision science. In this work we start by showing that, while convolutional neural networks (CNNs) trained for low-level visual tasks in natural images may be deceived by brightness and color illusions, some network illusions can be inconsistent with human perception. Next, we analyze where these similarities and differences may come from. On one hand, the proposed linear eigenanalysis explains the overall similarities: in simple CNNs trained for tasks like denoising or deblurring, the linear version of the network has center-surround receptive fields, and global transfer functions are very similar to the human achromatic and chromatic contrast sensitivity functions in human-like opponent color spaces. These similarities are consistent with the long-standing hypothesis that considers low-level visual illusions as a by-product of the optimization to natural environments. Specifically, here human-like features emerge from error minimization. On the other hand, the observed differences must be due to the behavior of the human visual system not explained by the linear approximation. However, our study also shows that more ‘flexible’ network architectures, with more layers and a higher degree of nonlinearity, may actually be worse at reproducing visual illusions. This implies, in line with other works in the vision science literature, a word of caution on using CNNs to study human vision: on top of the intrinsic limitations of the L + NL formulation of artificial networks to model vision, the nonlinear behavior of flexible architectures may easily be markedly different from that of the visual system.
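The link between center-surround receptive fields and contrast-sensitivity-like transfer functions can be illustrated with a toy linear computation (a simplified sketch, not the paper's actual eigenanalysis of trained networks): the Fourier transform of a 1-D difference-of-Gaussians receptive field, with assumed center and surround widths, gives a band-pass transfer function that is low at both very low and very high spatial frequencies, the qualitative shape of a contrast sensitivity function:

```python
import numpy as np

# 1-D difference-of-Gaussians (center-surround) receptive field.
x = np.arange(-32, 32)

def gauss(sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()  # unit area, so center and surround balance

rf = gauss(1.5) - gauss(4.0)  # narrow excitatory center, broad inhibitory surround

# Transfer function = magnitude of the Fourier transform of the filter.
H = np.abs(np.fft.rfft(np.fft.ifftshift(rf)))
freqs = np.fft.rfftfreq(x.size)  # spatial frequency, cycles/sample

peak = freqs[np.argmax(H)]
print(H[0], peak)  # near-zero DC gain, band-pass peak at a mid frequency
```

Because both Gaussians are normalized to unit area, the filter's response to a uniform field (DC) cancels, and the response peaks at an intermediate spatial frequency, which is the band-pass behavior the linear analysis attributes to the trained networks.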

Dec 16, 2020

Become a Raspberry Pi and ROS Robotics Expert with This Bundle

Posted by in category: robotics/AI

Raspberry Pi and ROS Robotics are versatile, exciting tools that allow you to build many wondrous projects. However, they are not always the easiest systems to manage and use… until now.

The Ultimate Raspberry Pi & ROS Robotics Developer Super Bundle will turn you into a Raspberry Pi and ROS Robotics expert in no time. With over 39 hours of training across more than 15 courses, the bundle leaves no stone unturned.

There is almost nothing you won’t be able to do with your new-found bundle on Raspberry Pi and ROS Robotics.

Dec 15, 2020

A new particle, the ultralight boson, could swirl around black holes, releasing detectable gravitational waves

Posted by in categories: cosmology, particle physics

A hypothetical particle known as the ultralight boson could be responsible for our universe’s dark matter.

Dec 15, 2020

Trashed Bottles Upcycled Into Home Lighting

Posted by in category: sustainability

Empty Coke bottles are being turned into solar-powered light sources! They’re lighting the way for communities without access to electricity and upcycling plast…

Dec 15, 2020

Sea creature-inspired robot walks, rolls, transports cargo

Posted by in categories: biotech/medical, robotics/AI

Northwestern researchers have developed a first-of-its-kind soft, aquatic robot that is powered by light and rotating magnetic fields. These life-like robotic materials could someday be used as “smart” microscopic systems for production of fuels and drugs, environmental cleanup or transformative medical procedures.