Archive for the ‘robotics/AI’ category: Page 815

Apr 24, 2022

Electrostatic brakes make bendy robot arms a lot more efficient

Posted in category: robotics/AI

Replacing motors with electrostatic brakes can boost the energy-efficiency of robot limbs, although the robots are slower.

Apr 24, 2022

New “Electric Eye” Neuromorphic Artificial Vision Device Developed Using Nanotechnology

Posted in categories: nanotechnology, robotics/AI

Using nanotechnology, scientists have created a newly designed neuromorphic electronic device that endows microrobotics with colorful vision.

Researchers at Georgia State University have successfully designed a new type of artificial vision device that incorporates a novel vertical stacking architecture and allows for greater depth of color recognition and micro-level scaling. The new research study was published on April 18, 2022, in the top journal ACS Nano.

“This work is the first step toward our final destination: to develop a micro-scale camera for microrobots,” says assistant professor of Physics Sidong Lei, who led the research. “We illustrate the fundamental principle and feasibility to construct this new type of image sensor with emphasis on miniaturization.”

Apr 24, 2022

Here’s how automation could affect the relationship between us and our cars

Posted in categories: robotics/AI, transportation

An automated system called Guardian is being developed by the Toyota Research Institute to amplify human control in a vehicle, as opposed to removing it.


Here’s the scenario: A driver falls asleep at the wheel. But their car is equipped with a dashboard camera that detects the driver’s eye condition, activating a safety system that promptly guides the vehicle to a secure halt.

That’s not just an idea on the drawing board. The system, called Guardian, is being refined at the Toyota Research Institute (TRI), where MIT Professor John Leonard is helping steer the group’s work, while on leave from MIT. At the MIT Mobility Forum, Leonard and Avinash Balachandran, head of TRI’s Human-Centric Driving Research Department, presented an overview of their work.

Apr 24, 2022

The Day We Give Birth to AGI — Stuart Russell’s Warning About AI

Posted in categories: futurism, robotics/AI

Stuart Russell warns about the dangers involved in the creation of artificial intelligence, particularly artificial general intelligence, or AGI.
The idea of an artificial intelligence that might one day surpass human intelligence has captivated and terrified us for decades. Many envision what it would be like to create a machine that could think like a human, or even exceed our cognitive abilities. But, as with many novel technologies, building an AGI raises difficult problems. And what if we succeed? What would happen should our quest to create artificial intelligence bear fruit? How do we retain power over entities that are more intelligent than we are? The answer, of course, is that nobody knows for sure. But there are some logical conclusions we can draw from examining the nature of intelligence and the kinds of entities that might be capable of it.

Stuart Russell is a Professor of Computer Science at the University of California, Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He outlines the definition of AI and the risks and benefits it poses for the future. According to him, AGI is the most important intellectual problem to work on.

Apr 24, 2022

How Much Oxalate Is Too Much? n=1 Analysis

Posted in categories: biotech/medical, robotics/AI

Join us on Patreon!
https://www.patreon.com/MichaelLustgartenPhD

Papers referenced in the video:
Dietary oxalate to calcium ratio and incident cardiovascular events: a 10-year follow-up among an Asian population.
https://pubmed.ncbi.nlm.nih.gov/35346210/

Apr 24, 2022

ATLAS strengthens its search for supersymmetry

Posted in categories: cosmology, particle physics, robotics/AI

Where is all the new physics? In the decade since the Higgs boson’s discovery, there have been no statistically significant hints of new particles in data from the Large Hadron Collider (LHC). Could they be sneaking past the standard searches? At the recent Rencontres de Moriond conference, the ATLAS collaboration at the LHC presented several results of novel types of searches for particles predicted by supersymmetry.

Supersymmetry, or SUSY for short, is a promising theory that gives each elementary particle a “superpartner”, thus solving several problems in the current Standard Model of particle physics and even providing a possible candidate for dark matter. ATLAS’s new searches targeted charginos and neutralinos – the heavy superpartners of force-carrying particles in the Standard Model – and sleptons – the superpartners of Standard Model matter particles called leptons. If produced at the LHC, these particles would each transform, or “decay”, into Standard Model particles and the lightest neutralino, which does not further decay and is taken to be the dark-matter candidate.

ATLAS’s newest search for charginos and sleptons studied a particle-mass region previously unexplored due to a challenging background of Standard Model processes that mimics the signals from the sought-after particles. The ATLAS researchers designed dedicated searches for each of these SUSY particle types, using all the data recorded from Run 2 of the LHC and looking at the particles’ decays into two charged leptons (electrons or muons) and “missing energy” attributed to neutralinos. They used new methods to extract the putative signals from the background, including machine-learning techniques and “data-driven” approaches.

Apr 23, 2022

A self-driving revolution? We’re barely out of second gear

Posted in categories: government, mobile phones, robotics/AI, transportation

“Britain moves closer to a self-driving revolution,” said a perky message from the Department for Transport that popped into my inbox on Wednesday morning. The purpose of the message was to let us know that the government is changing the Highway Code to “ensure the first self-driving vehicles are introduced safely on UK roads” and to “clarify drivers’ responsibilities in self-driving vehicles, including when a driver must be ready to take back control”.

The changes will specify that while travelling in self-driving mode, motorists must be ready to resume control in a timely way if they are prompted to, such as when they approach motorway exits. They also signal a puzzling change to current regulations, allowing drivers “to view content that is not related to driving on built-in display screens while the self-driving vehicle is in control”. So you could watch Gardeners’ World on iPlayer, but not YouTube videos of F1 races? Reassuringly, though, it will still be illegal to use mobile phones in self-driving mode, “given the greater risk they pose in distracting drivers as shown in research”.

Apr 23, 2022

Covid has reset relations between people and robots

Posted in categories: employment, robotics/AI

An awful lot of meetings lie ahead for roboticists and regulators to determine how machines and people will work together.


Machines will do the nasty jobs; human beings the nice ones.

Apr 23, 2022

Quantifying arousal and awareness in altered states of consciousness using interpretable deep learning

Posted in categories: biotech/medical, robotics/AI

The classical neurophysiological approach for calculating PCI, power spectral density, and spectral exponent relies on many epochs to improve the reliability of statistical estimates of these indices [21]. However, these methods are suitable only for investigating averaged brain states, and they can clarify only general neurophysiological aspects. Machine learning (ML) allows decoding and identifying specific brain states and discriminating them from unrelated brain signals, even in a single trial in real time [22]. This can potentially transform statistical results at the group level into individual predictions [9]. Deep neural networks, a popular ML approach, have been employed to classify or predict brain states using EEG data [23]. In particular, the convolutional neural network (CNN) is the most extensively used deep-learning technique and has proven effective in the classification of EEG data [24]. However, a CNN cannot provide information on why it made a particular prediction [25]. Recently, layer-wise relevance propagation (LRP) has successfully demonstrated why classifiers such as CNNs make a specific decision [26]. Specifically, the relevance score resulting from LRP indicates the contribution of each input variable to the classification or prediction decision. Thus, a high score in a particular area of an input variable implies that the classifier made the classification or prediction using this feature. For example, neurophysiological data suggest that the left motor region is activated during right-hand motor imagery [27]. LRP indicates that the neural network classifies EEG data as right-hand motor imagery because of the activity of the left motor region [28]; accordingly, the relevance score is higher in the left motor region than in other regions. Thus, it is possible to interpret the neurophysiological phenomena underlying the decisions of CNNs using LRP.
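To make LRP's backward redistribution of relevance concrete, here is a minimal sketch of the epsilon rule for a single linear layer. The function name, the weights, and the toy "electrode" activations are invented for illustration; a real CNN applies such a rule backward through every layer of the trained network.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """Redistribute output relevance onto the inputs with the LRP epsilon rule.

    Each input receives a share proportional to its contribution
    a_i * w_ij to the pre-activation z_j; eps stabilizes small z_j.
    """
    z = activations @ weights           # pre-activations, shape (n_out,)
    z = z + eps * np.sign(z)            # stabilizer avoids division by zero
    s = relevance_out / z               # per-output scaling factors
    return activations * (weights @ s)  # relevance assigned to each input

# Toy layer: 3 "electrode" inputs feeding 2 output units.
a = np.array([0.5, 1.0, 0.2])
W = np.array([[1.0, -0.5],
              [0.8,  0.3],
              [-0.2, 0.9]])
R_out = np.array([1.0, 0.0])            # all relevance placed on class 0
R_in = lrp_epsilon(W, a, R_out)
print(R_in, R_in.sum())                 # input relevances sum to ~1.0 (conservation)
```

Inputs with large positive relevance are the features the classifier relied on; in the motor-imagery example above, electrodes over the left motor region would receive the highest scores.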

In this work, we develop a metric, called the explainable consciousness indicator (ECI), to simultaneously quantify the two components of consciousness, arousal and awareness, using a CNN. The processed time-series EEG data were used as the input of the CNN. Unlike PCI, which relies on source modeling and permutation-based statistical analysis, ECI uses event-related potentials at the sensor level for spatiotemporal dynamics and ML approaches. For a generalized model, we used the leave-one-participant-out (LOPO) approach for transfer learning, a type of ML that transfers information to a new participant not included in the training phase [24,27]. The proposed indicator is a 2D value consisting of indicators of arousal (ECIaro) and awareness (ECIawa). First, we used TMS–EEG data collected from healthy participants during NREM sleep with no subjective experience, REM sleep with subjective experience, and healthy wakefulness to consider each component of consciousness (i.e., low/high arousal and low/high awareness), with the aim of analyzing correlations between the proposed ECI and the three states, namely NREM, REM, and wakefulness. Next, we measured ECI using TMS–EEG data collected under general anesthesia with ketamine, propofol, and xenon, again with the aim of measuring correlation with these three anesthetics. Before anesthesia, TMS–EEG data were also recorded during healthy wakefulness. Upon awakening, healthy participants reported conscious experience during ketamine-induced anesthesia and no conscious experience during propofol- and xenon-induced anesthesia. Finally, TMS–EEG data were collected from patients with disorders of consciousness (DoC), including patients diagnosed with UWS and MCS. We hypothesized that the proposed ECI can clearly distinguish between the two components of consciousness under physiological, pharmacological, and pathological conditions.
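The LOPO scheme described above can be sketched as follows; the participant IDs and epoch counts are hypothetical, and the point is simply that each participant serves exactly once as the unseen test subject while the model trains on everyone else.

```python
def lopo_splits(participant_ids):
    """Leave-one-participant-out: yield (held_out, train_idx, test_idx),
    where the held-out participant's epochs never appear in training."""
    for held_out in sorted(set(participant_ids)):
        train_idx = [i for i, p in enumerate(participant_ids) if p != held_out]
        test_idx = [i for i, p in enumerate(participant_ids) if p == held_out]
        yield held_out, train_idx, test_idx

# Six EEG epochs recorded from three participants.
ids = ["s1", "s1", "s2", "s2", "s3", "s3"]
for held_out, train_idx, test_idx in lopo_splits(ids):
    print(held_out, train_idx, test_idx)  # e.g. s1 [2, 3, 4, 5] [0, 1]
```

Grouping splits by participant rather than by epoch is what makes the evaluation a test of generalization to new individuals; scikit-learn's `LeaveOneGroupOut` implements the same idea.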

To verify the proposed indicator, we next compared ECIawa with PCI, which is a reliable index of consciousness. Then, we applied ECI to additional resting-state EEG data acquired from the anesthetized participants and patients with DoC. We hypothesize that if the CNN can learn characteristics related to consciousness, it could calculate ECI accurately even without TMS in the proposed framework. In terms of clinical applicability, it is important to use the classifier from the previous LOPO training on the old data to classify the new data (without additional training). Therefore, we computed ECI in patients with DoC using a hold-out approach [29], where training data and evaluation data are arbitrarily divided, instead of cross-validation. Finally, we investigated why the classifier generated these decisions, using LRP to interpret ECI [30].

Apr 23, 2022

Elon Musk says Tesla’s humanoid Optimus robot ‘will be worth more than the car business’

Posted in categories: business, Elon Musk, robotics/AI, transportation

Tesla first announced the robot last summer, and says the first models will arrive next year.
