Elon Musk discussed a humanoid robot Tesla is making called Optimus, saying, “We could download the things that we believe make ourselves so unique.”
The longest drone corridor in the world could be built in the U.K., linking multiple cities across the country and representing the most ambitious British transport project since the construction of the railway network in the 19th century.
Researchers at the University of Tokyo have developed a new machine learning system for a two-armed robot – enabling it to identify, pick up, and peel a banana.
The manipulation of deformable objects, such as soft fruits, is problematic for robots because of difficulties in object modelling and a lack of knowledge about the type of force and dexterity needed.
Heecheol Kim, a researcher in the Intelligent Systems and Informatics Laboratory, University of Tokyo, worked with colleagues to develop a system based on “goal-conditioned dual-action deep imitation learning (DIL)”. This can learn dexterous manipulation skills using human demonstration data.
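The paper's full architecture is far more involved, but the core idea of a goal-conditioned, dual-action policy (a coarse "global" reach when far from the target, a fine "local" action when near it) learned from demonstrations can be sketched roughly as follows. Everything here is an illustrative assumption: the toy 2-D task, the 0.2 distance threshold, and the linear least-squares policies are stand-ins for the authors' deep networks, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy demonstration data: state (gripper position), goal (e.g. banana-tip
# position), expert action. Far from the goal the demonstrator "reaches"
# coarsely (global phase); near it, they make fine corrections (local phase).
def expert_action(state, goal):
    delta = goal - state
    if np.linalg.norm(delta) > 0.2:           # global phase: coarse reach
        return np.clip(delta, -0.1, 0.1)
    return 0.5 * delta                        # local phase: fine manipulation

states = rng.uniform(-1, 1, size=(500, 2))
goals = rng.uniform(-1, 1, size=(500, 2))
actions = np.array([expert_action(s, g) for s, g in zip(states, goals)])

# Goal-conditioned behaviour cloning: fit one linear policy head per phase
# on [state, goal] features via least squares.
X = np.hstack([states, goals])
far = np.linalg.norm(goals - states, axis=1) > 0.2
W_global, *_ = np.linalg.lstsq(X[far], actions[far], rcond=None)
W_local, *_ = np.linalg.lstsq(X[~far], actions[~far], rcond=None)

def policy(state, goal):
    """Dual-action policy: select the global or local head by goal distance."""
    x = np.concatenate([state, goal])
    W = W_global if np.linalg.norm(goal - state) > 0.2 else W_local
    return x @ W
```

Rolling the learned policy out from any start state steps the gripper toward the commanded goal, illustrating how conditioning on the goal lets one cloned policy serve many targets.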
American Robotics CEO Reese Mozer has no beef with drone deliveries, but he thinks all the hoopla surrounding aerial transport of burgers and burritos is drowning out news about farther-reaching UAV activities that are dramatically changing the way businesses operate. He tells DroneDJ about that transformative innovation, and how American Robotics’s (AR) leading role in the complete automation of critical drone services to industry is set to take wing.
Nevertheless, Mozer adds, those delivery flights manage to generate sufficient media and public excitement to divert attention from the more complex, vital, and – in total financial terms – more valuable surveying and inspection services drone automation provides to heavy industry, energy, railroad, and infrastructure operators. And that’s precisely the UAV sector activity he predicts will begin taking off and turning heads this year.
Combining AI and robotics technology, researchers have identified new cellular characteristics of Parkinson’s disease in skin cell samples from patients.
Source: New York Stem Cell Foundation
A study published today in Nature Communications unveils a new platform for discovering cellular signatures of disease that integrates robotic systems for studying patient cells with artificial intelligence methods for image analysis.
Using their automated cell culture platform, scientists at the NYSCF Research Institute collaborated with Google Research to successfully identify new cellular hallmarks of Parkinson’s disease by creating and profiling over a million images of skin cells from a cohort of 91 patients and healthy controls.
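The NYSCF/Google pipeline profiles real microscopy images with deep learning; purely as a hedged illustration of what image-based phenotype profiling means, the sketch below extracts simple hand-crafted features from synthetic "cell images" and separates two simulated classes with a nearest-centroid rule. The data, the texture difference, and the features are all fabricated stand-ins, not the study's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for cell images: the "disease" class is simulated with
# slightly different texture statistics than controls (purely illustrative).
def fake_cell_image(diseased):
    base = rng.normal(0.5, 0.1, size=(32, 32))
    if diseased:
        base += rng.normal(0.0, 0.05, size=(32, 32))  # extra "texture"
    return np.clip(base, 0, 1)

def profile(img):
    """Hand-crafted morphological profile; real pipelines use deep embeddings."""
    gx, gy = np.gradient(img)
    return np.array([img.mean(), img.std(), np.abs(gx).mean(), np.abs(gy).mean()])

labels = rng.integers(0, 2, 200)
X = np.array([profile(fake_cell_image(lab)) for lab in labels])

# Nearest-centroid classifier in profile space: a cell is assigned to the
# class whose mean profile it sits closest to.
c0, c1 = X[labels == 0].mean(axis=0), X[labels == 1].mean(axis=0)
predict = lambda f: int(np.linalg.norm(f - c1) < np.linalg.norm(f - c0))

# Held-out accuracy on freshly simulated cells.
acc = np.mean([predict(profile(fake_cell_image(lab))) == lab
               for lab in rng.integers(0, 2, 200)])
```

The point of the sketch is the shape of the workflow: turn each image into a numeric profile, then look for a statistical signature that separates patient cells from controls, which is what the study did at scale with over a million images.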
AI and neuroscience share many similarities. While artificial neural networks mimic the human brain, human behaviour also offers abundant data from which artificial intelligence can learn to model the human mind.
Technique allows researchers to toggle on individual genes that regulate cell growth, development, and function.
By combining CRISPR technology with a protein designed with artificial intelligence, it is possible to awaken individual dormant genes by disabling the chemical “off switches” that silence them. Researchers from the University of Washington School of Medicine in Seattle describe this finding in the journal Cell Reports.
The approach will allow researchers to understand the role individual genes play in normal cell growth and development, in aging, and in diseases such as cancer, said Shiri Levy, a postdoctoral fellow at the UW Institute for Stem Cell and Regenerative Medicine (ISCRM) and the lead author of the paper.
NVIDIA today introduced Clara Holoscan MGX™, a platform for the medical device industry to develop and deploy real-time AI applications at the edge, specifically designed to meet required regulatory standards.
Mind-body philosophy | solving the hard problem of consciousness.
Recent advances in science and technology have allowed us to reveal — and in some cases even alter — the innermost workings of the human body. With electron microscopes, we can see our DNA, the source code of life itself. With nanobots, we can send cameras throughout our bodies and deliver drugs directly into the areas where they are most needed. We are even using artificially intelligent robots to perform surgeries on ourselves with unprecedented precision and accuracy.
Materialism says that the cosmos, and all that it contains, is an objective physical reality. As a result, philosophers who subscribe to this school of thought assert that consciousness, and all that it entails, arises from material interactions. As such, the material world (our flesh, neurons, synapses, etc.) is what creates consciousness.
Idealism says that the universe is entirely subjective and that reality is something that is mentally constructed. In other words, consciousness is something that is immaterial and cannot be observed or measured empirically. Since consciousness is what creates the material world, according to this school of thought, it is unclear if we can ever truly know anything that is mind-independent and beyond our subjective experience.
Dualism essentially holds that mental phenomena are, in some respects, non-physical in nature. On this view, the mind and the body both exist, but they are distinct and separable.
Although most modern philosophers subscribe to the materialist view, determining, and ultimately understanding, the nature of human consciousness using an empirical methodology is a remarkably difficult task. The primary obstacle is that empirical science requires things to be measured objectively, and when it comes to consciousness, everything is subjective.
In recent decades, machine learning and deep learning algorithms have become increasingly advanced, so much so that they are now deployed in a variety of real-world settings. More recently, some computer scientists and electronics engineers have been exploring an alternative class of artificial intelligence (AI) tools known as diffractive optical neural networks.
Diffractive optical neural networks are deep neural networks based on diffractive optical technology (i.e., lenses or other components that can alter the phase of light propagating through them). While these networks have been found to achieve ultra-fast computing speeds and high energy efficiencies, they are typically very difficult to program and adapt to different use cases.
Researchers at Southeast University, Peking University and Pazhou Laboratory in China have recently developed a diffractive deep neural network that can be easily programmed to complete different tasks. Their network, introduced in a paper published in Nature Electronics, is based on a flexible and multi-layer metasurface array.
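Numerically, a diffractive network of this kind can be modelled as alternating free-space propagation and programmable phase masks, which is the role a reconfigurable metasurface layer plays physically. The sketch below is a generic angular-spectrum simulation under assumed parameters (wavelength, pixel pitch, layer spacing), not the device from the paper; in a trained network the phase values would be optimized for a task rather than drawn at random.

```python
import numpy as np

N, wavelength, dx, z = 64, 532e-9, 10e-6, 5e-3  # grid, green light, pitch, gap

def propagate(field, z):
    """Free-space propagation via the angular spectrum method."""
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0  # evanescent components do not propagate
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
# Each "layer" is a programmable phase mask (what a metasurface reconfigures);
# here the phases are random, whereas training would optimize them.
phase_masks = [rng.uniform(0, 2 * np.pi, (N, N)) for _ in range(3)]

field = np.zeros((N, N), dtype=complex)
field[28:36, 28:36] = 1.0                      # input: a small illuminated square

for phi in phase_masks:                        # layer = propagate, then phase-shift
    field = propagate(field, z) * np.exp(1j * phi)
intensity = np.abs(propagate(field, z)) ** 2   # detector reads out intensity
```

Because both propagation and phase masks only redistribute the light, total optical power is conserved through the stack; reprogramming the network amounts to writing new phase patterns onto the layers, which is what makes a flexible metasurface array attractive.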