Archive for the ‘robotics/AI’ category: Page 1645

Aug 28, 2020

White House Announces $1 Billion Plan to Create AI, Quantum Institutes

Posted by in categories: quantum physics, robotics/AI

The White House on Wednesday will announce that federal agencies and their private sector partners are committing more than $1 billion over the next five years to establish 12 new research institutes focused on artificial intelligence and quantum information sciences.


The effort is designed to ensure the U.S. remains globally competitive in AI and quantum technologies, administration officials said.

Aug 28, 2020

How robots could help save underwater ecosystems

Posted by in category: robotics/AI

Why underwater robots are vacuuming up lionfish.

Aug 28, 2020

Tiny battery for micro robots

Posted by in categories: futurism, robotics/AI

This tiny battery could change the game for micro robots.

Aug 28, 2020

Superluminal Motion-Assisted 4-Dimensional Light-in-Flight Imaging

Posted by in categories: information science, mathematics, physics, robotics/AI

Abstract: Advances in high-speed imaging techniques have opened new possibilities for capturing ultrafast phenomena such as light propagation in air or through media. Capturing light-in-flight in 3-dimensional xyt-space has been reported based on various types of imaging systems, whereas reconstruction of light-in-flight information in the fourth dimension z has remained a challenge. We demonstrate the first 4-dimensional light-in-flight imaging based on the observation of superluminal motion captured by a new time-gated megapixel single-photon avalanche diode camera. A high-resolution light-in-flight video is generated without laser scanning, camera translation, interpolation, or dark-noise subtraction. A machine learning technique is applied to analyze the measured spatio-temporal data set. A theoretical formula is introduced to perform least-squares regression, and extra-dimensional information is recovered without prior knowledge. The algorithm relies on a mathematical formulation equivalent to that of superluminal motion in astrophysics, scaled by a factor of one quadrillionth. The reconstructed light-in-flight trajectory shows good agreement with the actual geometry of the light path. Our approach could provide novel functionality for high-speed imaging applications such as non-line-of-sight imaging and time-resolved optical tomography.
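
The fourth dimension here is recovered from how fast the light pulse appears to sweep across the sensor: a pulse travelling toward the camera at an angle can appear to move faster than light, exactly as in astrophysical superluminal motion, and that apparent speed encodes the out-of-plane angle. As a rough illustration only (not the authors' code; the model, noise level, and parameter names below are assumptions), such an angle can be recovered from a measured track by least-squares regression:

```python
# Sketch: recover the out-of-plane angle of a light path from its apparent
# (projected) speed, using the superluminal-motion relation
#   v_app = c * sin(theta) / (1 - cos(theta))
# and ordinary least-squares fitting. Illustrative only.
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # speed of light, m/s

def apparent_position(t, theta, offset):
    """Projected position of a pulse travelling at c along a path tilted by
    `theta` toward the camera, as seen in the image plane."""
    v_app = C * np.sin(theta) / (1.0 - np.cos(theta))
    return offset + v_app * t

def residuals(params, t, x_obs):
    theta, offset = params
    return apparent_position(t, theta, offset) - x_obs

# Synthetic "measured" track: a pulse approaching at 30 degrees, 2 ns of frames.
t = np.linspace(0.0, 2e-9, 50)
x_obs = apparent_position(t, np.deg2rad(30.0), 0.0)
x_obs += np.random.normal(0.0, 1e-3, t.size)        # detector noise (metres)

fit = least_squares(residuals, x0=[np.deg2rad(45.0), 0.0], args=(t, x_obs))
print("recovered angle:", np.degrees(fit.x[0]), "degrees")
```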

Aug 28, 2020

How to make AI trustworthy

Posted by in categories: information science, robotics/AI, transportation

One of the biggest impediments to the adoption of new technologies is a lack of trust in AI.

Now, a new tool developed by USC Viterbi Engineering researchers generates automatic indicators of whether the data and predictions produced by AI algorithms are trustworthy. Their paper, “There Is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks,” by Mingxi Cheng, Shahin Nazarian and Paul Bogdan of the USC Cyber Physical Systems Group, was featured in Frontiers in Artificial Intelligence.

Neural networks are a type of artificial intelligence modeled after the brain that generates predictions. But can those predictions be trusted? One of the key barriers to the adoption of self-driving cars is that the vehicles must act as independent decision-makers on auto-pilot: they need to quickly decipher and recognize objects on the road, whether a speed bump, an inanimate object, a pet or a child, and decide how to act if another vehicle swerves toward them.
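
The article does not spell out how the tool computes its trust indicators. As a generic stand-in (not the authors' method; the threshold and function names below are illustrative), one simple way to flag questionable predictions is to measure the entropy of a classifier's softmax output:

```python
# Generic sketch: flag low-confidence predictions by the normalized entropy of
# the softmax output. High entropy means probability is spread across classes,
# a simple proxy for "do not trust this prediction on its own".
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def trust_indicator(logits, threshold=0.5):
    """Return (probabilities, trusted) for a batch of raw network outputs.
    `threshold` on normalized entropy is an illustrative choice."""
    p = softmax(np.asarray(logits, dtype=float))
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
    normalized = entropy / np.log(p.shape[-1])   # 0 = certain, 1 = uniform
    return p, normalized < threshold

# e.g. three road-object classes: speed bump, pet, child
print(trust_indicator([[4.0, 0.2, -1.0],    # peaked output  -> trusted
                       [0.6, 0.5, 0.4]]))   # nearly uniform -> flagged
```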

Aug 28, 2020

Scientists use reinforcement learning to train quantum algorithm

Posted by in categories: chemistry, information science, quantum physics, robotics/AI, supercomputing

Recent advancements in quantum computing have driven the scientific community’s quest to solve a certain class of complex problems for which quantum computers would be better suited than traditional supercomputers. To improve the efficiency with which quantum computers can solve these problems, scientists are investigating the use of artificial intelligence approaches.

In a new study, scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have developed a technique based on reinforcement learning to find the optimal parameters for the Quantum Approximate Optimization Algorithm (QAOA), which allows a quantum computer to solve certain combinatorial problems such as those that arise in materials design, chemistry and wireless communications.

“Combinatorial optimization problems are those for which the solution space gets exponentially larger as you expand the number of decision variables,” said Argonne scientist Prasanna Balaprakash. “In one traditional example, you can find the shortest route for a salesman who needs to visit a few cities once by enumerating all possible routes, but given a couple thousand cities, the number of possible routes far exceeds the number of stars in the universe; even the fastest supercomputers cannot find the shortest route in a reasonable time.”
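
To make that growth concrete: the number of distinct round-trip routes through n cities is (n - 1)!/2, and even a few dozen cities already exceed the roughly 10^22 to 10^24 stars usually estimated for the observable universe. A quick check (illustrative only):

```python
# The combinatorial blow-up behind the travelling-salesman example:
# (n - 1)! / 2 distinct round-trip routes through n cities.
import math

def tour_count(n_cities):
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 20, 60):
    print(f"{n:>3} cities -> {tour_count(n):.3e} possible routes")
```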

Aug 28, 2020

A 26-layer convolutional neural network for human action recognition

Posted by in categories: information science, robotics/AI

Deep learning algorithms, such as convolutional neural networks (CNNs), have achieved remarkable results on a variety of tasks, including those that involve recognizing specific people or objects in images. A task that computer scientists have often tried to tackle using deep learning is vision-based human action recognition (HAR), which specifically entails recognizing the actions of humans who have been captured in images or videos.

Researchers at HITEC University and Foundation University Islamabad in Pakistan, Sejong University and Chung-Ang University in South Korea, University of Leicester in the UK, and Prince Sultan University in Saudi Arabia have recently developed a new CNN for recognizing human actions in videos. This CNN, presented in a paper published in Springer Link’s Multimedia Tools and Applications journal, was trained to differentiate between several different human actions, including boxing, clapping, waving, jogging, running and walking.

“We designed a new 26-layered convolutional neural network (CNN) architecture for accurate complex action recognition,” the researchers wrote in their paper. “The features are extracted from the global average pooling layer and fully connected (FC) layer and fused by a proposed high entropy-based approach.”
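
As a rough sketch of the feature-extraction idea described in that quote (not the paper's 26-layer architecture; the layer sizes are made up, and plain concatenation stands in for the entropy-based fusion), a PyTorch model can expose both a global-average-pooling feature and a fully connected feature and fuse them before classification:

```python
# Minimal sketch of GAP + FC feature extraction and fusion for action classes.
import torch
import torch.nn as nn

class TinyActionNet(nn.Module):
    def __init__(self, num_classes=6):          # boxing, clapping, waving, ...
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)       # global average pooling
        self.fc = nn.Linear(64, 128)             # fully connected feature layer
        self.classifier = nn.Linear(64 + 128, num_classes)

    def forward(self, frames):
        x = self.backbone(frames)
        gap_feat = self.gap(x).flatten(1)        # features from the GAP layer
        fc_feat = torch.relu(self.fc(gap_feat))  # features from the FC layer
        fused = torch.cat([gap_feat, fc_feat], dim=1)  # simple fusion stand-in
        return self.classifier(fused)

model = TinyActionNet()
logits = model(torch.randn(2, 3, 112, 112))      # two video frames
print(logits.shape)                              # torch.Size([2, 6])
```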

Aug 28, 2020

Robot Skin 3D Printer Close to First-in-Human Clinical Trials

Posted by in categories: 3D printing, bioprinting, biotech/medical, government, health, robotics/AI

In just two years a robotic device that prints a patient’s own skin cells directly onto a burn or wound could have its first-in-human clinical trials. The 3D bioprinting system for intraoperative skin regeneration developed by Australian biotech start-up Inventia Life Science has gained new momentum thanks to major investments from the Australian government and two powerful new partners, world-renowned burns expert Fiona Wood and leading bioprinting researcher Gordon Wallace.

Codenamed Ligō, from the Latin for “to bind,” the system is expected to revolutionize wound repair by delivering multiple cell types and biomaterials rapidly and precisely, creating a new layer of skin where it has been damaged. The novel system is slated to replace current wound-healing methods that simply attempt to repair the skin, and it is being developed by Inventia Skin, a subsidiary of Inventia Life Science.

“When we started Inventia Life Science, our vision was to create a technology platform with the potential to bring enormous benefit to human health. We are pleased to see how fast that vision is progressing alongside our fantastic collaborators. This Federal Government support will definitely help us accelerate even faster,” said Dr. Julio Ribeiro, CEO, and co-founder of Inventia.

Aug 28, 2020

Elon Musk is one step closer to connecting a computer to your brain

Posted by in categories: Elon Musk, robotics/AI

Neuralink is building a brain-machine interface as well as a little robot that installs it into your skull.

Aug 28, 2020

Google conducts largest chemical simulation on a quantum computer to date

Posted by in categories: chemistry, particle physics, quantum physics, robotics/AI

A team of researchers with Google’s AI Quantum team (working with unspecified collaborators) has conducted the largest chemical simulation on a quantum computer to date. In their paper published in the journal Science, the group describes their work and why they believe it was a step forward in quantum computing. Xiao Yuan of Stanford University has written a Perspective piece, published in the same journal issue, outlining the potential benefits of using quantum computers for chemical simulations and discussing the work by the team at AI Quantum.

Developing the ability to predict chemical reactions by simulating them on computers would be of great benefit to chemists; currently, they do most of this work through trial and error. Such predictions would open the door to the development of a wide range of new materials with still-unknown properties. Unfortunately, the resources classical computers need for such simulations scale exponentially with the size of the system, putting them out of reach. Because of that, chemists have been hoping quantum computers will one day step in to take on the role.
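
The scaling problem is easy to see in a back-of-the-envelope way: exactly representing the quantum state of n spin-orbitals (qubits) takes 2^n complex amplitudes, so memory alone blows up long before a molecule of any interesting size is reached. The figures below are just this arithmetic, not from the article:

```python
# Memory needed to store the full state vector of n qubits at double precision:
# 2**n complex amplitudes, 16 bytes each.
def state_vector_gib(n_qubits):
    return 16 * 2**n_qubits / 2**30   # GiB

for n in (12, 30, 50, 80):
    print(f"{n:>3} qubits -> {state_vector_gib(n):.3e} GiB")
```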

Current quantum computer technology is not yet ready to take on such a challenge, of course, but computer scientists are hoping to get them there sometime in the near future. In the meantime, big companies like Google are investing in research geared toward using quantum computers once they mature. In this new effort, the team at AI Quantum focused their efforts on simulating a simple chemical process—the Hartree-Fock approximation of a real system—in this particular case, a diazene molecule undergoing a reaction with hydrogen atoms, resulting in an altered configuration.