Archive for the ‘robotics/AI’ category: Page 1432

Oct 17, 2020

NASA Selects Partner to Land Water-Measuring Payload on the Moon

Posted by in categories: robotics/AI, space, sustainability

NASA has awarded Intuitive Machines of Houston approximately $47 million to deliver a drill combined with a mass spectrometer to the Moon by December 2022 under the agency’s Commercial Lunar Payload Services initiative. The delivery of the Polar Resources Ice Mining Experiment known as PRIME-1 will help NASA search for ice at the Moon’s South Pole and, for the first time, harvest ice from below the surface.

“We continue to rapidly select vendors from our pool of CLPS vendors to land payloads on the lunar surface, which exemplifies our work to integrate the ingenuity of commercial industry into our efforts at the Moon,” said NASA’s Associate Administrator for Science Thomas Zurbuchen. “The information we’ll gain from PRIME-1 and other science instruments and technology demonstrations we’re sending to the lunar surface will inform our Artemis missions with astronauts and help us better understand how we can build a sustainable lunar presence.”

PRIME-1 will land on the Moon and drill up to 3 feet (approximately 1 meter) below the surface. Using a mass spectrometer, it will measure how much of the ice in the sample is lost to sublimation as it turns from a solid to a vapor in the vacuum of the lunar environment. Versions of PRIME-1’s drill and the Mass Spectrometer Observing Lunar Operations, or MSolo, will also fly on VIPER, a mobile robot that will also search for ice at the lunar South Pole in 2023. NASA will land the first woman and next man at the Moon’s South Pole the following year.
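At its core, the sublimation measurement compares how much sampled ice survives exposure to vacuum. A minimal back-of-envelope sketch in Python; the function name and the masses are made up for illustration, not part of the PRIME-1 design:

```python
def sublimated_fraction(mass_before_g: float, mass_after_g: float) -> float:
    """Fraction of the sampled ice lost to sublimation in vacuum."""
    if mass_before_g <= 0:
        raise ValueError("initial mass must be positive")
    lost = mass_before_g - mass_after_g
    return max(0.0, lost / mass_before_g)

# Hypothetical numbers: a 10 g icy sample that retains 7.5 g after exposure.
print(sublimated_fraction(10.0, 7.5))  # 0.25
```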

Oct 17, 2020

AI that scans a construction site can spot when things are falling behind

Posted by in categories: robotics/AI, transportation

Construction sites are vast jigsaws of people and parts that must be pieced together just so at just the right times. As projects get larger, mistakes and delays get more expensive. The consultancy McKinsey estimates that on-site mismanagement costs the construction industry $1.6 trillion a year. But typically you might have only five managers overseeing construction of a building with 1,500 rooms, says Roy Danon, founder and CEO of British-Israeli startup Buildots: “There’s no way a human can control that amount of detail.”

Danon thinks that AI can help. Buildots is developing an image recognition system that monitors every detail of an ongoing construction project and flags up delays or errors automatically. It is already being used by two of the biggest building firms in Europe, including UK construction giant Wates in a handful of large residential builds. Construction is essentially a kind of manufacturing, says Danon. If high-tech factories now use AI to manage their processes, why not construction sites?

AI is starting to change various aspects of construction, from design to self-driving diggers. Some companies even provide a kind of overall AI site inspector that matches images taken on site against a digital plan of the building. Now Buildots is making that process easier than ever by using video footage from GoPro cameras mounted on the hard hats of workers.
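The flagging step boils down to comparing what the cameras have confirmed on site against what the digital plan says should be done by now. A toy sketch of that comparison; the element names, dates, and function are invented for illustration, not Buildots' system:

```python
from datetime import date

# Hypothetical plan: each building element maps to its installation due date.
plan = {
    "room-101/drywall": date(2020, 10, 1),
    "room-101/wiring":  date(2020, 10, 10),
    "room-102/drywall": date(2020, 10, 5),
}

# Hypothetical detections from helmet-camera footage: elements seen complete.
observed_complete = {"room-101/drywall"}

def flag_delays(plan, observed_complete, today):
    """Return elements past their due date that footage has not confirmed."""
    return sorted(e for e, due in plan.items()
                  if due < today and e not in observed_complete)

print(flag_delays(plan, observed_complete, date(2020, 10, 17)))
# ['room-101/wiring', 'room-102/drywall']
```

The hard part in practice is producing `observed_complete` from raw video, which is where the image recognition comes in; the bookkeeping afterwards is as simple as above.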

Oct 17, 2020

All-terrain microrobot flips through a live colon

Posted by in categories: biotech/medical, robotics/AI

A rectangular robot as tiny as a few human hairs can travel throughout a colon by doing back flips, Purdue University engineers have demonstrated in live animal models.

Why the back flips? Because the goal is to use these robots to transport drugs in humans, whose colons and other organs have rough terrain. Side flips work, too.

Why a back-flipping robot to transport drugs? Getting a drug directly to its target site could remove side effects, such as hair loss or stomach bleeding, that the drug may otherwise cause by interacting with other organs along the way.

Oct 17, 2020

Artificial Intelligence Learns to Learn Entirely on Its Own

Posted by in category: robotics/AI

A new version of AlphaGo needed no human instruction to figure out how to clobber the best Go player in the world — itself.
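"No human instruction" means the system improves purely by playing against itself. A toy illustration of self-play in Python, using exhaustive game-tree evaluation on the simple game of Nim rather than AlphaGo's neural-network-guided search; the game and function names are illustrative assumptions:

```python
from functools import lru_cache

# Nim variant: take 1 or 2 stones per turn; taking the last stone wins.
# No strategy is supplied -- positions are valued only by playing out
# both sides of every game.

@lru_cache(maxsize=None)
def value(stones: int) -> int:
    """+1 if the player to move wins with best play, else -1."""
    if stones == 0:
        return -1          # the previous player took the last stone and won
    # Try each legal move; the opponent's value is negated (zero-sum game).
    return max(-value(stones - take) for take in (1, 2) if take <= stones)

def best_move(stones: int) -> int:
    return max((t for t in (1, 2) if t <= stones),
               key=lambda t: -value(stones - t))

print([value(n) for n in range(1, 7)])   # [1, 1, -1, 1, 1, -1]
print(best_move(5))                      # 2 (leave a losing pile of 3)
```

AlphaGo Zero replaces the exhaustive search with a neural network trained on millions of self-play games, but the underlying idea is the same: the game's rules plus self-play are the only teacher.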

Oct 16, 2020

A virtual reality game that integrates tactile experiences using biometric feedback

Posted by in categories: entertainment, privacy, robotics/AI, virtual reality, wearables

Over the past few decades, technological advances have enabled the development of increasingly sophisticated, immersive and realistic video games. One of the most noteworthy among these advances is virtual reality (VR), which allows users to experience games or other simulated environments as if they were actually navigating them, via the use of electronic wearable devices.

Most existing VR systems primarily focus on the sense of vision, using headsets that allow users to see what is happening in a game or in another simulated environment right before their eyes, rather than on a screen placed in front of them. While this can lead to highly engaging visual experiences, these experiences are not always matched by other types of sensory inputs.

Researchers at Nagoya University’s School of Informatics in Japan have recently created a new VR game that integrates immersive audiovisual experiences with tactile feedback. This game, presented in a paper published in the Journal of Robotics, Networking and Artificial Life, uses a player’s biometric data to create a spherical object in the VR space that beats in alignment with their heart. The player can thus perceive their heartbeat through this object visually, auditorily and tactually.
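The core mapping is straightforward: the measured heart rate drives the sphere's size over time so that it swells once per beat. A hypothetical sketch; the function, parameters, and sinusoidal pulse shape are assumptions for illustration, not the paper's implementation:

```python
import math

def sphere_radius(t: float, bpm: float,
                  base: float = 1.0, amplitude: float = 0.15) -> float:
    """Radius of the VR 'heart' sphere at time t (seconds).

    The sphere swells and relaxes once per heartbeat, driven by the
    player's measured heart rate (bpm).
    """
    beats_per_second = bpm / 60.0
    phase = 2.0 * math.pi * beats_per_second * t
    return base + amplitude * (0.5 + 0.5 * math.sin(phase))

# At 72 bpm the sphere completes one pulse every 60/72 ≈ 0.83 s.
print(round(sphere_radius(0.0, 72.0), 3))  # 1.075
```

A real implementation would also drive a haptic actuator and an audio cue from the same phase signal, so all three senses stay aligned with the measured pulse.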

Oct 16, 2020

NASA-JPL team tests out DuAxel in Mojave Desert

Posted by in categories: robotics/AI, space

There’s rough terrain – then there are the craters and near-vertical cliffs on the Moon, Mars, and beyond. The DuAxel is a robot built for situations just like those. By creating two single-axle rovers that can combine into one with a central payload, we could maximize versatility during future missions. See more details: go.nasa.gov/34QNo5T.

Oct 16, 2020

SpaceX targeting this weekend for Starlink launch from Kennedy Space Center

Posted by in categories: drones, internet, robotics/AI, satellites

SpaceX is targeting this weekend for its next Falcon 9 rocket launch from Kennedy Space Center, this time with another batch of Starlink internet satellites.

If schedules hold, teams will give the go-ahead for the 230-foot rocket to launch from pad 39A at 8:27 a.m. Sunday, the opening of an instantaneous window. It must launch at that time or delay to another day.

About eight minutes after liftoff, the rocket’s 162-foot first stage will target an autonomous landing on the Of Course I Still Love You drone ship in the Atlantic Ocean. SpaceX’s fleet of ships and the booster should return to Port Canaveral a few days later.

Oct 16, 2020

Artificial Intelligence Used to ‘Redefine’ Alzheimer’s Disease

Posted by in categories: biotech/medical, genetics, robotics/AI

Summary: New artificial intelligence technology will analyze clinical data, brain images, and genetic information from Alzheimer’s patients to look for new biomarkers associated with the neurodegenerative disease.

Source: University of Pennsylvania

As successful Alzheimer’s disease drugs remain elusive, experts believe that identifying biomarkers – early biological signs of the disease – could be key to solving the treatment conundrum. However, the rapid collection of data from tens of thousands of Alzheimer’s patients far exceeds the scientific community’s ability to make sense of it.
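Biomarker discovery at this scale usually begins with a screen: rank candidate measurements by how well they separate patients from controls. A toy univariate version in Python; the feature names and numbers are invented, and the real pipeline combines imaging, clinical, and genetic modalities with far more sophisticated models:

```python
from statistics import mean, pstdev

# Invented toy data: three patients and three controls per candidate marker.
patients = {"hippocampal_volume": [2.1, 2.0, 1.9], "marker_x": [5.0, 5.1, 4.9]}
controls = {"hippocampal_volume": [3.0, 3.1, 2.9], "marker_x": [5.0, 5.2, 4.8]}

def effect_size(a, b):
    """Separation between groups, scaled by the pooled spread."""
    spread = pstdev(a + b)
    return abs(mean(a) - mean(b)) / spread if spread else 0.0

ranked = sorted(patients, key=lambda f: effect_size(patients[f], controls[f]),
                reverse=True)
print(ranked)  # ['hippocampal_volume', 'marker_x']
```

Here the volume measure separates the groups cleanly while `marker_x` does not, so it ranks first; at scale, machine learning replaces this one-variable-at-a-time ranking with models that weigh thousands of features jointly.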

Oct 16, 2020

How the Nervous System Mutes or Boosts Sensory Information to Make Behavioral Decisions

Posted by in category: robotics/AI

Summary: Researchers have identified a novel neural network in fruit flies that converts external stimuli of varying intensity into decisions about whether to act.

Source: University of Michigan

Fruit flies may be able to teach researchers a thing or two about artificial intelligence.
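The muting-or-boosting idea can be caricatured as a gain-and-threshold gate: weak stimuli are suppressed, strong ones are pushed toward a commitment to act. A toy sigmoidal sketch in Python; the function and parameters are illustrative assumptions, not a model of the fly circuit:

```python
import math

def act_probability(intensity: float, threshold: float = 5.0,
                    gain: float = 1.5) -> float:
    """Probability of acting on a stimulus of the given intensity.

    Inputs below the threshold are muted toward 0 (ignore), inputs
    above it are boosted toward 1 (act).
    """
    return 1.0 / (1.0 + math.exp(-gain * (intensity - threshold)))

for s in (2.0, 5.0, 8.0):
    print(s, round(act_probability(s), 3))  # 0.011, 0.5, 0.989
```

The interest for AI is that a network learning where to place the threshold and how steep to make the gain is deciding, from varying-intensity input, when a stimulus is worth responding to at all.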

Oct 16, 2020

A radical new technique lets AI learn with practically no data

Posted by in category: robotics/AI

“Less than one”-shot learning can teach a model to identify more objects than the number of examples it is trained on.
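The trick that makes this possible is soft labels: a training example can carry a probability distribution over classes instead of a single hard class, so two examples can between them encode three classes. A minimal sketch of a distance-weighted soft-label classifier; the prototype positions and label distributions are invented for illustration:

```python
# Two prototype points on a line, each with a soft label over classes A, B, C.
# No single example "is" class B, yet points near the midpoint get labeled B.
prototypes = [
    (0.0, (0.6, 0.4, 0.0)),
    (1.0, (0.0, 0.4, 0.6)),
]
CLASSES = ("A", "B", "C")

def classify(x: float) -> str:
    scores = [0.0, 0.0, 0.0]
    for pos, label in prototypes:
        weight = 1.0 / (abs(x - pos) + 1e-6)   # closer prototypes count more
        for i, p in enumerate(label):
            scores[i] += weight * p
    return CLASSES[scores.index(max(scores))]

print(classify(0.1), classify(0.5), classify(0.9))  # A B C
```

Three decision regions emerge from only two training points, which is the "less than one example per class" effect in miniature.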