Lightweight space robot with precise control developed

Robots are already in space. From landers on the moon to rovers on Mars and more, robots are the perfect candidates for space exploration: they can bear extreme environments while consistently repeating the same tasks in exactly the same way without tiring. Like robots on Earth, they can accomplish both dangerous and mundane jobs, from space walks to polishing a spacecraft’s surface. With space missions increasing in number and expanding in scientific scope, requiring more equipment, there’s a need for a lightweight robotic arm that can manipulate in environments difficult for humans.

However, the control schemes that can move such arms on Earth, where the planes of operation are flat, do not translate to space, where the environment is unpredictable and changeable. To address this issue, researchers in Harbin Institute of Technology’s School of Mechanical Engineering and Automation have developed a robotic arm weighing 9.23 kilograms—about the weight of a one-year-old child—capable of carrying almost a quarter of its own weight, with the ability to adjust its position and speed in real time based on its environment.

They published their results on Sept. 28 in Space: Science & Technology.

New lightweight precision robotic arm developed for space applications

In a new paper published in Space: Science & Technology, a team of researchers has created a new lightweight robotic arm with precision controls.

As missions in space increase in scope and variety, so too will the tools necessary to accomplish them. Robots are already used throughout space, but robotic arms used on Earth do not translate well to space. A flat plane relative to the ground enables Earth-bound robotic arms to articulate freely in a three-dimensional coordinate grid with relatively simple programming. In the constantly changing environments of space, however, a robotic arm would struggle to orient itself correctly.

After AIs mastered Go and Super Mario, scientists have taught them how to ‘play’ experiments

Inspired by the mastery of artificial intelligence (AI) over games like Go and Super Mario, scientists at the National Synchrotron Light Source II (NSLS-II) trained an AI agent — an autonomous computational program that observes and acts — how to conduct research experiments at superhuman levels by using the same approach. The Brookhaven team published their findings in the journal Machine Learning: Science and Technology and implemented the AI agent as part of the research capabilities at NSLS-II.

As a U.S. Department of Energy (DOE) Office of Science User Facility located at DOE’s Brookhaven National Laboratory, NSLS-II enables scientific studies by more than 2000 researchers each year, offering access to the facility’s ultrabright x-rays. Scientists from all over the world come to the facility to advance their research in areas such as batteries, microelectronics, and drug development. However, time at NSLS-II’s experimental stations — called beamlines — is hard to get because nearly three times as many researchers would like to use them as any one station can handle in a day — despite the facility’s 24/7 operations.

“Since time at our facility is a precious resource, it is our responsibility to be good stewards of that; this means we need to find ways to use this resource more efficiently so that we can enable more science,” said Daniel Olds, beamline scientist at NSLS-II and corresponding author of the study. “One bottleneck is us, the humans who are measuring the samples. We come up with an initial strategy, but adjust it on the fly during the measurement to ensure everything is running smoothly. But we can’t watch the measurement all the time because we also need to eat, sleep and do more than just run the experiment.”

The Ironic Need To Make Sure That Self-Driving Cars Look Like Self-Driving Cars, At Least For The Time Being

Quickly, tell me what you think a self-driving car looks like. Most people have not seen a self-driving car in the wild, so to speak, having only seen self-driving cars indirectly, as shown in online videos, automotive advertisements, and glossy pictures posted on social media or used in daily news reports. Those who happen to live in an area where self-driving cars are being tested on public roadways, by contrast, tend to see them quite often. The first reaction to seeing a self-driving car with your own eyes is that it is an amazing sight (for my first-hand eyewitness coverage of what it is like to ride in a self-driving car, see the link here). This is the future, right before your very eyes. One day, presumably, self-driving cars will be everywhere, and they will be a common sight. We won’t take notice of self-driving cars at that juncture, treating them as rather mundane, ordinary, and all-out ho-hum. Right now, they are a marvel to behold.

Clearview AI Is En Route to Winning a US Patent for Facial Recognition Technology

The government wants to have a “search engine for faces,” but the experts are wary.

If you haven’t heard of Clearview AI, you should, as the company’s facial recognition technology has likely already spotted you. Clearview’s software scrapes public images from social media to help law enforcement identify wanted individuals by matching those images against government databases or surveillance footage. Now, the company has been cleared to receive a U.S. federal patent, according to Politico.

The firm is not without its fair share of controversy. It has long faced opposition from privacy advocates and civil rights groups. The former argue that it uses citizens’ faces without their knowledge or consent. The latter warn that facial recognition technology is notoriously prone to racially-based errors, misidentifying women and minorities much more frequently than white men and sometimes leading to false arrests.

Deepmind’s Crazy Plan To Surpass OpenAI’s Best AI

Google’s DeepMind is working on a rather crazy and unique plan to surpass OpenAI’s biggest and best artificial intelligence model within the next few months. In a new paper, AI researchers at DeepMind present a technique to improve the capacity of reinforcement learning (RL) agents to cooperate with humans at different skill levels. Accepted at the annual NeurIPS conference, the technique is called Fictitious Co-Play (FCP), and it does not require human-generated data to train the RL agents.
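The two-stage idea behind FCP — first train a pool of self-play partners and keep checkpoints from different points in training so the pool spans many skill levels, then train the final agent as a best response to that frozen pool — can be sketched in a toy cooperative game. The environment, function names, and the simple bandit-style update below are illustrative assumptions for the sketch, not DeepMind's implementation:

```python
import random

random.seed(0)

# Toy sketch of Fictitious Co-Play (FCP). Two players coordinate by
# picking the same action in {0, 1}; reward is 1 on a match, else 0.

# Stage 1: build a partner pool. Each simulated "training run" converges
# toward a convention (action 0 or 1); we keep early, mid, and final
# checkpoints, so the pool contains partners at many skill levels.
def make_partner_pool(n_runs=4, checkpoints=(0.2, 0.6, 1.0)):
    pool = []
    for _ in range(n_runs):
        convention = random.choice([0, 1])   # what this run converges to
        for progress in checkpoints:
            # probability the partner plays its convention at this stage:
            # 0.6 early in training, rising to 1.0 when fully trained
            p_conv = 0.5 + 0.5 * progress
            pool.append((convention, p_conv))
    return pool

def partner_act(partner):
    convention, p_conv = partner
    return convention if random.random() < p_conv else 1 - convention

# Stage 2: train a best-response agent against the frozen pool, using a
# simple incremental-mean value estimate as a stand-in for RL training.
def train_fcp_agent(pool, episodes=5000):
    value = [0.0, 0.0]   # estimated coordination reward per action
    count = [1, 1]
    for _ in range(episodes):
        partner = random.choice(pool)    # fresh partner every episode
        action = random.randrange(2)     # explore uniformly
        reward = 1.0 if action == partner_act(partner) else 0.0
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]
    return value

pool = make_partner_pool()
value = train_fcp_agent(pool)
```

The key design point FCP makes is in stage 1: because the pool mixes half-trained and fully-trained partners, the stage-2 agent cannot overfit to one expert convention and must learn behavior that coordinates across skill levels — which is what transfers to human partners.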

TIMESTAMPS:
00:00 How Deepmind is ahead of OpenAI
01:45 Why this AI is similar to our Brain.
04:17 New AI Features and Abilities.
06:36 How successful was this AI?
08:58 The Future of AI
10:13 Last Words.

#ai #agi #deepmind
