Inspired by the mastery of artificial intelligence (AI) over games like Go and Super Mario, scientists at the National Synchrotron Light Source II (NSLS-II) used the same approach to train an AI agent — an autonomous computational program that observes and acts — to conduct research experiments at superhuman levels. The Brookhaven team published their findings in the journal Machine Learning: Science and Technology and implemented the AI agent as part of the research capabilities at NSLS-II.
As a U.S. Department of Energy (DOE) Office of Science User Facility located at DOE’s Brookhaven National Laboratory, NSLS-II enables scientific studies by more than 2000 researchers each year, offering access to the facility’s ultrabright x-rays. Scientists from all over the world come to the facility to advance their research in areas such as batteries, microelectronics, and drug development. However, time at NSLS-II’s experimental stations — called beamlines — is hard to get because nearly three times as many researchers would like to use them as any one station can handle in a day — despite the facility’s 24/7 operations.
“Since time at our facility is a precious resource, it is our responsibility to be good stewards of that; this means we need to find ways to use this resource more efficiently so that we can enable more science,” said Daniel Olds, beamline scientist at NSLS-II and corresponding author of the study. “One bottleneck is us, the humans who are measuring the samples. We come up with an initial strategy, but adjust it on the fly during the measurement to ensure everything is running smoothly. But we can’t watch the measurement all the time because we also need to eat, sleep and do more than just run the experiment.”
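The paper's specific algorithms are not detailed here, but the idea of an agent that adjusts its measurement strategy on the fly, rather than following a fixed human-written plan, can be sketched with a toy epsilon-greedy loop. The `measure` function, position grid, and variance-based uncertainty heuristic below are all illustrative assumptions, not the Brookhaven team's actual method:

```python
import random

def measure(position):
    # Stand-in for a detector reading; a real agent would receive
    # diffraction data from the beamline instrument.
    return random.gauss(position * 0.1, 1.0)

def spread(readings):
    # Sample variance as a crude "how poorly do we understand this
    # position" score; unmeasured positions are maximally uncertain.
    if len(readings) < 2:
        return float("inf")
    mean = sum(readings) / len(readings)
    return sum((r - mean) ** 2 for r in readings) / (len(readings) - 1)

def run_agent(positions, budget, epsilon=0.2):
    # Epsilon-greedy loop: mostly revisit the least-understood position,
    # occasionally explore, so beam time is not spent uniformly.
    history = {p: [] for p in positions}
    for _ in range(budget):
        if random.random() < epsilon:
            p = random.choice(positions)
        else:
            p = max(history, key=lambda q: spread(history[q]))
        history[p].append(measure(p))
    return history
```

The point of the sketch is the decision loop itself: each new reading updates the agent's picture of the sample, and the next measurement is chosen accordingly, with no human watching overnight.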
Machine learning could help humanity discover more than ever.
Looking to the future, astronomers are excited to see how machine learning will enhance surveys.
Quickly, tell me what you think a self-driving car looks like. Most people have never seen one in the wild, so to speak, knowing self-driving cars only indirectly through online videos, automotive advertisements, and glossy pictures posted on social media or used in daily news reports. Those who happen to live in an area where self-driving cars are being tested on public roadways, by contrast, tend to see them quite often. The first reaction to seeing a self-driving car with your own eyes is that it is an amazing sight (for my first-hand eyewitness coverage of what it is like to ride in a self-driving car, see the link here). This is the future, right before your very eyes. One day, presumably, self-driving cars will be everywhere, a common sight we no longer notice, treating them as mundane, ordinary, and all-out ho-hum. Right now, they are a marvel to behold. Full Story:
The government wants to have a “search engine for faces,” but experts are wary.
If you haven’t heard of Clearview AI, you should, as the company’s facial recognition technology has likely already spotted you. Clearview’s software scrapes public images from social media to help law enforcement identify wanted individuals, matching those images against government databases or surveillance footage. Now, according to Politico, the company has been cleared to receive a U.S. federal patent.
The firm is not without its fair share of controversy, having long faced opposition from privacy advocates and civil rights groups. The former argue that it uses citizens’ faces without their knowledge or consent. The latter warn that facial recognition technology is notoriously prone to racially skewed errors, misidentifying women and minorities much more frequently than white men and sometimes leading to false arrests.
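Clearview’s pipeline is proprietary, but face matching of this general kind is typically framed as nearest-neighbor search over face embeddings. The toy vectors, `best_match` helper, and threshold below are illustrative assumptions, not the company’s actual system:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_match(query, gallery, threshold=0.8):
    # Return the identity whose stored embedding is most similar to the
    # query embedding, or None if nothing clears the threshold.
    name, score = max(((n, cosine(query, e)) for n, e in gallery.items()),
                      key=lambda item: item[1])
    return name if score >= threshold else None
```

In a real system the embeddings would come from a trained face-recognition network and the search would use an approximate nearest-neighbor index rather than a linear scan; the misidentification risks the critics describe arise when that similarity score crosses the threshold for the wrong person.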
Google’s DeepMind is pursuing an ambitious plan to surpass OpenAI’s biggest and best artificial intelligence model within the next few months. In a new paper, AI researchers at DeepMind present a technique to improve the capacity of reinforcement learning (RL) agents to cooperate with humans at different skill levels. Accepted at the annual NeurIPS conference, the technique is called Fictitious Co-Play (FCP), and it does not require human-generated data to train the RL agents.
–
TIMESTAMPS:
00:00 How DeepMind is ahead of OpenAI
01:45 Why this AI is similar to our brain
04:17 New AI features and abilities
06:36 How successful was this AI?
08:58 The future of AI
10:13 Last words
–
#ai #agi #deepmind
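At a high level, FCP first trains a pool of self-play agents, keeping partially trained checkpoints so the pool spans skill levels, then trains a new agent as a best response to frozen partners sampled from that pool. A heavily simplified sketch, with toy numeric stand-ins for the actual RL training steps:

```python
import random

def train_self_play(seed, steps):
    # Stand-in for self-play RL training: returns checkpoints of
    # increasing "skill" (numbers standing in for saved policies).
    rng = random.Random(seed)
    skill, checkpoints = 0.0, []
    for _ in range(steps):
        skill += rng.random()        # pretend the agent improves
        checkpoints.append(skill)    # keep partially trained partners too
    return checkpoints

def build_partner_pool(n_seeds=3, steps=4):
    # The pool mixes fully and partially trained partners, so the final
    # agent must learn to cope with a range of skill levels.
    pool = []
    for seed in range(n_seeds):
        pool.extend(train_self_play(seed, steps))
    return pool

def train_best_response(pool, episodes, seed=0):
    # Train one agent against frozen partners sampled from the pool;
    # the update rule here is a toy stand-in for a policy-gradient step.
    rng = random.Random(seed)
    agent = 0.0
    for _ in range(episodes):
        partner = rng.choice(pool)   # the pool itself is never updated
        agent += 0.01 * partner
    return agent
```

Because the partner pool covers weak and strong policies alike, the resulting agent cannot overfit to one play style, which is what lets it cooperate with humans of varying skill without ever training on human data.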
Computer simulations and visualizations of knots and other objects have long helped mathematicians to look for patterns and develop their intuition, says Jeffrey Weeks, a mathematician based in Canton, New York, who has pioneered some of those techniques since the 1980s. But, he adds, “Getting the computer to seek out patterns takes the research process to a qualitatively different level.”
The authors say the approach, described in a paper in the 2 December issue of Nature, could benefit other areas of maths that involve large data sets.
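The general workflow the paper describes can be illustrated with a toy example: fit a model that predicts one quantity from others, then inspect which inputs it relies on, using that as a hint for where to conjecture. The synthetic data and linear fit below are illustrative stand-ins; the actual work used deep networks on genuine knot invariants:

```python
import random

random.seed(0)
# Synthetic data: the target depends only on the first "invariant";
# the second is a distractor.
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
y = [3.0 * x1 for x1, _ in X]

# Fit a linear model by gradient descent.
w = [0.0, 0.0]
lr = 0.1
for _ in range(500):
    g = [0.0, 0.0]
    for (x1, x2), target in zip(X, y):
        err = w[0] * x1 + w[1] * x2 - target
        g[0] += err * x1
        g[1] += err * x2
    w = [w[0] - lr * g[0] / len(X), w[1] - lr * g[1] / len(X)]

# |w[0]| ends up large and |w[1]| near zero: the model points at the
# input that actually drives the quantity, hinting where a human
# mathematician might look for a provable relationship.
```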
Amazon Scholars Michael I. Jordan and Michael Kearns, and Amazon vice president and distinguished scientist Bernhard Schölkopf discuss the future of AI ahead of NeurIPS 2021. Watch the recorded event where these industry luminaries cover a range of topics including the history of ML in the past decade, its social impacts, the role of causal reasoning in ML, and whether or not autonomous, general-purpose intelligence should really be the aim of AI.
Follow us:
Website: https://www.amazon.science
Twitter: https://twitter.com/AmazonScience
Facebook: https://www.facebook.com/AmazonScience
Instagram: https://www.instagram.com/AmazonScience
LinkedIn: https://www.linkedin.com/showcase/AmazonScience
Newsletter: https://www.amazon.science/newsletter
#AmazonScience #NeurIPS #NeurIPS2021 #ArtificialIntelligence #MachineLearning #AI #ML