Researchers’ study of human-robot interactions is an early step in creating future robot ‘guides’

A new study by Missouri S&T researchers shows how human subjects, walking hand-in-hand with a robot guide, stiffen or relax their arms at different times during the walk. The researchers’ analysis of these movements could aid in the design of smarter, more humanlike robot guides and assistants.

“This work presents the first measurement and analysis of human arm stiffness during overground physical interaction between a robot leader and a human follower,” the Missouri S&T researchers write in a paper recently published in the journal Scientific Reports.

The lead researcher, Dr. Yun Seong Song, assistant professor of mechanical and aerospace engineering at Missouri S&T, describes the findings as “an early step in developing a robot that is humanlike when it physically interacts with a human partner.”

The Naval Fleet of Drones

I’ve funded Ukrainian media, the Ukrainian army, and now I’ve just funded the Ukrainian navy. The navy part is especially interesting because it is “the world’s first naval fleet of drones.” That is pretty futuristic!

As Elon Musk said, “Future wars are all about the drones.”


Small, fast unmanned ships damaged three russian vessels, including the Admiral Makarov, flagship of the russian Black Sea Fleet. This was the first attack in history carried out exclusively by unmanned vessels.

The result of this daring operation was incredible — russia has lost its undeniable advantage on the water. The killers of Ukrainian civilians — warships armed with missiles — became targets themselves.

Today, Ukraine starts assembling the world’s first Naval Fleet of Drones!

Scientists articulate new data standards for AI models

Aspiring bakers are frequently called upon to adapt award-winning recipes based on differing kitchen setups. Someone might use an eggbeater instead of a stand mixer to make prize-winning chocolate chip cookies, for instance.

Being able to reproduce a recipe in different situations and with varying setups is critical for both talented chefs and AI researchers, the latter of whom face a similar problem of adapting and reproducing their own “recipes” when trying to validate and work with new AI models. These models have applications in fields ranging from climate analysis to brain research.

“When we talk about data, we have a practical understanding of the digital assets we deal with,” said Eliu Huerta, scientist and lead for Translational AI at the U.S. Department of Energy’s (DOE) Argonne National Laboratory. “With an AI model, it’s a little less clear; are we talking about data structured in a smart way, or is it computing, or software, or a mix?”
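
As a purely hypothetical illustration of what such a data standard might pin down, the snippet below sketches a minimal metadata record that bundles a model’s “recipe” (data, software, weights, hardware) with the artifact itself. All field names and values here are assumptions for illustration, not Argonne’s or anyone’s actual schema:

```python
import json

# Illustrative metadata record answering "what is the model as a digital
# asset?": weights, the software that runs them, and the data provenance.
model_card = {
    "name": "climate-downscaler",                    # hypothetical model
    "version": "1.2.0",
    "weights": {"uri": "s3://example/weights.h5", "sha256": "..."},
    "software": {"framework": "tensorflow", "version": "2.9"},
    "training_data": {"uri": "s3://example/era5-subset", "license": "CC-BY-4.0"},
    "hardware": {"accelerator": "A100", "count": 8},
}

# Serializing the record keeps the "recipe" portable across kitchens.
record = json.dumps(model_card, indent=2)
```

A record like this is what would let a second lab “bake the same cookies” with a different mixer: everything needed to reproduce or validate the model travels with it.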

A new method can correct and update large AI models

Large AI networks like language models make mistakes or contain outdated information. MEND shows how to update LLMs without changing the whole network.

Large AI models have become standard in many AI applications, such as natural language processing, image analysis, and image generation. The models, such as OpenAI’s GPT-3, often have more diverse capabilities than small, specialized models and can be further improved via finetuning.

However, even the largest AI models regularly make mistakes and additionally contain outdated information. GPT-3’s most recent data is from 2019 – when Theresa May was still prime minister.
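
The appeal of editing rather than retraining can be shown with a toy sketch. This is not the actual MEND algorithm (MEND learns a small auxiliary network that transforms fine-tuning gradients); it only illustrates the underlying observation that the gradient of a single correction is a cheap rank-1 update that barely disturbs the rest of the weights:

```python
import numpy as np

# Toy "model": one linear layer standing in for a layer of a large network.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

def predict(W, x):
    return W @ x

x_edit = np.zeros(4); x_edit[2] = 1.0      # the "fact" whose answer is wrong
target = np.array([1.0, 0.0, 0.0, 0.0])    # the corrected answer we want
x_other = np.zeros(4); x_other[0] = 1.0    # an unrelated "fact"
before_other = predict(W, x_other).copy()  # remember its answer pre-edit

# The gradient of the squared error wrt W is an outer product err (x)^T:
# a rank-1 matrix, so each edit step is cheap and, for this one-hot
# input, touches only a single column of W.
for _ in range(100):
    err = predict(W, x_edit) - target
    W -= 0.5 * np.outer(err, x_edit)       # rank-1 gradient edit
```

After the loop, the edited input maps to the new target while the unrelated input’s output is untouched, which is exactly the locality a practical model editor needs at the scale of GPT-3.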

How Real Holograms are Created by Artificial Intelligence (Lightfield)

Commercial holograms may soon reach the hands of regular consumers with the help of Lightfield, billed as the biggest hologram company. Holography is a technique that enables a wavefront to be recorded and later reconstructed. It is best known as a method of generating three-dimensional images, but it also has a wide range of other applications. In principle, it is possible to make a hologram for any type of light field.
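
The record-and-reconstruct idea can be sketched in a few lines of numpy. This 1D toy (illustrative geometry and wavelength, not Lightfield’s method) records the intensity of a reference wave interfering with a tilted object wave, then re-illuminates the recording with the reference, which regenerates a term proportional to the original object wavefront:

```python
import numpy as np

x = np.linspace(0.0, 1e-3, 2000)        # detector coordinate (m)
wavelength = 633e-9                     # HeNe laser red, a common choice
k = 2 * np.pi / wavelength
theta = np.radians(1.0)                 # object wave tilted 1 degree

ref = np.exp(1j * 0 * x)                # reference: on-axis unit plane wave
obj = 0.5 * np.exp(1j * k * np.sin(theta) * x)   # object wave

# Recording: the medium stores only the intensity |R + O|^2, which
# expands to |R|^2 + |O|^2 + R*conj(O) + conj(R)*O; the cross terms
# are interference fringes encoding the object's phase.
hologram = np.abs(ref + obj) ** 2

# Reconstruction: re-illuminating with the reference yields a term
# proportional to the original object wave, so the tilted wavefront
# (and hence the 3D scene it came from) reappears.
replay = hologram * ref

# Fringe spacing for this geometry: d = wavelength / sin(theta)
d = wavelength / np.sin(theta)
```

The fringes are microscopically fine (here about 36 µm, and far finer at larger angles), which is why holograms need high-resolution recording media or displays.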

TIMESTAMPS:
00:00 No longer just Science Fiction.
00:45 What is a hologram?
02:28 How do these new Holograms work?
05:56 The Future of Entertainment?
08:17 Last Words.

#holograms #ai #technology

Three Things AI Machines Won’t Be Able to Achieve

In this bonus interview for the series Science Uprising, computer scientist and AI expert Selmer Bringsjord provides a wide-ranging discussion of artificial intelligence (AI) and its capabilities. Bringsjord addresses three features humans possess that AI machines won’t be able to duplicate in his view: consciousness, cognition, and genuine creativity.

Selmer Bringsjord is a Professor of Cognitive Science and Computer Science at Rensselaer Polytechnic Institute and Director of the Rensselaer AI and Reasoning Laboratory. He and his colleagues have developed the “Lovelace Test” to evaluate whether machine intelligence has resulted in mind or consciousness.

Watch episodes of Science Uprising, plus bonus video interviews with experts from each episode at https://scienceuprising.com/.

============================
The Discovery Science News Channel is the official YouTube channel of Discovery Institute’s Center for Science & Culture. The CSC is the institutional hub for scientists, educators, and inquiring minds who think that nature supplies compelling evidence of intelligent design. The CSC supports research, sponsors educational programs, defends free speech, and produces articles, books, and multimedia content. For more information visit https://www.discovery.org/id/
http://www.evolutionnews.org/

Defining Intelligent Design

Follow us on Facebook, Instagram and Twitter:
Twitter: https://twitter.com/discoverycsc/
Facebook: https://www.facebook.com/discoverycsc/
Instagram: https://www.instagram.com/discoverycsc/

Visit other YouTube channels connected to the Center for Science & Culture.

AI Uses Potentially Dangerous “Shortcuts” To Solve Complex Recognition Tasks

The researchers revealed that deep convolutional neural networks were insensitive to configural object properties.

Deep convolutional neural networks (DCNNs) do not view things in the same way that humans do (through configural shape perception), which might be harmful in real-world AI applications. This is according to Professor James Elder, co-author of a York University study recently published in the journal iScience.

The study, conducted by Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York’s Centre for AI & Society, and Nicholas Baker, an assistant psychology professor at Loyola College in Chicago and a former VISTA postdoctoral fellow at York, finds that deep learning models fail to capture the configural nature of human shape perception.