
Truly Smart Robots Know When To Ask For Help

Ask a manufacturing company about its biggest current headaches, and supply chain disruptions and staff shortages will probably top the list. Vecna Robotics might have a solution for both.

Vecna Robotics develops autonomous mobile robots (AMRs) used in distribution, logistics, warehousing and manufacturing (the company first cut its teeth in hospitals and other health care facilities). Its secret sauce is the software; after all, explains founder and Chief Innovation Officer Daniel Theobald, robotics is 90–95% software. Vecna Robotics implants its software brains in existing equipment like forklifts and pallet trucks, making them autonomous, though the company does build some hardware of its own.

What sets Vecna Robotics apart is the combination of an ambitious vision with a pragmatic attitude, articulated in three key principles:

Singapore to develop mobile defence systems with Ghost Robotics

Singapore’s Defence Science and Technology Agency (DSTA) has inked a partnership with Philadelphia-based Ghost Robotics to identify use cases for legged robots in security, defence, and humanitarian applications. The partners will look to test and develop mobile robotic systems, as well as the associated technology enablers, that can be deployed in challenging urban terrain and harsh environments.

The collaboration would also see robots from Ghost Robotics paired with DSTA’s robotics command, control, and communications (C3) system, the two partners said in a joint statement released Thursday.

The Singapore government agency said its C3 capabilities were the “nerve centre” of military platforms and command centres, tapping data analytics, artificial intelligence, and computer vision technologies to facilitate “tighter coordination” and effectiveness during military and other contingency operations.

A Self-Operating Time Crystal Model of the Human Brain: Can We Replace Entire Brain Hardware with a 3D Fractal Architecture of Clocks Alone?

Time crystal was conceived in the 1970s as an autonomous engine made only of clocks, proposed to explain the life-like features of a virus. The concept was later extended to living cells such as neurons. The brain controls most biological clocks that continuously regenerate living cells, and most cognitive tasks and learning in the brain are driven by periodic, clock-like oscillations. Can we integrate all cognitive tasks in terms of the running clocks of the hardware? Since the existing concept of a time crystal has only one clock with a singularity point, we generalize the basic idea so that many clocks can be bonded into a 3D architecture. Harvesting the region inside the phase singularity is the key. Because clocks in the brain–body system reset continuously, other clocks take over during a reset; we therefore insert clock architectures inside the singularity, resembling brain components both bottom-up and top-down.

Researchers develop fast, low-energy artificial synapse for advanced AI systems

Brain-inspired computing is a promising candidate for next-generation computing technologies. Developing advanced artificial intelligence (AI) systems that are as energy-efficient, lightweight, and adaptable as the human brain has attracted significant interest.

However, mimicking the brain’s neuroplasticity (its ability to change the strength of connections in a neural network) in traditional artificial synapses at ultralow energy is extremely challenging.
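In software terms, the plasticity being emulated is simply an activity-dependent change to a connection weight. Below is a minimal, illustrative sketch of a generic Hebbian-style update; it is a textbook rule, not the device mechanism reported by the researchers, whose point is to do this kind of update in analog hardware at far lower energy.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, w_max=1.0):
    """Strengthen a synaptic weight when pre- and post-synaptic activity
    coincide ('fire together, wire together'), clipped to an upper bound."""
    w = w + lr * pre * post
    return np.clip(w, 0.0, w_max)

# Example: repeated correlated activity gradually potentiates the synapse.
w = 0.2
for _ in range(50):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(f"potentiated weight: {w:.2f}")
```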

Moscow Metro launches pay per face recognition | DW News

Passengers on the Moscow metro can now pay for their commute using facial recognition technology. The system, called “Face Pay,” connects passengers’ biometric data with their credit cards. It has been rolled out across all 241 stations in the Russian capital, but privacy activists are sounding the alarm.


Voice copying algorithms found able to dupe voice recognition devices

Deepfake videos are well known; examples featuring what only appear to be celebrities turn up regularly on YouTube. But while such videos have grown lifelike and convincing, one area where they have fallen short is in reproducing a person’s voice. In this new effort, the team at UoC found evidence that the technology has advanced: they tested two of the best-known voice copying algorithms against both humans and voice recognition devices and found that the algorithms have improved to the point that they can now fool both.

The two algorithms, SV2TTS and AutoVC, were tested using samples of voice recordings obtained from publicly available databases; both systems were trained on 90 five-minute snippets of people talking. The researchers also enlisted the assistance of 14 volunteers who provided voice samples and access to their voice recognition devices. They then tested the two systems using the open-source software Resemblyzer, which listens to and compares voice recordings and gives a rating based on how similar the two samples are. They also tested the algorithms by using them to attempt to access services on voice recognition devices.
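For context, the kind of comparison Resemblyzer performs can be sketched in a few lines of Python: it embeds each recording into a fixed-length speaker vector and scores how similar the two vectors are. The file names and the acceptance threshold below are placeholders for illustration, not values from the study.

```python
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

# Embed two recordings into fixed-length speaker vectors.
encoder = VoiceEncoder()
real = encoder.embed_utterance(preprocess_wav("real_speaker.wav"))      # placeholder path
cloned = encoder.embed_utterance(preprocess_wav("cloned_speaker.wav"))  # placeholder path

# Embeddings are L2-normalised, so the inner product is the cosine similarity.
similarity = float(np.inner(real, cloned))
print(f"speaker similarity: {similarity:.3f}")

# Hypothetical decision threshold, chosen only for illustration.
print("verdict:", "same speaker" if similarity > 0.75 else "different speakers")
```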

The researchers found that the algorithms were able to fool Resemblyzer nearly half the time, fool Azure (Microsoft’s cloud computing service) approximately 30 percent of the time, and fool Amazon’s Alexa voice recognition system approximately 62 percent of the time.

World’s first massive egocentric dataset

The University of Bristol is part of an international consortium of 13 universities that, in partnership with Facebook AI, collaborated to advance egocentric perception. As a result of this initiative, we have built the world’s largest egocentric dataset using off-the-shelf, head-mounted cameras.


Progress in the fields of artificial intelligence (AI) and augmented reality (AR) requires learning from the same data humans process to perceive the world. Our eyes allow us to explore places, understand people, manipulate objects and enjoy activities—from the mundane act of opening a door to the exciting interaction of a game of football with friends.

Egocentric 4D Live Perception (Ego4D) is a massive-scale dataset that compiles 3,025 hours of footage from the wearable cameras of 855 participants in nine countries: the UK, India, Japan, Singapore, Saudi Arabia, Colombia, Rwanda, Italy, and the US. The data captures a wide range of activities from the ‘egocentric’ perspective, that is, from the viewpoint of the person carrying out the activity. The University of Bristol is the only UK representative in this diverse and international effort, collecting 270 hours from 82 participants who captured footage of their chosen activities of daily living, such as practicing a musical instrument, gardening, grooming their pet, or assembling furniture.
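As a rough illustration of how headline figures like these are tallied, here is a hypothetical aggregation over a per-clip manifest. The file name and field names are assumptions made for this sketch, not the official Ego4D tooling.

```python
import json
from collections import defaultdict

# Hypothetical manifest: one record per clip with its country,
# participant ID, and duration in seconds.
with open("ego4d_manifest.json") as f:
    clips = json.load(f)

hours = defaultdict(float)
participants = defaultdict(set)
for clip in clips:
    hours[clip["country"]] += clip["duration_sec"] / 3600.0
    participants[clip["country"]].add(clip["participant_id"])

for country in sorted(hours, key=hours.get, reverse=True):
    print(f"{country}: {hours[country]:.0f} h from {len(participants[country])} participants")
```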

“In the not-too-distant future you could be wearing smart AR glasses that guide you through a recipe or how to fix your bike—they could even remind you where you left your keys,” said Dima Damen, Professor of Computer Vision and Principal Investigator at the University of Bristol.

War dogs

“Today we learned that an art group is planning a spectacle to draw attention to a provocative use of our industrial robot, Spot. To be clear, we condemn the portrayal of our technology in any way that promotes violence, harm, or intimidation,” Boston Dynamics said. It’s precisely the sort of thing the company tries to get out in front of. After decades of killer-robot science fiction, it doesn’t take much to make people jump any time an advanced robot enters the picture. It’s the automaton version of Rule 34 (in staunch defiance of Asimov’s First Law of Robotics): if a robot exists, someone has tried to weaponize it.



Let’s talk about strapping guns to the backs of robots. I’m not a fan (how’s that for taking a stand?). When MSCHF did it with Spot back in February, it was a thought experiment, an art exhibit, and a statement about where society might be headed with autonomous robotics. And most importantly, of course, it was a paintball gun. Boston Dynamics clearly wasn’t thrilled with the message it was sending, noting: