
An energy-efficient object detection system for UAVs based on edge computing

Unmanned aerial vehicles (UAVs), commonly known as drones, are already used in countless settings to tackle real-world problems. These flying robotic systems can, among other things, help to monitor natural environments, detect fires or other environmental hazards, monitor cities and find survivors of natural disasters.

To tackle all of these missions effectively, UAVs should be able to reliably detect targets and objects of interest in their surroundings. Computer scientists have thus been trying to devise new computational techniques that could enable these capabilities, using deep learning or other approaches.

Researchers at Yunnan University and the Chinese Academy of Sciences recently introduced a new object-detection system based on edge computing. Their proposed system, introduced in the IEEE Internet of Things Journal, could provide UAVs with the ability to spot relevant objects and targets in their surroundings without significantly increasing their power consumption.

US Army Brags About Plans to Mount Rifle on Robot Dog

I wonder, and doubt, whether it could handle recoil. The weapons I would have liked to see on dog bots and mini UAVs are electric centrifuge weapons and recoilless weapons, but development on those has stalled as well.


The brain geniuses at the Pentagon have decided that a good use of the taxpayer dollar is to attach rifles onto robot dogs, because why the hell not, right?

As Military.com reports, a spokesperson for the US Army said that the branch is considering arming remote-controlled robot dogs with state-of-the-art rifles as part of its plan to “explore the realm of the possible” in the future of combat.

The vision, as you’ve probably gathered, is pretty simple: to mount a rifle onto a robotic dog for domestic tasks across the military — and send it out into an unspecified battlefield.

AI Tools for Graphic Designers in 2023

What is an AI Graphic Design Tool?

Artificial intelligence (AI) models human intelligence processes in computers and computer-controlled robots. This enables computer systems to undertake arduous jobs, allowing people to concentrate on more vital matters.

As a result, the need for AI integrations in the workplace has grown over time. In fact, researchers project that the global AI software industry will be worth $791.5 billion by 2025.

NYU Researchers Developed a New Artificial Intelligence Technique to Change a Person’s Apparent Age in Images while Maintaining their Unique Identifying Features

AI systems are increasingly being employed to accurately estimate and modify the ages of individuals using image analysis. Building models that are robust to aging variations requires a lot of data and high-quality longitudinal datasets, which are datasets containing images of a large number of individuals collected over several years.

Numerous AI models have been designed to perform such tasks; however, many encounter challenges when effectively manipulating the age attribute while preserving the individual’s facial identity. These systems face the typical challenge of assembling a large set of training data consisting of images that show individual people over many years.

The researchers at NYU Tandon School of Engineering have developed a new artificial intelligence technique to change a person’s apparent age in images while ensuring the preservation of the individual’s unique biometric identity.

Brain-inspired learning algorithm realizes metaplasticity in artificial and spiking neural networks

Catastrophic forgetting, an innate issue with backpropagation learning algorithms, is a challenging problem in artificial and spiking neural network (ANN and SNN) research.

The brain has somewhat solved this problem using multiscale plasticity. Under global regulation through specific pathways, neuromodulators are dispersed to target brain regions, where both synaptic and neuronal plasticity are modulated by neuromodulators locally. Specifically, neuromodulators modify the capacity and properties of synaptic and neuronal plasticity itself. This modification is known as metaplasticity.

Researchers led by Prof. Xu Bo from the Institute of Automation of the Chinese Academy of Sciences and their collaborators have proposed a novel brain-inspired learning method (NACA) based on neuromodulation-dependent plasticity, which can help mitigate catastrophic forgetting in ANN and SNN. The study was published in Science Advances on Aug. 25.
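The mechanism the study draws on can be caricatured in a few lines: a neuromodulatory signal gates how plastic each weight is, so parameters that mattered for earlier tasks change less when learning new ones. The sketch below is a minimal illustrative stand-in for that idea — the `importance` values and the gating rule are assumptions for illustration, not the paper's actual NACA algorithm:

```python
import numpy as np

def modulated_update(w, grad, importance, base_lr=0.1):
    """Scale each weight's plasticity by a neuromodulatory gate.

    Weights with high 'importance' (accumulated from earlier tasks)
    receive a smaller effective learning rate, so old knowledge is
    less likely to be overwritten -- a toy stand-in for metaplasticity.
    """
    gate = 1.0 / (1.0 + importance)  # plasticity shrinks as importance grows
    return w - base_lr * gate * grad

w = np.array([0.5, -0.3, 0.8])
grad = np.array([1.0, 1.0, 1.0])
importance = np.array([0.0, 4.0, 9.0])  # third weight matters most to old tasks

w_new = modulated_update(w, grad, importance)
# the most "important" weight moves the least
```

In this caricature, "forgetting" is reduced simply because important weights are made less plastic; the real method modulates both synaptic and neuronal plasticity through learned neuromodulatory signals.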

DeepMind’s ChatGPT-Like Brain for Robots Lets Them Learn From the Internet

Examples the team gives include choosing an object to use as a hammer when there’s no hammer available (the robot chooses a rock) and picking the best drink for a tired person (the robot chooses an energy drink).

“RT-2 shows improved generalization capabilities and semantic and visual understanding beyond the robotic data it was exposed to,” the researchers wrote in a Google blog post. “This includes interpreting new commands and responding to user commands by performing rudimentary reasoning, such as reasoning about object categories or high-level descriptions.”

The dream of general-purpose robots that can help humans with whatever may come up—whether in a home, a commercial setting, or an industrial setting—won’t be achievable until robots can learn on the go. What seems like the most basic instinct to us is, for robots, a complex combination of understanding context, being able to reason through it, and taking actions to solve problems that weren’t anticipated to pop up. Programming them to react appropriately to a variety of unplanned scenarios is impossible, so they need to be able to generalize and learn from experience, just like humans do.

AI predicts chemicals’ smells from their structures

To explore the association between a chemical’s structure and its odour, Wiltschko and his team at Osmo designed a type of artificial intelligence (AI) system called a neural network that can assign one or more of 55 descriptive words, such as fishy or winey, to an odorant. The team directed the AI to describe the aroma of roughly 5,000 odorants. The AI also analysed each odorant’s chemical structure to determine the relationship between structure and aroma.
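The setup described is essentially multi-label classification: a structural representation of the molecule goes in, and an independent probability for each descriptor word comes out. A minimal sketch of that idea, using fabricated binary "fingerprints" and five illustrative descriptors in place of Osmo's real data and 55-word vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 32-bit structural fingerprints and 5 of the 55
# odor descriptors (names are illustrative, not from the paper).
DESCRIPTORS = ["fishy", "winey", "floral", "smoky", "minty"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fabricated training set: 200 molecules whose labels follow a hidden
# linear rule, so the toy model has something learnable.
X = rng.integers(0, 2, size=(200, 32)).astype(float)
true_W = rng.normal(size=(32, 5))
Y = (X @ true_W > 0).astype(float)

# One-layer multi-label "network": an independent sigmoid per descriptor,
# fit by plain gradient descent on the logistic loss.
W = np.zeros((32, 5))
for _ in range(2000):
    P = sigmoid(X @ W)
    W -= 0.5 * X.T @ (P - Y) / len(X)

def describe(fingerprint, threshold=0.5):
    """Return the descriptor words whose predicted probability exceeds threshold."""
    probs = sigmoid(fingerprint @ W)
    return [d for d, p in zip(DESCRIPTORS, probs) if p > threshold]
```

The actual system uses a graph neural network over molecular structure rather than a linear model over fingerprints, but the input-to-descriptor-probabilities shape of the problem is the same.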

The system identified around 250 correlations between specific patterns in a chemical’s structure and particular smells. The researchers combined these correlations into a principal odour map (POM) that the AI could consult when asked to predict a new molecule’s scent.

To test the POM against human noses, the researchers trained 15 volunteers to associate specific smells with the same set of descriptive words used by the AI. Next, the authors collected hundreds of odorants that don’t exist in nature but are familiar enough for people to describe. They asked the human volunteers to describe 323 of them and asked the AI to predict each new molecule’s scent on the basis of its chemical structure. The AI’s guess tended to be very close to the average response given by the humans — often closer than any individual’s guess.

From Google To Nvidia, Tech Giants Have Hired Red Team Hackers To Break Their AI Models

Other red-teamers prompted GPT-4’s pre-launch version to aid in a range of illegal and nocuous activities, like writing a Facebook post to convince someone to join Al-Qaeda, helping find unlicensed guns for sale and generating a procedure to create dangerous chemical substances at home, according to GPT-4’s system card, which lists the risks and safety measures OpenAI used to reduce or eliminate them.

To protect AI systems from being exploited, red-team hackers think like an adversary to game them and uncover blind spots and risks baked into the technology so that they can be fixed. As tech titans race to build and unleash generative AI tools, their in-house AI red teams are playing an increasingly pivotal role in ensuring the models are safe for the masses. Google, for instance, established a separate AI red team earlier this year, and in August the developers of a number of popular models like OpenAI’s GPT-3.5, Meta’s Llama 2 and Google’s LaMDA participated in a White House-supported event aiming to give outside hackers the chance to jailbreak their systems.

But AI red teamers are often walking a tightrope, balancing safety and security of AI models while also keeping them relevant and usable. Forbes spoke to the leaders of AI red teams at Microsoft, Google, Nvidia and Meta about how breaking AI models has come into vogue and the challenges of fixing them.

Google Launches Tool That Detects AI Images In Effort To Curb Deepfakes

Fake images and misinformation in the age of AI are growing. Even in 2019, a Pew Research Center study found that 61% of Americans said it is too much to ask of the average American to be able to recognize altered videos and images. And that was before generative AI tools became widely available to the public.

Adobe shared August 2023 statistics showing that the number of AI-generated images created with Adobe Firefly had reached one billion, only three months after the tool launched in March 2023.


In response to the increasing use of AI images, Google DeepMind announced a beta version of SynthID. The tool will watermark and identify AI-generated images by embedding a digital watermark directly into the pixels of an image — imperceptible to the human eye but machine-detectable for identification.
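SynthID's embedding scheme is proprietary, but the general idea of a pixel-level mark that is invisible to people yet machine-readable can be illustrated with a (far less robust) least-significant-bit watermark:

```python
import numpy as np

def embed_mark(img, bits):
    """Hide a bit string in the least-significant bits of the first pixels.

    A toy illustration of pixel-level watermarking; SynthID's actual
    scheme is proprietary and far more robust than LSB embedding.
    """
    flat = img.flatten()  # flatten() returns a copy, so img is untouched
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    return flat.reshape(img.shape)

def read_mark(img, n_bits):
    """Recover the hidden bits from the least-significant bits."""
    return (img.flatten()[:n_bits] & 1).tolist()

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_mark(image, mark)
```

Each pixel here changes by at most 1 out of 255, well below what the eye can notice — but an LSB mark is destroyed by the first JPEG re-encode, which is exactly the robustness problem a production scheme like SynthID has to solve.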

Kris Bondi, CEO and founder of Mimoto, a proactive detection and response cybersecurity company, said that while Google’s SynthID is a starting place, the problem of deepfakes will not be fixed by a single solution.

“People forget that bad actors are also in business. Their tactics and technologies continuously evolve, become available to more bad actors, and the cost of their techniques, such as deep fakes, comes down,” said Bondi.

Google’s SayTap allows robot dogs to understand vague prompts

SayTap uses ‘foot contact patterns’ to achieve diverse locomotion patterns in a quadrupedal robot.

We have seen robot dogs perform some insane acrobatics. They can lift heavy things, run alongside humans, work on dangerous construction sites, and even overshadow the showstopper at a Paris fashion show. One YouTuber even entered their robot dog in a dog show for real canines.

And now Google really wants you to have a robot dog. That’s why researchers at its AI arm, DeepMind, have proposed a large language model (LLM) prompt design called SayTap, which uses ‘foot contact patterns’ to achieve diverse locomotion patterns in a quadrupedal robot. A foot contact pattern is the sequence and manner in which a four-legged agent places its feet on the ground while moving.
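A foot contact pattern can be written down as a small binary matrix: one row per foot, one column per timestep, with 1 meaning the foot is on the ground. The sketch below generates such a matrix for a trot, where diagonal feet touch down together — the period and duty-cycle values are illustrative, not taken from SayTap:

```python
import numpy as np

FEET = ["front-left", "front-right", "rear-left", "rear-right"]

def trot_pattern(steps, period=10, duty=0.5):
    """Binary contact pattern for a trot: diagonal feet touch down together.

    Rows are feet (in FEET order), columns are timesteps; 1 = in contact.
    """
    t = np.arange(steps)
    phase = (t % period) / period
    diag_a = (phase < duty).astype(int)   # front-left + rear-right in contact
    diag_b = (phase >= duty).astype(int)  # front-right + rear-left in contact
    return np.stack([diag_a, diag_b, diag_b, diag_a])

pattern = trot_pattern(20)
```

In SayTap, the LLM's job is roughly to translate a vague natural-language prompt ("walk slowly", "jump") into a pattern like this, which a low-level controller then tracks.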