AI-controlled Osprey MK3 drone completes its maiden flight

The United States Air Force has completed a critical AI-controlled autonomous flight of its modified Osprey Mark III unmanned aerial system.

The United States Air Force (USAF) reports that its “Osprey” Mark III unmanned aerial system (UAS) has completed its first fully autonomous test flight. Conducted on July 20, 2023, the test formed part of the USAF’s larger Autonomy, Data, and AI Experimentation (ADAx) Proving Ground effort, specifically the Autonomy Prime Environment for Experimentation (APEX), a subset of ADAx. The trial was conducted to evaluate and operationalize artificial intelligence and autonomy concepts to support warfighters on the evolving…



A technique to facilitate the robotic manipulation of crumpled cloths

To assist humans during their day-to-day activities and successfully complete domestic chores, robots should be able to effectively manipulate the objects we use every day, including utensils and cleaning equipment. Some objects, however, are difficult for robotic hands to grasp and handle due to their shape, flexibility, or other characteristics.

These objects include textile-based cloths, which humans commonly use to clean surfaces, polish windows, glass, or mirrors, and even mop floors. All of these tasks could potentially be completed by robots, yet before this can happen, robots will need to be able to grasp and manipulate cloths.

Researchers at ETH Zurich recently introduced a new computational technique to model crumpled cloths, which could in turn help to plan effective strategies for robots to grasp cloths and use them to complete tasks. This technique, introduced in a paper pre-published on arXiv, was found to generalize well across cloths with different physical properties, shapes, sizes, and materials.

Physicists solve mysteries of microtubule movers

Active matter is any collection of materials or systems composed of individual units that can move on their own, thanks to self-propulsion or autonomous motion. They can be of any size—think clouds of bacteria in a petri dish, or schools of fish.

Roman Grigoriev is mostly interested in the emergent behaviors in active matter systems made up of units on a molecular scale—tiny systems that convert stored energy into directed motion, consuming energy as they move and exert mechanical force.

“Active matter systems have garnered significant attention in physics, biology, and materials science due to their unique properties and potential applications,” explains Grigoriev, a professor in the School of Physics at Georgia Tech.
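
As a rough illustration of what “individual units that can move on their own” means computationally, here is a minimal Vicsek-style self-propelled-particle sketch in Python. It is not Grigoriev’s model: the alignment rule and every parameter value are generic assumptions, chosen only to show how self-propulsion plus local interactions can produce emergent collective motion.

```python
# Minimal self-propelled-particle (Vicsek-style) sketch of active matter:
# each unit moves at constant speed and aligns its heading with nearby
# neighbors, subject to noise. All parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
N, L, R = 200, 10.0, 1.0          # particles, box size, interaction radius
speed, eta, steps = 0.05, 0.3, 100  # step size, noise amplitude, iterations

pos = rng.uniform(0, L, size=(N, 2))    # positions in a periodic box
theta = rng.uniform(-np.pi, np.pi, N)   # headings

for _ in range(steps):
    # Align each particle with the mean heading of neighbors within R
    # (minimum-image convention handles the periodic boundaries).
    dx = pos[:, None, :] - pos[None, :, :]
    dx -= L * np.round(dx / L)
    neighbors = (dx ** 2).sum(-1) < R ** 2
    mean_sin = (neighbors * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbors * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    # Self-propulsion: every unit converts "stored energy" into directed motion.
    pos = (pos + speed * np.column_stack([np.cos(theta), np.sin(theta)])) % L

# A polar order parameter near 1 signals emergent collective motion.
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polar order parameter: {order:.2f}")
```

Lowering the noise amplitude eta pushes the printed order parameter toward 1, the signature of the kind of emergent collective behavior described above.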

Artificial Intelligence: Transforming Healthcare, Cybersecurity, and Communications

Please see my new FORBES article:

Thanks, and please follow me on LinkedIn for more tech and cybersecurity insights.


More remarkably, the advent of artificial intelligence (AI) and machine learning-based computers in the next century may alter how we relate to ourselves.

The digital ecosystem’s networked computer components, which are made possible by machine learning and artificial intelligence, will have a significant impact on practically every sector of the economy. These integrated AI and computing capabilities could pave the way for new frontiers in fields as diverse as genetic engineering, augmented reality, robotics, renewable energy, big data, and more.

Three important verticals in this digital transformation are already being impacted by AI: 1) Healthcare, 2) Cybersecurity, and 3) Communications.

Zero-Shot Robot Manipulation from Human Videos

Can we learn robot manipulation for everyday tasks only by watching videos of humans doing arbitrary tasks in different unstructured settings? Unlike widely adopted strategies of learning task-specific behaviors or directly imitating a human video, we develop a framework for extracting agent-agnostic action representations from human videos and then mapping them to the agent’s embodiment during deployment. Our framework is based on predicting plausible human hand trajectories given an initial image of a scene. After training this prediction model on a diverse set of human videos from the internet, we deploy it zero-shot for physical robot manipulation tasks, after appropriate transformations to the robot’s embodiment. This simple strategy lets us solve coarse manipulation tasks like opening and closing drawers, pushing, and tool use, without access to any in-domain robot manipulation trajectories. Our real-world deployment results establish a strong baseline for action prediction information that can be acquired from diverse, arbitrary videos of human activities and be useful for zero-shot robotic manipulation in unseen scenes.
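
To make the pipeline concrete, here is a hedged Python sketch of the three stages the abstract describes: predict plausible hand waypoints from an initial scene image, retarget them to the robot’s embodiment, and hand them to a controller. The function names, the straight-line placeholder predictor, and the fixed hand-to-gripper transform are illustrative assumptions, not the authors’ code or API.

```python
# Hypothetical sketch of the agent-agnostic pipeline described above:
# 1) predict a plausible human hand trajectory from an initial scene image,
# 2) retarget that trajectory to the robot's end-effector frame,
# 3) execute the retargeted waypoints.
# All names (predict_hand_trajectory, HAND_TO_GRIPPER) are assumptions.

import numpy as np

def predict_hand_trajectory(image: np.ndarray, horizon: int = 10) -> np.ndarray:
    """Stand-in for the learned model: returns `horizon` 3D hand waypoints.

    The real system trains this predictor on diverse internet videos of
    human activity; here we just emit a straight-line placeholder."""
    start = np.array([0.3, 0.0, 0.2])  # hand position inferred from the scene
    goal = np.array([0.5, 0.1, 0.1])   # plausible interaction point
    alphas = np.linspace(0.0, 1.0, horizon)[:, None]
    return start + alphas * (goal - start)

# Fixed transform mapping human-hand poses into the robot gripper frame
# (the "appropriate transformations to the robot's embodiment").
HAND_TO_GRIPPER = np.eye(4)
HAND_TO_GRIPPER[:3, 3] = [0.0, 0.0, 0.05]  # e.g., palm-to-gripper-tip offset

def retarget(waypoints: np.ndarray) -> np.ndarray:
    """Apply the embodiment transform to each predicted waypoint."""
    homogeneous = np.c_[waypoints, np.ones(len(waypoints))]
    return (HAND_TO_GRIPPER @ homogeneous.T).T[:, :3]

if __name__ == "__main__":
    scene = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder image
    hand_traj = predict_hand_trajectory(scene)
    gripper_traj = retarget(hand_traj)
    for i, wp in enumerate(gripper_traj):
        print(f"waypoint {i}: {np.round(wp, 3)}")    # send to robot controller
```

In the real system, the placeholder predictor would be the trajectory model trained on internet videos; the embodiment transform is the only robot-specific piece, which is what makes zero-shot deployment in unseen scenes plausible.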