
Boston Dynamics – famous for robots like Atlas, BigDog, Handle, and Spot – has now revealed Stretch, its new box-moving robot designed to meet the growing demand for flexible automation in the logistics industry. This debut marks the company’s official entrance into warehouse automation, a fast-growing market fuelled by the rise of e-commerce.

Stretch is Boston Dynamics’ first commercial robot designed specifically for warehouse facilities and distribution centres, of which there are more than 150,000 around the world. The multi-purpose, mobile robot is built to tackle a range of tasks that require rapid box moving, starting with truck unloading and later expanding into order building. Stretch’s technology builds on Boston Dynamics’ decades of advancements in robotics to create a flexible, easily integrated solution that can work in any warehouse to increase the flow of goods, improve employee safety in physically demanding tasks and reduce the cost of automation.

Engineers at Duke University have developed an electronics-free, entirely soft robot shaped like a dragonfly that can skim across water and react to environmental conditions such as pH, temperature or the presence of oil. The proof-of-principle demonstration could be the precursor to more advanced, autonomous, long-range environmental sentinels for monitoring a wide range of potential telltale signs of problems.

The soft robot is described in a paper published online March 25 in the journal Advanced Intelligent Systems.

Soft robots are a growing trend in the industry due to their versatility. Soft parts can handle delicate objects such as biological tissues that metal or ceramic components would damage. Soft bodies can help robots float or squeeze into tight spaces where rigid frames would get stuck.

Deep neural networks exploit statistical regularities in data to carry out prediction or classification tasks. This makes them very good at handling computer vision tasks such as detecting objects. But reliance on statistical patterns also makes neural networks sensitive to adversarial examples.

An adversarial example is an image that has been subtly modified to cause a deep learning model to misclassify it. This is usually done by adding a layer of noise to a normal image. Each noise pixel changes the numerical values of the image only slightly, too little to be perceptible to the human eye. But taken together, the noise values disrupt the statistical patterns of the image, causing a neural network to mistake it for something else.
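To make the idea concrete, here is a minimal sketch of one well-known way to generate such a perturbation, the fast gradient sign method (FGSM). The article does not name a specific attack, so the model, inputs and epsilon value below are illustrative assumptions, not details from the story.

```python
# Minimal FGSM sketch (PyTorch). Assumes `model` maps a batch of images to
# class logits, `image` is an (N, C, H, W) tensor with values in [0, 1],
# and `label` holds the true class indices. All are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to push `model` toward error."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Each pixel shifts by at most `epsilon`: individually imperceptible,
    # but the shifts follow the sign of the loss gradient, so together
    # they disrupt the statistical patterns the network relies on.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With a small epsilon the perturbed image looks identical to the original to a human, yet the model’s prediction can flip to an entirely different class.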

The military’s primary advanced research shop wants to be a leader in the “third wave” of artificial intelligence and is looking at new methods of visually tracking objects that use significantly less power while producing results that are 10 times more accurate than current methods.

The Defense Advanced Research Projects Agency, or DARPA, has been instrumental in many of the most important breakthroughs in modern technology—from the first computer networks to early AI research.

“DARPA-funded R&D enabled some of the first successes in AI, such as expert systems and search, and more recently has advanced machine learning algorithms and hardware,” according to a notice for an upcoming opportunity.

A team of researchers at the University of Georgia has created a backpack equipped with AI gear aimed at replacing guide dogs and canes for the blind. Intel has published a News Byte describing the new technology on its Newsroom page.

Technology to help visually impaired people get around in public has been improving in recent years, thanks mostly to smartphone apps. But such apps, the team notes, fall short of what current technology makes possible. To build a better assistance system, the group designed an AI system that can be placed in a backpack and worn by a user to give them much better clues about their environment.

The backpack holds a smart AI system running on a laptop and is fitted with OAK-D cameras (which, in addition to providing obstacle information, can also provide depth information) hidden in a vest and in a waist pack. The cameras are powered by Intel’s Movidius VPU and are programmed using the OpenVINO toolkit. The waist pack also holds batteries for the system. The AI system was trained to recognize objects a sighted pedestrian would notice when walking around a town or city, such as cars, bicycles, other pedestrians or even overhanging tree limbs.
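As a rough illustration of how such a pipeline fits together, the sketch below loads a detection model with OpenVINO and targets the Movidius VPU (exposed as the “MYRIAD” device in OpenVINO releases of that era). The model file, input image and confidence threshold are assumptions for illustration; the article does not say which model the team used.

```python
# Hypothetical sketch: running an object detector on a Movidius VPU via
# OpenVINO, roughly the stack the article describes. "mobilenet-ssd.xml"
# and "street.jpg" are placeholders, not details from the article.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("mobilenet-ssd.xml")    # placeholder detection model
compiled = core.compile_model(model, "MYRIAD")  # MYRIAD = Movidius VPU device
output_layer = compiled.output(0)

frame = cv2.imread("street.jpg")                # placeholder camera frame
# SSD-style detectors typically expect a (1, 3, 300, 300) input tensor.
blob = cv2.resize(frame, (300, 300)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

# Each detection row is [image_id, class_id, confidence, x_min, y_min, x_max, y_max].
for det in compiled([blob])[output_layer].reshape(-1, 7):
    if det[2] > 0.5:                            # arbitrary confidence cutoff
        print(f"class {int(det[1])} detected with confidence {det[2]:.2f}")
```

In the real system, detections like these would be converted into audio or haptic cues for the wearer rather than printed to a console.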

A hand-painted “self-portrait” by the world-famous humanoid robot, Sophia, has sold at auction for over $688,000.

The work, which saw Sophia “interpret” a depiction of her own face, was offered as a non-fungible token, or NFT, a unique digital asset recorded on a blockchain, a format that has revolutionized the art market in recent months.

Titled “Sophia Instantiation,” the image was created in collaboration with Andrea Bonaceto, an artist and partner at blockchain investment firm Eterna Capital. Bonaceto began the process by producing a brightly colored portrait of Sophia, which was processed by the robot’s neural networks. Sophia then painted an interpretation of the image.