Archive for the ‘robotics/AI’ category: Page 1000

Sep 30, 2022

A computational shortcut for neural networks

Posted in categories: information science, mathematics, quantum physics, robotics/AI

Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear how exactly they accomplish this. Two young Basel physicists have now derived mathematical expressions that allow one to calculate the optimal solution without training a network. Their results not only give insight into how those learning algorithms work, but could also help to detect unknown phase transitions in physical systems in the future.

Neural networks are based on the principle of operation of the brain. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.

For several years now, physicists have been trying to use machine learning to detect phase transitions as well. Phase transitions are familiar to us from everyday experience, for instance when water freezes to ice, but they also occur in more complex form between different phases of magnetic materials or quantum systems, where they are often difficult to detect.
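The idea of learning a phase boundary through repeated training can be sketched with a toy one-neuron network. This is only an illustration of the general approach the article describes, not the Basel group's analytic method; the data, boundary location, and learning rate are invented for the example:

```python
import numpy as np

# Toy "phase classification": points with x < 0.5 belong to phase A (label 0),
# points with x >= 0.5 to phase B (label 1). A single-neuron network learns
# the boundary by repeated gradient-descent passes over sampled data.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(200, 1))
y = (x >= 0.5).astype(float)

w, b = 0.0, 0.0   # network parameters, initially untrained
lr = 1.0          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                  # repeated training, as in the article
    p = sigmoid(w * x + b)             # forward pass: predicted phase
    grad_w = np.mean((p - y) * x)      # cross-entropy gradient w.r.t. w
    grad_b = np.mean(p - y)            # ...and w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

# The learned decision boundary sits where w*x + b = 0, i.e. x = -b/w.
boundary = -b / w
acc = np.mean((sigmoid(w * x + b) >= 0.5) == (y == 1.0))
```

The trained boundary lands near the true transition point at 0.5; the Basel result is notable precisely because it derives the optimal boundary in closed form, skipping this training loop entirely.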

Sep 30, 2022

Stretchy, bio-inspired synaptic transistor can enhance or weaken device memories

Posted in categories: biotech/medical, robotics/AI, wearables

Robotics and wearable devices might soon get a little smarter with the addition of a stretchy, wearable synaptic transistor developed by Penn State engineers. The device works like neurons in the brain to send signals to some cells and inhibit others in order to enhance and weaken the devices’ memories.

Led by Cunjiang Yu, Dorothy Quiggle Career Development Associate Professor of Engineering Science and Mechanics and associate professor of biomedical engineering and of materials science and engineering, the team designed the synaptic transistor to be integrated into robots or wearables and use artificial intelligence to optimize functions. The details were published Sept. 29 in Nature Electronics.

“Mirroring the human brain, robots and wearable devices using the synaptic transistor can use its artificial intelligence to ‘learn’ and adapt their behaviors,” Yu said. “For example, if we burn our hand on a stove, it hurts, and we know to avoid touching it next time. The same results will be possible for devices that use the synaptic transistor, as the artificial intelligence is able to ‘learn’ and adapt to its environment.”
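The enhance-or-weaken behavior the article attributes to the device mirrors synaptic potentiation and depression, which can be modeled in a few lines. This is a software analogue of the described behavior only, not the transistor's physics; the step size and weight range are assumptions:

```python
# Minimal model of synaptic plasticity: excitatory pulses strengthen a
# stored weight ("enhance memory"), inhibitory pulses weaken it, and the
# weight is clipped to a physically allowed range.

def apply_pulses(weight, pulses, step=0.1, lo=0.0, hi=1.0):
    """Update a synaptic weight from a sequence of +1/-1 pulses."""
    for p in pulses:
        weight += step * p                 # +1 potentiates, -1 depresses
        weight = max(lo, min(hi, weight))  # clip to the allowed range
    return weight

w0 = 0.5
potentiated = apply_pulses(w0, [+1, +1, +1])  # repeated excitation -> stronger
depressed = apply_pulses(w0, [-1, -1])        # inhibition -> weaker
```

Repeatedly reinforced signals settle near the upper bound while inhibited ones fade, which is the "learn to avoid the stove" effect Yu describes.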

Sep 30, 2022

Will Artificial Intelligence Drive Robots?

Posted in categories: futurism, robotics/AI

Agility CEO Damion Shelton and CTO Jonathan Hurst discuss artificial intelligence and its role in robot control. They also discuss the capability of robot learning paired with physics-based locomotion, Cassie setting a new world record using learned policies for control, and an exploration of the future of robotics through Dall-E.

At Agility, we make robots that are made for work. Our robot Digit works alongside us in spaces designed for people. Digit handles the boring and repetitive tasks that are meant for a machine, which allows companies and their people to focus on the work that requires the human element.


Sep 30, 2022

Video: Half-human-looking robot breaks speed record

Posted in category: robotics/AI

Cassie, a robot built by Agility Robotics, set the Guinness World Record for the fastest 100-meter run by a bipedal robot.

Sep 30, 2022

How MLops deployment can be easier with open-source versioning

Posted in categories: robotics/AI, security


Modern software development typically follows a highly iterative approach known as continuous integration/continuous delivery (CI/CD). The promise of CI/CD is better software released more quickly, and it’s a promise that ClearML now intends to bring to the world of machine learning (ML).

ClearML today announced the general availability of its enterprise MLops platform that extends the capabilities of the company’s open-source edition. The ClearML Enterprise platform provides organizations with security controls and additional capabilities for rapidly iterating and deploying ML workflows.
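The versioning idea underlying MLops pipelines can be sketched in plain Python: identify every artifact (dataset, hyperparameters, model) by a content hash, so the pipeline can tell when an input has changed and a step must be re-run. This is a generic illustration of the concept, not ClearML's actual API:

```python
import hashlib
import json

def artifact_version(obj) -> str:
    """Deterministic version id derived from an artifact's content."""
    payload = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]

params_v1 = {"lr": 0.01, "epochs": 10}
params_v2 = {"lr": 0.01, "epochs": 20}

v1 = artifact_version(params_v1)
v2 = artifact_version(params_v2)
# Hashing sorted JSON makes the id independent of key order:
same = artifact_version({"epochs": 10, "lr": 0.01})
```

Because the id is a pure function of content, identical configurations always resolve to the same version, which is what lets a CI/CD-style ML pipeline cache results and reproduce past runs.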

Sep 29, 2022

Researchers Warn of New Go-based Malware Targeting Windows and Linux Systems

Posted in categories: cybercrime/malcode, robotics/AI

A new, multi-functional Go-based malware dubbed Chaos has been rapidly growing in volume in recent months to ensnare a wide range of Windows, Linux, small office/home office (SOHO) routers, and enterprise servers into its botnet.

“Chaos functionality includes the ability to enumerate the host environment, run remote shell commands, load additional modules, automatically propagate through stealing and brute-forcing SSH private keys, as well as launch DDoS attacks,” researchers from Lumen’s Black Lotus Labs said in a write-up shared with The Hacker News.

A majority of the bots are located in Europe, specifically Italy, with other infections reported in China and the U.S., collectively representing “hundreds of unique IP addresses” over a one-month time period from mid-June through mid-July 2022.

Sep 29, 2022

Critical WhatsApp Bugs Could Have Let Attackers Hack Devices Remotely

Posted in categories: cybercrime/malcode, robotics/AI

WhatsApp for Android and iOS has patched two critical remote code execution vulnerabilities that could have allowed attackers to remotely hack targeted devices.

Sep 29, 2022

Meta’s new Make-a-Video AI can generate quick movie clips from text prompts

Posted in categories: information science, robotics/AI

Meta unveiled its Make-a-Scene text-to-image generation AI in July, which, like Dall-E and Midjourney, utilizes machine learning algorithms (and massive databases of scraped online artwork) to create fantastical depictions of written prompts. On Thursday, Meta CEO Mark Zuckerberg revealed Make-a-Scene’s more animated contemporary, Make-a-Video.

As its name implies, Make-a-Video is “a new AI system that lets people turn text prompts into brief, high-quality video clips,” Zuckerberg wrote in a Meta blog Thursday. Functionally, Video works the same way that Scene does — relying on a mix of natural language processing and generative neural networks to convert non-visual prompts into images — it’s just pulling content in a different format.

“Our intuition is simple: learn what the world looks like and how it is described from paired text-image data, and learn how the world moves from unsupervised video footage,” a team of Meta researchers wrote in a research paper published Thursday morning. Doing so enabled the team to reduce the amount of time needed to train the Video model and eliminate the need for paired text-video data, while preserving “the vastness (diversity in aesthetic, fantastical depictions, etc.) of today’s image generation models.”

Sep 29, 2022

Scientists send robot into furious hurricane and capture wild footage

Posted in categories: climatology, drones, robotics/AI

Hurricane researchers sent a marine drone into Hurricane Fiona, where it captured intense footage of the tropical storm. The research drone will help scientists better understand how tropical storms rapidly intensify.

Sep 29, 2022

Forget Silicon. This Computer Is Made of Fabric

Posted in categories: robotics/AI, wearables

The existing jacket can perform one logical operation per second, compared to the more than a billion operations per second typical of a home computer, says Preston. In practice, this means the jacket can only execute short command sequences. Due to the speed of the logic, along with some other engineering challenges, Zhang says he thinks it’ll take five to 10 years for these textile-based robots to reach commercial maturity.
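The speed gap Preston cites is easy to make concrete. Below is a back-of-the-envelope sketch using the article's figures (roughly one logical operation per second for the fabric, more than a billion for a home computer); the gate choice and the five-operation command sequence are illustrative assumptions:

```python
# Pneumatic textile logic composes simple gates; NAND alone suffices to
# build any boolean function, so model a tiny gate library on top of it.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

# Time to run a short command sequence at each substrate's speed:
ops_in_sequence = 5                       # assumed short sequence length
textile_seconds = ops_in_sequence / 1.0   # ~1 logical op per second
cpu_seconds = ops_in_sequence / 1e9       # >1e9 ops per second
```

Five operations take about five seconds on the jacket versus a few nanoseconds on a CPU, which is why the garment is limited to short command sequences for now.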

In the future, Preston’s team plans to do away with the carbon dioxide canister, which is impractical. (You have to refill it like you would a SodaStream.) Instead, his team wants to just use ambient air to pump up the jacket. As a separate project, the team has already developed a foam insole for a shoe that pumps the surrounding air into a bladder worn around the waist when the wearer takes a step. They plan to integrate a similar design into the jacket.

Preston also envisions clothing that senses and responds to the wearer’s needs. For example, a sensor on a future garment could detect when the wearer is beginning to lift their arm and inflate without any button-pressing. “Based on some stimulus from the environment and the current state, the logic system can allow the wearable robot to choose what to do,” he says. We’ll be waiting for this fashion trend to blow up.