
Complex motions for simple actuators

Inflatable soft actuators that can change shape with a simple increase in pressure can be powerful, lightweight, and flexible components for soft robotic systems. But there’s a problem: These actuators always deform in the same way upon pressurization.

To enhance the functionality of soft robots, it is important to enable additional and more complex modes of deformation in soft actuators.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have taken inspiration from origami to create inflatable structures that can bend, twist and move in complex, distinct ways from a single source of pressure.

Providing embedded artificial intelligence with a capacity for palimpsest memory storage

Biological synapses are known to store multiple memories on top of each other at different time scales, much like the early manuscript-writing technique known as a “palimpsest,” where new annotations are superimposed on traces of earlier writing.

Biological palimpsest consolidation occurs via hidden processes that govern synaptic efficacy over varying lifetimes. This arrangement allows idle memories to be overwritten without being forgotten, while previously unseen memories are held in the short term. Embedded artificial intelligence could benefit significantly from such functionality; however, the hardware has yet to be demonstrated in practice.
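As a rough illustration of the idea (a toy sketch, not the memristor mechanism reported in the paper), a synapse whose efficacy is the sum of a fast-decaying and a slow-decaying hidden variable behaves like a palimpsest: a new write dominates the visible efficacy in the short term, while older memories linger in the slow trace underneath.

```python
import math

# Toy palimpsest synapse (illustrative only, not the model from the study):
# total efficacy is the sum of a fast- and a slow-decaying hidden variable,
# so new writes are visible immediately while older ones persist underneath.
class PalimpsestSynapse:
    def __init__(self, tau_fast=5.0, tau_slow=200.0, consolidation=0.05):
        self.fast = 0.0                      # short-lived component (recent memories)
        self.slow = 0.0                      # long-lived component (consolidated memories)
        self.tau_fast = tau_fast
        self.tau_slow = tau_slow
        self.consolidation = consolidation   # fraction of the fast trace copied into the slow trace per step

    def write(self, delta):
        """Apply a new memory update; it lands in the fast component first."""
        self.fast += delta

    def step(self):
        """Advance one time step: consolidate a little, then decay both traces."""
        self.slow += self.consolidation * self.fast
        self.fast *= math.exp(-1.0 / self.tau_fast)
        self.slow *= math.exp(-1.0 / self.tau_slow)

    @property
    def efficacy(self):
        return self.fast + self.slow


syn = PalimpsestSynapse()
syn.write(1.0)                  # an older memory
for _ in range(50):
    syn.step()
syn.write(0.5)                  # a new memory dominates the visible efficacy...
print(round(syn.efficacy, 3), round(syn.slow, 3))  # ...but the slow trace still holds the old one
```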

In a new report, now published in Science Advances, Christos Giotis and a team of scientists in Electronics and Computer Science at the University of Southampton and the University of Edinburgh, U.K., showed how the intrinsic properties of metal-oxide volatile memristors mimicked the process of biological palimpsest consolidation.

Meta’s ‘Make-A-Scene’ AI blends human and computer imagination into algorithmic art

Text-to-image generation is the hot algorithmic process right now, with OpenAI’s Craiyon (formerly DALL-E mini) and Google’s Imagen AIs unleashing tidal waves of wonderfully weird procedurally generated art synthesized from human and computer imaginations. On Tuesday, Meta revealed that it too has developed an AI image generation engine, one that it hopes will help to build immersive worlds in the Metaverse and create high-quality digital art.

A lot of work goes into generating an image from just a phrase like “there’s a horse in the hospital.” First the phrase itself is fed through a transformer model, a neural network that parses the words of the sentence and develops a contextual understanding of their relationship to one another. Once it gets the gist of what the user is describing, the AI synthesizes a new image using a set of GANs (generative adversarial networks).
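To make the two-stage idea concrete, here is a minimal structural sketch in PyTorch of the pipeline described above: a transformer encodes the prompt into a contextual embedding, and a GAN-style generator maps that embedding (plus noise) to an image. The module sizes and layer choices are placeholders for illustration, not Meta’s Make-A-Scene architecture.

```python
import torch
import torch.nn as nn

# Stage 1: a transformer that turns a tokenized prompt into a pooled embedding.
class TextEncoder(nn.Module):
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))   # (batch, seq, dim)
        return h.mean(dim=1)                      # pooled prompt embedding

# Stage 2: a GAN-style generator conditioned on the prompt embedding.
class ImageGenerator(nn.Module):
    def __init__(self, dim=256, noise_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + noise_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 32 * 32), nn.Tanh(),  # tiny 32x32 RGB output
        )

    def forward(self, text_emb, noise):
        img = self.net(torch.cat([text_emb, noise], dim=-1))
        return img.view(-1, 3, 32, 32)


tokens = torch.randint(0, 10000, (1, 8))          # stand-in for a tokenized prompt
text_emb = TextEncoder()(tokens)
image = ImageGenerator()(text_emb, torch.randn(1, 64))
print(image.shape)                                # torch.Size([1, 3, 32, 32])
```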

Thanks to efforts in recent years to train ML models on increasingly expansive, high-definition image sets with well-curated text descriptions, today’s state-of-the-art AIs can create photorealistic images of almost any nonsense you feed them. The specific creation process differs between AIs.

A deep learning technique to generate DNS amplification attacks

Deep learning techniques have recently proved to be highly promising for detecting cybersecurity attacks and determining their nature. Concurrently, many cybercriminals have been devising new attacks aimed at interfering with the functioning of various deep learning tools, including those for image classification and natural language processing.

Perhaps the most common among these attacks are adversarial attacks, which are designed to “fool” deep learning algorithms using data that has been modified, prompting them to classify it incorrectly. This can lead to the malfunctioning of many applications and other technologies that rely on these algorithms.

Several past studies have shown the effectiveness of different adversarial attacks in prompting deep neural networks (DNNs) to make unreliable and false predictions. These attacks include the Carlini & Wagner attack, the Deepfool attack, the fast gradient sign method (FGSM) and the Elastic-Net attack (ENA).
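Of the attacks named above, FGSM is the simplest to show in code: the input is nudged one step in the direction of the sign of the loss gradient, which is often enough to flip a classifier’s prediction. The sketch below uses a toy placeholder model rather than any network from the study.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the fast gradient sign method (FGSM): perturb the input in
# the direction of the sign of the loss gradient to push the prediction toward
# a wrong class. `model` is any differentiable classifier.
def fgsm_attack(model, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()      # one-step perturbation
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range


# Usage with a toy linear "classifier" on 28x28 inputs (illustrative only).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max().item())        # perturbation is bounded by epsilon
```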

Researcher uses ‘fuzzy’ AI algorithms to aid people with memory loss

A new computer algorithm developed by the University of Toronto’s Parham Aarabi can store and recall information strategically—just like our brains.

The associate professor in the Edward S. Rogers Sr. department of electrical and computer engineering, in the Faculty of Applied Science & Engineering, has also created an experimental tool that leverages the algorithm to help people with memory loss.

“Most people think of AI as more robot than human,” says Aarabi, whose framework is explored in a paper being presented this week at the IEEE Engineering in Medicine and Biology Society Conference in Glasgow. “I think that needs to change.”

‘AI Bumblebees:’ These AI Robots Act Like Bees to Pollinate Tomato Plants

The AI-powered robot is named “Polly” and will pollinate truss tomato plants in Costa’s tomato glasshouse facilities in Guyra, New South Wales.

In its commercial application, Costa wrote on its website, these robotic pollinators will drive between the rows, use artificial intelligence to detect flowers that are ripe for pollination, and then emit air pulses that vibrate the flowers in a way that mimics the buzz pollination carried out by bumblebees.
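The workflow described above boils down to a detect-then-vibrate loop, sketched below in toy form. Every class and function name here is a made-up placeholder for illustration, not Costa’s or the robot’s actual software.

```python
from dataclasses import dataclass

# Hypothetical sketch of the detect-then-vibrate workflow; all names are placeholders.
@dataclass
class Flower:
    position: tuple
    ripe: bool

class ToyDetector:
    def detect_open_flowers(self, plant):
        # Stand-in for the on-board vision model that flags flowers ripe for pollination.
        return [f for f in plant if f.ripe]

def pollinate_row(row_of_plants, detector, pulse_hz=400, pulse_ms=50):
    actions = []
    for plant in row_of_plants:                   # the robot drives between the rows
        for flower in detector.detect_open_flowers(plant):
            # An air pulse vibrates the flower, mimicking bumblebee buzz pollination.
            actions.append(("pulse", flower.position, pulse_hz, pulse_ms))
    return actions


row = [[Flower((0, 1), True), Flower((0, 2), False)], [Flower((1, 0), True)]]
print(pollinate_row(row, ToyDetector()))
```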

Compared with using insects such as bees, or the human laborers occasionally needed to help grow particular crops, pollination robots could give future farmers a major advantage: improved productivity.
