
Text-to-image generation is the hot algorithmic process right now, with OpenAI’s Craiyon (formerly DALL-E mini) and Google’s Imagen AIs unleashing tidal waves of wonderfully weird procedurally generated art synthesized from human and computer imaginations. On Tuesday, Meta revealed that it too has developed an AI image generation engine, one that it hopes will help build immersive worlds in the metaverse and create high-quality digital art.

A lot of work goes into creating an image from just the phrase “there’s a horse in the hospital” with a generative AI. First, the phrase is fed through a transformer model, a neural network that parses the words of the sentence and develops a contextual understanding of their relationships to one another. Once it has the gist of what the user is describing, the AI synthesizes a new image using a set of generative adversarial networks (GANs).

Thanks to efforts in recent years to train ML models on increasingly expansive, high-definition image sets with well-curated text descriptions, today’s state-of-the-art AIs can create photorealistic images of almost whatever nonsense you feed them. The specific creation process differs between AIs.
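The two-stage pipeline described above (text encoder, then image generator) can be sketched as a toy program. Everything here is an illustrative stand-in, not any real model: a mean-pooled embedding table plays the role of the transformer, and a fixed random projection plays the role of a trained GAN generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embedding table standing in for a transformer text encoder.
VOCAB = {"there's": 0, "a": 1, "horse": 2, "in": 3, "the": 4, "hospital": 5}
EMBED = rng.normal(size=(len(VOCAB), 16))   # 16-dim token embeddings

def encode_prompt(prompt: str) -> np.ndarray:
    """Stand-in for the transformer stage: map the prompt to one context vector.
    A real transformer would relate tokens via self-attention; here we simply
    mean-pool the token embeddings."""
    tokens = [VOCAB[w] for w in prompt.lower().split() if w in VOCAB]
    return EMBED[tokens].mean(axis=0)

# Stand-in for the generator stage (the "G" of a GAN): a fixed random
# projection from the 16-dim text embedding to an 8x8 grayscale "image".
G = rng.normal(size=(16, 64))

def generate_image(prompt: str) -> np.ndarray:
    z = encode_prompt(prompt)
    pixels = 1 / (1 + np.exp(-(z @ G)))     # squash into [0, 1] intensities
    return pixels.reshape(8, 8)

img = generate_image("there's a horse in the hospital")
print(img.shape)        # (8, 8)
```

Real systems replace both stands-ins with networks trained on millions of captioned images, which is what turns this plumbing into photorealism.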

Deep learning techniques have recently proved to be highly promising for detecting cybersecurity attacks and determining their nature. Concurrently, many cybercriminals have been devising new attacks aimed at interfering with the functioning of various deep learning tools, including those for image classification and natural language processing.

Perhaps the most common among these attacks are adversarial attacks, which are designed to “fool” deep learning algorithms using data that has been modified, prompting them to classify it incorrectly. This can lead to the malfunctioning of many applications and other technologies that operate through deep learning.

Several past studies have shown the effectiveness of different adversarial attacks in prompting deep neural networks (DNNs) to make unreliable and false predictions. These attacks include the Carlini & Wagner attack, the Deepfool attack, the fast gradient sign method (FGSM) and the Elastic-Net attack (ENA).
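Of the attacks listed, FGSM is the simplest to show concretely: it nudges each input feature by a small step eps in the direction of the sign of the loss gradient. Below is a minimal sketch against a hand-set logistic-regression classifier (the weights and data point are made up for illustration).

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method (FGSM) against a logistic-regression
    classifier p(y=1|x) = sigmoid(w.x + b).
    The gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w;
    FGSM steps eps in the sign of that gradient to increase the loss."""
    p = 1 / (1 + np.exp(-(x @ w + b)))   # model's predicted probability
    grad_x = (p - y) * w                 # dLoss/dx for cross-entropy
    return x + eps * np.sign(grad_x)

# A clean point the model classifies as class 1 (w.x + b = 0.5 > 0) ...
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([0.3, 0.1])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.4)
# ... and its adversarial copy, pushed across the decision boundary.
print(x @ w + b > 0, x_adv @ w + b > 0)  # True False
```

Against deep networks the idea is identical; the gradient is just obtained by backpropagation through the whole model instead of this closed form.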

A new computer algorithm developed by the University of Toronto’s Parham Aarabi can store and recall information strategically—just like our brains.

The associate professor in the Edward S. Rogers Sr. department of electrical and computer engineering, in the Faculty of Applied Science & Engineering, has also created an experimental tool that leverages the framework to help people with memory loss.

“Most people think of AI as more robot than human,” says Aarabi, whose framework is explored in a paper being presented this week at the IEEE Engineering in Medicine and Biology Society Conference in Glasgow. “I think that needs to change.”

The AI-powered robot is named “Polly” and will pollinate truss tomato plants in Costa’s tomato glasshouse facilities in Guyra, New South Wales.

Costa wrote on its website that, in its commercial application, these robotic pollinators will drive between the rows, use artificial intelligence to detect flowers that are ripe for pollination, and then emit air pulses that vibrate the flowers in a way that mimics the buzz pollination carried out by bumblebees.

Compared with insects such as bees, and the human laborers occasionally required to help particular crops along, pollination robots could give future farmers a major advantage: improved productivity.

Inspired by research into how infants learn, computer scientists have created a program that can pick up simple physical rules about the behaviour of objects — and express surprise when they seem to violate those rules. The results were published on 11 July in Nature Human Behaviour.

Developmental psychologists test how babies understand the motion of objects by tracking their gaze. When shown a video of, for example, a ball that suddenly disappears, the children express surprise, which researchers quantify by measuring how long the infants stare in a particular direction.

Luis Piloto, a computer scientist at Google-owned company DeepMind in London, and his collaborators wanted to develop a similar test for artificial intelligence (AI). The team trained a neural network — a software system that learns by spotting patterns in large amounts of data — with animated videos of simple objects such as cubes and balls.
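One common way such a model’s “surprise” is operationalized — an illustrative sketch under that assumption, not the team’s actual code — is as prediction error: the gap between where the network expects each object to be in the next frame and where the video actually shows it. A physically implausible event (a ball that teleports) yields a much larger error than a plausible one.

```python
import numpy as np

def surprise(predicted_positions, observed_positions):
    """Mean squared error between predicted and observed object positions.
    Large error means the scene unfolded in a way the model did not
    anticipate, i.e. it is 'surprising'."""
    predicted = np.asarray(predicted_positions, dtype=float)
    observed = np.asarray(observed_positions, dtype=float)
    return float(np.mean(np.sum((predicted - observed) ** 2, axis=-1)))

# The model predicts a ball keeps falling; compare two possible next frames.
pred = [[0.0, 1.0], [0.0, 0.8]]            # predicted (x, y) per frame
plausible = [[0.0, 1.0], [0.0, 0.79]]      # ball falls as expected
implausible = [[0.0, 1.0], [5.0, 5.0]]     # ball suddenly teleports
print(surprise(pred, plausible) < surprise(pred, implausible))  # True
```

This mirrors the gaze-duration measure used with infants: longer staring and larger prediction error both index how strongly expectations were violated.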