
While text-based AI models have been found coordinating amongst themselves and developing a language of their own, communication between image-based models remained unexplored territory until now. A group of researchers set out to find how well Google DeepMind's Flamingo and OpenAI's DALL-E understand each other, and their synergy is impressive.

Despite the closeness of the image-captioning and text-to-image generation tasks, they are often studied in isolation, and the information exchange between these models has remained an open question. Researchers from LMU Munich, Siemens AG, and the University of Oxford wrote a paper titled 'Do Flamingo and DALL-E Understand Each Other?' investigating the communication between image-captioning and text-to-image models.

The team proposes a reconstruction task in which Flamingo generates a description for a given image and DALL-E uses this description as input to synthesise a new image. They argue that the models understand each other if the generated image is similar to the original. Specifically, they studied the relationship between the quality of the image reconstruction and that of the generated text. They found that better captions lead to better reconstructed images, and vice versa.
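The reconstruction loop described above can be sketched as follows. The model calls here are hypothetical stand-ins, since neither Flamingo nor DALL-E exposes a simple public function; the toy captioner, generator, and similarity metric exist only so the loop runs end to end.

```python
def reconstruction_score(image, caption_model, image_model, similarity):
    """Caption an image, regenerate an image from the caption, score the match.

    caption_model and image_model are placeholders for an image-captioning
    model (Flamingo's role) and a text-to-image model (DALL-E's role).
    """
    caption = caption_model(image)         # image -> text
    reconstruction = image_model(caption)  # text -> image
    return caption, similarity(image, reconstruction)


# Toy stand-ins: images are flat pixel lists, the "caption" records only the
# mean brightness, and the "generator" produces an image of that brightness.
def toy_captioner(image):
    return f"mean brightness {sum(image) / len(image):.2f}"

def toy_generator(caption):
    level = float(caption.rsplit(" ", 1)[1])
    return [level] * 4

def toy_similarity(a, b):
    # 1.0 means identical; lower means larger mean absolute pixel error.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)


caption, score = reconstruction_score(
    [0.2, 0.4, 0.6, 0.8], toy_captioner, toy_generator, toy_similarity
)
```

Under this framing, the similarity score doubles as an evaluation of the caption itself: a caption that preserves more of the image's content yields a closer reconstruction.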

Even linguistics experts are largely unable to tell writing created by artificial intelligence apart from writing created by humans, according to a new study co-authored by a University of South Florida assistant professor.

Research just published in the journal Research Methods in Applied Linguistics revealed that experts from the world's top linguistics journals could differentiate between AI- and human-generated abstracts less than 39 percent of the time.

“We thought if anybody is going to be able to identify human-produced writing, it should be people in linguistics who’ve spent their careers studying patterns in language and other aspects of human communication,” said Matthew Kessler, a scholar in the USF Department of World Languages.

Would you like to hear more news stories like this one? If so, head over to LifespanNews for more longevity news, science, and advocacy episodes! Visit https://www.youtube.com/lifespannews.


In this episode of Lifespan News:

0:00 Intro.
0:38 Young.ai — artificial intelligence for tracking aging in humans.
1:35 A Link Between the Microbiome, Heat, and Osteoporosis.
2:34 Age-Related Female Fertility Decline Linked to Mitochondrial Mutation.
4:14 A New Microporous Membrane for Skin Regeneration.
5:13 Microtubule Stabilization Ameliorates Alzheimer’s Symptoms in Mice.

Visit https://youtube.com/BrentNally for more of Brent’s content!

Executive Producer: Keith Comito.

Tiny dents on thin material produce photon-polarizing magnetic fields.

Researchers at Los Alamos National Laboratory have developed a technique that can produce polarized photons more easily and cheaply than existing methods.

Quantum communication uses photons to carry information, much as classical communication uses electrons. But while classical computers encode information by turning current…


Researchers at UC San Francisco and UC Berkeley have developed a brain-computer interface (BCI) that has enabled a woman with severe paralysis from a brainstem stroke to speak through a digital avatar.

It is the first time that either speech or facial expressions have been synthesized from brain signals. The system can also decode these signals into text at nearly 80 words per minute, a vast improvement over commercially available technology.