I am sharing this revealing interview that Liz Parrish, "Patient Zero" of biological rejuvenation, gave to a journalist in Madrid, Spain. It took place on July 10, 2022 and lasts 20 minutes.

During the interview Liz speaks in English. However, the journalist, whose name is María Zabay, speaks mostly in Spanish.

Don’t miss it because Liz says things that most people don’t know about her and her company BioViva Sciences.


June 29 (Reuters) — Germany's BioNTech (22UAy.DE), Pfizer's (PFE.N) partner in COVID-19 vaccines, said the two companies would start human trials of next-generation shots that protect against a wide variety of coronaviruses in the second half of the year.

Their experimental work on shots that go beyond the current approach includes T-cell-enhancing shots, designed primarily to protect against severe disease if the virus becomes more dangerous, and pan-coronavirus shots that protect against the broader family of viruses and their mutations.

In presentation slides posted on BioNTech’s website for its investor day, the German biotech firm said its aim was to “provide durable variant protection”.

Text-to-image generation is the hot algorithmic process right now, with Craiyon (formerly DALL-E mini, inspired by OpenAI's DALL-E) and Google's Imagen unleashing tidal waves of wonderfully weird procedurally generated art synthesized from human and computer imaginations. On Tuesday, Meta revealed that it too has developed an AI image generation engine, one that it hopes will help to build immersive worlds in the Metaverse and create high-quality digital art.

A lot of work goes into creating an image from just a phrase like "there's a horse in the hospital" when using a generation AI. First the phrase itself is fed through a transformer model, a neural network that parses the words of the sentence and develops a contextual understanding of their relationship to one another. Once it gets the gist of what the user is describing, the AI synthesizes a new image using a set of GANs (generative adversarial networks).

Thanks to efforts in recent years to train ML models on increasingly expansive, high-definition image sets with well-curated text descriptions, today's state-of-the-art AIs can create photorealistic images of nearly whatever nonsense you feed them. The specific creation process differs between AIs.
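The two-stage pipeline described above (a text encoder that builds an embedding of the prompt, followed by a generator that decodes that embedding into pixels) can be sketched at toy scale. This is a minimal illustration in NumPy with random, untrained weights, not any real model's architecture: `encode_text` stands in for a transformer text encoder, and `generate_image` stands in for a trained GAN (or diffusion) generator. All names and dimensions here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 64
IMG_SIZE = 8  # real models generate e.g. 256x256 or 1024x1024 images

def encode_text(prompt: str) -> np.ndarray:
    """Stage 1: map the prompt to a fixed-size embedding vector.

    Stands in for a transformer that parses the words and builds a
    contextual representation of their relationships.
    """
    vecs = []
    for token in prompt.lower().split():
        # One deterministic pseudo-random vector per token, then averaged.
        token_rng = np.random.default_rng(abs(hash(token)) % (2**32))
        vecs.append(token_rng.standard_normal(EMBED_DIM))
    return np.mean(vecs, axis=0)

# Random "generator" weights; a trained GAN generator would go here.
W = rng.standard_normal((EMBED_DIM, IMG_SIZE * IMG_SIZE * 3))

def generate_image(embedding: np.ndarray) -> np.ndarray:
    """Stage 2: decode the text embedding into an RGB image array."""
    raw = embedding @ W                   # linear "generator" layer
    pixels = 1.0 / (1.0 + np.exp(-raw))  # sigmoid squashes into [0, 1]
    return pixels.reshape(IMG_SIZE, IMG_SIZE, 3)

img = generate_image(encode_text("there's a horse in the hospital"))
print(img.shape)  # (8, 8, 3)
```

The output is noise, of course; the point is only the data flow: prompt in, embedding in the middle, image array out. In a production system both stages are deep networks trained jointly or in sequence on the large captioned image sets mentioned above.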

A strain-sensing smart skin developed at Rice University, which uses carbon nanotubes (very small structures) to monitor and detect damage in large structures, is ready for prime time.

The ‘strain paint’ first revealed by Rice in 2012 uses the fluorescent properties of nanotubes to show when a surface has been deformed by stress.

Now developed as part of a non-contact optical monitoring system known as S4, the multilayered coating can be applied to large surfaces — bridges, buildings, ships and airplanes, for starters — where high strain poses an invisible threat.