Toggle light / dark theme

Apple today released several open source large language models (LLMs) that are designed to run on-device rather than through cloud servers. Called OpenELM (Open-source Efficient Language Models), the LLMs are available on the Hugging Face Hub, a community for sharing AI code.

As outlined in a white paper [PDF], there are eight total OpenELM models, four of which were pre-trained using the CoreNet library, and four instruction tuned models. Apple uses a layer-wise scaling strategy that is aimed at improving accuracy and efficiency.

Adobe researchers have developed a new generative AI model called VideoGigaGAN that can upscale blurry videos at up to eight times their original resolution. Introduced in a paper published on April 18th, Adobe claims VideoGigaGAN is superior to other Video Super Resolution (VSR) methods as it can provide more fine-grained details without introducing any “AI weirdness” to the footage.

In a nutshell, Generative Adversarial Networks (GANs) are effective for upscaling still images to a higher resolution, but struggle to do the same for video without introducing flickering and other unwanted artifacts. Other upscaling methods can avoid this, but the results aren’t as sharp or detailed. VideoGigaGAN aims to provide the best of both worlds — the higher image/video quality of GAN models, with fewer flickering or distortion issues across output frames. The company has provided several examples here that show its work in full resolution.

Some of the finer details in the demo clips Adobe provided appear to be entirely artificial, such as the skin texture and creases in the below example, but the results appear impressively natural. It would be difficult to tell that generative AI was used to improve the resolution, which could extend the “what is a photo” debate to include video.

1/ OpenAI researchers have proposed a new instruction hierarchy approach to reduce the vulnerability of large language models (LLMs) to prompt injection attacks and jailbreaks.


OpenAI researchers propose an instruction hierarchy for AI language models. It is intended to reduce vulnerability to prompt injection attacks and jailbreaks. Initial results are promising.

Language models (LLMs) are vulnerable to prompt injection attacks and jailbreaks, where attackers replace the model’s original instructions with their own malicious prompts.

OpenAI researchers argue that a key vulnerability is that LLMs often give system prompts from developers the same priority as texts from untrusted users and third parties.

Xaira has recruited a group of researchers who developed the leading models for protein and antibody design while in Baker’s lab. The company aims advance these models and develop new methods that can “connect the world of biological targets and engineered molecules to the human experience of disease.”

“Driven by growing data sets and new methods, there has been accelerating progress in artificial intelligence and its applications to medicine, biology and chemistry, including seminal work from David Baker’s lab at the Institute for Protein Design,” said Foresight’s Dr Vikram Bajaj. “In starting Xaira, we have brought together incredible multidisciplinary talent and capabilities at the right time to reimagine our entire approach, from drug discovery to clinical development.”

Boasting proficiency in handling vast and multidimensional datasets, Xaira claims it will enable comprehensive characterization of disease biology at various levels, from molecular to clinical. Drawing from Illumina’s functional genomics R&D effort and integrating a key proteomics group from Interline Therapeutics, the company aims to gain new insights into disease mechanisms.

DNA nanostructures can perform some of the complex robotic fabrication process for manufacturing and self-replication. Building things and performing work with nanorobots has been a major technical and scientific goal. This has been done and published in the peer reviewed journal Science. Nadrian C. “Ned” Seeman (December 16, 1945 – November 16, 2021) was an American nanotechnologist and crystallographer known for inventing the field of DNA nanotechnology. He contributed enough to this work published in 2023 to be listed as a co-author.

Seeman’s laboratory published the synthesis of the first three-dimensional nanoscale object, a cube made of DNA, in 1991. This work won the 1995 Feynman Prize in Nanotechnology. The concept of the dissimilar double DNA crossover introduced by Seeman, was important stepping stone towards the development of DNA origami. The goal of demonstrating designed three-dimensional DNA crystals was achieved by Seeman in 2009, nearly thirty years after his original elucidation of the idea.

The concepts of DNA nanotechnology later found further applications in DNA computing, DNA nanorobotics, and self-assembly of nanoelectronics. He shared the Kavli Prize in Nanoscience 2010 with Donald Eigler for their development of unprecedented methods to control matter on the nanoscale.