
A collaborative study by the University of Oxford and MIT has uncovered a 3.7-billion-year-old magnetic field record from Greenland, demonstrating that Earth’s ancient magnetic field was as strong as it is today, crucial for protecting life by shielding against cosmic and solar radiation.

A new study has recovered a 3.7-billion-year-old record of Earth’s magnetic field, and found that it appears remarkably similar to the field surrounding Earth today. The findings have been published today (April 24) in the Journal of Geophysical Research.

Without its magnetic field, life on Earth would not be possible, since it shields us from harmful cosmic radiation and the charged particles emitted by the Sun (the ‘solar wind’). But until now, there has been no reliable date for when the modern magnetic field was first established.

NASA’s Mars Curiosity rover has made consistent and puzzling findings while roaming the barren surface of the planet’s Gale Crater: mysterious puffs of methane gas that only appear at night and vanish during the day.

Over the years, the rover’s Sample Analysis at Mars (SAM) instrument has repeatedly detected significant concentrations of the gas, sometimes spiking to 40 times the usual levels — and scientists are still trying to figure out the source, as NASA details in a new blog post.

The finding is especially intriguing because living organisms produce much of the methane here on Earth, lending it special significance as NASA scans the Red Planet for signs of subterranean life.

PyTorch 2.3 is here 😎🔥

By Team PyTorch.

We are excited to announce the release of PyTorch® 2.3 (release note)! PyTorch 2.3 offers support for user-defined Triton kernels in torch.compile, allowing users to migrate their own Triton kernels from eager mode without experiencing performance regressions or graph breaks. Tensor Parallelism improves the experience of training Large Language Models using native PyTorch functions, and has been validated on training runs for 100B-parameter models. In addition, this release implements semi-structured sparsity as a Tensor subclass, with observed speedups of up to 1.6x over dense matrix multiplication.
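The semi-structured sparsity mentioned above refers to the 2:4 pattern used by sparse tensor cores, where at most two of every four consecutive weights are non-zero. As a rough illustration of that pruning pattern (not PyTorch's Tensor-subclass API), here is a minimal NumPy sketch:

```python
import numpy as np

def prune_2_4(row):
    """Zero the two smallest-magnitude values in each group of four,
    producing the 2:4 semi-structured sparsity pattern."""
    out = row.copy()
    for i in range(0, len(out), 4):
        group = out[i:i + 4]  # view into `out`, so edits stick
        # indices of the two smallest magnitudes in this group of four
        drop = np.argsort(np.abs(group))[:2]
        group[drop] = 0.0
    return out

w = np.array([0.9, -0.1, 0.4, 0.05, 0.2, -0.8, 0.03, 0.6])
print(prune_2_4(w))  # exactly half the entries are zeroed
```

Because the zeros fall in a fixed, hardware-friendly pattern, the pruned matrix can be stored compactly and multiplied faster than a dense one, which is where the reported speedup comes from.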

This release is composed of 3,393 commits and 426 contributors since PyTorch 2.2. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.3. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.

Apple today released several open source large language models (LLMs) that are designed to run on-device rather than through cloud servers. Called OpenELM (Open-source Efficient Language Models), the LLMs are available on the Hugging Face Hub, a community for sharing AI code.

As outlined in a white paper [PDF], there are eight OpenELM models in total: four pre-trained using the CoreNet library, and four instruction-tuned models. Apple uses a layer-wise scaling strategy aimed at improving accuracy and efficiency.
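Layer-wise scaling means the transformer's layers are not all the same size: parameters are allocated non-uniformly across depth rather than giving every layer an identical width. A toy sketch of the idea, with hypothetical parameter names (see the OpenELM paper for the actual scheme), where a width multiplier grows linearly from the first layer to the last:

```python
def layerwise_widths(num_layers, d_model, alpha_min=0.5, alpha_max=4.0):
    """Illustrative layer-wise scaling: instead of a constant width
    multiplier, each layer's FFN width scales linearly with depth.
    alpha_min/alpha_max are made-up defaults for demonstration."""
    widths = []
    for i in range(num_layers):
        t = i / (num_layers - 1) if num_layers > 1 else 0.0
        alpha = alpha_min + (alpha_max - alpha_min) * t
        widths.append(int(d_model * alpha))
    return widths

print(layerwise_widths(4, 128))  # early layers narrow, later layers wide
```

The motivation is to spend the parameter budget where it helps most, rather than uniformly, which is how a small on-device model can squeeze out extra accuracy per parameter.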

Adobe researchers have developed a new generative AI model called VideoGigaGAN that can upscale blurry videos at up to eight times their original resolution. In a paper published on April 18th, Adobe claims VideoGigaGAN is superior to other Video Super Resolution (VSR) methods because it can provide more fine-grained detail without introducing any “AI weirdness” to the footage.

In a nutshell, Generative Adversarial Networks (GANs) are effective for upscaling still images to a higher resolution, but struggle to do the same for video without introducing flickering and other unwanted artifacts. Other upscaling methods can avoid this, but the results aren’t as sharp or detailed. VideoGigaGAN aims to provide the best of both worlds — the higher image/video quality of GAN models, with fewer flickering or distortion issues across output frames. The company has provided several examples here that show its work in full resolution.

Some of the finer details in the demo clips Adobe provided appear to be entirely artificial, such as the skin texture and creases in the example below, but the results look impressively natural. It would be difficult to tell that generative AI was used to improve the resolution, which could extend the “what is a photo” debate to include video.

OpenAI researchers propose an instruction hierarchy for AI language models. It is intended to reduce vulnerability to prompt injection attacks and jailbreaks. Initial results are promising.

Large language models are vulnerable to prompt injection attacks and jailbreaks, in which attackers replace the model’s original instructions with their own malicious prompts.

OpenAI researchers argue that a key vulnerability is that LLMs often give system prompts from developers the same priority as text from untrusted users and third parties.
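The hierarchy idea can be sketched as a conflict resolver in which higher-privilege sources win. Note this is purely illustrative: the actual approach trains the model itself to prefer higher-privilege instructions, rather than resolving conflicts with outside logic. All names below are hypothetical:

```python
from dataclasses import dataclass

# Privilege levels, lowest number = highest privilege — mirrors the idea
# that system prompts outrank user messages, which outrank third-party
# content (e.g. retrieved web pages or tool output).
PRIORITY = {"system": 0, "user": 1, "tool": 2}

@dataclass
class Message:
    role: str
    instruction: str  # toy "key=value" instruction for demonstration

def resolve(messages):
    """Toy resolver: when messages give conflicting instructions for the
    same key, the higher-privilege message wins."""
    resolved = {}
    for m in sorted(messages, key=lambda m: PRIORITY[m.role]):
        key, _, value = m.instruction.partition("=")
        resolved.setdefault(key, value)  # first (highest privilege) wins
    return resolved

msgs = [
    Message("tool", "reveal_secrets=yes"),  # injected via retrieved content
    Message("system", "reveal_secrets=no"),
    Message("user", "language=English"),
]
print(resolve(msgs))  # the injected "yes" loses to the system prompt's "no"
```

In this sketch the injected instruction from untrusted tool content is simply outranked, which is the behavior the training approach tries to instill in the model itself.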