
A new project unites world-leading experts in quantum computing and genomics to develop new methods and algorithms to process biological data.

Researchers aim to harness quantum computing to speed up genomics, enhancing our understanding of DNA and driving advancements in personalized medicine.

A new collaboration has formed, uniting a world-leading interdisciplinary team with skills across quantum computing, genomics, and advanced algorithms. They aim to tackle one of the most challenging computational problems in genomic science: building, augmenting, and analyzing pangenomic datasets for large population samples. Their project sits at the frontiers of research in both biomedical science and quantum computing.

A collaborative study by the University of Oxford and MIT has uncovered a 3.7-billion-year-old magnetic field record from Greenland, demonstrating that Earth’s ancient magnetic field was as strong as it is today. That field is crucial for protecting life, shielding the planet against cosmic and solar radiation.

A new study has recovered a 3.7-billion-year-old record of Earth’s magnetic field, and found that it appears remarkably similar to the field surrounding Earth today. The findings have been published today (April 24) in the Journal of Geophysical Research.

Without its magnetic field, life on Earth would not be possible, since the field shields us from harmful cosmic radiation and the charged particles emitted by the Sun (the ‘solar wind’). Until now, however, there has been no reliable date for when the modern magnetic field was first established.

NASA’s Mars Curiosity rover has made consistent and puzzling findings while roaming the barren surface of the planet’s Gale Crater: mysterious puffs of methane gas that only appear at night and vanish during the day.

Over the years, the rover’s Sample Analysis at Mars (SAM) instrument has repeatedly detected significant concentrations of the gas, sometimes spiking to 40 times the usual levels — and scientists are still trying to figure out the source, as NASA details in a new blog post.

It’s an especially intriguing finding given that living organisms produce methane here on Earth, which lends the detections particular significance as NASA scans the Red Planet for signs of subterranean life.

PyTorch 2.3 is here 😎🔥

By Team PyTorch.

We are excited to announce the release of PyTorch® 2.3 (release note)! PyTorch 2.3 offers support for user-defined Triton kernels in torch.compile, allowing users to migrate their own Triton kernels from eager mode without experiencing performance regressions or graph breaks. Tensor Parallelism improves the experience of training Large Language Models using native PyTorch functions and has been validated on training runs for 100B-parameter models. In addition, semi-structured sparsity is now implemented as a Tensor subclass, with observed speedups of up to 1.6x over dense matrix multiplication.
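To make the Triton integration concrete, here is a minimal sketch of a user-defined Triton kernel launched from inside a torch.compile-d function. It assumes a CUDA GPU with Triton installed; the kernel and wrapper names (add_kernel, add) are illustrative, not taken from the release notes.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Standard Triton vector-add: each program instance handles one block.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

@torch.compile
def add(x, y):
    # The Triton kernel is launched directly inside a compiled function.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
print(torch.allclose(add(x, y), x + y))
```

The point of the feature is that a call like this no longer forces torch.compile to fall back to eager execution around the custom kernel.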

Apple today released several open source large language models (LLMs) that are designed to run on-device rather than through cloud servers. Called OpenELM (Open-source Efficient Language Models), the LLMs are available on the Hugging Face Hub, a community for sharing AI code.

As outlined in a white paper [PDF], there are eight OpenELM models in total: four pre-trained using the CoreNet library and four instruction-tuned variants. Apple uses a layer-wise scaling strategy aimed at improving accuracy and efficiency.
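For readers who want to try one of these checkpoints, here is a rough sketch of loading an OpenELM model from the Hugging Face Hub with the transformers library. The repo id, the trust_remote_code flag, and the pairing with the Llama 2 tokenizer (which the Hub model cards point to, and which is gated) are assumptions about the Hub setup, not details from this article.

```python
# Sketch only: repo id, trust_remote_code, and the tokenizer pairing are
# assumptions about the Hugging Face Hub setup, not confirmed here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"  # assumed repo id on the Hub

# The OpenELM model cards point to the Llama 2 tokenizer (gated on the Hub).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("The three primary colors are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```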

Adobe researchers have developed a new generative AI model called VideoGigaGAN that can upscale blurry videos to up to eight times their original resolution. Introduced in a paper published on April 18th, VideoGigaGAN is, Adobe claims, superior to other Video Super Resolution (VSR) methods because it can provide finer-grained detail without introducing any “AI weirdness” to the footage.

In a nutshell, Generative Adversarial Networks (GANs) are effective for upscaling still images to a higher resolution, but they struggle to do the same for video without introducing flickering and other unwanted artifacts. Other upscaling methods can avoid this, but the results aren’t as sharp or detailed. VideoGigaGAN aims to provide the best of both worlds: the higher image/video quality of GAN models with fewer flickering or distortion issues across output frames. The company has provided several examples that show its work in full resolution.
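As a rough illustration of the flicker problem described above (and not Adobe's method), the sketch below upscales each frame of a toy clip independently and scores temporal inconsistency as the mean change between consecutive upscaled frames; upscale_frame is a hypothetical stand-in for any single-image super-resolution model.

```python
# Conceptual sketch only: this is NOT VideoGigaGAN. upscale_frame is a
# placeholder for any per-frame super-resolution model (GAN or otherwise).
import torch
import torch.nn.functional as F

def upscale_frame(frame: torch.Tensor, scale: int = 8) -> torch.Tensor:
    # Placeholder per-frame upscaler; plain bicubic interpolation here.
    return F.interpolate(frame.unsqueeze(0), scale_factor=scale,
                         mode="bicubic", align_corners=False).squeeze(0)

def temporal_flicker(frames: torch.Tensor, scale: int = 8) -> float:
    # frames: (T, C, H, W) low-resolution clip with values in [0, 1].
    upscaled = torch.stack([upscale_frame(f, scale) for f in frames])
    # Mean absolute difference between consecutive upscaled frames; a
    # per-frame GAN with no temporal modeling tends to score higher here.
    return (upscaled[1:] - upscaled[:-1]).abs().mean().item()

clip = torch.rand(8, 3, 32, 32)  # toy 8-frame low-res clip
print(f"flicker score: {temporal_flicker(clip):.4f}")
```

A temporally aware model of the kind Adobe describes would aim to keep this sort of frame-to-frame inconsistency low while still adding sharp detail.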

Some of the finer details in the demo clips Adobe provided appear to be entirely artificial, such as the skin texture and creases in the example below, but the results look impressively natural. It would be difficult to tell that generative AI was used to improve the resolution, which could extend the “what is a photo” debate to include video.