New experiments suggest it was once RNA’s world, and now we’re just living in it.
Can the ancient past of Mars be unlocked by knowing the orientation of its rocks? That is the question a study published today in Earth and Space Science hopes to address. An international team of researchers led by the Massachusetts Institute of Technology (MIT) investigated bedrock samples drilled by NASA's Perseverance rover in Jezero Crater on Mars to determine the rocks' original orientation prior to drilling, an orientation that could provide clues about Mars' magnetic field history and the conditions that existed on ancient Mars.
What makes this study unique is that it marks the first time such a method has been carried out on another planet. While orienting rock samples in three dimensions is routine on Earth, Perseverance carries no instrument for the task, so the team had to reconstruct each sample's orientation from the angles of the rover's arm together with landmarks on the ground (a rough sketch of that geometry follows below). They note that the method could also be applied to future in-situ studies.
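To make the geometry concrete, here is a minimal, purely illustrative sketch of the idea: compose rotations from hypothetical arm joint angles to recover where the drill axis pointed in the ground frame. The joint layout, angles, and frame convention below are assumptions for illustration, not the mission's actual kinematic chain.

```python
import numpy as np

def rot_z(a):
    """Rotation about the z-axis by a radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    """Rotation about the y-axis by a radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Hypothetical joint angles (radians) read back from arm telemetry:
# one azimuth joint, then shoulder/elbow/wrist pitch joints.
az, shoulder, elbow, wrist = 0.8, -0.4, 1.1, 0.3

# Compose the frames to express the drill axis in a ground ("site") frame
# (x = north, y = east, z = down, a common geological convention).
R = rot_z(az) @ rot_y(shoulder) @ rot_y(elbow) @ rot_y(wrist)
drill_axis = R @ np.array([0.0, 0.0, 1.0])   # assume the bit points along tool z

# The sample's original attitude, expressed as azimuth and plunge.
azimuth = np.degrees(np.arctan2(drill_axis[1], drill_axis[0])) % 360
plunge  = np.degrees(np.arcsin(drill_axis[2]))
print(f"azimuth {azimuth:.1f} deg, plunge {plunge:.1f} deg")
```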
“The orientation of rocks can tell you something about any magnetic field that may have existed on the planet,” said Dr. Benjamin Weiss, who is a professor of planetary sciences at MIT and lead author of the study. “You can also study how water and lava flowed on the planet, the direction of the ancient wind, and tectonic processes, like what was uplifted and what sunk. So, it’s a dream to be able to orient bedrock on another planet, because it’s going to open up so many scientific investigations.”
Researchers create an electrolyte enabling lithium-ion batteries to work efficiently even at ultra-low temperatures.
Uncover the secrets of tying knots in lasers, and find out how cutting-edge research is opening new possibilities for laser applications.
Generative AI is an utterly transformative technology that is already impacting how organizations and individuals work. But what does the future have in store for this incredible technology? Read on for my top predictions.
We now have generative AI tools that can see, hear, speak, read, write, or create. Increasingly, generative AIs will be able to do many of these things at once – such as being able to create text and images together. As an example, the third iteration of the text-to-image tool Dall-E is reportedly able to generate high-quality text embedded in its images, putting it ahead of rival image-generator tools. Then there was the 2023 announcement that ChatGPT can now see, hear, and speak, as well as write.
So, one of my predictions is that generative AIs will continue this move towards multi-modal AIs that can create in multiple ways – and in real-time, just like the human brain.
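For a concrete taste of where the tooling already is, the short sketch below calls DALL-E 3 through the OpenAI Python SDK to generate an image with legible embedded text, the capability noted above. The model name, parameters, and prompt reflect the public API at the time of writing and may change.

```python
# pip install openai   (requires OPENAI_API_KEY in the environment)
from openai import OpenAI

client = OpenAI()

# Ask DALL-E 3 for an image that must contain legible embedded text.
result = client.images.generate(
    model="dall-e-3",
    prompt='A hand-painted shop sign that reads "OPEN ALL NIGHT" in bold letters',
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```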
Meta presents Learning and Leveraging World Models in Visual Representation Learning.
Joint-Embedding Predictive Architecture (JEPA) has emerged as a promising self-supervised approach that learns by leveraging a world model.
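In rough outline, a JEPA predicts the representation of a target view from a context view, with the loss computed in latent space rather than pixel space. Below is a simplified PyTorch sketch of that objective, not the paper's actual architecture; the toy MLP encoders, dimensions, and momentum value are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy MLP encoders stand in for the ViT encoders real JEPAs use.
context_encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 128))
target_encoder  = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 128))
predictor       = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))

# The target encoder is a momentum (EMA) copy of the context encoder
# and receives no gradients.
target_encoder.load_state_dict(context_encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad = False

def jepa_loss(context_view, target_view):
    """Predict the target view's embedding from the context view's embedding."""
    z_ctx = context_encoder(context_view)
    with torch.no_grad():
        z_tgt = target_encoder(target_view)   # latent "world state" to predict
    z_pred = predictor(z_ctx)                 # the world-model prediction
    return F.mse_loss(z_pred, z_tgt)          # loss lives in latent space

@torch.no_grad()
def ema_update(momentum=0.996):
    """Slowly drag the target encoder toward the context encoder."""
    for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
        p_t.mul_(momentum).add_((1.0 - momentum) * p_c)

# Usage: two views of the same image (random stand-ins here).
loss = jepa_loss(torch.randn(8, 784), torch.randn(8, 784))
loss.backward()
ema_update()
```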
InfiMM-HD
A leap forward in high-resolution multimodal understanding.
Multimodal Large Language Models (MLLMs) have experienced significant advancements recently.
ByteDance presents ResAdapter.
Domain consistent resolution adapter for diffusion models.
Recent advancements in text-to-image models (e.g., Stable Diffusion) and corresponding personalized technologies (e.g., DreamBooth and LoRA) enable individuals to generate…
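As background for the LoRA technique the teaser mentions (and distinct from ResAdapter's own method), a low-rank adapter leaves the pretrained weight W frozen and learns a small update BA on top of it. A minimal PyTorch sketch, with illustrative ranks and sizes:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update (W + BA)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                                 # freeze W
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))   # only A and B accumulate gradients
```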
ShortGPT
Layers in large language models are more redundant than you expect.
As Large Language Models (LLMs) continue to advance in performance, their size has escalated significantly, with current LLMs containing billions or even trillions of parameters…
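One simple way to probe that redundancy, loosely in the spirit of the paper's Block Influence metric (this is not the authors' exact score), is to check how little each transformer block changes its input: blocks whose output stays nearly identical to their input are natural pruning candidates. A sketch using Hugging Face transformers with a small stand-in model:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # small stand-in; the paper targets much larger LLMs
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True)
model.eval()

inputs = tok("Layers in large language models are more redundant than you expect.",
             return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).hidden_states   # embedding output + one state per block

# Score each block by how little it transforms its input: a high input/output
# cosine similarity suggests low influence, i.e., a pruning candidate.
for i in range(len(hidden) - 1):
    sim = F.cosine_similarity(hidden[i], hidden[i + 1], dim=-1).mean().item()
    print(f"block {i:2d}: mean cos(input, output) = {sim:.3f}")
```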
We propose Strongly Supervised pre-training with ScreenShots (S4) — a novel pre-training paradigm for Vision-Language Models using data from large-scale web screenshot rendering.
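The data recipe at the core (render a page, keep the pixels, and mine the page itself for supervision) can be sketched with Playwright. The URL list and the single text target below are placeholders; the paper's actual pre-training tasks are richer.

```python
# pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

urls = ["https://example.com"]  # placeholder corpus of pages to render

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 1280, "height": 800})
    for i, url in enumerate(urls):
        page.goto(url)
        page.screenshot(path=f"shot_{i}.png")   # model input: raw pixels
        text = page.inner_text("body")          # one cheap supervision signal
        with open(f"shot_{i}.txt", "w", encoding="utf-8") as f:
            f.write(text)                       # paired target for pre-training
    browser.close()
```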