Device translates thought to speech in real time.
Scientists at St. Jude Children’s Research Hospital, the National Center for Genomic Analysis and the University of Adelaide have created a single-cell RNA analysis method that is 47 times cheaper and more scalable than other techniques.
Single-cell RNA sequencing provides scientists with important information about gene expression in health and disease. However, the technique is expensive, which often prohibits analysis of large numbers of cells.
Scientists from St. Jude Children’s Research Hospital, the National Center for Genomic Analysis and the University of Adelaide have created a method that combines microscopy with single-cell RNA analysis to overcome these limitations. The technique, called Single-Cell Transcriptomics Analysis and Multimodal Profiling through Imaging (STAMP), can profile millions of single cells for a fraction of the cost of existing approaches.
After decades of theorising, scientists have demonstrated how light interacts with the vacuum, recreating a bizarre phenomenon predicted by quantum physics.
Oxford University physicists ran simulations to test how intense laser beams alter the vacuum, a state once thought to be empty but predicted by quantum physics to be full of fleeting, temporary particle pairs.
Classical physics predicts that light beams pass through each other undisturbed. But quantum mechanics holds that even what we know as vacuum is always brimming with fleeting particles, which pop in and out of existence, causing light to be scattered.
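The scattering of light by light described here is conventionally captured, at low photon energies, by the Euler–Heisenberg effective Lagrangian. The weak-field form below is the standard textbook expression in natural units (ħ = c = 1), with α the fine-structure constant and m_e the electron mass; it is given for context only and is not taken from the Oxford simulations.

```latex
\mathcal{L}_{\mathrm{EH}} \simeq \tfrac{1}{2}\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)
  + \frac{2\alpha^{2}}{45\,m_{e}^{4}}\left[\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)^{2}
  + 7\left(\mathbf{E}\cdot\mathbf{B}\right)^{2}\right]
```

The quartic terms are what allow two light beams to influence each other through the vacuum, the effect that classical electromagnetism alone forbids.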
A brain-computer interface has enabled a man with paralysis to have real-time conversations without the usual delay in synthesized speech.
Scientists have mapped how over 140,000 mutations affect the formation of amyloid beta fibrils, offering an unprecedented look at early events in Alzheimer’s disease.
Faster-than-light travel still seems like pure science fiction, but it may not stay that way. Researchers have proposed a new approach that could, in theory, allow effective travel at speeds ten times faster than light, and other teams have reported progress on warp-drive concepts. In practice, they suggest, this could mean that in just 10 or 20 years we could have the first prototypes of spaceships capable of traveling enormous distances in ever shorter times.
Researchers at Apple have released an eyebrow-raising paper that throws cold water on the “reasoning” capabilities of the latest, most powerful large language models.
In the paper, a team of machine learning experts makes the case that the AI industry is grossly overstating the ability of its top AI models, including OpenAI’s o3, Anthropic’s Claude 3.7, and Google’s Gemini.
Can artificial intelligence (AI) recognize and understand things like human beings? By combining behavioral experiments with neuroimaging, Chinese research teams have for the first time confirmed that multimodal large language models (LLMs) can spontaneously form an object-concept representation system highly similar to that of humans. To put it simply, according to the scientists, AI can spontaneously develop human-level cognition.
The study was conducted by research teams from the Institute of Automation and the Institute of Neuroscience, both of the Chinese Academy of Sciences (CAS), together with other collaborators.
The research paper was published online in Nature Machine Intelligence on June 9. The paper states that the findings advance the understanding of machine intelligence and inform the development of more human-like artificial cognitive systems.
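The paper's exact analysis pipeline is not detailed here, but comparisons of this kind are commonly done with representational similarity analysis: build a dissimilarity matrix over objects from the model's embeddings, build another from human judgments, and correlate the two. The Python sketch below is a minimal illustration under that assumption; model_embeddings, human_dissimilarity and all dimensions are placeholders, not data or code from the study.

```python
# Hypothetical sketch: comparing model object embeddings with human
# similarity judgments via representational similarity analysis (RSA).
# Names and data are placeholders, not the study's actual protocol.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects = 50

# Placeholder inputs: per-object embeddings from a multimodal LLM and a
# human-rated dissimilarity matrix for the same objects.
model_embeddings = rng.normal(size=(n_objects, 512))
human_dissimilarity = rng.uniform(size=(n_objects, n_objects))
human_dissimilarity = (human_dissimilarity + human_dissimilarity.T) / 2

# Condensed upper-triangle vectors of both dissimilarity matrices
# (pdist and triu_indices enumerate pairs in the same row-major order).
model_rdm = pdist(model_embeddings, metric="cosine")
iu = np.triu_indices(n_objects, k=1)
human_rdm = human_dissimilarity[iu]

# Rank correlation between the two representational geometries.
rho, p = spearmanr(model_rdm, human_rdm)
print(f"model-human representational alignment: rho={rho:.3f} (p={p:.3g})")
```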
Today, we’re excited to share V-JEPA 2, the first world model trained on video that enables state-of-the-art understanding and prediction, as well as zero-shot planning and robot control in new environments. As we work toward our goal of achieving advanced machine intelligence (AMI), it will be important that we have AI systems that can learn about the world as humans do, plan how to execute unfamiliar tasks, and efficiently adapt to the ever-changing world around us.
V-JEPA 2 is a 1.2 billion-parameter model that was built using Meta Joint Embedding Predictive Architecture (JEPA), which we first shared in 2022. Our previous work has shown that JEPA performs well for modalities like images and 3D point clouds. Building on V-JEPA, our first model trained on video that we released last year, V-JEPA 2 improves action prediction and world modeling capabilities that enable robots to interact with unfamiliar objects and environments to complete a task. We’re also sharing three new benchmarks to help the research community evaluate how well their existing models learn and reason about the world using video. By sharing this work, we aim to give researchers and developers access to the best models and benchmarks to help accelerate research and progress—ultimately leading to better and more capable AI systems that will help enhance people’s lives.
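The JEPA idea referenced above is to predict masked or future content in representation space rather than pixel space. The PyTorch sketch below is a toy illustration of that training objective, not Meta's V-JEPA 2 code; every module, dimension and tensor shape is assumed for the example.

```python
# Toy sketch of the JEPA objective: predict the *embedding* of a masked or
# future video clip from the embedding of the visible context, instead of
# reconstructing pixels. Not Meta's V-JEPA 2 implementation; sizes are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVideoJEPA(nn.Module):
    def __init__(self, clip_numel=3 * 8 * 32 * 32, feat_dim=256):
        super().__init__()
        # Online encoder for the visible (context) clip.
        self.context_encoder = nn.Sequential(nn.Flatten(1), nn.Linear(clip_numel, feat_dim))
        # Target encoder for the masked/future clip; usually an EMA copy of the
        # online encoder, kept frozen here for simplicity.
        self.target_encoder = nn.Sequential(nn.Flatten(1), nn.Linear(clip_numel, feat_dim))
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        # Predictor maps context embeddings to predicted target embeddings.
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.GELU(), nn.Linear(feat_dim, feat_dim)
        )

    def forward(self, context_clip, future_clip):
        ctx = self.context_encoder(context_clip)
        with torch.no_grad():
            tgt = self.target_encoder(future_clip)
        pred = self.predictor(ctx)
        # The loss is computed in representation space, not pixel space.
        return F.mse_loss(pred, tgt)

# Usage with dummy clips shaped (batch, channels, frames, height, width).
model = TinyVideoJEPA()
context = torch.randn(2, 3, 8, 32, 32)
future = torch.randn(2, 3, 8, 32, 32)
loss = model(context, future)
loss.backward()
```

Predicting in embedding space is the design choice that lets a JEPA-style model ignore unpredictable pixel-level detail and focus on the dynamics that matter for planning and control.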