

Faster-than-light travel still seems like pure science fiction—but it could soon become a reality. Scientists claim to have discovered a new way to travel at speeds ten times faster than light. Other research teams have reported breakthroughs in warp technology, and in practice this could mean that in just 10 or 20 years we could have the first prototypes of spaceships capable of traveling enormous distances in ever-shorter times.

Researchers at Apple have released an eyebrow-raising paper that throws cold water on the “reasoning” capabilities of the latest, most powerful large language models.

In the paper, a team of machine learning experts makes the case that the AI industry is grossly overstating the ability of its top AI models, including OpenAI’s o3, Anthropic’s Claude 3.7, and Google’s Gemini.

Can artificial intelligence (AI) recognize and understand things the way human beings do? By combining behavioral experiments with neuroimaging, Chinese scientific teams have for the first time confirmed that multimodal large language models (LLMs) based on AI technology can spontaneously form an object concept representation system highly similar to that of humans. To put it simply, AI can spontaneously develop human-level cognition, according to the scientists.

The study was conducted by research teams from the Institute of Automation, Chinese Academy of Sciences (CAS), the Institute of Neuroscience, CAS, and other collaborators.

The research paper was published online in Nature Machine Intelligence on June 9. The paper states that the findings advance the understanding of machine intelligence and inform the development of more human-like artificial cognitive systems.
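Comparisons like the one described above are commonly done with representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) for each system over the same set of objects, then correlate the two RDMs. The sketch below is an illustrative toy, not the study's actual method; the object counts, embedding dimensions, and synthetic data are all made up for demonstration.

```python
import numpy as np

def rdm(embeddings):
    """Representational dissimilarity matrix:
    1 - Pearson correlation between every pair of item embeddings."""
    return 1.0 - np.corrcoef(embeddings)

def rsa_score(rdm_a, rdm_b):
    """Compare two RDMs by correlating their upper triangles
    (Pearson here for brevity; Spearman is also common)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(0)
human = rng.normal(size=(20, 50))                 # 20 objects x 50 dims (synthetic)
model = human + 0.3 * rng.normal(size=(20, 50))   # noisy copy of the human space
control = rng.normal(size=(20, 50))               # unrelated representation

print(rsa_score(rdm(human), rdm(model)))    # expected: high (model mirrors human)
print(rsa_score(rdm(human), rdm(control)))  # expected: near zero
```

A high RDM correlation is the kind of evidence behind claims that a model's object-concept space is "highly similar" to the human one: both systems treat the same pairs of objects as similar or dissimilar, even though their internal features differ.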

Today, we’re excited to share V-JEPA 2, the first world model trained on video that enables state-of-the-art understanding and prediction, as well as zero-shot planning and robot control in new environments. As we work toward our goal of achieving advanced machine intelligence (AMI), it will be important that we have AI systems that can learn about the world as humans do, plan how to execute unfamiliar tasks, and efficiently adapt to the ever-changing world around us.

V-JEPA 2 is a 1.2 billion-parameter model that was built using Meta's Joint Embedding Predictive Architecture (JEPA), which we first shared in 2022. Our previous work has shown that JEPA performs well for modalities like images and 3D point clouds. Building on V-JEPA, our first model trained on video, which we released last year, V-JEPA 2 improves action prediction and world modeling capabilities that enable robots to interact with unfamiliar objects and environments to complete a task. We’re also sharing three new benchmarks to help the research community evaluate how well their existing models learn and reason about the world using video. By sharing this work, we aim to give researchers and developers access to the best models and benchmarks to help accelerate research and progress—ultimately leading to better and more capable AI systems that will help enhance people’s lives.
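The core JEPA idea is that prediction happens in a learned embedding space rather than pixel space: an encoder maps context and target frames to embeddings, and a predictor is trained to predict the target embeddings from the context embeddings. The following is a deliberately tiny numpy sketch of that loss computation, not Meta's implementation; the linear-plus-tanh "encoder", the frame counts, and all dimensions are invented for illustration (real JEPA models are deep networks, and the target encoder is typically an EMA copy of the context encoder).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frames, W):
    """Toy encoder: a linear map followed by tanh (stand-in for a deep network)."""
    return np.tanh(frames @ W)

# Hypothetical shapes: 4 context frames and 2 target (masked/future) frames,
# each flattened to 64 "pixels"; embeddings are 16-dimensional.
W_enc = rng.normal(scale=0.1, size=(64, 16))   # shared encoder weights
W_pred = rng.normal(scale=0.1, size=(16, 16))  # predictor weights

context = rng.normal(size=(4, 64))
target = rng.normal(size=(2, 64))

# Encode both streams into the shared embedding space.
z_context = encode(context, W_enc)
z_target = encode(target, W_enc)

# Predict the target embeddings from the pooled context embedding.
z_hat = np.tanh(z_context.mean(axis=0) @ W_pred)

# The key design choice: the loss lives in embedding space, not pixel space,
# so the model is not forced to predict irrelevant pixel-level detail.
loss = np.mean((z_hat - z_target) ** 2)
print(float(loss))
```

Because the loss ignores pixel-level detail, the encoder can discard unpredictable appearance noise and keep only what matters for prediction, which is what makes the learned space useful for downstream planning and control.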

Planning for a future of intelligent robots means thinking about how they might transform your industry, what it means for the future of work, and how it may change the relationship between humans and technology.

Leaders must consider the ethical issues of cognitive manufacturing such as job disruption and displacement, accountability when things go wrong, and the use of surveillance technology when, for example, robots use cameras working alongside humans.

The cognitive industrial revolution, like the industrial revolutions before it, will transform almost every aspect of our world, and change will happen faster and sooner than most expect. Consider for a moment: what will it take for each of us and our organizations to be ready for this future?

One of the biggest mysteries of evolution is how species first developed complex vision. Jellyfish are helping scientists solve this puzzle, as the group has independently evolved eyes at least nine separate times. Different species of jellyfish have strikingly different types of vision, from simple eyespots that detect light intensity to sophisticated lens eyes similar to those in humans.

Biologists have studied jellyfish eye structure, light sensitivity, and visual behavior for decades, but the exact genes involved in jellyfish eye formation remain unknown.

Aide Macias-Muñoz, a professor of ecology, is exploring how eyes and light detection evolved using genetic tools. Her lab has just completed a high-quality genome sequence of Bougainvillia cf. muscus, a small jellyfish-like animal in the Hydrozoa group that has an astonishing 28 eyes.