Over 175,000 Ollama AI servers are publicly exposed across 130 countries, and many of them enable tool calling, which allows code execution and opens them to LLMjacking abuse.
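Ollama's documented HTTP API listens on port 11434 by default and answers endpoints such as `/api/tags` (which lists installed models) without authentication. As a minimal sketch of how such exposure is detected, the following probes a single host for an open Ollama instance; the helper names are mine, and the only assumptions are the default port and the documented `/api/tags` response shape. Only probe hosts you control.

```python
import json
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default API port

def ollama_url(host: str, endpoint: str = "/api/tags") -> str:
    """Build the probe URL for a host, assuming the default port."""
    return f"http://{host}:{OLLAMA_PORT}{endpoint}"

def probe_ollama(host: str, timeout: float = 3.0):
    """Return the model names an exposed Ollama server reports,
    or None if the host does not answer like an Ollama API."""
    try:
        with urllib.request.urlopen(ollama_url(host), timeout=timeout) as resp:
            data = json.load(resp)
        # /api/tags returns {"models": [{"name": ...}, ...]} on a live server
        return [m.get("name") for m in data.get("models", [])]
    except Exception:
        return None
```

An unauthenticated `200` response from `/api/tags` is exactly the signal internet-wide scanners key on, which is why an exposed instance is trivial to enumerate.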
A new Android malware campaign is using the Hugging Face platform as a repository for thousands of variations of an APK payload that collects credentials for popular financial and payment services.
Hugging Face is a popular platform that hosts and distributes artificial intelligence (AI), natural language processing (NLP), and machine learning (ML) models, datasets, and applications.
It is considered a trusted platform unlikely to trigger security warnings, but bad actors have abused it in the past to host malicious AI models.
Genie is a “World Model.” It doesn’t just show you a scene; it simulates the physics, the depth, and the logic of a world you can actually control and navigate in real-time.
“I expect vision-language models to play a major role in future embodied AI systems,” said Dr. Alvaro Cardenas.
How can misleading text negatively affect AI behavior? That question drives a recently submitted study in which researchers from the University of California, Santa Cruz and Johns Hopkins University investigated the security risks of embodied AI: AI embedded in a physical body, such as a car or robot, that adapts to its environment through observation rather than through text and data alone. The study could help scientists, engineers, and the public better understand the risks facing such systems and the steps needed to mitigate them.
For the study, the researchers introduced CHAI (Command Hijacking against embodied AI), an attack framework that embeds misleading text and imagery in an embodied AI's inputs to hijack the commands it follows. The researchers tested CHAI on a variety of AI-based systems, including drone emergency landing, autonomous driving, aerial object tracking, and robotic vehicles. In the end, they found that CHAI could successfully subvert these systems, underscoring the need for stronger security measures for embodied AI.
Leading artificial intelligence companies have started to use their own systems to accelerate research and development, with each generation of AI systems contributing to building the next generation. This report distills points of consensus and disagreement from our July 2025 expert workshop on how far the automation of AI R&D could go, laying bare crucial underlying assumptions and identifying what new evidence could shed light on the trajectory going forward.
Mathematica creator Stephen Wolfram has spent nearly 50 years arguing that simple computational rules underlie everything from animal patterns to the laws of physics. In his 2023 TED talk, he makes the case that computation isn’t just a useful way to model the world — it’s the fundamental operating system of reality itself.
Wolfram introduces “the ruliad,” an abstract concept encompassing all possible computational processes. Space and matter, he argues, consist of discrete elements governed by simple rules. Gravity and quantum mechanics emerge from the same computational framework. The laws of physics themselves are observer-dependent, arising from our limited perspective within an infinite computational structure.
On AI, Wolfram sees large language models as demonstrating deep connections between semantic grammar and computational thinking. The Wolfram Language, he claims, bridges human conceptualization and computational power, letting people operationalize ideas directly — what he calls a “superpower” for thinking and creation.
NVIDIA’s AI hardware lineup now extends beyond GPUs to Arm-based CPUs. The company is introducing its high-performance “Vera” CPUs as a standalone product, its first entry as a competitor to Intel Xeon and AMD EPYC server-grade CPUs. NVIDIA CEO Jensen Huang confirmed the move in an interview with Bloomberg, stating, “For the very first time, we’re going to be offering Vera CPUs. Vera is such an incredible CPU. We’re going to offer Vera CPUs as a standalone part of the infrastructure. You can now run your computing stack not only on NVIDIA GPUs but also on NVIDIA CPUs. Vera is completely revolutionary… Coreweave will have to act quickly if they want to be the first to implement Vera CPUs. We haven’t announced any of our CPU design wins yet, but there will be many.”
The “Vera” CPU is equipped with 88 custom Armv9.2 “Olympus” cores that utilize Spatial Multithreading technology, allowing it to handle 176 threads through physical resource partitioning. These custom cores support native FP8 processing, enabling some AI workloads to be executed directly on the CPU with 6x128-bit SVE2 implementation. The chip offers 1.2 TB/s of memory bandwidth and supports up to 1.5 TB of LPDDR5X memory, making it ideal for memory-intensive computing tasks. However, with the CPU now being offered as a standalone solution, it is unclear whether there will be any classic memory options like DDR5 RDIMMs, or if the CPU will rely solely on SOCAMM LPDDR5X. A second-generation Scalable Coherency Fabric provides 3.4 TB/s of bisection bandwidth, connecting the cores across a unified monolithic die and eliminating the latency issues common in chiplet architectures. Additionally, NVIDIA has integrated a second-generation NVLink Chip-to-Chip technology, delivering up to 1.
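The headline figures above imply a few derived numbers worth sanity-checking, such as the thread count and the memory bandwidth available per thread. The sketch below uses only the specs quoted in the article; the variable names are mine.

```python
# Derived figures for NVIDIA's "Vera" CPU, using the specs quoted above.

CORES = 88               # custom Armv9.2 "Olympus" cores
THREADS_PER_CORE = 2     # Spatial Multithreading splits each core's resources in two
MEM_BW_TBS = 1.2         # memory bandwidth, TB/s
MAX_MEM_TB = 1.5         # maximum LPDDR5X capacity, TB

threads = CORES * THREADS_PER_CORE                # 88 x 2 = 176 hardware threads
bw_per_thread_gbs = MEM_BW_TBS * 1000 / threads   # ~6.8 GB/s of bandwidth per thread

print(f"{threads} threads, {bw_per_thread_gbs:.1f} GB/s per thread")
```

Roughly 6.8 GB/s of memory bandwidth per hardware thread is generous by server-CPU standards, which is consistent with the article's framing of Vera as a part aimed at memory-intensive workloads.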