
In today’s AI news, coding could go this year from one of the most sought-after skills on the job market to one that can be fully automated. On Friday’s episode of the Joe Rogan Experience, Mark Zuckerberg said that Meta and some of the biggest companies in the tech industry are already working toward this.

In other advancements, NovaSky, a team of researchers based out of UC Berkeley’s Sky Computing Lab, released Sky-T1-32B-Preview, a reasoning model that’s competitive with an earlier version of OpenAI’s o1. “Remarkably, Sky-T1-32B-Preview was trained for less than $450,” the team wrote in a blog post, “demonstrating that it is possible to replicate high-level reasoning capabilities affordably and efficiently.”

And, no company has capitalized on the AI revolution more dramatically than Nvidia. The world’s leading high-performance GPU maker has used its ballooning fortunes to significantly increase investments in all sorts of startups but particularly in AI startups.

Meanwhile, Sir Keir Starmer has green-lit a plan to use the immigration system to recruit a new wave of AI experts and loosen up data mining regulations to help Britain lead the world in the new technology. The recruitment of thousands of new AI experts by the government and private sector is part of a 50-point plan to transform Britain with the new technology.

In videos, El Capitan, the National Nuclear Security Administration’s (NNSA) first exascale supercomputer, newly deployed at Lawrence Livermore National Laboratory, is setting new benchmarks in computing power. At 2.79 exaFLOPS of peak performance, El Capitan’s unprecedented capabilities are already impacting scientific computing and making the previously unimaginable a reality.

Then, François Chollet discusses the outcomes of the ARC-AGI (Abstraction and Reasoning Corpus) Prize competition in 2024, where accuracy rose from 33% to 55.5% on a private evaluation set. They explore two core solution paradigms—program synthesis (induction) and direct prediction (“transduction”)—and how successful solutions combine both.
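
To make the distinction concrete, here is a toy Python sketch (purely illustrative, not from the talk): induction searches for an explicit program that reproduces every training pair, while transduction predicts the test output directly without ever naming a program.

```python
# Toy ARC-style task: the training pairs suggest "reverse the row".
train_pairs = [([1, 2, 3], [3, 2, 1]), ([4, 5], [5, 4])]
test_input = [7, 8, 9]

# --- Induction (program synthesis): search a tiny DSL of candidate
# programs for one consistent with all training pairs.
candidates = {
    "identity": lambda row: list(row),
    "reverse": lambda row: row[::-1],
    "sorted": lambda row: sorted(row),
}

def synthesize(pairs):
    for name, prog in candidates.items():
        if all(prog(x) == y for x, y in pairs):
            return name, prog
    return None, None

name, prog = synthesize(train_pairs)
print("induction found:", name, "->", prog(test_input))

# --- Transduction (direct prediction): a model maps the test input
# straight to an output; this stand-in heuristic plays that role.
def transduce(pairs, x):
    return x[::-1]  # stand-in for a learned one-shot predictor

print("transduction predicts:", transduce(train_pairs, test_input))
```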

And, in this entertaining and important talk, AI ethicist Nadia Lee shares the perils, pitfalls and opportunities that swirl around our emerging AI reality. Nadia Lee is an ethical AI advocate and the founder of ThatsMyFace, an AI company that detects key assets and people in malicious content for businesses.

The mention of gravity and quantum in the same sentence often elicits discomfort from theoretical physicists, yet the effects of gravity on quantum information systems cannot be ignored. In a recently announced collaboration between the University of Connecticut, Google Quantum AI, and the Nordic Institute for Theoretical Physics (NORDITA), researchers explored the interplay of these two domains, quantifying the nontrivial effects of gravity on transmon qubits.

Led by Alexander Balatsky of UConn’s Quantum Initiative, along with Google’s Pedram Roushan and NORDITA researchers Patrick Wong and Joris Schaltegger, the study focuses on the gravitational redshift. This phenomenon slightly detunes the energy levels of qubits based on their position in a gravitational field. While negligible for a single qubit, this effect becomes measurable when scaled.

While quantum computers can be shielded effectively from electromagnetic radiation, they cannot currently be shielded from gravity, barring some future antigravitic device expansive enough to hold an entire quantum computer. The team demonstrated that gravitational interactions create a universal dephasing channel, disrupting the coherence required for quantum operations. However, these same interactions could also be harnessed to build highly sensitive gravitational sensors.

“Our research reveals that the same finely tuned qubits engineered to process information can serve as precise sensors—so sensitive, in fact, that future quantum chips may double as practical gravity sensors. This approach is opening a new frontier in quantum technology.”

To explore these effects, the researchers modeled the gravitational redshift’s impact on energy-level splitting in transmon qubits. Gravitational redshift, a phenomenon predicted by Einstein’s general theory of relativity, occurs when light or electromagnetic waves traveling away from a massive object lose energy and shift to longer wavelengths. This happens because gravity alters the flow of time, causing clocks closer to a massive object to tick more slowly than those farther away.

Historically, gravitational redshift has played a pivotal role in confirming general relativity and is critical to technologies like GPS, where precise timing accounts for gravitational differences between satellites and the Earth’s surface. In this study, the researchers applied the concept to transmon qubits, modeling how gravitational effects subtly shift their energy states depending on their height in a gravitational field.
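
For a sense of the magnitudes involved, here is a short, self-contained Python sketch (using standard textbook constants, not numbers from the study) that reproduces the familiar GPS clock corrections from this same physics: gravity makes satellite clocks run fast by about 45 microseconds per day, while orbital time dilation slows them by about 7.

```python
# Relativistic clock corrections at GPS scale, from standard constants.
GM = 3.986004e14         # Earth's gravitational parameter, m^3/s^2
C2 = 2.99792458e8 ** 2   # speed of light squared, m^2/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
R_GPS = 2.6571e7         # GPS orbital radius (~20,200 km altitude), m
DAY = 86400.0            # seconds per day

# Gravitational redshift: clocks higher in the potential tick faster.
grav = GM * (1 / R_EARTH - 1 / R_GPS) / C2

# Special-relativistic time dilation from orbital speed, v^2 = GM/r.
kinematic = GM / R_GPS / (2 * C2)

print(f"gravitational:  +{grav * DAY * 1e6:.1f} microseconds/day")
print(f"time dilation:  -{kinematic * DAY * 1e6:.1f} microseconds/day")
print(f"net offset:     +{(grav - kinematic) * DAY * 1e6:.1f} microseconds/day")
```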

Using computational simulations and theoretical models, the team was able to quantify these energy-level shifts. While the effects are negligible for individual qubits, they become significant when scaled to arrays of qubits positioned at varying heights on vertically aligned chips, such as Google’s Sycamore chip.
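
Tying the two threads together, the following minimal Python sketch (the 5 GHz transmon frequency and 1 cm vertical spread are illustrative assumptions, not parameters from the study) shows how height-dependent detunings, df = f * g * h / c^2, make the phases of a qubit array fan out, which is exactly the dephasing channel described above:

```python
import numpy as np

C = 2.99792458e8   # speed of light, m/s
G = 9.81           # Earth's surface gravity, m/s^2
F_QUBIT = 5e9      # assumed 5 GHz transmon frequency (illustrative)

# Hypothetical vertically mounted chip: 50 qubits spread over 1 cm.
heights = np.linspace(0.0, 0.01, 50)        # meters above the lowest qubit
detunings = F_QUBIT * G * heights / C**2    # df = f * g*h/c^2, in Hz

# Each qubit's phase advances at its own rate, so the magnitude of the
# ensemble-averaged phase factor decays: a pure dephasing channel.
t = 1e8  # seconds; an exaggerated timescale chosen to make the decay visible
coherence = abs(np.mean(np.exp(2j * np.pi * detunings * t)))
print(f"max detuning: {detunings.max():.2e} Hz")
print(f"ensemble coherence after {t:.0e} s: {coherence:.3f}")
```

The printed detuning, a few nanohertz, shows why the effect is invisible for a single qubit on laboratory timescales and only becomes relevant when aggregated across many qubits or exploited deliberately as a sensor.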

A breakthrough in artificial intelligence.

Artificial Intelligence (AI) is a branch of computer science focused on creating systems that can perform tasks typically requiring human intelligence. These tasks include understanding natural language, recognizing patterns, solving problems, and learning from experience. AI technologies use algorithms and massive amounts of data to train models that can make decisions, automate processes, and improve over time through machine learning. The applications of AI are diverse, impacting fields such as healthcare, finance, automotive, and entertainment, fundamentally changing the way we interact with technology.
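
As a minimal, generic illustration of "learning from experience" (a toy example, not tied to any system mentioned here), this Python snippet fits a line to noisy data by gradient descent, improving its predictions with each pass:

```python
# Toy machine learning: fit y = w*x + b to data by gradient descent.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # roughly y = 2x + 1

w, b, lr = 0.0, 0.0, 0.02
for epoch in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # close to the y = 2x + 1 trend
```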

Physicists turn to supercomputers to help build a 3D picture of the structures of protons and neutrons.

A team of scientists has made exciting advances in mapping the internal components of hadrons. They employed quantum chromodynamics calculations and supercomputer simulations to explore how quarks and gluons interact within protons, aiming to unravel mysteries like the origin of the proton’s spin and its internal energy distribution.

Unveiling the Parton Landscape.

The future of technology often feels like science fiction, and a recent conversation between Sundar Pichai, CEO of Google, and Elon Musk of SpaceX proved just that. With Google unveiling its groundbreaking quantum chip Willow, a bold idea was floated—launching quantum computers into space. This visionary concept could not only transform quantum computing but also push the boundaries of modern science as we know it.

Quantum computing has long promised to solve problems far beyond the reach of traditional computers, and Google’s Willow chip seems to be delivering on that vision. In a recent demonstration, the chip completed a complex calculation in just five minutes—a task that would take classical supercomputers billions of years.
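
One standard back-of-the-envelope argument (not Google’s own analysis) for why such benchmarks overwhelm classical machines: a brute-force state-vector simulation must store 2^n complex amplitudes for n qubits, and the memory requirement explodes long before reaching Willow’s 105 qubits.

```python
# Memory for brute-force state-vector simulation of n qubits:
# 2**n complex amplitudes at 16 bytes each (complex128).
for n in (30, 53, 105):
    amplitudes = 2 ** n
    pib = amplitudes * 16 / 2 ** 50  # pebibytes
    print(f"{n:3d} qubits: {amplitudes:.2e} amplitudes, {pib:.2e} PiB")
```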

Google’s researchers describe this milestone as exceeding the known scales of physics, potentially unlocking groundbreaking possibilities in scientific research and technological development. But despite its promise, the field of quantum computing faces significant challenges.

For the first time, the “inertial range connecting large and small eddies in accretion disk turbulence” was reproduced.

Black holes cannot be directly detected by ground- or space-based telescopes. But the accretion disks of gas, plasma, and dust that orbit them emit detectable electromagnetic radiation, allowing astronomers to infer the presence of black holes.

This process creates intense turbulence, which has been a challenging phenomenon to study. Previous simulations had been limited by computational power, but this new research has broken new ground.
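
For context, the “inertial range” refers to the classic Kolmogorov picture of turbulence (standard theory, not output from these simulations): between the large scales where energy is injected and the small scales where it dissipates, the kinetic-energy spectrum falls off as E(k) proportional to k^(-5/3). A brief Python sketch of that scaling:

```python
import numpy as np

# Kolmogorov inertial-range scaling: E(k) = C_K * eps^(2/3) * k^(-5/3),
# valid between the energy-injection and dissipation scales.
C_K = 1.5    # Kolmogorov constant (empirical, roughly 1.5)
eps = 1.0    # energy dissipation rate (arbitrary units)

k = np.logspace(0, 3, 7)                  # wavenumbers spanning three decades
E = C_K * eps ** (2 / 3) * k ** (-5 / 3)

for ki, Ei in zip(k, E):
    print(f"k = {ki:8.1f}   E(k) = {Ei:.3e}")
# Each factor of 10 in k reduces E(k) by 10^(5/3), about a factor of 46.
```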

The researchers leveraged the computational power of supercomputers like RIKEN’s “Fugaku” and NAOJ’s “ATERUI II.” Interestingly, Fugaku held the title of the world’s fastest supercomputer until 2022.

Anchored by a next-generation IBM Quantum System Two at the Illinois Quantum and Microelectronics Park, the new initiative will advance useful quantum applications as industries move toward quantum-centric supercomputing.