
Breaking the code in network theory: Bimodularity reveals direction of influence in complex systems

As summer winds down, many of us in continental Europe are heading back north. The long return journeys from the beaches of southern France, Spain, and Italy once again clog alpine tunnels and Mediterranean coastal routes during the infamous Black Saturday bottlenecks. This annual migration, like many systems in our world, forms a network—not just of connections, but of communities shaped by shared patterns of origin and destination.

This is where network science—and in particular, community detection—comes in. For decades, researchers have developed powerful tools to uncover communities in networks: clusters of tightly interconnected nodes. But these tools work best for undirected networks, where connections are mutual. Graphically, the node maps may look familiar.

These clusters can mean that a group of people are all friends on Facebook, follow the same sports accounts on X, or all live in the same city. Using a standard modularity algorithm, we can then find connections between different communities and begin to draw useful conclusions. Perhaps users in the fly-fishing community also show up as followers of nonalcoholic beer enthusiasts in Geneva. This type of information extraction, impossible without community analysis, is a layer of meaning that can be leveraged to sell beer or even nefariously influence elections.
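As a concrete illustration of modularity-based community detection, here is a minimal sketch. It assumes NumPy, a toy network of two cliques joined by a bridge edge, and Newman's leading-eigenvector method on the modularity matrix (one classic approach, not any specific production algorithm):

```python
import numpy as np

def leading_eigenvector_split(A):
    """Split a graph into two communities using the leading eigenvector
    of the modularity matrix B = A - k k^T / (2m), where k is the
    degree vector and 2m the total degree."""
    k = A.sum(axis=1)
    B = A - np.outer(k, k) / k.sum()
    vals, vecs = np.linalg.eigh(B)
    v = vecs[:, np.argmax(vals)]  # eigenvector of the largest eigenvalue
    return v >= 0                 # sign of each entry gives the community

# Toy network: two 4-node cliques joined by a single bridge edge (3-4).
n = 8
A = np.zeros((n, n))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0  # the bridge between the two communities

labels = leading_eigenvector_split(A)  # nodes 0-3 vs. nodes 4-7
```

The sign pattern of the leading eigenvector recovers the two cliques, bridge edge notwithstanding, which is exactly the structure a modularity algorithm is designed to surface.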

New AI attack hides data-theft prompts in downscaled images

Researchers have developed a novel attack that steals user data by injecting malicious prompts into images that AI systems process before delivering them to a large language model.

The method relies on full-resolution images that carry instructions invisible to the human eye but become apparent when the image quality is lowered through resampling algorithms.

Developed by Trail of Bits researchers Kikimora Morozova and Suha Sabi Hussain, the attack builds upon a theory presented in a 2020 USENIX paper by a German university (TU Braunschweig) exploring the possibility of an image-scaling attack in machine learning.
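The underlying trick from the image-scaling literature can be sketched in a few lines. This toy assumes nearest-neighbor resampling and a synthetic grayscale array; it illustrates the general principle from the 2020 USENIX work, not Trail of Bits' specific method, and real attacks target the particular resamplers (e.g., bicubic) used by each AI pipeline:

```python
import numpy as np

def downscale_nearest(img, factor):
    """Nearest-neighbor downscaling: keep every `factor`-th pixel.
    Real pipelines use library resamplers; this mimics the simplest one."""
    return img[::factor, ::factor]

# Hypothetical full-resolution grayscale "image": mid-gray noise.
rng = np.random.default_rng(0)
full = rng.integers(100, 156, size=(64, 64)).astype(np.uint8)

# Embed a high-contrast payload only at the grid positions that survive
# sampling. At full resolution these pixels are sparse and easy to
# overlook; after downscaling they make up the entire image.
payload = (rng.integers(0, 2, size=(16, 16)) * 255).astype(np.uint8)
full[::4, ::4] = payload

small = downscale_nearest(full, 4)  # small now equals the embedded payload
```

An attacker who knows which resampler the pipeline uses can craft pixels that are invisible in the full image but dominate the downscaled copy the model actually sees.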

For the Singularity to Truly Arrive, We’d Need a Machine That Eats the Sun

However, if you’re rich and you don’t like the idea of a limit on computing, you can turn to futurism, longtermism, or “AI optimism,” depending on your favorite flavor. People in these camps believe in developing AI as fast as possible so we can (they claim) keep guardrails in place that will prevent AI from going rogue or becoming evil. (Today, people can’t seem to—or don’t want to—control whether or not their chatbots become racist, are “sensual” with children, or induce psychosis in the general population, but sure.)

The goal of these AI boosters is known as artificial general intelligence, or AGI. They theorize, or even hope for, an AI so powerful that it thinks like… well… a human mind whose ability is enhanced by a billion computers. If someone ever does develop an AGI that surpasses human intelligence, that moment is known as the AI singularity. (There are other, unrelated singularities in physics.) AI optimists want to accelerate the singularity and usher in this “godlike” AGI.

One of the key facts of computer logic is that, if you can slow the processes down enough and look at them in enough detail, you can track and predict every single thing a program will do. Algorithms (and not the opaque AI kind) guide everything within a computer. Over the decades, experts have specified the exact ways information can be sent, one bit (one minuscule electrical zap) at a time, through a central processing unit (CPU).

Researchers Demonstrate QuantumShield-BC Blockchain Framework

Researchers have developed QuantumShield-BC, a blockchain framework designed to resist attacks from quantum computers. It integrates post-quantum cryptography (PQC) using algorithms such as Dilithium and SPHINCS+, quantum key distribution (QKD), and quantum Byzantine fault tolerance (Q-BFT), which leverages quantum random number generation (QRNG) for unbiased leader selection. The framework was tested on a controlled testbed with up to 100 nodes, demonstrating resistance to simulated quantum attacks and achieving fairness through QRNG-based consensus. An ablation study confirmed the contribution of each quantum component to overall security, although the QKD implementation was simulated and scalability to larger networks requires further investigation.
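The idea of beacon-driven leader selection can be sketched as follows. This is a hypothetical illustration, not the paper's protocol: a software hash stands in for the QRNG output, and the only property shown is that every node, given the same shared beacon, independently computes the same uniformly chosen leader:

```python
import hashlib

def select_leader(random_beacon: bytes, node_ids: list) -> str:
    """Deterministically map a shared random beacon to one node.
    In QuantumShield-BC the beacon would come from a QRNG; hashing
    keeps the index uniform over the sorted node list, so no node
    can bias who leads the round."""
    digest = hashlib.sha256(random_beacon).digest()
    index = int.from_bytes(digest, "big") % len(node_ids)
    return sorted(node_ids)[index]

# Every node runs the same computation on the agreed beacon.
nodes = ["node-a", "node-b", "node-c"]
leader = select_leader(b"round-42-beacon", nodes)
```

Because the beacon is unpredictable until published but identical for all participants, each round's leader is both unbiased and verifiable after the fact.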

Thermodynamic computing system for AI applications

Recent breakthroughs in artificial intelligence (AI) algorithms have highlighted the need for alternative computing hardware in order to truly unlock the potential for AI. Physics-based hardware, such as thermodynamic computing, has the potential to provide a fast, low-power means to accelerate AI primitives, especially generative AI and probabilistic AI. In this work, we present a small-scale thermodynamic computer, which we call the stochastic processing unit. This device is composed of RLC circuits, as unit cells, on a printed circuit board, with 8 unit cells that are all-to-all coupled via switched capacitances. It can be used for either sampling or linear algebra primitives, and we demonstrate Gaussian sampling and matrix inversion on our hardware. The latter represents a thermodynamic linear algebra experiment. We envision that this hardware, when scaled up in size, will have significant impact on accelerating various probabilistic AI applications.

Reposted from Nature Publishing:


Current digital hardware struggles with high computational demands in applications such as probabilistic AI. Here, authors present a small-scale thermodynamic computer composed of eight RLC circuits, demonstrating Gaussian sampling and matrix inversion, suggesting potential speed and energy efficiency advantages over digital GPUs.

China data link could offer faster coordination during hypersonic attacks



Chinese researchers explain that traditional tactical data links rely on round-trip time (RTT) for synchronization, which works for low-speed aircraft. Systems like NATO’s Link-16 achieve roughly 100-nanosecond accuracy under these conditions.

However, in hypersonic cooperative strike systems operating above Mach 5, the rapid relative motion between widely dispersed platforms creates asymmetric transmission paths, severely reducing the precision of conventional RTT algorithms. This highlights the need for new communication technologies capable of maintaining ultra-precise timing at extreme speeds.
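Two-way (RTT-based) time transfer, and the bias that asymmetric paths introduce, can be shown with a few lines of arithmetic. The function below is the textbook offset estimator used by NTP-style protocols, given here as an illustration rather than the Link-16 implementation; the timestamps are invented for the example:

```python
def rtt_offset(t1, t2, t3, t4):
    """Classic two-way time transfer:
    t1 = request sent (A's clock), t2 = request received (B's clock),
    t3 = reply sent (B's clock),   t4 = reply received (A's clock).
    The estimate assumes the forward and return paths take equal time."""
    return ((t2 - t1) + (t3 - t4)) / 2

# Symmetric case: B's clock is 5 units ahead, one-way delay 2 units.
# t2 = 0 + 2 + 5 = 7; t4 = 7 - 5 + 2 = 4. The offset is recovered exactly.
offset = rtt_offset(t1=0.0, t2=7.0, t3=7.0, t4=4.0)   # = 5.0

# Asymmetric case (platforms closing at high speed): forward delay 2,
# return delay 1, so t4 = 7 - 5 + 1 = 3. The estimate is now biased by
# half the path asymmetry: (2 - 1) / 2 = 0.5.
biased = rtt_offset(t1=0.0, t2=7.0, t3=7.0, t4=3.0)   # = 5.5
```

At Mach 5+ the geometry changes between the outbound and return messages, so the equal-path assumption fails on every exchange, which is why conventional RTT synchronization degrades in hypersonic cooperative systems.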

What came before the Big Bang? Supercomputers may hold the answer

Scientists are rethinking the universe’s deepest mysteries using numerical relativity, complex computer simulations of Einstein’s equations in extreme conditions. This method could help explore what happened before the Big Bang, test theories of cosmic inflation, investigate multiverse collisions, and even model cyclic universes that endlessly bounce through creation and destruction.

A new perspective on how cosmological correlations change based on kinematic parameters

To study the origin and evolution of the universe, physicists rely on theories that describe the statistical relationships between different events or fields in spacetime, broadly referred to as cosmological correlations. Kinematic parameters are essentially the data that specify a cosmological correlation—the positions of particles, or the wavenumbers of cosmological fluctuations.

Changes in cosmological correlations driven by variations in these parameters can be described using differential equations: mathematical equations that connect a function (a relationship between an input and an output) to its rate of change. In physics, these equations are used extensively, as they are well suited to capturing the universe's highly dynamic nature.
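As a toy illustration (not the paper's actual equations), consider a hypothetical correlation G(k) whose rate of change with a kinematic parameter k is known; integrating the differential equation recovers G at any k from a single boundary value:

```python
import numpy as np

def integrate_ode(G0, k0, k1, steps=10_000):
    """Euler integration of the toy equation dG/dk = -2 G / k,
    whose exact solution is G(k) = G0 * (k0 / k)**2. Knowing only
    the rate of change plus one boundary value determines G(k)."""
    k_vals = np.linspace(k0, k1, steps)
    dk = k_vals[1] - k_vals[0]
    G = G0
    for k in k_vals[:-1]:
        G += (-2.0 * G / k) * dk   # step along the rate of change
    return G

G_at_2 = integrate_ode(G0=1.0, k0=1.0, k1=2.0)  # exact answer: 0.25
```

This is the sense in which differential equations in kinematic parameters "carry" a correlation from one configuration to another without recomputing it from scratch.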

Researchers at Princeton’s Institute for Advanced Study, the Leung Center for Cosmology and Particle Astrophysics in Taipei, Caltech’s Walter Burke Institute for Theoretical Physics, the University of Chicago, and the Scuola Normale Superiore in Pisa recently introduced a new perspective to approach equations describing how cosmological correlations are affected by smooth changes in kinematic parameters.

Relativistic Motion Boosts Engine Efficiency Beyond Limits

The pursuit of more efficient engines continually pushes the boundaries of thermodynamics, and recent work demonstrates that relativistic effects may offer a surprising pathway past conventional limits. Tanmoy Pandit (Leibniz Institute of Hannover and TU Berlin) and Pritam Chattopadhyay (Weizmann Institute of Science), together with colleagues, investigate a novel thermal machine that harnesses the principles of relativity to achieve efficiencies beyond those dictated by the Carnot cycle. Their research reveals that incorporating relativistic motion into the system, specifically by reshaping energy spectra via the Doppler effect, makes it possible to extract useful work even without a temperature difference, effectively establishing relativistic motion as a resource for energy conversion. This result not only challenges established thermodynamic boundaries but also opens possibilities for designing future technologies that leverage the fundamental principles of relativity to enhance performance.


The appendices detail the Lindblad superoperator used to describe the system's dynamics and the transformation to a rotating frame that simplifies the analysis. They show how relativistic motion affects the average number of quanta in the reservoir and the superoperators, and derive the steady-state density matrix elements for the three-level heat engine, giving explicit equations for power output and efficiency. The supplementary material also describes the Monte Carlo method used to estimate the generalized Carnot-like efficiency bound in relativistic quantum thermal machines, providing pseudocode for the implementation and explaining how the bound is extracted from efficiency and power pairs.

Relativistic Motion Boosts Heat Engine Efficiency

Researchers have demonstrated that relativistic motion can function as a genuine thermodynamic resource, enabling a heat engine to surpass the conventional limits of efficiency. The team investigated a three-level maser, where thermal reservoirs are in constant relativistic motion relative to the working medium, using a model that accurately captures the effects of relativistic motion on energy transfer. The results reveal that the engine’s performance is not solely dictated by temperature differences, but is significantly influenced by the velocity of the thermal reservoirs. Specifically, the engine can operate with greater efficiency than predicted by the Carnot limit, due to the reshaping of the energy spectrum caused by relativistic motion.
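One common way to model this effect, assumed here purely for illustration, is to fold the longitudinal relativistic Doppler factor into an effective reservoir temperature: a receding thermal reservoir looks colder to the working medium. Two reservoirs at the same physical temperature then present different effective temperatures once one of them moves, opening a nonzero Carnot-like window:

```python
import math

def doppler_temperature(T, beta):
    """Effective temperature of a thermal reservoir receding at speed
    beta = v/c. Assumed model: every mode frequency, and hence the
    apparent temperature, is scaled by the longitudinal Doppler factor
    sqrt((1 - beta) / (1 + beta))."""
    return T * math.sqrt((1 - beta) / (1 + beta))

T = 300.0  # both reservoirs share the same physical temperature
T_cold_eff = doppler_temperature(T, beta=0.5)  # receding reservoir looks colder

eta_static = 1 - T / T            # Carnot bound with no motion: exactly zero
eta_moving = 1 - T_cold_eff / T   # nonzero once motion reshapes the spectrum
```

No physical temperature difference exists, yet the Doppler-reshaped spectra let the engine see a gradient, which is the sense in which relativistic motion acts as a thermodynamic resource.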

Grok answers my questions about what Elon meant when he said Tesla FSD v14 will seem sentient

Questions to inspire discussion.

Advanced Navigation and Obstacle Recognition.

🛣️ Q: How will FSD v14 handle unique driveway features? A: The improved neural net and higher resolution video processing will help FSD v14 better recognize and navigate features like speed bumps and humps, adjusting speed and steering smoothly based on their shape and height.

🚧 Q: What improvements are expected in distinguishing real obstacles? A: Enhanced object detection driven by improved algorithms and higher resolution video inputs will make FSD v14 better at distinguishing real obstacles from false positives like tire marks, avoiding abrupt braking and overreaction.

Edge Case Handling and Smooth Operation.

🧩 Q: How will FSD v14 handle complex edge cases? A: The massive jump in parameter count and better video compression will help the AI better understand edge cases, allowing it to reason that non-threatening objects like a stationary hatch in the road aren’t obstacles, maintaining smooth cruising.
