
Quantum simulators are now addressing complex physics problems, such as the dynamics of 1D quantum magnets and their potential similarities to classical phenomena like snow accumulation. Recent research confirms some aspects of this theory, but also highlights challenges in fully validating the KPZ universality class in quantum systems. Credit: Google LLC

Quantum simulators are advancing quickly and can now tackle issues previously confined to theoretical physics and numerical simulation. Researchers at Google Quantum AI and their collaborators demonstrated this new potential by exploring dynamics in one-dimensional quantum magnets, specifically focusing on chains of spin-1/2 particles.

They investigated a statistical mechanics problem that has been the focus of attention in recent years: Could such a 1D quantum magnet be described by the same equations as snow falling and clumping together? It seems strange that the two systems would be connected, but in 2019, researchers at the University of Ljubljana found striking numerical evidence that led them to conjecture that the spin dynamics in the spin-1/2 Heisenberg model are in the Kardar-Parisi-Zhang (KPZ) universality class, based on the scaling of the infinite-temperature spin-spin correlation function.
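For readers unfamiliar with the conjecture, the KPZ claim for the spin-1/2 Heisenberg chain is usually stated through the scaling of the infinite-temperature spin-spin correlation function; in generic notation (ours, not taken from either paper):

```latex
% Conjectured KPZ scaling of the infinite-temperature spin-spin correlator
% in the isotropic spin-1/2 Heisenberg chain (notation is generic).
C(x,t) \;=\; \langle \hat S^z_x(t)\,\hat S^z_0(0)\rangle_{T=\infty}
\;\simeq\; \frac{1}{(\lambda t)^{2/3}}\,
f_{\mathrm{KPZ}}\!\left(\frac{x}{(\lambda t)^{2/3}}\right),
\qquad z = \tfrac{3}{2},
```

where f_KPZ is the universal KPZ scaling function, λ is a model-dependent constant, and z = 3/2 is the KPZ dynamical exponent, sitting between ballistic (z = 1) and diffusive (z = 2) transport. Matching this exponent and the shape of f_KPZ is what the 2019 numerics showed; establishing the full universality class also requires the higher statistics of spin transfer to agree, which is where the quantum-simulator experiments found the picture to be subtler.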


AI technology is spreading quickly throughout many different industries, and its adoption depends on users' trust and on addressing their safety concerns. This becomes complicated when the algorithms powering AI-based tools are vulnerable to cyberattacks that could have detrimental results.

Dr. David P. Woodruff from Carnegie Mellon University and Dr. Samson Zhou from Texas A&M University are working to strengthen the algorithms used by big data AI models against attacks.

ICTP lectures “Topology and dynamics of higher-order networks”

- Network topology 1: https://youtube.com/watch?v=mbmsv9RS3Pc&t=7562s

- Network topology 2: https://youtube.com/watch?v=F6m5lPfk5Mc&t=3808s

- Network geometry.


Topological Dirac equation and Discrete Network Geometry: Metric cohomology
Speaker: Ginestra Bianconi (Queen Mary University of London)

Higher-order networks [1] capture the many-body interactions present in complex systems and are dramatically changing our understanding of the interplay between topology and dynamics. In this context, the new field of topological signals is emerging, with the potential to significantly transform our understanding of the interplay between structure and dynamics in complex interacting systems. This field combines higher-order structures with discrete topology and dynamics, and reveals the emergence of new dynamical states and collective phenomena. Topological signals are dynamical variables sustained not only on the nodes but also on the edges, triangles, and higher-order cells of higher-order networks. While network dynamics is traditionally studied by focusing only on dynamical variables associated with the nodes of simple and higher-order networks, topological signals greatly enrich our understanding of dynamics in discrete topologies. These topological signals are treated using operators from algebraic topology, such as the Hodge Laplacian and the discrete Dirac operator. Recently, growing attention has been devoted to the study of topological signals, showing that they undergo collective phenomena and that they offer new paradigms for understanding, on one side, how topology shapes dynamics and, on the other side, how dynamics learns the underlying network topology.

These concepts and ideas have wide applications; here we cover examples of their application in mathematical physics and dynamical systems. The field is topical at the moment, with many new results already established and an already rich bibliography, so it is very timely to propose a series of lectures on the topic to introduce new scientists to this emergent field. The lectures are addressed to a broad audience of scientists, mostly physicists and mathematicians, but also computer scientists and neuroscientists. The course is planned to be introductory and self-contained, starting from a minimal set of prerequisites and focusing mostly on the mathematical-physics aspects of this field. The course will cover 4 lectures and 1 seminar.

References:
[1] Bianconi, G.: Higher-order networks: An introduction to simplicial complexes. Cambridge University Press (2021).
[2] Bianconi, G., 2021. The topological Dirac equation of networks and simplicial complexes. Journal of Physics: Complexity, 2, p.035022.
[3] Bianconi, G., 2023. The mass of simple and higher-order networks. Journal of Physics A: Mathematical and Theoretical, 57, p.015001.
[4] Bianconi, G., 2024. Quantum entropy couples matter with geometry. arXiv preprint arXiv:2404.08556.
[5] Millán, A.P., Torres, J.J. and Bianconi, G., 2020. Explosive higher-order Kuramoto dynamics on simplicial complexes. Physical Review Letters, 124(21), p.218301.
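To make the algebraic machinery concrete, here is a minimal NumPy sketch (our own toy example, not code from the lectures) that builds the boundary matrices of a small simplicial complex and assembles the Hodge Laplacian acting on edge signals:

```python
import numpy as np

# Toy simplicial complex: 4 nodes, 5 edges, 1 filled triangle (0,1,2).
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # oriented with i < j
triangles = [(0, 1, 2)]

# B1: node-to-edge boundary matrix (rows: nodes, cols: edges).
B1 = np.zeros((len(nodes), len(edges)))
for k, (i, j) in enumerate(edges):
    B1[i, k] = -1.0   # edge leaves node i
    B1[j, k] = +1.0   # edge enters node j

# B2: edge-to-triangle boundary matrix (rows: edges, cols: triangles).
B2 = np.zeros((len(edges), len(triangles)))
for k, (i, j, l) in enumerate(triangles):
    B2[edges.index((i, j)), k] = +1.0
    B2[edges.index((j, l)), k] = +1.0
    B2[edges.index((i, l)), k] = -1.0

# Hodge Laplacian acting on edge signals: L1 = B1^T B1 + B2 B2^T.
L1 = B1.T @ B1 + B2 @ B2.T

# dim ker(L1) equals the first Betti number (here 1: the unfilled cycle 1-3-2).
print("L1 =\n", L1)
print("1-dimensional holes:", L1.shape[0] - np.linalg.matrix_rank(L1))
```

The kernel of L1 has dimension equal to the first Betti number, so harmonic edge signals localize on the one-dimensional hole of the complex; the discrete Dirac operator mentioned in the abstract combines B1 and B2 into a single block matrix that couples node, edge, and triangle signals and squares to the full Hodge Laplacian.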

I don't know if this is true, but it definitely could be, as most civilizations are probably more advanced than Earth.


A survey of five million distant solar systems, aided by ‘neural network’ algorithms, has discovered 60 stars that appear to be surrounded by giant alien power plants.

Seven of the stars — so-called M-dwarf stars that range between 60 percent and 8 percent the size of our sun — were recorded giving off unexpectedly high infrared ‘heat signatures,’ according to the astronomers.

Natural, and better understood, outer space ‘phenomena,’ as they report in their new study, ‘cannot easily account for the observed infrared excess emission.’

The accelerated expansion of the present universe, believed to be driven by a mysterious dark energy, is one of the greatest puzzles in our understanding of the cosmos. The standard model of cosmology, called Lambda-CDM, explains this expansion through a cosmological constant in Einstein's field equations. However, the cosmological constant itself lacks a complete theoretical understanding, particularly regarding its very small positive value.
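For reference (standard Lambda-CDM background material, not specific to the proposal being reported), the cosmological constant enters Einstein's field equations and drives acceleration through the second Friedmann equation:

```latex
% Einstein field equations with a cosmological constant \Lambda
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\qquad
% Acceleration equation for the scale factor a(t)
\frac{\ddot a}{a} = -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}.
```

A positive Λ makes the acceleration positive once matter and radiation have diluted sufficiently. The puzzle mentioned above is quantitative: the observed value is of order 10⁻⁵² m⁻², tiny but nonzero, while naive quantum-field-theory estimates of the vacuum energy overshoot it by dozens of orders of magnitude.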

Differential neuromorphic computing, as a memristor-assisted perception method, holds the potential to enhance subsequent decision-making and control processes. Although the conventional PID control approach and the proposed differential neuromorphic computing share a fundamental principle of smartly adjusting outputs in response to feedback, they diverge significantly in the data manipulation process (Supplementary Discussion 12 and Fig. S26): our method leverages the nonlinear characteristics of the memristor and a dynamic selection scheme to execute more complex data manipulation than the linear, coefficient-based error correction in PID. Additionally, the intrinsic memory function of memristors in our system enables real-time adaptation to changing environments, a significant advantage over the static parameter configuration of PID systems.

To perform similar adaptive control functions in tactile experiments, a von Neumann architecture follows a multi-step process involving several data movements:

1. Input data about the piezoresistive film state is transferred to the system memory via an I/O interface.
2. This sensory data is then moved from the memory to the cache.
3. Subsequently, it is forwarded to the Arithmetic Logic Unit (ALU) and waits for processing.
4. Historical tactile information is also transferred from the memory to the cache, unless it is already present.
5. This historical data is forwarded to the ALU.
6. The ALU combines the current sensory data with the historical data and returns the updated historical data to the cache.

In contrast, our memristor-based approach simplifies this process, reducing it to three primary steps:

1. The ADC reads data from the piezoresistive film.
2. The ADC reads the current state of the memristor, which represents the historical tactile stimuli.
3. The DAC, controlled by FPGA logic, updates the memristor state based on the inputs.

This reduces the cost of operation and enhances data-processing efficiency.
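To illustrate the contrast, here is a minimal sketch of the three-step memristor loop described above. The device-access functions are hypothetical placeholders, not the authors' FPGA or driver API, and the update rule is a simplified stand-in for the dynamic selection scheme:

```python
# Minimal sketch of the three-step memristor read/update loop described above.
# read_film_adc, read_memristor_adc and write_dac are hypothetical stubs.

def read_film_adc() -> float:
    """Step 1: ADC sample of the piezoresistive film (stub returns a constant)."""
    return 0.42

def read_memristor_adc() -> float:
    """Step 2: ADC sample of the memristor conductance, i.e. the stored history."""
    return 0.30

def write_dac(pulse_amplitude: float) -> None:
    """Step 3: DAC pulse (under FPGA control) that updates the memristor state."""
    print(f"DAC pulse: {pulse_amplitude:+.3f} V")

def control_cycle(threshold: float = 0.1, gain: float = 1.0) -> None:
    current = read_film_adc()        # 1. current tactile stimulus
    history = read_memristor_adc()   # 2. historical stimulus stored in the device
    delta = current - history        # differential signal: change vs. history
    if abs(delta) > threshold:       # hard-threshold rule (see the limitation below)
        write_dac(gain * delta)      # 3. nudge the memristor toward the new state

control_cycle()
```

The point of the sketch is that the historical state never leaves the device: reading the memristor replaces the memory-to-cache-to-ALU shuttling of historical data in the von Neumann sequence.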

In real-world settings, robotic tactile systems are required to process large amounts of tactile data and respond as quickly as possible, in less than 100 ms, similar to human tactile systems58,59. The current state-of-the-art robotic tactile technologies are capable of detecting sudden changes in force, such as slip, at millisecond levels (from 500 μs to 50 ms)59,60,61,62, and the response time of our tactile system has also reached this detection level. For visual processing, suppose a vehicle travels at 40 km per hour in an urban area and requires effective control every 1 m. In that case, the requirement translates to a maximum allowable response time of 90 ms for the entire processing pipeline, which includes sensors, operating systems, middleware, and applications such as object detection, prediction, and vehicle control63,64. When incorporating our proposed memristor-assisted method with conventional camera systems, the additional time delay includes the delay from filter circuits (less than 1 ms) and the switching time of the memristor device, which ranges from nanoseconds (ns) down to picoseconds (ps)21,65,66,67. Compared to the required overall response time of the pipeline, these additions are negligible, demonstrating the potential of applying our method in real-world driving scenarios68. Although our memristor-based perception method meets the response-time requirement for the described scenarios, our approach faces several challenges that need to be addressed for real-world applications. Apart from common issues such as variability in device performance and the nonlinear dynamics of memristive responses, our approach needs to overcome the challenges described below.
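Before turning to those challenges, the 90 ms figure quoted above follows from simple arithmetic; a back-of-the-envelope check (our own, using only the numbers in the text):

```python
# Back-of-the-envelope check of the 90 ms response-time budget quoted above.
speed_kmh = 40.0                              # urban driving speed from the example
speed_ms = speed_kmh * 1000.0 / 3600.0        # ≈ 11.1 m/s
control_interval_m = 1.0                      # effective control wanted every 1 m
budget_s = control_interval_m / speed_ms      # ≈ 0.09 s
print(f"Allowable pipeline latency: {budget_s * 1000:.0f} ms")       # -> 90 ms

# Extra latency added by the memristor-assisted stage (upper bounds from the text).
filter_delay_s = 1e-3                         # filter circuits: < 1 ms
memristor_switch_s = 1e-9                     # device switching: ns down to ps
extra_fraction = (filter_delay_s + memristor_switch_s) / budget_s
print(f"Added delay: about {extra_fraction:.1%} of the budget")      # ≈ 1.1%
```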

Currently, the modulation voltage applied to memristors is preset based on the external sensory feature, and the control algorithm is based on hard threshold comparison. This setting lacks the flexibility required for diverse real-world environments where sensory inputs and required responses can vary significantly. Therefore, it is crucial to develop a more automatic memristive modulation method along with a control algorithm that can dynamically adjust based on varying application scenarios.
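To make this limitation concrete: the current scheme behaves like a fixed hard-threshold rule, whereas the more automatic modulation called for above would adapt its threshold to the incoming signal statistics. The sketch below is purely illustrative (a generic running-statistics heuristic, not the authors' algorithm):

```python
# Illustrative only: fixed hard threshold vs. a simple adaptive threshold.
# Neither is the authors' modulation algorithm; the adaptive rule is a
# common running-statistics heuristic used here to show the idea.

def hard_threshold_pulse(delta: float, threshold: float = 0.1, gain: float = 1.0) -> float:
    """Preset threshold and gain: inflexible when input statistics change."""
    return gain * delta if abs(delta) > threshold else 0.0

class AdaptiveModulator:
    """Scales its threshold with a running estimate of input variability."""
    def __init__(self, k: float = 2.0, alpha: float = 0.05) -> None:
        self.k = k            # threshold = k * running mean absolute delta
        self.alpha = alpha    # smoothing factor for the running estimate
        self.mean_abs = 0.1   # initial scale guess

    def pulse(self, delta: float, gain: float = 1.0) -> float:
        self.mean_abs = (1 - self.alpha) * self.mean_abs + self.alpha * abs(delta)
        return gain * delta if abs(delta) > self.k * self.mean_abs else 0.0

mod = AdaptiveModulator()
for d in (0.02, 0.03, 0.5, 0.04):
    print(hard_threshold_pulse(d), mod.pulse(d))
```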

In artificial neural networks, many models are trained for a narrow task using a specific dataset. They face difficulties in solving problems that include dynamic input/output data types and changing objective functions. Whenever the input/output tensor dimension or the data type is modified, the machine learning models need to be rebuilt and subsequently retrained from scratch. Furthermore, many machine learning algorithms that are trained for a specific objective, such as classification, may perform poorly at other tasks, such as reinforcement learning or quantification.

Even if the input/output dimensions and the objective functions remain constant, the algorithms do not generalize well across different datasets. For example, a neural network trained on classifying cats and dogs does not perform well on classifying humans and horses despite both of the datasets having the exact same image input1. Moreover, neural networks are highly susceptible to adversarial attacks2. A small deviation from the training dataset, such as changing one pixel, could cause the neural network to have significantly worse performance. This problem is known as the generalization problem3, and the field of transfer learning can help to solve it.

Transfer learning4,5,6,7,8,9,10 addresses the problems presented above by allowing knowledge transfer from one neural network to another. A common way to use supervised transfer learning is to obtain a large pre-trained neural network and retrain it for a different but closely related problem. This significantly reduces training time and allows the model to be trained on a less powerful computer. Many researchers have used pre-trained neural networks such as ResNet-5011 and retrained them to classify malicious software12,13,14,15. Another application of transfer learning is tackling the generalization problem, where the testing dataset is completely different from the training dataset. For example, every human has unique electroencephalography (EEG) signals because their brain structures are distinctive. Transfer learning addresses this by pretraining on a general-population EEG dataset and then retraining the model for a specific patient16,17,18,19,20. As a result, the neural network is dynamically tailored to a specific person and can interpret their EEG signals properly. Labeling large datasets by hand is tedious and time-consuming; in semi-supervised transfer learning21,22,23,24, either the source dataset or the target dataset is unlabeled, so the neural networks can learn which pieces of information to extract and process without many labels.
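A minimal sketch of the retrain-a-pretrained-backbone recipe described above, assuming PyTorch and a recent torchvision (the dataset loader and class count are placeholders):

```python
# Supervised transfer learning: reuse a pre-trained ResNet-50 backbone and
# retrain only a new output head. Dataset loading is omitted; `train_loader`
# is a placeholder DataLoader you would supply.
import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 2  # e.g. a new binary task (malware vs. benign)

# 1. Obtain a large pre-trained network (ImageNet weights).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# 2. Freeze the backbone so its learned features are transferred as-is.
for param in model.parameters():
    param.requires_grad = False

# 3. Replace the final classification layer for the new, closely related task.
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# 4. Retrain only the new head -- far cheaper than training from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune_one_epoch(train_loader):
    model.train()
    for images, labels in train_loader:   # placeholder DataLoader
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and training only the new head is the cheapest variant; unfreezing some of the later layers with a small learning rate is the usual next step when the target task is less closely related to the source task.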

Digital twin models may enhance future autonomous systems.

Systems controlled by next-generation computing algorithms could give rise to better and more efficient machine learning products, a new study suggests.

Using machine learning tools to create a digital twin, or virtual copy, of an electronic circuit that exhibits chaotic behavior, researchers found that they could successfully predict how it would behave and use that information to control it.


Previous guest and friend of the show, Sir Roger Penrose, argues that human consciousness is not algorithmic and, therefore, cannot be modeled by Turing machines. In fact, he believes in a quantum mechanical understanding of human consciousness. However, as with any issue related to human consciousness, many disagree with him. One of his opponents is Daniel Dennett, with whom I recently had the pleasure of talking. Tune in to find out why Dennett thinks Penrose is wrong!

If you liked this clip, you will for sure love the full interview.

Shortly after our interview, Daniel sadly passed away at the age of 82. He was a renowned philosopher, thought-provoking writer, brilliant cognitive scientist, and vocal atheist. He was the co-director of the Center for Cognitive Studies, the Austin B. Fletcher Professor of Philosophy at Tufts University in Massachusetts, a member of the editorial board for The Rutherford Journal, and a co-founder of The Clergy Project.
