
Emergence, a fascinating and complex concept, illuminates how intricate patterns and behaviors can spring from simple interactions. It’s akin to marveling at a symphony, where each individual note, simple in itself, contributes to a rich musical experience far surpassing the sum of its parts. Although definitions of emergence vary across disciplines, they converge on a common theme: small quantitative changes in a system’s parameters can lead to significant qualitative transformations in its behavior. These qualitative shifts mark different “regimes,” in which the fundamental “rules of the game,” the underlying principles or equations governing the system’s behavior, change dramatically.

To make this abstract concept more tangible, let’s explore relatable examples from various fields:

1. Physics: Phase Transitions: Emergence is vividly illustrated by phase transitions, such as water turning into ice. Here, minor temperature changes (a quantitative parameter) lead to a drastic change from liquid to solid (a qualitative behavior). Each molecule behaves simply, but collectively the molecules transition into a distinctly different state with its own properties.
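A phase transition is hard to simulate in a few lines, but the same idea, a small quantitative change in one parameter producing a qualitatively different regime, can be sketched with a standard toy system not mentioned above: the logistic map. This is an illustrative example, not a model of water and ice; the parameter values are the textbook ones.

```python
def logistic_trajectory(r, x0=0.2, transient=500, keep=8):
    """Iterate x -> r*x*(1-x), discard a transient, return the last `keep` values."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return tail

# r = 2.5: the system settles to a single fixed point (x* = 1 - 1/r = 0.6)
print(logistic_trajectory(2.5))

# r = 3.2: the very same rule now oscillates forever between two values,
# a qualitatively different "regime" reached by a small change in r
print(logistic_trajectory(3.2))
```

Nothing about the update rule changed between the two runs; only the parameter moved, yet the long-run behavior is of a different kind, which is the common thread in the definitions of emergence above.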

A physicist investigating black holes has found that, in an expanding universe, Einstein’s equations require the rate of the universe’s expansion at the event horizon of every black hole to be a constant, the same for all black holes. In turn, this means that the only energy at the event horizon is dark energy, the so-called cosmological constant. The study is published on the arXiv preprint server.

“Otherwise,” said Nikodem Popławski, a Distinguished Lecturer at the University of New Haven, “the pressure of matter and curvature of spacetime would have to be infinite at a horizon, but that is unphysical.”

Black holes are a fascinating topic because they are among the simplest things in the universe: their only properties are mass, electric charge, and angular momentum (spin). Yet this simplicity gives rise to a fantastical property: an event horizon, a non-material surface at a critical distance from the black hole, spherical in the simplest cases. Anything inside the event horizon can never escape the black hole.
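For the simplest case, a non-rotating, uncharged black hole, the "critical distance" mentioned above is the Schwarzschild radius, r_s = 2GM/c². A quick sketch of that standard formula, using the Sun's mass as an example:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating, uncharged black hole."""
    return 2 * G * mass_kg / c**2

print(f"{schwarzschild_radius(M_sun):.0f} m")  # roughly 3 km for one solar mass
```

The radius scales linearly with mass, so a supermassive black hole of a billion solar masses has a horizon about a billion times larger.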

Artificial intelligence algorithms, fueled by continuous technological development and increased computing power, have proven effective across a variety of tasks. Concurrently, quantum computers have shown promise in solving problems beyond the reach of classical computers. These advancements have contributed to a misconception that quantum computers enable hypercomputation, sparking speculation about quantum supremacy leading to an intelligence explosion and the creation of superintelligent agents. We challenge this notion, arguing that current evidence does not support the idea that quantum technologies enable hypercomputation. Fundamental limitations on information storage within finite spaces and the accessibility of information from quantum states constrain quantum computers from surpassing the Turing computing barrier.

What are good policy options for academic journals regarding the detection of AI-generated content and publication decisions? As a group of associate editors of Dialectica note below, there are several issues involved, including the uncertain performance of AI detection tools and the risk that material checked by such tools is used for the further training of AIs. They’re interested in learning about what policies, if any, other journals have instituted in regard to these challenges and how they’re working, as well as other AI-related problems journals should have policies about. They write:

As associate editors of a philosophy journal, we face the challenge of dealing with content that we suspect was generated by AI. Just like plagiarized content, AI-generated content is submitted under a false claim of authorship. Among the unique challenges posed by AI, the following two are pertinent for journal editors.

First, there is the worry of feeding material to AI while attempting to minimize its impact. To the best of our knowledge, the only available method to check for AI-generated content involves websites such as GPTZero. However, using such AI detectors differs from plagiarism software in running the risk of making copyrighted material available for the purposes of AI training, which eventually aids the development of a commercial product. We wonder whether using such software under these conditions is justifiable.

Second, there is the worry of delegating decisions to an algorithm whose workings are opaque. Unlike plagiarized texts, texts generated by AI routinely do not stand in an obvious relation of resemblance to an original. This makes it extremely difficult to verify whether an article, or part of one, was AI-generated; the basis for refusing to consider an article on such grounds is therefore shaky at best.

We wonder whether it is problematic to refuse to publish an article solely because the likelihood of its being generated by AI passes a specific threshold (say, 90%) according to a specific website. We would be interested to learn about best practices adopted by other journals and about issues we may have neglected to consider. We especially appreciate the thoughts of fellow philosophers as well as members of other fields facing similar problems. — Aleks…

Memristors have recently attracted significant interest due to their applicability as promising building blocks of neuromorphic computing and electronic systems. The dynamic reconfiguration of memristors, which is based on the history of applied electrical stimuli, can mimic both essential analog synaptic and neuronal functionalities. These can be utilized as the node and terminal devices in an artificial neural network. Consequently, the ability to understand, control, and utilize fundamental switching principles and various types of device architectures of the memristor is necessary for achieving memristor-based neuromorphic hardware systems. Herein, a wide range of memristors and memristive-related devices for artificial synapses and neurons is highlighted. The device structures, switching principles, and the applications of essential synaptic and neuronal functionalities are sequentially presented. Moreover, recent advances in memristive artificial neural networks and their hardware implementations are introduced along with an overview of the various learning algorithms. Finally, the main challenges of the memristive synapses and neurons toward high-performance and energy-efficient neuromorphic computing are briefly discussed. This progress report aims to be an insightful guide for the research on memristors and neuromorphic-based computing.
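The "dynamic reconfiguration based on the history of applied electrical stimuli" can be made concrete with the classic linear ionic-drift memristor model (the HP model of Strukov et al.). The parameter values below are illustrative, not taken from the report above; this is a minimal sketch of why a memristor's resistance encodes its stimulation history.

```python
# Illustrative parameters for the linear ionic-drift memristor model
R_on, R_off = 100.0, 16e3   # ohms: fully doped / fully undoped resistance
D = 10e-9                   # device thickness, m
mu = 1e-14                  # dopant mobility, m^2 s^-1 V^-1

def simulate(voltage, t_end=1.0, steps=20000, w0=0.1):
    """Euler-integrate the normalized doped-region width w in [0, 1].

    The instantaneous resistance interpolates between R_on and R_off,
    and the state w drifts in proportion to the current, so the final
    resistance depends on the whole history of applied bias.
    """
    w = w0
    dt = t_end / steps
    for k in range(steps):
        R = R_on * w + R_off * (1 - w)     # state-dependent resistance
        i = voltage(k * dt) / R            # Ohm's law at this instant
        w += mu * R_on / D**2 * i * dt     # linear drift of the doped boundary
        w = min(max(w, 0.0), 1.0)          # state is bounded by the device
    return R_on * w + R_off * (1 - w)

# A sustained positive bias lowers the resistance, and the change
# persists after the stimulus ends: this is the synapse-like "memory"
R_initial = R_on * 0.1 + R_off * 0.9
R_after_set = simulate(lambda t: 1.0)
print(R_initial, R_after_set)
```

Driving the same device with the opposite polarity raises the resistance again, which is how analog synaptic potentiation and depression are emulated in the devices surveyed above.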

Keywords: artificial neural networks; artificial neurons; artificial synapses; memristive electronic devices; memristors; neuromorphic electronics.

© 2020 Wiley-VCH GmbH.

Euler’s Identity:

Often called the most beautiful equation in mathematics, e^(iπ) + 1 = 0 combines five of the most important constants of nature: 0, 1, π, e, and i, with the three fundamental operations: addition, multiplication, and exponentiation.

It’s mystical.


Euler’s identity is an equality found in mathematics that has been compared to a Shakespearean sonnet and described as “the most beautiful equation.” It is a special case of a foundational equation in complex arithmetic called Euler’s Formula, which the late great physicist Richard Feynman called in his lectures “our jewel” and “the most remarkable formula in mathematics.”
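Euler's formula states e^(iθ) = cos θ + i·sin θ, and the identity is the special case θ = π. The relationship is easy to check numerically with Python's standard cmath module (the tiny residual below is just floating-point rounding):

```python
import cmath
import math

# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta)
theta = math.pi
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(lhs, rhs)  # both are -1 up to floating-point rounding

# Euler's identity is the special case theta = pi: e^(i*pi) + 1 = 0
residual = abs(cmath.exp(1j * math.pi) + 1)
print(residual)  # on the order of 1e-16, i.e. zero up to rounding
```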

As a supplement to optical super-resolution microscopy techniques, computational super-resolution methods have demonstrated remarkable results in alleviating the spatiotemporal imaging trade-off. However, they commonly suffer from low structural fidelity and universality. Therefore, we herein propose a deep-physics-informed sparsity framework designed holistically to synergize the strengths of physical imaging models (image blurring processes), prior knowledge (continuity and sparsity constraints), a back-end optimization algorithm (image deblurring), and deep learning (an unsupervised neural network). Owing to the utilization of a multipronged learning strategy, the trained network can be applied to a variety of imaging modalities and samples to enhance the physical resolution by a factor of at least 1.67 without requiring additional training or parameter tuning.
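The "physical imaging model" and "back-end deblurring" ingredients can be illustrated, in a much simpler form than the framework above, with a 1D forward blur and a classical Richardson–Lucy deconvolution loop. This is not the paper's method, just a minimal sketch of the blur-then-deblur structure it builds on; the signal and PSF are made up for the example.

```python
import numpy as np

def convolve_same(x, k):
    return np.convolve(x, k, mode="same")

# Forward model: a sharp signal observed through a Gaussian blur (the PSF)
psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf /= psf.sum()

truth = np.zeros(40)
truth[[12, 15, 28]] = [1.0, 0.8, 1.2]        # point-like structures
blurred = convolve_same(truth, psf)

# Classical Richardson-Lucy deconvolution: multiplicative updates that
# keep the estimate non-negative while refitting it to the observation
estimate = np.full_like(blurred, blurred.mean())
for _ in range(200):
    ratio = blurred / (convolve_same(estimate, psf) + 1e-12)
    estimate *= convolve_same(ratio, psf[::-1])

print(np.linalg.norm(blurred - truth), np.linalg.norm(estimate - truth))
```

The deconvolved estimate is substantially closer to the ground truth than the raw blurred observation; sparsity and continuity priors of the kind described above serve to regularize exactly this sort of inversion when noise is present.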

As information technology moves into the era of big data, the traditional von Neumann architecture is showing its limits. The field of computing has long struggled with the latency and bandwidth required to access memory (“the memory wall”) and with energy dissipation (“the power wall”). These challenges, often summarized as “the memory bottleneck,” call for significant research investment in a new architecture for the next generation of computing systems. Brain-inspired computing is such an architecture, offering high energy efficiency and high real-time performance for artificial intelligence workloads. A brain-inspired neural network system is built from neurons and synapses, and memristive devices have been proposed as artificial synapses for neuromorphic computing applications. In this study, post-silicon nano-electronic devices and their application in brain-inspired chips are surveyed. First, we introduce the development of neural networks and review current typical brain-inspired chips, including chips dominated by analog circuits and fully digital chips, leading up to the design of brain-inspired chips based on post-silicon nano-electronic devices. Then, through an analysis of several kinds of post-silicon nano-electronic devices, we describe the progress in building brain-inspired chips from such devices. Lastly, the prospects for brain-inspired chips based on post-silicon nano-electronic devices are discussed.
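The neuron side of a neuron-and-synapse system is commonly modeled, in both software and neuromorphic hardware, as a leaky integrate-and-fire (LIF) unit. The sketch below is a generic illustration of that behavior, not a circuit from the survey; the threshold and leak values are arbitrary.

```python
def lif_spikes(input_current, v_th=1.0, leak=0.9, dt_steps=100):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    zero, integrates the input each step, and emits a spike (then resets)
    whenever it crosses the threshold."""
    v, spikes = 0.0, 0
    for _ in range(dt_steps):
        v = leak * v + input_current   # leak, then integrate the input
        if v >= v_th:
            spikes += 1
            v = 0.0                    # reset after firing
    return spikes

print(lif_spikes(0.05))  # weak drive: potential saturates below threshold
print(lif_spikes(0.30))  # strong drive: repetitive firing
```

In a memristive implementation, the integration and threshold dynamics are realized by the device physics (for example, volatile threshold switching) rather than by arithmetic, which is what makes such neurons compact and energy efficient.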

Keywords: brain-inspired chips; neuron; phase change memory; post-silicon nano-electronic device; resistive memory; synapse.

Copyright © 2022 Lv, Chen, Wang, Li, Xie and Song.