
The First Quantum Supercomputer is Here

The first quantum supercomputers are here! Quantum-enabled supercomputing promises to shed light on new quantum algorithms, hardware innovations, and error-mitigation schemes. Large collaborations between corporations and supercomputing centers are kicking off across the field. Companies like NVIDIA, IBM, IQM, and QuEra are among the earliest to participate in these partnerships.


Like a Child, This Brain-Inspired AI Can Explain Its Reasoning

But deep learning has a massive drawback: The algorithms can’t justify their answers. Often called the “black box” problem, this opacity stymies their use in high-risk situations, such as in medicine. Patients want an explanation when diagnosed with a life-changing disease. For now, deep learning-based algorithms—even if they have high diagnostic accuracy—can’t provide that information.

To open the black box, a team from the University of Texas Southwestern Medical Center tapped the human mind for inspiration. In a study in Nature Computational Science, they combined principles from the study of brain networks with a more traditional AI approach that relies on explainable building blocks.

The resulting AI acts a bit like a child. It condenses different types of information into “hubs.” Each hub is then transcribed into coding guidelines for humans to read—CliffsNotes for programmers that explain the algorithm’s conclusions about patterns it found in the data in plain English. It can also generate fully executable programming code to try out.
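To make the idea of a readable "hub" concrete, here is a minimal toy sketch (not the study's actual method; the hub name, features, and weights are invented for illustration): an explainable building block that condenses inputs into a named hub, emits a plain-English rule, and remains directly executable.

```python
# Toy sketch of an "explainable building block": a hub that condenses
# features into one decision and explains itself in plain English.
# The hub name, features, weights, and threshold below are hypothetical.

def make_hub(name, features, weights, threshold):
    """Return (decision function, human-readable rule) for one hub."""
    def hub(sample):
        # The hub fires when the weighted sum of its inputs crosses the threshold.
        return sum(w * sample[f] for f, w in zip(features, weights)) >= threshold

    rule = (f"Hub '{name}' activates when "
            + " + ".join(f"{w}*{f}" for f, w in zip(features, weights))
            + f" >= {threshold}")
    return hub, rule

risk_hub, explanation = make_hub(
    "metabolic_risk", ["glucose", "bmi"], [0.6, 0.4], threshold=60.0)

patient = {"glucose": 110.0, "bmi": 31.0}
print(explanation)        # the CliffsNotes-style rule a human can audit
print(risk_hub(patient))  # the executable decision for one patient
```

Unlike a deep network's weights, the rule string is the explanation: a reader can check exactly why the hub fired.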

New computational microscopy technique provides more direct route to crisp images

For hundreds of years, the clarity and magnification of microscopes were ultimately limited by the physical properties of their optical lenses. Microscope makers pushed those boundaries by making increasingly complicated and expensive stacks of lens elements. Still, scientists had to decide between high resolution and a small field of view on the one hand or low resolution and a large field of view on the other.

In 2013, a team of Caltech engineers introduced a technique called FPM (for Fourier ptychographic microscopy). This technology marked the advent of computational microscopy: the use of techniques that wed the sensing of conventional microscopes with algorithms that process the detected information in new ways to create deeper, sharper images covering larger areas. FPM has since been widely adopted for its ability to acquire high-resolution images of samples while maintaining a large field of view using relatively inexpensive equipment.
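The core idea behind FPM can be sketched numerically. Each tilted LED illumination samples a shifted circular patch of the sample's Fourier spectrum through the low-NA objective; stitching the patches synthesizes a larger aperture, and hence higher resolution, than any single capture. The sketch below is heavily simplified (real FPM recovers the phase iteratively from intensity-only images, whereas here the complex field is assumed known), and the grid size, pupil radius, and shifts are illustrative values only.

```python
import numpy as np

# Simplified illustration of Fourier-aperture stitching, the idea behind FPM.
# Real FPM performs iterative phase retrieval; here we assume known fields.

N, r = 64, 10                                   # grid size, pupil radius (pixels)
x = np.arange(N) - N // 2
kx, ky = np.meshgrid(x, x)

obj = np.random.default_rng(1).random((N, N))   # stand-in high-resolution object
spectrum = np.fft.fftshift(np.fft.fft2(obj))

recovered = np.zeros_like(spectrum)
for sx in (-8, 0, 8):                           # shifts from different LED angles
    for sy in (-8, 0, 8):
        pupil = (kx - sx) ** 2 + (ky - sy) ** 2 <= r ** 2
        recovered[pupil] = spectrum[pupil]      # stitch this patch into the
                                                # synthetic aperture

coverage = (np.abs(recovered) > 0).mean()
print(f"synthetic aperture covers {coverage:.0%} of Fourier space")
```

Nine overlapping pupils cover far more of Fourier space than one, which is exactly why many cheap low-resolution captures can yield one high-resolution image.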

Now the same lab has developed a new method that can outperform FPM in its ability to obtain images free of blurriness or distortion, even while taking fewer measurements. The new technique, described in a paper that appeared in the journal Nature Communications, could lead to advances in such areas as biomedical imaging, digital pathology, and drug screening.

Exploring the Emergent Abilities of Large Language Models

Emergence, a fascinating and complex concept, illuminates how intricate patterns and behaviors can spring from simple interactions. It’s akin to marveling at a symphony, where each individual note, simple in itself, contributes to a rich, complex musical experience far surpassing the sum of its parts. Although definitions of emergence vary across disciplines, they converge on a common theme: small quantitative changes in a system’s parameters can lead to significant qualitative transformations in its behavior. These qualitative shifts represent different “regimes,” in which the fundamental “rules of the game” (the underlying principles or equations governing the behavior) change dramatically.

To make this abstract concept more tangible, let’s explore relatable examples from various fields:

1. Physics: Phase Transitions: Emergence is vividly illustrated through phase transitions, like water turning into ice. Here, minor temperature changes (a quantitative parameter) lead to a drastic change from liquid to solid (a qualitative behavior). Each molecule behaves simply, but collectively the molecules transition into a distinctly different state with distinctly different properties.
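A minimal numerical sketch of this idea (not from the article) is the mean-field magnet, where the magnetization m satisfies the self-consistency equation m = tanh(m / T). Sliding the temperature T across the critical value 1 is a small quantitative change, yet the solution changes qualitatively: above T = 1 the only solution is m = 0, while below it spontaneous magnetization appears.

```python
import math

# Mean-field phase transition: solve m = tanh(m / T) by fixed-point iteration.
# Crossing T = 1 switches the system between qualitatively different regimes.

def magnetization(T, m=0.5, iters=10_000):
    for _ in range(iters):
        m = math.tanh(m / T)
    return m

for T in (1.2, 1.05, 0.95, 0.8):
    print(f"T = {T}: m = {magnetization(T):.3f}")
```

The parameter changes smoothly, but the behavior does not: the nonzero-magnetization "rule of the game" simply does not exist above the transition.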

The surprising behavior of black holes in an expanding universe

A physicist investigating black holes has found that, in an expanding universe, Einstein’s equations require that the rate of the universe’s expansion at the event horizon of every black hole must be a constant, the same for all black holes. In turn this means that the only energy at the event horizon is dark energy, the so-called cosmological constant. The study is published on the arXiv preprint server.

“Otherwise,” said Nikodem Popławski, a Distinguished Lecturer at the University of New Haven, “the pressure of matter and curvature of spacetime would have to be infinite at a horizon, but that is unphysical.”

Black holes are a fascinating topic because they are among the simplest things in the universe: their only properties are mass, electric charge, and angular momentum (spin). Yet their simplicity gives rise to a fantastical property: an event horizon, a nonphysical surface at a critical distance from the black hole, spherical in the simplest cases. Anything inside the event horizon can never escape the black hole.
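For the simplest case (non-rotating, uncharged), that critical distance is the well-known Schwarzschild radius r_s = 2GM/c². As a quick numerical aside, independent of the study above:

```python
# Schwarzschild radius r_s = 2GM/c^2 for a solar-mass black hole.

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg

r_s = 2 * G * M_sun / c ** 2
print(f"Schwarzschild radius of a solar-mass black hole: {r_s / 1000:.2f} km")
```

Compressing the Sun inside roughly a 3 km radius would turn it into a black hole, which conveys how extreme the densities involved are.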

On quantum computing for artificial superintelligence

Artificial intelligence algorithms, fueled by continuous technological development and increased computing power, have proven effective across a variety of tasks. Concurrently, quantum computers have shown promise in solving problems beyond the reach of classical computers. These advancements have contributed to a misconception that quantum computers enable hypercomputation, sparking speculation about quantum supremacy leading to an intelligence explosion and the creation of superintelligent agents. We challenge this notion, arguing that current evidence does not support the idea that quantum technologies enable hypercomputation. Fundamental limitations on information storage within finite spaces and the accessibility of information from quantum states constrain quantum computers from surpassing the Turing computing barrier.
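The "accessibility of information from quantum states" limit can be made concrete with a standard illustration (not taken from the paper): a qubit's amplitudes encode a continuous parameter, but each projective measurement returns only one bit, so extracting the parameter to high precision costs many measurements. The specific angle and shot counts below are arbitrary.

```python
import numpy as np

# A qubit |psi> = cos(theta/2)|0> + sin(theta/2)|1> holds a continuous
# parameter theta, but one measurement yields a single bit. Estimating
# theta requires repeated preparation and measurement, with the error
# shrinking only as ~1/sqrt(shots) -- a bound on accessible information.

rng = np.random.default_rng(42)
theta = 1.0                       # hidden continuous parameter
p1 = np.sin(theta / 2) ** 2       # Born-rule probability of measuring 1

for shots in (10, 1_000, 100_000):
    outcomes = rng.random(shots) < p1                  # simulated measurements
    theta_est = 2 * np.arcsin(np.sqrt(outcomes.mean()))
    print(f"{shots:>7} shots: |error| = {abs(theta_est - theta):.4f}")
```

This is the intuition behind results like the Holevo bound: n qubits, for all their continuous internal structure, never yield more than n classical bits per readout.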

AI Generated Content and Academic Journals

What are good policy options for academic journals regarding the detection of AI-generated content and publication decisions? As a group of associate editors of Dialectica note below, there are several issues involved, including the uncertain performance of AI-detection tools and the risk that material checked by such tools is used for the further training of AIs. They are interested in learning what policies, if any, other journals have instituted in response to these challenges and how those policies are working, as well as which other AI-related problems journals should have policies about. They write:

As associate editors of a philosophy journal, we face the challenge of dealing with content that we suspect was generated by AI. Just like plagiarized content, AI-generated content is submitted under a false claim of authorship. Among the unique challenges posed by AI, the following two are pertinent for journal editors.

First, there is the worry of feeding material to AI while attempting to minimize its impact. To the best of our knowledge, the only available method to check for AI-generated content involves websites such as GPTZero. However, using such AI detectors differs from using plagiarism software in that it runs the risk of making copyrighted material available for the purposes of AI training, which eventually aids the development of a commercial product. We wonder whether using such software under these conditions is justifiable.

Second, there is the worry of delegating decisions to an algorithm whose workings are opaque. Unlike plagiarized texts, texts generated by AI routinely do not stand in an obvious relation of resemblance to an original. This makes it extremely difficult to verify whether an article, or part of one, was AI generated; the basis for refusing to consider an article on such grounds is therefore shaky at best.

We wonder whether it is problematic to refuse to publish an article solely because the likelihood of its being generated by AI passes a specific threshold (say, 90%) according to a specific website. We would be interested to learn about best practices adopted by other journals and about issues we may have neglected to consider. We especially appreciate the thoughts of fellow philosophers as well as members of other fields facing similar problems. — Aleks…

Emerging Memristive Artificial Synapses and Neurons for Energy-Efficient Neuromorphic Computing

Memristors have recently attracted significant interest due to their applicability as promising building blocks of neuromorphic computing and electronic systems. The dynamic reconfiguration of memristors, which is based on the history of applied electrical stimuli, can mimic both essential analog synaptic and neuronal functionalities. These can be utilized as the node and terminal devices in an artificial neural network. Consequently, the ability to understand, control, and utilize fundamental switching principles and various types of device architectures of the memristor is necessary for achieving memristor-based neuromorphic hardware systems. Herein, a wide range of memristors and memristive-related devices for artificial synapses and neurons is highlighted. The device structures, switching principles, and the applications of essential synaptic and neuronal functionalities are sequentially presented. Moreover, recent advances in memristive artificial neural networks and their hardware implementations are introduced along with an overview of the various learning algorithms. Finally, the main challenges of the memristive synapses and neurons toward high-performance and energy-efficient neuromorphic computing are briefly discussed. This progress report aims to be an insightful guide for the research on memristors and neuromorphic-based computing.
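As a concrete example of the "dynamic reconfiguration based on the history of applied stimuli" described above, here is a sketch of the classic linear ion-drift memristor model (Strukov et al., HP Labs, 2008), a common starting point for memristive synapse modeling. The resistance values, mobility, and device thickness are assumed textbook-style numbers, not parameters from this report.

```python
import numpy as np

# Linear ion-drift memristor model: state w in [0, 1] is the doped-region
# fraction, memristance M(w) = Ron*w + Roff*(1 - w), and dw/dt is
# proportional to the current -- so resistance depends on charge history.

Ron, Roff = 100.0, 16e3   # on/off resistances (ohms), assumed values
mu_v = 1e-14              # ion mobility (m^2 s^-1 V^-1), assumed
D = 1e-8                  # device thickness (m), assumed
dt = 1e-5
t = np.arange(0.0, 0.1, dt)
v = np.sin(2 * np.pi * 10 * t)       # 10 Hz, 1 V sinusoidal drive

w = 0.1
i_hist, w_hist = [], []
for vk in v:
    M = Ron * w + Roff * (1 - w)     # current memristance
    i = vk / M
    w = min(max(w + mu_v * Ron / D ** 2 * i * dt, 0.0), 1.0)  # state drift
    i_hist.append(i)
    w_hist.append(w)

print(f"state w moved from 0.100 to a peak of {max(w_hist):.3f}")
```

Plotting i against v would trace the pinched hysteresis loop that is the memristor's signature: the current always vanishes when the voltage does, yet the up- and down-sweeps follow different branches because the state w remembers the stimulus history, which is precisely the analog synaptic behavior the report surveys.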

Keywords: artificial neural networks; artificial neurons; artificial synapses; memristive electronic devices; memristors; neuromorphic electronics.

© 2020 Wiley-VCH GmbH.