
Researchers at Duke University and the University of Maryland have used the frequency of measurements on a quantum computer to get a glimpse into the quantum phenomenon of phase changes—something analogous to water turning to steam.

By measuring the number of operations that can be implemented on a quantum computing system without triggering the collapse of its quantum state, the researchers gained insight into how other systems—both natural and computational—meet their tipping points between phases. The results also provide guidance for efforts that will eventually enable quantum computers to achieve their full potential.

The results appeared online June 3 in the journal Nature Physics.

A major campaign of domino-toppling simulations yields new insights into the effects of friction.

Despite the apparent simplicity of toppling dominoes, physicists still don’t have a complete model of the phenomenon. But new numerical simulations come a step closer by untangling the influence of two types of friction—one between neighboring dominoes and the other between each domino and the surface beneath it [1]. The researchers found that, in some cases, these two friction coefficients play competing roles in determining the speed of the domino cascade. They also found that one of the coefficients behaves similarly to friction in granular systems such as piles of sand or pharmaceutical pills, suggesting that the domino simulations may provide insights into other situations where friction is important.

A YouTube video by engineer Destin Sandlin (on his channel Smarter Every Day) inspired David Cantor of Montreal Polytechnic and Kajetan Wojtacki of the Polish Academy of Sciences in Warsaw to study dominoes. Sandlin recorded a series of domino toppling experiments with a high-speed camera and quickly discovered just how complex the problem is. He determined that the wave of falling dominoes moves slightly faster on felt than on a slippery hardwood floor. He also saw surprising anomalies, such as cases where the train of toppling dominoes would abruptly stop.
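A rough sense of the speeds involved comes from dimensional analysis alone: in an idealized cascade, gravity g and the domino height h are the only dimensional quantities, so the propagation speed must scale as sqrt(g*h), with the dimensionless friction coefficients and spacing only rescaling that value. The sketch below is just this back-of-envelope estimate (the 4.8 cm domino height is an assumed standard size, not a figure from the study):

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
h = 0.048  # height of a standard domino, m (assumed)

# The only combination of g and h with units of speed is sqrt(g*h).
# Friction coefficients are dimensionless, so in this idealized picture
# they can only rescale this characteristic value, not set a new scale.
v_scale = math.sqrt(g * h)
print(f"characteristic cascade speed: {v_scale:.2f} m/s")
```

This puts the natural velocity scale below one meter per second, which is why high-speed video is needed to resolve differences between surfaces such as felt and hardwood.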


Noise in an electronic circuit is a nuisance that can scramble information or reduce a detector’s sensitivity. But noise also offers a way to learn about the microscopic quantum mechanisms at play in a material or device. By measuring a circuit’s “shot noise,” a form of white noise, researchers have previously shed light on conduction in quantum Hall and spintronic systems, for instance. Now, a collaboration led by Oren Tal at the Weizmann Institute of Science, Israel, and by Dvira Segal at the University of Toronto, Canada, has shown that an easier-to-measure form of noise, called “flicker noise,” can also be a powerful probe of quantum effects [1].

Flicker noise is a type of pink noise, whose spectrum is dominated by low frequencies—the kind of noise associated with light rainfall. Flicker noise also appears in electrical circuits, but its connection to microscopic transport channels remains poorly understood. To investigate this connection, the team studied an atomic-scale junction between two wires. They modeled the electrons passing through the junction as coherent quantum-mechanical waves that scatter off fluctuating defects located near the junction. These fluctuations can represent the trapping and releasing of electrons by static defects, the movement of charged impurities between lattice sites, and the fluctuations of atoms and molecules adsorbed on surfaces.
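The white/pink distinction drawn above can be made concrete numerically. The sketch below is an illustration, not the collaboration's model: it synthesizes white noise and 1/f-shaped "pink" noise (using the standard textbook trick of scaling spectral amplitudes by 1/sqrt(f)) and compares their average power at low versus high frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16  # number of time samples

# White noise (like shot noise): flat power spectral density.
white = rng.standard_normal(n)

# Pink "flicker" noise: scale a white spectrum's amplitudes by 1/sqrt(f),
# so the power spectral density falls off as 1/f.
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
spectrum[1:] /= np.sqrt(freqs[1:])
spectrum[0] = 0.0  # drop the DC term
pink = np.fft.irfft(spectrum, n=n)

def mean_band_psd(x, lo, hi):
    """Average spectral power of x between fractional frequencies lo and hi."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(x.size, d=1.0)
    band = (f >= lo) & (f < hi)
    return psd[band].mean()

# Pink noise concentrates its power at low frequencies; white noise does not.
pink_ratio = mean_band_psd(pink, 0.001, 0.01) / mean_band_psd(pink, 0.1, 0.5)
white_ratio = mean_band_psd(white, 0.001, 0.01) / mean_band_psd(white, 0.1, 0.5)
print(f"pink low/high power ratio:  {pink_ratio:.1f}")
print(f"white low/high power ratio: {white_ratio:.1f}")
```

The low-frequency dominance of the pink signal is exactly the "light rainfall" character described above, and it is why flicker noise is comparatively easy to measure: its power sits in the frequency range ordinary electronics resolve well.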

Researchers have cooled indium atoms to a temperature close to 1 mK, making indium the first group-III atom to be made ultracold.

At temperatures near absolute zero, atoms move slower than a three-toed sloth, allowing physicists to gain unprecedented experimental control over these systems. New phases of matter can form when atoms become ultracold, and quirky quantum properties can emerge, yet much of the periodic table remains unexplored in the ultracold regime. Now, Travis Nicholson of the National University of Singapore and colleagues have successfully cooled indium to close to 1 mK [1]. Indium is the first “main group-III” atom—a group of post-transition metals on the periodic table—to be cooled to such a low temperature. The demonstration opens the door to studying systems with properties previously unexplored by ultracold physicists.
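The sloth comparison holds up to a back-of-envelope check: for an ideal gas, the rms thermal speed is sqrt(3*k_B*T/m), and for indium (average atomic mass about 114.8 u) at 1 mK that comes out to roughly half a meter per second, comparable to a sloth's crawl. A quick sketch:

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
u = 1.66053906660e-27  # atomic mass unit, kg
m_In = 114.818 * u     # average atomic mass of indium, kg

T = 1e-3  # temperature, K (1 mK)
# Equipartition: (3/2) k_B T = (1/2) m v_rms^2  =>  v_rms = sqrt(3 k_B T / m)
v_rms = math.sqrt(3 * k_B * T / m_In)
print(f"rms speed of indium atoms at 1 mK: {v_rms:.2f} m/s")
```

At room temperature the same formula gives speeds hundreds of times larger, which is why cooling by a factor of roughly a million in temperature is needed before atoms become slow enough to trap and manipulate precisely.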

For their experiments, Nicholson and colleagues used a magneto-optical trap—a standard tool for trapping and cooling atoms. But because this was the first attempt at making indium atoms ultracold, the team had to build their own version of the apparatus rather than using one designed to cool other atoms. “The systems used for this research are highly customized to specific atoms,” Nicholson says. So every part of the setup, from designing the laser systems to picking the screws, had to be “hashed out by us.” With their custom setup, the group loaded 500 million indium atoms into the trap using a laser beam and then cooled them.

Abstract: Superintelligence, the next phase beyond today’s narrow AI and tomorrow’s AGI, almost intrinsically evades our attempts at detailed comprehension. Yet very different perspectives on superintelligence exist today and have concrete influence on thinking about matters ranging from AGI architectures to technology regulation.
One paradigm considers superintelligences as resembling modern deep reinforcement learning systems, obsessively concerned with optimizing particular goal functions. Another considers superintelligences as open-ended, complex evolving systems, continually balancing drives toward individuation and radical self-transcendence in a paraconsistent way. In this talk I will argue that the open-ended conception of superintelligence is both more desirable and more realistic, and will discuss how concrete work being done today on projects like OpenCog Hyperon, SingularityNET and Hypercycle potentially paves a path through beneficial decentralized integrative AGI on to open-ended superintelligence and ultimately the Singularity.

Bio: In May 2007, Goertzel spoke at a Google tech talk about his approach to creating artificial general intelligence. He defines intelligence as the ability to detect patterns in the world and in the agent itself, measurable in terms of emergent behavior of “achieving complex goals in complex environments”. A “baby-like” artificial intelligence is initialized, then trained as an agent in a simulated or virtual world such as Second Life to produce a more powerful intelligence. Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as “attention values”, with the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming.

This talk is part of the ‘Stepping Into the Future’ conference. http://www.scifuture.org/open-ended-vs-closed-minded-concept…elligence/

Many thanks for tuning in!

Early experiences with the new Tesla Model Y with 4,680 cells and a structural battery pack are showing some impressive potential for faster charging and better energy density.

When Tesla delivered its first made-in-Texas Model Y vehicles, we noted that it was strange that Tesla didn’t reveal any details – like specs and pricing – about the new version of the electric SUV.

We learned a little more over the next few weeks. The new Texas-built Model Y Standard starts at $59,990, has a range of 279 miles, accelerates from 0 to 60 mph in five seconds, and is equipped with a few new features, including a magnetic center console armrest and a parcel shelf. But we are more interested in the impact of the new battery cell and structural battery pack.



Artificial intelligence is playing a huge role in the development of all kinds of technologies. It can be combined with deep learning techniques to do amazing things that have the potential to improve all our lives. Things like learning how to safely control nuclear fusion, or making delicious pizzas.

One of the many questions surrounding AI is its use in art. There’s no denying AI can have some amazing abilities when it comes to producing images. Nvidia’s GauGAN2, which can take words and turn them into photorealistic pictures, is one example of this. Another is Ubisoft’s ZooBuilder AI, a prototype for animating animals.