
Circa 2018


After 12 years of work, researchers at the University of Manchester in England have completed construction of a “SpiNNaker” (Spiking Neural Network Architecture) supercomputer. It can simulate the internal workings of up to a billion neurons through a whopping one million processing units.

The human brain contains approximately 100 billion neurons, exchanging signals through hundreds of trillions of synapses. While these numbers are imposing, a digital brain simulation needs far more than raw processing power: it calls for a radical rethinking of the architecture on which most computers are built.
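To make the scale of the simulation concrete, the basic model a neuromorphic machine like SpiNNaker runs is, in its simplest form, a leaky integrate-and-fire neuron. The sketch below is a minimal, illustrative Python version of that model (the parameter values and the helper name are assumptions for illustration, not SpiNNaker’s actual software interface); each of the machine’s million cores would time-step many such neurons in parallel and exchange their spikes as messages.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065):
    """Toy leaky integrate-and-fire neuron (illustrative, not SpiNNaker's API).

    input_current: per-timestep drive (already scaled to volts of effect).
    Returns the membrane-voltage trace and the list of spike times (seconds).
    """
    v = v_rest
    voltages, spikes = [], []
    for step, drive in enumerate(input_current):
        # Membrane potential leaks back toward rest while integrating the input.
        v += (-(v - v_rest) + drive) / tau * dt
        if v >= v_thresh:            # threshold crossing: emit a spike ...
            spikes.append(step * dt)
            v = v_reset              # ... and reset the membrane potential
        voltages.append(v)
    return np.array(voltages), spikes

# 200 ms of constant drive, strong enough to push the neuron past threshold.
trace, spike_times = simulate_lif(np.full(200, 0.020))
print(f"{len(spike_times)} spikes in {len(trace)} ms")
```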

Nvidia is going to be powering the world’s fastest AI supercomputer, a new system dubbed “Leonardo” that’s being built by CINECA, an Italian multi-university consortium and global supercomputing leader. The Leonardo system will offer as much as 10 exaflops of FP16 AI performance and will be made up of more than 14,000 Nvidia Ampere-based GPUs once completed.
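As a rough back-of-envelope check (assuming the 10-exaflop figure is the aggregate peak FP16 throughput summed over all accelerators, which the announcement does not spell out), the implied per-GPU number is easy to compute:

```python
# Back-of-envelope: implied peak FP16 throughput per GPU, assuming the headline
# 10-exaflop figure is simply the sum over all ~14,000 accelerators.
total_fp16_flops = 10e18      # 10 exaflops
num_gpus = 14_000
per_gpu = total_fp16_flops / num_gpus
print(f"~{per_gpu / 1e12:.0f} TFLOPS of FP16 per GPU")   # roughly 714 TFLOPS
```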

Leonardo will be one of four new supercomputers supported by a cross-European effort to advance high-performance computing capabilities in the region, which will eventually offer advanced AI capabilities for processing applications across both science and industry. Nvidia will also supply its Mellanox HDR InfiniBand networking to the project, providing low-latency, high-bandwidth connections across the clusters.

The other computers in the cluster include MeluXina in Luxembourg and Vega in Slovenia, as well as a new supercomputing system coming online in the Czech Republic. The pan-European consortium also plans four more supercomputers for Bulgaria, Finland, Portugal and Spain, though those will follow later, and specifics about their performance and locations aren’t yet available.

A team of researchers led by Osaka University has discovered “microtube implosion,” a novel mechanism for generating megatesla-order magnetic fields.

Magnetic fields are used in various areas of modern physics and engineering, with practical applications ranging from doorbells to maglev trains. Since Nikola Tesla’s discoveries in the 19th century, researchers have strived to realize strong magnetic fields in laboratories for fundamental studies and diverse applications, but the magnetic strength of familiar examples is relatively weak. Geomagnetism is 0.3−0.5 gauss (G), and the magnetic resonance imaging (MRI) machines used in hospitals produce about 1 tesla (T = 10⁴ G). By contrast, future magnetic fusion and maglev trains will require magnetic fields on the kilotesla (kT = 10⁷ G) order. To date, the highest magnetic fields experimentally observed are on the kT order.
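To keep the unit relationships above straight (1 T = 10⁴ G), here is a small illustrative conversion of the field strengths mentioned in this passage:

```python
# Field strengths mentioned above, expressed in both tesla and gauss (1 T = 1e4 G).
GAUSS_PER_TESLA = 1e4

fields_tesla = {
    "Earth's magnetic field":           0.4e-4,  # ~0.3-0.5 G
    "Hospital MRI":                     1.0,     # ~1 T
    "Magnetic fusion / maglev target":  1e3,     # kilotesla scale
    "Microtube implosion (simulated)":  1e6,     # megatesla scale
}

for name, tesla in fields_tesla.items():
    print(f"{name:34s} {tesla:9.1e} T = {tesla * GAUSS_PER_TESLA:9.1e} G")
```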

Recently, scientists at Osaka University discovered a novel mechanism called a “microtube implosion,” and demonstrated the generation of megatesla (MT = 10¹⁰ G) order magnetic fields via particle simulations using a supercomputer. Astonishingly, this is three orders of magnitude higher than what has ever been achieved in a laboratory. Such high magnetic fields are expected only in celestial bodies like neutron stars and black holes.

The blockchain revolution, online gaming and virtual reality are powerful new technologies that promise to change our online experience. After summarizing advances in these hot technologies, we use the collective intelligence of our TechCast Experts to forecast the coming Internet that is likely to emerge from their application.

Here’s what we learned:

Security May Arrive About 2027
We found a sharp division of opinion: roughly half of our experts think there is little or no chance that the Internet will become secure, while the other half think there is about a 60% probability that blockchain and quantum cryptography will solve the problem by about 2027. After noting the success of Gilder’s previous forecasts, we tend to accept those who agree with Gilder.
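One simple way to turn that split panel into a single number is a linear opinion pool, weighting each camp by its share of the experts (this pooling rule is our illustration, not necessarily TechCast’s actual aggregation method):

```python
# Linear opinion pool over the two expert camps (illustrative assumption,
# not necessarily TechCast's aggregation method).
p_skeptics  = 0.0   # "little or no chance" camp
p_optimists = 0.6   # ~60% probability of a secure Internet by about 2027
w_skeptics, w_optimists = 0.5, 0.5   # roughly half the experts in each camp

pooled = w_skeptics * p_skeptics + w_optimists * p_optimists
print(f"Pooled probability of a secure Internet by ~2027: {pooled:.0%}")  # 30%
```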

This could be important!


A new algorithm that fast-forwards simulations could bring greater usability to current and near-term quantum computers, opening the way for applications to run past the strict time limits that hamper many quantum calculations.

“Quantum computers have a limited time to perform calculations before their useful quantum nature, which we call coherence, breaks down,” said Andrew Sornborger of the Computer, Computational, and Statistical Sciences division at Los Alamos National Laboratory, and senior author on a paper announcing the research. “With an algorithm we have developed and tested, we will be able to fast forward quantum simulations to solve problems that were previously out of reach.”
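The principle behind fast-forwarding can be illustrated classically: once a (small, approximate) Hamiltonian has been diagonalized, evolving a state to any time t costs the same fixed amount of work instead of growing with t, which is what lets a short, fixed-depth computation stand in for a long simulation. Below is a toy NumPy analogue of that idea; it is a classical sketch for intuition, not the Los Alamos team’s actual quantum algorithm.

```python
import numpy as np

# Toy single-qubit Hamiltonian H = 0.7*X + 0.3*Z (an arbitrary illustrative choice).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.7 * X + 0.3 * Z

# Diagonalize once: H = V diag(E) V^dagger.
E, V = np.linalg.eigh(H)

def evolve(state, t):
    """Apply U(t) = V exp(-i E t) V^dagger; the cost is independent of t."""
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ state))

psi0 = np.array([1, 0], dtype=complex)     # start in |0>
for t in (1.0, 100.0, 10_000.0):           # "fast forward" to arbitrarily long times
    psi_t = evolve(psi0, t)
    print(f"t = {t:8.1f}   P(|0>) = {abs(psi_t[0])**2:.4f}")
```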

Computers built of quantum components, known as qubits, can potentially solve extremely difficult problems that exceed the capabilities of even the most powerful modern supercomputers. Applications include faster analysis of large data sets and unraveling the mysteries of superconductivity, to name a few of the possibilities that could lead to major technological and scientific breakthroughs in the near future.

Scientists have created a device which could make it easier to harness super-fast quantum computers for real-world applications, a team at Finland’s Aalto University said on Wednesday.

Quantum computers are a new generation of machines powered by energy transfers between so-called “qubits,” components a fraction of a millimetre across.

Scientists believe the devices will eventually be able to vastly outperform even the world’s most powerful conventional supercomputers.

Researchers from the Moscow Institute of Physics and Technology and King’s College London cleared the obstacle that had prevented the creation of electrically driven nanolasers for integrated circuits. The approach, reported in a recent paper in Nanophotonics, enables the design of coherent light sources at a scale not only hundreds of times smaller than the thickness of a human hair but even smaller than the wavelength of the light emitted by the laser. This lays the foundation for ultrafast optical data transfer in the manycore microprocessors expected to emerge in the near future.

Light signals revolutionized information technologies in the 1980s, when optical fibers started to replace copper wires, making data transmission orders of magnitude faster. Since optical communication relies on light, whose frequency is several hundred terahertz, it allows transferring terabytes of data every second through a single fiber, vastly outperforming electrical interconnects.
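A rough, illustrative estimate shows why a carrier at hundreds of terahertz translates into terabits per second (the usable-bandwidth fraction and spectral efficiency below are assumptions made for the sake of the estimate, not figures from the paper):

```python
# Back-of-envelope: data rate supported by an optical carrier in a single fiber.
carrier_hz      = 193e12   # ~193 THz, a standard telecom C-band carrier
usable_fraction = 0.02     # assume a few percent of the carrier frequency is usable bandwidth
spectral_eff    = 4        # assumed bits per second per hertz (coherent modulation formats)

bandwidth_hz    = carrier_hz * usable_fraction
bits_per_second = bandwidth_hz * spectral_eff
print(f"~{bits_per_second / 1e12:.0f} Tb/s through a single fiber (rough estimate)")
```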

Fiber optics underlies the modern internet, but light could do much more for us. It could be put into action even inside the microprocessors of supercomputers, workstations, smartphones, and other devices. This requires using optical communication lines to interconnect the purely electronic components, such as processor cores. As a result, vast amounts of information could be transferred across the chip nearly instantaneously.