
Could it really happen? Looks like Meta is swinging for the cheap seats.

The social media superpower Meta (formerly Facebook) has announced that it has built an “AI supercomputer,” an exceptionally fast machine designed to train and enhance machine-learning systems, according to a Monday post from Meta CEO Mark Zuckerberg.

“Meta has developed what we believe is the world’s fastest AI supercomputer,” said Zuckerberg in his post. “We’re calling it RSC for AI Research SuperCluster and it’ll be complete later this year.”

Meta says it wants to build the most powerful artificial intelligence supercomputer in the world.

The Facebook owner has already designed and built what it calls the AI Research SuperCluster, or RSC, which it says is among the fastest AI supercomputers in the world.

It hopes to top that league by mid-2022, it said, in what would be a major step towards increasing its artificial intelligence capabilities.

Meta has completed the first phase of the new machine. Once the AI Research SuperCluster (RSC) is fully built out later this year, the company believes it will be the fastest AI supercomputer on the planet, capable of “performing at nearly 5 exaflops of mixed precision compute.”

The company says RSC will help researchers develop better AI models that can learn from trillions of examples. Among other things, the models will be able to build better augmented reality tools and “seamlessly analyze text, images and video together,” according to Meta. Much of this work is in service of its vision for the metaverse, in which it says AI-powered apps and products will have a key role.
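
For a sense of scale, a back-of-envelope calculation shows what 5 exaflops means for training over trillions of examples. The workload figures below (per-example cost, sustained utilization, dataset size) are illustrative assumptions, not Meta's numbers:

```python
# Back-of-envelope: what does 5 exaflops mean for "trillions of examples"?
# Every workload figure below is an illustrative assumption, not Meta's.

PEAK_FLOPS = 5e18          # RSC's stated target: ~5 exaflops (mixed precision)
UTILIZATION = 0.3          # assumed fraction of peak sustained during training
FLOPS_PER_EXAMPLE = 1e12   # assumed compute cost to process one example
EXAMPLES = 2e12            # "trillions of examples": assume 2 trillion

seconds = FLOPS_PER_EXAMPLE * EXAMPLES / (PEAK_FLOPS * UTILIZATION)
print(f"one pass over the data: ~{seconds / 86400:.0f} days")
```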

“We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together,” technical program manager Kevin Lee and software engineer Shubho Sengupta wrote.

What’s next? Human brain-scale AI.

Funded by the Slovakian government using funds allocated by the EU, the I4DI consortium is behind the initiative to build a 64-AI-exaflop machine (that’s 64 billion billion AI operations per second) on our platform by the end of 2022. This will enable Slovakia and the EU to deliver, for the first time in the history of humanity, a human brain-scale AI supercomputer. Meanwhile, almost a dozen other countries are watching this project closely, with interest in replicating this supercomputer in their own countries.
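
To see what “human brain-scale” might mean numerically, here is a rough comparison using widely cited but highly uncertain order-of-magnitude estimates of the brain’s synaptic activity; none of these figures come from I4DI:

```python
# How "brain-scale" is 64 exaflops? A rough comparison against widely cited
# but highly uncertain order-of-magnitude estimates of the brain's synaptic
# activity. None of these figures come from I4DI.

MACHINE_OPS = 64e18       # 64 exaflops = 64 billion billion ops per second

SYNAPSES = 1e15           # assumed ~10^15 synapses in a human brain
RATE_HZ = 100             # assumed ~100 synaptic events per second each
brain_ops = SYNAPSES * RATE_HZ   # ~10^17 synaptic operations per second

print(f"machine is ~{MACHINE_OPS / brain_ops:.0f}x this brain estimate")
```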

There are multiple approaches to achieving human brain-like AI. These include machine learning, spiking neural networks (run on platforms such as SpiNNaker), neuromorphic computing, bio AI, explainable AI and general AI. Supporting these multiple approaches requires universal supercomputers with universal processors if humanity is to deliver human brain-scale AI.
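
As a concrete taste of the spiking approach, here is a minimal leaky integrate-and-fire neuron, the elementary unit that platforms like SpiNNaker simulate at massive scale. The parameters are illustrative, not drawn from any particular system:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit behind the
# spiking-neural-network approach mentioned above. All parameters are
# illustrative, not taken from SpiNNaker or any particular platform.

dt, T = 1e-4, 0.2          # time step and total duration, in seconds
tau = 0.02                 # membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
i_drive = 60.0             # constant input drive (potential units per second)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    # Leaky integration: potential decays toward rest while input pushes it up.
    v += dt * ((v_rest - v) / tau + i_drive)
    if v >= v_thresh:      # a threshold crossing emits a spike, then resets
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes at times {np.round(spikes, 3)} s")
```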

Official launch marks a milestone in the development of quantum computing in Europe.

A quantum annealer with more than 5,000 qubits has been put into operation at Forschungszentrum Jülich. The Jülich Supercomputing Centre (JSC) and D-Wave Systems, a leading provider of quantum computing systems, today launched the company’s first cloud-based quantum service outside North America. The new system is located at Jülich and will work closely with the supercomputers at JSC in the future. The annealing quantum computer is part of the Jülich UNified Infrastructure for Quantum computing (JUNIQ), which was established in autumn 2019 to provide researchers in Germany and Europe with access to various quantum systems.
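
The problems an annealer like this tackles are low-energy states of an Ising model. The sketch below solves a toy instance with classical simulated annealing; it is a conceptual stand-in for what the hardware does, not D-Wave’s API or physical behavior, and the couplings are made up for illustration:

```python
import math
import random

random.seed(0)

# A quantum annealer searches for low-energy states of an Ising model.
# This is a purely classical simulated-annealing sketch of the same
# optimization problem; the toy couplings below are made up.

J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}  # antiferromagnetic ring
h = {0: -0.5, 1: 0.0, 2: 0.0, 3: 0.0}                     # local fields

def energy(s):
    """Ising energy E(s) = sum J_ij s_i s_j + sum h_i s_i, with s_i in {-1,+1}."""
    return (sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
            + sum(hi * s[i] for i, hi in h.items()))

s = [random.choice([-1, 1]) for _ in range(4)]
temperature = 2.0
for step in range(5000):
    i = random.randrange(4)
    old = energy(s)
    s[i] *= -1                      # propose flipping one spin
    dE = energy(s) - old
    if dE > 0 and random.random() >= math.exp(-dE / temperature):
        s[i] *= -1                  # reject the uphill move
    temperature *= 0.999            # slowly cool, as annealing does

print("low-energy state:", s, "energy:", energy(s))
```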

Light-matter interactions form the basis of many important technologies, including lasers, light-emitting diodes (LEDs), and atomic clocks. However, usual computational approaches for modeling such interactions have limited usefulness and capability. Now, researchers from Japan have developed a technique that overcomes these limitations.

In a study published this month in The International Journal of High Performance Computing Applications, a research team led by the University of Tsukuba describes a highly efficient method for simulating light-matter interactions at the atomic scale.

What makes these interactions so difficult to simulate? One reason is that phenomena associated with the interactions encompass many areas of physics, involving both the propagation of light waves and the dynamics of electrons and ions in matter. Another reason is that such phenomena can cover a wide range of length and time scales.
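
The Tsukuba team’s method is not reproduced here, but a bare-bones 1D finite-difference time-domain (FDTD) loop gives a feel for the light-propagation half of such simulations. The grid size and pulse parameters are arbitrary illustrative choices:

```python
import numpy as np

# Bare-bones 1D FDTD in normalized units: the classic way to propagate a
# light wave on a grid. This is not the paper's method, only the kind of
# update loop such multiscale simulations build on.

nx, nt = 400, 300
ez = np.zeros(nx)        # electric field on the main grid
hy = np.zeros(nx - 1)    # magnetic field on the staggered grid

for t in range(nt):
    hy += np.diff(ez)                           # update H from the curl of E
    ez[1:-1] += np.diff(hy)                     # update E from the curl of H
    ez[50] += np.exp(-((t - 60) / 20.0) ** 2)   # inject a Gaussian pulse

print("strongest field now near cell", int(np.argmax(np.abs(ez))))
```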

Chromium defects in silicon carbide may provide a new platform for quantum information.

Quantum computers may be able to solve science problems that are impossible for today’s fastest conventional supercomputers. Quantum sensors may be able to measure signals that cannot be measured by today’s most sensitive sensors. Quantum bits (qubits) are the building blocks for these devices. Scientists are investigating several quantum systems for quantum computing and sensing applications. One system, spin qubits, is based on controlling the orientation of an electron’s spin at defect sites in a semiconductor material. Such defects can include small amounts of an element different from the material the semiconductor is made of. Researchers recently demonstrated how to make high-quality spin qubits based on chromium defects in silicon carbide.
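
A spin qubit is, at heart, a controllable two-level quantum system. The sketch below simulates a textbook Rabi oscillation, driving a single spin from up to down; it models no chromium-specific physics, and the drive frequency is an arbitrary assumption:

```python
import numpy as np

# A single driven spin (a Rabi oscillation), the textbook picture behind any
# spin qubit. Nothing here models chromium-defect physics specifically; the
# 1 MHz drive frequency is an arbitrary assumption.

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
omega = 2 * np.pi * 1e6        # assumed Rabi frequency: 1 MHz
H = 0.5 * omega * sigma_x      # driving Hamiltonian (units where hbar = 1)

psi = np.array([1, 0], dtype=complex)  # start in spin-up
dt, steps = 1e-9, 500                  # 0.5 microseconds total: a "pi pulse"
for _ in range(steps):
    # First-order time step: psi <- (I - i*H*dt) psi, then renormalize.
    psi = psi - 1j * dt * (H @ psi)
    psi /= np.linalg.norm(psi)

print(f"P(spin-down) after {steps * dt * 1e6:.1f} us: {abs(psi[1])**2:.2f}")
```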

How to check the trends of supercomputing progress, and why this metric is as close to a pure indicator of technological progress rates as one can find. The recent flattening of this trend has revealed a flattening in technological and economic progress generally, relative to long-term trendlines.

Top500.org chart: https://top500.org/statistics/perfdevel/

#Supercomputing #EconomicGrowth #TechnologicalProgress #MooresLaw
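
One way to check the flattening yourself is to fit a log-linear trend to the Top500 performance-development data linked above and look at recent residuals. The filename and column names below are assumptions, since the site provides the chart rather than a fixed CSV schema:

```python
import numpy as np
import pandas as pd

# Fit a log-linear trend to Top500 performance data and inspect recent
# residuals. The filename and column names are assumptions: export the
# chart data yourself and adjust the names to match.

df = pd.read_csv("top500.csv")               # assumed export of the chart data
years = df["year"].to_numpy(dtype=float)     # assumed column name
log_perf = np.log10(df["rmax_gflops"].to_numpy(dtype=float))  # assumed column name

# Exponential growth is a straight line in log space, so fit a line.
slope, intercept = np.polyfit(years, log_perf, 1)
print(f"long-term trend: ~{10 ** slope:.2f}x per year")

# Points falling below the fitted line indicate performance lagging the trend.
residuals = log_perf - (slope * years + intercept)
for y, r in zip(years[-5:], residuals[-5:]):
    print(f"{int(y)}: {r:+.2f} log10 units vs trend")
```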

For many years, a bottleneck in technological development has been how to get processors and memories to work faster together. Now, researchers at Lund University in Sweden have presented a new solution integrating a memory cell with a processor, which enables much faster calculations, as they happen in the memory circuit itself.

In an article in Nature Electronics, the researchers present a new configuration in which a memory cell is integrated with a vertical transistor selector, all at the nanoscale. This brings improvements in scalability, speed and energy efficiency compared with current mass storage solutions.

The fundamental issue is that anything requiring large amounts of data to be processed, such as AI and machine learning, requires more speed and capacity. For this to be successful, the memory and processor need to be as close to each other as possible. In addition, it must be possible to run the calculations in an energy-efficient manner, not least because current technology generates high temperatures under heavy loads.
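
A roofline-style back-of-envelope shows why the distance between memory and processor dominates: when an operation performs few arithmetic steps per byte moved, memory bandwidth rather than compute caps performance. The hardware numbers here are illustrative assumptions, not measurements of any particular chip:

```python
# Roofline-style back-of-envelope: attainable performance is the minimum of
# the compute peak and (memory bandwidth x arithmetic intensity). The
# hardware numbers are illustrative assumptions, not any specific chip.

PEAK_FLOPS = 10e12       # assumed 10 TFLOP/s compute peak
MEM_BW = 100e9           # assumed 100 GB/s to off-chip memory

def attainable(flops_per_byte):
    """Performance cap set by whichever resource runs out first."""
    return min(PEAK_FLOPS, MEM_BW * flops_per_byte)

# Streaming operations (e.g. adding two large vectors) perform roughly one
# flop per 12 bytes moved, so distant memory leaves compute mostly idle.
for intensity in (1 / 12, 1.0, 100.0):
    print(f"{intensity:7.3f} flop/byte -> {attainable(intensity) / PEAK_FLOPS:.2%} of peak")
```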