Archive for the ‘supercomputing’ category: Page 4

Apr 18, 2022

Tachyum Prodigy Processor — Small can be Amazing!

Posted in categories: robotics/AI, supercomputing

The world’s first universal processor. See the benefits of the fastest-running processor for hyperscale data centers, supercomputers, and AI.

Apr 12, 2022

How to build brain-inspired neural networks based on light

Posted in categories: biotech/medical, robotics/AI, supercomputing

Supercomputers are extremely fast, but they also use a lot of power. Neuromorphic computing, which takes our brain as a model to build fast and energy-efficient computers, can offer a viable and much-needed alternative. The technology has a wealth of opportunities, for example in autonomous driving, interpreting medical images, edge AI, or long-haul optical communications. Electrical engineer Patty Stabile is a pioneer when it comes to exploring new brain- and biology-inspired computing paradigms. “TU/e combines all it takes to demonstrate the possibilities of photon-based neuromorphic computing for AI applications.”

Patty Stabile, an associate professor in the department of Electrical Engineering, was among the first to enter the emerging field of photonic neuromorphic computing.

“I had been working on a proposal to build photonic digital artificial neurons when in 2017 researchers from MIT published an article describing how they developed a small chip for carrying out the same algebraic operations, but in an analog way. That is when I realized that synapses based on analog technology were the way to go for running artificial intelligence, and I have been hooked on the subject ever since.”
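The analog operations Stabile refers to are essentially weighted sums: a photonic mesh multiplies an input vector by a weight matrix in a single pass of light. A minimal digital sketch of that operation (illustrative only — the actual chips compute this with interferometers, and the values below are made up):

```python
def mac(weights, inputs):
    """Multiply-accumulate: the weighted sum an analog photonic
    mesh computes in one pass of light through its mesh of couplers."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

# A 2x3 weight matrix applied to a 3-element input vector.
weights = [[0.5, -1.0, 2.0],
           [1.0,  0.0, 0.5]]
inputs = [1.0, 2.0, 3.0]
print(mac(weights, inputs))  # -> [4.5, 2.5]
```

In the analog version this sum happens at the speed of light propagation, with no clocked digital logic in the loop.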

Apr 10, 2022

Swiss researchers make spin ice supercomputing breakthrough

Posted in categories: energy, supercomputing

The smallest artificial spin ice ever created could be part of novel low-power HPC.

Apr 7, 2022

Scientists Just Broke The Record For Calculating Pi, And Infinity Never Felt So Close

Posted in categories: mathematics, supercomputing

Circa 2021

Swiss researchers said Monday they had calculated the mathematical constant pi to a new world-record level of exactitude, hitting 62.8 trillion figures using a supercomputer.

“The calculation took 108 days and nine hours” using a supercomputer, the Graubuenden University of Applied Sciences said in a statement.
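Record attempts like this one typically rely on the Chudnovsky series, each term of which contributes roughly 14 new digits. A toy standard-library Python version of that series is sketched below — fine for thousands of digits, though an actual record run needs far more sophisticated big-number arithmetic and checkpointing:

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    """Compute pi to roughly `digits` digits via the Chudnovsky series.
    Each term of the series adds about 14 digits of precision."""
    getcontext().prec = digits + 10
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, S = 1, 13591409, 1, Decimal(13591409)
    K = 6
    for i in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3   # exact integer recurrence for the factorial ratio
        L += 545140134
        X *= -262537412640768000          # -640320**3
        S += Decimal(M * L) / X
        K += 12
    return C / S

print(str(chudnovsky_pi(50))[:52])
```

At 62.8 trillion digits, the record computation evaluated on the order of trillions of such terms, which is why it needed 108 days on a supercomputer.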

Apr 2, 2022

How China Made An Exascale Supercomputer Out Of Old 14 Nanometer Tech

Posted in categories: robotics/AI, supercomputing

If you need any proof that it doesn’t take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway “OceanLight” system housed at the National Supercomputing Center in Wuxi, China.

Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which describes running a pretrained machine learning model called BaGuaLu across more than 37 million cores with 14.5 trillion parameters (presumably in FP32 single precision), with the capability to scale to 174 trillion parameters — approaching what is called “brain-scale,” where the parameter count nears the number of synapses in the human brain. But, as it turns out, some of these architectural details were hinted at in three of the six nominations for the Gordon Bell Prize last fall, which we covered here. To our chagrin and embarrassment, we did not dive into the details of the architecture at the time (we had not seen that they had been revealed), and the BaGuaLu paper gives us a chance to circle back.
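The “brain-scale” comparison is easy to sanity-check with back-of-the-envelope arithmetic. Assuming 4-byte FP32 parameters (as the paper’s precision suggests) and the common estimate of roughly 100 trillion synapses in the human brain:

```python
params_run     = 14.5e12   # parameters actually trained in the BaGuaLu run
params_capable = 174e12    # parameters the system can reportedly scale to
synapses_brain = 100e12    # common estimate of human-brain synapse count

bytes_per_param = 4                        # FP32 single precision
tb = lambda n: n * bytes_per_param / 1e12  # parameters -> terabytes

print(f"14.5T params at FP32: {tb(params_run):.0f} TB")   # 58 TB
print(f"174T params at FP32:  {tb(params_capable):.0f} TB")  # 696 TB
print(f"174T params vs brain synapses: {params_capable / synapses_brain:.2f}x")
```

Just holding 174 trillion FP32 weights takes on the order of 700 TB, which is why models at this scale must be sharded across tens of millions of cores.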

Before this slew of papers was announced with details on the new Sunway many-core processor, we did take a stab at figuring out how the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC) might build an exascale system, scaling up from the SW26010 processor used in the Sunway “TaihuLight” machine that took the world by storm back in June 2016. The 260-core SW26010 processor was etched by Chinese foundry Semiconductor Manufacturing International Corporation using 28 nanometer processes – not exactly cutting edge. And the SW26010-Pro processor, etched using 14 nanometer processes, is not on an advanced node either, but China is perfectly happy to burn a lot of coal to power and cool the OceanLight kicker system based on it. (OceanLight is also known as the Sunway exascale system or the New Generation Sunway supercomputer.)

Mar 31, 2022

DeepMind Mafia, DishBrain, PRIME, ZooKeeper AI, Instant NeRF

Posted in categories: biological, climatology, robotics/AI, supercomputing

Our 91st episode with a summary and discussion of last week’s big AI news!

Mar 21, 2022

Cluster Your Pi Zeros In Style With 3D Printed Cray-1

Posted in categories: energy, supercomputing

From a performance standpoint we know building a homebrew Raspberry Pi cluster doesn’t make a lot of sense, as even a fairly run-of-the-mill desktop x86 machine is sure to run circles around it. That said, there’s an argument to be made that rigging up a dozen little Linux boards gives you a compact and affordable playground to experiment with things like parallel computing and load balancing. Is it a perfect argument? Not really. But if you’re anything like us, the whole thing starts making a lot more sense when you realize your cluster of Pi Zeros can be built to look like the iconic Cray-1 supercomputer.
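As a stand-in for the kind of parallel-computing experiment such a cluster invites, here is a minimal work-splitting sketch using Python’s multiprocessing. On a real Pi cluster you would ship the chunks over the network (for example with MPI) rather than to local processes, but the divide-and-combine idea is the same:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Worker: sum the squares over a half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    # Split [0, n) into roughly equal chunks, one per worker.
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Closed-form check: the sum of squares below n is n(n-1)(2n-1)/6.
    n = 1_000_000
    print(parallel_sum_squares(n) == n * (n - 1) * (2 * n - 1) // 6)
```

Equal-sized chunks are the simplest load-balancing strategy; on heterogeneous nodes you would hand out smaller chunks dynamically instead.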

Mar 21, 2022

This Insane Chinese Supercomputer Changes EVERYTHING

Posted in categories: government, robotics/AI, supercomputing

The smartest scientists in both China and the United States are working hard on creating the fastest hardware for future supercomputers in the exaflop and zettaflop performance range. Companies such as Intel, Nvidia, and AMD are continuing Moore’s Law with the help of amazing new processes from TSMC. These supercomputers are secret projects by their governments, each hoping to beat the other in the tech industry and to prepare for artificial intelligence.

00:00 A new Superpower in the making.
00:46 A Brain-Scale Supercomputer?
02:47 China Tech vs USA Tech.
05:30 Chinese Semiconductor Technology.
07:39 Last Words.

#china #computing #usa

Mar 12, 2022

Faster analog computer could be based on mathematics of complex systems

Posted in categories: mathematics, quantum physics, supercomputing

Researchers have proposed a novel principle for a unique kind of computer that would use analog technology in place of digital or quantum components.

The unique device would be able to carry out complex computations extremely quickly—possibly even faster than today’s supercomputers, and at vastly less cost than any existing quantum computers.

The principle is used to overcome the barriers in optimization problems (choosing the best option from a large number of possibilities), such as Google searches, which aim to find the optimal results matching the search request.
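The proposal belongs to a family of approaches that recast optimization as the time evolution of an analog physical system: the machine’s state flows downhill on a cost landscape and settles in a minimum. A toy digital illustration using Euler integration of gradient flow on a one-dimensional cost function (the cost function and step size here are illustrative assumptions, not taken from the paper):

```python
def gradient_flow(grad, x0, dt=0.01, steps=2000):
    """Integrate dx/dt = -grad(x): the state relaxes toward a minimum,
    the way an analog machine's physical state would settle."""
    x = x0
    for _ in range(steps):
        x -= dt * grad(x)
    return x

# Cost f(x) = (x - 3)^2 has its minimum at x = 3; grad f = 2(x - 3).
x_min = gradient_flow(lambda x: 2 * (x - 3), x0=-5.0)
print(round(x_min, 4))  # settles near 3.0
```

The appeal of the analog version is that the “integration” is just physics happening continuously, with no clock and no discretization error from a step size.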

Mar 12, 2022

Synthetic synapses get more like a real brain

Posted in categories: biological, chemistry, food, nanotechnology, robotics/AI, supercomputing

The human brain, fed on just the calorie input of a modest diet, easily outperforms state-of-the-art supercomputers powered by full-scale power-station energy inputs. The difference stems from the multiple states of brain processes versus the two binary states of digital processors, as well as the brain’s ability to store information without power consumption—non-volatile memory. These inefficiencies in today’s conventional computers have prompted great interest in developing synthetic synapses for use in computers that can mimic the way the brain works. Now, researchers at King’s College London, UK, report in the ACS journal Nano Letters an array of nanorod devices that mimic the brain more closely than ever before. The devices may find applications in artificial neural networks.

Efforts to emulate biological synapses have revolved around types of memristors with different resistance states that act like memory. However, the devices reported so far have all needed a reverse polarity to reset them to the initial state. “In the brain, a change in the chemical environment changes the output,” explains Anatoly Zayats, a professor at King’s College London who led the team behind the recent results. The King’s College London researchers have now been able to demonstrate this brain-like behavior in their synthetic synapses as well.

Zayats and team built an array of gold nanorods, each topped with a polymer junction (poly-L-histidine, PLH) to a metal contact. Either light or an electrical voltage can excite plasmons—collective oscillations of electrons. The plasmons release hot electrons into the PLH, gradually changing the chemistry of the polymer and hence giving it different levels of conductivity or light emissivity. How the polymer changes depends on whether oxygen or hydrogen surrounds it. A chemically inert nitrogen environment preserves the state without any energy input, so the device acts as non-volatile memory.
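The behavior described — conductance that steps up or down under same-polarity pulses depending on the surrounding gas, and holds its state in inert nitrogen — can be caricatured in a few lines. This toy model is an illustrative assumption of ours, not the team’s device physics:

```python
class NanorodSynapse:
    """Toy model of a plasmonic nanorod synapse: pulses of the same
    polarity potentiate or depress depending on the gas environment,
    and the state persists with no power applied (non-volatile)."""

    def __init__(self):
        self.conductance = 0.5   # arbitrary units in [0, 1]
        self.environment = "nitrogen"

    def pulse(self, n=1):
        # Hydrogen drives the polymer toward higher conductance,
        # oxygen toward lower; inert nitrogen freezes the state.
        delta = {"hydrogen": 0.05, "oxygen": -0.05, "nitrogen": 0.0}
        for _ in range(n):
            self.conductance = min(1.0, max(0.0,
                self.conductance + delta[self.environment]))

s = NanorodSynapse()
s.environment = "hydrogen"; s.pulse(4)   # potentiate: 0.5 -> ~0.7
s.environment = "nitrogen"; s.pulse(10)  # state held: non-volatile memory
s.environment = "oxygen";   s.pulse(2)   # depress without reversing polarity
print(round(s.conductance, 2))
```

The key contrast with a conventional memristor is in the last step: the state is lowered by changing the chemical environment, not by reversing the electrical polarity.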

Page 4 of 57