Archive for the ‘supercomputing’ category

Mar 17, 2024

New Insights on How Galaxies are Formed

Posted by in categories: cosmology, education, space travel, supercomputing

Astronomers can use supercomputers to simulate the formation of galaxies from the Big Bang 13.8 billion years ago to the present day. But there are a number of sources of error. An international research team, led by researchers in Lund, has spent a hundred million computer hours over eight years trying to correct these.

The last decade has seen major advances in computer simulations that can realistically calculate how galaxies form. These cosmological simulations are crucial to our understanding of where galaxies, stars and planets come from. However, the predictions from such models are affected by limitations in the resolution of the simulations, as well as assumptions about a number of factors, such as how stars live and die and the evolution of the interstellar medium.

To minimise the sources of error and produce more accurate simulations, 160 researchers from 60 higher education institutions – led by Santi Roca-Fàbrega at Lund University, Ji-hoon Kim at Seoul National University and Joel R. Primack at the University of California – have collaborated and now present the results of the largest-ever comparison of such simulations.

Mar 15, 2024

Google Just Turned the RPi into a Supercomputer… — YouTube

Posted by in categories: robotics/AI, supercomputing

Coral.ai @raspberrypi
Raspberry Pi 4 👉 https://amzn.to/3SBCRW0
Coral AI USB Accelerator 👉 https://amzn.to/3SBGrzM
Raspberry Pi Camera V3 Module 👉 https…

Mar 15, 2024

How a quantum technique highlights math’s mysterious link to physics

Posted by in categories: mathematics, quantum physics, supercomputing

Everybody involved has long known that some math problems are too hard to solve (at least without unlimited time), but a proposed solution could be rather easily verified. Suppose someone claims to have the answer to such a very hard problem. Their proof is much too long to check line by line. Can you verify the answer merely by asking that person (the “prover”) some questions? Sometimes, yes. But for very complicated proofs, probably not. If there are two provers, though, both in possession of the proof, asking each of them some questions might allow you to verify that the proof is correct (at least with very high probability). There’s a catch, though — the provers must be kept separate, so they can’t communicate and therefore collude on how to answer your questions. (This approach is called MIP, for multiprover interactive proof.)
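The cross-checking idea behind MIP can be sketched in a few lines. This is a toy illustration only, not a real MIP protocol, and every name in it is an assumption for the example: two isolated "provers" each hold a copy of a long proof, and the verifier asks both the same random questions and demands agreement.

```python
import random

# Toy illustration of the multiprover idea: two isolated "provers" each hold
# a copy of a proof string far too long to read line by line. The verifier
# spot-checks random positions and compares the provers' answers; because
# the provers cannot communicate, sustained agreement on random questions
# is evidence that they really share one consistent proof.

PROOF = "a-very-long-proof-string-" * 40  # stand-in for an enormous proof

def make_prover(proof: str):
    """Each prover answers questions from its own private copy of the proof."""
    return lambda position: proof[position]

prover_a = make_prover(PROOF)
prover_b = make_prover(PROOF)

def cross_check(pa, pb, rounds: int = 200) -> bool:
    """Verifier: ask both provers the same random questions, demand agreement."""
    for _ in range(rounds):
        pos = random.randrange(len(PROOF))
        if pa(pos) != pb(pos):
            return False  # inconsistency caught: reject the proof claim
    return True

print(cross_check(prover_a, prover_b))  # True when both hold the same proof
```

If the provers could communicate after seeing the questions, they could coordinate their answers and defeat this spot-check, which is why keeping them separate is essential.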

Verifying a proof without actually seeing it is not that strange a concept. Many examples exist for how a prover can convince you that they know the answer to a problem without actually telling you the answer. A standard method for coding secret messages, for example, relies on using a very large number (perhaps hundreds of digits long) to encode the message. It can be decoded only by someone who knows the prime factors that, when multiplied together, produce the very large number. It’s impossible to figure out those prime numbers (within the lifetime of the universe) even with an army of supercomputers. So if someone can decode your message, they’ve proved to you that they know the primes, without needing to tell you what they are.
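The factoring scheme described above can be sketched as a toy RSA-style example. The primes here are tiny for illustration (real systems use primes hundreds of digits long), and the specific numbers are illustrative assumptions, not taken from the article.

```python
# Toy RSA-style sketch of the factoring trick: anyone can encrypt with the
# public modulus n, but decrypting requires the private exponent d, which
# can only be computed by someone who knows the prime factors p and q.
p, q = 61, 53                 # secret primes, known only to the receiver
n = p * q                     # public modulus (3233)
e = 17                        # public encryption exponent
phi = (p - 1) * (q - 1)       # computable only if you know p and q
d = pow(e, -1, phi)           # private decryption exponent (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)     # anyone can encrypt with (n, e)
decrypted = pow(ciphertext, d, n)   # only the factor-holder can decrypt

print(decrypted == message)  # True: decoding proves knowledge of the primes
```

Successfully decrypting the message thus demonstrates knowledge of the primes without ever revealing them.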

Mar 14, 2024

World’s largest computer chip WSE-3 will power massive AI supercomputer 8 times faster than the current record-holder

Posted by in categories: robotics/AI, space, supercomputing

Cerebras’ Wafer Scale Engine 3 (WSE-3) chip contains four trillion transistors and will power the 8-exaFLOP Condor Galaxy 3 supercomputer one day.

Mar 10, 2024

Unlocking the Secrets Behind Galaxy Formation

Posted by in categories: cosmology, space travel, supercomputing

Astronomers can use supercomputers to simulate the formation of galaxies from the Big Bang 13.8 billion years ago to the present day. But there are a number of sources of error. An international research team, led by researchers in Lund, has spent a hundred million computer hours over eight years trying to correct these.

The last decade has seen major advances in computer simulations that can realistically calculate how galaxies form. These cosmological simulations are crucial to our understanding of where galaxies, stars, and planets come from. However, the predictions from such models are affected by limitations in the resolution of the simulations, as well as assumptions about a number of factors, such as how stars live and die and the evolution of the interstellar medium.

Collaborative Efforts Enhance Accuracy

Mar 9, 2024

Aurora at Argonne National Laboratory in Lemont on track to be world’s fastest supercomputer

Posted by in categories: climatology, supercomputing

The Aurora supercomputer at Argonne National Laboratory in Lemont, IL, could soon be the world’s fastest. It could revolutionize climate forecasting.

LEMONT, Ill. (WLS) — This is what scientists at Argonne National Laboratory in Lemont call a node: six huge graphics processors and two large CPUs cooled with water to make major calculations a cinch.

Argonne’s new supercomputer doesn’t have just one node, or 10, or 100; it has 10,000 of them. Each rack of nodes weighs eight tons and is cooled by thousands of gallons of water.

Mar 9, 2024

D-Wave says its quantum computers can solve otherwise impossible tasks

Posted by in categories: quantum physics, supercomputing

Quantum computing firm D-Wave says its machines are the first to achieve “computational supremacy” by solving a practically useful problem that would otherwise take millions of years on an ordinary supercomputer.

By Matthew Sparkes

Feb 27, 2024

Frontiers: Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems

Posted by in categories: biotech/medical, information science, neuroscience, robotics/AI, supercomputing

And this feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has two goals: first, a scientific goal, to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); second, an engineering goal, to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications, such as the porting of deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches they use, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on the discussion of large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.

“Building a vast digital simulation of the brain could transform neuroscience and medicine and reveal new ways of making more powerful computers” (Markram et al., 2011). The human brain is by far the most computationally complex, efficient, and robust computing system operating under low-power and small-size constraints. It utilizes over 100 billion neurons and 100 trillion synapses for achieving these specifications. Even the existing supercomputing platforms are unable to demonstrate full cortex simulation in real-time with the complex detailed neuron models. For example, for mouse-scale (2.5 × 10⁶ neurons) cortical simulations, a personal computer uses 40,000 times more power but runs 9,000 times slower than a mouse brain (Eliasmith et al., 2012). The simulation of a human-scale cortical model (2 × 10¹⁰ neurons), which is the goal of the Human Brain Project, is projected to require an exascale supercomputer (10¹⁸ flops) and as much power as a quarter-million households (0.5 GW).
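The figures quoted above can be combined into a back-of-the-envelope estimate of the efficiency gap. This is a simple arithmetic sketch of the numbers already in the text, nothing more.

```python
# Back-of-the-envelope check of the figures above (Eliasmith et al., 2012):
# a PC running a mouse-scale cortical simulation uses 40,000x more power than
# a mouse brain while running 9,000x slower, so its energy per unit of
# simulated brain-time is the product of the two factors.
power_ratio = 40_000      # PC power / mouse-brain power
slowdown = 9_000          # real time elapsed / brain-time simulated
energy_gap = power_ratio * slowdown
print(f"energy per simulated second: {energy_gap:,}x the brain")  # 360,000,000x

# Human-scale target from the text: ~10^18 flops at ~0.5 GW.
exaflops = 1e18
power_watts = 0.5e9
print(f"{exaflops / power_watts:.0e} flops per watt")
```

By this rough accounting, a conventional PC spends several hundred million times more energy than biology per second of simulated cortex, which is the gap neuromorphic hardware aims to close.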

The electronics industry is seeking solutions that will enable computers to handle the enormous increase in data processing requirements. Neuromorphic computing is an alternative solution that is inspired by the computational capabilities of the brain. The observation that the brain operates on analog principles of the physics of neural computation that are fundamentally different from digital principles in traditional computing has initiated investigations in the field of neuromorphic engineering (NE) (Mead, 1989a). Silicon neurons are hybrid analog/digital very-large-scale integrated (VLSI) circuits that emulate the electrophysiological behavior of real neurons and synapses. Neural networks using silicon neurons can be emulated directly in hardware rather than being limited to simulations on a general-purpose computer. Such hardware emulations are much more energy efficient than computer simulations, and thus suitable for real-time, large-scale neural emulations.

Feb 20, 2024

I built an 8008 Supercomputer. 8 ancient 8008 vintage microprocessors computing in parallel

Posted by in category: supercomputing

I’ve done some videos lately on the 8008 CPU, widely regarded as the world’s first 8-bit programmable microprocessor. Previously I built a nice little single-board computer. In this video I connect eight of these 8008 microprocessors together, designate one as a controller, design a shared memory abstraction between them, and use them to solve a simple parallel computing problem: Conway’s Game of Life. Using my simple, straightforward assembly implementation of Conway’s, I was able to show that the seven CPUs (one controller, six workers) worked together to solve the problem significantly faster than a single processor alone. The 8008 debuted commercially in the early 1970s. It’s a physically small chip, only 18 pins, and requires a triplexed address and data bus. The clock rate is 500 KHz and the instruction set is fairly limited. Nevertheless, you can do a lot with this little CPU. For more vintage computer projects, see https://www.smbaker.com/.
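The controller/worker split described above can be sketched in a few lines. The row-band partitioning here is an illustrative assumption (the video's actual shared-memory layout is not detailed in this summary), and the workers run sequentially as stand-ins for the six 8008 CPUs.

```python
# Sketch of the controller/worker scheme described above: the Game of Life
# grid (a torus) is divided into row bands, each "worker" computes the next
# state of its band from the previous full grid, and the controller
# reassembles the bands. Row-band partitioning is an assumption for
# illustration; the video's shared-memory design may differ.

def step_band(grid, row_start, row_end):
    """One worker's job: compute the next state for rows [row_start, row_end)."""
    rows, cols = len(grid), len(grid[0])
    band = []
    for r in range(row_start, row_end):
        new_row = []
        for c in range(cols):
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            new_row.append(1 if live == 3 or (grid[r][c] and live == 2) else 0)
        band.append(new_row)
    return band

def controller_step(grid, n_workers=6):
    """Controller: hand each worker one band, then stitch the results together."""
    rows = len(grid)
    bounds = [rows * i // n_workers for i in range(n_workers + 1)]
    next_grid = []
    for w in range(n_workers):          # sequential stand-in for six worker CPUs
        next_grid.extend(step_band(grid, bounds[w], bounds[w + 1]))
    return next_grid

# A "blinker" oscillates between horizontal and vertical every generation.
g = [[0] * 6 for _ in range(6)]
g[2][1] = g[2][2] = g[2][3] = 1         # horizontal blinker
g = controller_step(g)
print([g[r][2] for r in (1, 2, 3)])     # [1, 1, 1]: the blinker turned vertical
```

Because each band can be computed independently from the previous generation's full grid, the only synchronization needed is a barrier between generations, which is what makes the problem a good fit for loosely coupled vintage CPUs.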

Feb 16, 2024

US researchers develop ‘unhackable’ computer chip that works on light

Posted by in categories: quantum physics, robotics/AI, supercomputing

Researchers at the University of Pennsylvania have developed a new computer chip that uses light instead of electricity. This could improve the training of artificial intelligence (AI) models by increasing the speed of data transfer and reducing the amount of electricity consumed.

Humanity is today building exascale supercomputers that can carry out a quintillion computations per second. While the scale of computation has increased, computing technology still works on principles first used in the 1960s.

Researchers have also been working on computing systems based on quantum mechanics, but these computers are at least a few years, if not more, from becoming widely available. The recent explosion of AI models has created demand for computers that can process large sets of information. Inefficient computing systems, though, result in high energy consumption.
