Archive for the ‘supercomputing’ category: Page 14

Mar 9, 2024

Aurora at Argonne National Laboratory in Lemont on track to be world’s fastest supercomputer

Posted by in categories: climatology, supercomputing

The Aurora supercomputer at Argonne National Laboratory in Lemont, IL, could soon be the world’s fastest. It could revolutionize climate forecasting.

LEMONT, Ill. (WLS) — This is what scientists at Argonne National Laboratory in Lemont call a node: six huge graphics processors and two large CPUs cooled with water to make major calculations a cinch.

Argonne’s new supercomputer doesn’t have just one node, or 10, or 100: it has 10,000 of them. Each rack of nodes weighs eight tons and is cooled by thousands of gallons of water.

Mar 9, 2024

D-Wave says its quantum computers can solve otherwise impossible tasks

Posted by in categories: quantum physics, supercomputing

Quantum computing firm D-Wave says its machines are the first to achieve “computational supremacy” by solving a practically useful problem that would otherwise take millions of years on an ordinary supercomputer.

By Matthew Sparkes

Feb 27, 2024

Frontiers: Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems

Posted by in categories: biotech/medical, information science, neuroscience, robotics/AI, supercomputing

And this feature distinguishes neuromorphic systems from conventional computing systems. The brain has evolved over billions of years to solve difficult engineering problems by using efficient, parallel, low-power computation. The goal of NE is to design systems capable of brain-like computation. Numerous large-scale neuromorphic projects have emerged recently. This interdisciplinary field was listed among the top 10 technology breakthroughs of 2014 by the MIT Technology Review and among the top 10 emerging technologies of 2015 by the World Economic Forum. NE has two goals: first, a scientific goal to understand the computational properties of biological neural systems by using models implemented in integrated circuits (ICs); second, an engineering goal to exploit the known properties of biological systems to design and implement efficient devices for engineering applications. Building hardware neural emulators can be extremely useful for simulating large-scale neural models to explain how intelligent behavior arises in the brain. The principal advantages of neuromorphic emulators are that they are highly energy efficient, parallel and distributed, and require a small silicon area. Thus, compared to conventional CPUs, these neuromorphic emulators are beneficial in many engineering applications, such as porting deep learning algorithms for various recognition tasks. In this review article, we describe some of the most significant neuromorphic spiking emulators, compare the different architectures and approaches used by them, illustrate their advantages and drawbacks, and highlight the capabilities that each can deliver to neural modelers. This article focuses on the discussion of large-scale emulators and is a continuation of a previous review of various neural and synapse circuits (Indiveri et al., 2011). We also explore applications where these emulators have been used and discuss some of their promising future applications.

“Building a vast digital simulation of the brain could transform neuroscience and medicine and reveal new ways of making more powerful computers” (Markram et al., 2011). The human brain is by far the most computationally complex, efficient, and robust computing system operating under low-power and small-size constraints. It utilizes over 100 billion neurons and 100 trillion synapses to achieve these specifications. Even existing supercomputing platforms are unable to demonstrate a full cortex simulation in real time with complex, detailed neuron models. For example, for mouse-scale (2.5 × 10^6 neurons) cortical simulations, a personal computer uses 40,000 times more power but runs 9,000 times slower than a mouse brain (Eliasmith et al., 2012). The simulation of a human-scale cortical model (2 × 10^10 neurons), which is the goal of the Human Brain Project, is projected to require an exascale supercomputer (10^18 flops) and as much power as a quarter-million households (0.5 GW).
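A quick back-of-the-envelope check puts these two figures together. The individual ratios are the ones quoted above (Eliasmith et al., 2012); combining them into an energy-per-simulated-second figure is our own arithmetic:

```python
# Back-of-the-envelope check of the figures quoted above (Eliasmith et al., 2012).
# The two ratios come from the text; combining them is our own arithmetic.
power_ratio = 40_000   # a PC draws 40,000x more power than a mouse brain
slowdown = 9_000       # and runs the simulation 9,000x slower than real time
energy_per_sim_second = power_ratio * slowdown
print(f"energy per simulated second: {energy_per_sim_second:.1e}x the mouse brain")
# → 3.6e+08: each simulated second costs roughly 360 million times the energy
# the mouse brain would spend living that same second.
```

This compounding of power and slowdown is what makes the efficiency gap between brains and conventional simulation so stark.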

The electronics industry is seeking solutions that will enable computers to handle the enormous increase in data processing requirements. Neuromorphic computing is an alternative solution that is inspired by the computational capabilities of the brain. The observation that the brain operates on analog principles of the physics of neural computation that are fundamentally different from digital principles in traditional computing has initiated investigations in the field of neuromorphic engineering (NE) (Mead, 1989a). Silicon neurons are hybrid analog/digital very-large-scale integrated (VLSI) circuits that emulate the electrophysiological behavior of real neurons and synapses. Neural networks using silicon neurons can be emulated directly in hardware rather than being limited to simulations on a general-purpose computer. Such hardware emulations are much more energy efficient than computer simulations, and thus suitable for real-time, large-scale neural emulations.
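The dynamics that silicon neurons emulate can be sketched with the simplest spiking abstraction, a leaky integrate-and-fire (LIF) neuron. Below is a minimal software version; all parameter values are illustrative, not taken from any particular chip:

```python
# Minimal sketch of the kind of dynamics a silicon neuron emulates:
# a leaky integrate-and-fire (LIF) neuron driven by a constant input current.
# Parameters are illustrative only.

def simulate_lif(i_in=1.5, v_th=1.0, v_reset=0.0, tau=20.0, dt=1.0, steps=200):
    """Euler-integrate dV/dt = (-V + I) / tau; emit a spike and reset at threshold."""
    v = v_reset
    spikes = []
    for t in range(steps):
        v += dt * (-v + i_in) / tau
        if v >= v_th:          # membrane potential crossed threshold
            spikes.append(t)   # record spike time
            v = v_reset        # reset membrane potential
    return spikes

spikes = simulate_lif()
print(f"{len(spikes)} spikes in 200 steps")
```

A neuromorphic chip computes this same leaky integration with analog circuit physics (a capacitor leaking charge) rather than with an explicit numerical loop, which is where the energy savings come from.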

Feb 20, 2024

I built an 8008 Supercomputer. 8 ancient 8008 vintage microprocessors computing in parallel

Posted by in category: supercomputing

I’ve done some videos lately on the 8008 CPU, widely regarded as the world’s first 8-bit programmable microprocessor. Previously I built a nice little single-board computer. In this video I connect eight of these 8008 microprocessors together, designate one as a controller, design a shared memory abstraction between them, and use them to solve a simple parallel computing problem: Conway’s Game of Life. Using my simple, straightforward assembly implementation of Conway’s, I was able to show that the seven CPUs (one controller, six workers) worked together to solve the problem significantly faster than a single processor alone. The 8008 debuted commercially in the early 1970s. It’s a physically small chip, only 18 pins, and requires a multiplexed address and data bus. The clock rate is 500 kHz and the instruction set is fairly limited. Nevertheless, you can do a lot with this little CPU. For more vintage computer projects, see https://www.smbaker.com/.
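The controller/worker arrangement described above can be sketched in a few lines of Python (not 8008 assembly, obviously). The function names, the wrap-around grid edges, and the six-way row split are illustrative assumptions, but the structure matches the video’s idea: the grid lives in shared memory, the controller assigns each worker a band of rows, and each worker computes the next generation for its band only:

```python
# Sketch of the controller/worker split: shared grid, row bands per worker.
# Names and the toroidal (wrap-around) edges are illustrative assumptions.

def neighbors(grid, r, c):
    rows, cols = len(grid), len(grid[0])
    return sum(grid[(r + dr) % rows][(c + dc) % cols]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))

def worker_step(grid, row_start, row_end):
    """One worker: compute the next state for rows [row_start, row_end)."""
    return [[1 if (n := neighbors(grid, r, c)) == 3 or (grid[r][c] and n == 2) else 0
             for c in range(len(grid[0]))]
            for r in range(row_start, row_end)]

def controller_step(grid, n_workers=6):
    """Controller: split rows among workers, then stitch the bands back together."""
    rows = len(grid)
    band = -(-rows // n_workers)  # ceiling division
    out = []
    for w in range(n_workers):
        start, end = w * band, min((w + 1) * band, rows)
        if start < end:
            out.extend(worker_step(grid, start, end))
    return out

# A glider on a 6x6 wrap-around grid:
glider = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    glider[r][c] = 1
next_gen = controller_step(glider)
print(sum(map(sum, next_gen)))  # a glider always has 5 live cells
```

Because each cell’s next state depends only on the previous generation, the bands can be computed fully in parallel; the only synchronization point is the controller collecting the results before the next step.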

Feb 16, 2024

US researchers develop ‘unhackable’ computer chip that works on light

Posted by in categories: quantum physics, robotics/AI, supercomputing

Researchers at the University of Pennsylvania have developed a new computer chip that uses light instead of electricity. This could improve the training of artificial intelligence (AI) models by increasing the speed of data transfer and reducing the amount of electricity consumed.

Humanity is today building exascale supercomputers that can carry out a quintillion computations per second. But while the scale of the computation has increased, computing technology still works on principles first used in the 1960s.

Researchers have also been working on computing systems based on quantum mechanics, but those computers are at least a few years from becoming widely available, if not more. Meanwhile, the recent explosion of AI models has created demand for computers that can process large sets of information, and inefficient computing systems translate into high energy consumption.

Feb 5, 2024

US firm plans to build 10,000 qubit quantum computer by 2026

Posted by in categories: quantum physics, supercomputing

QuEra is confident that by 2026 it will have built a commercial quantum computer that can beat today’s supercomputers with ease.

Feb 4, 2024

Researchers use supercomputer to determine whether ‘molecules of life’ can be formed naturally in right conditions

Posted by in categories: biotech/medical, education, robotics/AI, supercomputing

Basic biology textbooks will tell you that all life on Earth is built from four types of molecules: proteins, carbohydrates, lipids, and nucleic acids. And each group is vital for every living organism.

But what if humans could actually show that these “molecules of life,” such as amino acids and DNA bases, can be formed naturally in the right environment? Researchers at the University of Florida are using HiPerGator—the most powerful supercomputer in U.S. higher education—to test this idea.

HiPerGator—with its AI models and vast capacity for graphics processing units, or GPUs (specialized processors designed to accelerate graphics renderings)—is transforming the molecular research game.

Feb 3, 2024

Tiny ‘bending station’ transforms everyday materials into quantum conductors

Posted by in categories: quantum physics, supercomputing

Using this technique, researchers believe, even a non-conducting material like glass could one day be turned into a conductor.

A collaboration between scientists at the University of California, Irvine (UCI) and Los Alamos National Laboratory (LANL) has developed a method that converts everyday materials into conductors that can be used to build quantum computers, a press release said.

Computing devices that are ubiquitous today are built from silicon, a semiconductor material. Under certain conditions silicon behaves like a conductor, but it has limitations that constrain how far conventional computation can scale. The world’s fastest supercomputers are built by putting together silicon-based components, yet even they are expected to be outpaced by quantum computers.


Jan 31, 2024

A method for examining ensemble averaging forms during the transition to turbulence in HED systems for application to RANS models

Posted by in categories: engineering, physics, space, supercomputing

Simulating KH-, RT-, or RM-driven mixing using direct numerical simulations (DNS) can be prohibitively expensive because all the spatial and temporal scales have to be resolved, making approaches such as Reynolds-averaged Navier–Stokes (RANS) often the more favorable engineering option for applications like ICF. To this day, no DNS has been performed for ICF even on the largest supercomputers, as the resolution requirements are too stringent.8 However, RANS approaches also face their own challenges: RANS is based on the Reynolds decomposition of a flow where mean quantities are intended to represent an average over an ensemble of realizations, which is often replaced by a spatial average due to the scarcity of ensemble datasets. Replacing ensemble averages with spatial averages may be appropriate for flows in homogeneous, isotropic, and fully developed turbulent states, in which spatial, temporal, and ensemble averaging are often equivalent. However, most HED hydrodynamic experiments involve transitional periods in which the flow is neither homogeneous nor isotropic nor fully developed but may contain large-scale unsteady dynamics; thus, the equivalency of averaging can no longer be assumed. Yet RANS models often still need to be initialized in such states of turbulence, and knowing how and when to initialize them in a transitional state is, therefore, challenging and still poorly understood.

The goal of this paper is to develop a strategy allowing the initialization of a RANS model to describe an unsteady transitional RM-induced flow. We seek to examine how ensemble-averaged quantities evolve during the transition to turbulence based on some of the first ensemble experiments repeated under HED conditions. Our strategy involves using 3D high-resolution implicit large eddy simulations (ILES) to supplement the experiments and both initialize and validate the RANS model. We use the Besnard–Harlow–Rauenzahn (BHR) model,9–12 specifically designed to predict variable-density turbulent physics involved in flows like RM. Previous studies have considered different ways of initializing the BHR model.
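The distinction between ensemble and spatial averaging that drives this initialization problem can be shown with a toy numerical experiment. This is pure illustration, not the BHR model: the field statistics below are invented, with a Gaussian field standing in for a turbulent quantity. For a statistically homogeneous field, the spatial average of a single realization tracks the ensemble average; add a spatial mean gradient (as in a transitional flow) and the two averages diverge:

```python
# Toy illustration of ensemble vs. spatial averaging (invented statistics,
# not the BHR model): the averages agree only for a homogeneous field.
import random

random.seed(0)
N_ENSEMBLE, N_POINTS = 500, 200

def homogeneous_sample():
    """One 'realization' with identical statistics at every spatial point."""
    return [random.gauss(1.0, 0.3) for _ in range(N_POINTS)]

def inhomogeneous_sample():
    """One realization with a spatial mean gradient, like a transitional flow."""
    return [random.gauss(x / N_POINTS, 0.3) for x in range(N_POINTS)]

def spatial_avg(field):
    """Average one realization over space."""
    return sum(field) / len(field)

def ensemble_avg_at(point, sampler):
    """Average many independent realizations at one fixed spatial point."""
    return sum(sampler()[point] for _ in range(N_ENSEMBLE)) / N_ENSEMBLE

s_hom, e_hom = spatial_avg(homogeneous_sample()), ensemble_avg_at(0, homogeneous_sample)
s_inh, e_inh = spatial_avg(inhomogeneous_sample()), ensemble_avg_at(0, inhomogeneous_sample)
print(f"homogeneous:   spatial={s_hom:.2f}  ensemble={e_hom:.2f}")
print(f"inhomogeneous: spatial={s_inh:.2f}  ensemble={e_inh:.2f}")
```

In the homogeneous case both averages land near 1.0; in the inhomogeneous case the spatial average (about 0.5) says nothing about the ensemble mean at a given point, which is the situation the repeated HED experiments are designed to resolve.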

Jan 30, 2024

The Professions of the Future (1)

Posted by in categories: automation, big data, business, computing, cyborgs, disruptive technology, education, Elon Musk, employment, evolution, futurism, information science, innovation, internet, life extension, lifeboat, machine learning, posthumanism, Ray Kurzweil, robotics/AI, science, singularity, Skynet, supercomputing, transhumanism

We are witnessing a professional revolution where the boundaries between man and machine slowly fade away, giving rise to innovative collaboration.

Photo by Mateusz Kitka (Pexels)

As Artificial Intelligence (AI) continues to advance by leaps and bounds, it’s impossible to overlook the profound transformations that this technological revolution is imprinting on the professions of the future. A paradigm shift is underway, redefining not only the nature of work but also how we conceptualize collaboration between humans and machines.

As creator of the ETER9 Project (2), I perceive AI not only as a disruptive force but also as a powerful tool to shape a more efficient, innovative, and inclusive future. As we move forward in this new world, it’s crucial for each of us to contribute to building a professional environment that celebrates the interplay between humanity and technology, where the potential of AI is realized for the benefit of all.

In the ETER9 Project, dedicated to exploring the interaction between artificial intelligences and humans, I have gained unique insights into the transformative potential of AI. Reflecting on the future of professions, it’s evident that adaptability and a profound understanding of technological dynamics will be crucial to navigate this new landscape.


Page 14 of 97