Archive for the ‘supercomputing’ category: Page 4

Feb 3, 2024

Tiny ‘bending station’ transforms everyday materials into quantum conductors

Posted by in categories: quantum physics, supercomputing

Using this technique, researchers believe even a non-conducting material like glass could one day be turned into a conductor.

A collaboration between scientists at the University of California, Irvine (UCI) and Los Alamos National Laboratory (LANL) has developed a method that converts everyday materials into conductors that can be used to build quantum computers, a press release said.

Computing devices that are ubiquitous today are built of silicon, a semiconductor material. Under certain conditions, silicon behaves like a conductor, but it has limitations that cap how far its computational performance can scale. The world’s fastest supercomputers are built from silicon-based components, yet even they are expected to be slower than quantum computers for certain classes of problems.


Jan 31, 2024

A method for examining ensemble averaging forms during the transition to turbulence in HED systems for application to RANS models

Posted by in categories: engineering, physics, space, supercomputing

Simulating KH-, RT-, or RM-driven mixing using direct numerical simulations (DNS) can be prohibitively expensive because all the spatial and temporal scales have to be resolved, making approaches such as Reynolds-averaged Navier–Stokes (RANS) often the more favorable engineering option for applications like ICF. To this day, no DNS has been performed for ICF even on the largest supercomputers, as the resolution requirements are too stringent.8 However, RANS approaches also face their own challenges: RANS is based on the Reynolds decomposition of a flow, where mean quantities are intended to represent an average over an ensemble of realizations, which is often replaced by a spatial average due to the scarcity of ensemble datasets. Replacing ensemble averages with spatial averages may be appropriate for flows in homogeneous, isotropic, and fully developed turbulent states, in which spatial, temporal, and ensemble averaging are often equivalent. However, most HED hydrodynamic experiments involve transitional periods in which the flow is neither homogeneous nor isotropic nor fully developed but may contain large-scale unsteady dynamics; thus, the equivalence of the averages can no longer be assumed. Yet RANS models often still need to be initialized in such states of turbulence, and knowing how and when to initialize them in a transitional state is therefore challenging and still poorly understood.

The goal of this paper is to develop a strategy allowing the initialization of a RANS model to describe an unsteady transitional RM-induced flow. We seek to examine how ensemble-averaged quantities evolve during the transition to turbulence based on some of the first ensemble experiments repeated under HED conditions. Our strategy involves using 3D high-resolution implicit large eddy simulations (ILES) to supplement the experiments and both initialize and validate the RANS model. We use the Besnard–Harlow–Rauenzahn (BHR) model,9–12 specifically designed to predict variable-density turbulent physics involved in flows like RM. Previous studies have considered different ways of initializing the BHR model.
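The distinction between ensemble and spatial averaging that motivates this work can be illustrated with a toy sketch (the synthetic velocity field and its parameters are illustrative assumptions, not the paper's actual flow):

```python
import random

random.seed(0)

def realization(n=1000, inhomogeneous=False):
    """One synthetic 'experiment': velocity samples at n spatial points."""
    if inhomogeneous:
        # Transitional flow: the mean varies in space, so a spatial
        # average mixes mean-flow variation into the statistics.
        return [x / n + random.gauss(0, 0.1) for x in range(n)]
    # Homogeneous turbulence: statistics identical at every point.
    return [random.gauss(0, 0.1) for _ in range(n)]

def ensemble_average(realizations, i):
    """Average over many repeated experiments at one spatial point i."""
    return sum(r[i] for r in realizations) / len(realizations)

def spatial_average(r):
    """Average over space within a single experiment."""
    return sum(r) / len(r)

# Homogeneous case: the two averages agree (both near 0).
homog = [realization() for _ in range(200)]
print(ensemble_average(homog, 500), spatial_average(homog[0]))

# Inhomogeneous (transitional) case: they disagree. The ensemble
# average at point i recovers the local mean i/n (~0.1 here), while
# the spatial average of any one realization is ~0.5 regardless of i.
inhom = [realization(inhomogeneous=True) for _ in range(200)]
print(ensemble_average(inhom, 100), spatial_average(inhom[0]))
```

This is the crux of the initialization problem: a RANS model fed spatially averaged data in a transitional state inherits the wrong mean and fluctuation statistics.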

Jan 30, 2024

The Professions of the Future (1)

Posted by in categories: automation, big data, business, computing, cyborgs, disruptive technology, education, Elon Musk, employment, evolution, futurism, information science, innovation, internet, life extension, lifeboat, machine learning, posthumanism, Ray Kurzweil, robotics/AI, science, singularity, Skynet, supercomputing, transhumanism

We are witnessing a professional revolution where the boundaries between man and machine slowly fade away, giving rise to innovative collaboration.

Photo by Mateusz Kitka (Pexels)

As Artificial Intelligence (AI) continues to advance by leaps and bounds, it’s impossible to overlook the profound transformations that this technological revolution is imprinting on the professions of the future. A paradigm shift is underway, redefining not only the nature of work but also how we conceptualize collaboration between humans and machines.

As creator of the ETER9 Project (2), I perceive AI not only as a disruptive force but also as a powerful tool to shape a more efficient, innovative, and inclusive future. As we move forward in this new world, it’s crucial for each of us to contribute to building a professional environment that celebrates the interplay between humanity and technology, where the potential of AI is realized for the benefit of all.

In the ETER9 Project, dedicated to exploring the interaction between artificial intelligences and humans, I have gained unique insights into the transformative potential of AI. Reflecting on the future of professions, it’s evident that adaptability and a profound understanding of technological dynamics will be crucial to navigate this new landscape.


Jan 30, 2024

The most powerful AI processing supercomputer in the world is set to be built in Germany, and planned to become operational within a mere year. Crikey

Posted by in categories: robotics/AI, supercomputing

AI processing can take a huge amount of computing power, but by the looks of this latest joint project from the Jülich Supercomputing Center and French computing provider Eviden, power will not be in short supply.

“But can it run Crysis” is an old gag, but I’m still going to see if I can get away with it.

Jan 29, 2024

China’s first natively built supercomputer goes online — the Central Intelligent Computing Center is liquid-cooled and built for AI

Posted by in categories: robotics/AI, supercomputing

China Telecom claims it has built the country’s first supercomputer constructed entirely with Chinese-made components and technology (via ITHome). Based in Wuhan, the Central Intelligent Computing Center supercomputer is reportedly built for AI and can train large language models (LLM) with trillions of parameters. Although China has built supercomputers with domestic hardware and software before, going entirely domestic is a new milestone for the country’s tech industry.

Exact details on the Central Intelligent Computing Center are scarce. What’s clear so far: The supercomputer is purportedly made with only Chinese parts; it can train AI models with trillions of parameters; and it uses liquid cooling. It’s unclear exactly how much performance the supercomputer has. A five-exaflop figure is mentioned in ITHome’s report, but to our eyes it seems that the publication was talking about the total computational power of China Telecom’s supercomputers, and not just this one.

Jan 29, 2024

Scientists Use Supercomputer To Unravel Mysteries of Dark Matter and the Universe’s Evolution

Posted by in categories: cosmology, evolution, particle physics, supercomputing

“The memory requirements for PRIYA simulations are so big you cannot put them on anything other than a supercomputer,” Bird said.

TACC awarded Bird a Leadership Resource Allocation on the Frontera supercomputer. Additionally, analysis computations were performed using the resources of the UC Riverside High-Performance Computer Cluster.

The PRIYA simulations on Frontera are some of the largest cosmological simulations yet made, needing over 100,000 core-hours to simulate a system of 3072³ (about 29 billion) particles in a ‘box’ 120 megaparsecs on edge, or about 391 million light-years across. In total, the PRIYA simulations consumed over 600,000 node-hours on Frontera.
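The quoted scale checks out with quick arithmetic: the particle count reads as 3072³, and 120 megaparsecs converts via the standard parsec-to-light-year value (the conversion constant below is that standard figure, not from the article):

```python
# Rough arithmetic behind the PRIYA simulation scale.
LY_PER_MPC = 3.2616e6  # light-years per megaparsec (standard value)

particles = 3072 ** 3               # 3072 particles per side, cubed
box_mpc = 120                       # box edge length in megaparsecs
box_mly = box_mpc * LY_PER_MPC / 1e6  # edge length in millions of ly

print(f"{particles:,} particles")                   # ~29.0 billion
print(f"~{box_mly:.0f} million light-years on edge")  # ~391
```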

Jan 29, 2024

Tesla investing over $500 million to build Dojo supercomputer in New York

Posted by in categories: supercomputing, sustainability

Tesla is set to invest $500 million into building a Dojo supercomputer at its New York Gigafactory, as announced this week.

Jan 28, 2024

Tesla to build Dojo supercomputer at NY gigafactory, Elon Musk confirms

Posted by in categories: Elon Musk, robotics/AI, supercomputing, sustainability, transportation

Tesla is gearing up to build its next-generation Dojo supercomputer at its Gigafactory in Buffalo, New York, as part of a $500 million investment announced by the state’s governor on Friday.

The Dojo supercomputer is designed to process massive amounts of data from Tesla’s vehicles and train its artificial intelligence (AI) systems for autonomous driving and other applications. It is expected to be one of the most powerful computing clusters in the world, surpassing the current leader, NVIDIA.

Jan 25, 2024

Chemists use blockchain to simulate more than 4 billion chemical reactions essential to origins of life

Posted by in categories: blockchains, chemistry, cryptocurrencies, finance, mathematics, supercomputing

Cryptocurrency is usually “mined” through the blockchain by asking a computer to perform a complicated mathematical problem in exchange for tokens of cryptocurrency. But in research appearing in the journal Chem, a team of chemists has repurposed this process, asking computers instead to generate the largest network ever created of chemical reactions that may have given rise to prebiotic molecules on early Earth.

This work indicates that at least some primitive forms of metabolism might have emerged without the involvement of enzymes, and it shows the potential of using blockchain to solve problems outside the financial sector that would otherwise require expensive, hard-to-access supercomputers.

“At this point we can say we exhaustively looked for every possible combination of chemical reactivity that scientists believe to have been operative on primitive Earth,” says senior author Bartosz A. Grzybowski of the Korea Institute for Basic Science and the Polish Academy of Sciences.

Jan 20, 2024

Supercomputer uses machine learning to set new speed record

Posted by in categories: biotech/medical, genetics, robotics/AI, space travel, supercomputing

Give people a barrier, and at some point they are bound to smash through. Chuck Yeager broke the sound barrier in 1947. Yuri Gagarin burst into orbit for the first manned spaceflight in 1961. The Human Genome Project finished cracking the genetic code in 2003. And we can add one more barrier to humanity’s trophy case: the exascale barrier.

The exascale barrier represents the challenge of achieving exascale-level computing, which has long been considered the benchmark for high performance. To reach that level, however, a computer needs to perform a quintillion calculations per second. You can think of a quintillion as a million trillion, a billion billion, or a million million millions. Whichever you choose, it’s an incomprehensibly large number of calculations.
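The equivalences in that paragraph can be checked in a few lines; as a sense of scale, here is how long a consumer machine would need for one exascale-second of work (the 100-gigaflop laptop figure is an illustrative assumption):

```python
# One exaflop-second is a quintillion (1e18) floating-point operations.
quintillion = 10 ** 18

assert quintillion == 10**6 * 10**12   # a million trillion
assert quintillion == 10**9 * 10**9    # a billion billion
assert quintillion == (10**6) ** 3     # a million million millions

# Time for a hypothetical 100-gigaflop laptop to match one second
# of exascale computing.
laptop_flops = 100e9
seconds = quintillion / laptop_flops
print(f"{seconds / 86400:.0f} days")   # ~116 days
```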


Page 4 of 87