
How expensive and difficult does hyperscale-class AI training have to be for a maker of self-driving electric cars to take a side excursion and spend hundreds of millions of dollars creating its own AI supercomputer from scratch? And how egotistical and sure of himself would the company’s founder have to be to put together a team that could do it?

Like many questions, when you ask these precisely, they tend to answer themselves. And what is clear is that Elon Musk, founder of both SpaceX and Tesla as well as a co-founder of the OpenAI consortium, doesn’t have time – or money – to waste on science projects.

Just like the superpowers of the world underestimated the amount of computing power it would take to fully simulate a nuclear missile and its explosion, perhaps the makers of self-driving cars are coming to the realization that teaching a car to drive itself in a complex world that is always changing is going to take a lot more supercomputing. And once you reconcile yourself to that, then you start from scratch and build the right machine to do this specific job.

Today, Oak Ridge National Laboratory’s Frontier supercomputer was crowned fastest on the planet in the semiannual Top500 list. Frontier more than doubled the speed of the last titleholder, Japan’s Fugaku supercomputer, and is the first to officially clock speeds over a quintillion calculations a second—a milestone computing has pursued for 14 years.

That’s a big number. So before we go on, it’s worth putting into more human terms.

Imagine giving all 7.9 billion people on the planet a pencil and a list of simple arithmetic or multiplication problems. Now, ask everyone to solve one problem per second for four and a half years. By marshaling the math skills of the Earth’s population for nearly half a decade, you’ve now solved over a quintillion problems.
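A quick back-of-envelope check of that comparison, using the population and duration quoted above (a minimal Python sketch, nothing more):

# Rough check: 7.9 billion people, one problem per second, 4.5 years.
population = 7.9e9
seconds_per_year = 365.25 * 24 * 3600
duration_years = 4.5

problems_solved = population * duration_years * seconds_per_year
print(f"{problems_solved:.2e}")  # ~1.12e+18, just over a quintillion (1e18)

A machine like Frontier carries out roughly that many calculations every second.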

Bishop: They can still be computationally very expensive. Additionally, emulators learn from data, so they’re typically not more accurate than the data used to train them. Moreover, they may give insufficiently accurate results when presented with scenarios that are markedly different from those on which they’re trained.
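To make that caveat concrete, here is a minimal, hypothetical sketch of an emulator in Python: a cheap regression model fitted to samples of an expensive simulation. The expensive_simulation function and the polynomial fit are illustrative assumptions rather than any real workflow; the point is only that the surrogate is fast and reasonably accurate inside its training range and can fail badly outside it.

import numpy as np

# Stand-in for an expensive simulation (purely an illustrative assumption).
def expensive_simulation(x):
    return np.sin(x) + 0.1 * x**2

# Fit a cheap emulator to data generated by the simulator.
x_train = np.linspace(0.0, 5.0, 50)
y_train = expensive_simulation(x_train)
emulator = np.poly1d(np.polyfit(x_train, y_train, deg=4))

# Inside the training range the emulator tracks the simulator closely...
print(abs(emulator(2.5) - expensive_simulation(2.5)))  # small error
# ...but in a regime it never saw, its answer can be badly wrong.
print(abs(emulator(9.0) - expensive_simulation(9.0)))  # large error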

“I believe in ‘use-inspired basic research’, [like] the work of Pasteur. He was a consultant for the brewing industry. Why did this beer keep going sour? He basically founded the whole field of microbiology.” —Chris Bishop, Microsoft Research.

When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner—and the team is looking for ways to improve.

“We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC.

To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has released the dataset as open source to the computing community.
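As a hedged illustration of what such a dataset makes possible (the column names below are hypothetical and not the actual LLSC schema), one could weight GPU-hours by measured utilization to see how effectively the hardware is really being used:

import pandas as pd

# Hypothetical job-log layout; the real dataset's fields may differ.
jobs = pd.DataFrame({
    "job_id":         [101, 102, 103, 104],
    "gpus_requested": [4, 8, 2, 4],
    "gpu_util_pct":   [92.0, 31.0, 78.0, 12.0],  # mean GPU utilization per job
    "runtime_hours":  [5.0, 12.0, 1.5, 8.0],
})

# GPU-hours, weighted by how busy the GPUs actually were.
jobs["gpu_hours"] = jobs["gpus_requested"] * jobs["runtime_hours"]
jobs["busy_gpu_hours"] = jobs["gpu_hours"] * jobs["gpu_util_pct"] / 100.0

overall = jobs["busy_gpu_hours"].sum() / jobs["gpu_hours"].sum()
print(f"Cluster-wide effective GPU utilization: {overall:.1%}")

# Flag jobs that hold many GPUs but keep them mostly idle.
print(jobs[(jobs["gpus_requested"] >= 4) & (jobs["gpu_util_pct"] < 40)])

Analyses along these lines are the sort of question such job data can help answer.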

Quantum Information Science / Quantum Computing (QIS / QC) continues to make substantial progress into 2023. Commercial applications are emerging in which difficult practical problems can be solved significantly faster using QC (quantum advantage), and QC is tackling test cases (not practical problems) that would take classical computers such as supercomputers thousands of years, or that lie beyond classical computing capabilities altogether (quantum supremacy). The two terms are often used interchangeably, and claims of quantum advantage or quantum supremacy can at times be challenged by new algorithms running on classical computers.

The potential is that hybrid systems pairing quantum computers with classical computers such as supercomputers (and perhaps analog computing in the future) could operate thousands, and potentially millions, of times faster, lending more understanding to intractable challenges and problems. Imagine the possibilities and the implications for the benefit of Earth’s ecosystems and humankind, with significant impact across dozens of areas of computational science such as big data analytics, weather forecasting, aerospace and novel transportation engineering, new energy paradigms such as renewable energy, healthcare and drug discovery, omics (genomics, transcriptomics, proteomics, metabolomics), economics, AI, large-scale simulations, financial services, new materials, optimization challenges, and more.

The stakes in competitive and strategic advantage are so high that top corporations and governments are investing in and working with QIS / QC. (See my Forbes article: Government Deep Tech 2022 Top Funding Focus Explainable AI, Photonics, Quantum; the BDC Deep Tech Fund it covers invested in QC company Xanadu.) In the US, the 2018 National Quantum Initiative Act authorized USD $1.2 billion, and the related U.S. Department of Energy program is providing USD $625 million over five years for five quantum information research hubs led by national laboratories: Argonne, Brookhaven, Fermi, Lawrence Berkeley, and Oak Ridge. In August 2022, the US CHIPS and Science Act provided hundreds of millions in additional funding. Coverage includes accelerating the discovery of quantum applications, growing a diverse and domestic quantum workforce, developing critical infrastructure, and standardizing cutting-edge R&D.

The U.S. Department of Energy (DOE) has published a request for information from computer hardware and software vendors to assist in the planning, design, and commissioning of next-generation supercomputing systems.

The DOE request calls for computing systems in the 2025–2030 timeframe that are five to 10 times faster than those currently available and/or able to perform more complex applications in “data science, artificial intelligence, edge deployments at facilities, and science ecosystem problems, in addition to traditional modelling and simulation applications.”

U.S. and Slovakia-based company Tachyum has now responded with its proposal for a 20 exaFLOP system. The proposal is based on Prodigy, its flagship product, which the company describes as the world’s first “universal” processor. According to Tachyum, the chip integrates 128 64-bit compute cores running at 5.7 GHz, combining the functionality of a CPU, GPU, and TPU in a single device with a homogeneous architecture. This allows Prodigy to deliver up to 4x the performance of the highest-performing x86 processors (for cloud workloads), 3x that of the highest-performing GPU for HPC, and 6x for AI applications.
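As a rough, hypothetical sizing sketch (the per-socket rates below are assumed inputs, not published Prodigy figures), the socket count needed for a 20 exaFLOP machine scales simply with per-socket throughput:

# Hypothetical sizing: sockets needed to reach 20 exaFLOPS at an assumed
# per-socket rate. None of these rates are published Prodigy specifications.
TARGET_EXAFLOPS = 20.0

def sockets_needed(per_socket_teraflops):
    total_teraflops = TARGET_EXAFLOPS * 1e6  # 1 exaFLOPS = 1,000,000 teraFLOPS
    return total_teraflops / per_socket_teraflops

for rate in (50.0, 100.0, 200.0):  # assumed teraFLOPS per socket
    print(f"{rate:6.0f} TFLOPS/socket -> {sockets_needed(rate):,.0f} sockets")

At an assumed 100 TFLOPS per socket, for example, roughly 200,000 processors would be required before accounting for interconnect and efficiency losses.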

Graphcore’s new AI supercomputer will support AI models with 500 trillion parameters (5x that of the human brain) and compute at a speed of 10 exaflops (10x that of the human brain), at a cost of $120 million USD. A new AI-powered exoskeleton uses machine learning to help patients walk. AI detects diabetes and prediabetes using machine learning to identify ECG signals indicative of the disease. AI identifies cancerous lesions in IBD patients.

AI News Timestamps:
0:00 New AI Supercomputer To Beat Human Brain.
3:06 AI Powered Exoskeleton.
4:35 AI Predicts Diabetes.
6:55 AI Detects Cancerous Lesions For IBD.

👉 Crypto AI News: https://www.youtube.com/c/CryptoAINews/videos.

#ai #news #supercomputer

Capturing details of faraway members of our universe is an understandably complicated affair, but translating these details into the stunning space images that we see from space agencies around the world is equally difficult. It is here that supercomputers step in, helping process the massive amounts of data that are captured by terrestrial and space telescopes. On August 11, that is exactly what Australia’s upcoming supercomputer, called Setonix, helped achieve.


As its first project, Setonix processed an image of a supernova remnant (the remains of a dying star) from data sent to it by the Australian Square Kilometre Array Pathfinder (ASKAP). The latter is a terrestrial radio telescope whose 36 individual antennas work together to capture radio-frequency data about objects far away in space.

Such data contains intricate details about the object being observed. That level of detail not only increases the volume of data captured by the telescope, but also puts growing pressure on the supercomputer that must process it into a composite image.