Archive for the ‘supercomputing’ category: Page 5

Apr 1, 2024

Pushing material boundaries for better electronics

Posted by in categories: nanotechnology, robotics/AI, supercomputing

A recently tenured faculty member in MIT’s departments of Mechanical Engineering and Materials Science and Engineering, Kim has made numerous discoveries about the nanostructure of materials and is funneling them directly into the advancement of next-generation electronics.

His research aims to push electronics past the inherent limits of silicon — a material that has reliably powered transistors and most other electronic elements but is reaching a performance limit as more computing power is packed into ever smaller devices.

Today, Kim and his students at MIT are exploring materials, devices, and systems that could take over where silicon leaves off. Kim is applying his insights to design next-generation devices, including low-power, high-performance transistors and memory devices, artificial intelligence chips, ultra-high-definition micro-LED displays, and flexible electronic “skin.” Ultimately, he envisions that such beyond-silicon devices could be built into supercomputers small enough to fit in your pocket.

Apr 1, 2024

Scientists Ignited a Thermonuclear Explosion Inside a Supercomputer

Posted by in categories: space, supercomputing

Computer simulations are giving us new insight into the riotous behavior of cannibal neutron stars.

When a neutron star slurps up material from a close binary companion, the unstable thermonuclear burning of that accumulated material can produce a wild explosion that sends X-radiation bursting across the Universe.

How exactly these powerful eruptions evolve and spread across the surface of a neutron star is something of a mystery. But by trying to replicate the observed X-ray flares using simulations, scientists are learning more about their ins and outs – as well as the ultra-dense neutron stars that produce them.

Mar 31, 2024

Frontiers: The Internet comprises a decentralized global system that serves humanity’s collective effort to generate

Posted by in categories: biotech/medical, education, internet, nanotechnology, Ray Kurzweil, robotics/AI, supercomputing

…process, and store data, most of which is handled by the rapidly expanding cloud. A stable, secure, real-time system may allow for interfacing the cloud with the human brain. One promising strategy for enabling such a system, denoted here as a “human brain/cloud interface” (“B/CI”), would be based on technologies referred to here as “neuralnanorobotics.” Future neuralnanorobotics technologies are anticipated to facilitate accurate diagnoses and eventual cures for the ∼400 conditions that affect the human brain. Neuralnanorobotics may also enable a B/CI with controlled connectivity between neural activity and external data storage and processing, via the direct monitoring of the brain’s ∼86 × 10⁹ neurons and ∼2 × 10¹⁴ synapses.

Subsequent to navigating the human vasculature, three species of neuralnanorobots (endoneurobots, gliabots, and synaptobots) could traverse the blood–brain barrier (BBB), enter the brain parenchyma, ingress into individual human brain cells, and autoposition themselves at the axon initial segments of neurons (endoneurobots), within glial cells (gliabots), and in intimate proximity to synapses (synaptobots). They would then wirelessly transmit up to ∼6 × 10¹⁶ bits per second of synaptically processed and encoded human-brain electrical information via auxiliary nanorobotic fiber optics (30 cm³) with the capacity to handle up to 10¹⁸ bits/sec and provide rapid data transfer to a cloud-based supercomputer for real-time brain-state monitoring and data extraction.

A neuralnanorobotically enabled human B/CI might serve as a personalized conduit, allowing persons to obtain direct, instantaneous access to virtually any facet of cumulative human knowledge. Other anticipated applications include myriad opportunities to improve education, intelligence, entertainment, traveling, and other interactive experiences. A specialized application might be the capacity to engage in fully immersive experiential/sensory experiences, including what is referred to here as “transparent shadowing” (TS). Through TS, individuals might experience episodic segments of the lives of other willing participants (locally or remotely) to, hopefully, encourage and inspire improved understanding and tolerance among all members of the human family.
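As a rough back-of-the-envelope check on those figures (an illustrative calculation, not a claim from the paper), the quoted aggregate rate of ∼6 × 10¹⁶ bits/s across ∼2 × 10¹⁴ monitored synapses works out to a few hundred bits per second per synapse, and the proposed 10¹⁸ bits/s optical link would carry that aggregate with roughly 16× headroom:

```python
# Back-of-the-envelope check of the bandwidth figures quoted in the abstract.
# The input numbers come from the excerpt above; the per-synapse rate and the
# headroom are simple derived quantities, not additional claims from the paper.

synapses = 2e14          # ~2 x 10^14 synapses being monitored
aggregate_rate = 6e16    # ~6 x 10^16 bits/s of encoded brain activity
link_capacity = 1e18     # ~10^18 bits/s proposed fiber-optic capacity

per_synapse_rate = aggregate_rate / synapses   # bits/s per synapse
headroom = link_capacity / aggregate_rate      # link capacity vs. aggregate load

print(f"~{per_synapse_rate:.0f} bits/s per synapse")   # ~300 bits/s
print(f"~{headroom:.1f}x link headroom")               # ~16.7x
```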

“We’ll have nanobots that… connect our neocortex to a synthetic neocortex in the cloud… Our thinking will be a… biological and non-biological hybrid.”

— Ray Kurzweil, TED 2014

Mar 31, 2024

Microsoft and OpenAI Reportedly Building $100 Billion Secret Supercomputer to Train Advanced AI

Posted by in categories: robotics/AI, supercomputing

“Microsoft has demonstrated its ability to build pioneering AI infrastructure used to train and deploy the world’s leading AI models,” a Microsoft spokesperson told the site. “We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability.”

Needless to say, that’s a mammoth investment. As such, it shines an even brighter spotlight on a looming question for the still-nascent AI industry: how’s the whole thing going to pay for itself?

So far, most companies in the space — Microsoft and OpenAI included — have offered significant AI services for free, sometimes with a more advanced upsell version like OpenAI’s ChatGPT Plus.

Mar 30, 2024

Microsoft Reportedly Building ‘Stargate’ to Transport OpenAI Into the Future

Posted by in categories: robotics/AI, supercomputing

Microsoft and OpenAI might be concocting a $100 billion supercomputer to accelerate their artificial intelligence models.

Mar 29, 2024

Microsoft and OpenAI reportedly plan to build a $100 billion AI supercomputer called “Stargate”

Posted by in categories: robotics/AI, supercomputing

According to insiders, Microsoft and OpenAI are planning to build a $100 billion supercomputer called “Stargate” to massively accelerate the development of OpenAI’s AI models, The Information reports.

Microsoft and OpenAI executives are forging plans for a data center with a supercomputer made up of millions of specialized server processors to accelerate OpenAI’s AI development, according to three people who took part in confidential talks.

The project, code-named “Stargate,” could cost as much as $100 billion, according to one person who has spoken with OpenAI CEO Sam Altman about it and another who has seen some of Microsoft’s initial cost estimates.

Mar 28, 2024

Cerebras Update: The Wafer Scale Engine 3 Is A Door Opener

Posted by in categories: robotics/AI, supercomputing

Cerebras held an AI Day, and in spite of the concurrently running GTC, there wasn’t an empty seat in the house.

As we have noted, Cerebras Systems is one of the very few startups that is actually getting some serious traction in training AI, at least from a handful of clients. They just introduced the third generation of their Wafer-Scale Engine, a monster of a chip that can outperform racks of GPUs, as well as a partnership with Qualcomm, the edge-AI leader, covering custom training and go-to-market collaboration. Here are a few takeaways from the AI Day event. Lots of images from Cerebras, but they tell the story quite well! We will cover the challenges this bold startup still faces in the Conclusions at the end.

As the third generation of wafer-scale engines, the new WSE-3 and the system in which it runs, the CS-3, are an engineering marvel. While Cerebras likes to compare it to a single GPU chip, that’s really not the point, which is to simplify scaling. Why cut up a wafer of chips, package each with HBM, put the package on a board, connect to CPUs with a fabric, then tie them all back together with networking chips and cables? That’s a lot of complexity that leads to a lot of programming to distribute the workload via various forms of parallelism and then tie it all back together into a supercomputer. Cerebras thinks it has a better idea.
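To make the “lot of programming” point concrete, here is a deliberately toy NumPy sketch (not Cerebras or GPU code) of what even plain data parallelism obliges you to do on a cluster of discrete devices: shard the batch, compute per-device gradients, and all-reduce them back together. On a single wafer-scale device this orchestration layer is what Cerebras argues largely goes away.

```python
import numpy as np

# Toy illustration of data parallelism across N discrete devices (simulated
# with plain NumPy): shard the batch, compute gradients per shard, then average
# ("all-reduce") them so every device applies the same weight update.

rng = np.random.default_rng(0)
n_devices = 4
X = rng.normal(size=(64, 8))          # one global batch of inputs
y = rng.normal(size=(64, 1))          # targets
w = np.zeros((8, 1))                  # shared linear-model weights

def local_gradient(X_shard, y_shard, w):
    """Gradient of mean-squared error on one device's shard of the batch."""
    err = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ err / len(X_shard)

# 1) Shard the batch across devices.
X_shards = np.array_split(X, n_devices)
y_shards = np.array_split(y, n_devices)

# 2) Each device computes its local gradient.
grads = [local_gradient(Xs, ys, w) for Xs, ys in zip(X_shards, y_shards)]

# 3) "All-reduce": average the gradients and apply one shared update.
global_grad = np.mean(grads, axis=0)
w -= 0.01 * global_grad
```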

Mar 27, 2024

Nvidia and Cerebras highlight the crazy acceleration in processing power

Posted by in categories: robotics/AI, space, supercomputing

Hot on the heels of two massive announcements last year, Nvidia and Cerebras showed again last week that the pace of computing is still accelerating.

The first CS-2-based Condor Galaxy AI supercomputers went online in late 2023, and already Cerebras is unveiling its successor, the CS-3, based on the newly launched Wafer Scale Engine 3, an update to the WSE-2 that uses 5 nm fabrication and boasts a staggering 900,000 AI-optimized cores with sparse-compute support. The CS-3 incorporates Qualcomm AI 100 Ultra processors to speed up inference.


Mar 24, 2024

Cerebras Systems Unveils World’s Fastest AI Chip with Whopping 4 Trillion Transistors

Posted by in categories: robotics/AI, supercomputing

Third Generation 5 nm Wafer Scale Engine (WSE-3) Powers Industry’s Most Scalable AI Supercomputers, Up To 256 exaFLOPs via 2048 Nodes.

SUNNYVALE, CALIFORNIA – March 13, 2024 – Cerebras Systems, the pioneer in accelerating generative AI, has doubled down on its existing world record for the fastest AI chip with the introduction of the Wafer Scale Engine 3. The WSE-3 delivers twice the performance of the previous record-holder, the Cerebras WSE-2, at the same power draw and for the same price. Purpose-built for training the industry’s largest AI models, the 5 nm-based, 4-trillion-transistor WSE-3 powers the Cerebras CS-3 AI supercomputer, delivering 125 petaflops of peak AI performance through 900,000 AI-optimized compute cores.
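The headline cluster figure follows directly from the per-system spec: 2,048 CS-3 systems at 125 petaflops each gives the 256 exaFLOPs quoted in the title (a quick sanity check on the quoted numbers, not an additional spec):

```python
# Sanity check: does the quoted cluster peak follow from the per-system spec?
petaflops_per_cs3 = 125          # peak AI performance of one CS-3
nodes = 2048                     # maximum cluster size quoted by Cerebras

cluster_exaflops = petaflops_per_cs3 * nodes / 1000   # 1 exaFLOP = 1,000 petaFLOPs
print(cluster_exaflops)          # 256.0 exaFLOPs, matching the headline figure
```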

Key Specs:

Mar 22, 2024

Supercomputer simulations of super-diamond suggest a path to its creation

Posted by in categories: particle physics, space, supercomputing

Diamond is the strongest material known. However, another form of carbon has been predicted to be even tougher than diamond. The challenge is how to create it on Earth.

The eight-atom body-centered cubic (BC8) crystal is a distinct carbon phase: not diamond, but very similar. BC8 is predicted to be a stronger material, exhibiting a 30% greater resistance to compression than diamond. It is believed to be found in the center of carbon-rich exoplanets. If BC8 could be recovered under ambient conditions, it could be classified as a super-diamond.

This crystalline high-pressure phase of carbon is theoretically predicted to be the most stable phase of carbon under pressures surpassing 10 million atmospheres.
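For a sense of scale, 10 million atmospheres is roughly one terapascal; a quick unit conversion (illustrative only, using the standard atmosphere):

```python
# Convert the quoted stability threshold into SI units (simple unit conversion).
ATM_TO_PA = 101325            # pascals per standard atmosphere
pressure_atm = 10e6           # "10 million atmospheres" from the text

pressure_gpa = pressure_atm * ATM_TO_PA / 1e9
print(f"{pressure_gpa:.0f} GPa")   # ~1013 GPa, i.e. about one terapascal
```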
