Archive for the ‘supercomputing’ category: Page 35

Dec 6, 2022

Fractal parallel computing, a geometry-inspired productivity booster

Posted by in categories: robotics/AI, supercomputing

When we make a purchase with a shopping app, we may quickly browse the recommendation list and concede that the machine does seem to know us, or at least is learning to. As an effective emerging technology, machine learning (ML) has become nearly pervasive, with applications ranging from everyday consumer apps to supercomputing.

Dedicated ML computers are thus being developed at various scales, but their productivity is somewhat limited: the workload and development cost are largely concentrated in their software stacks, which need to be developed or reworked on an ad hoc basis to support every scaled model.

To solve the problem, researchers from the Chinese Academy of Sciences (CAS) proposed a fractal parallel computing model and published their research in Intelligent Computing on Sept. 5.

Dec 5, 2022

Elon Musk JUST REVEALED His NEW Secret Weapon That Will CHANGE EVERYTHING!

Posted by in categories: Elon Musk, internet, space, supercomputing, sustainability

https://www.youtube.com/watch?v=YO12_V3AE3M

Elon Musk JUST REVEALED the powerful Dojo supercomputer that tripped the power grid!

Dec 1, 2022

[ML News] GPT-4 Rumors | AI Mind Reading | Neuron Interaction Solved | AI Theorem Proving

Posted by in categories: media & arts, robotics/AI, supercomputing

Your weekly news from the AI & Machine Learning world.

OUTLINE:
0:00 — Introduction.
0:25 — AI reads brain signals to predict what you’re thinking.
3:00 — Closed-form solution for neuron interactions.
4:15 — GPT-4 rumors.
6:50 — Cerebras supercomputer.
7:45 — Meta releases metagenomics atlas.
9:15 — AI advances in theorem proving.
10:40 — Better diffusion models with expert denoisers.
12:00 — BLOOMZ & mT0
13:05 — ICLR reviewers going mad.
21:40 — Scaling Transformer inference.
22:10 — Infinite nature flythrough generation.
23:55 — Blazing fast denoising.
24:45 — Large-scale AI training with MultiRay.
25:30 — arXiv to include Hugging Face spaces.
26:10 — Multilingual Diffusion.
26:30 — Music source separation.
26:50 — Multilingual CLIP
27:20 — Drug response prediction.
27:50 — Helpful Things.

Nov 30, 2022

NASA uses a climate simulation supercomputer to better understand black hole jets

Posted by in categories: climatology, cosmology, evolution, particle physics, supercomputing

NASA’s Discover supercomputer simulated the extreme conditions of the distant cosmos.

A team of scientists from NASA’s Goddard Space Flight Center used the U.S. space agency’s Center for Climate Simulation (NCCS) Discover supercomputer to run 100 simulations of jets emerging from supermassive black holes.

Nov 28, 2022

Researchers publish 31,618 molecules with potential for energy storage in batteries

Posted by in categories: chemistry, information science, robotics/AI, supercomputing

Scientists from the Dutch Institute for Fundamental Energy Research (DIFFER) have created a database of 31,618 molecules that could potentially be used in future redox-flow batteries. These batteries hold great promise for energy storage. Among other things, the researchers used artificial intelligence and supercomputers to identify the molecules’ properties. Today, they publish their findings in the journal Scientific Data.

In recent years, chemists have designed hundreds of molecules that could potentially be useful in flow batteries for energy storage. It would be wonderful, researchers from DIFFER in Eindhoven (the Netherlands) imagined, if the properties of these molecules were quickly and easily accessible in a database. The problem, however, is that for many molecules the properties are not known. Examples of molecular properties are redox potential and water solubility. Those are important since they are related to the power generation capability and energy density of redox flow batteries.
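
Those two properties map onto battery metrics in a simple back-of-envelope way: the difference between the redox potentials of the two electrolytes sets the cell voltage, and the water solubility caps how much active material, and therefore charge, a liter of electrolyte can carry. The sketch below is a hypothetical illustration with made-up values, not numbers from the DIFFER database.

```python
# Back-of-envelope link between redox potential, solubility, and energy density.
# All values are hypothetical placeholders, not entries from the DIFFER database.

F = 96485  # Faraday constant, C per mol of electrons

def electrolyte_energy_density_wh_per_l(e_cathode_v, e_anode_v,
                                         solubility_mol_per_l, n_electrons=1):
    """Approximate energy stored per liter of electrolyte (Wh/L).

    The cell voltage is the difference of the two redox potentials; the charge
    a liter can hold is set by the active species' solubility and the number of
    electrons transferred. Crude upper bound: it ignores that a full cell needs
    two electrolyte tanks.
    """
    cell_voltage = e_cathode_v - e_anode_v                      # V
    charge_per_liter = n_electrons * F * solubility_mol_per_l   # C/L
    return charge_per_liter * cell_voltage / 3600.0             # J/L -> Wh/L

# Example: a 1.2 V couple, 1.5 mol/L solubility, one electron transferred
print(electrolyte_energy_density_wh_per_l(0.7, -0.5, 1.5))  # ~48 Wh/L
```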

To find out the still-unknown properties of the molecules, the researchers performed four steps. First, they used a supercomputer and smart algorithms to create thousands of virtual variants of two types of molecules. These molecule families, the quinones and the aza-aromatics, are good at reversibly accepting and donating electrons, which is important for batteries. The researchers fed the computer the backbone structures of 24 quinones and 28 aza-aromatics, plus five different chemically relevant side groups. From those, the computer created 31,618 different molecules.
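
The scale of that enumeration is easy to appreciate with a toy sketch: assigning one of a handful of side groups (or none) to each substitution site on each backbone quickly multiplies into tens of thousands of candidates. The snippet below is a hypothetical illustration, not the actual DIFFER workflow; the side-group labels and the number of substitution sites are made up.

```python
# Minimal sketch of combinatorial enumeration of molecule variants.
# Backbones and side groups are hypothetical labels; the real DIFFER pipeline
# works on actual chemical structures and filters out invalid combinations.
from itertools import product

backbones = [f"quinone_{i}" for i in range(24)] + [f"aza_aromatic_{i}" for i in range(28)]
# Five hypothetical side groups plus "-H" for an unsubstituted site.
side_groups = ["-H", "-OH", "-NH2", "-COOH", "-SO3H", "-PO3H2"]

def enumerate_variants(backbone, n_sites):
    """Yield every assignment of a side group to each substitution site."""
    for combo in product(side_groups, repeat=n_sites):
        yield (backbone, combo)

# With, say, 3 substitution sites per backbone, the library already reaches
# 52 * 6**3 = 11,232 candidates before any deduplication or filtering.
library = [v for b in backbones for v in enumerate_variants(b, n_sites=3)]
print(len(library))  # 11232
```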

Nov 28, 2022

Fuel Ignition and Bottle Bubbles Snag Video Prize

Posted by in categories: energy, supercomputing

An annual APS video prize went to supercomputer simulations, control of chaotic Faraday waves, and studies of a large bubble in a bottle.

Nov 23, 2022

An optical chip that can train machine learning hardware

Posted by in categories: robotics/AI, supercomputing

A multi-institution research team has developed an optical chip that can train machine learning hardware. Their research is published today in Optica.

Machine learning applications have skyrocketed to $165 billion annually, according to a recent report from McKinsey. But before a machine can perform intelligent tasks, such as recognizing the details of an image, it must be trained. Training modern-day artificial intelligence (AI) systems like Tesla's Autopilot costs several million dollars in electric power consumption and requires supercomputer-like infrastructure.

This surging AI “appetite” leaves an ever-widening gap between computer hardware and demand for AI. Photonic integrated circuits, or simply optical chips, have emerged as a possible solution to deliver higher computing performance, as measured in tera-operations per second per watt (TOPS/W). However, though they’ve demonstrated improved core operations for machine-intelligence tasks such as data classification, photonic chips have yet to improve the actual front-end learning and training process.
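
As a rough illustration of how that figure of merit works (the numbers below are made up, not the specs of any chip discussed here):

```python
# TOPS/W: tera-operations per second delivered per watt of power drawn.
# Illustrative placeholder numbers only; not measurements of any real chip.

def tops_per_watt(ops_per_second: float, power_watts: float) -> float:
    """Energy efficiency in tera-operations per second per watt."""
    return (ops_per_second / 1e12) / power_watts

# A hypothetical accelerator sustaining 400 trillion operations per second at 300 W:
print(tops_per_watt(400e12, 300))  # ~1.33 TOPS/W
```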

Nov 23, 2022

Artificial Intelligence & Robotics Tech News For October 2022

Posted by in categories: cyborgs, drones, Elon Musk, information science, quantum physics, robotics/AI, supercomputing, transhumanism, virtual reality

https://www.youtube.com/watch?v=QrXnYHubFPc

AI News Timestamps:
0:00 New AI Robot Dog Beats Human Soccer Skills.
2:34 Breakthrough Humanoid Robotics & AI Tech.
5:21 Google AI Makes HD Video From Text.
8:41 New OpenAI DALL-E Robotics.
11:31 Elon Musk Reveals Tesla Optimus AI Robot.
16:49 Machine Learning Driven Exoskeleton.
19:33 Google AI Makes Video Game Objects From Text.
22:12 Breakthrough Tesla AI Supercomputer.
25:32 Underwater Drone Humanoid Robot.
29:19 Breakthrough Google AI Edits Images With Text.
31:43 New Deep Learning Tech With Light Waves.
34:50 Nvidia General Robot Manipulation AI
36:31 Quantum Computer Breakthrough.
38:00 In-Vitro Neural Network Plays Video Games.
39:56 Google DeepMind AI Discovers New Matrices Algorithms.
45:07 New Meta Text To Video AI
48:00 Bionic Tech Feels In Virtual Reality.
53:06 Quantum Physics AI
56:40 Soft Robotics Gripper Learns.
58:13 New Google NLP Powered Robotics.
59:48 Ionic Chips For AI Neural Networks.
1:02:43 Machine Learning Interprets Brain Waves & Reads Mind.

Nov 22, 2022

This AI Supercomputer Has 13.5 Million Cores—and Was Built in Just Three Days

Posted by in categories: robotics/AI, supercomputing

At the time, all this was theoretical. But last week, Cerebras announced it had linked 16 of its CS-2 systems together into a world-class AI supercomputer.

Meet Andromeda

The new machine, called Andromeda, has 13.5 million cores capable of speeds over an exaflop (one quintillion operations per second) at 16-bit half precision. Because of the unique chip at its core, Andromeda isn’t easily compared to supercomputers running on more traditional CPUs and GPUs, but Cerebras CEO Andrew Feldman told HPCwire that Andromeda is roughly equivalent to Argonne National Laboratory’s Polaris supercomputer, which ranks 17th fastest in the world, according to the latest Top500 list.
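
For a sense of scale, dividing the quoted throughput by the core count gives roughly 74 billion half-precision operations per second per core; this is a back-of-envelope division, not a benchmark.

```python
# Back-of-envelope division of the figures quoted above; not a benchmark.
total_ops_per_second = 1e18   # "over an exaflop" at 16-bit half precision
cores = 13.5e6                # 13.5 million cores across the 16 linked CS-2 systems

per_core = total_ops_per_second / cores
print(f"{per_core:.1e} ops/s per core")  # ~7.4e+10, i.e. roughly 74 GFLOP/s each
```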

Nov 21, 2022

Microsoft and Nvidia partner to build AI supercomputer in the cloud

Posted by in categories: robotics/AI, supercomputing

A supercomputer, which provides massive amounts of computing power to tackle complex challenges, is typically out of reach for the average enterprise data scientist. But what if you could use cloud resources instead? That's the approach Microsoft Azure and Nvidia are taking with this week's announcement, timed to coincide with the SC22 supercomputing conference.

Nvidia and Microsoft announced that they are building a “massive cloud AI computer.” The supercomputer in question, however, is not an individually named system like the Frontier system at Oak Ridge National Laboratory or the Perlmutter system, billed as the world’s fastest artificial intelligence (AI) supercomputer. Rather, the new AI supercomputer is a set of capabilities and services within Azure, powered by Nvidia technologies, for high-performance computing (HPC) uses.

Page 35 of 97