
In the cons column, quantum computers are hard to use, require a very controlled setup to operate, and have to contend with “decoherence”, the loss of their quantum state, which corrupts results. They’re also rare, expensive, and, for most tasks, far less efficient than a traditional computer.

Still, a lot of these issues can be offset by combining a quantum computer with a traditional computer, just as VTT has done. Researchers can create a hybrid algorithm that has LUMI, the traditional supercomputer, handle the parts it does best while handing off anything that could benefit from quantum computing to HELMI. LUMI can then integrate the results of HELMI’s quantum calculations, perform any additional calculations necessary or even send more calculations to HELMI, and return the complete results to the researchers.
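To make that division of labor concrete, here is a minimal sketch of such a hybrid loop in Python. The `run_on_helmi` function is hypothetical; the article does not describe the actual LUMI-HELMI interface, so it stands in for whatever call submits a parameterized five-qubit circuit to the quantum computer and returns an estimate from the measurement results, while the classical side (the LUMI role) optimizes the parameters, as in a variational algorithm.

```python
import numpy as np

def run_on_helmi(circuit_params):
    """Hypothetical stand-in for submitting a parameterized five-qubit
    circuit to HELMI and estimating an expectation value from the
    measurement counts. Mocked classically so the sketch runs end to end."""
    return float(np.cos(circuit_params).sum())

def hybrid_optimize(initial_params, steps=100, lr=0.1, eps=1e-3):
    """Classical outer loop (the LUMI side): finite-difference gradient
    descent on an objective whose evaluations are jobs for the QPU."""
    params = np.array(initial_params, dtype=float)
    for _ in range(steps):
        base = run_on_helmi(params)          # one quantum job
        grad = np.zeros_like(params)
        for i in range(len(params)):
            shifted = params.copy()
            shifted[i] += eps
            grad[i] = (run_on_helmi(shifted) - base) / eps  # one quantum job per parameter
        params -= lr * grad                  # classical update step on LUMI
    return params, run_on_helmi(params)

params, value = hybrid_optimize(np.random.uniform(0.0, np.pi, size=5))
print(f"optimized objective value: {value:.4f}")
```

In a real deployment, only the classically hard evaluations would be dispatched to the quantum computer, while everything else stays on the supercomputer, which is exactly the split the article describes.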

Finland is now one of the few nations in the world with both a quantum computer and a supercomputer, and LUMI is the most powerful quantum-enabled supercomputer. While quantum computers are still a long way from being broadly commercially viable, these kinds of integrated research programs are likely to accelerate progress. VTT is currently developing a 20-qubit quantum computer, with a 50-qubit upgrade planned for 2024.

Tesla CEO Elon Musk recently offered a teaser of what will be happening during the company’s AI Day 2 event this Friday. Judging by Musk’s recent comments, AI Day 2 will be filled to the brim with exciting discussions and demos of next-generation tech.

This is not Tesla’s first AI Day. Last year, the electric vehicle maker held a similar event, outlining the company’s work in artificial intelligence. During the event, Tesla held an extensive discussion on its neural networks, Dojo supercomputer, and humanoid robot, the Tesla Bot (Optimus). Interestingly enough, mainstream coverage of the event later suggested that AI Day was underwhelming or disappointing.

The merged computing power can give rise to faster and more accurate machine learning applications.

Last month, LUMI, the fastest supercomputer in Europe, was connected to HELMI, Finland’s first quantum computer, a five-qubit system operational since 2021. This makes Finland the first country in Europe, and one of the few worldwide, to have created such a hybrid system.

LUMI is famous: the supercomputer ranks third in the latest TOP500 list of the world’s fastest supercomputers and delivers 309 petaflops. LUMI, too, became operational in 2021.

VTT Technical Research Centre of Finland worked with CSC and Aalto University, within the Finnish Quantum Computing Infrastructure framework, to make the connection between the computers, according to a release.

When trying to make a purchase with a shopping app, we may quickly browse the recommendation list while conceding that the machine does know about us, or at least is learning to. As an effective emerging technology, machine learning (ML) has become all but pervasive, with an application spectrum ranging from everyday apps to supercomputing.

Dedicated ML computers are thus being developed at various scales, but their productivity is somewhat limited: the workload and development cost are largely concentrated in their software stacks, which need to be developed or reworked on an ad hoc basis to support every scaled model.

To solve the problem, researchers from the Chinese Academy of Sciences (CAS) proposed a parallel computing model and published their research in Intelligent Computing on Sept. 5.

Elon Musk just revealed the powerful Dojo supercomputer that tripped the power grid!


Elon’s groundbreaking inventions have always taken over the internet, and this time, yet again, his latest invention is making headlines and people are going crazy over it! Elon Musk has just revealed the powerful Dojo supercomputer that tripped the power grid!

But what exactly is this supercomputer? How does it help? What is Elon planning to do with it?

Your weekly news from the AI & Machine Learning world.

OUTLINE:
0:00 — Introduction.
0:25 — AI reads brain signals to predict what you’re thinking.
3:00 — Closed-form solution for neuron interactions.
4:15 — GPT-4 rumors.
6:50 — Cerebras supercomputer.
7:45 — Meta releases metagenomics atlas.
9:15 — AI advances in theorem proving.
10:40 — Better diffusion models with expert denoisers.
12:00 — BLOOMZ & mT0.
13:05 — ICLR reviewers going mad.
21:40 — Scaling Transformer inference.
22:10 — Infinite nature flythrough generation.
23:55 — Blazing fast denoising.
24:45 — Large-scale AI training with MultiRay.
25:30 — arXiv to include Hugging Face spaces.
26:10 — Multilingual Diffusion.
26:30 — Music source separation.
26:50 — Multilingual CLIP.
27:20 — Drug response prediction.
27:50 — Helpful Things.

ERRATA:
Hugging Face did not acquire Spaces; they launched Spaces themselves and supported Gradio from the start. They later acquired Gradio.

References:
AI reads brain signals to predict what you’re thinking.
https://mind-vis.github.io/

Brain-Machine Interface Device Predicts Internal Speech

Closed-form solution for neuron interactions.

https://github.com/raminmh/CfC/blob/main/torch_cfc.py

GPT-4 rumors.
https://thealgorithmicbridge.substack.com/p/gpt-4-rumors-fro…ket_reader.

NASA’s Discover supercomputer simulated the extreme conditions of the distant cosmos.

A team of scientists from NASA’s Goddard Space Flight Center used the U.S. space agency’s Center for Climate Simulation (NCCS) Discover supercomputer to run 100 simulations of jets emerging from supermassive black holes.

The scientists set out to better understand these jets — massive beams of energetic particles shooting out into the cosmos — as they play a crucial role in the evolution of the universe.

Scientists from the Dutch Institute for Fundamental Energy Research (DIFFER) have created a database of 31,618 molecules that could potentially be used in future redox-flow batteries. These batteries hold great promise for energy storage. Among other things, the researchers used artificial intelligence and supercomputers to identify the molecules’ properties. Today, they publish their findings in the journal Scientific Data.

In recent years, chemists have designed hundreds of molecules that could potentially be useful in flow batteries for energy storage. It would be wonderful, researchers from DIFFER in Eindhoven (the Netherlands) imagined, if the properties of these molecules were quickly and easily accessible in a database. The problem, however, is that for many molecules the properties are not known. Examples of molecular properties are redox potential and water solubility. Those are important since they are related to the power generation capability and energy density of redox flow batteries.

To find out the still-unknown properties of the molecules, the researchers performed four steps. First, they used a computer and smart algorithms to create thousands of virtual variants of two types of molecules. These molecule families, the quinones and aza-aromatics, are good at reversibly accepting and donating electrons, which is important for batteries. The researchers fed the computer the backbone structures of 24 quinones and 28 aza-aromatics plus five different chemically relevant side groups. From these, the computer created 31,618 different molecules.
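To illustrate what that enumeration step might look like, here is a minimal sketch in Python. The backbone labels, side-group names, and the assumption of three substitution sites per backbone are all hypothetical; the article does not specify the encoding DIFFER used, and the published count of 31,618 reflects chemistry-aware generation and filtering rather than a raw Cartesian product.

```python
from itertools import product

# Hypothetical placeholder labels: the study used 24 quinone and
# 28 aza-aromatic backbones plus 5 chemically relevant side groups.
backbones = ([f"quinone_{i}" for i in range(1, 25)]
             + [f"aza_aromatic_{i}" for i in range(1, 29)])
side_groups = ["R1", "R2", "R3", "R4", "R5"]

def enumerate_variants(backbone, n_sites=3):
    """Attach one side group to each substitution site of a backbone.
    Three sites per backbone is an assumption made for illustration."""
    for pattern in product(side_groups, repeat=n_sites):
        yield (backbone, pattern)

variants = [v for b in backbones for v in enumerate_variants(b)]
print(len(variants))  # 52 backbones x 5**3 patterns = 6500 raw combinations
```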

A multi-institution research team has developed an optical chip that can train machine learning hardware. Their research is published today in Optica.

Machine learning applications have skyrocketed to $165 billion annually, according to a recent report from McKinsey. But before a machine can perform intelligent tasks, such as recognizing the details of an image, it must be trained. Training modern-day artificial intelligence (AI) systems like Tesla’s Autopilot costs several million dollars in electric power consumption and requires supercomputer-like infrastructure.

This surging AI “appetite” leaves an ever-widening gap between computer hardware and demand for AI. Photonic integrated circuits, or simply optical chips, have emerged as a possible solution to deliver higher computing performance, as measured by the number of operations performed per second per watt used, or TOPS/W. However, while they’ve demonstrated improved core operations for machine-intelligence tasks such as data classification, photonic chips have yet to improve the actual front-end learning and training process.
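As a back-of-the-envelope illustration of the TOPS/W figure of merit, the numbers below are made up for the example, not measurements from the paper:

```python
def tops_per_watt(ops_per_second: float, power_watts: float) -> float:
    """Energy efficiency in tera-operations per second per watt."""
    return (ops_per_second / 1e12) / power_watts

# Illustrative only: 50 trillion ops/s at a 10 W power draw.
print(tops_per_watt(50e12, 10.0))  # -> 5.0 TOPS/W
```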