
Others think we’re still missing fundamental aspects of how intelligence works, and that the best way to fill the gaps is to borrow from nature. For many that means building “neuromorphic” hardware that more closely mimics the architecture and operation of biological brains.

The problem is that the existing computer technology we have at our disposal looks very different from biological information processing systems, and operates on completely different principles. For a start, modern computers are digital and neurons are analog. And although both rely on electrical signals, they come in very different flavors, and the brain also uses a host of chemical signals to carry out processing.

Now though, researchers at NIST think they’ve found a way to combine existing technologies in a way that could mimic the core attributes of the brain. Using their approach, they outline a blueprint for a “neuromorphic supercomputer” that could not only match, but surpass the physical limits of biological systems.

The European Union is finalizing plans for an ambitious “digital twin” of planet Earth that would simulate the atmosphere, ocean, ice, and land with unrivaled precision, providing forecasts of floods, droughts, and fires from days to years in advance. Destination Earth, as the effort is called, won’t stop there: It will also attempt to capture human behavior, enabling leaders to see the impacts of weather events and climate change on society and gauge the effects of different climate policies.

“It’s a really bold mission, I like it a lot,” says Ruby Leung, a climate scientist at the U.S. Department of Energy’s (DOE’s) Pacific Northwest National Laboratory. By rendering the planet’s atmosphere in boxes only 1 kilometer across, a scale many times finer than existing climate models, Destination Earth can base its forecasts on far more detailed real-time data than ever before. The project, which will be described in detail in two workshops later this month, will start next year and run on one of the three supercomputers that Europe will deploy in Finland, Italy, and Spain.
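To get a feel for what 1-kilometer boxes imply, a back-of-envelope estimate is sketched below. The surface area comes from basic geometry; the ~100 vertical levels figure is an assumption for illustration only, not a Destination Earth specification.

```python
# Back-of-envelope estimate of the grid size implied by 1 km atmospheric boxes.
# The vertical level count is an illustrative assumption, not a project spec.
import math

EARTH_RADIUS_KM = 6371.0
surface_area_km2 = 4 * math.pi * EARTH_RADIUS_KM ** 2  # ~5.1e8 km^2

horizontal_columns = surface_area_km2 / 1.0 ** 2       # one column per 1 km x 1 km box
vertical_levels = 100                                  # assumed for illustration
total_cells = horizontal_columns * vertical_levels

print(f"{surface_area_km2:.2e} km^2 of surface")
print(f"{total_cells:.2e} grid cells in the atmosphere alone")
```

Tens of billions of cells, each updated many times per simulated hour, is why the project needs one of Europe's new pre-exascale machines.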

Destination Earth rose out of the ashes of Extreme Earth, a proposal led by the European Centre for Medium-Range Weather Forecasts (ECMWF) for a billion-euro flagship research program. The European Union ultimately canceled the flagship program, but retained interest in the idea. Fears that Europe was falling behind China, Japan, and the United States in supercomputing led to the European High-Performance Computing Joint Undertaking, an €8 billion investment to lay the groundwork for eventual “exascale” machines capable of 1 billion billion calculations per second. The dormant Extreme Earth proposal offered a perfect use for such capacity. “This blows a soul into your digital infrastructure,” says Peter Bauer, ECMWF’s deputy director of research, who coordinated Extreme Earth and has been advising the European Union on the new program.
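For scale, the "1 billion billion calculations per second" that defines exascale works out as follows:

```python
# "1 billion billion" calculations per second = 10^18, i.e. one exaFLOP/s.
exaflops = 1_000_000_000 * 1_000_000_000
print(f"{exaflops:.0e} operations per second")
```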

Cerebras Systems has unveiled its new Wafer Scale Engine 2 processor with a record-setting 2.6 trillion transistors and 850,000 AI-optimized cores. It’s built for supercomputing tasks, and it’s the second time since 2019 that Los Altos, California-based Cerebras has unveiled a chip that is basically an entire wafer.
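Dividing the two figures quoted above gives a sense of how the transistor budget is spread across the wafer; this is simple arithmetic on the published numbers, not a Cerebras specification.

```python
# Average transistor budget per core on the Wafer Scale Engine 2,
# computed from the publicly quoted totals.
transistors = 2.6e12   # 2.6 trillion
cores = 850_000

per_core = transistors / cores
print(f"~{per_core / 1e6:.1f} million transistors per core on average")
```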

Chipmakers normally slice a wafer from a 12-inch-diameter ingot of silicon to process in a chip factory. Once processed, the wafer is sliced into hundreds of separate chips that can be used in electronic hardware.

But Cerebras, started by SeaMicro founder Andrew Feldman, takes that wafer and makes a single, massive chip out of it. Each piece of the chip, dubbed a core, is interconnected in a sophisticated way to other cores. The interconnections are designed to keep all the cores functioning at high speeds so the transistors can work together as one.

Large-scale supercomputer simulations at the atomic level show that the dominant G form variant of the COVID-19-causing virus is more infectious partly because of its greater ability to readily bind to its target host receptor in the body, compared to other variants. These research results from a Los Alamos National Laboratory-led team illuminate the mechanism of both infection by the G form and antibody resistance against it, which could help in future vaccine development.

“We found that the interactions among the basic building blocks of the Spike protein become more symmetrical in the G form, and that gives it more opportunities to bind to the receptor in the host—in us,” said Gnana Gnanakaran, corresponding author of the paper published today in Science Advances. “But at the same time, that means antibodies can more easily neutralize it. In essence, the variant puts its head up to bind to the receptor, which gives antibodies the chance to attack it.”

Researchers knew that the variant, also known as D614G, was more infectious and could be neutralized by antibodies, but they didn’t know how. Simulating more than a million atoms and requiring about 24 million CPU hours of supercomputer time, the new work provides molecular-level detail about the behavior of this variant’s Spike.
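To put 24 million CPU hours in perspective, the sketch below converts it to wall-clock time; the core count is a hypothetical allocation chosen for illustration, not a figure from the study.

```python
# Rough wall-clock equivalent of the 24 million CPU hours quoted above.
# The 100,000-core allocation is an illustrative assumption.
cpu_hours = 24_000_000
assumed_cores = 100_000

wall_hours = cpu_hours / assumed_cores
print(f"{wall_hours:.0f} hours (~{wall_hours / 24:.0f} days) "
      f"on {assumed_cores:,} cores running in parallel")
```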

The Lightmatter photonic computer is 10 times faster than the fastest NVIDIA artificial intelligence GPU while using far less energy. And it has a runway for boosting that massive advantage by a factor of 100, according to CEO Nicholas Harris.

In the process, it may just restart a moribund Moore’s Law.

Or completely blow it up.

Scientists at Tufts University have created a strange new hybrid biological/mechanical organism that’s made of living cells, but operates like a robot.

As the Tufts scientists were creating the physical xenobot organisms, researchers working in parallel at the University of Vermont used a supercomputer to run simulations to find ways of assembling these living robots to perform useful tasks.

Researchers from Tokyo Metropolitan University have devised and implemented a simplified algorithm for turning freely drawn lines into holograms on a standard desktop CPU. They dramatically cut down the computational cost and power consumption of algorithms that require dedicated hardware. It is fast enough to convert handwritten strokes into holographic lines in real time, and makes crisp, clear images that meet industry standards. Potential applications include hand-written remote instructions superimposed on landscapes and workbenches.

The potential applications of holography include important enhancements to practical tasks, including remote instructions for surgical procedures, electronic assembly on circuit boards, or directions projected on landscapes for navigation. Making holograms available in a wide range of settings is vital to bringing this technology out of the lab and into daily life.

One of the major drawbacks of this state-of-the-art technology is the computational load of generation. The kind of quality we’ve come to expect in our 2D displays is prohibitive in 3D, requiring supercomputing levels of number crunching to achieve. There is also the issue of power consumption. More widely available hardware like GPUs in gaming rigs might be able to overcome some of these issues with raw power, but the amount of electricity they use is a major impediment to mobile applications. Despite improvements to available hardware, the solution can’t be achieved by brute force.
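The brute-force cost described above can be made concrete with a minimal numpy sketch of point-based computer-generated holography: every hologram pixel sums a phase contribution from every object point, so the work grows as pixels × points. All parameters here (wavelength, pixel pitch, resolution, object points) are illustrative assumptions, and the hologram is kept tiny so the example runs quickly.

```python
# Minimal sketch of why naive computer-generated holography is expensive:
# each hologram pixel accumulates a spherical-wave phase contribution from
# every object point, giving O(pixels * points) work. Parameters illustrative.
import numpy as np

wavelength = 633e-9            # red laser, metres (assumed)
k = 2 * np.pi / wavelength     # wavenumber
pitch = 8e-6                   # pixel pitch, metres (assumed)
side = 256                     # tiny hologram; real displays are far larger

ys, xs = np.mgrid[:side, :side] * pitch
points = np.array([[1.0e-3, 1.0e-3, 0.10],   # (x, y, z) object points, metres
                   [0.5e-3, 1.5e-3, 0.12]])

field = np.zeros((side, side), dtype=complex)
for px, py, pz in points:                    # O(points) outer loop...
    r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r          # ...O(pixels) of work per point

hologram = np.angle(field)                   # phase-only hologram pattern
print(hologram.shape)
```

Scaling this from a 256 × 256 pattern with two points to a 4K display with millions of scene points multiplies the cost by many orders of magnitude, which is exactly why brute force runs into the power and compute walls described above.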

A new AI model that harnesses the power of the world’s fastest supercomputer, Fugaku, can rapidly predict tsunami flooding in coastal areas before the tsunami reaches land.

The development of the new technology was announced as part of a joint project between the International Research Institute of Disaster Science (IREDeS) at Tohoku University, the Earthquake Research Institute at the University of Tokyo, and Fujitsu Laboratories.

The 2011 Great East Japan Earthquake and subsequent tsunami highlighted the shortcomings in disaster mitigation and the need to utilize information for efficient and safe evacuations.

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.