
The world’s most valuable chip maker has announced a next-generation processor for AI and high-performance computing workloads, due for launch in mid-2024. A new exascale supercomputer, designed specifically for large AI models, is also planned.

H200 Tensor Core GPU. Credit: NVIDIA

In recent years, California-based NVIDIA Corporation has played a major role in the progress of artificial intelligence (AI), as well as high-performance computing (HPC) more generally, with its hardware being central to astonishing leaps in algorithmic capability.

An experimental computing system physically modeled after the biological brain has “learned” to identify handwritten numbers with an overall accuracy of 93.4%. The key innovation in the experiment was a new training algorithm that gave the system continuous information about its success at the task in real time while it learned. The study was published in Nature Communications.

The algorithm outperformed a conventional machine-learning approach in which training was performed after a batch of data had been processed, producing 91.4% accuracy. The researchers also showed that memory of past inputs stored in the system itself enhanced learning. In contrast, other computing approaches store memory within software or hardware separate from a device’s processor.
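The contrast between real-time feedback and end-of-batch training can be illustrated with a toy sketch (a plain-Python perceptron on made-up separable data; this is not the study's actual system or algorithm):

```python
import random

random.seed(0)

# Toy separable data: label 1 if x + y > 1, with a gap around the
# boundary so training converges easily.
data = []
while len(data) < 200:
    x, y = random.random(), random.random()
    if abs(x + y - 1) > 0.1:
        data.append(((x, y), 1 if x + y > 1 else 0))

def accuracy(w, b):
    return sum(((p[0]*w[0] + p[1]*w[1] + b > 0) == bool(lab))
               for p, lab in data) / len(data)

# Online training: update after EVERY sample (continuous error feedback).
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for (x, y), lab in data:
        pred = 1 if x*w[0] + y*w[1] + b > 0 else 0
        err = lab - pred          # immediate success/failure signal
        w[0] += err * x
        w[1] += err * y
        b += err

# Batch training: accumulate errors, update only once per pass.
wb, bb = [0.0, 0.0], 0.0
for _ in range(20):
    gx = gy = gb = 0.0
    for (x, y), lab in data:
        pred = 1 if x*wb[0] + y*wb[1] + bb > 0 else 0
        err = lab - pred
        gx += err * x; gy += err * y; gb += err
    wb[0] += 0.1 * gx / len(data)
    wb[1] += 0.1 * gy / len(data)
    bb += 0.1 * gb / len(data)

print(accuracy(w, b), accuracy(wb, bb))
```

With per-sample updates, the error signal steers every step, loosely analogous to the continuous feedback the nanowire network received; the batch learner only corrects course once per pass over the data.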

For 15 years, researchers at the California NanoSystems Institute at UCLA, or CNSI, have been developing a new platform technology for computation. The technology is a brain-inspired system composed of a tangled-up network of wires containing silver, laid on a bed of electrodes. The system receives input and produces output via pulses of electricity. The individual wires are so small that their diameter is measured on the nanoscale, in billionths of a meter.



Active noise control technology is used by noise-canceling headphones to minimize or completely block out outside noise. These headphones are popular because they offer a quieter, more immersive listening experience—especially in noisy areas. However, despite the many advancements in the technology, people still don’t have much control over which sounds their headphones block out and which they let pass.
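At its core, active noise control is destructive interference: play a copy of the incoming noise with its phase inverted, and the two signals cancel at the ear. A minimal illustration (a pure tone stands in for real noise; actual headphones must measure the noise with microphones and generate the anti-noise within a tight latency budget):

```python
import math

# A 100 Hz tone stands in for ambient noise; the headphone emits the
# phase-inverted copy ("anti-noise"), and the two cancel at the ear.
rate = 8000
noise = [math.sin(2 * math.pi * 100 * n / rate) for n in range(rate)]
anti = [-s for s in noise]                    # perfect phase inversion
residual = [n_ + a for n_, a in zip(noise, anti)]

def rms(signal):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

print(rms(noise), rms(residual))              # residual is silence
```

In practice the anti-noise is never a perfect inverse, which is why cancellation works best on steady, low-frequency sound.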

Semantic hearing

Now, a team of researchers at the University of Washington has developed deep learning algorithms that let users select, in real time, which noises filter through their headphones. Its creators have named the system “semantic hearing.”
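The selection idea can be sketched in a few lines (the real system relies on a deep source-separation network running on audio; here pre-separated, made-up tracks stand in for that network's output):

```python
# Toy stand-in for the selection step in "semantic hearing": assume an
# upstream separation model has already split the mix into labeled
# sources; the listener chooses which classes to let through.
tracks = {
    "siren":   [0.5, -0.5, 0.5, -0.5],
    "speech":  [0.1, 0.2, 0.1, 0.2],
    "traffic": [0.3, 0.3, -0.3, -0.3],
}

def render(keep):
    """Mix only the sound classes the listener opted to hear."""
    length = len(next(iter(tracks.values())))
    return [sum(tracks[c][i] for c in keep) for i in range(length)]

print(render({"speech"}))        # everything but speech is muted
```

The hard part in the real system is the separation model itself, which must isolate each class from a single mixed recording fast enough for real-time playback.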

Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature, the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms, and robotic applications.

The basic idea is simple: whereas in previous chips transistors only carried out calculations, they are now the location of data storage as well. That saves time and energy.

“As a result, the performance of the chips is also boosted,” says Hussam Amrouch, a professor of AI processor design at the Technical University of Munich (TUM).
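The compute-in-memory idea behind this can be sketched abstractly: in a matrix-vector product, each stored row accumulates its own dot product locally, analogous to summing cell currents on a bit line, so the weights never cross a memory bus. This is an illustration of the general paradigm, not of the FeFET circuits themselves:

```python
# Sketch of compute-in-memory: the multiply-accumulate happens "inside"
# the array that stores the weights, so only results leave the array.
W = [[1, 2], [3, 4], [5, 6]]   # stored weights (e.g. cell states)
x = [10, 1]                    # input applied to the array's word lines

def in_memory_matvec(W, x):
    # Each row accumulates its dot product locally; in hardware this is
    # the analog summation of cell currents, read out once per row.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

print(in_memory_matvec(W, x))  # [12, 34, 56]
```

The energy win comes from what this sketch leaves implicit: in a conventional chip, every weight in `W` would be fetched across a bus before each multiply.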

From vehicle collision avoidance to airline scheduling systems to power supply grids, many of the services we rely on are managed by computers. As these autonomous systems grow in complexity and ubiquity, so too could the ways in which they fail.

Now, MIT engineers have developed an approach that can be paired with any autonomous system to quickly identify a range of potential failures in that system before it is deployed in the real world. What’s more, the approach can find fixes for those failures and suggest repairs to avoid system breakdowns.

The team has shown that the approach can root out failures in a variety of simulated autonomous systems, including small and large networks, an aircraft collision-avoidance system, a team of rescue drones, and a robotic manipulator. In each of these systems, the new approach, in the form of an automated sampling algorithm, quickly identifies a range of likely failures as well as repairs to avoid those failures.
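A heavily simplified sketch of the failure-finding half of such an approach (a made-up one-parameter "system" and plain Monte Carlo sampling; the MIT algorithm is more sophisticated and also proposes repairs):

```python
import random

random.seed(1)

# Toy stand-in for an autonomous system: two aircraft on crossing paths.
# "Failure" = closest approach dips below a safety threshold.
def min_separation(offset):
    # Closest-approach distance (km) as a function of an initial offset.
    return abs(offset - 3.0)

def find_failures(threshold=0.5, trials=1000):
    """Sample scenarios and collect those that violate the safety margin."""
    failures = []
    for _ in range(trials):
        offset = random.uniform(0.0, 6.0)
        if min_separation(offset) < threshold:
            failures.append(offset)
    return failures

fails = find_failures()
print(len(fails), "failing scenarios found near offset = 3.0 km")
```

The collected failures map out the dangerous region of the scenario space; a repair step would then search for parameter changes that shrink that region.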

Einstein’s fascination with light, considered quirky at the time, would lead him down the path to a brand new theory of physics.

Living half a century before Einstein, a Scotsman, James Clerk Maxwell, revealed a powerful unification and universalization of nature, taking the disparate sciences of electricity and magnetism and merging them into one communion. It was a titanic tour-de-force that compressed decades of tangled experimental results and hazy theoretical insights into a tidy set of four equations that govern a wealth of phenomena. And through Maxwell’s efforts was born a second great force of nature, electromagnetism, which describes, again in a mere four equations, everything from static shocks and the invisible power of magnets to the flow of electricity and even radiation – that is, light – itself.
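That tidy set of four equations is today written (in SI units, differential form) as:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} \\
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{aligned}
```

The last two, combined in empty space, yield a wave travelling at the speed of light – the radiation the article refers to.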

At the time, Einstein’s fascination with electromagnetism was considered unfashionable. While electromagnetism is now a cornerstone of every young physicist’s education, in the early 20th century it was seen as little more than an interesting bit of theoretical physics – something that those more inclined toward engineering should study deeply. Though Einstein was no engineer, as a youth his mind burned with a simple thought experiment: what would happen if you could ride a bicycle so quickly that you raced alongside a beam of light? What would the light look like from that privileged perspective?

Year 2017


Ray Kurzweil is an inventor, thinker, and futurist famous for forecasting the pace of technology and predicting the world of tomorrow. In this video, Kurzweil suggests the blueprint for the master algorithm—or a single, general purpose learning algorithm—is hidden in the brain.

The brain, according to Kurzweil, consists of repeating modules that self-organize into hierarchies that build simple patterns into complex concepts. We don’t have a complete understanding of how this process works yet, but Kurzweil believes that as we study the brain more and reverse engineer what we find, we’ll learn to write the master algorithm.

Oh yeah? I just learned the steps to Copperhead Road, so… whatever.


Light is something in our world that we are very familiar with, and yet it can still throw some incredible curveballs when you look at it in detail. A newly discovered one comes from a pretty well-established phenomenon: what happens when light passes through an interface? That could be glass, water, or something completely different. The solution for that has long been established, but scientists have now found something weird going on in the middle.

As light goes through an interface, its speed changes. The behavior of light on either side of the interface is described by the well-established standard wave equation. The two solutions can be linked without problem (a piecewise continuous solution), but this still doesn’t explain what happens at the interface itself. There, the wave should experience an acceleration that is not accounted for by the current solution.
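Concretely, in one space dimension the standard wave equation holds separately on each side of an interface at $x = 0$, with a different wave speed in each medium (the symbols $c_1$, $c_2$ here are generic, not taken from the paper):

```latex
\frac{\partial^2 u}{\partial t^2} = c_1^2\,\frac{\partial^2 u}{\partial x^2} \quad (x < 0),
\qquad
\frac{\partial^2 u}{\partial t^2} = c_2^2\,\frac{\partial^2 u}{\partial x^2} \quad (x > 0)
```

Matching $u$ and its flux at $x = 0$ gives the piecewise continuous solution the article mentions – but neither equation, on its own, governs the point $x = 0$ where the speed jumps.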

Now, an equation has been put forward in the case of a universe with one space dimension and one time dimension.

The rise in capabilities of artificial intelligence (AI) systems has led to the view that these systems might soon be conscious. However, we might be underestimating the neurobiological mechanisms underlying human consciousness.

Modern AI systems are capable of many amazing behaviors. For instance, when one uses systems like ChatGPT, the responses are (sometimes) quite human-like and intelligent. When we, humans, are interacting with ChatGPT, we consciously perceive the text the model generates, just as you are currently consciously perceiving this text here.

The question is whether the language model also perceives our text when we prompt it. Or is it just a zombie, operating on clever pattern-matching algorithms? Based on the text it generates, it is easy to be swayed into believing that the system might be conscious.