
The FENCE program hopes to create event-based cameras that are more intelligent thanks to brain-mimicking, or neuromorphic, circuits. These drastically reduce the amount of data that needs to be handled by disregarding irrelevant parts of the image: instead of dealing with an entire scene, an event-based camera focuses only on the pixels that have changed.
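As a rough, purely illustrative sketch of that principle (not DARPA's FENCE hardware, and assuming simple 8-bit grayscale frames), an event-based readout can be mimicked in a few lines of Python: only pixels whose brightness changes beyond a threshold are reported, so a mostly static scene generates almost no data.

import numpy as np

# Illustrative sketch only: report just the pixels that changed,
# rather than transmitting every pixel of every frame.
def events_from_frames(prev_frame, new_frame, threshold=10):
    """Return (row, col, delta) for pixels whose brightness changed by more than threshold."""
    diff = new_frame.astype(np.int16) - prev_frame.astype(np.int16)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return [(int(r), int(c), int(diff[r, c])) for r, c in zip(rows, cols)]

prev = np.zeros((480, 640), dtype=np.uint8)   # static background
new = prev.copy()
new[100:110, 200:210] = 255                   # a small object appears
events = events_from_frames(prev, new)
print(len(events), "events instead of", prev.size, "pixels per full frame")  # 100 vs 307,200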


DARPA has announced the start of the Fast Event-based Neuromorphic Camera and Electronics (FENCE) program, which is designed to make computer vision cameras more efficient by mimicking how the human brain processes information. Three teams of scientists, led by Raytheon, BAE Systems, and Northrop Grumman, are tasked with developing an infrared (IR) camera system that processes less data, operates faster, and uses less power.

Modern imaging cameras are growing increasingly sophisticated, but they are also becoming victims of their own success. While state-of-the-art cameras can capture high-resolution images and track objects with great precision, they do so by processing large amounts of data, which takes time and power.

According to DARPA, this is fine when the task is something like tracking an airplane in a clear blue sky, but if the background becomes cluttered or starts to change, as is often the case in military operations, these cameras can soon be overwhelmed.

ASML’s machine has effectively turned into a choke point in the supply chain for chips, which act as the brains of computers and other digital devices. The tool’s three-continent development and production — using expertise and parts from Japan, the United States and Germany — is also a reminder of just how global that supply chain is, providing a reality check for any country that wants to leap ahead in semiconductors by itself.


A $150 million chip-making tool from a Dutch company has become a lever in the U.S.-Chinese struggle. It also shows how entrenched the global supply chain is.

Quantum computers could make modern-day Macs look like the very first Commodore computer.

Initial tests on Google and NASA’s D-Wave quantum computing system showed that it was a staggering one hundred million times faster than a traditional desktop.

Hartmut Neven, director of engineering at Google, claimed: “What a D-Wave does in a second would take a conventional computer 10,000 years to do.”

The big picture: Japan’s share of global semiconductor sales has gone from 50 percent in 1988 to less than 10 percent today. The country has more chip factories than any other country — 84 to be exact — but only a few of them use advanced sub-10nm process nodes. This is why the country is scrambling to reignite its semiconductor industry, even if it comes at an incredibly high cost over the next decade.

The ongoing chip shortage has affected everything from LCD displays to graphics cards, game consoles, TVs, and even automakers. For consumers, this has created a hostile buying environment in some instances, while some governments have become acutely aware of the fragility of the global tech supply chain.

In the US, the Biden administration is trying to fix the situation by committing $52 billion towards boosting the domestic semiconductor industry. That heeds the call of the Semiconductor Industry Association, but falls short of the $100 billion China is pouring into government subsidies for its semiconductor companies.

An elegant new algorithm developed by Danish researchers can significantly reduce the resource consumption of the world’s computer servers. Computer servers are as taxing on the climate as global air traffic, making the green transition in IT an urgent matter. The researchers, from the University of Copenhagen, expect major IT companies to deploy the algorithm immediately.

One of the flipsides of our runaway internet usage is its impact on the climate, due to the massive amount of electricity consumed by servers and data centers. Current CO2 emissions from data centers are as high as those from global air traffic, and they are expected to double within just a few years.

Only a handful of years have passed since Professor Mikkel Thorup was among a group of researchers behind an algorithm that addressed part of this problem by producing a groundbreaking recipe to streamline computer server workflows. Their work saved energy and resources. Tech giants including Vimeo and Google enthusiastically implemented the algorithm in their systems, with online video platform Vimeo reporting that the algorithm had reduced their bandwidth usage by a factor of eight.
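The article does not name the technique, but the Vimeo and Google deployments described here are commonly associated with consistent hashing with bounded loads, which caps how much work any one server may accept before requests spill over to the next server on the hash ring. The following Python sketch is a toy illustration of that idea under that assumption; the server names and capacity factor are invented for the example.

import hashlib
import math

def _hash(key):
    # Map any string to a position on the hash ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class BoundedLoadRing:
    """Toy consistent-hash ring with a per-server load cap (illustrative only)."""
    def __init__(self, servers, capacity_factor=1.25):
        self.ring = sorted((_hash(s), s) for s in servers)
        self.capacity_factor = capacity_factor
        self.load = {s: 0 for s in servers}
        self.total = 0

    def _capacity(self):
        # Each server may hold at most ceil(c * average load).
        avg = (self.total + 1) / len(self.load)
        return math.ceil(self.capacity_factor * avg)

    def assign(self, key):
        h = _hash(key)
        # Find the first ring position at or after the key's hash,
        # then walk clockwise past servers that are already full.
        idx = next((i for i, (pos, _) in enumerate(self.ring) if pos >= h), 0)
        cap = self._capacity()
        for step in range(len(self.ring)):
            server = self.ring[(idx + step) % len(self.ring)][1]
            if self.load[server] < cap:
                self.load[server] += 1
                self.total += 1
                return server
        raise RuntimeError("all servers at capacity")

ring = BoundedLoadRing(["cache-a", "cache-b", "cache-c"])
print(ring.assign("video-1234"))  # deterministic, load-capped assignment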

Year after year, the explosive growth of computing power relies on manufacturers’ ability to fit more and more components into the same amount of space on a silicon chip. That progress, however, is now approaching the limits of the laws of physics, and new materials are being explored as potential replacements for the silicon semiconductors long at the heart of the computer industry.

New materials may also enable entirely new paradigms for individual chip components and their overall design. One long-promised advance is the ferroelectric field-effect transistor, or FE-FET. Such devices could switch states rapidly enough to perform computation, but also be able to hold those states without being powered, enabling them to function as long-term memory storage. Serving double duty as both RAM and ROM, FE-FET devices would make chips more space efficient and powerful.

The hurdle for making practical FE-FET devices has always been in manufacturing; the materials that best exhibit the necessary ferroelectric effect aren’t compatible with techniques for mass-producing silicon components, due to the high processing temperatures those materials require.