
As researchers studying climate change investigated why the avalanche struck with such force, they pored over images taken in the days and weeks before and saw that ominous cracks had begun to form in the ice and snow. Then, scanning photos of a nearby glacier, they noticed similar crevasses forming, touching off a scramble to warn local authorities that it, too, was about to come crashing down.

The images of the glaciers came from a constellation of satellites, each no bigger than a shoebox, in orbit 280 miles up. Operated by the San Francisco-based company Planet, the satellites, called Doves, weigh just over 10 pounds each and fly in “flocks” that today number 175 satellites. If one fails, the company replaces it, and as better batteries, solar arrays and cameras become available, it updates its satellites the way Apple unveils a new iPhone.

The revolution in technology that transformed personal computing, put smart speakers in homes and gave rise to the age of artificial intelligence and machine learning is also transforming space. While rockets and human exploration get most of the attention, a quiet and often overlooked transformation has taken place in the way satellites are manufactured and operated. The result is an explosion of data and imagery from orbit.

Although universal fault-tolerant quantum computers – with millions of physical quantum bits (or qubits) – may be a decade or two away, quantum computing research continues apace. It has been hypothesized that quantum computers will one day revolutionize information processing across a host of military and civilian applications, from pharmaceutical discovery, to advanced batteries, to machine learning, to cryptography. A key missing element in the race toward fault-tolerant quantum systems, however, is meaningful metrics to quantify how useful or transformative large quantum computers will actually be once they exist.

To provide standards against which to measure quantum computing progress and drive current research toward specific goals, DARPA announced its Quantum Benchmarking program. Its aim is to re-invent key quantum computing metrics, make those metrics testable, and estimate the required quantum and classical resources needed to reach critical performance thresholds.

“It’s really about developing quantum computing yardsticks that can accurately measure what’s important to focus on in the race toward large, fault-tolerant quantum computers,” said Joe Altepeter, program manager in DARPA’s Defense Sciences Office. “Building a useful quantum computer is really hard, and it’s important to make sure we’re using the right metrics to guide our progress towards that goal. If building a useful quantum computer is like building the first rocket to the moon, we want to make sure we’re not quantifying progress toward that goal by measuring how high our planes can fly.”

Fueled by the need for faster life sciences and healthcare research, especially in the wake of the deadly COVID-19 pandemic, IBM and the 100-year-old Cleveland Clinic are partnering to bolster the Clinic’s research capabilities by integrating a wide range of IBM’s advanced technologies in quantum computing, AI and the cloud.

Access to IBM’s quantum systems has so far been primarily cloud-based, but IBM is providing the Cleveland Clinic with IBM’s first private-sector, on-premises quantum computer in the U.S. Scheduled for delivery next year, the initial IBM Quantum System One will harness between 50 and 100 qubits, according to IBM, but the goal is to stand up a more powerful, next-generation quantum system with more than 1,000 qubits at the Clinic as the project matures.

For the Cleveland Clinic, the 10-year partnership with IBM will add huge research capabilities and power as part of an all-new Discovery Accelerator being created at the Clinic’s campus in Cleveland, Ohio. The Accelerator will serve as the technology foundation for the Clinic’s new Global Center for Pathogen Research & Human Health, which is being developed to drive research in areas including genomics, single-cell transcriptomics, population health, clinical applications and chemical and drug discovery, according to the Clinic.

TAE Technologies, a California-based fusion energy technology company, has announced that its proprietary beam-driven field-reversed configuration (FRC) plasma generator has produced stable plasma at over 50 million degrees Celsius. The milestone has helped the company raise USD280 million in additional funding.

Norman — TAE’s USD150 million National Laboratory-scale device named after company founder, the late Norman Rostoker — was unveiled in May 2017 and reached first plasma in June of that year. The device achieved the latest milestone as part of a “well-choreographed sequence of campaigns” consisting of over 25000 fully-integrated fusion reactor core experiments. These experiments were optimised with the most advanced computing processes available, including machine learning from an ongoing collaboration with Google (which produced the Optometrist Algorithm) and processing power from the US Department of Energy’s INCITE programme that leverages exascale-level computing.
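The Optometrist Algorithm that came out of the TAE–Google collaboration pairs machine-proposed experiment settings with a human expert's "better one, or two?" judgment, like an eye exam. Below is a minimal, illustrative sketch of that kind of human-in-the-loop hill climb; the function names, the single `knob` parameter, and the scripted scoring function standing in for the human expert are all hypothetical, not TAE's or Google's actual code.

```python
import random

def optometrist_step(current, propose, compare):
    """One 'better 1 or 2?' step: propose a variation of the current
    settings and keep whichever option the expert prefers."""
    candidate = propose(current)
    return candidate if compare(candidate, current) else current

def optimize(initial, propose, compare, steps=100):
    """Repeat the pairwise-preference step, carrying forward the winner."""
    settings = initial
    for _ in range(steps):
        settings = optometrist_step(settings, propose, compare)
    return settings

# Hypothetical stand-in for the human expert: prefer settings with a higher
# hidden "plasma quality" score. In the real algorithm a person makes this call.
def score(s):
    return -(s["knob"] - 4.2) ** 2  # toy objective, peaks at knob = 4.2

def fake_compare(a, b):
    return score(a) > score(b)

def propose(s):
    # Small random perturbation of the current machine settings.
    return {"knob": s["knob"] + random.uniform(-0.5, 0.5)}

random.seed(0)
best = optimize({"knob": 0.0}, propose, fake_compare, steps=500)
print(round(best["knob"], 1))  # converges near 4.2
```

The key design point is that the optimizer never needs a numeric objective from the plasma, only pairwise preferences, which is what makes a human judge workable.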

Plasma must be hot enough for collisions forceful enough to cause fusion, and it must be sustained long enough for that power to be harnessed at will. These are known as the ‘hot enough’ and ‘long enough’ milestones. TAE said it had proved the ‘long enough’ component in 2015, after more than 100000 experiments. A year later, the company began building Norman, its fifth-generation device, to push plasma temperatures higher in pursuit of ‘hot enough’.
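The ‘hot enough’ and ‘long enough’ conditions echo the classic Lawson criterion, which requires the product of plasma density, energy confinement time and temperature to exceed a threshold. The sketch below uses rough textbook deuterium-tritium numbers purely for illustration; they are not TAE's figures, and TAE's hydrogen-boron approach targets far higher temperatures.

```python
# Crude Lawson-style check: density * confinement time * temperature
# must exceed a threshold for net fusion gain. Numbers are illustrative
# D-T ballpark values, not TAE's parameters.

def triple_product(density_m3, confinement_s, temp_kev):
    """Return n * tau * T in keV * s / m^3."""
    return density_m3 * confinement_s * temp_kev

# Rough order-of-magnitude D-T ignition threshold.
DT_THRESHOLD = 3e21  # keV * s / m^3

def meets_lawson(density_m3, confinement_s, temp_kev, threshold=DT_THRESHOLD):
    return triple_product(density_m3, confinement_s, temp_kev) >= threshold

# A plasma can be 'hot enough' yet not 'long enough':
print(meets_lawson(1e20, 1.5, 30.0))  # hot and well confined: True
print(meets_lawson(1e20, 0.1, 30.0))  # same temperature, too brief: False
```

This is why the two milestones are pursued separately: temperature and confinement time enter the criterion as a product, so either one alone is insufficient.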

Elon Musk finally got to show off his monkey.

Neuralink, a company founded by Musk that is developing artificial-intelligence-powered microchips to go in people’s brains, released a video Thursday appearing to show a macaque using the tech to play video games, including “Pong.”

Musk has boasted about Neuralink’s tests on primates before, but this is the first time the company has put one on display. During a presentation in 2019, Musk said the company had enabled a monkey to “control a computer with its brain.”

NGAD is the Navy’s effort to replace the Super Hornet. Note: it’s a completely separate program from the Air Force’s own NGAD, which recently designed, tested, and flew a secret new fighter jet, and it will produce a different plane. The two aircraft will almost certainly differ, with the Air Force’s jet more optimized for air superiority, but because the two fighters are being developed in roughly the same time period, they will likely share much of the same technology.


The U.S. Navy elaborated on its plans to replace the F/A-18E/F Super Hornet, saying the service’s next strike fighter will “most likely be manned.” The jet will probably fly alongside robotic allies, and remotely crewed aircraft could eventually account for six out of 10 planes on a carrier flight deck.

“As we look at it right now, the Next-Gen Air Dominance [NGAD] is a family of systems, which has as its centerpiece the F/A-XX—which may or may not be manned—platform. It’s the fixed-wing portion of the Next-Gen Air Dominance family of systems,” said Rear Adm. Gregory Harris, the head of the Chief of Naval Operations’ air warfare directorate, during a Navy League event.

The F/A-18E/F Super Hornet dominates the Navy’s strike fighter fleet, which is made up of aircraft that can execute both fighter and attack missions. Although the Navy is buying the F-35C Joint Strike Fighter, it’s only purchasing enough planes to replace one or two of the four strike fighter squadrons on each deployed aircraft carrier. The Navy believes it needs to replace the Super Hornet and its electronic warfare variant, the EA-18G Growler, in the 2030s.

Rice University computer scientists have demonstrated artificial intelligence (AI) software that runs on commodity processors and trains deep neural networks 15 times faster than platforms based on graphics processors.

“The cost of training is the actual bottleneck in AI,” said Anshumali Shrivastava, an assistant professor of computer science at Rice’s Brown School of Engineering. “Companies are spending millions of dollars a week just to train and fine-tune their AI workloads.”

Shrivastava and collaborators from Rice and Intel will present research that addresses that bottleneck on April 8 at MLSys, the machine learning systems conference.