
Microsoft Corp. is in advanced talks to buy artificial intelligence and speech technology company Nuance Communications Inc., according to people familiar with the matter.

An agreement could be announced as soon as this week, said the people, who asked not to be identified because the information is private. The price being discussed could value Nuance at about $56 a share, though the terms could still change, one of the people said.

Moore’s Law is dead, right? Think again.

Although the historical annual improvement of about 40% in central processing unit performance is slowing, the combination of CPUs packaged with alternative processors is improving at a rate of more than 100% per annum. These unprecedented and massive improvements in processing power combined with data and artificial intelligence will completely change the way we think about designing hardware, writing software and applying technology to businesses.
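To see why that compounding difference matters, here is a minimal back-of-the-envelope sketch in Python. The 40% and 100% annual rates come from the paragraph above; the 10-year horizon is purely illustrative.

```python
# Minimal sketch: how the quoted improvement rates compound over time.
# The 40% (CPU-only) and 100% (CPU + alternative processors) annual rates come
# from the passage above; the 10-year horizon is an illustrative assumption.

def compounded_gain(annual_rate: float, years: int) -> float:
    """Total performance multiple after compounding an annual improvement rate."""
    return (1 + annual_rate) ** years

years = 10
cpu_only = compounded_gain(0.40, years)   # historical CPU-only trajectory
combined = compounded_gain(1.00, years)   # CPUs packaged with alternative processors

print(f"After {years} years: CPU-only ~{cpu_only:,.0f}x, combined ~{combined:,.0f}x")
# Roughly 29x versus 1,024x -- the widening gap is why the argument is that
# hardware and software design assumptions have to change.
```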

Every industry will be disrupted. You hear that all the time. Well, it’s absolutely true and we’re going to explain why and what it all means.

Computer scientists from Rice University have demonstrated artificial intelligence (AI) software that runs on commodity processors and trains deep neural networks 15 times faster than platforms based on graphics processors.

According to Anshumali Shrivastava, an assistant professor of computer science at Rice’s Brown School of Engineering, the resources spent on training are the actual bottleneck in AI. Companies are spending millions of dollars a week to train and fine-tune their AI workloads.

Deep neural networks (DNNs) are a very powerful type of artificial intelligence that can outperform humans at some tasks. DNN training is a series of matrix multiplication operations and an ideal workload for graphics processing units (GPUs), which cost nearly three times more than general-purpose central processing units (CPUs).
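As a rough illustration of that point, the toy sketch below runs a single training step for a tiny two-layer network in NumPy; nearly every expensive line is a matrix multiplication. It is a generic example, not the Rice team's software, and the layer sizes and learning rate are arbitrary.

```python
# Toy illustration: one training step of a small neural network is dominated by
# matrix multiplications (the @ operator), which is why GPUs suit the workload.
# Generic sketch only -- not the Rice system described above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 128))        # batch of 64 inputs, 128 features (illustrative sizes)
y = rng.standard_normal((64, 10))         # targets
W1 = rng.standard_normal((128, 256)) * 0.01
W2 = rng.standard_normal((256, 10)) * 0.01
lr = 1e-3

# Forward pass: two matrix multiplies plus a nonlinearity.
h = np.maximum(X @ W1, 0)                 # ReLU hidden layer
pred = h @ W2

# Backward pass: more matrix multiplies to compute gradients.
grad_pred = 2 * (pred - y) / len(X)
grad_W2 = h.T @ grad_pred
grad_h = grad_pred @ W2.T
grad_h[h <= 0] = 0
grad_W1 = X.T @ grad_h

# Gradient descent update.
W1 -= lr * grad_W1
W2 -= lr * grad_W2
```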

As researchers studying climate change investigated why the avalanche occurred with such force, they pored over images taken in the days and weeks before and saw that ominous cracks had begun to form in the ice and snow. Then, scanning photos of a nearby glacier, they noticed similar crevasses forming, touching off a scramble to warn local authorities that it was also about to come crashing down.

The images of the glaciers came from a constellation of satellites no bigger than a shoebox, in orbit 280 miles up. Operated by San Francisco-based company Planet, the satellites, called Doves, weigh just over 10 pounds each and fly in “flocks” that today include 175 satellites. If one fails, the company replaces it, and as better batteries, solar arrays and cameras become available, the company updates its satellites the way Apple unveils a new iPhone.

The revolution in technology that transformed personal computing, put smart speakers in homes and gave rise to the age of artificial intelligence and machine learning is also transforming space. While rockets and human exploration get most of the attention, a quiet and often overlooked transformation has taken place in the way satellites are manufactured and operated. The result is an explosion of data and imagery from orbit.

Although universal fault-tolerant quantum computers – with millions of physical quantum bits (or qubits) – may be a decade or two away, quantum computing research continues apace. It has been hypothesized that quantum computers will one day revolutionize information processing across a host of military and civilian applications, from pharmaceutical discovery to advanced batteries, machine learning and cryptography. A key missing element in the race toward fault-tolerant quantum systems, however, is meaningful metrics to quantify how useful or transformative large quantum computers will actually be once they exist.

To provide standards against which to measure quantum computing progress and drive current research toward specific goals, DARPA announced its Quantum Benchmarking program. Its aim is to re-invent key quantum computing metrics, make those metrics testable, and estimate the quantum and classical resources needed to reach critical performance thresholds.

“It’s really about developing quantum computing yardsticks that can accurately measure what’s important to focus on in the race toward large, fault-tolerant quantum computers,” said Joe Altepeter, program manager in DARPA’s Defense Sciences Office. “Building a useful quantum computer is really hard, and it’s important to make sure we’re using the right metrics to guide our progress towards that goal. If building a useful quantum computer is like building the first rocket to the moon, we want to make sure we’re not quantifying progress toward that goal by measuring how high our planes can fly.”

Fueled by the need for faster life sciences and healthcare research, especially in the wake of the deadly COVID-19 pandemic, IBM and the 100-year-old Cleveland Clinic are partnering to bolster the Clinic’s research capabilities by integrating a wide range of IBM’s advanced technologies in quantum computing, AI and the cloud.

Access to IBM’s quantum systems has so far been primarily cloud-based, but IBM is providing the Cleveland Clinic with the company’s first private-sector, on-premises quantum computer in the U.S. Scheduled for delivery next year, the initial IBM Quantum System One will harness between 50 and 100 qubits, according to IBM, but the goal is to stand up a more powerful, more advanced, next-generation 1,000+ qubit quantum system at the Clinic as the project matures.

For the Cleveland Clinic, the 10-year partnership with IBM will add huge research capabilities and power as part of an all-new Discovery Accelerator being created at the Clinic’s campus in Cleveland, Ohio. The Accelerator will serve as the technology foundation for the Clinic’s new Global Center for Pathogen Research & Human Health, which is being developed to drive research in areas including genomics, single-cell transcriptomics, population health, clinical applications, and chemical and drug discovery, according to the Clinic.

TAE Technologies, the California-based fusion energy technology company, has announced that its proprietary beam-driven field-reversed configuration (FRC) plasma generator has produced stable plasma at over 50 million degrees Celsius. The milestone has helped the company raise USD280 million in additional funding.

Norman — TAE’s USD150 million national laboratory-scale device, named after company founder the late Norman Rostoker — was unveiled in May 2017 and reached first plasma in June of that year. The device achieved the latest milestone as part of a “well-choreographed sequence of campaigns” consisting of over 25,000 fully integrated fusion reactor core experiments. These experiments were optimised with the most advanced computing processes available, including machine learning from an ongoing collaboration with Google (which produced the Optometrist Algorithm) and processing power from the US Department of Energy’s INCITE programme, which leverages exascale-level computing.
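The published Optometrist Algorithm works roughly like an eye exam: the machine proposes a perturbed variant of the current best machine settings, and a human expert chooses which of the two experiments looked better. The sketch below is a heavily simplified illustration of that loop; the parameter names and the interactive "expert" prompt are placeholders, not TAE's actual control variables or code.

```python
# Heavily simplified sketch of the Optometrist Algorithm idea referenced above:
# the machine proposes a perturbed variant of the current best settings and a
# human expert picks whichever of the two looks better ("option 1 or option 2"),
# like an eye exam. Parameter names and scoring are placeholders.
import random

def propose(settings: dict, scale: float = 0.05) -> dict:
    """Perturb the current best settings to get a candidate to compare against."""
    return {k: v * (1 + random.uniform(-scale, scale)) for k, v in settings.items()}

def expert_prefers(candidate: dict, incumbent: dict) -> bool:
    """Stand-in for the human judgment call; in the real workflow an operator
    compares diagnostics from the two plasma shots."""
    return input("Prefer candidate over incumbent? [y/N] ").strip().lower() == "y"

best = {"beam_power": 1.0, "magnetic_field": 1.0}   # placeholder knobs, arbitrary units
for shot in range(10):                              # illustrative number of experiments
    candidate = propose(best)
    if expert_prefers(candidate, best):
        best = candidate
print("Preferred settings:", best)
```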

Plasma must be hot enough to enable sufficiently forceful collisions to cause fusion, and it must sustain itself long enough to harness the power at will. These are known as the ‘hot enough’ and ‘long enough’ milestones. TAE said it had proved the ‘long enough’ component in 2015, after more than 100,000 experiments. A year later, the company began building Norman, its fifth-generation device, to further test plasma temperature increases in pursuit of ‘hot enough’.
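For context, ‘hot enough’ and ‘long enough’ are conventionally combined into a single figure of merit: the Lawson triple product of density, temperature and energy confinement time. The sketch below uses illustrative textbook numbers for deuterium-tritium fuel; TAE’s hydrogen-boron approach faces a much higher bar, and none of these values are measurements from Norman.

```python
# Sketch of how "hot enough" and "long enough" are usually combined into one
# figure of merit: the Lawson triple product n * T * tau. The threshold and
# example values are illustrative textbook numbers for deuterium-tritium fuel,
# NOT measurements from TAE's Norman device.
DT_TRIPLE_PRODUCT_THRESHOLD = 3e21  # keV * s / m^3, approximate D-T ignition criterion

def triple_product(density_m3: float, temperature_kev: float, confinement_s: float) -> float:
    """Density ('how many collisions') x temperature ('hot enough') x confinement time ('long enough')."""
    return density_m3 * temperature_kev * confinement_s

example = triple_product(density_m3=1e20, temperature_kev=15, confinement_s=2)
status = "meets" if example >= DT_TRIPLE_PRODUCT_THRESHOLD else "falls below"
print(f"Triple product: {example:.1e} keV*s/m^3 ({status} the D-T threshold)")
```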