
Google has dominated the development of artificial intelligence (AI) systems for years. This has undoubtedly been helped by its 2014 acquisition of DeepMind, the London-based AI research startup that developed AlphaGo, a program capable of defeating a world champion of the complex board game Go, an achievement that opened a debate over whether AI would eventually surpass the human mind.

But Google’s unquestioned dominance was interrupted last year by another startup: OpenAI. The launch of ChatGPT, the fastest-growing consumer application in history, caught the big technology companies off guard and forced them to accelerate their AI programs. In April of this year, DeepMind, which until then had functioned as a relatively independent research laboratory, and Google Brain, the company’s other major research division, merged into a single organization: Google DeepMind, which employs some of the best AI scientists in the world.

Colin Murdoch, 45, is the chief business officer of Google’s new AI super division, which has just presented its first creation: Gemini, a multimodal generative AI model that can process and generate text, code, images, audio and video from different data sources. Those who have used it say that it far surpasses the latest version of ChatGPT, and that it puts Google back in the fight to dominate the market.

When Google launched its Hypercomputer earlier this month (December 2023), the first reaction was, “Say what?” It turns out that the Hypercomputer is Google’s take on a modular supercomputer with a healthy dose of its homegrown TPU v5p AI accelerators, which were also announced this month.

The modular design also allows workloads to be split between TPUs and GPUs, with Google’s software tools handling the provisioning and orchestration in the background. Theoretically, if Google were to add a quantum computer to Google Cloud, it too could be plugged into the Hypercomputer.

While the Hypercomputer was advertised as an AI supercomputer, the good news is that the system also runs scientific computing applications.

LG revealed the bot ahead of its appearance at CES 2024, touting it as an “all-around home manager and companion.”

In addition to serving as a remote monitoring system, LG says the bipedal bot can interact with humans using voice and image recognition. Its abilities reportedly include greeting users when they arrive home and playing music based on their detected mood.

Renowned journalist and science fiction author Cory Doctorow is convinced that AI is doomed to drop off a cliff.

“Of course AI is a bubble,” he wrote in a recent piece for sci-fi magazine Locus. “It has all the hallmarks of a classic tech bubble.”

Doctorow likens the AI bubble to the dotcom crisis of the early 2000s, when Silicon Valley firms started dropping like flies when venture capital dried up. It’s a compelling parallel to the current AI landscape, marked by sky-high expectations and even loftier promises that stand in stark contrast to reality.

A groundbreaking discovery in metamaterial design reveals materials with built-in deformation resistance and mechanical memory, promising advancements in robotics and computing.

Researchers from the University of Amsterdam Institute of Physics and ENS de Lyon have discovered how to design materials that necessarily have a point or line where the material doesn’t deform under stress, and that even remember how they have been poked or squeezed in the past. These results could be used in robotics and mechanical computers, while similar design principles could be used in quantum computers.

The outcome is a breakthrough in the field of metamaterials: designer materials whose responses are determined by their structure rather than their chemical composition. To construct a metamaterial with mechanical memory, physicists Xiaofei Guo, Marcelo Guzmán, David Carpentier, Denis Bartolo, and Corentin Coulais realized that its design needs to be “frustrated,” and that this frustration corresponds to a new type of order, which they call non-orientable order.

OpenAI recently topped $1.6 billion in annualized revenue on strong growth from its ChatGPT product, up from $1.3 billion as of mid-October, according to two people with knowledge of the figure.

The 20% growth over two months represented in that figure—a measure of the prior month’s revenue multiplied by 12—suggests that the company was able to hold onto its business momentum in selling artificial intelligence to enterprises despite a leadership crisis in November that provided an opening for rivals to go after its customers.

Early PDP-11 models were not overly impressive. The first model, the PDP-11/20, cost $20,000 but shipped with only about 4KB of RAM. It used paper tape as storage and had an ASR-33 Teletype console that printed 10 characters per second. But it also had an amazing orthogonal 16-bit architecture, eight registers, 64KB of address space, a 1.25 MHz clock speed, and a flexible UNIBUS hardware bus that would support future hardware peripherals. This was a winning combination for its creator, Digital Equipment Corporation.

The PDP-11’s initial applications included real-time hardware control, factory automation, and data processing. As it gained a reputation for flexibility, programmability, and affordability, it saw use in traffic light control systems, the Nike missile defense system, air traffic control, nuclear power plants, Navy pilot training systems, and telecommunications. It also pioneered the word processing and data processing that we now take for granted.

And the PDP-11’s influence is most strikingly evident in the device’s assembly programming.