
The company revealed the bot ahead of its appearance at CES 2024, which it’s touting as an “all-around home manager and companion.”

In addition to serving as a remote monitoring system, LG says the bipedal bot can interact with humans using voice and image recognition. Apparently, its abilities include greeting users when they arrive home and playing music based on their detected mood.

Renowned journalist and science fiction author Cory Doctorow is convinced that the AI boom is doomed to fall off a cliff.

“Of course AI is a bubble,” he wrote in a recent piece for sci-fi magazine Locus. “It has all the hallmarks of a classic tech bubble.”

Doctorow likens the AI bubble to the dotcom crisis of the early 2000s, when Silicon Valley firms started dropping like flies when venture capital dried up. It’s a compelling parallel to the current AI landscape, marked by sky-high expectations and even loftier promises that stand in stark contrast to reality.

A groundbreaking discovery in metamaterial design reveals materials with built-in deformation resistance and mechanical memory, promising advancements in robotics and computing.

Researchers from the University of Amsterdam Institute of Physics and ENS de Lyon have discovered how to design materials that necessarily have a point or line where the material doesn’t deform under stress, and that even remember how they have been poked or squeezed in the past. These results could be used in robotics and mechanical computers, while similar design principles could be used in quantum computers.

The outcome is a breakthrough in the field of metamaterials: designer materials whose responses are determined by their structure rather than their chemical composition. To construct a metamaterial with mechanical memory, physicists Xiaofei Guo, Marcelo Guzmán, David Carpentier, Denis Bartolo, and Corentin Coulais realized that its design needs to be “frustrated,” and that this frustration corresponds to a new type of order, which they call non-orientable order.

OpenAI recently topped $1.6 billion in annualized revenue on strong growth from its ChatGPT product, up from $1.3 billion as of mid-October, according to two people with knowledge of the figure.

The 20% growth over two months represented in that figure—a measure of the prior month’s revenue multiplied by 12—suggests that the company was able to hold onto its business momentum in selling artificial intelligence to enterprises despite a leadership crisis in November that provided an opening for rivals to go after its customers.

Early PDP-11 models were not overly impressive. The first model, the PDP-11/20, cost $20,000 but shipped with only about 4KB of RAM. It used paper tape as storage and had an ASR-33 Teletype console that printed 10 characters per second. But it also had an amazing orthogonal 16-bit architecture, eight registers, 64KB of address space, a 1.25 MHz clock, and a flexible UNIBUS hardware bus that would support future hardware peripherals. This was a winning combination for its creator, Digital Equipment Corporation.

Initial applications for the PDP-11 included real-time hardware control, factory automation, and data processing. As the PDP-11 gained a reputation for flexibility, programmability, and affordability, it saw use in traffic light control systems, the Nike missile defense system, air traffic control, nuclear power plants, Navy pilot training systems, and telecommunications. It also pioneered the word processing and data processing that we now take for granted.

And the PDP-11’s influence is most strikingly evident in the device’s assembly programming.

Only a year ago, ChatGPT woke the world up to the power of foundation models. But this power is not about shiny, jaw-dropping demos. Foundation models will permeate every sector, every aspect of our lives, in much the same way that computing and the Internet transformed society in previous generations. Given the extent of this projected impact, we must ask not only what AI can do, but also how it is built. How is it governed? Who decides?

We don’t really know. This is because transparency in AI is on the decline. For much of the 2010s, openness was the default orientation: Researchers published papers, code, and datasets. In the last three years, transparency has waned. Very little is known publicly about the most advanced models (such as GPT-4, Gemini, and Claude): What data was used to train them? Who created this data and what were the labor practices? What values are these models aligned to? How are these models being used in practice? Without transparency, there is no accountability, and we have witnessed the problems that arise from the lack of transparency in previous generations of technologies such as social media.

To make assessments of transparency rigorous, the Center for Research on Foundation Models introduced the Foundation Model Transparency Index, which characterizes the transparency of foundation model developers. The good news is that many aspects of transparency (e.g., having proper documentation) are achievable and aligned with the incentives of companies. In 2024, maybe we can start to reverse the trend.

Fourier Intelligence has been manufacturing exoskeletons and rehabilitation devices since 2017. The Singapore-based company launched its first generation of humanoid robots this year, designated the GR-1.

The humanoid platform has 40 degrees of freedom distributed throughout its body, stands 1.65 m (5 ft., 5 in.) tall, and weighs 55 kg (121.2 lb.). The joint module fitted at the robot's hip produces a peak torque of 300 N·m, allowing it to walk at 5 kph (3.1 mph) and carry loads of up to 50 kg (110.2 lb.).

Making the leap from exoskeleton development to humanoid design is a logical progression, as the humanoid platform shares many of the mechanical and electrical design elements that Fourier developed for its core product line. Actuation is a core competency of the company, and by designing and building its own actuators, it claims it can optimize the system's cost and performance.

1. AGI could be achieved, or we will get even closer. OpenAI will release GPT-5, and Google will ship updates to its LLMs, such as an improved Gemini.

Definitions: AGI = artificial general intelligence = a machine that performs at the level of an average (median) human.

ASI = artificial superintelligence = a machine that performs at the level of an expert human in practically any field.