
Why the humanoid workforce is running late

Roboticists, from what I’ve seen, are normally a patient bunch. The first Roomba launched more than a decade after its conception, and it took more than 50 years to go from the first robotic arm ever to the millionth in production. Venture capitalists, on the other hand, are not known for such patience.

Perhaps that’s why Bank of America’s new prediction of widespread humanoid adoption was met with enthusiasm by investors but enormous skepticism by roboticists. Aaron Prather, a director at the robotics standards organization ASTM, said on Thursday that the projections were “wildly off-base.”

As we’ve covered before, humanoid hype is a cycle: One slick video raises the expectations of investors, which then incentivizes competitors to make even slicker videos. This makes it quite hard for anyone—a tech journalist, say—to peel back the curtain and find out how much impact humanoids are poised to have on the workforce. But I’ll do my darndest.

Robotic touch sensors are not just skin deep

Researchers at Northwestern University and Israel’s Tel Aviv University have overcome a major barrier to achieving a low-cost solution for advanced robotic touch. The authors argue that the problem that has been lurking in the margins of many papers about touch sensors lies in the robotic skin itself.

In the study, the inexpensive silicone rubber composites used to make the skin were found to host an insulating layer on their top and bottom surfaces. That layer prevented direct electrical contact between the sensing polymer and the monitoring surface electrodes, making accurate and repeatable measurements virtually impossible. With that source of error eliminated, cheap robotic skins could allow robots to mimic human touch, sensing an object's curves and edges, which is necessary to grasp it properly.
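To see why such a layer matters, here is a minimal toy model of a piezoresistive skin read out through two electrodes. This is not the authors' setup: the resistance values and the pressure response are invented for illustration. The point it shows is that an insulating surface layer behaves like a large, poorly controlled series resistance that swamps the pressure signal.

```python
import random

def skin_resistance(pressure_kpa, contact_layer_ohms=0.0):
    """Idealized piezoresistive skin: bulk resistance falls as pressure
    rises (the functional form here is invented for illustration)."""
    bulk = 10_000.0 / (1.0 + pressure_kpa)
    # An insulating surface layer acts as a series resistance at each
    # of the two electrode interfaces.
    return bulk + 2.0 * contact_layer_ohms

def measure(pressure_kpa, insulated):
    # The layer's effective resistance varies wildly from one contact
    # to the next, which is what ruins repeatability.
    layer = random.uniform(50_000, 500_000) if insulated else 0.0
    return skin_resistance(pressure_kpa, layer)

random.seed(0)
# With the layer: three readings at the SAME pressure disagree wildly.
noisy = [measure(5.0, insulated=True) for _ in range(3)]
# Without it: the reading is a clean, repeatable function of pressure.
clean = [measure(p, insulated=False) for p in (0.0, 5.0, 20.0)]
```

With the layer present, repeated readings at one pressure scatter over hundreds of kilohms; without it, resistance is a monotonic function of pressure, which is what a grasp controller actually needs.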



Introducing Anthropic’s AI for Science Program

Today, we’re launching Anthropic’s AI for Science program – a new initiative designed to accelerate scientific research and discovery through access to our API. This program will provide free API credits to support researchers working on high-impact scientific projects, with a particular focus on biology and life sciences applications.

Why AI for Science? At Anthropic, we believe that AI has the potential to significantly accelerate scientific progress. Advanced AI reasoning and language capabilities can help researchers analyze complex scientific data, generate hypotheses, design experiments, and communicate findings more effectively. By reducing the time and resources needed for scientific discovery, we can help address some of humanity’s most pressing challenges.
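For researchers wondering what "access to our API" looks like in practice, here is a sketch of a request. The prompt, model id, and use case are invented for illustration; only the payload shape (model / max_tokens / messages) follows Anthropic's public Messages API documentation.

```python
import json

# Illustrative only: a researcher asking the model to reason about
# experimental data. The dataset and prompt are hypothetical.
payload = {
    "model": "claude-3-5-sonnet-20241022",  # example model id
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": "Here are my differential gene-expression results: ... "
                       "Suggest three follow-up experiments and the controls "
                       "each one needs.",
        }
    ],
}

# The request is POSTed as JSON to https://api.anthropic.com/v1/messages
# with `x-api-key` and `anthropic-version` headers.
body = json.dumps(payload)
```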


Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.

When a robot becomes conscious, how will we know?

We, of course, feel “internal experiences,” that thing we call self, and we know very well when we’ve woken up. Consciousness, in fact, is what we lose when we fall asleep and regain when we wake up. ChatGPT 4.0 is emulating those human feelings. That doesn’t mean it actually feels them. But what if it does feel them one day? Will we listen to it then?

Schneider is among the intellectuals who believe that the question of machine consciousness is worth examining in depth. Not because she believes we’re already there, but because she believes it will happen sooner or later. Like Hassabis, she estimates that artificial general intelligence (AGI) — the name computer scientists give to something close enough to human intelligence to escape the simulacrum label and access a qualitatively different level — is a few decades away.

AGI will be a system capable of learning from experience without having to swallow the entire internet before breakfast; capable of abstracting information, projecting actions, and understanding situations it has never encountered before. And yes, perhaps capable of having “inner experiences,” or what we might call a form of consciousness. Don’t take it out on me; it’s philosophers who are examining this question.

Boosting quantum error correction using AI

A way to greatly enhance the efficiency of a method for correcting errors in quantum computers has been realized by theoretical physicists at RIKEN. This advance could help to develop larger, more reliable quantum computers based on light.

Quantum computers are looming large on the horizon, promising to revolutionize computing within the next decade or so.

“Quantum computers have the potential to solve problems beyond the capabilities of today’s most powerful supercomputers,” notes Franco Nori of the RIKEN Center for Quantum Computing (RQC).

These DNA-sized nanobots are made for walking and sorting molecular cargo

The field of nanotechnology is still in its nascent stages, but recent innovations are increasingly turning this science-fiction world of tiny robots into reality. New breakthrough research from a team at Caltech has demonstrated that a robot made of a single strand of DNA can explore a molecular surface, pick up targeted molecules, and move them to designated locations.

“Just like electromechanical robots are sent off to faraway places, like Mars, we would like to send molecular robots to minuscule places where humans can’t go, such as the bloodstream,” says Lulu Qian, co-author on the paper. “Our goal was to design and build a molecular robot that could perform a sophisticated nanomechanical task: cargo sorting.”

Previous work by a variety of researchers has successfully demonstrated the creation of such DNA robots, but this is the first time they have been shown to pick up and transport specific molecules.
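The sorting task itself can be sketched as a random walk with pick-up and drop-off rules. The toy model below is an illustration only, not the paper's design: the track length, cargo positions, and goal sites are invented, and the real robot's DNA strand-displacement chemistry is abstracted away entirely.

```python
import random

def sort_cargo(track_len=20, max_steps=100_000, seed=1):
    """Toy cargo-sorting walker: a random walk on a 1-D track that picks
    up any cargo it steps on and drops it off at that cargo type's goal
    site. All positions and types here are hypothetical."""
    random.seed(seed)
    cargo = {3: "A", 7: "B", 12: "A"}   # site -> cargo type (invented)
    goals = {"A": 0, "B": 19}           # cargo type -> goal site (invented)
    delivered = {"A": 0, "B": 0}
    pos, carrying = track_len // 2, None
    for _ in range(max_steps):
        # One random step left or right, clamped to the track.
        pos = max(0, min(track_len - 1, pos + random.choice((-1, 1))))
        if carrying is None and pos in cargo:
            carrying = cargo.pop(pos)        # pick up the cargo here
        elif carrying is not None and pos == goals[carrying]:
            delivered[carrying] += 1         # drop off at the goal
            carrying = None
        if not cargo and carrying is None:
            break                            # everything sorted
    return delivered
```

Run with the defaults and the walk eventually delivers both "A" cargos to one end of the track and the "B" cargo to the other; how many steps that takes varies with the seed, which mirrors why the real DNA robot needs many binding events to finish a sort.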